My Journey into The World of Containers, Docker, CoreOS and Kubernetes -- Part 1

Introduction

If you haven’t heard the words Containers or Docker in the past two years, you must have been stranded on a deserted island in the Pacific somewhere. Docker has the same buzz today as AWS did for public cloud back in 2011. Last year I decided to see what Docker is all about and play around with it on my laptop. After doing a hello world example, I could see it’s just virtualization taken to the next level. I thought, “oh, that’s nice,” and moved on to other projects, knowing I would come back to it one day when I had more time.

For the past few weeks I have been diving deep into Docker, CoreOS, and Kubernetes. How did CoreOS and Kubernetes get thrown into the mix? I remember reading about CoreOS, and seeing their bus parked in downtown Denver for GopherCon in 2014, so they stuck in my head for when I would start looking at Docker more seriously. Kubernetes came into the mix after working with CoreOS for a bit, but more on CoreOS and Kubernetes later. For now let’s stick with Docker and containers.

Towards the end of my previous blog post, I discussed taking my example corporate website stack, which was deployed with Puppet to EC2 instances, and “Dockerizing” the application stack while seeing how Puppet would fit into Docker. What I have learned from the steps below is that I could deploy this stack on Docker/Kubernetes and use Docker/Kubernetes for my configuration management. Does this mean I would never use Puppet in a Dockerized application stack? I’m not sure at this point, since I need more experience with Docker. But with tools like confd and etcd used alongside Docker, I would say there is a very good possibility I would never have to use Puppet in a Docker world. But I am getting ahead of myself; let’s focus on Docker and Dockerizing my example corporate website stack.

First off, if you are running Windows on your laptop, do yourself a favor and spin up an Ubuntu Desktop VM on VirtualBox, or better yet dual boot your laptop with Ubuntu Desktop. This is the route I took, and I found myself working more and more in that Ubuntu VM because a lot of these open source tools are difficult to use on Windows (this may be better with the Ubuntu integration in the Windows 10 Anniversary Update). Any commands shown in this series are being run on Ubuntu Desktop 16.04 LTS.

Docker and Docker Compose

The two tools we will be working with in this post are Docker and Docker Compose. I found a great shell script on GitHub Gist for making sure the latest versions are installed on my laptop. If you follow Docker’s installation instructions, version 1.5 of Compose is installed from the apt repo, but we will be using version 2 of the Compose file format, which is only supported in Compose 1.6+.
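If you want to skip the Gist, here is a minimal sketch of one way to get current versions of both on Ubuntu, assuming you pull Compose straight from its GitHub releases (the 1.8.0 version number below is just an example; pin whatever release is current when you run it):

curl -fsSL https://get.docker.com/ | sh
sudo curl -L "https://github.com/docker/compose/releases/download/1.8.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version

The last command should report a 1.6+ version, which confirms the Compose file format version 2 support we need.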

Next, to get your feet wet with Docker if you haven’t already, I would follow this tutorial from DigitalOcean before moving on.

If we look back at my Puppet code on GitHub for the application stack, we can see our stack consisted of Apache running PHP 5 with a MariaDB database server. Our website code was installed through RPM packages created by my Bamboo server and configured with Puppet template files. There was also an HAProxy component to the stack, which will be omitted in this example because when we introduce Kubernetes we will be using AWS’s ELB service for load balancing duties.

In the Docker world, since our containers are lightweight and run in their own user space on the underlying compute instance’s OS, we get much greater flexibility than with traditional virtualization. If you use a clustered OS like CoreOS, your containers can be highly available and dynamically scale across hosts. So when looking at the application stack now, we should not be concerned with how many compute instances (i.e., EC2) we are running for our stack, because our containers will be scheduled onto compute instances already provisioned for our clustered container environment. Now we can look at the application stack at a micro level, splitting components out into individual containers that can be managed individually and that work together to form the application stack.

In our new stack we will be using Nginx as our web server, with containers running PHP-FPM (FastCGI Process Manager) to handle the processing of our PHP code for Nginx. We will still be using MariaDB as our database backend. The only change needed to make our application stack work on Docker was updating our PHP code to work on PHP 7; I could have just deployed version 5 of PHP-FPM, but decided to see if the code would work on the latest version. All we needed to do was update the mysql functions to the new mysqli extension functions. I also needed to change the way variables are defined in my code, since we would no longer have Puppet to take care of this for us.
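As a hypothetical before-and-after of that mysql-to-mysqli change (the host, credentials, and query here are illustrative placeholders, not my actual code), the old PHP 5 style:

<?php
// PHP 5: legacy mysql extension, removed in PHP 7
$link = mysql_connect('mariadb', 'webuser', 'secret');
mysql_select_db('web_counter', $link);
$result = mysql_query('SELECT * FROM countdetail', $link);

becomes the mysqli equivalent on PHP 7:

<?php
// PHP 7: mysqli takes the database name directly at connect time
$link = mysqli_connect('mariadb', 'webuser', 'secret', 'web_counter');
$result = mysqli_query($link, 'SELECT * FROM countdetail');

Note the connection host is simply “mariadb” — the network alias we will give the database container below.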

Docker Compose File

GitHub repository for this stack

So in my GitHub repository, let’s take a look at the docker-compose.yml file to see how our application stack will be deployed locally on our laptop for development purposes using the Docker Compose tool. We see the YAML file splits our stack out into three services: “db”, “web”, and “php”.

If we look at the db service, we are using the latest official MariaDB Docker image and defining environment variables that will be used to configure the MariaDB container when it boots. The ports entry is redundant since the container will automatically expose port 3306 when it boots, but this value is needed when we start using Kubernetes.

The volumes configuration reference mounts the local “./sql” directory to “/docker-entrypoint-initdb.d” in the container. Inside this directory is our SQL file that will populate the web_counter database with our countdetail table; any .sql or .sh files placed in this directory execute when the container boots for the first time. More about the MariaDB Docker container can be read here. In the networks configuration reference we are placing the MariaDB container in a network called “back-tier” and assigning the container an alias of “mariadb”, which will be used by our php container.

A note: Docker containers are, and should be designed to be, ephemeral. In our Docker Compose example we will just use Docker’s default storage, which means when the container is destroyed all of its underlying data is lost as well. In our next post, using Kubernetes, we will attach persistent storage to our MariaDB container, where our database files will reside and persist.
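Pulling the db pieces above together, the service looks roughly like this (a sketch — the credentials are placeholders, so check the repository for the real file):

version: '2'

services:
  db:
    image: mariadb:latest
    environment:
      # picked up by the official image's entrypoint on first boot
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: web_counter
    ports:
      - "3306"
    volumes:
      # any .sql/.sh files here run on the container's first boot
      - ./sql:/docker-entrypoint-initdb.d
    networks:
      back-tier:
        aliases:
          - mariadb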

The web section of the file is pretty self-explanatory, with two notable items. First, in the ports reference we are mapping local port 8080 on our laptop to port 80 on the Nginx container; this lets us reach the Nginx container from our laptop to test the webpage. Second is the links reference, where we define that our web service needs to talk to the php service; it also creates a dependency between the two services.
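Continuing the same file, the web service looks something like this (the container paths for the config and code mounts are my best guess at the layout; see the repository for the exact file):

  web:
    image: nginx:latest
    ports:
      # laptop port 8080 -> container port 80
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - ./code:/var/www/html
    links:
      - php
    networks:
      - front-tier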

In the php section, the image we are using is a custom one I built, which I will go into in more detail in the next section. One reference to make note of is “networks”: the PHP container is in both the front-tier and back-tier networks because it needs to communicate with both the Nginx and MariaDB containers. In the last section of the YAML file we simply define our two networks.
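The remaining pieces, again as a sketch (the code mount path is an assumption; Nginx and PHP-FPM need to agree on where the PHP files live):

  php:
    image: evergreenitco/php7-fpm_mysqlcli
    volumes:
      - ./code:/var/www/html
    networks:
      # front-tier to talk to Nginx, back-tier to reach MariaDB
      - front-tier
      - back-tier

networks:
  front-tier:
  back-tier: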

Creating a Docker Image using a Dockerfile

In order for the PHP-FPM container to process our PHP code, I needed a container with the mysqli extension installed. Since I did not see an image with a tag for the mysqli extension, I created my own image based on the official “php:7-fpm” image and added the extension. Since it’s so easy to create an image, I thought this route was best, as I know exactly what’s installed in the image. If we take a look at the Dockerfile, we see it’s two simple instructions: first, we build an image FROM the php:7-fpm image; second, during the Docker build process it RUNs the “docker-php-ext-install” helper script to install the mysqli extension.
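The entire Dockerfile is just:

FROM php:7-fpm

RUN docker-php-ext-install mysqli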

Now it’s time to build the image with one simple command:

docker build -t evergreenitco/php7-fpm_mysqlcli .

Above we are invoking the docker build command, tagging our new image with our Docker Hub username and the repository name “php7-fpm_mysqlcli”, and with the trailing period we are specifying the build context (the current directory, where our Dockerfile lives). You should see each build step execute in your terminal output.

Next we need to upload the image to Docker Hub so we can pull it in the future on machines that do not already have the image cached, unlike our development laptop. First you need to set up a Docker Hub account, then on the laptop run “docker login” and provide your credentials. Afterwards we push our image up with one simple command:

docker push evergreenitco/php7-fpm_mysqlcli

Now you should see your image up on Docker Hub.

Before we bring up the stack, edit the “nginx.conf” file and set the “server_name” directive to whatever DNS name you prefer, so Nginx knows how to handle our requests, and add that name to your hosts file pointing to 127.0.0.1.
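As a sketch, with “docker.example.com” standing in for whatever name you choose (the root path and the details of the server block will vary with your layout; PHP-FPM listens on port 9000 by default, and “php” is our Compose service name), the relevant piece of nginx.conf looks something like:

server {
    listen 80;
    server_name docker.example.com;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        # hand PHP requests to the php service container
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

And the matching hosts file entry (/etc/hosts on Ubuntu):

127.0.0.1    docker.example.com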

Let’s Bring Up Our Stack

Now that we have our PHP-FPM container with the mysqli extension created and uploaded to Docker Hub, let’s bring our application stack up, again with one simple command (run in the same directory as our docker-compose.yml file):

docker-compose up

You should see our containers start up and the log output from each container stream to the screen. Once the logs stop streaming, your environment should be up and Nginx should be reachable on port 8080 from your machine. If you are getting the default Nginx welcome page instead of our site, check the “nginx.conf” file on your laptop to make sure “server_name” is set correctly. You should also see the log output update for each request you make to the site.
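A quick way to check from another terminal, assuming you pointed docker.example.com (or your chosen name) at 127.0.0.1 in your hosts file as above:

curl http://docker.example.com:8080/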

Let’s say you’re done testing, or you need to make changes to your code. You can bring down the stack with a simple Ctrl+C; if you get an error, stop your containers with “docker stop $(docker ps -a -q)”. After making changes to your code, all you have to do is run “docker-compose up” again; since our Nginx and PHP containers mount the local code directory, these changes will be reflected in the containers, and you can easily keep testing your code.
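The whole edit-and-test loop, condensed:

# if Ctrl+C errors out, stop everything that's still running
docker stop $(docker ps -a -q)
# edit your code locally, then bring the stack back up
docker-compose up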

What’s Next?

This concludes Part 1 of my series on Docker, CoreOS and Kubernetes. In my next post I will focus on getting our application stack running in a production-ready AWS EC2, CoreOS & Kubernetes environment.

Cheers!
