Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Part 7

Now that we have all our puzzle pieces in place, the real fun begins as we start to move containers around: from the developer’s laptop, through the integration team’s tests, and finally to a running container in production. Let’s get going by playing the developer role…

Note: For the purpose of this post, I’m assuming you have basic Docker command-line experience, but even if not, don’t worry, as I will try to explain the logic behind every command and attach all the needed screenshots.

Azure Container Registry (ACR) will act as the “bridge” between worlds, and by the end of this post, you will have a custom-made Docker image repository on it.

Let’s revisit the diagram and flow from the first part of this series.

Dev to Production CI/CD Flow

  • A developer does some coding on a container deployed locally on his workstation.
  • He then pushes an “Integration Ready” docker image to a private container registry.
  • The integration team pulls the image into a DC/OS cluster deployed on Azure to do some extra integration and testing work. Once done, a new “Production Ready” image is pushed to the container registry.
  • The “Production Ready” container is then pulled to the production DC/OS cluster I deployed on vSphere.

To put things into a bit more context, we will simulate the following using the Tutum “Hello World” web application running in a Docker container.

  1. Acting as a developer, I will run the container on my MacBook and make some minor changes on the application index file.
  2. I will then commit the changes and create a docker image out of this container followed by me pushing it to our Azure Container Registry.
  3. Switching to an Integration team member, I will then check if I can run the container at scale on the DCOS cluster deployed on Azure. This will simulate a simple integration practice.
  4. For the final step in the process, and as part of the continuous deployment, I will then pretend to be a member of the DevOps team and deploy the “Production Ready” image to the production DCOS cluster on top of vSphere.

Running & Changing an Application inside a container

It’s time to run the container in your working environment. In my case, that’s a MacBook running Docker Engine. Run it and check the external port allocation (you can get the short container ID using a simple docker ps command).

sudo docker run -d -p 80 tutum/hello-world

docker port “container-id” 80

While it’s running, open the URL to check the web page in its original form.

http://localhost:port
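The run-and-check steps above can be sketched as a single snippet. This is just an illustration: the port-parsing logic is an assumption based on the tutum/hello-world image listening on container port 80.

```shell
# Start the container detached, capturing its ID (add sudo if your setup needs it)
CID=$(docker run -d -p 80 tutum/hello-world)

# Ask Docker which host port was mapped to container port 80
MAPPING=$(docker port "$CID" 80)   # e.g. "0.0.0.0:32768"
PORT=${MAPPING##*:}                # keep only the port number after the colon

# Fetch the page to confirm the app is up
curl -s "http://localhost:$PORT" | grep -i "hello"
```

The `${MAPPING##*:}` expansion simply strips everything up to the last colon, leaving the host port you would otherwise read off manually from docker port.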

To make some changes, we need to open a shell session into the running container and first install the nano editor. Since this container uses a very small Alpine Linux distro, I’m going to use the apk package manager. Run the sh command (which opens a shell session) inside the running container:

docker exec -i -t “container-id” /bin/sh

To install the nano text editor:

apk add nano

Now that we have nano installed, edit the application’s index.php file. In my case, I’ve changed the text from “Hello world!” to “Hello Player!”  😎

nano /www/index.php

Save your changes using Ctrl+X.

Exit the shell session using Ctrl+P followed by Ctrl+Q. With this, the container is still running (as PID 1) in the background on my MacBook. To apply the changes we just made, restart the container.

docker restart “container-id”

You will notice that the allocated port has changed. Now, browse to the same web page URL using the new port and make sure you see your change.

Capture Docker Container Image

Now that we have a container with a web application that went through a long development cycle (not really, but you know what I mean 🙂 ), we need to commit the changes and make a Docker image out of it.

docker commit “container-id”  [REPOSITORY[:TAG]]

For example:

docker commit 88fe1f5835e0 xlab/tutum:version1

After the image has been created, you can easily see it using docker images -a

Authenticating with Azure Container Registry  

Before we can push the new container image we’ve just created, we first need to authenticate with ACR, generate an authentication config file, and upload it to the storage account blob container we created in part 6 of this series.

At this point, I have to give credit to my fellow architect and teammate Itay Shakury. He wrote a great KB-style article on deploying Docker containers on DCOS using the “fetch” function while authenticating against ACR with the Docker authentication config file. In this post and the ones to follow, I will leverage this method with additional screenshots and more context for our use case.

In a real-world scenario, it will probably be the DevOps team’s job to upload the file, but for the purpose of this part, let’s assume that I (as the developer) will do it. It is also very important to mention that you should not do the next few packaging steps on a MacBook. This is because of how macOS stores Docker credentials in the keychain.

For this part, I will use my Linux bootstrap VM (which is installed in my vSphere environment), but it can be any Linux box with internet access. I will use it to log into ACR and to create the tar.gz file.

Log in to ACR with your username and password. These can be found under your ACR “Access Keys” section.

The login format is easy:

docker login [Login Server] -u [Username] -p [Password]
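For example, filled in with my registry (the xlabacr name is from my environment and the password placeholder is just an illustration; use the values from your own “Access Keys” blade):

```shell
ACR_NAME=xlabacr                        # your registry name (assumption)
LOGIN_SERVER="${ACR_NAME}.azurecr.io"   # ACR login servers follow this pattern

docker login "$LOGIN_SERVER" -u "$ACR_NAME" -p "<password-from-access-keys>"
```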

Note: The following procedure is taken from “Step 1” (Registry 2.0 – Docker 1.6 and up) in the Marathon documentation:

https://mesosphere.github.io/marathon/docs/native-docker-private-registry.html
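Following that procedure, a successful docker login leaves the credentials in ~/.docker/config.json, and we package the whole .docker directory into a tar.gz. A minimal sketch of that step:

```shell
cd "$HOME"

# Package the .docker directory (which holds config.json with the ACR auth)
tar czf docker.tar.gz .docker

# Sanity check: the archive should contain the config file
tar tzf docker.tar.gz | grep config.json
```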

Now that I have my docker.tar.gz file containing my ACR authentication config file saved in my $HOME folder, I can upload it to the storage account blob container.

Upload Docker Authentication Config Package to Blob Container

Under the ACR storage account, navigate to the blob container we created in the previous part and upload the docker.tar.gz file.
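If you prefer the command line over the portal, the upload can also be done with the Azure CLI. The storage account and container names below are placeholders standing in for the ones created in part 6; adjust them to yours.

```shell
ACCOUNT=xlabstorage        # storage account name from part 6 (assumption)
CONTAINER=xlabcontainer    # blob container name from part 6 (assumption)

az storage blob upload \
  --account-name "$ACCOUNT" \
  --container-name "$CONTAINER" \
  --name docker.tar.gz \
  --file ~/docker.tar.gz

# The blob URL we will need in the next parts follows this pattern:
echo "https://${ACCOUNT}.blob.core.windows.net/${CONTAINER}/docker.tar.gz"
```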

Click on the file and copy the file URL to a text file, we will need this later on in the next parts.

Push Docker Container to Azure Container Registry

The last stage for this part is to tag the container image and push it to ACR. Before doing so, let’s look at the Repositories section in ACR to see it in its original, empty form.

To tag and push the container image to ACR (which will take a few seconds depending on your bandwidth), use the following format:

docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]

docker push [OPTIONS] NAME[:TAG]

For example, in our case:

docker tag 50bb5e0cc7c3 xlabacr.azurecr.io/xlabimages/tutum

docker push xlabacr.azurecr.io/xlabimages/tutum

Now, if we refresh the repository list, we should see the new image repository in ACR.
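Besides the portal, you can also verify the push with the Azure CLI. The registry and repository names below match my example; yours will differ.

```shell
ACR_NAME=xlabacr        # registry name (assumption)
REPO=xlabimages/tutum   # repository we pushed above

# List all repositories in the registry
az acr repository list --name "$ACR_NAME" --output table

# List the tags available for the repository we just pushed
az acr repository show-tags --name "$ACR_NAME" --repository "$REPO" --output table
```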

In the next part (yes, there is more 😉 ), we will switch roles and become a member of the integration team. In this role, our job is to deploy a container on DCOS out of the docker image we’ve just created.
