Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Part 8

Now that Azure Container Registry contains our repository, new containers can be deployed from it. Let’s switch roles and become members of the Integration team, as our job now is to deploy the application at scale on the DC/OS cluster running on Azure.

The word “scale” is the most important one in our scenario. DC/OS lets you scale out your services easily, but with great scale comes great load-balancing responsibility 🙂

Before diving into this post, I highly recommend you go over the following KB.

https://docs.microsoft.com/en-us/azure/container-service/container-service-mesos-marathon-ui

Deploying Marathon Load Balancer

The first thing we need to do in DC/OS is deploy the Marathon Load Balancer (marathon-lb) service. Simply go to Universe, search for “marathon-lb” and hit the Install button.

You can look for more details in the following KB:

https://docs.microsoft.com/en-us/azure/container-service/container-service-load-balancing

A few seconds later, you will be able to see it running under the Services tab.

Deploying Docker Container using Azure Container Registry Repository

It’s now time to deploy our custom Tutum application using the repository in Azure Container Registry.

Since I already have my JSON deployment configuration file, I am going to work in JSON mode, but you can also deploy the container (service) almost entirely from the GUI. In JSON mode, copy and paste your configuration.

There are a few parameters worth talking about here, as every deployment will have its own set of configurations.

  • “image” – the full name of your image in the registry. This can be found under the Repositories tab in ACR.
  • “labels” – The parameters under “labels” are responsible for getting the deployed container to use the marathon-lb we deployed a few moments ago. You can find the “HAPROXY_0_VHOST” URL under the Azure Container Service object. For more info, check out the following KB.
    https://docs.microsoft.com/en-us/azure/container-registry/container-registry-intro
  • “fetch” – This section is responsible for making the deployment fetch the Docker authentication tar.gz file we used in the previous part. This is where the package URL you copied in part 7 becomes useful.
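To make the pieces above concrete, here is a minimal sketch of what such a Marathon app definition can look like. The app id, image name, VHOST and fetch URI below are placeholders, not the actual values from this series; substitute your own from ACR, the Azure Container Service object and part 7.

```json
{
  "id": "/integration",
  "instances": 1,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "myregistry.azurecr.io/tutum-web:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 10000 }
      ]
    }
  },
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "myagents.westus.cloudapp.azure.com"
  },
  "fetch": [
    { "uri": "https://mystorage.blob.core.windows.net/dcos/docker.tar.gz" }
  ]
}
```

Note the “HAPROXY_GROUP” label: marathon-lb only picks up apps that carry it, with “external” being the default group it serves.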

It’s now time to hit the Deploy button. Notice that we are deploying only a single instance of this container; I will show you how to scale the service out in a few moments. A few seconds later, you should see a new running “integration” service.

Remember the “HAPROXY_0_VHOST” parameter from the deployment JSON?! Copy the URL into your browser and you will see your custom Tutum web application (the one developed on the “developer” MacBook) up and running.

The hostname that you are seeing is the ID of the Docker container running in the background on one of the DC/OS cluster’s private nodes deployed on Azure.

Scale-Out

As you remember, in our case the Integration team’s job is to verify that the application can be placed behind a load balancer (which we have already established) and that it can scale out, so let’s do this.

Back in the services list, click the Scale button and go up to 5 instances.
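If you prefer not to click through the GUI, the same scale operation can be done against Marathon’s REST API. The sketch below builds the PUT request; the cluster URL and app id are placeholders for your own values, and the final call is left commented out so nothing is sent by accident.

```python
import json
import urllib.request


def scale_request(base_url: str, app_id: str, instances: int) -> urllib.request.Request:
    """Build a Marathon PUT request that scales an app to `instances`."""
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v2/apps/{app_id.strip('/')}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )


# Example (cluster URL and app id are placeholders):
req = scale_request("http://mycluster/service/marathon", "/integration", 5)
print(req.get_full_url())  # http://mycluster/service/marathon/v2/apps/integration
print(req.data.decode())   # {"instances": 5}
# urllib.request.urlopen(req)  # uncomment to actually send it to your cluster
```

This is the same request the GUI’s Scale button issues under the hood: Marathon treats the desired instance count as part of the app definition, so scaling is just an update of that one field.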

A few seconds later, you will have 5 running instances behind the load balancer. To verify that this is actually working, go back to your browser and start hitting the refresh button. You will see the container ID keep changing, which means each HTTP request is handled by the LB and forwarded to a different container. SWEET!
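The refresh-button check can also be scripted. Here is a minimal sketch that hits the load-balanced URL a few times and collects the distinct container hostnames that answered; the VHOST URL is a placeholder, and it assumes the page prints a “My hostname is …” line like the stock Tutum hello-world image does.

```python
import re
import urllib.request
from typing import Optional, Set


def extract_hostname(html: str) -> Optional[str]:
    """Pull the container hostname out of the hello-world page, which
    renders a line like 'My hostname is a1b2c3d4e5f6'."""
    match = re.search(r"My hostname is\s+([\w.-]+)", html)
    return match.group(1) if match else None


def sample_backends(url: str, attempts: int = 10) -> Set[str]:
    """Hit the load-balanced URL several times and collect the distinct
    container hostnames that served the responses."""
    seen = set()
    for _ in range(attempts):
        with urllib.request.urlopen(url) as resp:
            hostname = extract_hostname(resp.read().decode("utf-8", "replace"))
            if hostname:
                seen.add(hostname)
    return seen


# Example (VHOST URL is a placeholder for your own):
# print(sample_backends("http://myagents.westus.cloudapp.azure.com"))
# With 5 instances behind marathon-lb you should see several distinct hostnames.
```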

OK, I think this is pretty darn cool, don’t you?!

In the next and final part of this series, we will switch roles and become members of the DevOps team. Our job will be to deploy the application on our on-premises production DC/OS cluster, which will complete our CI/CD flow.
