It’s time to become a member of the DevOps team and deploy our application to production. If you think we used Azure Container Registry only to deploy containers on top of DC/OS in Azure, well, think again.
In our scenario, the DevOps team members are responsible for deploying the application on top of the production DC/OS cluster which, as you already know, was installed in a VMware vSphere environment. This application started its lifecycle on a developer's laptop in a Docker container deployed from Docker Hub, went through some minor tweaks, and from there was handed to the integration team to test scalability using Azure Container Service.
Deploying Docker Container using Azure Container Registry Repository on an On-Premises DC/OS Cluster
For our production deployment, things will be just a bit different. Since I didn't deploy any load balancer in front of my private agent nodes (unlike in Azure, where an embedded load balancer is deployed for you), there is no need for Marathon-LB. Outside the lab, things will probably be a bit different.
In your on-premises DC/OS, go to the Services section, start the new service deployment wizard, and choose to deploy a single container.
This time around, my deployment parameters, described in JSON, look a bit different. They do not include the labels we had in our Azure-based deployment, which were used to allow the service (and its containers, for that matter) to register with Marathon-LB. Since I don't have such a load balancer in production, we don't need those labels.
Another difference between the two deployments is that I removed all the health checks, again for the same Marathon-LB-related reason I just mentioned.
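To make the differences concrete, here is a minimal sketch of what such a Marathon app definition could look like. The registry name, image path, and app ID are placeholders, not the actual values from my lab; note the absence of any HAPROXY_GROUP labels and of a healthChecks section:

```json
{
  "id": "/tutum-web",
  "instances": 1,
  "cpus": 0.1,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "myregistry.azurecr.io/tutum/hello-world:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  }
}
```

Setting "hostPort" to 0 lets Marathon pick a free port on the agent node, which is why each instance later shows up with its own port.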
The most important thing for you to notice here is that even though I am working on a local, on-premises DC/OS cluster, I am pulling the image from Azure Container Registry. This is a true hybrid solution which acts as a bridge between the public and the private cloud!
Hit the "Review & Run" button, then "Run Service", and check that you have a new running service.
Click on the service and then on the running instance at the top of the list. There you will find the URL to your new production application.
You guessed it right, click on the URL and there you go, a new production tutum web application has been deployed.
Notice that the IP is coming from one of my DC/OS private agent nodes. If I SSH into it, we will also be able to see that the Docker container ID matches the one shown on the web page.
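Verifying this from the agent node could look something like the following. The IP, user, and key path are placeholders from my lab, so adjust them for your own cluster:

```shell
# SSH into the private agent node running the instance
# (user, key, and IP are placeholders; use your cluster's values)
ssh -i ~/.ssh/dcos_key myuser@192.168.0.168

# List running containers; the CONTAINER ID column should match
# the container ID displayed on the tutum web page
docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Ports}}"
```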
The final step for this post, and for the entire series, is to see if we can scale out our production application as well. Even though we don't have a load balancer, it is still very much possible. Hit the scale button and go up to 5 instances.
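If you prefer the DC/OS CLI over the UI's scale button, the same operation could be done like this (assuming the placeholder app ID "/tutum-web" from my earlier example):

```shell
# Equivalent of the UI's Scale button: set the instance count to 5
dcos marathon app update /tutum-web instances=5

# Watch the app until all 5 tasks are reported as running
dcos marathon app show /tutum-web | grep -E '"(instances|tasksRunning)"'
```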
Again, 5 seconds later I have 5 running instances of the web application. Click the service again and this time you will see the containers are deployed across all agent nodes.
Since I don't have a load balancer configured, I can reach each instance independently through its own URL. Just select a different instance from the one deployed in the previous step (in my case, the one on the 192.168.0.168 agent node) and see that the web page opens with a new IP, port, and Docker container ID.
With that, ladies and gents, this 9-part "Mesosphere DCOS, Azure, Docker, VMware & Everything Between" series has come to an end. I really enjoyed writing it and I hope you enjoyed reading it.
With this series, I tried to show how we can leverage Azure cloud services such as Azure Container Service and Azure Container Registry and mix them with on-premises infrastructure to form a true hybrid solution for Docker container orchestration.
As always, comments are more than welcome and feel free to hit me up on Twitter.