In the past few weeks, I’ve been doing some work on deploying Kubernetes clusters using the Azure Container Service Engine (aka acs-engine), so I thought this would be a great topic for a multi-part blog series.
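acs-engine works from a cluster definition file (the "apimodel") that it turns into ARM templates. A minimal sketch of such a definition, with placeholder values for the DNS prefix, admin user, SSH key, and service principal, might look like:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "k8s-demo",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v2"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          { "keyData": "ssh-rsa AAAA... (placeholder)" }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "00000000-0000-0000-0000-000000000000",
      "secret": "placeholder-secret"
    }
  }
}
```

Feeding this file to `acs-engine generate` produces the ARM templates that are then deployed to Azure; the exact fields above are a sketch, not a complete definition.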
It’s time to become a member of the DevOps team and deploy our application to production. If you think we used Azure Container Registry only to deploy containers on top of DC/OS in Azure, well, think again.
Now that Azure Container Registry holds our repository, new containers can be deployed from it. Let’s switch roles and become members of the Integration team, whose job is to deploy the application at scale on the DC/OS cluster running on Azure.
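On DC/OS, deploying a container at scale usually means a Marathon application definition. A minimal sketch, assuming a registry named `demoregistry` and an image `myapp:1.0` (both placeholders), could look like this; the `uris` entry is the standard Marathon mechanism for shipping private-registry credentials (a `docker.tar.gz` archive) to each agent:

```json
{
  "id": "/demo/myapp",
  "instances": 3,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "demoregistry.azurecr.io/myapp:1.0",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "uris": [
    "file:///etc/docker.tar.gz"
  ]
}
```

Deploying it would then be something like `dcos marathon app add myapp.json`, with Marathon scheduling the three instances across the cluster.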
Now that we have all our puzzle pieces in place, the real fun begins as we start to move containers around: from the developer’s laptop, through the integration team’s tests, and finally to a running container in production. Let’s get going, starting with the developer role…
We have two working DC/OS clusters, one on Azure and another on vSphere – great progress so far! Now it’s time to deploy Azure Container Registry (ACR), which will serve as a private catalog for our Docker images.
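Creating the registry itself is a few az CLI calls; the resource group and registry names below are examples, and the exact flags may vary between CLI versions:

```shell
# Create a resource group to hold the registry (names are placeholders)
az group create --name dcos-demo-rg --location westeurope

# Create the container registry; enabling the admin user gives us
# simple username/password credentials for docker login
az acr create \
  --resource-group dcos-demo-rg \
  --name demoregistry \
  --sku Basic \
  --admin-enabled true

# Retrieve the admin credentials and log the local docker client in
az acr credential show --name demoregistry
docker login demoregistry.azurecr.io
```

After `docker login` succeeds, images can be tagged as `demoregistry.azurecr.io/<repo>:<tag>` and pushed.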
What a joy! We have a working DC/OS cluster on top of vSphere, but now it’s time to deploy another cluster using Azure Container Service (ACS). Fear not: it will be much quicker to get this baby up and running in Azure, with no pain whatsoever.
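With ACS, the whole cluster comes from a single CLI call. A sketch, assuming placeholder names and an existing SSH public key:

```shell
# Resource group for the cluster (names are placeholders)
az group create --name dcos-acs-rg --location westeurope

# Let Azure Container Service stand up the DC/OS cluster:
# masters, agents, networking and load balancers are provisioned for us
az acs create \
  --orchestrator-type DCOS \
  --resource-group dcos-acs-rg \
  --name dcos-acs-demo \
  --agent-count 3 \
  --ssh-key-value ~/.ssh/id_rsa.pub
```

Once the deployment finishes, an SSH tunnel to the master’s admin port is the usual way to reach the DC/OS UI and Marathon.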
Now that we have the Docker engine up and running and all of our network- and security-related configuration in place, it’s time to get the DC/OS cluster rolling on top of VMware vSphere. This is the first major milestone in our entire platform setup. Let’s get moving…
With all the security-related tweaks and configurations out of the way, and the Docker engine installed on every DC/OS cluster node, it’s time to create the SSH authorized keys file and establish the trust relationship between the bootstrap node and all the other nodes in the cluster.
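The mechanics boil down to generating a key pair on the bootstrap node and getting the public key into each node’s `authorized_keys` file. A minimal sketch, where the key path, user name, and node addresses are all assumptions:

```shell
# On the bootstrap node: generate a dedicated key pair
# (path and empty passphrase are examples, not a recommendation)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/dcos_key -N "" -q

# Build the authorized keys file locally and lock down its permissions
cat ~/.ssh/dcos_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Push the public key to every cluster node so the bootstrap node
# can SSH in without a password (node IPs and user are placeholders)
for node in 10.0.0.11 10.0.0.12 10.0.0.13; do
  ssh-copy-id -i ~/.ssh/dcos_key.pub centos@"$node"
done
```

After this, `ssh -i ~/.ssh/dcos_key centos@10.0.0.11` should log in without prompting, which is what the DC/OS installer needs from the bootstrap node.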