Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Part 5

What a joy! We have a working DC/OS cluster on top of vSphere, but now it’s time to deploy another cluster using Azure Container Service (ACS). Fear not, it will be much quicker to get this baby up & running in Azure, with no pain whatsoever.

As you remember, in our scenario we will have two DC/OS clusters. One will be used to run the “Production” Docker containers and the second one for “Integration & Testing”.

To deploy the cluster in Azure, we will use the magic of Azure Container Service, which is a semi-managed container orchestration platform. It supports all of the big three orchestrators – DC/OS, Kubernetes and Docker Swarm. Unlike a manual on-premises deployment, ACS will do the heavy lifting for us. All you need to do is state how many master and agent nodes you want, and that’s it.

Another major difference between an ACS deployment and an on-premises one is that in Azure, DC/OS must be deployed with both private and public agent nodes. If you remember, in our vSphere-based deployment we didn’t install any public agents.

Now, there are many blog posts, KBs and articles about how to use and deploy DC/OS with ACS, so I’ll try to keep it short but as comprehensive as possible. IMHO, Microsoft’s ACS documentation is a very good place to start:

https://docs.microsoft.com/en-us/azure/container-service/

Also, my good old Spanish friend Juan Manuel Rey has started his own series around ACS, which is worth reading!

http://blog.jreypo.io/containers/microsoft/azure/cloud/cloud-native/getting-started-with-azure-container-service/

Deploy DC/OS via ACS

The first step is to generate an SSH public key. PuTTY Key Generator is the easiest way for me. Just click the Generate button, start moving your mouse like a crazy person and save your public key. Also, copy the key to the clipboard, you will need it in a sec.
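If PuTTY is not your thing, ssh-keygen (available on Linux, macOS and recent Windows builds) produces the same kind of key. A minimal sketch, with the key comment being just an example:

    # Generate a 2048-bit RSA key pair; accept the default path (~/.ssh/id_rsa) or pick your own.
    ssh-keygen -t rsa -b 2048 -C "dcos-acs-lab"

    # Print the public key so it can be copied into the ACS wizard.
    cat ~/.ssh/id_rsa.pub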

In Azure, look for Azure Container Service and follow the wizard. Select DC/OS as your orchestrator, create a new resource group (or use an existing one) and choose the location.

Enter a DNS prefix (which you will use to log in to the DC/OS UI) and a username, and paste your SSH public key.

For this deployment, I selected 3 master nodes and 3 agent nodes, which is good enough for a lab environment. I also decided on the “Standard DS3_V2” VM type for the agent VMs, which gives me 4 cores and 14GB of RAM per node. Hit OK and wait for the cluster to provision.
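If you prefer the command line over the portal wizard, the same deployment can be scripted with Azure CLI 2.0. This is a rough sketch only; the resource group, cluster name and DNS prefix below are made up for illustration, and parameter names may vary slightly between CLI versions:

    # Create a resource group for the cluster (location is just an example).
    az group create --name dcos-lab-rg --location westeurope

    # Deploy a DC/OS cluster with 3 masters and 3 agents of size Standard_DS3_v2.
    az acs create \
      --resource-group dcos-lab-rg \
      --name dcos-lab \
      --orchestrator-type dcos \
      --master-count 3 \
      --agent-count 3 \
      --agent-vm-size Standard_DS3_v2 \
      --dns-prefix dcoslab \
      --admin-username azureuser \
      --ssh-key-value ~/.ssh/id_rsa.pub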

Compared to all the steps we had to go through in our on-premises deployment, that was a piece of cake!!!

NAT & Network Security Group Rules

At this point, DC/OS on Azure is already running in the background. Now we need to make a few minor network and security tweaks to make it accessible from the internet.

Under the ACS resource group, go to the masters’ load balancer and add inbound NAT rules for ports 80 & 443 (HTTP/HTTPS).

For the target virtual machine and IP, select the first master node. After creating the HTTP rule, do the same for HTTPS. Once created, you should see both rules under the load balancer’s inbound NAT rules.
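For reference, the same two NAT rules can also be created with the CLI. The load balancer name below is an assumption (ACS auto-generates its own names, so check your resource group first), and you will still need to associate the rules with the first master’s NIC, which is what the portal wizard does for you:

    # Inbound NAT rule for HTTP on the masters' load balancer.
    az network lb inbound-nat-rule create \
      --resource-group dcos-lab-rg \
      --lb-name dcos-master-lb \
      --name dcosHttp \
      --protocol Tcp \
      --frontend-port 80 \
      --backend-port 80

    # And the same for HTTPS.
    az network lb inbound-nat-rule create \
      --resource-group dcos-lab-rg \
      --lb-name dcos-master-lb \
      --name dcosHttps \
      --protocol Tcp \
      --frontend-port 443 \
      --backend-port 443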

Next, we need to add a Network Security Group (NSG) rule to allow those ports. Under the ACS resource group, go to the masters’ NSG and add both HTTP and HTTPS inbound security rules.
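Again, the portal does the job, but here is a rough CLI equivalent, assuming the same resource group as before. The NSG name is an assumption and the priorities are arbitrary, just keep them below any deny rules:

    # Allow HTTP inbound on the masters' NSG.
    az network nsg rule create \
      --resource-group dcos-lab-rg \
      --nsg-name dcos-master-nsg \
      --name allow-http \
      --priority 300 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --destination-port-range 80

    # Allow HTTPS inbound as well.
    az network nsg rule create \
      --resource-group dcos-lab-rg \
      --nsg-name dcos-master-nsg \
      --name allow-https \
      --priority 310 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --destination-port-range 443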

That’s it, DC/OS is now accessible from the outside world. Look for its public DNS FQDN under the Container Service object and navigate to its URL.
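If you don’t feel like clicking through the portal, the FQDN can also be pulled with a query similar to this one (same made-up names as before; the exact JSON path may differ between CLI versions):

    # Print the masters' public FQDN for the ACS cluster.
    az acs show \
      --resource-group dcos-lab-rg \
      --name dcos-lab \
      --query "masterProfile.fqdn" \
      --output tsv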

Differences

There are several noticeable differences between our “vSphere DC/OS” and the one we just deployed.

  1. Version 1.9 vs. 1.8.8
    Nothing to worry about here, as this will not have an impact on how containers are deployed. Official 1.9 support in ACS is coming.
  2. Number of slave nodes
    As you remember, when we deployed DC/OS via ACS we chose to deploy 3 agents, but when looking at the DC/OS UI we see 6 nodes. The reason is that ACS automatically deploys 3 public nodes and 3 private nodes (each role in its own Azure VM Scale Set), unlike our on-premises deployment which included only 3 private nodes.
  3. Load Balancer
    Like I mentioned, each agent role (public/private) is deployed inside its own VM Scale Set, with the public agents’ scale set sitting behind an Azure Load Balancer.
    Although they are not deployed in a scale set, the master nodes also sit behind a load balancer, the same one on which we configured the inbound NAT rules earlier (see the short CLI sketch after this list).
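To see those pieces for yourself, you can simply list the scale sets and load balancers ACS created in the resource group (the names are auto-generated by ACS):

    # The two agent VM Scale Sets (public and private agents).
    az vmss list --resource-group dcos-lab-rg --output table

    # The agent and master load balancers.
    az network lb list --resource-group dcos-lab-rg --output table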

Obviously, there are more differences between the two deployments, but those are the most noticeable ones. I highly recommend you explore the Azure Container Service DC/OS documentation (and the acs-engine project in general) on GitHub:

https://github.com/Azure/acs-engine/blob/master/docs/dcos.md

In the next part, we will lay down the last piece in our infrastructure – Azure Container Registry. ACR will act as the “bridge” between the clusters and more importantly, between Dev & Ops.
