Deploying Kubernetes Clusters Using Azure Container Service Engine – Mixed Linux & Windows Agents

We’ve come a long way in understanding and leveraging acs-engine, but there is more to it. We can also add Windows agents to the mix and deploy a specific Kubernetes version.

[Note] For the purpose of this post, I’m assuming you have basic K8s knowledge, but if you don’t, that’s totally fine, as I will do my best to describe the steps in the most detailed way possible.

Let’s start by asking a couple of questions:

  • What if we want to manage not just Linux Kubernetes pods but also some Windows applications?! What?! On the same cluster?! Yes, on the same cluster!
  • And what if I need to test a newer K8s version?

Although I’m sure you can think of more and far more sexy use-cases, for the purpose of this post, I will deploy a couple of simple web servers. One will be based on the same nginx we deployed in the previous post (and based on Linux), and the second one will be an ASP.NET application that uses a Windows Server Core container.

This time, we will deploy a 1.8.2 K8s cluster with 3 master nodes, 3 Linux and 3 Windows agent nodes inside a new Azure Resource Group named “ACS-Engine-K8s-Mixed”. It is important to understand that you can also deploy the cluster exclusively with Windows agent nodes; the “mixed” thing is not mandatory. Also, note that the master nodes will always be deployed with a Linux OS.

For networking, I will let acs-engine deploy the entire stack instead of using an already-created VNet like I did in the previous post. The reason is a current issue with using an existing VNet alongside Windows-based agents. If you dig into the issue a bit, you will see that there is a way of doing so, but since I want to keep things a bit simpler, I will not go down that route.

I’m also not going to leverage Azure CNI. Currently, Azure CNI supports only Linux-based nodes and not Windows; therefore, the Azure Route Table created for us via the ARM template will be used.

Pre-Deployment 

Let’s review the template I will be using and the parameters we’re going to edit.


The first thing to notice is that we now have two agent pools, one for Linux and one for Windows agents. The order in which they are written in the template is very important, as this affects which pool is the default K8s pool for pod deployments. As you can see, the Linux pool is the first one. We will get to this later on…
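For orientation, the two pools might be laid out roughly like this in the cluster definition (only the fields relevant here are shown; a fuller template sketch appears further down):

```json
"agentPoolProfiles": [
    { "name": "linuxpool",   "count": 3, "vmSize": "Standard_D2_v3" },
    { "name": "windowspool", "count": 3, "vmSize": "Standard_D2_v3", "osType": "Windows" }
]
```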

orchestratorRelease

Until now, we deployed a 1.7.10 K8s cluster, which is the default version acs-engine uses when the major 1.7 release is configured. This time, I will change the major version to 1.8, which will result in the cluster being deployed with version 1.8.2.
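The release is set in the orchestratorProfile section of the cluster definition; a minimal sketch (at the time of writing, 1.8 resolved to 1.8.2):

```json
"orchestratorProfile": {
    "orchestratorType": "Kubernetes",
    "orchestratorRelease": "1.8"
}
```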

count (in both masterProfile & agentPoolProfiles) 

3 masters, 3 Linux agents and 3 Windows agents.

dnsPrefix

Make sure to change the name to something relevant. For this deployment, I will be using k8smixed.

vmSize (in both masterProfile & agentPoolProfiles)

I will be using the Standard_D2_v3 VM size.

storageProfile (in both masterProfile & agentPoolProfiles)

Let’s use Azure Managed Disks again.

OSDiskSizeGB (in both masterProfile & agentPoolProfiles)

For the masters and the Linux agents, I will be using 64GB disks. Since Windows-based containers are larger in nature, I will add some extra buffer room and configure the Windows agents with 128GB disk size. 

osType (in the “windowspool” agentPoolProfiles)

For Windows agents, we simply need to state that the OS is “Windows”.

windowsProfile

Since we now have Windows agents as well, an admin username and password are needed.

Finally, after editing all the parameters, the acs-engine cluster definition template should look something like this.
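Here is a sketch of what such a cluster definition might look like. The dnsPrefix and pool names follow the walkthrough above, while the admin usernames, SSH key, Windows password and service principal values are placeholders you’d fill in yourself:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.8"
    },
    "masterProfile": {
      "count": 3,
      "dnsPrefix": "k8smixed",
      "vmSize": "Standard_D2_v3",
      "storageProfile": "ManagedDisks",
      "OSDiskSizeGB": 64
    },
    "agentPoolProfiles": [
      {
        "name": "linuxpool",
        "count": 3,
        "vmSize": "Standard_D2_v3",
        "storageProfile": "ManagedDisks",
        "OSDiskSizeGB": 64
      },
      {
        "name": "windowspool",
        "count": 3,
        "vmSize": "Standard_D2_v3",
        "storageProfile": "ManagedDisks",
        "OSDiskSizeGB": 128,
        "osType": "Windows"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          { "keyData": "<your-ssh-public-key>" }
        ]
      }
    },
    "windowsProfile": {
      "adminUsername": "azureuser",
      "adminPassword": "<your-windows-admin-password>"
    },
    "servicePrincipalProfile": {
      "clientId": "<your-service-principal-id>",
      "secret": "<your-service-principal-secret>"
    }
  }
}
```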

We can now go ahead and generate the template.
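A minimal sketch of the generation step, assuming the edited cluster definition was saved as kubernetes-mixed.json (the file name is your own choice):

```
# Generate the ARM template and parameters from the cluster definition
acs-engine generate kubernetes-mixed.json

# The generated files land under _output/<dnsPrefix>/, e.g. _output/k8smixed/azuredeploy.json
```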

One thing to point out is the effect of having the Linux pool listed before the Windows pool in the cluster definition template.

Let’s jump to the azuredeploy.json file created in the generation process and do a search for “primaryAvailabilitySetName”. As you can see, “linuxpool” is our primary availability set. What this actually means is that when we deploy Kubernetes pods, the default agents for them will be, well, the Linux agent nodes. In a few minutes, I will show you how to make sure your Windows application pods get deployed on the Windows agent nodes.

Great! Now that we have everything in place, we are ready to deploy the cluster using the Azure CLI, the same way we did in previous posts.
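For reference, the deployment might look roughly like this; the resource group name follows the walkthrough, and the location is my own choice:

```
# Create the resource group and deploy the generated ARM template
az group create --name ACS-Engine-K8s-Mixed --location westeurope

az group deployment create \
    --resource-group ACS-Engine-K8s-Mixed \
    --template-file _output/k8smixed/azuredeploy.json \
    --parameters _output/k8smixed/azuredeploy.parameters.json
```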

Post-Deployment 

We now have a running K8s 1.8.2 mixed cluster and, as you can see, 3 master, 3 Linux agent and 3 Windows agent VMs were created, as well as a new VNet with a single subnet.

[Note] If you are planning to deploy a cluster into an existing VNet and can’t or don’t want to use Azure CNI, you will have to configure the created Kubernetes-RouteTable to work with your existing subnets.
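A hedged sketch of what that association could look like with the Azure CLI; all the names below are placeholders for your own environment:

```
# Associate the generated Kubernetes-RouteTable with your existing subnet
az network vnet subnet update \
    --resource-group <existing-vnet-resource-group> \
    --vnet-name <existing-vnet-name> \
    --name <existing-subnet-name> \
    --route-table <kubernetes-route-table-name-or-id>
```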

The only thing left to do is start deploying pods into our cluster. Like I mentioned at the beginning of this post, I will be using a simple Linux-based web server and a simple ASP.NET web server.

In order to make sure our Linux-based pods are deployed on the Linux agent nodes and the Windows-based pods on the Windows agent nodes, we can leverage K8s labeling. We can create our own labels and attach them to nodes, or we can use the built-in ones, which is what I’m going to do.

To get those labels, we can use either kubectl or the K8s web UI. Let’s open an HTTP proxy to access the Kubernetes UI using the kubectl proxy command.
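The command itself is simple:

```
# Open a local proxy to the Kubernetes API and UI (listens on 127.0.0.1:8001 by default)
kubectl proxy
```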

Once done, open http://127.0.0.1:8001/ui in your web browser and navigate to Nodes. You can see that both my Linux and Windows nodes have OS labels, one for Linux and one for Windows respectively.

If you prefer the command line, use the following command and look for the label (I recommend a high-resolution screen for this one).
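For example (the second form just adds the OS label as its own column so you don’t have to squint at the full list):

```
# List all nodes with every label attached to them
kubectl get nodes --show-labels

# Or show only the OS label as a dedicated column
kubectl get nodes -L beta.kubernetes.io/os
```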

So, what do we do with those labels, you might ask?! Well, we will use them in the pod YAML descriptor files which we will now create, one for nginx and one for IIS (based on a Windows container). For that, I’ve created a couple of basic templates and saved them into my local Temp directory.

The important thing to notice here is the image and nodeSelector parameter values, which allow the nginx image to be deployed on nodes with the “beta.kubernetes.io/os: linux” label and the IIS image to be deployed on nodes with the “beta.kubernetes.io/os: windows” label.
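Here is a sketch of what the two descriptor files might look like. The file names and deployment names are my own choices, the microsoft/aspnet image is a stand-in for the ASP.NET/IIS Windows Server Core container, and I’m sketching them as Deployments so they can be exposed as services later:

```yaml
# nginx-deployment.yaml - Linux web server pinned to the Linux agent nodes
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      nodeSelector:
        beta.kubernetes.io/os: linux
---
# iis-deployment.yaml - ASP.NET/IIS web server pinned to the Windows agent nodes
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: iis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: iis
    spec:
      containers:
      - name: iis
        image: microsoft/aspnet
        ports:
        - containerPort: 80
      nodeSelector:
        beta.kubernetes.io/os: windows
```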

Let’s test our deployments by running the following commands:
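Something along these lines, assuming the descriptors were saved under C:\Temp with the file names used above:

```
# Create both deployments from the descriptor files
kubectl create -f C:\Temp\nginx-deployment.yaml
kubectl create -f C:\Temp\iis-deployment.yaml

# Watch the pods come up and note which node each one lands on
kubectl get pods -o wide
```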

[Note] Windows-based containers are larger by nature, so it might take a few more minutes to deploy since you are pulling the image from Docker Hub for the first time.

The fact that both pods are in the “Running” state means that our configuration worked, but let’s verify it via the web UI by navigating to “Pods”. You can see each pod deployed on the right node (and agent pool, for that matter).

One last thing will be to expose a service for each of our deployments and open its external IP address in the web browser. To do that, let’s list our deployments, expose a service to the internet on top of each deployment, and list those services using the following commands.
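A sketch of that sequence, assuming the deployment names from the descriptors above:

```
# List the deployments we created earlier
kubectl get deployments

# Expose each deployment to the internet through an Azure load balancer
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl expose deployment iis --port=80 --type=LoadBalancer

# List the services and note the EXTERNAL-IP column
kubectl get services
```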

[Note] Exposing the services to the internet results in the deployment of an Azure external load balancer and inbound rules being added to the Network Security Group. If you try to list the services before that completes, you will see them in a “pending” state.

That’s it! All that is left to do is browse to the external IPs of the services we’ve just exposed.

I hope you enjoyed this 5-part deep-dive series, I sure did! Please note that there are many more acs-engine customizations and configurations for you to play with: RBAC, Persistent Volumes, Scaling, etc. I highly encourage you to dig into the project’s GitHub repository.
