Deploying Kubernetes Clusters Using Azure Container Service Engine – Existing VNet & Managed Disks

In the previous post, I showed you how easy it is to deploy a basic K8s cluster using acs-engine. That’s great and all, but most deployments require you to consider the existing Azure environment and customer requirements. In this post, we will play a bit with a custom VNet and Azure managed disks.

Let’s start by asking several questions:

  • What if you already have a network policy or configuration in place? Maybe something like a Site-to-Site (S2S) VPN configured against your Azure VNet, or a security policy you must enforce.
  • Azure now has managed disks, so why should I manage storage accounts myself instead of letting Microsoft do the dirty work?
  • How can I make the cluster more highly available and scalable? 

Pre-Deployment 

If you haven’t picked up on this by now, acs-engine can be customized in many ways. Let’s take these questions and break them down piece by piece.

For the purpose of this post, I’ve created an example Resource Group which holds a “Production” VNet with multiple subnets, and deployed a single Windows Server 2016 “Backend” server on the “Backend-Subnet”. By the end of this post, once we have our K8s cluster up and running, I will use this server to test connectivity to a web (frontend) server deployed inside a Kubernetes pod. Sounds good?!

I want to deploy my K8s cluster in this VNet and don’t want the deployment to create a new VNet like in our previous deployment. In addition, I also don’t want the cluster resources to be deployed inside my existing Resource Group, since I want to be able to delete all of the cluster resources in one shot without accidentally deleting any of my “Production” resources.

As you already know, K8s is deployed with master and agent nodes, so I’ve created two additional subnets to accommodate those: “K8s-Masters-Subnet” and “K8s-Agents-Subnet”. You can deploy both master and agent nodes in a single subnet, but I recommend you split them into two subnets, as it eases management and just makes more sense.

The second requirement was to create the cluster without affecting other resources, in case I need to delete it in one shot. To do that, and like we did in the previous, basic deployment, I’ve created another K8s Resource Group called “ACS-Engine-K8s-Custom”.

Let’s have a look at the acs-engine template file I will be using this time in order to deploy the cluster and go over the new parameters.

networkPolicy

Kubernetes clusters can be configured to use the Azure CNI plugin, which provides an Azure-native networking experience: pods receive IP addresses directly from the VNet subnet on which they are hosted. To enable Azure integrated networking, the networkPolicy setting must be added to your cluster definition and configured with “azure”.
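
A minimal sketch of the relevant piece of the cluster definition (assuming the vlabs apiVersion that acs-engine uses) would be:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "networkPolicy": "azure"
      }
    }
  }
}
```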

The network piece is something worth spending time on. Go to the end of this features.md file and read the networking part.

count (in both masterProfile & agentPoolProfiles)

This time around, I want to make the cluster more highly available and give it a bit more scale. In Azure, you can deploy 1, 3, or 5 masters, and some very large K8s clusters when it comes to the number of agent nodes. I want to keep things simple, so I will go with 3 masters and just 3 agents.
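
In the cluster definition, that translates to something like the following fragment (the agent pool name here is just an example):

```json
{
  "masterProfile": {
    "count": 3
  },
  "agentPoolProfiles": [
    {
      "name": "agentpool1",
      "count": 3
    }
  ]
}
```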

dnsPrefix

Make sure to change the name to something relevant. For this deployment, I will be using “k8scustom”.

vmSize (in both masterProfile & agentPoolProfiles)

I will be using the Standard_DS2_v2 VM type 

vnetSubnetId (in both masterProfile & agentPoolProfiles)

This would be the Azure resource ID for the “K8s-Masters-Subnet” & “K8s-Agents-Subnet” subnets we’ve just created. To extract them using the Azure CLI, execute the following command and copy the “id” values:
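
Something along these lines should do it (the Resource Group and VNet names below are just examples, substitute your own):

```bash
# List all subnets in the existing VNet and show their resource IDs
az network vnet subnet list \
  --resource-group Production-RG \
  --vnet-name Production-VNet \
  --query "[].{Name:name, Id:id}" \
  --output table
```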

firstConsecutiveStaticIP (masterProfile)

The first private IP that will be handed to the first master node (we have 3 in this deployment). My “K8s-Masters-Subnet” prefix is 172.18.5.0/24, so I will be starting with 172.18.5.240 as my first IP, with the next two masters following at .241 and .242 respectively.
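
Putting this together with the subnet ID from the previous step, the relevant masterProfile piece would look something like this (the subscription ID, Resource Group and VNet names are placeholders):

```json
{
  "masterProfile": {
    "vnetSubnetId": "/subscriptions/<subscription-id>/resourceGroups/<vnet-resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/K8s-Masters-Subnet",
    "firstConsecutiveStaticIP": "172.18.5.240"
  }
}
```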

vnetCidr

Once again, I will point you to the end of this features.md file and read the networking part.

storageProfile (in both masterProfile & agentPoolProfiles)

Azure managed disks are cool, as they take the work of managing storage accounts out of your hands, which is always nice. Let’s change the profile to “ManagedDisks”.

OSDiskSizeGB (in both masterProfile & agentPoolProfiles)

64GB, simple as that. It’s just another customization possible when using acs-engine to deploy K8s clusters. 

Finally, after editing all the parameters, the acs-engine cluster definition template should look something like this.
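
Here is a sketch of how mine ended up; treat it as an illustration rather than something to copy verbatim. The subscription ID, VNet Resource Group and VNet names, agent pool name, SSH key and service principal values are placeholders, and the 172.18.0.0/16 vnetCidr is simply what matches my example address space:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "networkPolicy": "azure"
      }
    },
    "masterProfile": {
      "count": 3,
      "dnsPrefix": "k8scustom",
      "vmSize": "Standard_DS2_v2",
      "osDiskSizeGB": 64,
      "storageProfile": "ManagedDisks",
      "vnetSubnetId": "/subscriptions/<subscription-id>/resourceGroups/<vnet-resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/K8s-Masters-Subnet",
      "firstConsecutiveStaticIP": "172.18.5.240",
      "vnetCidr": "172.18.0.0/16"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_DS2_v2",
        "osDiskSizeGB": 64,
        "storageProfile": "ManagedDisks",
        "availabilityProfile": "AvailabilitySet",
        "vnetSubnetId": "/subscriptions/<subscription-id>/resourceGroups/<vnet-resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/K8s-Agents-Subnet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "<your-ssh-public-key>"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "<service-principal-app-id>",
      "secret": "<service-principal-password>"
    }
  }
}
```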

After we save our template as a new JSON file, and just like we did in the previous post, we now need to generate the new ARM templates and deploy the cluster into its dedicated Resource Group. Don’t forget to point to the new JSON file and change the folder path when deploying the cluster.
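
Assuming the cluster definition was saved as kubernetes-custom.json (the file name and the Azure region below are just examples), the flow looks roughly like this:

```bash
# Generate the ARM template and parameters from the cluster definition
acs-engine generate kubernetes-custom.json

# Create the dedicated Resource Group for the cluster
az group create --name ACS-Engine-K8s-Custom --location westeurope

# Deploy the generated templates (the _output folder is named after the dnsPrefix)
az group deployment create \
  --resource-group ACS-Engine-K8s-Custom \
  --template-file _output/k8scustom/azuredeploy.json \
  --parameters @_output/k8scustom/azuredeploy.parameters.json
```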

Post-Deployment 

As you can see, we now have 3 master & 3 agent nodes deployed with 64GB managed disks across the board.

As for the networking part, you can see that both the masters and agents subnets now have dozens of connected devices. These are the pre-allocated IP addresses provisioned as a result of using Azure CNI, and they will be used by pods deployed in the cluster.

SSH to the public IP or the DNS name of the cluster, cat the .kube/config file, copy it to your local machine, and then get the cluster nodes using kubectl, same as we did in the previous post.
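
For example, assuming the default azureuser admin username and my k8scustom DNS prefix (the region below is just an example), copying the kubeconfig and listing the nodes looks like this:

```bash
# Copy the kubeconfig from the master to the local machine
scp azureuser@k8scustom.westeurope.cloudapp.azure.com:.kube/config ~/.kube/config

# List the cluster nodes
kubectl get nodes
```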

We now have a custom working K8s cluster!

The last thing left to do is a quick integration test between the “Backend” server deployed in my “Backend-Subnet” and a new example nginx web server pod I will deploy inside the cluster. Let’s start by deploying the pod, making it listen on port 80, checking that it is in a “Running” state, and discovering its IP using the following commands:
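
Something along these lines should do it (nginx here is simply the public nginx image):

```bash
# Run an example nginx web server listening on port 80
kubectl run nginx --image=nginx --port=80

# Verify the pod is Running and note its IP address
kubectl get pods -o wide
```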

As you can see, both the agent node and the pod get their IPs from the same “flat” subnet.

From my “Backend” server, deployed in a different subnet and a different Resource Group in Azure (“Production”), I am now able to connect to the web server I’ve just deployed in Kubernetes. Good stuff!

Congrats on making it this far, you did well  🙂 

In the next post in this series, we will deploy a mixed Linux & Windows agents K8s cluster and play with versioning a bit. Stay tuned…
