Deploying Kubernetes Clusters Using Azure Container Service Engine – Basic Deployment

Now that we have all tools installed and a working IDE, it’s time to deploy an actual Kubernetes cluster using acs-engine. 

As I mentioned in the first post in this series, there are plenty of acs-engine cluster definition JSON examples in the GitHub repository. For the purpose of this basic deployment, I will edit one of those example templates (a sketch of the edited file appears right after the parameter list below).

Now, the first advantage that pops up is that you can easily choose the K8s version you want to deploy. With the other methods available on Azure today, you're fairly limited: you can't change the version when using ACS, and you can only choose between K8s 1.7.7 and 1.8.1 when using AKS (at the time of writing this post). If you need an older version, say 1.6, acs-engine is the way to go.

My basic deployment configuration will include a K8s cluster with a single master node and a single agent node, all Linux-based. Let’s edit the following parameters:

  • dnsPrefix
  • adminUsername
  • keyData (this is the SSH public key created in the previous post)
  • clientId (this is the SPN app id created in the previous post)
  • secret (this is the SPN secret created in the previous post)
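
To make this concrete, here is a minimal sketch of what my edited cluster definition looks like, loosely based on the kubernetes.json example in the acs-engine repository. All values are placeholders, and details like the VM sizes and the agent pool name are just assumptions for illustration:

    {
      "apiVersion": "vlabs",
      "properties": {
        "orchestratorProfile": {
          "orchestratorType": "Kubernetes"
        },
        "masterProfile": {
          "count": 1,
          "dnsPrefix": "myk8sbasic",
          "vmSize": "Standard_D2_v2"
        },
        "agentPoolProfiles": [
          {
            "name": "agentpool1",
            "count": 1,
            "vmSize": "Standard_D2_v2",
            "availabilityProfile": "AvailabilitySet"
          }
        ],
        "linuxProfile": {
          "adminUsername": "azureuser",
          "ssh": {
            "publicKeys": [
              {
                "keyData": "ssh-rsa AAAA... (the public key from the previous post)"
              }
            ]
          }
        },
        "servicePrincipalProfile": {
          "clientId": "<SPN app id>",
          "secret": "<SPN secret>"
        }
      }
    }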

The version thing is not that relevant right now, so I'm not going to spend time on it here. In the next post, we will see how to create a cluster with a specific K8s version.

As you may remember, acs-engine will use this template to generate an ARM template along with all the cluster configuration and dependent files.

Once you've saved the acs-engine cluster definition file, it's time to run a simple generate command. I named my file acs_engine_k8s_basic.json, so from within my acs-engine working directory the command I need to run looks like this:
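
    acs-engine generate acs_engine_k8s_basic.json

(This assumes the acs-engine binary is on your PATH; otherwise run it with ./acs-engine or the full path to the binary.)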

Great. Let's go back to our acs-engine folder. You can now see that a new _output/<dnsPrefix> folder has been created, which contains the ARM template and all its supporting files.
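
The two files we care about for the deployment are the ARM template and its parameters file; the folder also holds the generated certificates and a kubeconfig directory (the exact contents may vary slightly between acs-engine versions). Roughly:

    _output/<dnsPrefix>/
        apimodel.json
        azuredeploy.json
        azuredeploy.parameters.json
        kubeconfig/
        (plus the generated certificates and keys)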

Before we go and deploy our cluster, we need to make sure that we have an Azure Resource Group in place. This can be created easily via the portal GUI or with a simple CLI command:
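
For example (the resource group name and location here are just the ones I picked; use your own):

    az group create --name k8s-basic-rg --location westeurope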

To deploy the cluster, run the following Azure CLI deployment command, which uses the generated ARM template JSON file. This will take a few minutes, depending on the number of nodes and Azure resources that need to be deployed.
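
Something along these lines, assuming the resource group from the previous step and replacing <dnsPrefix> with the prefix you used in the cluster definition:

    az group deployment create \
        --name k8s-basic-deployment \
        --resource-group k8s-basic-rg \
        --template-file _output/<dnsPrefix>/azuredeploy.json \
        --parameters @_output/<dnsPrefix>/azuredeploy.parameters.json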

Once the deployment has finished (you will see the ARM template output on your terminal), you can go to the resource group in Azure and check all the created resources.

Note that a new VNet was deployed as well. In the next post, I will be talking about the custom VNet scenario.

The last part is to connect to the cluster and check its health. To do that, we need the K8s config file available on the machine we are connecting from, which in my case is my MacBook. In the cluster resource group, look for the only Azure Public IP resource and SSH to it (or to its DNS name) using the username you set in the cluster definition template.
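
In my case that boils down to something like the following (the master's DNS name normally follows the <dnsPrefix>.<region>.cloudapp.azure.com pattern; the username and prefix here are the placeholder values from my template):

    ssh azureuser@myk8sbasic.westeurope.cloudapp.azure.com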

If you don't have the SSH private key we created in the previous post on your local machine, make sure to copy it over first; if you are connecting from the same machine you deployed the cluster from, you should be fine.

Now that you've SSH'd into the master node, cat the kubectl config file located in the .kube folder and paste its contents into a new config file on your local machine. Don't be alarmed by the length of the file content; just make sure you are copying the entire thing (including the dashes at the top).

It is good practice to give the file a descriptive name in case you are working with several clusters (which will be our case by the end of this series).
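
One way to grab the file and name it in a single step from your local machine (the host, username and file name are just the placeholders I've been using):

    ssh azureuser@myk8sbasic.westeurope.cloudapp.azure.com 'cat ~/.kube/config' > ~/.kube/config-k8s-basic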

Get the cluster nodes from your local machine by running the kubectl get nodes command with the --kubeconfig flag pointing to the config file you've just created.
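
For example, with the file name I used above:

    kubectl get nodes --kubeconfig ~/.kube/config-k8s-basic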

There you have it, a running 1.7.10 K8s cluster!

In the next part, we will spice things up a bit with more networking, storage and versioning customizations. Stay tuned…
