nSights Talks

Auto-Scaling Amazon EKS Managed Node Groups

Tutorial Highlights & Transcript

00:00 - Intro - EKS Managed Node Groups
Hi everyone. Today's topic is how to create and manage EKS node groups of two different capacity types: one on-demand and one spot. We will enable auto-scaling on both node groups, and after that we will also enable the horizontal pod autoscaler on a Kubernetes deployment.
00:32 - Scope of the Demo
So, in the scope of the demo, as I already stated, we will create two types of node groups. Why do we use an on-demand node group at all? Some components, such as controllers and workloads that must not be interrupted at any time, need stable capacity, and spot instances can be terminated at any time, so those workloads have to run on on-demand instances. After that, we will create the spot capacity node group and deploy interruptible workloads onto the spot instances so that we can save cost. Once both node groups are done, we will move on to enabling auto-scaling on the cluster's node groups and horizontal auto-scaling on the pods of our deployment.
01:20 - Workflow for Autoscaling Node Groups
And this is the workflow of how the Kubernetes cluster will auto-scale both node groups. For this I have prepared a document that I will walk through in this video, where I have listed all the steps and the commands you need to run.
01:37 - Steps and Commands
With this command, we can create the EKS cluster. Here you can see the name of the node group we will be using, on-demand, and that we have enabled access to this node group, plus some tags that I have already defined. Creating the cluster takes some time, around 20 to 25 minutes, so I created it in advance. This same section also creates the on-demand capacity node group along with the cluster itself. After that: as we know, the spot instance types we need are not always available, so we need a way to find out which instance types are currently available in our region with AWS. This is the command we can use to get those details. Let me show you how. In the command, I specify that we only need instances with two vCPUs, and the command returns the list of matching instance types. You can see it has generated a proper list, and we will use this list to create the spot capacity node group. As you can see here, this is the command to create the node group. I have specified that it has to be attached to the cluster we already created, and these are all the instance types I collected from the previous command. The capacity type has to be spot. As mentioned here, the minimum and maximum node counts for the node group are also set; I specified a maximum of 10 nodes, so that capacity is available once it is needed.
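The exact commands are not shown in the transcript, so here is a minimal sketch of how the cluster and both node groups could be declared with an eksctl config file. The cluster name asg-test comes from the demo; the region, instance types, and group names are assumptions:

```yaml
# Hypothetical eksctl config: one on-demand and one spot managed node group.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: asg-test            # cluster name used in the demo
  region: us-east-1         # assumed region

managedNodeGroups:
  - name: on-demand         # stable capacity for workloads that must not be interrupted
    instanceType: t3.medium
    minSize: 1
    maxSize: 3
    desiredCapacity: 1
  - name: spot              # interruptible, cheaper capacity
    spot: true
    instanceTypes: ["t3.medium", "t3a.medium", "t2.medium"]  # from the instance-selector output
    minSize: 1
    maxSize: 10             # the autoscaler can grow this group to 10 nodes
    desiredCapacity: 1
```

The cluster would then be created with `eksctl create cluster -f cluster.yaml`. A list of available instance types with two vCPUs, like the one generated in the demo, can be produced with the amazon-ec2-instance-selector tool, e.g. `ec2-instance-selector --vcpus 2`.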
After that, with this command, we can check that nodes with the spot capacity type are present in the cluster. The spot node group was also created in advance, because it too takes some time to create; that way the demo can be finished in the allotted time.
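The check described above could look like this; the `eks.amazonaws.com/capacityType` label is set automatically on EKS managed nodes:

```shell
# List all nodes with a column showing their capacity type (ON_DEMAND or SPOT)
kubectl get nodes -L eks.amazonaws.com/capacityType

# Or filter to the spot nodes only
kubectl get nodes -l eks.amazonaws.com/capacityType=SPOT
```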
03:57 - Deploying the Kubernetes Cluster Autoscaler
So now we will deploy the Kubernetes Cluster Autoscaler, which will scale our spot capacity node group. We just apply the autoscaler deployment to the cluster, and you can see it has been deployed. But that is not all: in the deployment's parameters we also have to reference our cluster, asg-test, so we have to make that change as well. In other words, we have to set the cluster name in the file we downloaded and add two parameters. We need these two lines so that we always have enough available compute, and so that there is no problem scaling nodes down to zero when they are empty. Let me do that. With this command, we can edit the deployment; the changes go under the container section, and you can see it here. This is where the cluster name is changed and the two lines are added.
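The transcript does not show the two lines, but in a standard Cluster Autoscaler deployment they are typically the flags below, added under the container's `command` together with the node-group auto-discovery tag for the cluster. The flag names come from the upstream Cluster Autoscaler; treating them as the ones meant in the demo is an assumption:

```yaml
# Fragment of the cluster-autoscaler container spec
# (edited via: kubectl -n kube-system edit deployment cluster-autoscaler)
command:
  - ./cluster-autoscaler
  - --v=4
  - --cloud-provider=aws
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/asg-test
  - --balance-similar-node-groups           # spread capacity evenly across similar node groups
  - --skip-nodes-with-system-pods=false     # allow scaling a node down even if system pods run on it
```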

That is all; we have edited the deployment file for the autoscaler. Once that is done, we create our own deployment. In it, we specify a replica count of one for now, and that each pod requests one CPU core as well as one gigabyte of memory. But the main difference is that we have to specify which nodes, that is, which capacity type of nodes, the pods should use. This is the line we need to edit. The file has already been prepared, as you can see, and we just have to apply this deployment to our cluster. It is deployed, and now we can list all the resources we have in the cluster.
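A minimal sketch of such a deployment, pinned to the spot nodes via the capacity-type label; the name and image are assumptions, the resource requests match the values mentioned in the demo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-spot               # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-spot
  template:
    metadata:
      labels:
        app: nginx-spot
    spec:
      nodeSelector:
        eks.amazonaws.com/capacityType: SPOT   # schedule only on spot nodes
      containers:
        - name: nginx
          image: nginx:1.25
          resources:
            requests:
              cpu: "1"           # one CPU core per pod, as in the demo
              memory: 1Gi        # one gigabyte per pod
```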

You can see the deployment has been created, the replica set exists, and the pod is running. Now, to confirm that auto-scaling is working properly with the node groups, we scale our deployment from one replica to six. This puts load on the node group, and in the logs of the Kubernetes Cluster Autoscaler we can then see that it has instructed the node group to scale out as required. With this command, we can check the logs. It will take some time. Here, we can check.
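The scale-up and the log check could be done like this; the deployment and namespace names are assumptions:

```shell
# Scale the test deployment from 1 to 6 replicas to generate demand
kubectl scale deployment nginx-spot --replicas=6

# Follow the Cluster Autoscaler logs to watch it grow the node group
kubectl -n kube-system logs -f deployment/cluster-autoscaler
```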

Yes, you can see here that the autoscaler has instructed the node group to scale out from one node. This will take some time to reach a steady state, and we can check all the resources again. Currently some pods are in a pending state, because there is not enough capacity on the existing nodes to schedule them. Yeah, it will take some time.

In the meantime, we can check on AWS as well. This is our EKS cluster, and it has the two node groups; currently two nodes are running, one with on-demand capacity and the other with spot. Let me refresh to see if more nodes have been spinning up. You can see we now have three nodes, as the load has increased. It will spin up more nodes as well; you can see we now have five. This is how we auto-scale node groups so that we always have the right number of nodes to meet demand. It is done.

08:25 - Horizontal Auto-Scaling for Pods
Now let's move on to how we can enable horizontal auto-scaling on our pods as well. We just have to deploy an HPA, the Horizontal Pod Autoscaler, on our deployment, and it will scale the pods up according to resource utilization. The rule we are using is: if the CPU utilization of the pods is equal to or above 50%, the HPA has to scale up the replica set.

Yes, for that I also have some commands and steps to perform. But note that the autoscaler only works once the cluster has the Kubernetes Metrics Server, because it reads the resource utilization of the pods from the Kubernetes metrics. With this command, we can deploy the Metrics Server on our cluster. After that, we create a dummy deployment on which we will put load, and the HPA will scale up the replica set of that deployment. And not only that: if we again want the pods to run on the nodes with the spot capacity type, we just need to add the same node selector lines to this deployment as well. It's been done.
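The Metrics Server can be installed from its upstream manifest with `kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml`. The dummy deployment resembles the php-apache example from the Kubernetes HPA walkthrough; a sketch with the spot node selector added (the image is the one from that walkthrough, the CPU request is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      nodeSelector:
        eks.amazonaws.com/capacityType: SPOT   # keep the pods on spot nodes
      containers:
        - name: php-apache
          image: registry.k8s.io/hpa-example
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 200m      # the HPA's 50% target is measured against this request
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  selector:
    run: php-apache
  ports:
    - port: 80
```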

10:54 - Checking Resources Post-Deployment
Now we have our deployment. Let's check what resources we have. We can see here that one pod is running, and the deployment named php-apache is there. Now we have to apply the HPA, or as you can say, the horizontal pod autoscaler, to our deployment. This is the name of our deployment, and here we have specified that if the CPU utilization of the replica set's pods reaches 50%, it has to scale up to a maximum of 10 replicas. Let me do that as well. It's deployed, and we can get the details of the HPA using this command. Right now it is showing unknown, because the Kubernetes Metrics Server takes some time to calculate the resource utilization of each pod. Next we have to put load on our deployment, and with this command we can put the load on; but in the meantime, we can also verify that the running pod is on our spot nodes.
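The HPA can be created either imperatively with `kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10`, or declaratively; a sketch of the equivalent manifest, assuming the php-apache deployment name from the demo:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10                  # upper bound from the demo
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out at or above 50% CPU
```

`kubectl get hpa php-apache` then shows the current utilization; it reads `<unknown>` until the Metrics Server reports its first samples.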

Let me describe the pod as well. You can see it is running on the node with this IP address.

Check again. As you can see, this node is of the spot capacity type. Now, let's check again whether the HPA is ready or not.

Yes, you can see here it is showing zero percent, because there is currently no load on our deployment. Let me put load on it using those commands. This is the dummy load we have, and it is now starting to increase. We can watch the load rising, and the HPA will start to increase the replica set according to the load. Currently you can see only one replica is running, and once the load goes above 50%, it will start to increase the replica set of our deployment. We just have to wait for some time. You can see the load is now up, at 133%, so it will start to increase the replica set.
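The load generator in the Kubernetes HPA walkthrough is a busybox pod that polls the service in a loop; assuming the demo uses something similar:

```shell
# Run a throwaway pod that hammers the php-apache service to drive CPU above 50%
kubectl run load-generator --image=busybox:1.28 --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://php-apache; done"

# Watch the HPA react as utilization climbs
kubectl get hpa php-apache --watch
```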

You can see the replica set now has three replicas, and it will scale up to 10. It will keep scaling the replica set, up to the maximum of 10 that we defined, until utilization is at or below 50%. So that is all for my demo: we have seen how to enable auto-scaling on node groups as well as on our pod deployment.

Jasmeet Singh

Rishav Bagga

Support Team Lead

nClouds

Rishav is a Support Team Lead at nClouds. He has a Bachelor of Technology Engineering degree from the Rayat & Bahra Institute of Engineering & Nano Technology. Rishav has achieved AWS certifications as a Cloud Practitioner, Solutions Architect (Associate), SysOps Administrator (Associate), and is a Datadog Certified Technical Specialist.