nSights Talks

Introducing AWS Controllers for Kubernetes (ACK)

Tutorial Highlights & Transcript

00:00 - Introduction
Hi, everyone. My name is Jasmeet Singh, and I’m from On-Call Support Services. My demo is about AWS Controllers for Kubernetes (ACK). Let’s start.
00:17 - What is ACK?
What is ACK? Most of the time, our applications running on Kubernetes depend on different AWS services. For example, we may want our applications to store logs in S3, take advantage of AWS-managed RDS databases, or store our application images in Amazon ECR. Developers who love to use the Kubernetes API can deploy their applications as well as the AWS services they depend on from one place using ACK. In simple words, AWS Controllers for Kubernetes (ACK) lets you define and use AWS services directly from Kubernetes. By using ACK, we can take advantage of AWS-managed services for our Kubernetes applications without needing to define resources outside of the cluster.
01:09 - How It Works
How does it work? Basically, ACK is based on Kubernetes CRDs (custom resource definitions) and their custom controllers. As admins, we install a controller for a specific AWS service; the images for these controllers are available on the Amazon ECR Public registry. These controllers are deployed as Kubernetes Deployments along with their CRDs. We need to grant permissions to the service accounts of these pods so they can create the AWS resources we need. And finally, we create a custom resource manifest file for the specific AWS service.
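(As a rough sketch of what that leaves you with in the cluster, you can verify the pieces like this; the namespace and names follow the ACK defaults and may differ in your install.)

# The controller runs as a normal Deployment in its namespace (ack-system by default)
kubectl get deployments -n ack-system

# The CRDs for that service are installed cluster-wide
kubectl get crd | grep services.k8s.aws

# Each controller pod runs under its own service account, which is what we grant AWS permissions to
kubectl get serviceaccounts -n ack-system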
01:46 - Architecture Diagram
This is the architecture diagram of how the ACK controller works. As admins, we install the controller pod for an AWS service, in this case S3, and along with this controller its CRD also gets installed. We give this S3 controller pod permission to create the S3 bucket using IRSA, which is IAM Roles for Service Accounts in Kubernetes. After that, we create the custom resource manifest file for the S3 bucket. When we create this custom resource, the ACK controller watches for it and, having the required permissions, creates the S3 bucket for us in our AWS account.
02:34 - Demo
Let’s get some hands-on with this. For this demo, I have an EKS cluster running, and I am going to create an S3 bucket using ACK. I have written down the commands so that I can show you how it works. Let me copy the first command here. In this command, I’m exporting some environment variables, as you can see here: the service, which is s3, the release version of the controller chart, and the namespace, which is ack-system. After that, this command logs in to the Amazon ECR Public registry, and then it installs the ACK controller pod for S3 using the helm command. You can see that we have successfully logged in to the Amazon public repository. I also want to show you that you can get the different controllers for different AWS services from the Amazon ECR Public gallery. As you can see here, we have the S3 controller Helm chart, as well as RDS, ECR, and so on. You can see that we have successfully installed the controller pod for S3. Let us first verify that its pod is running fine. Now you can see that my S3 controller pod is running fine. Along with this pod, the CRD on which the controller depends is also installed by that command. Let me show you: you can see the CRD buckets.s3.services.k8s.aws.
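(For reference, the install step above generally follows this pattern from the ACK documentation; the chart version below is a placeholder and the region is just an example.)

# Environment variables used by the install commands (values are examples/placeholders)
export SERVICE=s3
export RELEASE_VERSION=<s3-chart-version>
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=us-east-1

# Log in to the Amazon ECR Public registry that hosts the ACK Helm charts
aws ecr-public get-login-password --region us-east-1 | \
  helm registry login --username AWS --password-stdin public.ecr.aws

# Install the S3 controller into the ack-system namespace
helm install --create-namespace -n "$ACK_SYSTEM_NAMESPACE" ack-"$SERVICE"-controller \
  oci://public.ecr.aws/aws-controllers-k8s/"$SERVICE"-chart \
  --version="$RELEASE_VERSION" --set=aws.region="$AWS_REGION"

# Verify the controller pod and the bucket CRD it installs
kubectl get pods -n "$ACK_SYSTEM_NAMESPACE"
kubectl get crd buckets.s3.services.k8s.aws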

So now my next command is to create the OIDC provider for this EKS cluster. Let me copy this command. Basically, this OIDC provider creates trust between our AWS account and third-party identities. Our OIDC provider is created here. Let me go to the IAM console and refresh this page. For the OIDC provider that we just created, we have to create a trust policy. We go to Assign role and create a new role for this OIDC provider. The trusted entity will be Web identity, and the audience is sts.amazonaws.com. Click Next for permissions and tags, review the role, and let me give it a name. The purpose of this trust policy is that the OIDC provider will generate temporary credentials for the service account of our S3 controller pod.
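(A sketch of this step using the CLI instead of the console; the cluster name, account ID, and OIDC provider URL are placeholders.)

# Associate an IAM OIDC provider with the EKS cluster
eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --region us-east-1 --approve

# Trust policy for the role: a web identity principal with audience sts.amazonaws.com
cat <<'EOF' > trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER_URL>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<OIDC_PROVIDER_URL>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF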

Let me show you one more thing here. When we check the service account of this controller pod, you can see that a service account is attached to our S3 controller pod. The name of this service account is ack-s3-controller. In our next step, we have to provide the required permissions to create the S3 bucket in our account. Let me copy the last command here.
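(A quick way to confirm this; the namespace and service account name follow the demo and may differ in your install.)

# Show which service account the S3 controller pod runs under
kubectl -n ack-system get pods -o jsonpath='{.items[0].spec.serviceAccountName}'
# Expected output in this demo: ack-s3-controller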

This command will create a role named s3-full-access-ack, and I’m attaching full access for the S3 service to it. We are then going to attach this role and the S3 policy to the service account of our S3 controller pod. This will take a few seconds; in the meantime, let me copy my next command.
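(The demo runs this as a single prepared command; a rough CLI equivalent, with the role and service account names as spoken in the demo, the account ID as a placeholder, and trust.json from the previous step, would be something like this.)

# Create the IAM role using the IRSA trust policy
aws iam create-role --role-name s3-full-access-ack \
  --assume-role-policy-document file://trust.json

# Attach the AWS-managed AmazonS3FullAccess policy to it
aws iam attach-role-policy --role-name s3-full-access-ack \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Point the controller's service account at the role via the IRSA annotation
kubectl -n ack-system annotate serviceaccount ack-s3-controller \
  eks.amazonaws.com/role-arn=arn:aws:iam::<ACCOUNT_ID>:role/s3-full-access-ack --overwrite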

In our next command, we are going to restart the deployment of the S3 controller pod so the pod can pick up the permissions of this service account. We are good here; let me restart the deployment of our S3 controller pod. We are good to go. You can see that in my S3 console we have only three buckets. To create a bucket from the Kubernetes API, we have to define a custom resource for our S3 bucket. I have created a manifest file for S3; let me show you that file. The API version for this file is v1alpha1, and the kind is Bucket. Under the spec section, I am adding some tags for my bucket, and lastly, the name of my bucket will be my-demo-ack-bucket. We can create this object just as we create any other object through the Kubernetes API. Let me create this bucket here. You can see that my-demo-ack-bucket is created. We can also list this bucket by executing a simple command, kubectl get buckets. You can see that my-demo-ack-bucket is created, and let me refresh the S3 console here. Okay, great, you can see that we have successfully created the bucket directly from Kubernetes.
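(For reference, the restart and the Bucket manifest described above look roughly like this; the deployment name depends on your Helm release, the tag key/value are just examples, and the exact spec fields can vary with the controller version.)

# Restart the controller so it picks up the IRSA credentials
# (check the exact deployment name with: kubectl -n ack-system get deployments)
kubectl -n ack-system rollout restart deployment <s3-controller-deployment>

# A minimal Bucket custom resource
cat <<'EOF' > bucket.yaml
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-demo-ack-bucket
spec:
  name: my-demo-ack-bucket
  tagging:
    tagSet:
      - key: created-by
        value: ack
EOF

kubectl apply -f bucket.yaml
kubectl get buckets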

This is the whole power of ACK: by using it, we can create our AWS resources directly from Kubernetes. Additionally, I want to show you some features of ACK. You can see that we have some existing buckets in our AWS account. We can also manage these buckets from Kubernetes. By default, every ACK controller for any AWS service contains the logic for adopting existing resources of its particular service. In our case, the S3 controller has the ability to adopt existing S3 buckets, and this is done with the AdoptedResource CRD.

Let me show you that CRD as well. When we installed the S3 controller pod along with its CRD, one additional CRD also got installed. That CRD is AdoptedResource, and it helps us adopt existing S3 buckets from our account. In this demo, I’m going to adopt this test-adopted bucket so that I can manage it directly from Kubernetes. For this, we also have to create a manifest file; I have created one, let me show you. In this manifest file, the kind is AdoptedResource. Under the spec section, we provide the name of the bucket (we can also provide the ARN of the bucket), and under the kubernetes section, we provide the group name, the kind, which is Bucket, and the metadata for that bucket. So let me create this object as well.
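(A sketch of that AdoptedResource manifest; the bucket name follows the demo, so swap in the name or ARN of your own existing bucket.)

# adopt-bucket.yaml - adopt an existing S3 bucket into the cluster
cat <<'EOF' > adopt-bucket.yaml
apiVersion: services.k8s.aws/v1alpha1
kind: AdoptedResource
metadata:
  name: test-adopted-bucket
spec:
  aws:
    nameOrID: test-adopted-bucket        # name (or use the ARN) of the existing bucket
  kubernetes:
    group: s3.services.k8s.aws
    kind: Bucket
    metadata:
      name: test-adopted-bucket          # name the adopted Bucket object gets in the cluster
EOF

kubectl apply -f adopt-bucket.yaml
kubectl get buckets

Once adopted, the bucket behaves like any other ACK-managed Bucket, so deleting the object later (kubectl delete bucket test-adopted-bucket) also removes the bucket from the AWS account, which is what the demo shows next.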

We have successfully adopted that bucket. Let me show you the bucket, if it’s here. Yes, you can see that we have successfully adopted that bucket from our AWS account. We can now manage this bucket; let me show you how. I’m going to delete this bucket from the Kubernetes API, and it will also get deleted from our AWS console. So here we go: you can see that we have deleted the bucket from the Kubernetes API. This is all from my side. In this way, you can create AWS resources directly from Kubernetes. Also, I want to show you one more thing. This is the official documentation of AWS Controllers for Kubernetes, and you can see that different services are in the released state; we can create the services that are in the released state, and you can check all the services here. So thank you, everyone. If you have any questions, kindly let me know.

Jasmeet Singh

Senior Support Engineer

nClouds

Jasmeet joined nClouds in 2020 as a Support Engineer. Since then, he has been promoted to Senior Support Engineer.