Sometimes I wonder which technology has more buzz: Kubernetes or AI? Honestly, being a DevOps engineer at a DevOps consulting company, I would say the answer is Kubernetes. 🙂 But, just like any popular technology, there is some confusion because there are many options for deployment: a managed service such as Amazon EKS, or a self-managed cluster built with tools like kops or kubeadm.
So, how do you choose which way to go? It all depends on your business and technical objectives.
Amazon EKS simplifies Kubernetes cluster deployment by taking away the hassle of maintaining the master control plane. Worker node provisioning is still up to you, though it is simplified by Amazon EKS pre-configured Amazon Machine Images (AMIs).
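For comparison, standing up a managed cluster can be a single command. This is only an illustration, not part of the walkthrough below; eksctl is a third-party CLI, and the cluster name and sizes here are placeholders:

# Create an EKS cluster: AWS runs the control plane, eksctl provisions the workers
eksctl create cluster --name demo-cluster --region us-west-2 --nodes 2 --node-type t3.medium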
If your use case calls for more control, such as custom cluster settings or specific add-ons, I recommend that you deploy a self-managed Kubernetes cluster on AWS using Terraform and kops.
Our client, an innovative developer of IT service and support solutions, wanted to speed up the deployment of their applications. They had a mix of local and AWS environments and asked nClouds to help them migrate their data application suite to the Kubernetes platform. Their primary data store was an Apache Cassandra database, and they had an additional data store using a relational database.
A key business objective was to speed up the deployment of their applications to be more responsive to changing market conditions. I used Terraform for infrastructure provisioning on AWS and kops for Kubernetes cluster configuration.
I applied nClouds’ expertise in migration, containerization, and AWS cloud infrastructure. For DevTest, I set up the database to be deployed within the Kubernetes cluster (as a single node); for the production stage, I set it up to be deployed separately. I provisioned and configured the Kubernetes cluster using kops, as described in the walkthrough below.
At a high level, here are the steps I took: provision the AWS infrastructure (VPC, subnets, and state storage) with Terraform, stand up the Kubernetes cluster with kops, containerize the applications, and deploy them along with their data stores to the cluster.
As a result, our customer achieved a 50% faster deployment time.
Let’s get into the nitty-gritty. First, clone the repository that contains the Terraform code and kops configuration:
git clone https://github.com/nclouds/generalized.git --branch rogue_one --depth 1
Then we will use the cd command to move into the right directory:
cd generalized/terraform/terraform-kops/
Before initializing Terraform, create an Amazon S3 bucket to store the remote state and an Amazon DynamoDB table for state locking. They need to be in the same AWS Region and can have any name you want; keep the names handy, as we will use them later. The only restriction is that the Amazon DynamoDB table must have a partition key named LockID.
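If you prefer to create these from the command line, something like the following AWS CLI calls should work. The bucket and table names are placeholders, and the LocationConstraint argument should be omitted for us-east-1:

# Create the S3 bucket that will hold the Terraform remote state
aws s3api create-bucket --bucket my-terraform-state --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2

# Create the DynamoDB table used for state locking; the partition key must be named LockID
aws dynamodb create-table --table-name terraform-locks --region us-west-2 \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST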
Now create a backend.tf file using the template provided in the repository:
cp backend.tf.example backend.tf
In this file, you need to input the information about your Amazon S3 bucket and Amazon DynamoDB table.
terraform {
  required_version = "~> 0.10"
  backend "s3" {
    bucket               = "<YOUR_BUCKET_NAME>"
    region               = "<YOUR_BUCKET_REGION>"
    key                  = "backend.tfstate"
    workspace_key_prefix = "terraform"
    dynamodb_table       = "<YOUR_DYNAMODB_TABLE_NAME>"
  }
}
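A side note on workspace_key_prefix: it only matters if you use Terraform workspaces to keep several environments in one backend. In that case, each workspace's state lands under terraform/<workspace_name>/backend.tfstate. A minimal sketch, with an arbitrary workspace name:

# Create a workspace for this environment; its state is stored under the workspace_key_prefix
terraform workspace new test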
Next, create a configuration file for your environment and export the environment name:

cp config/env.tfvars.example config/<env_name>.tfvars
export env=<env_name>
For this example, we will assume that <env_name> is set to test. The commands would look like this:
cp config/env.tfvars.example config/test.tfvars
export env=test
Now fill in the values in config/test.tfvars. After doing so, it should look something like this:
environment  = "test"
cluster_name = "nclouds"
region       = "us-west-2"

### VPC MODULE
vpc = {
  cidr          = "10.2.0.0/16",
  dns_hostnames = true,
  dns_support   = true,
  tenancy       = "default",
}
public_subnets  = ["10.2.0.0/24", "10.2.1.0/24", "10.2.5.0/24"]
private_subnets = ["10.2.2.0/24", "10.2.3.0/24", "10.2.4.0/24"]

### KUBERNETES MODULE
kops_state_bucket = "<YOUR_BUCKET_NAME>/kops"
worker_node_type  = "t3.medium"
min_worker_nodes  = "1"
max_worker_nodes  = "2"
master_node_type  = "t3.medium"
With the backend and variables in place, initialize Terraform by running terraform init. If everything is correct, the output should look like this:
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.
If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
Next, preview the changes Terraform will make:

terraform plan -var-file=config/${env}.tfvars
The output of the plan command lets us know how many resources will be created:
Plan: 27 to add, 1 to change, 0 to destroy.
-----------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform can't guarantee that these exact actions will be performed if "terraform apply" is subsequently run.
Releasing state lock. This may take a few moments…
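To address that note and make apply execute exactly the plan you reviewed, you can save the plan to a file first. tfplan is just an arbitrary file name:

# Save the reviewed plan, then apply that exact plan file
terraform plan -var-file=config/${env}.tfvars -out=tfplan
terraform apply tfplan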
Now we are ready to run terraform apply. Execute the following command and answer “yes” when prompted:
terraform apply -var-file=config/${env}.tfvars
Apply complete! Resources: 27 added, 0 changed, 0 destroyed.
Releasing state lock. This may take a few moments...
Once the apply completes, point kops at its state store and validate the cluster. The bucket path comes from the kops_state_bucket value in config/<env_name>.tfvars:

export KOPS_STATE_STORE=s3://<YOUR_BUCKET_NAME>/kops
kops validate cluster
INSTANCE GROUPS
NAME               ROLE    MACHINE TYPE  MIN  MAX  SUBNETS
agent              Node    t3.medium     1    2    PrivateSubnet-0,PrivateSubnet-1,PrivateSubnet-2
master-us-west-2a  Master  t3.medium     1    1    PrivateSubnet-0
master-us-west-2b  Master  t3.medium     1    1    PrivateSubnet-1
master-us-west-2c  Master  t3.medium     1    1    PrivateSubnet-2

NODE STATUS
NAME                                       ROLE    READY
ip-10-2-2-68.us-west-2.compute.internal    master  True
ip-10-2-3-217.us-west-2.compute.internal   master  True
ip-10-2-3-218.us-west-2.compute.internal   node    True
ip-10-2-4-251.us-west-2.compute.internal   master  True

Your cluster <cluster_name>.k8s.local is ready
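If kubectl cannot reach the new cluster yet, export a kubeconfig from the kops state store first. A minimal sketch, assuming the cluster name built from the tfvars above (cluster_name plus the .k8s.local suffix):

# Write the cluster endpoint and credentials into ~/.kube/config
kops export kubecfg --name nclouds.k8s.local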
The kubectl get nodes command should provide output something like this:
NAME                                       STATUS  ROLES   AGE  VERSION
ip-10-2-2-68.us-west-2.compute.internal    Ready   master  5m   v1.11.9
ip-10-2-3-217.us-west-2.compute.internal   Ready   master  5m   v1.11.9
ip-10-2-3-218.us-west-2.compute.internal   Ready   node    4m   v1.11.9
ip-10-2-4-251.us-west-2.compute.internal   Ready   master  5m   v1.11.9
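As a quick smoke test, you can deploy something small and watch it schedule onto a worker node. This is just an illustrative example; nginx stands in for any container image:

# Launch a throwaway nginx deployment and watch the pod come up
kubectl create deployment nginx --image=nginx
kubectl get pods -w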
When you are done experimenting, tear everything down to avoid unnecessary charges:

terraform destroy -var-file=config/${env}.tfvars
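Depending on how the repository wires kops into Terraform (an assumption on my part), some kops-managed resources may survive the destroy. If so, you can remove the cluster with kops directly:

# Delete the cluster and every AWS resource kops created for it
kops delete cluster --name nclouds.k8s.local --yes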
If your Kubernetes deployment requires custom settings or specific add-ons, a self-managed cluster built with kops and Terraform, as shown here, is one way to get that flexibility while migrating to the Kubernetes platform.
Need help with implementing containers? The nClouds team is here to help with that and all your AWS infrastructure requirements.