Tutorial Highlights & Transcript
00:00 - Beginning of Video
Hello everyone. Today I'll be presenting on Terraform automation: how we can automate our Terraform deployments with the Atlantis tool.
00:24 - What is Atlantis?
First of all, what is Atlantis? It is an open-source tool for automating Terraform deployments. It connects directly with your SCM, which is source control management, such as GitHub, GitLab, or Bitbucket, and it works through pull requests. Whenever there is a change to the Terraform code for your infrastructure, it detects the change and runs the Terraform steps, plan and apply, automatically for you. You can also keep control over the apply step: if you don't want changes applied automatically, you can trigger the apply manually from within the pull request itself. We don't have to run Terraform from our local machine; everything happens directly within, for example, our GitHub repository.
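The pull-request-driven workflow described above is controlled entirely through PR comments. A typical interaction looks like this (the directory and workspace names are placeholders for illustration):

```text
# Typed as comments on the pull request, not in a shell:
atlantis plan                     # re-run the plan for the changed projects
atlantis plan -d infra -w staging # plan a specific directory and workspace
atlantis apply                    # apply the planned changes
```

Atlantis itself posts the plan output, and these follow-up instructions, back as comments on the same pull request.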
01:27 - Why use Atlantis?
Going forward, why should we use Atlantis? There are a number of advantages. First, it increases visibility: if someone creates a new pull request, say a change to an existing S3 bucket or a brand-new resource, your other team members can review the changes, collaborate with you, and update the existing PR. It makes it visible to the whole team which resources are new or changed and what behavior we are expecting in our Terraform-managed infrastructure. Atlantis also supports review, which is effectively the Terraform plan: when a new PR is created, it runs the plan for us and generates clear output showing which resources will be created or updated. It also supports Terraform locking, building on Terraform's native state locking. If one PR is working against a specific backend, let's say S3, and a new PR comes in against the same state, Atlantis locks it and reports that another PR is already working in that state; you first have to run the apply on that PR, or unlock it, before continuing with the other one. It also provides faster deployments: you don't have to select a specific workspace or set up your AWS credentials every time, because everything is handled directly through GitHub pull request comments. And it has one more cool feature: once the plan is done and we run the apply with Atlantis, and the apply succeeds, it will automatically merge the PR for us and close the now-outdated branch. This is optional. Yeah, it's a cool tool to configure.
04:07 - Atlantis Set Up - Demo Use Case
There are a number of ways to set up Atlantis. For my demo use case, I have used a Docker container on an EC2 instance. There is an official Atlantis image; you can pull it directly and run it with a number of arguments. I have used a GitHub repository and an organization for my demo, but you can also use Bitbucket or GitLab repositories. First of all, we need a Personal Access Token so Atlantis can communicate with our GitHub repository. Atlantis also recommends using a GitHub organization, where we can allowlist the repositories within the organization that Atlantis will run against. We also need a webhook to integrate Atlantis directly with the repository: whenever something changes, GitHub sends a POST request that triggers Atlantis for us. This is the command to fire up the Atlantis server. The first argument it expects is a GitHub user. You can always use a bot user, but in my demo I'm using my own GitHub user, because I'm on the free tier of GitHub organizations, so I'm sticking with my user. Then we pass the Personal Access Token. The next flag is the repo allowlist, that is, which repos you wish to use Atlantis for, followed by the GitHub organization name and the port. The last flag enables the auto-merge feature. There are other flags as well, such as parallel plan and apply, or disabling the automatic plan, but for this demo use case I've used the auto-merge feature.
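The server start-up described above can be sketched roughly as follows. This is a minimal sketch, assuming the official image and standard server flags; the token, webhook secret, username, and organization name are placeholders, not the values used in the demo:

```shell
# Pull the official Atlantis image and start the server on an EC2 host.
# <token>, <secret>, "my-github-user", and "my-org" are placeholders.
docker run -d -p 4141:4141 ghcr.io/runatlantis/atlantis:latest server \
  --gh-user=my-github-user \
  --gh-token=<token> \
  --gh-webhook-secret=<secret> \
  --repo-allowlist='github.com/my-org/*' \
  --automerge \
  --port=4141
```

The GitHub webhook for the repository (or organization) would then point at `http://<ec2-public-ip>:4141/events`, so pull request events trigger Atlantis automatically.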
06:12 - Terraform Automation with Atlantis - Demo Use Case
This is the workflow of my demo use case. Assuming that Atlantis is up and running, when we change some code in our Terraform project and push it up in a pull request, GitHub sends a webhook POST request to Atlantis, which automatically triggers an Atlantis plan for us. If the plan looks good, we run the Atlantis apply command as a comment on the GitHub pull request, it executes the Terraform apply, and the resources are deployed. After the resources deploy gracefully, it automatically merges the pull request into the master branch.
07:02 - Demo of Terraform Automation with Atlantis
Let's see all this in action. First of all, I'm using EC2, and if you go to its public IP, this is the Atlantis landing page. There is an option to directly enable or disable the apply command; if you disable apply commands, you cannot run an apply. It also states that no locks are found, because we haven't created any PR yet. When there is a PR, Atlantis creates one lock for it, and once that PR is completed and merged, it releases the Terraform lock. Okay. This is my GitHub organization, and this is my demo repository. I have a very basic Terraform file: it sets up the S3 backend with a DynamoDB table for Terraform state locking, the provider, and just one simple S3 bucket resource. I'm passing a dummy name for the S3 bucket, and the other properties are defaults. Let's go ahead and create a new branch and make a couple of changes. I'm adding a new resource for the same S3 bucket, changing the name, and committing the change. Now we are ready to create a pull request. As soon as we hit the Create Pull Request button, Atlantis detects the change in our Terraform project and initiates the plan for us. It can take a while depending on your infrastructure and how many resources your code defines, but for me it's pretty small. It has detected one change, which is to add an S3 bucket resource. Atlantis is also instructing us on how to apply it or plan it again: we can just comment with Atlantis plan or apply, and if we run the plan again, it re-runs the plan for us. Right now it is using the default workspace, because I haven't specified any workspace for this demo. When the plan was initiated, Atlantis locked the root directory with the default workspace, and it shows as locked because the S3 backend is in use by this pull request.
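The demo's Terraform file, as described above, can be sketched like this. The bucket names, table name, and region are illustrative placeholders, not necessarily the values shown in the video:

```hcl
# Remote state in S3 with DynamoDB-based state locking (placeholder names).
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "demo/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}

provider "aws" {
  region = "us-east-1"
}

# The single demo resource: an S3 bucket with a dummy name,
# all other properties left at their defaults.
resource "aws_s3_bucket" "demo" {
  bucket = "atlantis-demo-dummy-bucket"
}
```

It is this S3/DynamoDB backend that Atlantis locks per pull request, which is why a second PR touching the same state is blocked until the first is applied or unlocked.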
Once we are ready, we can go ahead and comment with the Atlantis apply command, and you see it immediately triggers the apply for us and applies the Terraform resources. It ran the apply for the directory, which is the root, with the default workspace, and the output is: 1 added, 0 changed, 0 destroyed. The GitHub user is actually myself, and here there's an automated comment saying it is automatically merging because all plans have been successfully applied, and it has merged the PR automatically. If I go ahead and check, you can see that my new S3 bucket has been created. There could be a number of questions, like: how can we set up multiple backends, where one repository has multiple S3 backends and different Terraform projects? For this, Atlantis provides a YAML file where we can specify repo-level configuration. You can specify the project name, the directory, the workspace, and different Terraform versions, as well as custom workflows, for example, what you want to run during the plan, and other features like parallel plan and apply, disabling apply and plan altogether, and just using the GitHub PR comment feature to explicitly plan or apply.
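The repo-level YAML file mentioned above is `atlantis.yaml`, placed at the root of the repository. A minimal sketch, assuming two hypothetical projects (the names, directories, and version are placeholders for illustration):

```yaml
# atlantis.yaml — repo-level configuration (illustrative placeholders).
version: 3
parallel_plan: true
parallel_apply: true
projects:
  - name: networking
    dir: networking
    workspace: default
    terraform_version: v1.5.7
    autoplan:
      when_modified: ["*.tf"]
      enabled: true
  - name: storage
    dir: storage
    workspace: staging
    autoplan:
      enabled: false   # disable automatic plans; comment "atlantis plan" instead
```

Each project gets its own directory, workspace, and backend, which is how one repository can drive several independent Terraform state files.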
Yasir is a DevOps Support Engineer at nClouds. He has multiple technical certifications including AWS Certified Solutions Architect - Associate and Certified Kubernetes Administrator.