nSights Talks

AWS CodePipeline Custom Actions

Tutorial Highlights & Transcript

00:00 - Why AWS CodePipeline Custom Actions
My demo for today is about AWS CodePipeline using custom actions. Basically, [the client was] already using CodePipeline for everything else they had and it was working fine. We had it set up in Terraform and everything. But then there was this other service that ran in Windows containers, while everything else was on Linux. They threw a Windows container at our pipeline and it didn’t work, because it turns out that even though CodeBuild has Windows workers, you can’t build containers on them: the Docker-in-Docker approach that works on Linux isn’t possible on Windows. We had two options: build a separate pipeline for this service on something else, or make this one work somehow. To keep everything looking the same for the client, we kept it on CodePipeline. We actually found a blog post from AWS where they set up custom actions to be able to build a Windows container, and that’s what I’m going to show you today.
01:27 - Problem - Building Windows Container in CodePipeline
What we wanted to do was build a Windows container in CodePipeline. It’s not supported because the Windows workers in CodeBuild can’t do the Docker-in-Docker setup, so it fails. There is a blog post by AWS with a solution that’s a little bit complicated, but if you need to make this work in CodePipeline, this is the way to go. They already provide a CloudFormation template that sets up everything you need. In it, they list the limitations I mentioned, and the solution is using custom actions.
02:13 - Solution - Custom Action
In this case, the custom action uses a Windows EC2 machine to run the Docker build. It’s not a CodeBuild worker; it’s a fully separate thing. You will be able to see the EC2 instance it uses in the EC2 console. The solution uses Step Functions, Lambdas, and EC2 as parts of the whole setup. As I said, it’s a little overly complicated, but if you have to stay on CodePipeline for that specific project, it solves the problem and it works. I will share some of my thoughts on it at the end.

This is how it works. You have your normal CodePipeline: a source stage and a deployment stage, and in the middle you would usually have a CodeBuild stage to build your container, package, or whatever. In this case, we introduce a custom build action in the middle instead. It uses Lambda functions and Step Functions to orchestrate the whole thing, EC2 instances to run the actual job, and Systems Manager to run the commands inside that machine. If I had to build this from scratch, I would probably just build another pipeline on something else for this specific service. But since this was already a packaged solution, it was easy to set up.

03:54 - Demo - Creating Custom EC2 CodePipeline Builder
How do we do it? I already have my pipeline here, but I want to show you what it looks like in another region where I don’t have the custom action. Here we are in us-east-1. I will create a dummy pipeline. In CodePipeline, you usually have source, build, and deploy. For source, we can choose CodeCommit and pick a random repo and branch. Next. Now we have the build stage. Here, CodeBuild is the default, right? If we select CodeBuild, it prompts us for a project; we can pick an existing project or create our own, but those are the only options we have. As I mentioned, building Windows containers on the Windows CodeBuild workers doesn’t work. That’s why we need the custom action. You’ll notice that among the providers we only see AWS CodeBuild and Jenkins, but nothing else. Now let’s go back to the region where I have my infra already deployed and do the same thing again: create the pipeline, “Friday Demo,” source CodeCommit, Friday Demo on main. Next. On build providers, we now have an extra option: we have CodeBuild, we have Jenkins, but we also have this Custom EC2 CodePipeline Builder. That is what the blog I was showing you is about; it basically creates this option here.

How do you deploy that? This repo right here has everything you need. The only things you need to run to set this up in an account are two commands. First, aws cloudformation package, which you give a bucket to store the sources. Second, aws cloudformation deploy, which deploys a CloudFormation stack for you. It’s already set up here; that’s why I have the option in CodePipeline. This is the stack. As I mentioned, it deploys a bunch of stuff: Lambda functions, CloudWatch event rules, the Step Functions setup, the roles for the instance, and all that. This is just the default that comes in the repo; you deploy it and it becomes an option in your CodePipeline. Once you have that, under build provider you get this Custom EC2 CodePipeline Builder, and you can use it.
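As a sketch, those two commands look roughly like this. The bucket name, stack name, and template filename are placeholders; check the repo’s README for the exact values it expects:

```shell
# Package the template: uploads local source artifacts (Lambda code, etc.)
# to the given S3 bucket and rewrites the template to reference them.
aws cloudformation package \
  --template-file template.yaml \
  --s3-bucket my-artifact-bucket \
  --output-template-file packaged.yaml

# Deploy the packaged template as a stack. CAPABILITY_IAM is required
# because the stack creates IAM roles for the Lambdas and EC2 instances.
aws cloudformation deploy \
  --template-file packaged.yaml \
  --stack-name custom-ec2-codepipeline-builder \
  --capabilities CAPABILITY_IAM
```

Deploying the stack once per account and region is what makes the custom action show up as a build provider in that region’s CodePipeline console.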

07:29 - Demo - Building Windows Containers in CodePipeline
How does it work? First it needs an AMI. If you want to build a Windows container, you give it a Windows AMI, the instance type of your worker, whatever you need to get your build done, and the command. The command is what gets executed inside that machine. In my case, it’s a Docker build, and I have that in the CodeCommit repo. Because it’s a Windows thing, I have this Dockerfile. It’s nothing really; it’s just building from the .NET image, setting a working directory, and doing a copy that isn’t copying anything, because there’s just one file here. We just needed a Docker image to build, and this is a Microsoft base image. The other thing is the script we want to execute, which is a PowerShell script. All it does is log in to ECR, run the Docker build, and do the Docker push.
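The demo’s script is PowerShell, but the same three steps look roughly like this in POSIX shell (the AWS CLI commands are the same on both). The account ID, region, and repository name are placeholders:

```shell
REGION=us-west-2          # placeholder region
ACCOUNT_ID=123456789012   # placeholder account ID
REPO=friday-demo          # ECR repository name from the demo
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Authenticate Docker to ECR with a short-lived password from the AWS CLI.
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Build the Windows image from the Dockerfile in the repo and push it.
docker build -t "${REGISTRY}/${REPO}:latest" .
docker push "${REGISTRY}/${REPO}:latest"
```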

And for those of you familiar with CodePipeline, when you use the ECS deploy action, you have to pass it a JSON file with the changes to your task definition. That’s what these lines are doing: they create that JSON file, called build.json. Why am I mentioning this if we’re not adding a deploy stage yet? Because it’s one of the options here: output artifacts. Again, if you have used CodePipeline before, to pass files between stages you need to define them as output artifacts. Because this is a custom action, the way to do it is to tell it here which files should become build outputs, and then you will be able to use them in the next stage of your pipeline. I would need more time to finish creating this one, so I’m just going to show you the one that’s already in place. Let’s take a look at that one.
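As a sketch, the lines that produce that artifact could look like this. The registry, repository, and container names are placeholders; the standard ECS deploy action expects a JSON array of name/imageUri pairs matching the containers in your task definition:

```shell
REGISTRY="123456789012.dkr.ecr.us-west-2.amazonaws.com"  # placeholder registry
REPO="friday-demo"            # placeholder repository name
CONTAINER_NAME="friday-demo"  # container name in the ECS task definition

# Write the image-definitions JSON that the ECS deploy action consumes.
printf '[{"name":"%s","imageUri":"%s"}]' \
  "$CONTAINER_NAME" "${REGISTRY}/${REPO}:latest" > build.json

cat build.json
```

Declaring build.json as an output artifact of the custom action is what lets a later deploy stage pick it up.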

It has a source stage that pulls the code from CodeCommit, and then a build stage using the custom action right here. You can see the last build succeeded. I am going to trigger one more so we can see what happens. Let’s go back here and make a change in the Dockerfile; change this to “demo.” Okay, we have our change, and because the pipeline is set up to watch that branch, it should be starting right now. The source stage has already triggered, and now we are building. Once this starts, it takes a few seconds, but we will get a details link here. There we go. The custom action is set up so that this link takes us to the step function.

This step function is the one from the custom action, and here we can see what it’s doing. The first part creates an EC2 instance, in this case a Windows machine. Then it waits until the machine is ready, meaning it reports as ready in Systems Manager. This can take a couple of minutes. Then we move on to starting the command execution. This is the command we gave it in our pipeline; for this one, I am executing the script I showed you.
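That “wait until ready” step can be approximated with the AWS CLI: Systems Manager considers an instance usable once its SSM agent registers as Online, and only then can Run Command execute anything on it. A rough sketch, with a placeholder instance ID:

```shell
INSTANCE_ID=i-0123456789abcdef0   # placeholder instance ID

# Poll until the instance's SSM agent reports Online; Run Command
# invocations would fail or hang before this point.
until aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=${INSTANCE_ID}" \
    --query 'InstanceInformationList[0].PingStatus' --output text \
    | grep -q Online; do
  sleep 15
done
```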

On our build, we can edit. This is our configuration: AMI, instance type, output artifact, and here’s the important one, the command. It’s just executing that ci.ps1 PowerShell script, passing the region and the repository name as parameters; it’s an ECR repository in this account called Friday Demo. That’s the command execution it’s going to start here. It looks like it’s already running. It starts, then it waits, and it’s going to be stuck here for a while while the Docker build happens. Once that completes, it moves on to destroying the EC2 machine it was using, and then it sends a report back to CodePipeline. Now, one of the drawbacks of this being Windows and this whole EC2, Systems Manager, Lambda setup is that it takes longer. For example, let’s go back to the previous execution. This one took 20 minutes, and if you remember the Dockerfile I showed you, it’s not doing anything; it just takes 20 minutes to bootstrap everything. The Docker build part itself happens in two or three minutes, tops. That’s one of the cons of this solution. But again, if you really need to build Windows containers in CodePipeline, this is the easiest way to go.

One other thing we can look at while this is happening: you notice it’s waiting here. What is it waiting for? It’s waiting for a Run Command invocation in Systems Manager. That’s under Systems Manager, here on the left, under Run Command. We see one in progress, with one target because it’s just one instance. If we go to the command history, this is the successful one from before, and we can take a look at it. This is the output of the PowerShell script: here it authenticates to ECR and the login succeeds, then it does the Docker build, the working directory, the copy, and then the push. That’s it. If we go back to ECR, my Friday Demo repo is here, and you can see we have one image right there. As soon as this run is done, we would have another Docker image. As I mentioned, it took 20 minutes to complete, and I’m sure nobody wants to sit here and wait for it to finish. That is what I have for today. Again, it’s a really specific use case, but if you ever need to build Windows containers in CodePipeline, let me know and I can help you out.

Carlos Rodríguez

DevOps Team Lead


Carlos has been a Senior DevOps Engineer at nClouds since 2017 and works with customers to build modern, well-architected infrastructure on AWS. He has a long list of technical certifications, including AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, and AWS Certified SysOps Administrator - Associate.