nSights Talks

Introduction to AWS App Runner

Tutorial Highlights & Transcript

00:00 - Beginning of Video
Today, I will talk a little bit about AWS App Runner.
00:22 - What is AWS App Runner?
AWS App Runner is basically a fully managed service that makes it very easy to deploy containers to AWS. It scales the containers for you and monitors them in a very easy way. Under the hood it runs on Fargate, but you don't have to configure everything we normally do when deploying to Fargate; App Runner makes launching containers very easy. Basically, this is the process.
01:03 - How AWS App Runner works
First, we select the source. It can be a container image or a GitHub repository. If you are using GitHub, you set some build steps to build your container; if you are using a container image, App Runner simply pulls it. Then you specify how much memory and CPU you need, review, and create. You get a URL, and that URL can be attached to Route 53 so it sits behind a proper DNS endpoint. These are the costs for App Runner.
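The same flow can be driven from the AWS CLI instead of the console. A sketch of the `aws apprunner create-service` call for the ECR-image source type (the service name, image URI, account ID, and role ARN below are placeholders, not values from the demo):

```shell
# Hypothetical names/ARNs -- requires an existing ECR image and an
# access role that allows App Runner to pull from your registry.
aws apprunner create-service \
  --service-name demo-service \
  --source-configuration '{
    "ImageRepository": {
      "ImageIdentifier": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest",
      "ImageRepositoryType": "ECR",
      "ImageConfiguration": { "Port": "8080" }
    },
    "AuthenticationConfiguration": {
      "AccessRoleArn": "arn:aws:iam::123456789012:role/AppRunnerECRAccessRole"
    }
  }' \
  --instance-configuration '{ "Cpu": "1 vCPU", "Memory": "2 GB" }'
```

The response includes the service URL that, as mentioned above, you can attach to Route 53.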
01:51 - Cost of AWS App Runner
We are charged a rate for each vCPU and each GB of memory, and builds are charged as well, per build minute. One of the main features of App Runner is that it has automatic deployments.
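The slide rates aren't reproduced in this transcript, so here is a back-of-envelope estimate with assumed per-hour rates (illustrative only; check the current App Runner pricing page for your region):

```python
# Back-of-envelope App Runner compute cost estimate.
# The rates below are assumptions for illustration, not official pricing.
VCPU_PER_HOUR = 0.064   # assumed $/vCPU-hour while the container is active
GB_PER_HOUR = 0.007     # assumed $/GB-hour of memory

def monthly_cost(vcpus: float, memory_gb: float, active_hours: float) -> float:
    """Estimate the monthly compute cost of one App Runner service."""
    return active_hours * (vcpus * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR)

# One container with 1 vCPU / 2 GB, active around the clock (~730 h/month):
print(round(monthly_cost(1, 2, 730), 2))  # -> 56.94
```

Because the rates are fixed per vCPU and per GB, this kind of forecast is straightforward, which is one of the pros discussed below.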
02:13 - Cons of using AWS App Runner
One thing that is not good about App Runner is that scaling based on CPU and memory is not supported. Right now, we can only scale based on requests; that's one of the limitations. There is also the problem that we cannot scale to zero containers: we always need at least one container running. To me, the main problem is that the containers it runs are not inside a VPC. So if you have an RDS instance, or any other database, in your private subnets, you won't be able to connect to it. For me, that's the biggest limitation we have at the moment. It also does not support private container registries at the moment.
03:22 - Pros of using AWS App Runner
Cost estimation is far simpler because AWS charges you a fixed rate for CPU and memory per second. It's very easy to use, it scales with traffic, and it has built-in monitoring so you can see how many requests you get.
03:49 - Step-by-step demo tutorial on how to use AWS App Runner
We're going to go through a demo, and I will show you how to create an App Runner service. I already have two services. If we wanted to create a new one, we come here. We have two source types. One is an ECR image, which you just specify. The other is a source code repository; for that, you need to create an AWS connector for GitHub, which is basically an application that grants AWS permission to pull code from one or more repositories that you configure. I created the AWS connector in my account, specified only the demo example repository, and selected the branch I want to use.

Then we have the deployment settings: manual or automatic. With manual, as it says here, you deploy a new version from the console or with the AWS CLI. With automatic, every time you push to the repository, it deploys a new version of the service.

Then you configure the build, which is the interesting part. Let me show you the repository we are using. We have three files. One is a basic Python web server; all it does is write "Hello" followed by the value of the NAME environment variable, and if that variable is not set, it defaults to "World" — so it prints "Hello World", or "Hello Braulio". We have a requirements file with the only library we install. And we have apprunner.yaml, the configuration file for the service. Here we specify the runtime, and we have two commands: build and run. The build command matters because, when App Runner detects a new deployment or you trigger one, it creates a new Docker image from your code: it starts from a Python 3 base image, and the second layer of that container runs pip install -r requirements.txt.
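The demo repository itself isn't reproduced in this transcript, so the following is a minimal stand-in for its server.py using only the standard library (the port and the NAME variable are as described above; everything else is assumed):

```python
# Minimal stand-in for the demo's server.py: greets with the NAME
# environment variable, defaulting to "World" when it is not set.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def greeting() -> str:
    # Fall back to "World" when the NAME env var is missing.
    return f"Hello {os.environ.get('NAME', 'World')}"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = greeting().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

# Entry point for the App Runner run command ("python server.py"):
def main(port: int = 8080) -> None:
    HTTPServer(("", port), Handler).serve_forever()
```

With NAME unset, a GET returns "Hello World"; with NAME=Braulio it returns "Hello Braulio", matching the behavior described in the demo.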
After the image is built, the run section sets the command to python server.py and sets the network port, so App Runner knows where the application is listening. You can also set environment variables for the container. So this file builds the Docker image, sets the entry point, declares which port the application uses, and defines environment variables; as you can see, we are using NAME here. You can specify all of these things in the console, or you can use the configuration file I just showed you, which is the one App Runner checks.

Then you specify the service name, the virtual CPUs, and the memory. Next is the auto scaling section, where you can set scaling policies; the default policy adds one container for every 100 requests. Then there is the health check for the container. It is a plain TCP check, not an application-level check against a path we could configure: if the port is open, the container is marked as healthy. Then we have security; if you want to attach a role to the container, this is the setting. After that, you review and create the service.

Let's go back to the examples. I created two, because one uses the GitHub code and the other uses an ECR image. The one using the GitHub code gives you the DNS endpoint, where you can see "Hello, nClouds. Braulio". Now, for example, if I change this to "nCloudsdemo", change this to "helloooooooo", and make a commit, I come back here and go to the services again.
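Putting the build and run settings together, the apprunner.yaml described above looks roughly like this (the runtime label, port, and the NAME value are assumptions, since the file itself isn't reproduced in the transcript):

```yaml
# apprunner.yaml -- build and run configuration read by App Runner.
version: 1.0
runtime: python3
build:
  commands:
    build:
      - pip install -r requirements.txt   # second layer on the Python 3 base image
run:
  command: python server.py               # entry point of the container
  network:
    port: 8080                            # port App Runner routes traffic to
  env:
    - name: NAME                          # environment variable read by server.py
      value: Braulio
```

If this file is present in the repository, App Runner uses it instead of the equivalent settings entered in the console.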
You can see there is an operation in progress because, since we set deployments to automatic, every push to the repository builds the container and runs it. The build cost shows up here as well. This is why it's very easy to forecast the cost: builds are billed per minute at this rate, automatic deployments cost $1 per application, and you choose the vCPUs and gigabytes of memory you want. That's why it's very easy to understand what the bill is going to be. The other service is just a container image. You cannot set automatic deployments for it, because App Runner would need to be told about a new Docker image; as you can see, its source is an image. Basically, when you select ECR, you can only deploy using the CLI or the console. We come back here and wait a couple of seconds while this gets updated. After that, it refreshes and we get the change. You didn't have to do anything — it happens automatically and is very easy to use.
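For the ECR-backed service, where automatic deployments aren't available, a manual deployment from the CLI is a single command (the service ARN below is a placeholder):

```shell
# Trigger a manual deployment of the latest image for an ECR-backed
# service; <service-arn> must be replaced with your service's real ARN.
aws apprunner start-deployment --service-arn <service-arn>

# List services to find the ARN and check deployment status:
aws apprunner list-services
```

This is the CLI path mentioned above as the alternative to redeploying from the console.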
Braulio Rosales

DevOps Engineer


Braulio has been a Senior DevOps Engineer at nClouds since 2016. He has experience in architecting, automating, and optimizing complex deployments across a variety of large-scale infrastructure. He has a long list of technical certifications, including AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, and AWS Certified SysOps Administrator - Associate.