Terraform Multi State Management

06 Mar 2017

The terraform_remote_state data source allows the state produced by one configuration to be consumed by another, so you can test your changes in isolation. If you're testing something in QA, you can feel confident that you are not going to affect production resources. Likewise, if you are managing Route 53 domains with Terraform, they should have a separate state file of their own.

In this post, we will use separate state files to manage networking, security, and app infrastructure.

Demo Content: https://github.com/nclouds/terraform-multi-state-demo

S3 can be used as a remote backend so that multiple teams can share the state. It's recommended to enable bucket versioning so you can recover earlier versions of the state if it is ever corrupted.

Requirements:

  • Install and configure the AWS CLI, if you haven't already
  • Git
  • Create an S3 bucket in the desired region (for this demo, create it in US Standard, i.e. us-east-1)

Network:

Let's start by creating the networking-related resources. The state will be stored at network/terraform.tfstate in the S3 bucket, exposing vpc_id, public_subnets, and private_subnets as outputs.

  • Clone the repo:
    git clone https://github.com/nclouds/terraform-multi-state-demo
  • If you are using an AWS profile other than the default, make it the default for this session:
    export AWS_PROFILE=non-default-profile
  • Run the configure.sh script within the network directory to configure the remote state for the network:
    cd terraform-multi-state-demo/network
    ./configure.sh "s3-bucket-name"
  • After configuring the remote state, create a terraform.tfvars file with variables specific to your environment. For demo purposes, you can skip this file entirely by using the default AWS profile and the default values for the other variables (defaults are in input.tf):
    profile = "non-default-profile"
    vpc_cidr = "192.168.0.0/24"
  • Now run terraform plan and terraform apply. This will store the state in network/terraform.tfstate inside the configured S3 bucket:
    terraform plan
    terraform apply

All the outputs exposed in output.tf can be consumed by other configurations.
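As a sketch, network/output.tf might look like the following. The vpc_id output appears later in this post; the subnet outputs and resource names are assumptions based on the output names mentioned above. Note that in Terraform 0.8, remote-state outputs are strings only, so lists are commonly joined into a comma-separated string for consumers to split.

```hcl
# Sketch of network/output.tf (subnet resource names are assumptions).
output "vpc_id" {
  value = "${aws_vpc.vpc.id}"
}

# Lists joined into strings, since 0.8 remote-state outputs are strings.
output "public_subnets" {
  value = "${join(",", aws_subnet.public.*.id)}"
}

output "private_subnets" {
  value = "${join(",", aws_subnet.private.*.id)}"
}
```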

Security:

Now let's create the security groups to be used inside the VPC created by the network configuration. The state will be stored at security/terraform.tfstate in the S3 bucket, exposing the security group ID as an output.

  • Now cd into the security directory and configure its remote state:
    cd ../security
    ./configure.sh "s3-bucket-name"
  • After configuring the remote state, create a terraform.tfvars file with variables specific to your environment. For demo purposes, you can skip this file entirely by using the default AWS profile and the default values for the other variables (defaults are in input.tf):
    profile = "non-default-profile"
  • Now run terraform plan and terraform apply. This will store the state in security/terraform.tfstate inside the configured S3 bucket:
    terraform plan
    terraform apply

The network state file is configured as a data source in the security template. Check this in security/aws.tf:

data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "terraform-state-useast1"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

Now the outputs exposed by the network configuration can be accessed using:

${data.terraform_remote_state.network.vpc_id}

Check this under security/sg.tf, line 4. vpc_id is the attribute name used by the network configuration to expose the VPC ID; see network/output.tf:

output "vpc_id" {
  value = "${aws_vpc.vpc.id}"
}
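As a sketch of how security/sg.tf consumes that output and re-exposes its own (the group name, rules, and output name here are assumptions, not the demo's exact values):

```hcl
# Sketch only: a security group created in the VPC exposed by the
# network state. The rule values are illustrative assumptions.
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = "${data.terraform_remote_state.network.vpc_id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Exposed for the web configuration to consume.
output "security_group_id" {
  value = "${aws_security_group.web.id}"
}
```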

Web:

Web can use both of the states above to launch an instance in a particular subnet and assign it the security group.

  • Now cd into the web directory and configure its remote state:
    cd ../web
    ./configure.sh "s3-bucket-name"
  • After configuring the remote state, create a terraform.tfvars file with variables specific to your environment. ssh_key has no default value and is required (the other defaults are in input.tf):
    profile = "non-default-profile"
    ssh_key = "ssh-key-existing-in-the-aws-account"
  • Now run terraform plan and terraform apply. This will store the state in web/terraform.tfstate inside the configured S3 bucket:
    terraform plan
    terraform apply

This will show a site_url output pointing to a default Apache2 page installed using the remote-exec Terraform provisioner.
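As a sketch of how the web configuration can pull from both states, both are declared as remote-state data sources and their outputs wired into an instance. The AMI, instance type, and output attribute names beyond those shown earlier are assumptions, not the demo's exact values:

```hcl
# Sketch only: both states configured as data sources in the web template.
data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "terraform-state-useast1"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

data "terraform_remote_state" "security" {
  backend = "s3"
  config {
    bucket = "terraform-state-useast1"
    key    = "security/terraform.tfstate"
    region = "us-east-1"
  }
}

# AMI and the security_group_id output name are assumptions.
resource "aws_instance" "web" {
  ami           = "ami-xxxxxxxx" # replace with a valid AMI for your region
  instance_type = "t2.micro"
  key_name      = "${var.ssh_key}"

  # public_subnets is assumed to be a comma-separated string of subnet IDs.
  subnet_id              = "${element(split(",", data.terraform_remote_state.network.public_subnets), 0)}"
  vpc_security_group_ids = ["${data.terraform_remote_state.security.security_group_id}"]
}
```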

Note: The upcoming Terraform 0.9.0 release will support backends and environments to maintain different environments in different state files.
Download the 0.9.0 beta for macOS, Linux, and Windows.

The remote backend can also be configured via a configuration block instead of running configure.sh (which simply runs a terraform remote config command). The backend configuration is also shown in terraform.tf in the demo content, but it will not be used by Terraform versions below 0.9.0 (currently beta-2).

If you're using the latest beta of Terraform 0.9.0, you can replace ./configure.sh in the tutorial above with terraform init after changing the bucket name in terraform.tf.
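For reference, a 0.9-style backend block in terraform.tf looks roughly like this (the bucket name follows the earlier example; adjust the key per directory and the bucket to your own):

```hcl
# 0.9-style backend configuration; with this in place,
# `terraform init` replaces the configure.sh script.
terraform {
  backend "s3" {
    bucket = "terraform-state-useast1"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
```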

CloudFormation has similar functionality for referencing outputs from different stacks, or for references within the same stack using nested templates, and it is super useful.

Please leave comments or questions regarding this post. We would love to learn how you leverage multi-state files to build your infrastructure.
