DynaDev

Containerizing Terraform: A Local Dev Guide

When working with Terraform, especially in a team setting, two key aspects are critical: establishing a cohesive process for collaboration and ensuring a consistent environment for Terraform operations.

Collaboration often involves maintaining a remote state and integrating CI/CD tooling, like Terraform Cloud, to ensure visibility and coordination among team members. Remote state management allows team members to share and access the Terraform state, while CI/CD tooling automates the provisioning processes.

But how do we maintain environment consistency? Consistency is essential, particularly when provisioning production-grade infrastructure via CI/CD. This typically means running Terraform on a CI build agent or inside a container, ensuring a controlled, versioned environment and avoiding discrepancies caused by differing operating systems or system configurations.

However, the challenge arises with local development and testing. Relying solely on CI/CD can impede the speed of development. To mirror the consistency of the CI/CD environment on local machines, containerization becomes a game-changer. The goal is to have a CI/CD pipeline that checks out the Terraform code, mounts it to a container for provisioning, and then replicates this exact process on local machines. Let’s explore how this can be effectively implemented in practice.

Onto the actual code…

We aim to design a workflow that maximizes efficiency and clarity in Terraform operations.

Our first step is to use a container with Terraform pre-installed. This ensures a consistent environment for our Terraform operations. We then mount our local Terraform project into this container, allowing us to execute Terraform commands directly within this environment.
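Before layering Compose on top, it helps to see the idea in its rawest form. A rough single-command equivalent (a sketch, not part of the final setup) mounts the current directory into the official image and runs a Terraform command there:

```shell
# Run the official Terraform image with the project directory
# mounted at /app and used as the working directory inside the
# container. The image's entrypoint is the terraform binary, so
# "version" here becomes "terraform version".
docker run --rm -it \
  -v "$PWD":/app \
  -w /app \
  hashicorp/terraform:latest version
```

Docker Compose wraps exactly this pattern in a reusable, declarative form, which is what we build next.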

To enhance our workflow, we’ll utilize Docker Compose to establish an interactive terminal. This setup allows us to review the Terraform plan and confirm its execution interactively — a crucial step for cautious provisioning.

In this tutorial, we’ll be focusing on how to provision an AWS S3 bucket, showcasing the practical application of our containerized Terraform setup.

We’ll start with our Terraform project file structure:

terraform-project/
├── docker-compose.yml
├── entrypoint.sh
└── main.tf

Each file plays a critical role: docker-compose.yml sets up our Docker environment, entrypoint.sh contains commands to run in the container, and main.tf holds our Terraform configuration.

Now, let’s delve into the docker-compose.yml configuration:

version: "3.8"

services:
  terraform:
    image: hashicorp/terraform:latest
    working_dir: /app
    entrypoint: /app/entrypoint.sh
    volumes:
      - .:/app
      - ~/.aws/credentials:/root/.aws/credentials:ro
    stdin_open: true
    tty: true
    command: ["apply"]
  • services.terraform defines our single “terraform” service.
  • image specifies the official Terraform Docker image from HashiCorp (here, the latest tag).
  • working_dir and entrypoint set the working directory to /app and the entry point to entrypoint.sh.
  • volumes mounts the project directory at /app, plus the AWS credentials file (read-only) for authentication.
  • stdin_open and tty enable interactive execution, allowing us to review and confirm the Terraform plan.
  • command sets the default Terraform action to apply, with the flexibility to override it at run time (for example, with destroy).

This configuration forms the backbone of our containerized Terraform environment, ensuring a consistent and interactive workflow for provisioning infrastructure.
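One caveat: the latest tag pulls whatever Terraform release HashiCorp most recently published, which quietly undermines the reproducibility this setup exists to provide. In practice you would pin a specific version (the version number below is purely illustrative):

```yaml
services:
  terraform:
    # Pin an exact release so every teammate and the CI pipeline
    # run the identical Terraform binary.
    image: hashicorp/terraform:1.7.5
```

Bump the pinned version deliberately, as part of a reviewed change, rather than letting it drift.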

The entrypoint.sh script orchestrates our Terraform operations within the Docker container, handling the logic for planning, applying, or destroying infrastructure based on user input. Because it is mounted from the host and used directly as the container’s entry point, remember to make it executable first with chmod +x entrypoint.sh.

#!/bin/sh

action=${1:-"apply"}

terraform init

if [ "$action" = "destroy" ]; then
    terraform plan -destroy -out=tfplan
else
    terraform plan -out=tfplan
fi

printf "Press ENTER to apply the plan or Ctrl+C to abort. "; read confirm

terraform apply tfplan
  • action=${1:-"apply"} sets the default action to apply, which Docker Compose can override by passing an argument.
  • terraform init initializes the working directory, downloading the required providers.
  • The if block determines whether to create or destroy resources, generating the corresponding plan and saving it as tfplan.
  • The confirmation prompt pauses execution, awaiting user input before proceeding.
  • terraform apply tfplan applies the saved plan.
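The ${1:-"apply"} idiom does the heavy lifting here. A quick standalone sketch shows how the default kicks in:

```shell
# ${1:-"apply"} expands to the first argument if one was given,
# and to "apply" otherwise.
demo() {
  action=${1:-"apply"}
  echo "$action"
}

demo            # prints "apply"
demo destroy    # prints "destroy"
```

This is exactly how docker-compose can flip the script from apply to destroy without the script needing any extra flag parsing.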

Shifting our focus to main.tf, this file, while seemingly straightforward, is pivotal in defining our infrastructure:

provider "aws" {
  region  = "eu-west-1"
  profile = "tf"
}

resource "aws_s3_bucket" "example" {
  # New S3 buckets are private by default; the standalone acl
  # argument is deprecated in recent AWS provider versions.
  bucket = "tf-docker-project-demo-bucket"
}
  • The profile argument instructs Terraform to use the [tf] profile defined in the ~/.aws/credentials file, ensuring the correct AWS account and permissions are used for provisioning.

This succinct configuration in main.tf lays the groundwork for creating a private AWS S3 bucket in the specified region under the defined AWS profile.
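For the profile lookup to succeed, the credentials file mounted into the container needs a matching [tf] section, along these lines (placeholder values, not real keys):

```ini
[tf]
aws_access_key_id     = AKIA-EXAMPLE-KEY-ID
aws_secret_access_key = example-secret-access-key
```

The read-only volume mount in docker-compose.yml makes this file visible at /root/.aws/credentials inside the container, which is where the AWS provider looks by default.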

To bring this project to life, execute the following command:

docker-compose run --rm terraform

This Docker Compose command initiates the Terraform process within our container, applying the configuration defined in our Terraform files. The --rm flag ensures that the container is removed after the run, keeping our environment clean.
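As a side note, the same service definition doubles as a debugging environment. Overriding the entry point drops you into a shell inside the exact container Terraform runs in (a sketch; the official image is Alpine-based, so /bin/sh is BusyBox):

```shell
# Override the entrypoint to get a shell instead of running
# entrypoint.sh. The trailing -i replaces the default "apply"
# command argument so sh starts an interactive session.
docker-compose run --rm --entrypoint /bin/sh terraform -i
```

From there you can inspect the mounted project, the downloaded providers under .terraform/, and so on.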

If you need to reverse your deployment and destroy the resources you’ve created, the process is just as straightforward. Override the default apply action by passing destroy instead:

docker-compose run --rm terraform destroy

Executing this command will instruct Terraform to remove all the resources defined in your configuration, effectively dismantling the infrastructure you’ve set up.

Happy coding, and here’s to effortlessly managing your infrastructure with Terraform and Docker!

