Testing Terraform with InSpec (Part 2)
In this post, we will set everything up for comfortable working in Visual Studio Code. Let’s start!
VSCode Dev Container
First, we need to understand Visual Studio Code Dev Containers. One of the biggest problems in software engineering and IaC is the “works on my machine” syndrome. Do you remember the last time you wanted to use some software on your computer, but it did not work at all, while a colleague with the same configuration kept happily working away?
To combat this problem and unify development environments, Visual Studio Code added support for defining a Docker-based development environment in your repository. You add a Dockerfile and some configuration to your repository, and the IDE uses them for a consistent development experience.
The principle is straightforward: when you open the folder, VSCode asks if you want to reopen it in the container, builds the container on first use, and then drops you into it. You will not notice much of this, as the whole editing experience and even Git integration work as usual. But as soon as you open a terminal (Terminal / New Terminal), you will notice you are inside a container, one that contains only the tools and settings from your repository.
No more version confusion, missing dependencies, missing tools, etc. You can even use this with a subscription to (cloud-based) GitHub Codespaces, which means your development environment is available whenever and wherever you want to do something.
A Terraform Dev Container
To build a container, you first have to define its contents. Luckily, container definitions are standardized, so we can set up everything with a regular Dockerfile.
As base Docker images tend to be as small as possible, we need to add our usual tools and dependencies. Our use case involves Python (for the AWS CLI) and Ruby (for Test Kitchen and InSpec). We also need tools like the Terraform Switcher (see the section below) and TFLint for our style checks. Finally, we install Test Kitchen itself and are finished with the setup.
.devcontainer/Dockerfile:
FROM ubuntu:20.04

ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID

RUN apt-get update \
    && export DEBIAN_FRONTEND=noninteractive \
    && apt-get install --no-install-recommends --yes \
        lsb-release vim sudo curl wget apt-utils dialog apt-transport-https \
        ca-certificates unzip software-properties-common git less python3-pip \
        ruby2.7 ruby2.7-dev build-essential \
    #
    && groupadd --gid $USER_GID $USERNAME \
    && useradd -s /bin/bash --uid $USER_UID --gid $USER_GID -m $USERNAME \
    && echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
    && chmod 0440 /etc/sudoers.d/$USERNAME \
    #
    && curl -L https://raw.githubusercontent.com/warrensbox/terraform-switcher/release/install.sh | bash \
    && curl -s https://raw.githubusercontent.com/terraform-linters/tflint/master/install_linux.sh | bash \
    #
    && pip install awscli awsume \
    && awsume-configure \
    && gem install --no-document kitchen-terraform \
    #
    && dpkg --purge build-essential \
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -rf /var/lib/apt/lists/*
A Dockerfile alone will not get us there: Visual Studio Code needs to know how to wire the container up to your environment. For ease of working, we want our AWS and SSH configuration directories mapped into it. And as this is a Terraform project, we also add HashiCorp’s Terraform VSCode extension and configure pass-through of AWS credentials.
.devcontainer/devcontainer.json:
{
  "name": "Terraform",
  "build": {
    "dockerfile": "Dockerfile"
  },
  "mounts": [
    "source=${localEnv:HOME}/.aws,target=/home/vscode/.aws,readonly,type=bind",
    "source=${localEnv:HOME}/.ssh,target=/home/vscode/.ssh,readonly,type=bind"
  ],
  "extensions": [
    "hashicorp.terraform"
  ],
  "settings": {
    "remote.containers.logLevel": "info"
  },
  "remoteUser": "vscode",
  "remoteEnv": {
    "PATH": "/home/vscode/bin:/home/vscode/.local/bin:${containerEnv:PATH}",
    "AWS_ACCESS_KEY_ID": "${localEnv:AWS_ACCESS_KEY_ID}",
    "AWS_SECRET_ACCESS_KEY": "${localEnv:AWS_SECRET_ACCESS_KEY}",
    "AWS_REGION": "${localEnv:AWS_REGION}",
    "AWS_SESSION_TOKEN": "${localEnv:AWS_SESSION_TOKEN}"
  },
  "postAttachCommand": "(command -v tfswitch && tfswitch) >/dev/null; terraform init"
}
TFSwitcher
After seeing this setup, you might wonder why it contains no Terraform installation. Easy: if you work on different Terraform projects, you will probably need multiple versions. If we baked one into our Dev Container configuration, we would have to adjust it for every project.
The alternative is called Terraform Switcher: when you execute the tfswitch command, the tool analyzes the current Terraform project and determines whether the exact version needed is already present. If yes, it wires that version up to your usual terraform command. If not, it downloads it first and then switches accordingly.
If determining this version automatically sounds scary to you (version pinning is a virtue, after all!), you can also create a file called .terraform-version in your project, containing the desired version. This file is a compatibility feature of the tfenv tool, which does the same job.
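For example, to pin the Terraform version used in the test run below, the file contains nothing but the version string:

.terraform-version:

1.0.11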
The magic is in the postAttachCommand, which is executed every time you connect to the container. It automatically invokes tfswitch and runs terraform init to retrieve all Terraform modules needed for your project.
Terraform Example
What good is a demo without something to test it on? The following main.tf includes everything to set up a Security Group and an EC2 instance inside your default VPC. We will use this to simulate our project under test and show how to use it together with Test Kitchen.
main.tf:
terraform {
  required_version = ">= 1.0, < 2.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.38.0, < 4.0"
    }
  }
}

provider "aws" {
  region = var.region
}

module "myip" {
  source  = "4ops/myip/http"
  version = "1.0.0"
}

variable "key_name" {}
variable "ami" {}
variable "instance_type" {}
variable "region" {}

data "aws_region" "current" {}

resource "aws_security_group" "allow_ssh" {
  name        = "allow_ssh"
  description = "Allow SSH inbound traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${module.myip.address}/32"]
  }
}

resource "aws_instance" "example" {
  ami             = var.ami
  instance_type   = var.instance_type
  key_name        = var.key_name
  security_groups = [aws_security_group.allow_ssh.name]

  tags = {
    "Name" = "testinstance"
  }
}

output "public_dns" {
  value = aws_instance.example.public_dns
}
Kitchen Configuration
By now, you could use your usual commands like terraform plan and terraform apply to create the Security Group and instance. But, as I wrote in part 1 of this post, we want to use Test Kitchen for lifecycle management and testing.
Test Kitchen is configured via a kitchen.yml file, which states the plugins responsible for creating infrastructure and testing it. In the Terraform case, its contents look unusual to anybody familiar with Test Kitchen, because all steps refer to the terraform plugin. Admittedly, the documentation of the kitchen-terraform project is a bit confusing. Most important here are the connection to a .tfvars file, which fills our variables, and the verifier section.
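A matching testing.tfvars could look like this; the region, AMI, and instance type mirror the values visible in the test run below, while the key pair name is a placeholder you need to adapt:

testing.tfvars:

region        = "eu-west-1"
key_name      = "my-keyfile"
ami           = "ami-0a8e758f5e873d1c1"
instance_type = "t3a.nano"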
kitchen.yml:
---
driver:
  name: terraform
  variable_files:
    - testing.tfvars

provisioner:
  name: terraform

platforms:
  - name: ubuntu

verifier:
  name: terraform
  systems:
    - name: default
      backend: ssh
      user: ubuntu
      key_files:
        - ~/.ssh/my-keyfile.pem
      hosts_output: public_dns
      controls:
        - instance
    - name: aws
      backend: aws
      controls:
        - aws

suites:
  - name: default
Test Kitchen Lifecycle
Projects using Test Kitchen have an easy lifecycle:
- create the needed infrastructure
- converge it into a known state
- verify the assumptions from a test suite
- destroy everything after the tests are finished
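Each lifecycle step maps to a kitchen subcommand, and kitchen test runs the complete cycle (including a final destroy) in one go:

$ kitchen create     # initialize Terraform and create the test workspace
$ kitchen converge   # apply the Terraform configuration
$ kitchen verify     # run the InSpec controls against the result
$ kitchen destroy    # tear everything down again
$ kitchen test       # the complete cycle in one command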
So let’s converge our project (aka: create everything in main.tf):
$ kitchen converge
-----> Starting Test Kitchen (v3.2.1)
-----> Creating <default-ubuntu>...
Terraform v1.0.11
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.63.0
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 3.38.0, < 4.0.0"...
- Installing hashicorp/aws v3.67.0...
- Installed hashicorp/aws v3.67.0 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
Created and switched to workspace "kitchen-terraform-default-ubuntu"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
Finished creating <default-ubuntu> (0m5.40s).
-----> Converging <default-ubuntu>...
Terraform v1.0.11
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.67.0
Success! The configuration is valid.
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_instance.example will be created
+ resource "aws_instance" "example" {
+ ami = "ami-0a8e758f5e873d1c1"
...
Plan: 2 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ public_dns = (known after apply)
aws_security_group.allow_ssh: Creating...
aws_security_group.allow_ssh: Creation complete after 3s [id=sg-01a5bd44b1450cd91]
aws_instance.example: Creating...
aws_instance.example: Still creating... [10s elapsed]
aws_instance.example: Creation complete after 14s [id=i-06ddc4ac722925b70]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
public_dns = "ec2-54-74-166-149.eu-west-1.compute.amazonaws.com"
Finished converging <default-ubuntu> (0m24.67s).
-----> Test Kitchen is finished. (0m33.29s)
You can see this is identical to running terraform apply -auto-approve. But the advantage is that we could now execute our tests. If we had them written already…
InSpec Configuration
Creating an InSpec test suite in your repository is pretty straightforward. As InSpec organizes every test inside a profile, we need to create one inside test/integration/default. Just create that directory and a file inspec.yml inside of it.
test/integration/default/inspec.yml:
---
name: default
title: My Cool Project
version: 0.1.0
supports:
  - platform: aws
  - os-family: linux
All InSpec tests reside in the subdirectory test/integration/default/controls and are written in the InSpec DSL, just as we learned in the first post.
test/integration/default/controls/aws_spec.rb:
control 'aws' do
  describe aws_ec2_instance(name: 'testinstance') do
    it { should exist }
    it { should be_running }
    it { should_not have_roles }
    its('instance_type') { should eq 't3a.nano' }
  end

  describe aws_security_group(group_name: 'allow_ssh') do
    it { should exist }
    its('inbound_rules_count') { should cmp 1 }
    it { should_not allow_in(port: 22, ipv4_range: '0.0.0.0/0') }
  end
end
You can see our simple tests here: instance name, running state, and instance type get checked, and we verify that the Security Group was created with exactly the rules we expect. But we can go a step further now and also test the reachability and operating system of the created instance:
test/integration/default/controls/instance_spec.rb:
control 'instance' do
  describe command('lsb_release -a') do
    its('stdout') { should match(/Ubuntu/) }
  end
end
In the same file, we could also test for installed packages, curl an external service, or do many other things.
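A sketch of what such additional checks could look like (the package name and URL are purely illustrative):

control 'extras' do
  # query the instance's package database
  describe package('openssh-server') do
    it { should be_installed }
  end

  # probe outbound connectivity to an external service
  describe command('curl -s -o /dev/null -w "%{http_code}" https://example.com') do
    its('stdout') { should cmp 200 }
  end
end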
Executing our Tests
This part is pretty simple: the next step after converge is verify. So let’s run this:
$ kitchen verify
-----> Starting Test Kitchen (v3.2.1)
-----> Verifying <default-ubuntu>...
Profile: Example (default)
Version: 0.1.0
Target: ssh://ubuntu@ec2-54-74-166-149.eu-west-1.compute.amazonaws.com:22
✔ instance: Command: `lsb_release -a`
✔ Command: `lsb_release -a` stdout is expected to match /Ubuntu/
Profile Summary: 1 successful control, 0 control failures, 0 controls skipped
Test Summary: 1 successful, 0 failures, 0 skipped
Profile: Example (default)
Version: 0.1.0
Target: aws://
✔ aws: EC2 Instance testinstance
✔ EC2 Instance testinstance is expected to exist
✔ EC2 Instance testinstance is expected to be running
✔ EC2 Instance testinstance is expected not to have roles
✔ EC2 Instance testinstance instance_type is expected to eq "t3a.nano"
✔ EC2 Security Group sg-01a5bd44b1450cd91 is expected to exist
✔ EC2 Security Group sg-01a5bd44b1450cd91 is expected to not allow in {:ipv4_range=>"0.0.0.0/0", :port=>22}
✔ EC2 Security Group sg-01a5bd44b1450cd91 inbound_rules_count is expected to cmp == 1
Profile Summary: 1 successful control, 0 control failures, 0 controls skipped
Test Summary: 7 successful, 0 failures, 0 skipped
Finished verifying <default-ubuntu> (0m6.66s).
-----> Test Kitchen is finished. (0m10.78s)
And now it’s clear everything worked. Of course, this approach makes even more sense with complex projects: if your Terraform project creates vastly different configurations depending on its inputs, like a 1-AZ dev system versus a 3-AZ production setup, testing all the different “suites” (execution paths) becomes easy.
Now that we know the tests work, we can tear everything down again with kitchen destroy and save money.
Apply this to Your Projects
As most of the configuration shown is rather generic, you only need to copy the .devcontainer directory and the kitchen.yml file into your Terraform projects. Depending on your project, you then write the specific tests inside test/integration/default, and that’s it.
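For reference, this is the complete set of files from this post:

.devcontainer/Dockerfile
.devcontainer/devcontainer.json
kitchen.yml
main.tf
testing.tfvars
test/integration/default/inspec.yml
test/integration/default/controls/aws_spec.rb
test/integration/default/controls/instance_spec.rb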
Visual Studio Code will open the project inside the container, and you can work with a unified development environment and integration tests right away.
Have fun testing!
Updated February 4th 2022: use of the myip module to show a more security-conscious example. Thanks André!