Over the last few years, I have used HashiCorp’s Packer and Ansible to build a lot of images. One of the problems I have had with my setup is how to set the various variables Packer needs to build an image successfully. For example, at a minimum I need a VPC ID, a Subnet ID, and an AWS Region set in my Packer file. On top of that, I almost always have some secret or another that I need to pass to the instance to configure it (such as a Datadog API key) that I can’t check in. For a long time, I passed environment variables to the JSON file by sourcing a local .env file, but having to recreate that file every time I need it is a pain. Since I already have the VPC information in a Terraform state file, I started wondering if I could use Terraform to build out my Packer images. This would also allow me to pull my secrets from our Vault server.

As I started digging in, I found that there just wasn’t much information on the internet showing how to do it. I was able to find some repositories on GitHub with examples, so after reviewing what others had done, I picked the bits I liked from each of them and created my own.


locals {
  ami_name = join("-", [var.name, formatdate("YYYYMMDDhhmmss", timestamp())])
}
resource "null_resource" "packer" {
  triggers = {
    ami_name = local.ami_name
  }
  
  provisioner "local-exec" {
    working_dir = "./packer"
    command = <<EOF
# Values are quoted so secrets or outputs containing spaces or
# shell metacharacters reach Packer intact.
packer build \
  -var 'region=${var.aws_region}' \
  -var 'vpc_id=${data.terraform_remote_state.my-vpc.outputs.vpc_id}' \
  -var 'subnet_id=${data.terraform_remote_state.my-vpc.outputs.private_subnets[2]}' \
  -var 'datadog_api_key=${data.vault_generic_secret.datadog.data["api"]}' \
  -var 'id=${self.id}' \
  myami.json
EOF
  }
}

I started without the trigger, but quickly found that the null resource would only run the first time and then would “exist” on subsequent applies. The local value ami_name sets the name to something unique so that each apply produces a new image. The trigger is one area I still struggle with: I don’t necessarily want to rebuild the image every time I change something else in my Terraform, but I have not found an acceptable alternative.
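One pattern I have seen suggested is to key the trigger off a hash of the Packer template with Terraform’s filemd5() function, so the image only rebuilds when the template itself changes. A sketch (note it would still miss changes to the variables fed into the build, which is part of why I haven’t adopted it):


resource "null_resource" "packer" {
  triggers = {
    # Rebuild only when the Packer template's contents change,
    # not on every apply.
    template_hash = filemd5("${path.module}/packer/myami.json")
  }

  # provisioner "local-exec" block as above
}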

The id variable is set to the ID of the null_resource so that I can apply it as a tag on the AMI; that way, the image lookup I run as part of the deployment waits for the null_resource to complete before moving to the next step.
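For context, the tag itself is applied on the Packer side. A minimal myami.json builder might look like this (the source AMI, instance type, and SSH user below are placeholder assumptions, not my actual values; the tags block is the piece the lookup depends on):


{
  "variables": {
    "region": "",
    "vpc_id": "",
    "subnet_id": "",
    "datadog_api_key": "",
    "id": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `region`}}",
      "vpc_id": "{{user `vpc_id`}}",
      "subnet_id": "{{user `subnet_id`}}",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t3.micro",
      "ssh_username": "ec2-user",
      "ami_name": "myami-{{timestamp}}",
      "tags": {
        "id": "{{user `id`}}"
      }
    }
  ]
}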


data "aws_ami" "this" {
  filter {
    name   = "tag:id"
    values = [null_resource.packer.id]
  }

  most_recent = true
  owners      = ["self"]
}
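With the lookup in place, downstream resources can reference the freshly built image directly; for example (this aws_instance is an illustrative sketch, not part of my actual deployment):


resource "aws_instance" "app" {
  # Resolves to the AMI the null_resource just built; the tag:id
  # filter means this lookup can't succeed until the build completes.
  ami           = data.aws_ami.this.id
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.my-vpc.outputs.private_subnets[2]
}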

All of this worked great, except when it didn’t. If the Packer build failed, the null resource would still be marked successful, and then the image lookup would fail because it could not look up the ID. At that point, none of my Terraform would run properly until I commented out the tag:id filter and looked the AMI up by name (assuming one existed). To solve that, I came across this GitHub issue where someone recommends a function to check for successful completion. While I couldn’t get his function to work properly in my environment, I was able to rework it to get the same outcome.


resource "null_resource" "packer" {
  triggers = {
    ami_name = local.ami_name
  }
  provisioner "local-exec" {
    working_dir = "./packer"
    command = <<EOF
RED='\033[0;31m' # Red Text
GREEN='\033[0;32m' # Green Text
BLUE='\033[0;34m' # Blue Text
NC='\033[0m' # No Color

# Values are quoted so secrets or outputs containing spaces or
# shell metacharacters reach Packer intact.
packer build \
  -var 'region=${var.aws_region}' \
  -var 'vpc_id=${data.terraform_remote_state.my-vpc.outputs.vpc_id}' \
  -var 'subnet_id=${data.terraform_remote_state.my-vpc.outputs.private_subnets[2]}' \
  -var 'datadog_api_key=${data.vault_generic_secret.datadog.data["api"]}' \
  -var 'id=${self.id}' \
  myami.json

if [ $? -eq 0 ]; then
  printf "\n $GREEN Packer Succeeded $NC \n"
else
  printf "\n $RED Packer Failed $NC \n" >&2
  exit 1
fi
EOF
  }
}

Now if the image fails to build, Terraform errors out, and I can fix the issue and re-run the apply. I can build my Packer images using the same process I use to deploy my Terraform, which makes the whole workflow run a lot more smoothly.