About a year ago we switched from building our Jenkins controller with a Dockerfile to a Terraform-driven Packer build. I wrote about the process here, and it has worked really well for us. We use this process not only for building EC2 images, but also for building Docker containers.

The only problem with using it to build Docker containers is that Jenkins needs ECR credentials to upload the images. When we originally moved to this pattern, we did not yet have Vault set up in our environment, so we just created some IAM user credentials that had permission for ECR and added them to Jenkins. After we set up Vault, we migrated them over so we could use the Vault Terraform provider to pull them out.

Since we are required to rotate our credentials every 90 days, we have to go into the console, generate new API credentials, and then add them to Vault. That's not so bad, but we also get notified every day starting thirty days out, and due to a bug in the job that does the alerting, we get three emails each time. I decided it was time to put an end to the password rotations and email notifications and configure our Vault server to generate AWS credentials automatically.

Creating the IAM Role

Since Jenkins runs in AWS, the easiest way to give Vault access to IAM is by attaching an instance profile. In our case, we run on an ECS cluster, so we attach the equivalent IAM role to the task.

Start by creating a role for Vault to assume that has permission to push and pull in ECR. It can be scoped to a specific repository or to all of them; the example below, using a typical set of ECR push/pull actions, can push to any repository in the account.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "ecr:BatchCheckLayerAvailability",
            "ecr:BatchGetImage",
            "ecr:CompleteLayerUpload",
            "ecr:GetDownloadUrlForLayer",
            "ecr:InitiateLayerUpload",
            "ecr:PutImage",
            "ecr:UploadLayerPart"
          ],
          "Resource": "arn:aws:ecr:*:111111111111:repository/*"
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": "ecr:GetAuthorizationToken",
          "Resource": "*"
        }
      ]
    }
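Granting the ECR permissions is only half of it: the role's trust policy must also allow Vault's principal to assume it. A minimal sketch, assuming a hypothetical `vault-task-role` as the ECS task role that Vault runs under:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::111111111111:role/vault-task-role"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }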

Setting Up Vault

Enable the AWS secrets backend on the Vault server.

    vault secrets enable aws
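Because Vault picks up AWS credentials from the task's IAM role, there is no need to store access keys in the backend's root configuration; at most you may want to pin the region (`us-east-2` assumed here to match the ECR registry used later):

    vault write aws/config/root region=us-east-2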

Once the backend is enabled, add a role to it that references the IAM role Vault will assume.

    vault write aws/roles/ecr-staging \
        role_arns=arn:aws:iam::111111111111:role/ecr-staging \
        credential_type=assumed_role

Updating the Vault Task

Next, update the ECS task role that Vault runs under to allow it to assume the new role.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Action": "sts:AssumeRole",
          "Resource": [
            "arn:aws:iam::111111111111:role/ecr-staging"
          ]
        }
      ]
    }

Testing Vault Setup

Using the vault command, you can test whether the configuration is working.

    vault write aws/sts/ecr-staging ttl=60m
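On success, Vault returns a freshly generated set of STS credentials; the response fields look roughly like this (values truncated here for illustration):

    Key               Value
    ---               -----
    lease_id          aws/sts/ecr-staging/...
    lease_duration    1h
    access_key        ASIA...
    secret_key        ...
    security_token    ...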

Updating Terraform and Packer

Finally, update the Packer file so the docker-push post-processor includes an aws_token variable for the STS security token.

    {
        "type": "docker-push",
        "ecr_login": true,
        "aws_access_key": "{{user `staging_access_key`}}",
        "aws_secret_key": "{{user `staging_secret_key`}}",
        "aws_token": "{{user `staging_token`}}",
        "login_server": "https://111111111111.dkr.ecr.us-east-2.amazonaws.com"
    }
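The `{{user ...}}` calls read from the template's variables section, so the template also needs matching declarations with empty defaults (names assumed to match the -var flags Terraform passes in):

    "variables": {
        "staging_access_key": "",
        "staging_secret_key": "",
        "staging_token": ""
    },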

Then update the Terraform project to pull the credentials from Vault

    data "vault_aws_access_credentials" "staging_ecr" {
      backend = "cdp/aws/staging"
      type    = "sts"
      role    = "ecr-staging"
    }

and update the null resource's command to pass the credentials through to Packer.

    -var staging_access_key=${data.vault_aws_access_credentials.staging_ecr.access_key} \
    -var staging_secret_key=${data.vault_aws_access_credentials.staging_ecr.secret_key} \
    -var staging_token=${data.vault_aws_access_credentials.staging_ecr.security_token} \
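Put together, the null resource can look roughly like this (a sketch, assuming a local-exec provisioner and a hypothetical template name of `docker.json`):

    resource "null_resource" "packer_build" {
      provisioner "local-exec" {
        command = <<-EOF
          packer build \
            -var staging_access_key=${data.vault_aws_access_credentials.staging_ecr.access_key} \
            -var staging_secret_key=${data.vault_aws_access_credentials.staging_ecr.secret_key} \
            -var staging_token=${data.vault_aws_access_credentials.staging_ecr.security_token} \
            docker.json
        EOF
      }
    }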

That is all there is to it. Now I can disable my credentials and stop getting the emails every single day.