A few months ago I wrote a Terraform Module for deploying an ECS cluster. The module gives you the option to attach an Elastic File System (EFS) to the cluster using my EFS Module. I quickly ran into an issue where the Auto Scaling Group (ASG) was deployed before the EFS mount targets were ready, so the EFS filesystem would not be mounted on the nodes. It turns out the EFS ID my user-data needs is available long before the mount targets are, which allows the Launch Configuration and ASG deployment to proceed.

To solve the problem, I created a variable called depends_on_efs, set it to the EFS mount target IDs, and referenced it in a depends_on in my user-data template. This ensures the IDs are populated before the user-data is rendered, which in turn makes the ASG deployment wait until EFS is ready.

data "template_file" "user_data-efs" {
  depends_on = [var.depends_on_efs] # wait for the EFS mount targets before rendering
  count      = var.attach_efs ? 1 : 0
  template   = <<EOF
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"

# Install amazon-efs-utils
cloud-init-per once yum_update yum update -y
cloud-init-per once install_amazon-efs-utils yum install -y amazon-efs-utils

# Create /efs folder
cloud-init-per once mkdir_efs mkdir /efs

# Mount /efs
cloud-init-per once mount_efs echo -e '$${efs_id}:/ /efs efs defaults,_netdev 0 0' >> /etc/fstab
mount -a

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# Set any ECS agent configuration options
echo "ECS_CLUSTER=$${ecs_cluster_name}" >> /etc/ecs/ecs.config

--==BOUNDARY==--
EOF

  vars = {
    ecs_cluster_name = aws_ecs_cluster.this.name
    efs_id           = var.efs_id
  }
}
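For reference, the three inputs used above could be declared along these lines. The names attach_efs, efs_id, and depends_on_efs come straight from the snippets; the types, defaults, and descriptions are just a sketch of what a module like this might use.

variable "attach_efs" {
  description = "Whether to attach an EFS filesystem to the cluster nodes"
  type        = bool
  default     = false
}

variable "efs_id" {
  description = "ID of the EFS filesystem to mount"
  type        = string
  default     = ""
}

variable "depends_on_efs" {
  description = "EFS mount target IDs; only used to delay user-data rendering until the targets exist"
  type        = list(string)
  default     = []
}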

This worked as intended, keeping the ASG from deploying before the EFS mount targets were ready, but it introduced another issue. When a data source has a depends_on, Terraform defers the read until the apply phase, so the data source is always marked for update (and in turn the Launch Configuration and ASG will update as well). Every time I run terraform plan, it tells me there are changes, even if there are none (it’s a known issue). Fortunately, I was able to solve it with a slight change to my template block. Instead of using the depends_on argument, I moved the dependency into the vars map of the data block. The template doesn’t actually use the value, but Terraform still has to resolve it before rendering, so the rest of the block waits for EFS just the same.

data "template_file" "user_data-efs" {
  count    = var.attach_efs ? 1 : 0
  template = <<EOF
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"

# Install amazon-efs-utils
cloud-init-per once yum_update yum update -y
cloud-init-per once install_amazon-efs-utils yum install -y amazon-efs-utils

# Create /efs folder
cloud-init-per once mkdir_efs mkdir /efs

# Mount /efs
cloud-init-per once mount_efs echo -e '$${efs_id}:/ /efs efs defaults,_netdev 0 0' >> /etc/fstab
mount -a

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# Set any ECS agent configuration options
echo "ECS_CLUSTER=$${ecs_cluster_name}" >> /etc/ecs/ecs.config

--==BOUNDARY==--

EOF

  vars = {
    ecs_cluster_name = aws_ecs_cluster.this.name
    efs_id           = var.efs_id
    depends_on       = join("", var.depends_on_efs) # not used in the template; only forces Terraform to wait for the mount targets
  }
}
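To see why this makes the ASG wait, it helps to follow the rendered template downstream. Here is a sketch of how it could be consumed; the resource names and the extra variables (ami_id, instance_type, min_size, max_size, subnet_ids) are illustrative rather than taken from the actual module.

resource "aws_launch_configuration" "this" {
  name_prefix   = "${aws_ecs_cluster.this.name}-"
  image_id      = var.ami_id        # hypothetical variable for the ECS-optimized AMI
  instance_type = var.instance_type # hypothetical variable

  # user_data reads the rendered template, so the Launch Configuration (and the
  # ASG below) sits behind the EFS mount targets in the dependency graph. The
  # real module would presumably fall back to a non-EFS template when
  # attach_efs is false.
  user_data = data.template_file.user_data-efs[0].rendered

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "this" {
  name                 = aws_ecs_cluster.this.name
  launch_configuration = aws_launch_configuration.this.name
  min_size             = var.min_size   # hypothetical variable
  max_size             = var.max_size   # hypothetical variable
  vpc_zone_identifier  = var.subnet_ids # hypothetical variable
}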

Now when I run terraform plan, it tells me my infrastructure is up to date if I haven’t actually made any changes, and it still waits for EFS to be ready on a clean deploy. Win win.
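For anyone wiring this up from the calling side, the whole trick is simply passing the mount target IDs from the EFS module into depends_on_efs. Something like this, where the module sources and output names (id, mount_target_ids) are placeholders for whatever your modules actually expose:

module "efs" {
  source  = "./modules/efs" # placeholder source
  subnets = var.subnet_ids  # hypothetical variable
}

module "ecs" {
  source         = "./modules/ecs" # placeholder source
  attach_efs     = true
  efs_id         = module.efs.id               # assumed output name
  depends_on_efs = module.efs.mount_target_ids # assumed output name
}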