I have been working on improving the portability of my Terraform code, so last week I refactored the code that builds my ECS cluster with an EFS file system attached into two Terraform modules, one for EFS and one for ECS. Separating them introduced an interesting race condition: the autoscaling group gets created before the EFS mount targets have been fully provisioned, so the userdata fails to mount the file system.
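To make the failure concrete, the mount step inside the userdata looks something like the sketch below (illustrative only; fs-12345678 and us-east-1 are placeholders, not values from my setup). The EFS DNS name only resolves once a mount target exists in the instance's availability zone, which is exactly the race:

# Illustrative sketch, not the real template: mount EFS by its DNS name.
# The name only resolves once a mount target exists in this AZ, so running
# this before the mount targets are provisioned fails.
mkdir -p /mnt/efs
mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs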

Unfortunately, Terraform does not have any kind of depends_on for modules, so I had to figure out how to enforce that dependency myself. It may not be the most elegant solution, but it works.

I started by creating a variable in my ECS module called depends_on_efs, which is just a list of strings.

variable "depends_on_efs" {
  description = "If attaching EFS, ensures that the mount targets are ready before the cluster is built"
  type        = list(string)
  default     = []
}

Then I added a depends_on clause referencing the depends_on_efs variable to the template_file data block I use for my userdata.

data "template_file" "user_data-efs" {
  depends_on = [var.depends_on_efs]
  count      = var.attach_efs ? 1 : 0
  template   = <<EOF
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
...
EOF

  vars = {
    ecs_cluster_name = aws_ecs_cluster.this.name
    efs_id           = var.efs_id
  }
}
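The rendered template is what feeds the launch configuration's user_data, so anything that delays the data source also delays the launch configuration and, in turn, the autoscaling group. A rough sketch of that wiring (aws_launch_configuration.this is an assumed name, not necessarily what the module uses):

resource "aws_launch_configuration" "this" {
  ...
  # Because user_data is read from the template_file data source, this
  # resource inherits the data source's dependency on the mount targets.
  user_data = data.template_file.user_data-efs[0].rendered
}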

Finally, when I call the ECS module, I set the depends_on_efs variable to the mount target IDs for the EFS.

module "ecs-0" {
  source                        = "AustinCloudGuru/ecs/aws"
  version                       = "1.1.2"
  ecs_name                      = var.ecs_name
  ...
  depends_on_efs                = module.efs-0.mount_target_ids
}
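For this to work, the EFS module has to expose the mount target IDs as an output, along these lines (a sketch; aws_efs_mount_target.this is an assumed resource name inside the module):

output "mount_target_ids" {
  description = "The IDs of the EFS mount targets"
  # Assumes the module creates its mount targets as aws_efs_mount_target.this
  value = aws_efs_mount_target.this[*].id
}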

While the mount target IDs are not actually used by the ECS module, passing them in forces Terraform to wait until the mount targets have been created before building the ECS cluster, which lets the instances in the autoscaling group mount the EFS file system properly at boot.