Using terraform's for_each with data that doesn't have a unique key

When using Terraform’s for_each you have to specify a unique key, which is used to link each generated resource with its source definition.

I’d like to use a natural index for this, rather than an arbitrary unique value. In this case I’m working with DNS, so the natural index would be the DNS record name (FQDN)… only that isn’t always unique; e.g. you can have multiple A records for example.com to allow load balancing, or you may have multiple TXT records providing verification to multiple vendors.

Is there a way to combine the natural index with a calculated value to provide a unique value; e.g. so we have the natural index followed by a 1 if it’s the first time this value’s seen, a 2 for the first duplicate, etc?
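To illustrate the constraint: if the for_each map is keyed on the record name alone, duplicate rows produce the same key and Terraform rejects the for expression at plan time. A minimal sketch (the record list and the elided resource body are hypothetical):

```hcl
locals {
  records = [
    { name = "", type = "A", value = "1.2.3.4" },
    { name = "", type = "A", value = "2.3.4.5" },
  ]
}

resource "aws_route53_record" "entry" {
  # Fails at plan time: both rows produce the key "", and Terraform
  # reports a duplicate object key error for the for expression.
  for_each = { for r in local.records : r.name => r }
  # ...
}
```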

Specific Requirement / Context

I’m working on migrating our DNS records to be managed via IaC using Terraform/Terragrunt (this is for scenarios where the entries are manually managed, rather than those where the related service is also under IaC).
I’m hoping to hold the record data in CSVs (or similar) so that those managing the records day to day don’t need familiarity with TF/TG; they can just update the data and have the pipeline take care of the rest.

The CSV format would be something like this:

myId,RecordName,Type,Value
1,,A,1.2.3.4
2,,A,2.3.4.5
3,test,A,3.4.5.6
4,test,A,4.5.6.7
5,www,cname,example.com

Note: I’m considering that each DNS zone would have a folder with its name, containing a CSV formatted as above which gives the records for that zone; so the above would be in the /example.com/ folder, and thus we’d have 2 A records for example.com, 2 for test.example.com, and one CNAME for www.example.com pointing to example.com.

locals {
  instances = csvdecode(file("myDnsRecords.csv"))
}

resource "aws_route53_zone" "zone" {
  name     = var.domainname
  provider = aws
}

resource "aws_route53_record" "route53_entry" {
  for_each = { for inst in local.instances : inst.myId => inst }

  name    = "${each.value.RecordName}${each.value.RecordName == "" ? "" : "."}${var.domainname}"
  type    = each.value.Type
  zone_id = aws_route53_zone.zone.zone_id
  ttl     = 3600
  records = [each.value.Value]
}
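For reference, csvdecode returns a list of objects, one per row, keyed by the header names, and the for expression above re-keys that list by myId. A sketch of the decoded structure (values abridged):

```hcl
# csvdecode(file("myDnsRecords.csv")) yields:
#   [
#     { myId = "1", RecordName = "",     Type = "A", Value = "1.2.3.4" },
#     { myId = "3", RecordName = "test", Type = "A", Value = "3.4.5.6" },
#     ...
#   ]
# and { for inst in local.instances : inst.myId => inst } re-keys it:
#   { "1" = { myId = "1", ... }, "3" = { myId = "3", ... }, ... }
```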

I don’t want the myId column though, as it adds no value and has no real relationship to the records: if we removed or inserted a record early in the CSV and renumbered the following rows, a number of changes would be required to resources which hadn’t really changed, just because their related "index" had changed.

I also don’t want those working with these CSVs to have to manually manage such fields; i.e. I could provide another column and ask that they populate it as below… but that’s asking for human error and adding complexity:

myId,RecordName,Type,Value
1,,A,1.2.3.4
2,,A,2.3.4.5
test1,test,A,3.4.5.6
test2,test,A,4.5.6.7
www1,www,cname,example.com

Question

Is there a way I can use a for_each loop with CSV data such as the below, while working around the unique-key constraint?

RecordName,Type,Value
,A,1.2.3.4
,A,2.3.4.5
test,A,3.4.5.6
test,A,4.5.6.7
www,cname,example.com

Solution

You can add unique keys to the data structure:

locals {
  instances    = csvdecode(file("myDnsRecords.csv"))
  instance_map = zipmap(range(0, length(local.instances)), local.instances)
}

resource "..." "..." {
  for_each = local.instance_map
  ...
}
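Note that zipmap keys the records by their list position, so inserting or removing a row early in the CSV still renumbers everything after it. A hedged sketch of the asked-for alternative — a natural key plus a per-duplicate counter — built with the grouping mode (`...`) of Terraform's for expressions; the "name/type/N" key format is an arbitrary choice here:

```hcl
locals {
  instances = csvdecode(file("myDnsRecords.csv"))

  # Group rows sharing the same natural key (RecordName + Type).
  # The `...` after the value activates grouping mode, so duplicate
  # keys collect into a list instead of raising an error.
  grouped = {
    for inst in local.instances :
    "${inst.RecordName}/${inst.Type}" => inst...
  }

  # Flatten back to a single map, appending each row's position within
  # its group, e.g. "test/A/0", "test/A/1". Edits elsewhere in the CSV
  # no longer change these keys.
  instance_map = merge([
    for key, group in local.grouped : {
      for i, inst in group : "${key}/${i}" => inst
    }
  ]...)
}
```

Rows sharing the same name and type still swap keys if reordered relative to each other, but unrelated insertions and deletions no longer cause churn across the whole file.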