I have a CSV file that contains the following content:
id,value
123,{"M":{"name_1":{"S":"value_1"}, "name_2":{"S":"value_2"}}}
I’m trying to read this CSV file and create items in DynamoDB as follows:
locals {
  custom_data = csvdecode(file("${path.module}/../custom_data.csv"))
}

resource "aws_dynamodb_table_item" "custom_table_item" {
  for_each = { for row in local.custom_data : row.id => row }

  table_name = aws_dynamodb_table.custom_table.name
  hash_key   = aws_dynamodb_table.custom_table.hash_key

  item = jsonencode({
    "id"    : { "S" : each.value.id },
    "value" : jsondecode(each.value.value)
  })

  lifecycle {
    ignore_changes = [item]
  }
}
However, this code doesn’t work, and I can’t find any example of how to quote the JSON values in the CSV file so that jsondecode can produce the appropriate JSON structure.
Does anyone know how to do that?
Solution:

Sanitize your CSV file so that the JSON value is a properly quoted CSV field: wrap the whole value in double quotes and double every embedded quote, per RFC 4180. Otherwise the commas and quotes inside the JSON break the field apart:
id,value
123,"{""M"":{""name_1"":{""S"":""value_1""}, ""name_2"":{""S"":""value_2""}}}"
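You don't have to apply this quoting by hand. As an illustrative sketch (not part of the original answer), Python's standard `csv` module produces exactly this form automatically, since quote-wrapping and quote-doubling is the default RFC 4180 behavior:

```python
import csv
import io
import json

# The DynamoDB-style attribute value we want to store in the "value" column.
value = {"M": {"name_1": {"S": "value_1"}, "name_2": {"S": "value_2"}}}

buf = io.StringIO()
writer = csv.writer(buf)  # QUOTE_MINIMAL by default
writer.writerow(["id", "value"])
# The field contains commas and quotes, so the writer wraps it in
# double quotes and doubles every embedded quote (RFC 4180).
writer.writerow(["123", json.dumps(value)])

print(buf.getvalue())
```

Any CSV library that follows RFC 4180 (including spreadsheet exports) will quote the field the same way.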
then:
$ terraform-repl
> csvdecode(file("${path.module}/custom_data.csv"))
tolist([
  {
    "id" = "123"
    "value" = "{\"M\":{\"name_1\":{\"S\":\"value_1\"}, \"name_2\":{\"S\":\"value_2\"}}}"
  },
])
> jsondecode(csvdecode(file("${path.module}/custom_data.csv"))[0].value)
{
  "M" = {
    "name_1" = {
      "S" = "value_1"
    }
    "name_2" = {
      "S" = "value_2"
    }
  }
}
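For a quick check outside Terraform, the same two-step decode can be mirrored in Python — a sketch, with `csv.DictReader` standing in for `csvdecode` and `json.loads` for `jsondecode`:

```python
import csv
import io
import json

# The sanitized CSV from above, embedded as a string for a self-contained demo.
sanitized = (
    'id,value\n'
    '123,"{""M"":{""name_1"":{""S"":""value_1""}, ""name_2"":{""S"":""value_2""}}}"\n'
)

# csv.DictReader plays the role of csvdecode: it strips the outer quotes
# and collapses each doubled quote back to a single one.
rows = list(csv.DictReader(io.StringIO(sanitized)))

# json.loads plays the role of jsondecode on the recovered value string.
value = json.loads(rows[0]["value"])
print(value["M"]["name_1"]["S"])  # prints "value_1"
```

With the CSV quoted this way, the original `locals`/`aws_dynamodb_table_item` configuration from the question works unchanged.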