Invalid dynamic for_each value for launch templates #1325

Closed
inge4pres opened this issue Apr 29, 2021 · 11 comments
@inge4pres commented Apr 29, 2021

Description

Hello 😄
I hit this error when upgrading to v15.1.0:

│ Error: Invalid dynamic for_each value
│ 
│   on .terraform/modules/tests-eks/workers_launch_template.tf line 415, in resource "aws_launch_template" "workers_launch_template":
│  415:     for_each = lookup(var.worker_groups_launch_template[count.index], "additional_ebs_volumes", local.workers_group_defaults["additional_ebs_volumes"])
│     ├────────────────
│     │ count.index is 0
│     │ local.workers_group_defaults["additional_ebs_volumes"] is empty tuple
│     │ var.worker_groups_launch_template is tuple with 1 element
│ 
│ Cannot use a tuple value in for_each. An iterable collection is required.

Versions

  • Terraform: 0.15.1
  • Provider(s):
+ provider registry.terraform.io/hashicorp/aws v3.37.0
+ provider registry.terraform.io/hashicorp/helm v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes-alpha v0.3.2
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0
  • Module: terraform-aws-modules/eks/aws

Reproduction

Steps to reproduce the behavior:

  • configure/upgrade a cluster using both worker_groups and worker_groups_launch_template
  • set
  workers_group_defaults = {
    root_volume_type = "gp2"
  }

to fix the "gp3" error mentioned in #1205

  • run a terraform plan to see the upgrade plan
  • error is thrown

Code Snippet to Reproduce

module "tests-eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "15.1.0"

  cluster_name     = local.eks_cluster_name
  cluster_version  = local.cluster-version
  subnets          = concat(module.eks-vpc.public_subnets, module.eks-vpc.private_subnets)
  vpc_id           = module.eks-vpc.vpc_id
  enable_irsa      = true
  write_kubeconfig = false

  map_users = concat(local.cluster-users, [local.github])

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                 = "kube-core"
      instance_type        = "t3.large"
      asg_max_size         = 6
      asg_desired_capacity = 3
      asg_min_size         = 2
      autoscaling_enabled  = true
      kubelet_extra_args = join(" ", [
        "--node-labels=node.kubernetes.io/lifecycle=core",
        "--register-with-taints=${local.taints.core.key}=${local.taints.core.value}:${local.taints.core.effect}"
      ])
      subnets = ...
    }
  ]

  worker_groups_launch_template = [
    {
      name                 = "app"
      instance_type        = "c5.2xlarge"
      asg_max_size         = 100
      asg_desired_capacity = 0
      asg_min_size         = 0
      key_name             = aws_key_pair.ssh.id
      autoscaling_enabled  = true
      public_ip            = true
      kubelet_extra_args = join(" ", [
        "--node-labels=node.kubernetes.io/lifecycle=spot",
        "--system-reserved=memory=500Mi,ephemeral-storage=2Gi"
      ])
      subnets = ...
    }
  ]
}

Expected behavior

The plan should execute successfully.

Actual behavior

The error above is reported.
I tried inspecting the module with TF_LOG=DEBUG but didn't find any useful hints.

Additional context

Faced during an upgrade of the module from the latest 13.x version.
Same error with Terraform 0.15.0.

@inge4pres (Author)

So a workaround that worked for me was removing the default option

workers_group_defaults = {
  root_volume_type = "gp2"
}

and adding the root_volume_type key to every worker_groups item, as sketched below.
At that point I was able to run a plan successfully. I took a look into the module, but I have no idea why adding defaults would mess with Terraform's type inference when iterating in that dynamic block.
After searching for similar issues in the Terraform repo, I reckon explicitly specifying the map types might help, but I don't have a clear understanding of what's happening.
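
Roughly, each entry ended up looking like this (a sketch; the remaining attributes are unchanged from my original snippet):

  worker_groups = [
    {
      name             = "kube-core"
      root_volume_type = "gp2" # previously supplied via workers_group_defaults
      # ...remaining attributes as before
    }
  ]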

@pmontanari

Hi,
I'm having the same error, but the workaround doesn't work for me.
Thanks

@mmcguinn

Ran into the same issue when trying an upgrade to TF 0.15.3 (from 0.14) today. Diving into the error and looking around has led to a lot of confusion.

The format for additional_ebs_volumes given here is a list of maps. But looking at one of the several places this value is used (either via a default or a value passed in one of the worker group configs), it is passed into a for_each.

The docs for for_each specify it must take either a map or a set of strings.

Except for_each inside a dynamic block seems to be treated specially, according to the docs for dynamic. Importantly, they state that for_each accepts "any collection or structural value".

At least at the moment this feels to me like a regression in Terraform itself (which is treating the 'special' for_each inside dynamic as a normal one), but I haven't had time to try and pin it down further yet.
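
For reference, the distinction looks roughly like this (a minimal sketch; the resource types, variable shape, and device names are illustrative, not taken from the module):

# Resource-level for_each: only a map or a set of strings is accepted.
resource "aws_ebs_volume" "per_az" {
  for_each          = toset(["us-east-1a", "us-east-1b"])
  availability_zone = each.value
  size              = 10
}

# for_each inside a dynamic block: per the dynamic docs, any collection
# should work, including a list of maps like additional_ebs_volumes.
variable "additional_ebs_volumes" {
  default = [
    { device_name = "/dev/xvdb", volume_size = 100 }
  ]
}

resource "aws_launch_template" "sketch" {
  name_prefix = "sketch-"

  dynamic "block_device_mappings" {
    # Iterates over the list of maps; each element is exposed as
    # block_device_mappings.value inside the content block.
    for_each = var.additional_ebs_volumes
    content {
      device_name = block_device_mappings.value.device_name
      ebs {
        volume_size = block_device_mappings.value.volume_size
      }
    }
  }
}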

@davidgp1701

Hi,

Same issue; the proposed workaround does not work for me.

Thanks,

@mmcguinn

Working from my thoughts above about the two scopes of for_each, I spent some time today trying to replicate the issue at the Terraform level, with the goal of getting something reportable as a bug directly to the TF repo. First I tried using aws_s3_bucket resources (since they don't rely on anything existing, with the lifecycle rule blocks as a dynamic target), but I wasn't able to reproduce the error. I then tried taking the aws_launch_template from this module and stripping away as much unrelated to the EBS configuration as I could, but I can't seem to replicate it that way either.

Perhaps someone else will have more luck isolating it? I still don't think it's an issue with this module, given the nature of the error and the fact that it only presents on a TF upgrade, but it doesn't seem my idea above was correct.
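
The rough shape of the S3 attempt, in case anyone wants to pick it up (a sketch mirroring the module's lookup() pattern; names and values are illustrative):

locals {
  # Mirrors workers_group_defaults: the default is an empty tuple.
  group_defaults = { lifecycle_rules = [] }
  # One group with no override, like a single worker group entry.
  groups = [{}]
}

resource "aws_s3_bucket" "repro" {
  count         = length(local.groups)
  bucket_prefix = "foreach-repro-"

  dynamic "lifecycle_rule" {
    # Same lookup-with-default shape that fails in the module; with the
    # empty-tuple default this should plan zero lifecycle_rule blocks.
    for_each = lookup(local.groups[count.index], "lifecycle_rules", local.group_defaults["lifecycle_rules"])
    content {
      enabled = true
      expiration {
        days = lifecycle_rule.value.days
      }
    }
  }
}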

@davidgp1701

@mmcguinn thanks for checking it out, I will try to replicate the error.

I saw there is a new version of the module, 16.0.0. I just updated my code to it, but this error still persists.

@davidgp1701 commented May 20, 2021

Hi,

Yesterday was a busy day at work and I didn't have time to replicate the problem.

Anyway, I just saw there's a new Terraform version, 0.15.4. I checked the changelog and saw they fixed this: hashicorp/terraform#28509. Not sure if it's related, but I just executed a terraform plan with this version and EKS module 16.0.1, and I'm not able to replicate the error now.

I will check the other clusters that I still have on Terraform 0.14.11 to see if I can replicate it, but I'll probably need to update other things in that code first to make it Terraform 0.15-friendly.

Update: Running a terraform plan on another cluster using EKS Terraform module version 15.2.0 and Terraform 0.15.4, everything seems to be working fine. In my particular case this bug has been fixed.
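
For anyone pinning versions to avoid the broken range, a minimal constraint (assuming, per the changelog, that 0.15.4 is the first release with the hashicorp/terraform#28509 fix):

terraform {
  # 0.15.0–0.15.3 exhibited the dynamic for_each error for us;
  # 0.15.4 carries the fix referenced above.
  required_version = ">= 0.15.4"
}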

@mmcguinn

@davidgp1701 That changelog line caught my eye as well; I can confirm that I am now getting clean plans going from 0.14.x to 0.15.4.

@barryib (Member) commented May 29, 2021

Can we close this? It sounds like it's fixed in Terraform core.

@inge4pres (Author)

I didn't test it with the provided combination, but given others have succeeded, I can close 😄

@github-actions (bot)

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 20, 2022