Module attempting to change desired_size of managed node_group #681

Closed
1 of 4 tasks
davidalger opened this issue Jan 13, 2020 · 1 comment · Fixed by #691

Comments

@davidalger
Contributor

davidalger commented Jan 13, 2020

I have issues

I'm submitting a...

  • [x] bug report
  • [ ] feature request
  • [ ] support request - read the FAQ first!
  • [ ] kudos, thank you, warm fuzzy

What is the current behavior?

I deployed an EKS cluster using an AWS Managed Node Group, support for which was added in v8.0.0 of this module. A cluster autoscaler is deployed into this managed node group, and it has scaled the group up from 1 to 2 nodes. When running a plan, Terraform reports the following change to be made:

  # module.eks.module.node_groups.aws_eks_node_group.workers["0"] will be updated in-place
  ~ resource "aws_eks_node_group" "workers" {
        ami_type        = "AL2_x86_64"
        arn             = "<redacted>"
        cluster_name    = "<redacted>"
        disk_size       = 20
        id              = "<redacted>:<redacted>-0-evolving-mongoose"
        instance_types  = [
            "t3.medium",
        ]
        labels          = {}
        node_group_name = "<redacted>-0-evolving-mongoose"
        node_role_arn   = "arn:aws:iam::<redacted>:role/<redacted>20200110165639082800000001"
        release_version = "1.14.7-20190927"
        resources       = [
            {
                autoscaling_groups              = [
                    {
                        name = "<redacted>"
                    },
                ]
                remote_access_security_group_id = ""
            },
        ]
        status          = "ACTIVE"
        subnet_ids      = [
            "subnet-<redacted>",
        ]
        tags            = {
            "tf-workspace" = "<redacted>"
        }
        version         = "1.14"

      ~ scaling_config {
          ~ desired_size = 2 -> 1
            max_size     = 5
            min_size     = 1
        }
    }

If this is a bug, how to reproduce? Please include a code sample if relevant.
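The original report does not include a configuration, but for illustration a minimal setup along these lines should reproduce the drift once the cluster autoscaler scales the group. The module source/version, names, IDs, and node_groups keys below are assumptions rather than values taken from the report:

  # Hypothetical reproduction sketch; the node_groups keys follow the
  # v8.x "list of maps" input and may not match the module exactly.
  module "eks" {
    source          = "terraform-aws-modules/eks/aws"
    version         = "~> 8.0"

    cluster_name    = "example-cluster"
    cluster_version = "1.14"
    vpc_id          = "vpc-00000000"
    subnets         = ["subnet-00000000"]

    node_groups = [
      {
        instance_type    = "t3.medium"
        desired_capacity = 1
        min_capacity     = 1
        max_capacity     = 5
      },
    ]
  }

Once the autoscaler raises the group above the configured desired capacity, a subsequent plan proposes reverting desired_size, as shown in the plan output above.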

What's the expected behavior?

There should be a lifecycle policy on the aws_eks_node_group resource to ignore changes to desired_size, similar to the one currently on worker_groups:

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [desired_capacity]
  }

The aws_eks_node_group resource is currently missing an equivalent ignore_changes rule; a sketch of the proposed fix follows.
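A minimal sketch of what that lifecycle block could look like on aws_eks_node_group, assuming Terraform 0.12 syntax; the exact ignore_changes attribute path is my assumption, not taken from the merged change:

  resource "aws_eks_node_group" "workers" {
    # ... existing arguments unchanged ...

    lifecycle {
      create_before_destroy = true
      # Ignore drift when the cluster autoscaler changes the group's
      # desired size outside of Terraform.
      ignore_changes = [scaling_config[0].desired_size]
    }
  }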

Are you able to fix this problem and submit a PR? Link here if you have already.

#691

Environment details

  • Affected module version:
  • OS:
  • Terraform version:

Any other relevant info

davidalger added a commit to davidalger/terraform-aws-eks that referenced this issue Jan 17, 2020
max-rocket-internet pushed a commit that referenced this issue Jan 17, 2020
* Ignore changes to desired_size of node_groups

Resolves #681

* Update CHANGELOG.md
@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 28, 2022