[BUG] Lifecycle tags in node_group prevents nodegroup from being changed #1525
+1 we are running into the same issue.
Nope, this is not a bug in the module. This is how managed node groups work, and the limitation comes from AWS.
@daroga0002 Really? Under the hood EKS just implements EC2 Auto Scaling groups, and you can change their instance types. The only thing you cannot change easily is the primary instance type (the first one in the list). If you change the 2nd or a later instance type there is no problem. We're effectively doing https://docs.aws.amazon.com/cli/latest/reference/autoscaling/update-auto-scaling-group.html after all. A note on common sense: you're suggesting two terraform applies and a few kubectl commands, which will have an impact on any StatefulSet (ReplicaSets are assumed to be stateless, so no impact there).
Not exactly friendly, especially when I can use either the AWS CLI or the web console to change the ASG.
Maybe not friendly, but this is an AWS limitation in node groups. Node groups are an AWS-managed solution with its own limitations; if you want to make this more dynamic you can always use worker groups (where you create your own managed Auto Scaling groups). If you don't believe me, I encourage you to check node groups in the EKS cluster UI, where you can modify only those: You can also check https://github.com/aws/containers-roadmap; maybe this will be changed (or you can open a feature request with them if one doesn't already exist).
@daroga0002 your first screenshot has cut off the pertinent part in the Details box:
Yup, but this is not supported by AWS and is a hack. Node groups are created using the resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group, which uses the EKS API endpoint https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateNodegroupConfig.html. To clarify further: the node group API endpoint creates the Auto Scaling group and other EC2 resources under the hood, so as a result Terraform using node groups is not even touching the EC2 API (which is responsible for Auto Scaling groups).
I don't understand why this bug is closed. The whole argument that it's normal behaviour and a limitation of the AWS API doesn't make sense to me. The node group name must be unique. Because of this, the I dug through the history. In the first version of the file, there was
In this PR #1372 the random_pet suffix was removed, hence the node group name is not unique on re-creation, and since Btw, from https://www.terraform.io/docs/language/meta-arguments/lifecycle.html#create_before_destroy
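The removed pattern can be sketched roughly like this (hypothetical resource names and keeper; the module's actual code differed):

```hcl
resource "random_pet" "node_group" {
  # Changing any keeper generates a new pet name, which forces
  # a replacement node group with a different, unique name.
  keepers = {
    instance_types = join("|", var.instance_types) # hypothetical keeper
  }
}

resource "aws_eks_node_group" "workers" {
  # The pet suffix keeps the replacement's name unique, so
  # create_before_destroy can build the new group before the
  # old one is destroyed, without a name collision.
  node_group_name = "example-${random_pet.node_group.id}"
  # ... cluster_name, node_role_arn, subnet_ids, scaling_config ...

  lifecycle {
    create_before_destroy = true
  }
}
```

With the suffix removed, the replacement group carries the same fixed name as the existing one, which is what makes create_before_destroy fail.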
Let me analyze this again.
Digging a bit more I found this https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/upgrades.md#upgrade-module-to-v1700-for-managed-node-groups
I haven't tried it (in my next upgrade I will), but I think there might be no name collision when using
If my understanding is correct, when using prefix_name, the node groups are suffixed with an increment to avoid name collisions.
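If that reading is right, a prefix-based sketch might look like this (assuming the `aws_eks_node_group` resource's `node_group_name_prefix` argument; all values are illustrative):

```hcl
resource "aws_eks_node_group" "workers" {
  # With a prefix instead of a fixed name, AWS appends a unique
  # suffix, so a replacement group never collides with the old one
  # during a create_before_destroy rotation.
  node_group_name_prefix = "example-"
  # ... cluster_name, node_role_arn, subnet_ids, scaling_config ...

  lifecycle {
    create_before_destroy = true
  }
}
```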
I checked, and in general the lifecycle hook is there to avoid downtime for workloads when you modify node groups that use Unfortunately, lifecycle meta-arguments cannot be conditional in Terraform, which forces us to choose which
From my side, I think we must accept issue 1, as issue 2 would mean this module does not allow rolling updates at all.
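The constraint that lifecycle meta-arguments cannot be conditional can be illustrated with a sketch (illustrative only):

```hcl
resource "aws_eks_node_group" "workers" {
  # ... cluster_name, node_role_arn, subnet_ids, scaling_config ...

  lifecycle {
    # Must be a literal value. Terraform rejects expressions here, e.g.
    #   create_before_destroy = var.rolling_update
    # fails at plan time, so the module has to hard-code one choice
    # for all users rather than making it configurable.
    create_before_destroy = true
  }
}
```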
@damienleger thank you for drawing attention to this 🥇 I will work later on a PR which will add to the docs the limitations of using a fixed
Thank you @daroga0002 for the quick look. Yes, I agree with you about using
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity since being marked as stale. |
Description
Changing instance types in node groups requires a manual workaround because lifecycle tags are incorrect
If I try to switch an instance type, e.g. t3.small -> t3.medium, or even just add more types, I cannot apply without performing a targeted delete or simply removing the node group.
The issue is that https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/modules/node_groups/node_groups.tf#L84 attempts to create an identically named replacement node group before removing the old one.
Before you submit an issue, please perform the following first:
1. Remove the `.terraform` directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): `rm -rf .terraform/`
2. Re-initialize the project root: `terraform init`
All of these steps are performed by our CI server
Versions
Reproduction
Code Snippet to Reproduce
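A minimal configuration to reproduce against might look like this (module version and all values are assumed for illustration, not taken from the original report):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0" # assumed version for this issue's era

  cluster_name    = "repro"      # illustrative values
  cluster_version = "1.21"
  vpc_id          = "vpc-00000000"
  subnets         = ["subnet-00000000", "subnet-11111111"]

  node_groups = {
    default = {
      desired_capacity = 1
      instance_types   = ["t3.medium"] # change to ["t3a.medium"] to trigger the bug
    }
  }
}
```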
Replace `instance_types = ["t3.medium"]` with `instance_types = ["t3a.medium"]` after the EKS cluster has been created and attempt to re-apply (note: it plans fine).
Expected behavior
Terraform should apply the change without requiring `terraform destroy -target ...` or similar.
Actual behavior