Deleting an old node group is attempting to overwrite a newly created node group #2001
Comments
Yes, unfortunately this is one of the drawbacks of versions prior to v18.x; the solution is to upgrade to v18.x to avoid this disruptive behavior.
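To illustrate the mechanism (a rough sketch; the exact addresses depend on how your state names these resources):

```sh
# Node groups are tracked by list index, e.g. (hypothetical addresses):
#   module.eks.aws_autoscaling_group.workers[0]  = ops-node (old)
#   module.eks.aws_autoscaling_group.workers[1]  = op-node  (new)
#
# Removing the first list entry shifts op-node down to index 0, so
# Terraform diffs it against the old ops-node resource recorded at [0]
# (planning a replacement) and destroys the now-unmatched [1].
terraform state list | grep aws_autoscaling_group
```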
@bryantbiggs yeah, I thought that might be the case... hmm, I can't upgrade now unfortunately. I want to clean this up; do you think it would be enough to just delete the autoscaling group manually in the AWS dashboard and also remove the corresponding terraform state? And I guess the launch template too?
You could try. In the end you need the order in state to match the order of your code (array index order, that is); otherwise all node groups after the affected node group's index are at risk of re-creation.
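For example (a rough sketch, not verified against v15.x; assuming the worker ASGs are tracked as `module.eks.aws_autoscaling_group.workers[N]`, check `terraform state list` for the real addresses):

```sh
# Inspect how the module tracks the node groups in state
terraform state list | grep aws_autoscaling_group

# Stop managing the old node group's ASG so it can be deleted
# manually in the AWS console (same idea for the launch template)
terraform state rm 'module.eks.aws_autoscaling_group.workers[0]'

# Shift the surviving node group into the now-vacant index so the
# state order matches the code order again
terraform state mv \
  'module.eks.aws_autoscaling_group.workers[1]' \
  'module.eks.aws_autoscaling_group.workers[0]'
```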
Hmm, ok.
Honestly, if you're going through all this, I would just upgrade and be done with this problem.
As far as I remember, trying to upgrade from v15 -> v18 caused terraform changes that would attempt to recreate the EKS cluster itself, so I stopped attempting that and just worked with what I have.
I don't know about coming from v15, but for most coming from v17 the following worked to avoid replacing the control plane:

```hcl
prefix_separator                   = ""
iam_role_name                      = $CLUSTER_NAME
cluster_security_group_name        = $CLUSTER_NAME
cluster_security_group_description = "EKS cluster security group."
```

Ref: #1744 (comment)
Yeah, I am aware of that thread; I attempted that on a test environment and for me it was still attempting to recreate the EKS cluster itself. You can see I even made some comments on that thread and got stuck. It seems like v15 causes additional issues.
Does this problem (the one in this issue) also occur in version 17? If not, I might attempt a v15 -> v17 upgrade.
Yes, there were a number of issues related to this in v17.x that we fixed in v18.x, such as #1105.
Okay~ I think the best option here is just to eventually recreate the EKS cluster and migrate my apps across, as the main problem with following the upgrade guides and that main upgrade help thread is that the terraform state names people suggest manipulating do not correspond with what I have in my state file. As a temporary solution, I've left the configuration for the existing/old node group in place.
Description
Originally I had a node group called `ops-node` with instances of type `m5.xlarge`. I have created a new node group called `op-node` with instances of type `m5.2xlarge` and migrated the apps from the original node group over to this new one because they required more resources.

Now I want to clean up the original/old node group `ops-node`. When I attempt to delete the old node group configuration (removing the entry `{}` in the `[]`), it also wants to delete the newly created node group.

Versions
Module version [Required]: 15.1.0
Terraform version:
Terraform v1.0.6
Reproduction Code [Required]
Initially:
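Roughly, the relevant part of the config (a minimal sketch assuming the module's `worker_groups` list from v15.x; values marked as assumed are illustrative):

```hcl
worker_groups = [
  {
    name                 = "ops-node"   # original node group
    instance_type        = "m5.xlarge"
    root_volume_size     = 50           # assumed value
    asg_desired_capacity = 2            # assumed value
  },
  {
    name                 = "op-node"    # new, larger node group
    instance_type        = "m5.2xlarge"
    root_volume_size     = 100          # assumed value
    asg_desired_capacity = 2            # assumed value
  },
]
```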
After:
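After removing the old entry (same sketch, same assumptions):

```hcl
worker_groups = [
  {
    name                 = "op-node"
    instance_type        = "m5.2xlarge"
    root_volume_size     = 100          # assumed value
    asg_desired_capacity = 2            # assumed value
  },
]
```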
Steps to reproduce the behavior:

1. Create a new node group `op-node` with almost the same configuration, except a different instance type and root volume size.
2. Remove `ops-node` from the terraform config and then do terraform plan/apply.

Expected behavior
I expect just the first node group `ops-node` to be deleted.

Actual behavior
It's attempting to delete the first one and recreate the newly created one.
Terminal Output Screenshot(s)
Additional context
New node group I want to keep: `geeiq-prod-k8s-op-nodes20220409095313235600000003`

Old node group I want to delete: `geeiq-prod-k8s-ops-nodes20210422145135744400000006`
I'm aware this is an old version of this module, but unfortunately we don't have the time or resources right now to make the upgrade to the latest version.
From what I can understand, it thinks the new `op-node` is an upgrade of the original `ops-node` rather than a completely new node group. Is anyone able to offer advice on what I can do to separate them? (I assumed a different name would have been enough.)