Detect kubectl version during kubeconfig generation and avoid known non-working configurations #5288
Conversation
@ConnorJC3 Hi! Thank you for your contribution! :) Much appreciated. Please link a detailed manual test flow of both unsupported and supported versions. Also, as your comment writes,
@Skarlso Hi! I didn't realize that was needed. For manual testing:

Testing/confirming the old behavior: Make sure the installed `kubectl` is older than 1.24.0. Then, create a cluster with `eksctl create cluster`. Look at the generated kubeconfig (location is specified by `KUBECONFIG`):

```yaml
users:
- name: example
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      ...
```

Note that the generated API version is `v1alpha1`.

Testing the new behavior: After completing the above steps to test/confirm the old API version, use the same configuration with the following change: make sure the installed `kubectl` is 1.24.0 or newer. Then, create a cluster with `eksctl create cluster`. Look at the generated kubeconfig (location is specified by `KUBECONFIG`):

```yaml
users:
- name: example
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      ...
```

Note that the generated API version is `v1beta1`.
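If you want to script that check instead of eyeballing the file, here is a minimal sketch using client-go (not part of this PR; the kubeconfig path is illustrative):

```go
// Sketch: print which exec-plugin apiVersion a generated kubeconfig uses.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig that eksctl wrote (location set via KUBECONFIG).
	cfg, err := clientcmd.LoadFromFile("./kubeconfig-old")
	if err != nil {
		log.Fatal(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.Exec != nil {
			// Expect client.authentication.k8s.io/v1alpha1 with old kubectl,
			// and v1beta1 with kubectl >= 1.24.0.
			fmt.Printf("user %q exec apiVersion: %s\n", name, auth.Exec.APIVersion)
		}
	}
}
```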
Sorry, I wasn't really clear on this. :D I didn't mean testing steps, I meant the output of an actual manual flow showing that this all works. :)) Sorry about that. :)
@Skarlso ah, my bad, I didn't understand. Testing the original behavior:

```console
# Running with old kubectl, so this should use the old behavior of using v1alpha1
$ kubectl version --client --output=json
{
  "clientVersion": {
    "major": "1",
    "minor": "21+",
    "gitVersion": "v1.21.2-13+d2965f0db10712",
    "gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
    "gitTreeState": "clean",
    "buildDate": "2021-06-26T01:02:11Z",
    "goVersion": "go1.16.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
$ export KUBECONFIG=./kubeconfig-old
$ ./eksctl create cluster --name example-old --region us-east-2
2022-05-19 20:50:35 [ℹ] eksctl version 0.99.0-dev+dba24ece.2022-05-19T14:01:54Z
2022-05-19 20:50:35 [ℹ] using region us-east-2
2022-05-19 20:50:35 [ℹ] setting availability zones to [us-east-2c us-east-2a us-east-2b]
2022-05-19 20:50:35 [ℹ] subnets for us-east-2c - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-19 20:50:35 [ℹ] subnets for us-east-2a - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-19 20:50:35 [ℹ] subnets for us-east-2b - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-19 20:50:35 [ℹ] nodegroup "ng-1961941d" will use "" [AmazonLinux2/1.22]
2022-05-19 20:50:35 [ℹ] using Kubernetes version 1.22
2022-05-19 20:50:35 [ℹ] creating EKS cluster "example-old" in "us-east-2" region with managed nodes
2022-05-19 20:50:35 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-19 20:50:35 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=example-old'
2022-05-19 20:50:35 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "example-old" in "us-east-2"
2022-05-19 20:50:35 [ℹ] CloudWatch logging will not be enabled for cluster "example-old" in "us-east-2"
2022-05-19 20:50:35 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=example-old'
2022-05-19 20:50:35 [ℹ]
2 sequential tasks: { create cluster control plane "example-old",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-1961941d",
}
}
2022-05-19 20:50:35 [ℹ] building cluster stack "eksctl-example-old-cluster"
2022-05-19 20:50:35 [ℹ] deploying stack "eksctl-example-old-cluster"
2022-05-19 20:51:05 [ℹ] waiting for CloudFormation stack "eksctl-example-old-cluster"
2022-05-19 20:52:05 [ℹ] waiting for CloudFormation stack "eksctl-example-old-cluster"
... omitting duplicate log entries ...
2022-05-19 21:06:44 [ℹ] waiting for the control plane availability...
W0519 21:06:44.189354 5554 loader.go:221] Config not found: ./kubeconfig-old
W0519 21:06:44.189438 5554 loader.go:221] Config not found: ./kubeconfig-old
W0519 21:06:44.189470 5554 loader.go:221] Config not found: ./kubeconfig-old
2022-05-19 21:06:44 [✔] saved kubeconfig as "./kubeconfig-old"
2022-05-19 21:06:44 [ℹ] no tasks
2022-05-19 21:06:44 [✔] all EKS cluster resources for "example-old" have been created
2022-05-19 21:06:44 [ℹ] nodegroup "ng-1961941d" has 2 node(s)
2022-05-19 21:06:44 [ℹ] node "ip-192-168-39-111.us-east-2.compute.internal" is ready
2022-05-19 21:06:44 [ℹ] node "ip-192-168-69-81.us-east-2.compute.internal" is ready
2022-05-19 21:06:44 [ℹ] waiting for at least 2 node(s) to become ready in "ng-1961941d"
2022-05-19 21:06:44 [ℹ] nodegroup "ng-1961941d" has 2 node(s)
2022-05-19 21:06:44 [ℹ] node "ip-192-168-39-111.us-east-2.compute.internal" is ready
2022-05-19 21:06:44 [ℹ] node "ip-192-168-69-81.us-east-2.compute.internal" is ready
2022-05-19 21:06:45 [ℹ] kubectl command should work with "./kubeconfig-old", try 'kubectl --kubeconfig=./kubeconfig-old get nodes'
2022-05-19 21:06:45 [✔] EKS cluster "example-old" in "us-east-2" region is ready
# Notice the apiVersion is v1alpha1
$ cat ./kubeconfig-old
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: [REDACTED]
  name: example-old.us-east-2.eksctl.io
contexts:
- context:
    cluster: example-old.us-east-2.eksctl.io
    user: [REDACTED]
  name: [REDACTED]
current-context: [REDACTED]
kind: Config
preferences: {}
users:
- name: [REDACTED]
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - example-old
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
```

Testing the new behavior (the only change I made between these runs is updating `kubectl`):

```console
# Running with new kubectl, so this will use the behavior added in this PR to default to v1beta1
$ kubectl version --client --output=json
{
  "clientVersion": {
    "major": "1",
    "minor": "24",
    "gitVersion": "v1.24.0",
    "gitCommit": "4ce5a8954017644c5420bae81d72b09b735c21f0",
    "gitTreeState": "clean",
    "buildDate": "2022-05-03T13:46:05Z",
    "goVersion": "go1.18.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.4"
}
$ export KUBECONFIG=./kubeconfig-new
$ ./eksctl create cluster --name example-new --region us-east-2
2022-05-19 22:00:10 [ℹ] eksctl version 0.99.0-dev+dba24ece.2022-05-19T14:01:54Z
2022-05-19 22:00:10 [ℹ] using region us-east-2
2022-05-19 22:00:10 [ℹ] setting availability zones to [us-east-2a us-east-2c us-east-2b]
2022-05-19 22:00:10 [ℹ] subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-19 22:00:10 [ℹ] subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-19 22:00:10 [ℹ] subnets for us-east-2b - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-19 22:00:10 [ℹ] nodegroup "ng-874cf3f4" will use "" [AmazonLinux2/1.22]
2022-05-19 22:00:10 [ℹ] using Kubernetes version 1.22
2022-05-19 22:00:10 [ℹ] creating EKS cluster "example-new" in "us-east-2" region with managed nodes
2022-05-19 22:00:10 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-19 22:00:10 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=example-new'
2022-05-19 22:00:10 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "example-new" in "us-east-2"
2022-05-19 22:00:10 [ℹ] CloudWatch logging will not be enabled for cluster "example-new" in "us-east-2"
2022-05-19 22:00:10 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=example-new'
2022-05-19 22:00:10 [ℹ]
2 sequential tasks: { create cluster control plane "example-new",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-874cf3f4",
}
}
2022-05-19 22:00:10 [ℹ] building cluster stack "eksctl-example-new-cluster"
2022-05-19 22:00:10 [ℹ] deploying stack "eksctl-example-new-cluster"
2022-05-19 22:00:40 [ℹ] waiting for CloudFormation stack "eksctl-example-new-cluster"
2022-05-19 22:01:10 [ℹ] waiting for CloudFormation stack "eksctl-example-new-cluster"
... omitting duplicate log entries ...
2022-05-19 22:16:23 [ℹ] waiting for the control plane availability...
W0519 22:16:23.481034 5542 loader.go:221] Config not found: ./kubeconfig-new
W0519 22:16:23.481126 5542 loader.go:221] Config not found: ./kubeconfig-new
W0519 22:16:23.481166 5542 loader.go:221] Config not found: ./kubeconfig-new
2022-05-19 22:16:23 [✔] saved kubeconfig as "./kubeconfig-new"
2022-05-19 22:16:23 [ℹ] no tasks
2022-05-19 22:16:23 [✔] all EKS cluster resources for "example-new" have been created
2022-05-19 22:16:23 [ℹ] nodegroup "ng-874cf3f4" has 2 node(s)
2022-05-19 22:16:23 [ℹ] node "ip-192-168-6-27.us-east-2.compute.internal" is ready
2022-05-19 22:16:23 [ℹ] node "ip-192-168-78-207.us-east-2.compute.internal" is ready
2022-05-19 22:16:23 [ℹ] waiting for at least 2 node(s) to become ready in "ng-874cf3f4"
2022-05-19 22:16:23 [ℹ] nodegroup "ng-874cf3f4" has 2 node(s)
2022-05-19 22:16:23 [ℹ] node "ip-192-168-6-27.us-east-2.compute.internal" is ready
2022-05-19 22:16:23 [ℹ] node "ip-192-168-78-207.us-east-2.compute.internal" is ready
2022-05-19 22:16:25 [ℹ] kubectl command should work with "./kubeconfig-new", try 'kubectl --kubeconfig=./kubeconfig-new get nodes'
2022-05-19 22:16:25 [✔] EKS cluster "example-new" in "us-east-2" region is ready
# Notice the apiVersion is now v1beta1
$ cat ./kubeconfig-new
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: [REDACTED]
  name: example-new.us-east-2.eksctl.io
contexts:
- context:
    cluster: example-new.us-east-2.eksctl.io
    user: [REDACTED]
  name: [REDACTED]
current-context: [REDACTED]
kind: Config
preferences: {}
users:
- name: [REDACTED]
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --cluster-name
      - example-new
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
```
Nice, well done! :)
Just one suggestion but otherwise LGTM.
Co-authored-by: Chetan Patwal <[email protected]>
@cPu1 thanks for the review; I applied your suggestion and updated to latest.
Great, thanks for the contribution!
LGTM 🎉
Thanks!
```go
}
} */
type KubectlVersionData struct {
	Version string `json:"gitVersion"`
}
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Theoretically more robust to compare against `${major}.${minor}` rather than assuming a format for `gitVersion`?
@jglick I think it's a safe bet that Kubernetes will continue to use semver for the foreseeable future. The only real worry I would have is if they drop the `v` prefix, but the code should deal with a prefix-less version correctly.
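For what it's worth, here is a minimal sketch of the reviewer's alternative, comparing the `major`/`minor` JSON fields directly rather than parsing `gitVersion` (hypothetical helper, not the PR's code; note that EKS-built clients report a suffixed minor such as `"21+"`):

```go
// Sketch: decide "is kubectl >= 1.24?" from the major/minor fields of
// `kubectl version --client --output=json` instead of gitVersion.
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

type clientVersion struct {
	Major string `json:"major"`
	Minor string `json:"minor"`
}

type versionOutput struct {
	ClientVersion clientVersion `json:"clientVersion"`
}

func atLeast124(raw []byte) (bool, error) {
	var out versionOutput
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	major, err := strconv.Atoi(out.ClientVersion.Major)
	if err != nil {
		return false, err
	}
	// "21+" -> "21": the trailing "+" marks provider-patched builds.
	minor, err := strconv.Atoi(strings.TrimRight(out.ClientVersion.Minor, "+"))
	if err != nil {
		return false, err
	}
	return major > 1 || (major == 1 && minor >= 24), nil
}

func main() {
	ok, _ := atLeast124([]byte(`{"clientVersion":{"major":"1","minor":"24"}}`))
	fmt.Println(ok) // true
}
```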
* Add clusterName as debug info for troubleshooting
* Bump eksctl to v0.100.0 for fixing apiVersion changes

eksctl versions before v0.100.0 generated kubeconfigs with apiVersion "v1alpha1"; upgrade to v0.100.0 or later to have "v1beta1" generated. For more info, please check the links below:
- https://github.com/weaveworks/eksctl/releases
- https://github.com/weaveworks/eksctl/releases/tag/v0.100.0
- eksctl-io/eksctl#5288
- eksctl-io/eksctl#5287
Description

`kubectl` from version 1.24.0 onwards has removed support for ExecCredentials of version `client.authentication.k8s.io/v1alpha1`: kubernetes/kubernetes#108616

This PR changes the fallback API version to `client.authentication.k8s.io/v1beta1` when the user has `kubectl` installed and `kubectl` is version 1.24.0 or above. In this case we know that `v1alpha1` is a guaranteed fail, so it is better to try `v1beta1`, which might not work if the authenticator doesn't support it but isn't guaranteed to fail like `v1alpha1` is.

Indirectly fixes #5257
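A rough sketch of that fallback logic follows (illustrative names, not eksctl's actual code): probe the installed `kubectl` and, if its client version is 1.24.0 or newer, default to the `v1beta1` exec-credential API instead of `v1alpha1`.

```go
// Sketch: choose the exec-credential apiVersion based on the local kubectl.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"

	"golang.org/x/mod/semver"
)

const (
	alphaAPIVersion = "client.authentication.k8s.io/v1alpha1"
	betaAPIVersion  = "client.authentication.k8s.io/v1beta1"
)

func fallbackAPIVersion() string {
	// If kubectl is missing or fails to run, keep the old v1alpha1 default.
	out, err := exec.Command("kubectl", "version", "--client", "--output=json").Output()
	if err != nil {
		return alphaAPIVersion
	}
	var v struct {
		ClientVersion struct {
			GitVersion string `json:"gitVersion"`
		} `json:"clientVersion"`
	}
	if err := json.Unmarshal(out, &v); err != nil {
		return alphaAPIVersion
	}
	// Tolerate a missing "v" prefix; x/mod/semver requires it.
	gv := v.ClientVersion.GitVersion
	if !strings.HasPrefix(gv, "v") {
		gv = "v" + gv
	}
	// kubectl >= 1.24.0 removed v1alpha1 support, so it is a guaranteed
	// failure there; an unparsable version compares as less and keeps alpha.
	if semver.Compare(semver.MajorMinor(gv), "v1.24") >= 0 {
		return betaAPIVersion
	}
	return alphaAPIVersion
}

func main() {
	fmt.Println(fallbackAPIVersion())
}
```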
Checklist

- Added/modified documentation as required (such as the `README.md`, or the `userdocs` directory)
- Added labels for change area (e.g. `area/nodegroup`) and kind (e.g. `kind/improvement`)