
Detect kubectl version during kubeconfig generation and avoid known non-working configurations #5288

Merged · 8 commits · May 24, 2022

Conversation

ConnorJC3 (Contributor)

Description

kubectl 1.24.0 and later has removed support for ExecCredentials of API version client.authentication.k8s.io/v1alpha1: kubernetes/kubernetes#108616

This PR changes the fallback API version to client.authentication.k8s.io/v1beta1 when the user has kubectl installed and it is version 1.24.0 or above. In that case we know v1alpha1 is guaranteed to fail, so it is better to try v1beta1, which might not work if the authenticator doesn't support it but is not guaranteed to fail the way v1alpha1 is.
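For illustration, the version-gated fallback could look roughly like the Go sketch below. This is a minimal sketch, not the exact eksctl implementation; the function and constant names are hypothetical.

// Minimal sketch of the version-gated fallback; names are hypothetical
// and not the exact eksctl implementation.
const (
	alphaAPIVersion = "client.authentication.k8s.io/v1alpha1"
	betaAPIVersion  = "client.authentication.k8s.io/v1beta1"
)

// fallbackExecAPIVersion picks the exec-plugin API version written into the
// kubeconfig when the authenticator's supported version cannot be determined.
func fallbackExecAPIVersion(kubectlFound bool, major, minor int) string {
	// kubectl >= 1.24 removed v1alpha1 support, so writing it would be a
	// guaranteed failure; prefer v1beta1 in that case.
	if kubectlFound && (major > 1 || (major == 1 && minor >= 24)) {
		return betaAPIVersion
	}
	return alphaAPIVersion
}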

Indirectly fixes #5257

Checklist

  • Added tests that cover your change (if possible)
  • Added/modified documentation as required (such as the README.md, or the userdocs directory)
  • Manually tested
  • Made sure the title of the PR is a good description that can go into the release notes
  • (Core team) Added labels for change area (e.g. area/nodegroup) and kind (e.g. kind/improvement)

Skarlso (Contributor) commented May 19, 2022

@ConnorJC3 Hi!

Thank you for your contribution! :) Much appreciated.

Please provide a detailed manual test flow for both unsupported and supported versions.

Also, as your comment notes, --short is deprecated. You can use something like this:

kubectl version --client --output json | grep "gitVersion" | awk '{print $2}' | tr -d '",'
v1.21.0


ConnorJC3 (Contributor, Author) commented May 19, 2022

@Skarlso Hi! I didn't realize --output=json was supported by kubectl version; I've updated the PR to use that rather than parsing the short output via regex.
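For reference, decoding the JSON output in Go could look roughly like the sketch below; the struct shape matches the kubectl output shown later in this thread, but the names and error handling are assumptions, not the exact PR code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Mirrors the relevant part of `kubectl version --client --output=json`.
type kubectlVersion struct {
	ClientVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"clientVersion"`
}

func main() {
	// Structured output avoids regex-parsing the deprecated --short format.
	out, err := exec.Command("kubectl", "version", "--client", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl missing or failed, keeping old behavior:", err)
		return
	}
	var v kubectlVersion
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("unexpected kubectl output:", err)
		return
	}
	fmt.Println("client gitVersion:", v.ClientVersion.GitVersion) // e.g. "v1.24.0"
}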

For manual testing:

Testing/Confirming the old behavior:

Make sure kubectl, if installed, is below version 1.24.0 (I tested both with an outdated installation of kubectl and with no kubectl installed at all). Also make sure you are in a case that generates a v1alpha1 API version; the easiest way is to uninstall aws-iam-authenticator or remove it from your PATH (and make sure you have the AWS CLI installed). Alternatively, install a version of aws-iam-authenticator before 0.5.3, which also uses the v1alpha1 API version.

Then, create a cluster with eksctl like normal: https://eksctl.io/introduction/

Look at the generated kubeconfig (its location is set by the KUBECONFIG environment variable and defaults to ~/.kube/config if unset). You should see a section that looks like:

users:
- name: example
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      ...

Note that the generated API version is v1alpha1.

Testing the new behavior:

After completing the above steps to test/confirm the old API version, use the same configuration with the following change: Make sure kubectl is installed, is at least version 1.24.0, and is on your PATH.

Then, create a cluster with eksctl like normal: https://eksctl.io/introduction/

Look at the generated kubeconfig (its location is set by the KUBECONFIG environment variable and defaults to ~/.kube/config if unset). You should see a section that looks like:

users:
- name: example
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      ...

Note that the generated API version is v1beta1.

Skarlso (Contributor) commented May 19, 2022

Sorry, I wasn't really clear on this. :D I didn't mean testing steps; I meant the output of an actual manual flow showing that this all works. :)) Sorry about that. :)

ConnorJC3 (Contributor, Author) commented May 19, 2022

@Skarlso ah, my bad, I didn't understand.

Testing original behavior:

# Running with old kubectl, so this should use the old behavior of using v1alpha1
$ kubectl version --client --output=json
{
  "clientVersion": {
    "major": "1",
    "minor": "21+",
    "gitVersion": "v1.21.2-13+d2965f0db10712",
    "gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
    "gitTreeState": "clean",
    "buildDate": "2021-06-26T01:02:11Z",
    "goVersion": "go1.16.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

$ export KUBECONFIG=./kubeconfig-old

$ ./eksctl create cluster --name example-old --region us-east-2
2022-05-19 20:50:35 [ℹ]  eksctl version 0.99.0-dev+dba24ece.2022-05-19T14:01:54Z
2022-05-19 20:50:35 [ℹ]  using region us-east-2
2022-05-19 20:50:35 [ℹ]  setting availability zones to [us-east-2c us-east-2a us-east-2b]
2022-05-19 20:50:35 [ℹ]  subnets for us-east-2c - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-19 20:50:35 [ℹ]  subnets for us-east-2a - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-19 20:50:35 [ℹ]  subnets for us-east-2b - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-19 20:50:35 [ℹ]  nodegroup "ng-1961941d" will use "" [AmazonLinux2/1.22]
2022-05-19 20:50:35 [ℹ]  using Kubernetes version 1.22
2022-05-19 20:50:35 [ℹ]  creating EKS cluster "example-old" in "us-east-2" region with managed nodes
2022-05-19 20:50:35 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-19 20:50:35 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=example-old'
2022-05-19 20:50:35 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "example-old" in "us-east-2"
2022-05-19 20:50:35 [ℹ]  CloudWatch logging will not be enabled for cluster "example-old" in "us-east-2"
2022-05-19 20:50:35 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=example-old'
2022-05-19 20:50:35 [ℹ]
2 sequential tasks: { create cluster control plane "example-old",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-1961941d",
    }
}
2022-05-19 20:50:35 [ℹ]  building cluster stack "eksctl-example-old-cluster"
2022-05-19 20:50:35 [ℹ]  deploying stack "eksctl-example-old-cluster"
2022-05-19 20:51:05 [ℹ]  waiting for CloudFormation stack "eksctl-example-old-cluster"
2022-05-19 20:52:05 [ℹ]  waiting for CloudFormation stack "eksctl-example-old-cluster"
... omitting duplicate log entries ...
2022-05-19 21:06:44 [ℹ]  waiting for the control plane availability...
W0519 21:06:44.189354    5554 loader.go:221] Config not found: ./kubeconfig-old
W0519 21:06:44.189438    5554 loader.go:221] Config not found: ./kubeconfig-old
W0519 21:06:44.189470    5554 loader.go:221] Config not found: ./kubeconfig-old
2022-05-19 21:06:44 [✔]  saved kubeconfig as "./kubeconfig-old"
2022-05-19 21:06:44 [ℹ]  no tasks
2022-05-19 21:06:44 [✔]  all EKS cluster resources for "example-old" have been created
2022-05-19 21:06:44 [ℹ]  nodegroup "ng-1961941d" has 2 node(s)
2022-05-19 21:06:44 [ℹ]  node "ip-192-168-39-111.us-east-2.compute.internal" is ready
2022-05-19 21:06:44 [ℹ]  node "ip-192-168-69-81.us-east-2.compute.internal" is ready
2022-05-19 21:06:44 [ℹ]  waiting for at least 2 node(s) to become ready in "ng-1961941d"
2022-05-19 21:06:44 [ℹ]  nodegroup "ng-1961941d" has 2 node(s)
2022-05-19 21:06:44 [ℹ]  node "ip-192-168-39-111.us-east-2.compute.internal" is ready
2022-05-19 21:06:44 [ℹ]  node "ip-192-168-69-81.us-east-2.compute.internal" is ready
2022-05-19 21:06:45 [ℹ]  kubectl command should work with "./kubeconfig-old", try 'kubectl --kubeconfig=./kubeconfig-old get nodes'
2022-05-19 21:06:45 [✔]  EKS cluster "example-old" in "us-east-2" region is ready

# Notice the apiVersion is v1alpha1
$ cat ./kubeconfig-old
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: [REDACTED]
  name: example-old.us-east-2.eksctl.io
contexts:
- context:
    cluster: example-old.us-east-2.eksctl.io
    user: [REDACTED]
  name: [REDACTED]
current-context: [REDACTED]
kind: Config
preferences: {}
users:
- name: [REDACTED]
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - example-old
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false

Testing the new behavior (the only change I made between these runs is updating kubectl):

# Running with new kubectl, so this will use the behavior added in this PR to default to v1beta1
$ kubectl version --client --output=json
{
  "clientVersion": {
    "major": "1",
    "minor": "24",
    "gitVersion": "v1.24.0",
    "gitCommit": "4ce5a8954017644c5420bae81d72b09b735c21f0",
    "gitTreeState": "clean",
    "buildDate": "2022-05-03T13:46:05Z",
    "goVersion": "go1.18.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.4"
}

$ export KUBECONFIG=./kubeconfig-new

$ ./eksctl create cluster --name example-new --region us-east-2
2022-05-19 22:00:10 [ℹ]  eksctl version 0.99.0-dev+dba24ece.2022-05-19T14:01:54Z
2022-05-19 22:00:10 [ℹ]  using region us-east-2
2022-05-19 22:00:10 [ℹ]  setting availability zones to [us-east-2a us-east-2c us-east-2b]
2022-05-19 22:00:10 [ℹ]  subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-19 22:00:10 [ℹ]  subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-19 22:00:10 [ℹ]  subnets for us-east-2b - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-19 22:00:10 [ℹ]  nodegroup "ng-874cf3f4" will use "" [AmazonLinux2/1.22]
2022-05-19 22:00:10 [ℹ]  using Kubernetes version 1.22
2022-05-19 22:00:10 [ℹ]  creating EKS cluster "example-new" in "us-east-2" region with managed nodes
2022-05-19 22:00:10 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-19 22:00:10 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=example-new'
2022-05-19 22:00:10 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "example-new" in "us-east-2"
2022-05-19 22:00:10 [ℹ]  CloudWatch logging will not be enabled for cluster "example-new" in "us-east-2"
2022-05-19 22:00:10 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=example-new'
2022-05-19 22:00:10 [ℹ]
2 sequential tasks: { create cluster control plane "example-new",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-874cf3f4",
    }
}
2022-05-19 22:00:10 [ℹ]  building cluster stack "eksctl-example-new-cluster"
2022-05-19 22:00:10 [ℹ]  deploying stack "eksctl-example-new-cluster"
2022-05-19 22:00:40 [ℹ]  waiting for CloudFormation stack "eksctl-example-new-cluster"
2022-05-19 22:01:10 [ℹ]  waiting for CloudFormation stack "eksctl-example-new-cluster"
... omitting duplicate log entries ...
2022-05-19 22:16:23 [ℹ]  waiting for the control plane availability...
W0519 22:16:23.481034    5542 loader.go:221] Config not found: ./kubeconfig-new
W0519 22:16:23.481126    5542 loader.go:221] Config not found: ./kubeconfig-new
W0519 22:16:23.481166    5542 loader.go:221] Config not found: ./kubeconfig-new
2022-05-19 22:16:23 [✔]  saved kubeconfig as "./kubeconfig-new"
2022-05-19 22:16:23 [ℹ]  no tasks
2022-05-19 22:16:23 [✔]  all EKS cluster resources for "example-new" have been created
2022-05-19 22:16:23 [ℹ]  nodegroup "ng-874cf3f4" has 2 node(s)
2022-05-19 22:16:23 [ℹ]  node "ip-192-168-6-27.us-east-2.compute.internal" is ready
2022-05-19 22:16:23 [ℹ]  node "ip-192-168-78-207.us-east-2.compute.internal" is ready
2022-05-19 22:16:23 [ℹ]  waiting for at least 2 node(s) to become ready in "ng-874cf3f4"
2022-05-19 22:16:23 [ℹ]  nodegroup "ng-874cf3f4" has 2 node(s)
2022-05-19 22:16:23 [ℹ]  node "ip-192-168-6-27.us-east-2.compute.internal" is ready
2022-05-19 22:16:23 [ℹ]  node "ip-192-168-78-207.us-east-2.compute.internal" is ready
2022-05-19 22:16:25 [ℹ]  kubectl command should work with "./kubeconfig-new", try 'kubectl --kubeconfig=./kubeconfig-new get nodes'
2022-05-19 22:16:25 [✔]  EKS cluster "example-new" in "us-east-2" region is ready

# Notice the apiVersion is now v1beta1
$ cat ./kubeconfig-new
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: [REDACTED]
  name: example-new.us-east-2.eksctl.io
contexts:
- context:
    cluster: example-new.us-east-2.eksctl.io
    user: [REDACTED]
  name: [REDACTED]
current-context: [REDACTED]
kind: Config
preferences: {}
users:
- name: [REDACTED]
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --cluster-name
      - example-new
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false

Skarlso (Contributor) commented May 20, 2022

Nice, well done! :)

cPu1 (Contributor) left a review:

Just one suggestion but otherwise LGTM.

ConnorJC3 (Contributor, Author)

@cPu1 thanks for the review; I applied your suggestion and updated to latest main.

cPu1 (Contributor) commented May 24, 2022

> @cPu1 thanks for the review; I applied your suggestion and updated to latest main.

Great, thanks for the contribution!

cPu1 (Contributor) left a review:

LGTM 🎉

cPu1 enabled auto-merge (squash) May 24, 2022 11:30
cPu1 merged commit 02bd176 into eksctl-io:main May 24, 2022
jglick left a review:

Thanks!

The inline comment below refers to this snippet from the PR diff:

type KubectlVersionData struct {
	Version string `json:"gitVersion"`
}
jglick commented May 24, 2022:

Theoretically more robust to compare against ${major}.${minor} rather than assuming a format for gitVersion?

ConnorJC3 (Contributor, Author):

@jglick I think it's a safe bet that Kubernetes will continue to use semver for the foreseeable future. The only real worry I would have is if they drop the v prefix, but the code should handle a prefix-less version correctly.
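To illustrate, tolerant parsing could look like this rough Go sketch; the helper is hypothetical, not the PR's actual code, and it accepts versions with or without the v prefix.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMajorMinor accepts forms like "v1.24.0", "1.24.0", or
// "v1.21.2-13+d2965f0db10712" and returns the major/minor components.
func parseMajorMinor(gitVersion string) (int, int, error) {
	v := strings.TrimPrefix(gitVersion, "v") // tolerate a prefix-less version
	// Drop any pre-release or build suffix such as "-13+d2965f0db10712".
	if i := strings.IndexAny(v, "-+"); i >= 0 {
		v = v[:i]
	}
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return 0, 0, fmt.Errorf("unexpected version %q", gitVersion)
	}
	major, err := strconv.Atoi(parts[0])
	if err != nil {
		return 0, 0, err
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return 0, 0, err
	}
	return major, minor, nil
}

func main() {
	for _, s := range []string{"v1.24.0", "1.21.2-13+d2965f0db10712"} {
		major, minor, err := parseMajorMinor(s)
		fmt.Println(s, "->", major, minor, err)
	}
}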

guessi added a commit to guessi/aws-load-balancer-controller that referenced this pull request Jun 17, 2022
eksctl versions before v0.100.0 generated kubeconfigs with apiVersion "v1alpha1"; upgrade to v0.100.0 or later to have "v1beta1" generated.

For more info, see the links below:

- https://github.com/weaveworks/eksctl/releases
- https://github.com/weaveworks/eksctl/releases/tag/v0.100.0
- eksctl-io/eksctl#5288
- eksctl-io/eksctl#5287
k8s-ci-robot pushed a commit to kubernetes-sigs/aws-load-balancer-controller that referenced this pull request Jun 24, 2022

* Add clusterName as debug info for troubleshooting

* Bump eksctl to v0.100.0 for fixing apiVersion changes

(commit message otherwise identical to the one above)
Timothy-Dougherty pushed a commit to adammw/aws-load-balancer-controller that referenced this pull request Nov 9, 2023

(same commit message as above)
Development

Successfully merging this pull request may close this issue:

[Bug] eksctl utils write-config breaks in kubectl 1.24 when aws-iam-authenticator absent
4 participants