
Register nodes with multiple taints #1004

Closed
1 of 4 tasks
cthiebault opened this issue Sep 9, 2020 · 9 comments

Comments

@cthiebault

I have issues

I'm submitting a...

  • [x] bug report
  • [ ] feature request
  • [ ] support request - read the FAQ first!
  • [ ] kudos, thank you, warm fuzzy

What is the current behavior?

Using worker_groups I want to assign multiple taints to nodes.

It works when I assign a single taint:

kubelet_extra_args = "--node-labels=env-test=true --register-with-taints=env=prod:NoSchedule"

I tried multiple ways to assign several taints without success:

kubelet_extra_args = "--register-with-taints=env=prod:NoSchedule --register-with-taints=env=staging:NoSchedule" // env=staging:NoSchedule only

kubelet_extra_args = "--register-with-taints=env=prod:NoSchedule,env=staging:NoSchedule" // not even registered in EKS cluster

kubelet_extra_args = "--register-with-taints='env=prod:NoSchedule,env=staging:NoSchedule'" // no taints

kubelet_extra_args = "--register-with-taints='env=prod:NoSchedule','env=staging:NoSchedule'"  // no taints

kubelet_extra_args = "--register-with-taints 'env=prod:NoSchedule, env=staging:NoSchedule'" // no taint
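For reference, the kubelet documents --register-with-taints as a single comma-separated list of <key>=<value>:<effect> entries, so the second attempt above already matches the expected syntax; a minimal restatement of that form with purely illustrative keys:

# Illustrative only: the documented kubelet form is one flag value holding
# a comma-separated <key>=<value>:<effect> list.
kubelet_extra_args = "--register-with-taints=key1=value1:NoSchedule,key2=value2:NoExecute"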

What's the expected behavior?

I expect to have 2 taints:

  • env=prod:NoSchedule
  • env=staging:NoSchedule

Environment details

  • Affected module version: 12.2.0
  • Kubernetes version: 1.17
  • Terraform version:
Terraform v0.12.29
+ provider.aws v2.70.0
+ provider.kubernetes v1.12.0
@anarsen

anarsen commented Sep 10, 2020

Do the arguments make their way into the EC2 user data?

@cthiebault
Author

It seems so:

# Bootstrap and join the cluster
/etc/eks/bootstrap.sh --b64-cluster-ca 'xxx' --apiserver-endpoint 'https://xxx.yl4.eu-central-1.eks.amazonaws.com'  --kubelet-extra-args "--node-labels=env-test=true --register-with-taints=env=prod:NoSchedule,env=staging:NoSchedule" 'default'
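(A minimal way to double-check this on a running worker, assuming shell access to the node: the instance metadata service serves the rendered user data verbatim.)

# Verification sketch, assuming shell access to the worker node:
curl -s http://169.254.169.254/latest/user-data | grep register-with-taints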

@PG2000

PG2000 commented Sep 14, 2020

@cthiebault can you post a minimal example of your Terraform where you define your worker_groups? Then I will test it.

@cthiebault
Author

Here is my code:

locals {
  cluster_name = "gitlab"
  namespace    = "default"
}

module "vpc" {
  source               = "terraform-aws-modules/vpc/aws"
  version              = "~> 2.33"
  name                 = "eks-gitlab-vpc"
  cidr                 = "172.16.0.0/16"
  azs                  = [ "eu-central-1a", "eu-central-1b", "eu-central-1c" ]
  private_subnets      = [ "172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24" ]
  public_subnets       = [ "172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24" ]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
  enable_dns_support   = true
  public_subnet_tags   = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
  private_subnet_tags  = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
}

module "eks" {
  source             = "terraform-aws-modules/eks/aws"
  version            = "~> 12.2.0"
  cluster_name       = local.cluster_name
  cluster_version    = "1.17"
  subnets            = module.vpc.private_subnets
  vpc_id             = module.vpc.vpc_id
  config_output_path = "config/"
  enable_irsa        = true
  worker_groups      = [
    {
      name                 = "gitlab"
      instance_type        = "t3.medium"
      root_volume_size     = "50"
      asg_min_size         = "1"
      asg_desired_capacity = "1"
      asg_max_size         = "1"
      kubelet_extra_args   = "--node-labels=env-infra=true --register-with-taints=env=prod:NoSchedule,env=staging:NoSchedule"
      subnets              = [ module.vpc.private_subnets[ 0 ] ]
    },
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.12.0"
}

@barryib
Member

barryib commented Oct 8, 2020

It sounds like it comes from this change #474.

I just noticed that I closed #814 quickly and unfortunately I didn't see @eladitzhakian's comment.

The whole point here is to use single quotes for bootstrap.sh and to document correctly how to solve #473. https://linuxhint.com/bash_escape_quotes/ is a useful shell-fu link on escaping single and double quotes.

Can you please open a PR for this?
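A minimal shell sketch of the quoting behavior the linked article covers (illustrative only, not module code):

# Double quotes keep the comma-separated list as one argument while still
# allowing variable expansion; single quotes nested inside double quotes are
# passed to the program literally, so kubelet would see the quote characters.
TAINTS="env=prod:NoSchedule,env2=staging:NoSchedule"
echo "--register-with-taints=$TAINTS"      # expands to one clean flag value
echo '--register-with-taints=$TAINTS'      # literal $TAINTS, no expansion
echo "--register-with-taints='$TAINTS'"    # inner quotes arrive verbatim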

@yveslaroche
Contributor

yveslaroche commented Nov 4, 2020

@cthiebault you can't apply multiple taints that share the same key and effect but differ only in value. You can try this manually using kubectl after a node is added with the taint "env=prod:NoSchedule"; you'd receive the following error:

❯ kubectl taint nodes ip-xx-xx-xx-xx.us-west-2.compute.internal env=staging:NoSchedule
error: node ip-xx-xx-xx-xx.us-west-2.compute.internal already has env taint(s) with same effect(s) and --overwrite is false
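In other words, taints must differ in key or in effect to coexist on a node. A hedged sketch with illustrative keys (names assumed, echoing the label style used above) that should register side by side:

# Illustrative only: varying the key avoids the same-key/same-effect
# conflict described in the kubectl error above.
kubelet_extra_args = "--register-with-taints=env-prod=true:NoSchedule,env-staging=true:NoSchedule"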

@stale

stale bot commented Feb 2, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Feb 2, 2021
@stale

stale bot commented Mar 4, 2021

This issue has been automatically closed because it has not had recent activity since being marked as stale.

@stale stale bot closed this as completed Mar 4, 2021
@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 22, 2022