
Error: Unauthorized on .terraform/modules/eks/aws_auth.tf line 63, in resource "kubernetes_config_map" "aws_auth": #1723

Closed
dlapina opened this issue Dec 20, 2021 · 3 comments

dlapina commented Dec 20, 2021

Description

Hello, I'm using this module to create my EKS cluster. Here is my configuration file:

eks.tf

data "aws_vpc" "eks" {
  tags = {
    Name = "vpc-intern-shared-services"
  }
}

# Retrieve subnet IDs in vpc sirc_dta
data "aws_subnet_ids" "eks" {
  vpc_id = data.aws_vpc.eks.id
  tags = {
    "subnet_type" = "private"
  }
}

data "aws_iam_policy_document" "worker_autoscaling" {
  statement {
    sid    = "eksWorkerAutoscalingAll"
    effect = "Allow"

    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:DescribeTags",
      "ec2:DescribeLaunchTemplateVersions",
    ]

    resources = ["*"]
  }

  statement {
    sid    = "eksWorkerAutoscalingOwn"
    effect = "Allow"

    actions = [
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
      "autoscaling:UpdateAutoScalingGroup",
    ]

    resources = ["*"]

    condition {
      test     = "StringEquals"
      variable = "autoscaling:ResourceTag/kubernetes.io/cluster/${module.eks.cluster_id}"
      values   = ["owned"]
    }

    condition {
      test     = "StringEquals"
      variable = "autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled"
      values   = ["true"]
    }
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.24.0"

  cluster_version = "1.21"
  cluster_name    = "eks-cluster-${var.environment}"
  vpc_id          = data.aws_vpc.eks.id
  subnets         = data.aws_subnet_ids.eks.ids

  cluster_endpoint_private_access = false
  cluster_endpoint_public_access  = true

  node_groups_defaults = {
    ami_type  = "AL2_x86_64"
    disk_size = 50
  }

  node_groups = {
    on-demand = {
      desired_capacity = 1
      max_capacity     = 8
      min_capacity     = 1

      instance_types = ["t3.medium"]
    }
  }

  map_roles = var.map_roles
}

################################################################################
# Kubernetes provider configuration
################################################################################

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

My .gitlab-ci.yml:

variables:
  KUBE_CONTEXT: epsor/devops/infrastructure/shared-services:gitlab-agent

.kube-context:
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi

lint-eks:
  image:
    name: ghcr.io/terraform-linters/tflint
    entrypoint: [""]
  variables:
    TF_ROOT: ${CI_PROJECT_DIR}/eks-infra/terraform
  tags: ["docker"]
  stage: test
  before_script:
    - cd ${TF_ROOT}
  script:
    - tflint
  only:
    changes:
      - eks-infra/terraform/*

init-eks:
  tags: ["aws_ro"]
  variables:
    TF_ROOT: ${CI_PROJECT_DIR}/eks-infra/terraform
  stage: prepare
  before_script:
    - cd ${TF_ROOT}
  script:
    - gitlab-terraform init
  only:
    changes:
      - eks-infra/terraform/*

validate-eks:
  tags: ["docker"]
  variables:
    TF_ROOT: ${CI_PROJECT_DIR}/eks-infra/terraform
  stage: validate
  before_script:
    - cd ${TF_ROOT}
  script:
    - gitlab-terraform validate
  only:
    changes:
      - eks-infra/terraform/*

plan-eks:
  variables:
    TF_ROOT: ${CI_PROJECT_DIR}/eks-infra/terraform
  tags: ["aws_ro"]
  stage: build
  before_script:
    - cd ${TF_ROOT}
  script:
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
  only:
    changes:
      - eks-infra/terraform/*

apply-eks:
  variables:
    TF_ROOT: ${CI_PROJECT_DIR}/eks-infra/terraform
  tags: ["aws_rw"]
  stage: deploy
  environment:
    name: production
  before_script:
    - cd ${TF_ROOT}
  script:
    - gitlab-terraform apply
  dependencies:
    - plan-eks
  when: manual
  only:
    refs:
      - main
    changes:
      - eks-infra/terraform/*

I also have the GitLab agent installed for this cluster in my GitLab project.
I have no problem creating the cluster, but when I apply an update I get this error:


╷
│ Error: Unauthorized
│ 
│   with module.eks.kubernetes_config_map.aws_auth[0],
│   on .terraform/modules/eks/aws_auth.tf line 63, in resource "kubernetes_config_map" "aws_auth":
│   63: resource "kubernetes_config_map" "aws_auth" {
│ 
╵
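For reference, the failing resource is the module's embedded kubernetes_config_map.aws_auth, so the Unauthorized comes from the Kubernetes provider call that updates the aws-auth ConfigMap, not from the AWS provider itself. One commonly suggested variant of the provider block above uses exec-based authentication, so a fresh token is requested at apply time instead of reusing one captured earlier in the pipeline. A minimal sketch, assuming the AWS CLI is available on the runner image:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # Fetch a short-lived token when the provider is actually used,
  # rather than relying on the token data source resolved at plan time.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}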

Versions

  • Terraform: 1.0 (latest)
  • Provider(s):
  • Module: terraform-aws-modules/eks/aws 17.24.0
@bryantbiggs (Member)

This looks like you are missing roles/permissions in your aws-auth ConfigMap.
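For illustration only (a hedged sketch, not taken from this issue): with v17.x of the module, an IAM role that needs to talk to the cluster through Terraform, such as the role assumed by the aws_rw apply job, generally has to appear in map_roles so the module writes it into the aws-auth ConfigMap. The role ARN, username, and group below are hypothetical:

# Hypothetical var.map_roles value; replace the role ARN with the one the CI runner assumes.
map_roles = [
  {
    rolearn  = "arn:aws:iam::111122223333:role/gitlab-terraform-rw" # hypothetical
    username = "gitlab-terraform"
    groups   = ["system:masters"]
  },
]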

@bryantbiggs (Member)

closed in #1680


github-actions bot commented Nov 8, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 8, 2022