KMS Policy error in 19.0 when using iam role for aws provider #2327

Closed

pdeva opened this issue Dec 7, 2022 · 17 comments

@pdeva

pdeva commented Dec 7, 2022

Description

Trying to create a new EKS cluster with module version 19.0 runs into a MalformedPolicyDocumentException error while the module creates the KMS key internally.

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 19.0.3

  • Terraform version: 1.3.6

  • Provider version(s): aws provider: 4.45.0

Reproduction Code [Required]

This simple code is enough to reproduce the error:

module "eks_cluster" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "~> 19.0"
  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false

}

AWS provider configuration:

provider "aws" {
  region              = "us-east-1"
  allowed_account_ids = ["172512501118"]
  assume_role {
    role_arn = "arn:aws:iam::172512501118:role/terraform"
  }
}

The IAM role above has the AdministratorAccess policy attached.

Expected behavior

Cluster should be created without errors.

Actual behavior

Getting this error:

Error: error creating KMS Key: MalformedPolicyDocumentException: The new key policy will not allow you to update the key policy in the future.
with module.us_east_1.module.eks_cluster.module.kms.aws_kms_key.this[0]
on .terraform/modules/us_east_1.eks_cluster.kms/main.tf line 8, in resource "aws_kms_key" "this":
resource "aws_kms_key" "this" {

Terminal Output Screenshot(s)

[Screenshot of terminal output, 2022-12-07 at 6:57 AM]

Additional context

@pdeva pdeva changed the title from "KMS Policy error in 19.0" to "KMS Policy error in 19.0 when using iam role for aws provider" Dec 7, 2022
@bryantbiggs
Member

Please check open issues first, per the template: duplicate of #2321 and #2325

@pdeva
Author

pdeva commented Dec 7, 2022

Hi @bryantbiggs. The reason I created this issue is that those two issues seem to relate to upgrading from an older version of the module to a newer one, and in those cases the OP wants to turn off encryption completely.

I think the right solution is for the cluster to be created with encryption and without any errors, which is what I am trying to do here.

@bryantbiggs
Member

they are related and will all be solved by #2318

@pdeva
Author

pdeva commented Dec 7, 2022

@bryantbiggs so this issue is still present in 19.0.4 which has #2318.

Here is the exact code to reproduce:

module "eks_cluster" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "19.0.4"
  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false

}

And here is the error:
[Screenshot of terminal output, 2022-12-07 at 9:57 AM]

@bryantbiggs
Member

I don't see those errors on my end - it looks like TFC/TFE is evaluating the policy, or attempting to, and stating that the entity creating the key won't be able to manage it in the future.

By default, the module adds the identity that creates the cluster resources to the key policy as an admin:

key_administrators = coalescelist(var.kms_key_administrators, [data.aws_caller_identity.current.arn])

However, users can update this to suit their needs. For example, you can enable the default KMS key policy, which gives access to all users of the account, by setting enable_default_policy = true (a sketch of this follows below), or you can scope the policy based on the different roles and the type of access they need to the key.
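
For example, a minimal sketch of that first option, assuming the EKS module exposes this input as kms_key_enable_default_policy (forwarded to the internal KMS module's enable_default_policy); the surrounding arguments mirror the reproduction above:

module "eks_cluster" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "~> 19.0"
  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  # Assumption: opt into the account-default KMS key policy instead of
  # the caller-identity-based policy the module builds by default
  kms_key_enable_default_policy = true

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}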

@pdeva
Author

pdeva commented Dec 7, 2022

it looks like TFC/TFE is evaluating the policy

Actually, this error message is from the AWS provider; TFC does not know anything special about AWS, it just executes Terraform. It seems the fact that the identity being used is a role and not a user might be causing the issue.

@bryantbiggs
Member

This shows no issue when run locally, not on TFE/TFC:

data "aws_availability_zones" "available" {}

locals {
  name            = "ex-${replace(basename(path.cwd), "_", "-")}"
  cluster_version = "1.24"
  region          = "eu-west-1"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)
}

module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.0.4"

  vpc_id          = module.vpc.vpc_id
  cluster_name    = local.name
  cluster_version = "1.24"

  subnet_ids = module.vpc.private_subnets

  # Note: these are redundant since this is the default on v19
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}

@pdeva
Author

pdeva commented Dec 7, 2022

So the key part, I believe, is this (pasting a snippet from the code in my top post):

provider "aws" {
  assume_role {
    role_arn = "arn:aws:iam::172512501118:role/terraform"
  }
}

The AWS provider is using a 'role' instead of a 'user', which might be the issue. When you grab the current identity ARN, it's possible the code is expecting a 'user' only.
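
For illustration, a minimal sketch (the output name and the session suffix are hypothetical) of what the provider's caller identity resolves to when a role is assumed; it is a transient STS assumed-role session ARN rather than the IAM role ARN itself:

data "aws_caller_identity" "current" {}

output "caller_arn" {
  # With the assume_role block above, this resolves to an STS session ARN
  # such as arn:aws:sts::172512501118:assumed-role/terraform/<session-name>,
  # not arn:aws:iam::172512501118:role/terraform
  value = data.aws_caller_identity.current.arn
}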

@erltho

erltho commented Dec 13, 2022

Is this one really closed?
@pdeva were you able to provision when assuming a role?

@pdeva
Author

pdeva commented Dec 14, 2022

It does not work when you assume a role. The bug is actually still present and can easily be reproduced with the code sample in the original post, including the provider's assume_role block.

@erltho

erltho commented Dec 15, 2022

Yes thank you, I can confirm we have problems as well.

@bryantbiggs Can you reopen this issue?

@bryantbiggs
Member

For now, to get around this you can set kms_key_administrators directly. Using @pdeva's example:

provider "aws" {
  region              = "us-east-1"
  allowed_account_ids = ["172512501118"]
  assume_role {
    role_arn = "arn:aws:iam::172512501118:role/terraform"
  }
}

module "eks_cluster" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "~> 19.0"
  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids
 
  # Add this line to match the assumed role ARN
  kms_key_administrators = ["arn:aws:iam::172512501118:role/terraform"]

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}

@fknittel

@bryantbiggs I would instead suggest the following generic work-around, which uses aws_iam_session_context to resolve the issue and might also be the approach terraform-aws-eks should use internally:

provider "aws" {
  region              = "us-east-1"
  allowed_account_ids = ["172512501118"]
  assume_role {
    role_arn = "arn:aws:iam::172512501118:role/terraform"
  }
}

data "aws_caller_identity" "current" {}
data "aws_iam_session_context" "current" {
  # "This data source provides information on the IAM source role of an STS assumed role. For non-role ARNs, this data source simply passes the ARN through in issuer_arn."
  arn = data.aws_caller_identity.current.arn
}

module "eks_cluster" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "~> 19.0"
  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  kms_key_administrators = [data.aws_iam_session_context.current.issuer_arn]

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}
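
For context, a minimal sketch (the output name is hypothetical; the data sources are the ones declared above) of the two values involved; issuer_arn resolves the transient assumed-role session back to the underlying IAM role, which is a durable principal the key policy can reference:

output "kms_admin_arns" {
  value = {
    # e.g. arn:aws:sts::172512501118:assumed-role/terraform/<session-name>
    caller_arn = data.aws_caller_identity.current.arn
    # e.g. arn:aws:iam::172512501118:role/terraform
    issuer_arn = data.aws_iam_session_context.current.issuer_arn
  }
}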

@bryantbiggs
Member

ooh, I did not know about this data source. let me give it a try - thank you

@bryantbiggs
Member

oh that works beautifully - thank you @fknittel

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jan 22, 2023