KMS Policy error in 19.0 when using iam role for aws provider #2327
Hi @bryantbiggs, the reason for creating this issue was that those two issues seem to relate to upgrading from an older version of the module to a newer one, and there the OP wants to turn off encryption completely. I think the right solution would be for the cluster to be created with encryption and without any errors, which is what I am trying to do here.
They are related and will all be solved by #2318.
@bryantbiggs so this issue is still present in 19.0.4. Here is the exact code to reproduce:

```hcl
module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.0.4"

  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}
```
I don't see those errors on my end. It looks like TFC/TFE is evaluating the policy, or attempting to, and stating that the entity creating the key won't be able to manage it in the future. By default, the module will add the identity who creates the cluster resources into the key policy as an admin (line 125 in 7124d76). However, users can update this to suit their needs; for example, you can enable the default KMS key policy, which gives access to all users of the account, by setting the corresponding variable.
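A minimal sketch of those two overrides, assuming the v19 input names are `kms_key_enable_default_policy` (not named explicitly in this thread, so verify against the module docs) and `kms_key_administrators`:

```hcl
module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  # Option A (assumed input name): fall back to the account-wide default
  # KMS key policy, which grants access to every principal in the account.
  kms_key_enable_default_policy = true

  # Option B (used later in this thread): explicitly list the principals
  # that should administer the key.
  kms_key_administrators = ["arn:aws:iam::172512501118:role/terraform"]
}
```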
Actually, this error message is from the AWS provider; TFC does not know anything special about AWS, it just executes Terraform. It seems the fact that the identity being used is a role and not a user might be causing the issue.
This shows no issue when run locally (not on TFE/TFC):

```hcl
data "aws_availability_zones" "available" {}

locals {
  name            = "ex-${replace(basename(path.cwd), "_", "-")}"
  cluster_version = "1.24"
  region          = "eu-west-1"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)
}

module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.0.4"

  vpc_id          = module.vpc.vpc_id
  cluster_name    = local.name
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnets

  # Note: these are redundant since this is the default on v19
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}
```
So the key part, I believe, is this (pasting the snippet from the code in my top post):

```hcl
provider "aws" {
  assume_role {
    role_arn = "arn:aws:iam::172512501118:role/terraform"
  }
}
```

The AWS provider is using a 'role' instead of a 'user', which might be the issue. When you grab the current identity ARN, it's possible the code is expecting a 'user' only.
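To make that concrete, here is a minimal sketch (the ARNs in the comments are illustrative, not taken from actual output) of what the caller identity looks like when the provider assumes a role:

```hcl
data "aws_caller_identity" "current" {}

output "current_identity_arn" {
  # With the assume_role block above, this is the temporary STS session ARN,
  # e.g. arn:aws:sts::172512501118:assumed-role/terraform/<session-name>,
  # rather than the persistent IAM role ARN
  # arn:aws:iam::172512501118:role/terraform that a KMS key policy would
  # normally reference as a principal.
  value = data.aws_caller_identity.current.arn
}
```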
Is this one really closed?
It does not work when you assume a role. The bug is actually open and can be easily reproduced if you use the code sample in the original post, including the `assume_role` provider configuration.
Yes, thank you. I can confirm we have this problem as well. @bryantbiggs, can you reopen this issue?
For now, to get around this you can set `kms_key_administrators` to match the assumed role ARN:

```hcl
provider "aws" {
  region              = "us-east-1"
  allowed_account_ids = ["172512501118"]

  assume_role {
    role_arn = "arn:aws:iam::172512501118:role/terraform"
  }
}

module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  # Add this line to match the assumed role ARN
  kms_key_administrators = ["arn:aws:iam::172512501118:role/terraform"]

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}
```
@bryantbiggs I would instead suggest the following generic work-around, which uses `aws_iam_session_context` to resolve the issue and might also be the approach terraform-aws-eks should use internally:

```hcl
provider "aws" {
  region              = "us-east-1"
  allowed_account_ids = ["172512501118"]

  assume_role {
    role_arn = "arn:aws:iam::172512501118:role/terraform"
  }
}

data "aws_caller_identity" "current" {}

data "aws_iam_session_context" "current" {
  # "This data source provides information on the IAM source role of an STS
  # assumed role. For non-role ARNs, this data source simply passes the ARN
  # through in issuer_arn."
  arn = data.aws_caller_identity.current.arn
}

module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  kms_key_administrators = [data.aws_iam_session_context.current.issuer_arn]

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}
```
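As a quick sanity check (a sketch; the expected value in the comment assumes the provider configuration above), an output can confirm that `issuer_arn` resolves back to the plain IAM role ARN rather than the STS session ARN:

```hcl
output "kms_key_administrator_arn" {
  # Expected to be arn:aws:iam::172512501118:role/terraform when the provider
  # assumes that role; for a non-role identity the caller ARN is passed through.
  value = data.aws_iam_session_context.current.issuer_arn
}
```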
Ooh, I did not know about this data source. Let me give it a try. Thank you!
Oh, that works beautifully. Thank you @fknittel!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Trying to create a new EKS cluster with module version 19.0 runs into a `MalformedPolicyDocumentException` error while creating the KMS key internally.

Versions
Module version [Required]: 19.0.3
Terraform version: 1.3.6
Reproduction Code [Required]
This simple code is enough to reproduce the error:
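A sketch of the reproduction, mirroring the code the author re-posted later in this thread (the module version pin below follows the Versions section above; the later re-post used 19.0.4):

```hcl
module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.0.3"

  vpc_id          = module.vpc.vpc_id
  cluster_name    = "k1"
  cluster_version = "1.24"
  subnet_ids      = module.vpc.private_subnet_ids

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
}
```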
AWS provider configuration:
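A sketch of the provider configuration, matching the snippet the author later quotes from this post:

```hcl
provider "aws" {
  assume_role {
    role_arn = "arn:aws:iam::172512501118:role/terraform"
  }
}
```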
The IAM role above has the AdministratorAccess policy attached.

Expected behavior
Cluster should be created without errors.
Actual behavior
Getting this error:
Terminal Output Screenshot(s)
Additional context