new karpenter policy fails #2306
This is the current way this is done for other resources (Fargate in the example below):
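The original snippet is not reproduced here, but the pattern in question looks roughly like the following. This is a sketch only: the policy name and the surrounding resource attributes are approximations, not copied from the fargate-profile module source.

resource "aws_iam_role_policy_attachment" "this" {
  # Map keys are static strings, so Terraform can enumerate the instances at plan
  # time even though the policy ARNs on the right-hand side are only known at apply.
  for_each = { for k, v in {
    AmazonEKSFargatePodExecutionRolePolicy = "${local.iam_role_policy_prefix}/AmazonEKSFargatePodExecutionRolePolicy"
  } : k => v if local.create_iam_role }

  policy_arn = each.value
  role       = aws_iam_role.this[0].name
}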
This is how it's done in the Karpenter module:
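Here the for_each is keyed on the policy ARNs themselves (the same expression appears verbatim in the error output later in this thread); sketched below with the policy_arn/role attributes assumed rather than copied from the module:

resource "aws_iam_role_policy_attachment" "this" {
  # Keys are the fully-qualified policy ARNs, which depend on
  # local.iam_role_policy_prefix and are therefore unknown until apply.
  for_each = { for k, v in toset(compact([
    "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
    "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
    var.iam_role_attach_cni_policy ? local.cni_policy : "",
  ])) : k => v if local.create_iam_role }

  policy_arn = each.value
  role       = aws_iam_role.this[0].name
}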
Since the first example above works, maybe refactoring the Karpenter module to match it is the way to go: that should fix the issue and keep uniformity across the codebase.
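One possible shape for such a refactor (illustrative only, not the exact change that ended up in the module): keep the ARNs as map values and give each entry a static key, so the for_each keys are known at plan time.

resource "aws_iam_role_policy_attachment" "this" {
  for_each = { for k, v in merge(
    {
      AmazonEKSWorkerNodePolicy          = "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy"
      AmazonEC2ContainerRegistryReadOnly = "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
    },
    # The CNI policy entry is only added when requested; the condition is known at plan time
    var.iam_role_attach_cni_policy ? { AmazonEKS_CNI_Policy = local.cni_policy } : {}
  ) : k => v if local.create_iam_role }

  policy_arn = each.value
  role       = aws_iam_role.this[0].name
}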
Thoughts? It's a little more verbose but should work. I will try to test this out later today before making a PR. |
I can add another example which I just came across. Trying with just one input added -
results in
which suggests that the conditionals in the module are not correct (or am I missing something here)? |
I'm not able to reproduce - this works:

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "18.31.0"

  cluster_name = module.eks.cluster_name

  irsa_oidc_provider_arn          = module.eks.oidc_provider_arn
  irsa_namespace_service_accounts = ["karpenter:karpenter"]

  # Since Karpenter is running on an EKS Managed Node group,
  # we can re-use the role that was created for the node group
  create_iam_role = false
  iam_role_arn    = module.eks.eks_managed_node_groups["initial"].iam_role_arn
}
|
@bryantbiggs please try with
|
@bryantbiggs we already have Karpenter in place, via Blueprints. Is this a supported use of this module? |
absolutely - should be patched shortly with #2308 |
This issue has been resolved in version 18.31.2 🎉 |
just tested this on 18.31.2 on a new cluster and on the first apply it fails.

module "node_termination_queue" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = ">= 18.31.2, ~> 18.31"

  cluster_name = var.eks_cluster_id
  # Queue name needs to match the cluster name due to IAM policies.
  # A queue name is case-sensitive and can have up to 80 characters.
  # You can use alphanumeric characters, hyphens (-), and underscores (_).
  # Rule name can not be longer than 64, which limits cluster_name to 35 characters.
  queue_name = var.eks_cluster_id

  enable_spot_termination   = true
  queue_managed_sse_enabled = true

  create_iam_role            = false
  create_instance_profile    = false
  create_irsa                = false
  iam_role_attach_cni_policy = false
}

output "node_termination_queue" {
  value = module.node_termination_queue
}
output "node_termination_queue_queue_name" {
  value = module.node_termination_queue.queue_name
}

╷
│ Error: Invalid for_each argument
│
│ on .terraform/modules/base_system.karpenter.node_termination_queue/modules/karpenter/main.tf line 324, in resource "aws_iam_role_policy_attachment" "this":
│ 324: for_each = { for k, v in toset(compact([
│ 325: "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
│ 326: "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
│ 327: var.iam_role_attach_cni_policy ? local.cni_policy : "",
│ 328: ])) : k => v if local.create_iam_role }
│ ├────────────────
│ │ local.cni_policy is a string, known only after apply
│ │ local.create_iam_role is false
│ │ local.iam_role_policy_prefix is a string, known only after apply
│ │ var.iam_role_attach_cni_policy is false
│
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.
│
│ When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
│
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.

Please reopen this issue @antonbabenko |
closing since I am unable to reproduce:

Reproduction:

module "node_termination_queue" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = ">= 18.31.2, ~> 18.31"

  cluster_name = "example"
  queue_name   = "foo"

  enable_spot_termination   = true
  queue_managed_sse_enabled = true

  create_iam_role            = false
  create_instance_profile    = false
  create_irsa                = false
  iam_role_attach_cni_policy = false
}

output "node_termination_queue" {
  value = module.node_termination_queue
}

output "node_termination_queue_queue_name" {
  value = module.node_termination_queue.queue_name
}

Output
|
I'll retest tomorrow. |
just tested on a new cluster, using v19.4
|