
EKS version 1.19 unable to find ami #1247

Closed
1 of 4 tasks
emilchitas opened this issue Feb 17, 2021 · 8 comments · Fixed by #1371

Comments

@emilchitas

emilchitas commented Feb 17, 2021

I have issues

I'm submitting a...

  • bug report
  • feature request
  • support request - read the FAQ first!
  • kudos, thank you, warm fuzzy

What is the current behavior?

When trying to deploy an EKS cluster, version 1.19, I get the following error:
Error: Your query returned no results. Please change your search criteria and try again.
It seems as if the module is unable to find a matching AMI for the nodes.

If this is a bug, how to reproduce? Please include a code sample if relevant.

Deploy a version 1.19 EKS cluster

module "eks" {
  source                          = "terraform-aws-modules/eks/aws"
  version                         = "14.0.0"
  cluster_name                    = "${var.name}-${var.environment}-cluster"
  cluster_version                 = "1.19"
  subnets                         = var.private-subnets
  vpc_id                          = var.vpc_id
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
  enable_irsa                     = true
  write_kubeconfig                = false

  node_groups = {
    default = {
      name             = "${var.name}-${var.environment}-node-group"
      min_capacity     = 1
      desired_capacity = var.eks_node_group.desired_capacity
      max_capacity     = var.eks_node_group.max_capacity
      instance_types   = [var.eks_node_group.instance_type]
      key_name         = var.keypair_name
      tags = [
        {
          "key"                 = "k8s.io/cluster-autoscaler/enabled"
          "propagate_at_launch" = "false"
          "value"               = "true"
        },
        {
          "key"                 = "k8s.io/cluster-autoscaler/${local.cluster_name}"
          "propagate_at_launch" = "false"
          "value"               = "true"
        }
      ]
    }
  }
  map_roles = local.roles
  map_users = local.users
  tags      = local.common_tags
}

What's the expected behavior?

Terraform should be able to find a suitable AMI for this cluster version.

Are you able to fix this problem and submit a PR? Link here if you have already.

No

Environment details

  • Affected module version: 14.0.0
  • OS: Ubuntu
  • Terraform version: 0.14.6

Any other relevant info

None

@alghanmi

I am able to reproduce this issue.

As far as I can tell, there are no EKS-optimized Windows AMIs for Kubernetes 1.19. The latest available is for Kubernetes 1.18.

Since I do not use Windows workers in EKS, I added the following variable in the module block to resolve the issue:

worker_ami_name_filter_windows = "Windows_Server-2019-English-Core-EKS_Optimized-1.18-*"
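For context, the module looks up the Windows worker AMI with an aws_ami data source roughly like the sketch below (a simplified approximation, not the module's exact code). When no image name matches the filter, Terraform fails with the "Your query returned no results" error above, which is why pinning the filter to an existing 1.18 image works around it:

```hcl
# Approximate sketch of the module's Windows AMI lookup.
data "aws_ami" "eks_worker_windows" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["Windows_Server-2019-English-Core-EKS_Optimized-1.18-*"]
  }
}
```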

@jmcgeheeiv

When you implement this, you can also remove the code in local.tf that tests for Kubernetes >= 1.14, as the minimum EKS Kubernetes version is now 1.15:

  # Windows nodes are available from k8s 1.14. If cluster version is less than 1.14, fix ami filter to some constant to not fail on 'terraform plan'.
  worker_ami_name_filter_windows = (var.worker_ami_name_filter_windows != "" ?
    var.worker_ami_name_filter_windows : "Windows_Server-2019-English-Core-EKS_Optimized-${tonumber(var.cluster_version) >= 1.14 ? var.cluster_version : 1.14}-*"
  )
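With the version guard removed, the local could be reduced to something like this (a hedged sketch; the exact name pattern should match whatever the module currently uses):

```hcl
worker_ami_name_filter_windows = (var.worker_ami_name_filter_windows != "" ?
  var.worker_ami_name_filter_windows :
  "Windows_Server-2019-English-Core-EKS_Optimized-${var.cluster_version}-*"
)
```

As an aside, comparing versions with tonumber() is fragile anyway: tonumber("1.10") evaluates to 1.1, which compares as less than tonumber("1.9"), so dropping the numeric comparison also removes a latent bug.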

@barryib
Member

barryib commented May 19, 2021

I just opened #1371 to address this. Will you please review and test it?

@newb1e

newb1e commented May 19, 2021

> I just opened #1371 to address this. Will you please review it and test it ?

To upgrade from 1.19 to 1.20 I ran with the module from the PR, and the issue with the Windows AMI is resolved.
The upgrade and Terraform run finished successfully.

@nahidupa

nahidupa commented Jun 9, 2021

Looks like it's broken again with cluster_version 1.20; with the following filter I get an error:

worker_ami_name_filter_windows = "Windows_Server-2004-English-Core-EKS_Optimized-1.20*"
worker_ami_owner_id_windows    = "amazon"

Error: Your query returned no results. Please change your search criteria and try again.


@barryib
Member

barryib commented Jun 9, 2021

This has been fixed in the latest version of this module.

Please upgrade your module and follow the changelog and docs/upgrades.md for more info.

@nahidupa

@barryib thanks for the reply. I have tried with the latest master branch. It still cannot locate the image with this filter criteria, even though I have checked in AWS that the image exists:
worker_ami_name_filter_windows = "Windows_Server-2004-English-Core-EKS_Optimized-1.20*"

Another question: can I create a worker group with a directly assigned AMI?

The filter (worker_ami_name_filter_windows) is good. However, in production, changing the AMI without testing can cause issues.
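If the module version in use supports it (check the worker group inputs for your release), pinning a tested AMI directly might look like the sketch below. The AMI ID here is a placeholder, not a real image:

```hcl
worker_groups = [
  {
    name     = "windows-workers"
    platform = "windows"
    # Placeholder ID - replace with a tested EKS-optimized Windows AMI
    # from your own account/region instead of relying on the name filter.
    ami_id   = "ami-0123456789abcdef0"
  }
]
```

Pinning an explicit ami_id avoids the situation where the "most recent" matching image silently changes between plan and apply.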

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 20, 2022