Can't create node groups #1545

Closed

IgorKurylo opened this issue Aug 24, 2021 · 10 comments

Comments


IgorKurylo commented Aug 24, 2021

I am trying to create an EKS cluster with this module, using a launch template. The cluster is created successfully, but when it starts creating the node groups I get the following error:

Error: error creating EKS Node Group (alert-app-alpha:alert-app-alpha-node20210824081946824200000001): InvalidRequestException: Cannot specify instance types in launch template and API request
{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "48d5fb36-c38f-4142-bd52-9b46317fbf45"
  },
  ClusterName: "alert-app-alpha",
  Message_: "Cannot specify instance types in launch template and API request",
  NodegroupName: "alert-app-alpha-node20210824081946824200000001"
}

  eks\modules\node_groups\node_groups.tf line 1, in resource "aws_eks_node_group" "workers":
   1: resource "aws_eks_node_group" "workers" {
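
From the message, the API is being given an instance type twice: once via `instance_types` on the managed node group, and once via `instance_type` in the launch template it references. A minimal, hypothetical sketch of that conflicting shape (placeholder names and IDs, not my exact config):

```hcl
# Hypothetical, trimmed illustration of the conflict -- placeholder values throughout.
resource "aws_launch_template" "workers" {
  name_prefix   = "eks-"
  instance_type = "t3.small" # instance type set in the launch template...
}

resource "aws_eks_node_group" "workers" {
  cluster_name    = "alert-app-alpha"
  node_group_name = "alert-app-alpha-node"
  node_role_arn   = "arn:aws:iam::123456789012:role/example-node-role" # placeholder
  subnet_ids      = ["subnet-aaaa1111", "subnet-bbbb2222"]             # placeholders

  # ...and also passed to the EKS API here, which is what the
  # InvalidRequestException above complains about.
  instance_types = ["t3.small"]

  launch_template {
    id      = aws_launch_template.workers.id
    version = aws_launch_template.workers.default_version
  }

  scaling_config {
    desired_size = 1
    max_size     = 3
    min_size     = 1
  }
}
```

As far as I can tell, dropping `instance_types` from the node group, or `instance_type` from the launch template, should avoid the conflict.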

Providers required by state:
provider[registry.terraform.io/hashicorp/kubernetes]
provider[registry.terraform.io/hashicorp/aws]
provider[registry.terraform.io/hashicorp/local]
provider[registry.terraform.io/hashicorp/null]
provider[registry.terraform.io/hashicorp/random]
provider[registry.terraform.io/hashicorp/template]
provider[registry.terraform.io/terraform-aws-modules/http]

  • OS:
    Windows 10
  • Terraform version:
    Terraform v0.14.3
    • provider registry.terraform.io/hashicorp/aws v3.55.0
    • provider registry.terraform.io/hashicorp/cloudinit v2.2.0
    • provider registry.terraform.io/hashicorp/kubernetes v2.4.1
    • provider registry.terraform.io/hashicorp/local v2.1.0
    • provider registry.terraform.io/hashicorp/null v3.1.0
    • provider registry.terraform.io/hashicorp/random v3.1.0
    • provider registry.terraform.io/hashicorp/template v2.2.0
    • provider registry.terraform.io/terraform-aws-modules/http v2.4.1

I took the example from this repo:
https://github.com/terraform-aws-modules/terraform-aws-eks/tree/v17.1.0/examples/launch_templates_with_managed_node_groups
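
For completeness, my module call follows the shape of that example, roughly like the sketch below (paraphrased with placeholder IDs; the `launch_template_id` / `launch_template_version` and capacity keys are my reading of the v17 example and may not match my file exactly):

```hcl
# Rough, trimmed sketch of the module call -- placeholder IDs, not the real config.
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "~> 17.0"
  cluster_name    = "alert-alpha-eks-cluster"
  cluster_version = "1.20"
  vpc_id          = "vpc-0123456789abcdef0"                                   # placeholder
  subnets         = ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"] # placeholders

  node_groups = {
    node_0 = {
      desired_capacity        = 1
      max_capacity            = 3
      min_capacity            = 1
      instance_types          = ["t3.small"] # set here as well as in the launch template
      launch_template_id      = aws_launch_template.default.id
      launch_template_version = aws_launch_template.default.default_version
    }
  }
}
```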

daroga0002 (Contributor) commented Aug 24, 2021

The issue looks to be related to a missing instance type in your region.

Which region, which AZs, and what instance type are you trying to run this with?

You can also check this by running:

aws ec2 describe-instance-type-offerings --location-type availability-zone  --filters Name=instance-type,Values=t3.small --region us-east-1 --output table

(just change your region)
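
If you prefer to keep the check in Terraform, the same lookup can be done with the AWS provider's `aws_ec2_instance_type_offerings` data source (a sketch; assumes a reasonably recent hashicorp/aws 3.x provider):

```hcl
# Sketch: list the AZs in the current region that offer t3.small.
provider "aws" {
  region = "us-east-1"
}

data "aws_ec2_instance_type_offerings" "t3_small" {
  location_type = "availability-zone"

  filter {
    name   = "instance-type"
    values = ["t3.small"]
  }
}

output "t3_small_azs" {
  value = data.aws_ec2_instance_type_offerings.t3_small.locations
}
```

Any AZ missing from that output cannot host t3.small nodes.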

IgorKurylo (Author) commented:

> The issue looks to be related to a missing instance type in your region.
>
> Which region, which AZs, and what instance type are you trying to run this with?
>
> You can also check this by running:
>
> aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=instance-type,Values=t3.small --region us-east-1 --output table
>
> (just change your region)

The region is us-east-1, and the instance type is t3.small, as you said.

daroga0002 (Contributor) commented:

Which AZs are your subnets in? In one of them AWS doesn't offer t3 instances.

Can you also paste the terraform plan output?

IgorKurylo (Author) commented:

Sure, the plan output is attached:
```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:

  + create
 <= read (data resources)

Terraform will perform the following actions:

module.k8s_cluster.data.aws_eks_cluster.cluster will be read during apply

(config refers to values not yet known)

<= data "aws_eks_cluster" "cluster" {
+ arn = (known after apply)
+ certificate_authority = (known after apply)
+ created_at = (known after apply)
+ enabled_cluster_log_types = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ identity = (known after apply)
+ kubernetes_network_config = (known after apply)
+ name = (known after apply)
+ platform_version = (known after apply)
+ role_arn = (known after apply)
+ status = (known after apply)
+ tags = (known after apply)
+ version = (known after apply)
+ vpc_config = (known after apply)
}

module.k8s_cluster.data.aws_eks_cluster_auth.cluster will be read during apply

(config refers to values not yet known)

<= data "aws_eks_cluster_auth" "cluster" {
+ id = (known after apply)
+ name = (known after apply)
+ token = (sensitive value)
}

module.k8s_cluster.data.template_file.launch_template_boostrap will be read during apply

(config refers to values not yet known)

<= data "template_file" "launch_template_boostrap" {
+ id = (known after apply)
+ rendered = (known after apply)
+ template = <<-EOT
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="//"

        --//
        Content-Type: text/x-shellscript; charset="us-ascii"
        #!/bin/bash
        set -e
        
        # Bootstrap and join the cluster
        /etc/eks/bootstrap.sh --b64-cluster-ca '${cluster_auth_base64}' --apiserver-endpoint '${endpoint}' ${bootstrap_extra_args} --kubelet-extra-args "${kubelet_extra_args}" '${cluster_name}'
        
        --//--
    EOT
  + vars     = {
      + "bootstrap_extra_args" = ""
      + "cluster_auth_base64"  = (known after apply)
      + "cluster_name"         = "alert-alpha-eks-cluster"
      + "endpoint"             = (known after apply)
      + "kubelet_extra_args"   = ""
    }
}

  # module.k8s_cluster.aws_launch_template.default will be created
  + resource "aws_launch_template" "default" {
      + arn                     = (known after apply)
      + default_version         = (known after apply)
      + description             = "eks-launch-template"
      + id                      = (known after apply)
      + instance_type           = "t3.small"
      + latest_version          = (known after apply)
      + name                    = (known after apply)
      + name_prefix             = "eks--"
      + tags                    = {
          + "Environment" = "alpha"
          + "Region"      = "us-east-1"
        }
      + tags_all                = {
          + "Environment" = "alpha"
          + "Region"      = "us-east-1"
        }
      + update_default_version  = true

      + block_device_mappings {
          + device_name = "/dev/xvda"

          + ebs {
              + delete_on_termination = "true"
              + iops                  = (known after apply)
              + throughput            = (known after apply)
              + volume_size           = 30
              + volume_type           = "gp2"
            }
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
        }

      + monitoring {
          + enabled = true
        }

      + network_interfaces {
          + associate_public_ip_address = "false"
          + delete_on_termination       = "true"
          + security_groups             = (known after apply)
        }

      + tag_specifications {
          + resource_type = "instance"
          + tags          = {
              + "Environment" = "alpha"
              + "Region"      = "us-east-1"
            }
        }

      + tag_specifications {
          + resource_type = "volume"
          + tags          = {
              + "Environment" = "alpha"
              + "Region"      = "us-east-1"
            }
        }
    }

module.k8s_cluster.null_resource.updating_k8s_configuration will be created

  • resource "null_resource" "updating_k8s_configuration" {
    • id = (known after apply)
      }

module.rds_instance.aws_db_instance.rds will be created

  • resource "aws_db_instance" "rds" {
    • address = (known after apply)
    • allocated_storage = 30
    • apply_immediately = (known after apply)
    • arn = (known after apply)
    • auto_minor_version_upgrade = true
    • availability_zone = (known after apply)
    • backup_retention_period = (known after apply)
    • backup_window = (known after apply)
    • ca_cert_identifier = (known after apply)
    • character_set_name = (known after apply)
    • copy_tags_to_snapshot = false
    • db_subnet_group_name = (known after apply)
    • delete_automated_backups = true
    • endpoint = (known after apply)
    • engine = "postgres"
    • engine_version = "12"
    • engine_version_actual = (known after apply)
    • hosted_zone_id = (known after apply)
    • id = (known after apply)
    • identifier = (known after apply)
    • identifier_prefix = (known after apply)
    • instance_class = "db.t3.small"
    • kms_key_id = (known after apply)
    • latest_restorable_time = (known after apply)
    • license_model = (known after apply)
    • maintenance_window = (known after apply)
    • max_allocated_storage = 60
    • monitoring_interval = 0
    • monitoring_role_arn = (known after apply)
    • multi_az = false
    • name = "alert_alpha"
    • nchar_character_set_name = (known after apply)
    • option_group_name = (known after apply)
    • parameter_group_name = (known after apply)
    • password = (sensitive value)
    • performance_insights_enabled = false
    • performance_insights_kms_key_id = (known after apply)
    • performance_insights_retention_period = (known after apply)
    • port = (known after apply)
    • publicly_accessible = false
    • replicas = (known after apply)
    • resource_id = (known after apply)
    • skip_final_snapshot = true
    • snapshot_identifier = (known after apply)
    • status = (known after apply)
    • storage_encrypted = false
    • storage_type = "gp2"
    • tags = {
      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "network"
        }
    • tags_all = {
      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "network"
        }
    • timezone = (known after apply)
    • username = "alert_alpha"
    • vpc_security_group_ids = (known after apply)
      }

module.rds_instance.aws_db_subnet_group.rds_subnet_grp will be created

  • resource "aws_db_subnet_group" "rds_subnet_grp" {
    • arn = (known after apply)
    • description = "Managed by Terraform"
    • id = (known after apply)
    • name = "subgnet_rds_grp"
    • name_prefix = (known after apply)
    • subnet_ids = (known after apply)
    • tags = {
      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "network"
        }
    • tags_all = {
      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "network"
        }
        }

module.rds_instance.aws_security_group.rds_security_group_bastion will be created

  • resource "aws_security_group" "rds_security_group_bastion" {
    • arn = (known after apply)
    • description = "Allow connection to rds via bastion"
    • egress = [
      • {
        • cidr_blocks = [
          • "0.0.0.0/0",
            ]
        • description = ""
        • from_port = 0
        • ipv6_cidr_blocks = []
        • prefix_list_ids = []
        • protocol = "-1"
        • security_groups = []
        • self = false
        • to_port = 0
          },
          ]
    • id = (known after apply)
    • ingress = [
      • {
        • cidr_blocks = [
          • "10.200.200.179/32",
            ]
        • description = ""
        • from_port = 5432
        • ipv6_cidr_blocks = []
        • prefix_list_ids = []
        • protocol = "tcp"
        • security_groups = []
        • self = false
        • to_port = 5432
          },
          ]
    • name = "rds-sg-alert-app-alpha-bastion"
    • name_prefix = (known after apply)
    • owner_id = (known after apply)
    • revoke_rules_on_delete = false
    • tags = {
      • "Environment" = "alpha"
      • "Name" = "rds-sg-alert-app-alpha-bastion"
      • "Project" = "alert-app"
      • "Tier" = "network"
        }
    • tags_all = {
      • "Environment" = "alpha"
      • "Name" = "rds-sg-alert-app-alpha-bastion"
      • "Project" = "alert-app"
      • "Tier" = "network"
        }
    • vpc_id = (known after apply)
      }

module.rds_instance.aws_security_group.rds_security_group_nodes will be created

  • resource "aws_security_group" "rds_security_group_nodes" {
    • arn = (known after apply)
    • description = "Allow nodes to connection to rds"
    • egress = [
      • {
        • cidr_blocks = [
          • "0.0.0.0/0",
            ]
        • description = ""
        • from_port = 5432
        • ipv6_cidr_blocks = []
        • prefix_list_ids = []
        • protocol = "tcp"
        • security_groups = []
        • self = false
        • to_port = 5432
          },
          ]
    • id = (known after apply)
    • ingress = [
      • {
        • cidr_blocks = [
          • "10.100.1.0/24",
          • "10.100.2.0/24",
          • "10.100.3.0/24",
            ]
        • description = ""
        • from_port = 5432
        • ipv6_cidr_blocks = []
        • prefix_list_ids = []
        • protocol = "tcp"
        • security_groups = []
        • self = false
        • to_port = 5432
          },
          ]
    • name = "rds-sg-alert-app-alpha-nodes"
    • name_prefix = (known after apply)
    • owner_id = (known after apply)
    • revoke_rules_on_delete = false
    • tags = {
      • "Environment" = "alpha"
      • "Name" = "rds-sg-alert-app-alpha"
      • "Project" = "alert-app"
      • "Tier" = "network"
        }
    • tags_all = {
      • "Environment" = "alpha"
      • "Name" = "rds-sg-alert-app-alpha"
      • "Project" = "alert-app"
      • "Tier" = "network"
        }
    • vpc_id = (known after apply)
      }

module.rds_instance.random_password.rds_password will be created

  • resource "random_password" "rds_password" {
    • id = (known after apply)
    • length = 16
    • lower = true
    • min_lower = 0
    • min_numeric = 0
    • min_special = 0
    • min_upper = 0
    • number = true
    • result = (sensitive value)
    • special = false
    • upper = true
      }

module.rds_instance.random_string.rds_identifier will be created

  • resource "random_string" "rds_identifier" {
    • id = (known after apply)
    • length = 10
    • lower = true
    • min_lower = 0
    • min_numeric = 0
    • min_special = 0
    • min_upper = 0
    • number = false
    • result = (known after apply)
    • special = false
    • upper = true
      }

module.vpc.aws_eip.nat_gw_eip will be created

  • resource "aws_eip" "nat_gw_eip" {
    • allocation_id = (known after apply)
    • association_id = (known after apply)
    • carrier_ip = (known after apply)
    • customer_owned_ip = (known after apply)
    • domain = (known after apply)
    • id = (known after apply)
    • instance = (known after apply)
    • network_border_group = (known after apply)
    • network_interface = (known after apply)
    • private_dns = (known after apply)
    • private_ip = (known after apply)
    • public_dns = (known after apply)
    • public_ip = (known after apply)
    • public_ipv4_pool = (known after apply)
    • tags = {
      • "Name" = "alpha-eip-alert-app"
      • "Project" = "alert-app"
      • "Tier" = "networking"
        }
    • tags_all = {
      • "Name" = "alpha-eip-alert-app"
      • "Project" = "alert-app"
      • "Tier" = "networking"
        }
    • vpc = (known after apply)
      }

module.vpc.aws_internet_gateway.k8s_igw will be created

  • resource "aws_internet_gateway" "k8s_igw" {
    • arn = (known after apply)
    • id = (known after apply)
    • owner_id = (known after apply)
    • tags = {
      • "Name" = "alpha-igw-alert-app"
      • "Project" = "alert-app"
      • "Tier" = "networking"
        }
    • tags_all = {
      • "Name" = "alpha-igw-alert-app"
      • "Project" = "alert-app"
      • "Tier" = "networking"
        }
    • vpc_id = (known after apply)
      }

module.vpc.aws_route_table.public_k8s_route_table will be created

  • resource "aws_route_table" "public_k8s_route_table" {
    • arn = (known after apply)
    • id = (known after apply)
    • owner_id = (known after apply)
    • propagating_vgws = (known after apply)
    • route = [
      • {
        • carrier_gateway_id = ""
        • cidr_block = "0.0.0.0/0"
        • destination_prefix_list_id = ""
        • egress_only_gateway_id = ""
        • gateway_id = (known after apply)
        • instance_id = ""
        • ipv6_cidr_block = ""
        • local_gateway_id = ""
        • nat_gateway_id = ""
        • network_interface_id = ""
        • transit_gateway_id = ""
        • vpc_endpoint_id = ""
        • vpc_peering_connection_id = ""
          },
          ]
    • tags = {
      • "Name" = "alpha-public-route-table"
      • "Project" = "alert-app"
      • "Tier" = "networking"
        }
    • tags_all = {
      • "Name" = "alpha-public-route-table"
      • "Project" = "alert-app"
      • "Tier" = "networking"
        }
    • vpc_id = (known after apply)
      }

module.vpc.aws_route_table_association.public-subnet-1-association will be created

  • resource "aws_route_table_association" "public-subnet-1-association" {
    • id = (known after apply)
    • route_table_id = (known after apply)
    • subnet_id = (known after apply)
      }

module.vpc.aws_route_table_association.public-subnet-2-association will be created

  • resource "aws_route_table_association" "public-subnet-2-association" {
    • id = (known after apply)
    • route_table_id = (known after apply)
    • subnet_id = (known after apply)
      }

module.vpc.aws_route_table_association.public-subnet-3-association will be created

  • resource "aws_route_table_association" "public-subnet-3-association" {
    • id = (known after apply)
    • route_table_id = (known after apply)
    • subnet_id = (known after apply)
      }

module.vpc.aws_subnet.public_subnet_eks_1a will be created

  • resource "aws_subnet" "public_subnet_eks_1a" {
    • arn = (known after apply)
    • assign_ipv6_address_on_creation = false
    • availability_zone = "us-east-1a"
    • availability_zone_id = (known after apply)
    • cidr_block = "10.100.1.0/24"
    • id = (known after apply)
    • ipv6_cidr_block_association_id = (known after apply)
    • map_public_ip_on_launch = true
    • owner_id = (known after apply)
    • tags = {
      • "Name" = "alert-app-public-alpha-subnet-1a"
      • "Project" = "alert-app"
      • "Tier" = "networking"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "shared"
      • "kubernetes.io/role/elb" = "1"
        }
    • tags_all = {
      • "Name" = "alert-app-public-alpha-subnet-1a"
      • "Project" = "alert-app"
      • "Tier" = "networking"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "shared"
      • "kubernetes.io/role/elb" = "1"
        }
    • vpc_id = (known after apply)
      }

module.vpc.aws_subnet.public_subnet_eks_1b will be created

  • resource "aws_subnet" "public_subnet_eks_1b" {
    • arn = (known after apply)
    • assign_ipv6_address_on_creation = false
    • availability_zone = "us-east-1b"
    • availability_zone_id = (known after apply)
    • cidr_block = "10.100.2.0/24"
    • id = (known after apply)
    • ipv6_cidr_block_association_id = (known after apply)
    • map_public_ip_on_launch = true
    • owner_id = (known after apply)
    • tags = {
      • "Name" = "alert-app-public-alpha-subnet-1b"
      • "Project" = "alert-app"
      • "Tier" = "networking"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "shared"
      • "kubernetes.io/role/elb" = "1"
        }
    • tags_all = {
      • "Name" = "alert-app-public-alpha-subnet-1b"
      • "Project" = "alert-app"
      • "Tier" = "networking"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "shared"
      • "kubernetes.io/role/elb" = "1"
        }
    • vpc_id = (known after apply)
      }

module.vpc.aws_subnet.public_subnet_eks_1c will be created

  • resource "aws_subnet" "public_subnet_eks_1c" {
    • arn = (known after apply)
    • assign_ipv6_address_on_creation = false
    • availability_zone = "us-east-1c"
    • availability_zone_id = (known after apply)
    • cidr_block = "10.100.3.0/24"
    • id = (known after apply)
    • ipv6_cidr_block_association_id = (known after apply)
    • map_public_ip_on_launch = true
    • owner_id = (known after apply)
    • tags = {
      • "Name" = "alert-app-public-alpha-subnet-1c"
      • "Project" = "alert-app"
      • "Tier" = "networking"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "shared"
      • "kubernetes.io/role/elb" = "1"
        }
    • tags_all = {
      • "Name" = "alert-app-public-alpha-subnet-1c"
      • "Project" = "alert-app"
      • "Tier" = "networking"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "shared"
      • "kubernetes.io/role/elb" = "1"
        }
    • vpc_id = (known after apply)
      }

module.vpc.aws_vpc.vpc_kubernetes will be created

  • resource "aws_vpc" "vpc_kubernetes" {
    • arn = (known after apply)
    • assign_generated_ipv6_cidr_block = false
    • cidr_block = "10.100.0.0/16"
    • default_network_acl_id = (known after apply)
    • default_route_table_id = (known after apply)
    • default_security_group_id = (known after apply)
    • dhcp_options_id = (known after apply)
    • enable_classiclink = (known after apply)
    • enable_classiclink_dns_support = (known after apply)
    • enable_dns_hostnames = true
    • enable_dns_support = true
    • id = (known after apply)
    • instance_tenancy = "default"
    • ipv6_association_id = (known after apply)
    • ipv6_cidr_block = (known after apply)
    • main_route_table_id = (known after apply)
    • owner_id = (known after apply)
    • tags = {
      • "Name" = "alpha-vpc-kubernetes-alert-app"
      • "Project" = "alert-app"
      • "Tier" = "networking"
        }
    • tags_all = {
      • "Name" = "alpha-vpc-kubernetes-alert-app"
      • "Project" = "alert-app"
      • "Tier" = "networking"
        }
        }

module.k8s_cluster.module.eks.data.http.wait_for_cluster[0] will be read during apply

(config refers to values not yet known)

<= data "http" "wait_for_cluster" {
+ body = (known after apply)
+ ca_certificate = (known after apply)
+ id = (known after apply)
+ response_headers = (known after apply)
+ timeout = 300
+ url = (known after apply)
}

module.k8s_cluster.module.eks.aws_eks_cluster.this[0] will be created

  • resource "aws_eks_cluster" "this" {
    • arn = (known after apply)

    • certificate_authority = (known after apply)

    • created_at = (known after apply)

    • endpoint = (known after apply)

    • id = (known after apply)

    • identity = (known after apply)

    • name = "alert-alpha-eks-cluster"

    • platform_version = (known after apply)

    • role_arn = (known after apply)

    • status = (known after apply)

    • tags = {

      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • tags_all = {

      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • version = "1.20"

    • kubernetes_network_config {

      • service_ipv4_cidr = (known after apply)
        }
    • timeouts {

      • create = "1h"
      • delete = "15m"
        }
    • vpc_config {

      • cluster_security_group_id = (known after apply)
      • endpoint_private_access = false
      • endpoint_public_access = true
      • public_access_cidrs = [
        • "0.0.0.0/0",
          ]
      • security_group_ids = (known after apply)
      • subnet_ids = (known after apply)
      • vpc_id = (known after apply)
        }
        }

module.k8s_cluster.module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0] will be created

  • resource "aws_iam_policy" "cluster_elb_sl_role_creation" {
    • arn = (known after apply)
    • description = "Permissions for EKS to create AWSServiceRoleForElasticLoadBalancing service-linked role"
    • id = (known after apply)
    • name = (known after apply)
    • name_prefix = "alert-alpha-eks-cluster-elb-sl-role-creation"
    • path = "/"
    • policy = jsonencode(
      {
      + Statement = [
      + {
      + Action = [
      + "ec2:DescribeInternetGateways",
      + "ec2:DescribeAddresses",
      + "ec2:DescribeAccountAttributes",
      ]
      + Effect = "Allow"
      + Resource = "*"
      + Sid = ""
      },
      ]
      + Version = "2012-10-17"
      }
      )
    • policy_id = (known after apply)
    • tags = {
      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • tags_all = {
      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
        }

module.k8s_cluster.module.eks.aws_iam_role.cluster[0] will be created

  • resource "aws_iam_role" "cluster" {
    • arn = (known after apply)

    • assume_role_policy = jsonencode(
      {
      + Statement = [
      + {
      + Action = "sts:AssumeRole"
      + Effect = "Allow"
      + Principal = {
      + Service = "eks.amazonaws.com"
      }
      + Sid = "EKSClusterAssumeRole"
      },
      ]
      + Version = "2012-10-17"
      }
      )

    • create_date = (known after apply)

    • force_detach_policies = true

    • id = (known after apply)

    • managed_policy_arns = (known after apply)

    • max_session_duration = 3600

    • name = (known after apply)

    • name_prefix = "alert-alpha-eks-cluster"

    • path = "/"

    • tags = {

      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • tags_all = {

      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • unique_id = (known after apply)

    • inline_policy {

      • name = (known after apply)
      • policy = (known after apply)
        }
        }

module.k8s_cluster.module.eks.aws_iam_role.workers[0] will be created

  • resource "aws_iam_role" "workers" {
    • arn = (known after apply)

    • assume_role_policy = jsonencode(
      {
      + Statement = [
      + {
      + Action = "sts:AssumeRole"
      + Effect = "Allow"
      + Principal = {
      + Service = "ec2.amazonaws.com"
      }
      + Sid = "EKSWorkerAssumeRole"
      },
      ]
      + Version = "2012-10-17"
      }
      )

    • create_date = (known after apply)

    • force_detach_policies = true

    • id = (known after apply)

    • managed_policy_arns = (known after apply)

    • max_session_duration = 3600

    • name = (known after apply)

    • name_prefix = "alert-alpha-eks-cluster"

    • path = "/"

    • tags = {

      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • tags_all = {

      • "Environment" = "alpha"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • unique_id = (known after apply)

    • inline_policy {

      • name = (known after apply)
      • policy = (known after apply)
        }
        }

module.k8s_cluster.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0] will be created

  • resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
    • id = (known after apply)
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
    • role = (known after apply)
      }

module.k8s_cluster.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy[0] will be created

  • resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSServicePolicy" {
    • id = (known after apply)
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
    • role = (known after apply)
      }

module.k8s_cluster.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0] will be created

  • resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSVPCResourceControllerPolicy" {
    • id = (known after apply)
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
    • role = (known after apply)
      }

module.k8s_cluster.module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0] will be created

  • resource "aws_iam_role_policy_attachment" "cluster_elb_sl_role_creation" {
    • id = (known after apply)
    • policy_arn = (known after apply)
    • role = (known after apply)
      }

module.k8s_cluster.module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0] will be created

  • resource "aws_iam_role_policy_attachment" "workers_AmazonEC2ContainerRegistryReadOnly" {
    • id = (known after apply)
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
    • role = (known after apply)
      }

module.k8s_cluster.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0] will be created

  • resource "aws_iam_role_policy_attachment" "workers_AmazonEKSWorkerNodePolicy" {
    • id = (known after apply)
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
    • role = (known after apply)
      }

module.k8s_cluster.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0] will be created

  • resource "aws_iam_role_policy_attachment" "workers_AmazonEKS_CNI_Policy" {
    • id = (known after apply)
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
    • role = (known after apply)
      }

module.k8s_cluster.module.eks.aws_security_group.cluster[0] will be created

  • resource "aws_security_group" "cluster" {
    • arn = (known after apply)
    • description = "EKS cluster security group."
    • egress = (known after apply)
    • id = (known after apply)
    • ingress = (known after apply)
    • name = (known after apply)
    • name_prefix = "alert-alpha-eks-cluster"
    • owner_id = (known after apply)
    • revoke_rules_on_delete = false
    • tags = {
      • "Environment" = "alpha"
      • "Name" = "alert-alpha-eks-cluster-eks_cluster_sg"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • tags_all = {
      • "Environment" = "alpha"
      • "Name" = "alert-alpha-eks-cluster-eks_cluster_sg"
      • "Project" = "alert-app"
      • "Tier" = "application"
        }
    • vpc_id = (known after apply)
      }

module.k8s_cluster.module.eks.aws_security_group.workers[0] will be created

  • resource "aws_security_group" "workers" {
    • arn = (known after apply)
    • description = "Security group for all nodes in the cluster."
    • egress = (known after apply)
    • id = (known after apply)
    • ingress = (known after apply)
    • name = (known after apply)
    • name_prefix = "alert-alpha-eks-cluster"
    • owner_id = (known after apply)
    • revoke_rules_on_delete = false
    • tags = {
      • "Environment" = "alpha"
      • "Name" = "alert-alpha-eks-cluster-eks_worker_sg"
      • "Project" = "alert-app"
      • "Tier" = "application"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
        }
    • tags_all = {
      • "Environment" = "alpha"
      • "Name" = "alert-alpha-eks-cluster-eks_worker_sg"
      • "Project" = "alert-app"
      • "Tier" = "application"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
        }
    • vpc_id = (known after apply)
      }

module.k8s_cluster.module.eks.aws_security_group_rule.cluster_egress_internet[0] will be created

  • resource "aws_security_group_rule" "cluster_egress_internet" {
    • cidr_blocks = [
      • "0.0.0.0/0",
        ]
    • description = "Allow cluster egress access to the Internet."
    • from_port = 0
    • id = (known after apply)
    • protocol = "-1"
    • security_group_id = (known after apply)
    • self = false
    • source_security_group_id = (known after apply)
    • to_port = 0
    • type = "egress"
      }

module.k8s_cluster.module.eks.aws_security_group_rule.cluster_https_worker_ingress[0] will be created

  • resource "aws_security_group_rule" "cluster_https_worker_ingress" {
    • description = "Allow pods to communicate with the EKS cluster API."
    • from_port = 443
    • id = (known after apply)
    • protocol = "tcp"
    • security_group_id = (known after apply)
    • self = false
    • source_security_group_id = (known after apply)
    • to_port = 443
    • type = "ingress"
      }

module.k8s_cluster.module.eks.aws_security_group_rule.workers_egress_internet[0] will be created

  • resource "aws_security_group_rule" "workers_egress_internet" {
    • cidr_blocks = [
      • "0.0.0.0/0",
        ]
    • description = "Allow nodes all egress to the Internet."
    • from_port = 0
    • id = (known after apply)
    • protocol = "-1"
    • security_group_id = (known after apply)
    • self = false
    • source_security_group_id = (known after apply)
    • to_port = 0
    • type = "egress"
      }

module.k8s_cluster.module.eks.aws_security_group_rule.workers_ingress_cluster[0] will be created

  • resource "aws_security_group_rule" "workers_ingress_cluster" {
    • description = "Allow workers pods to receive communication from the cluster control plane."
    • from_port = 1025
    • id = (known after apply)
    • protocol = "tcp"
    • security_group_id = (known after apply)
    • self = false
    • source_security_group_id = (known after apply)
    • to_port = 65535
    • type = "ingress"
      }

module.k8s_cluster.module.eks.aws_security_group_rule.workers_ingress_cluster_https[0] will be created

  • resource "aws_security_group_rule" "workers_ingress_cluster_https" {
    • description = "Allow pods running extension API servers on port 443 to receive communication from cluster control plane."
    • from_port = 443
    • id = (known after apply)
    • protocol = "tcp"
    • security_group_id = (known after apply)
    • self = false
    • source_security_group_id = (known after apply)
    • to_port = 443
    • type = "ingress"
      }

module.k8s_cluster.module.eks.aws_security_group_rule.workers_ingress_self[0] will be created

  • resource "aws_security_group_rule" "workers_ingress_self" {
    • description = "Allow node to communicate with each other."
    • from_port = 0
    • id = (known after apply)
    • protocol = "-1"
    • security_group_id = (known after apply)
    • self = false
    • source_security_group_id = (known after apply)
    • to_port = 65535
    • type = "ingress"
      }

module.k8s_cluster.module.eks.kubernetes_config_map.aws_auth[0] will be created

  • resource "kubernetes_config_map" "aws_auth" {
    • data = (known after apply)

    • id = (known after apply)

    • metadata {

      • generation = (known after apply)
      • labels = {
        • "app.kubernetes.io/managed-by" = "Terraform"
        • "terraform.io/module" = "terraform-aws-modules.eks.aws"
          }
      • name = "aws-auth"
      • namespace = "kube-system"
      • resource_version = (known after apply)
      • uid = (known after apply)
        }
        }

module.k8s_cluster.module.eks.local_file.kubeconfig[0] will be created

  • resource "local_file" "kubeconfig" {
    • content = (known after apply)
    • directory_permission = "0755"
    • file_permission = "0600"
    • filename = "./kubeconfig_alert-alpha-eks-cluster"
    • id = (known after apply)
      }

module.k8s_cluster.module.eks.module.node_groups.aws_eks_node_group.workers["node_0"] will be created

  • resource "aws_eks_node_group" "workers" {
    • ami_type = (known after apply)

    • arn = (known after apply)

    • capacity_type = (known after apply)

    • cluster_name = "alert-alpha-eks-cluster"

    • disk_size = (known after apply)

    • id = (known after apply)

    • instance_types = [

      • "t3.small",
        ]
    • node_group_name = "alert-app-alpha-alert-1a"

    • node_group_name_prefix = (known after apply)

    • node_role_arn = (known after apply)

    • release_version = (known after apply)

    • resources = (known after apply)

    • status = (known after apply)

    • subnet_ids = (known after apply)

    • tags = {

      • "Environment" = "alpha"
      • "Name" = "alert-app-alpha-0"
      • "Project" = "alert-app"
      • "Tier" = "application"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
        }
    • tags_all = {

      • "Environment" = "alpha"
      • "Name" = "alert-app-alpha-0"
      • "Project" = "alert-app"
      • "Tier" = "application"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
        }
    • version = (known after apply)

    • launch_template {

      • id = (known after apply)
      • name = (known after apply)
      • version = (known after apply)
        }
    • launch_template {

      • id = (known after apply)
      • name = (known after apply)
      • version = (known after apply)
        }
    • remote_access {

      • ec2_ssh_key = (known after apply)
      • source_security_group_ids = (known after apply)
        }
    • scaling_config {

      • desired_size = 1
      • max_size = 3
      • min_size = 1
        }
        }

module.k8s_cluster.module.eks.module.node_groups.aws_eks_node_group.workers["node_1"] will be created

  • resource "aws_eks_node_group" "workers" {
    • ami_type = (known after apply)

    • arn = (known after apply)

    • capacity_type = (known after apply)

    • cluster_name = "alert-alpha-eks-cluster"

    • disk_size = (known after apply)

    • id = (known after apply)

    • instance_types = (known after apply)

    • node_group_name = "alert-app-alpha-alert-1b"

    • node_group_name_prefix = (known after apply)

    • node_role_arn = (known after apply)

    • release_version = (known after apply)

    • resources = (known after apply)

    • status = (known after apply)

    • subnet_ids = (known after apply)

    • tags = {

      • "Environment" = "alpha"
      • "Name" = "alert-app-alpha-1"
      • "Project" = "alert-app"
      • "Tier" = "application"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
        }
    • tags_all = {

      • "Environment" = "alpha"
      • "Name" = "alert-app-alpha-1"
      • "Project" = "alert-app"
      • "Tier" = "application"
      • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
        }
    • version = (known after apply)

    • launch_template {

      • id = (known after apply)
      • name = (known after apply)
      • version = (known after apply)
        }
    • launch_template {

      • id = (known after apply)
      • name = (known after apply)
      • version = (known after apply)
        }
    • remote_access {

      • ec2_ssh_key = (known after apply)
      • source_security_group_ids = (known after apply)
        }
    • scaling_config {

      • desired_size = 1
      • max_size = 3
      • min_size = 1
        }
        }

Plan: 41 to add, 0 to change, 0 to destroy.

Changes to Outputs:

  • bastion_private_ip = "10.200.200.179"
  • eks_cluster_endpoint = (known after apply)
  • eks_config_map_aws_auth = [
    • {
      • binary_data = null
      • data = (known after apply)
      • id = (known after apply)
      • metadata = [
        • {
          • annotations = null
          • generate_name = null
          • generation = (known after apply)
          • labels = {
            • "app.kubernetes.io/managed-by" = "Terraform"
            • "terraform.io/module" = "terraform-aws-modules.eks.aws"
              }
          • name = "aws-auth"
          • namespace = "kube-system"
          • resource_version = (known after apply)
          • uid = (known after apply)
            },
            ]
            },
            ]
  • eks_kube_config = (known after apply)
  • eks_node_group = {
    • node_0 = {
      • ami_type = (known after apply)
      • arn = (known after apply)
      • capacity_type = (known after apply)
      • cluster_name = "alert-alpha-eks-cluster"
      • disk_size = (known after apply)
      • force_update_version = null
      • id = (known after apply)
      • instance_types = [
        • "t3.small",
          ]
      • labels = null
      • launch_template = [
        • {
          • id = (known after apply)
          • name = (known after apply)
          • version = (known after apply)
            },
        • {
          • id = (known after apply)
          • name = (known after apply)
          • version = (known after apply)
            },
            ]
      • node_group_name = "alert-app-alpha-alert-1a"
      • node_group_name_prefix = (known after apply)
      • node_role_arn = (known after apply)
      • release_version = (known after apply)
      • remote_access = [
        • {
          • ec2_ssh_key = (known after apply)
          • source_security_group_ids = (known after apply)
            },
            ]
      • resources = (known after apply)
      • scaling_config = [
        • {
          • desired_size = 1
          • max_size = 3
          • min_size = 1
            },
            ]
      • status = (known after apply)
      • subnet_ids = (known after apply)
      • tags = {
        • "Environment" = "alpha"
        • "Name" = "alert-app-alpha-0"
        • "Project" = "alert-app"
        • "Tier" = "application"
        • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
          }
      • tags_all = {
        • "Environment" = "alpha"
        • "Name" = "alert-app-alpha-0"
        • "Project" = "alert-app"
        • "Tier" = "application"
        • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
          }
      • taint = []
      • timeouts = null
      • version = (known after apply)
        }
    • node_1 = {
      • ami_type = (known after apply)
      • arn = (known after apply)
      • capacity_type = (known after apply)
      • cluster_name = "alert-alpha-eks-cluster"
      • disk_size = (known after apply)
      • force_update_version = null
      • id = (known after apply)
      • instance_types = (known after apply)
      • labels = null
      • launch_template = [
        • {
          • id = (known after apply)
          • name = (known after apply)
          • version = (known after apply)
            },
        • {
          • id = (known after apply)
          • name = (known after apply)
          • version = (known after apply)
            },
            ]
      • node_group_name = "alert-app-alpha-alert-1b"
      • node_group_name_prefix = (known after apply)
      • node_role_arn = (known after apply)
      • release_version = (known after apply)
      • remote_access = [
        • {
          • ec2_ssh_key = (known after apply)
          • source_security_group_ids = (known after apply)
            },
            ]
      • resources = (known after apply)
      • scaling_config = [
        • {
          • desired_size = 1
          • max_size = 3
          • min_size = 1
            },
            ]
      • status = (known after apply)
      • subnet_ids = (known after apply)
      • tags = {
        • "Environment" = "alpha"
        • "Name" = "alert-app-alpha-1"
        • "Project" = "alert-app"
        • "Tier" = "application"
        • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
          }
      • tags_all = {
        • "Environment" = "alpha"
        • "Name" = "alert-app-alpha-1"
        • "Project" = "alert-app"
        • "Tier" = "application"
        • "kubernetes.io/cluster/alert-alpha-eks-cluster" = "owned"
          }
      • taint = []
      • timeouts = null
      • version = (known after apply)
        }
        }
  • nat_gw_eip_public_ip = (known after apply)
  • rds_arn = (known after apply)
  • rds_enpoint = (known after apply)
  • rds_hosted_zone_id = (known after apply)
  • rds_id = (known after apply)
  • rds_name = "alert_alpha"
  • rds_password = (sensitive value)
  • rds_resource_id = (known after apply)
  • rds_username = "alert_alpha"
  • vpc_all_subnets = [
    • (known after apply),
    • (known after apply),
    • (known after apply),
      ]
  • vpc_id = (known after apply)

```

daroga0002 (Contributor) commented:

Can you format this properly? As posted it is not readable at all.

IgorKurylo (Author) commented:

I now get another error, which I think is related to this:
NodeCreationFailure: Instances failed to join the kubernetes cluster
Should I maybe try changing the instance type?
Also, how do you want me to format it? This is the output from terraform plan.

daroga0002 (Contributor) commented:

Format it using Markdown so that instead of randomly styled output you get something like this:

nat_gw_eip_public_ip = (known after apply)
rds_arn = (known after apply)
rds_enpoint = (known after apply)
rds_hosted_zone_id = (known after apply)
rds_id = (known after apply)
rds_name = "alert_alpha"
rds_password = (sensitive value)
rds_resource_id = (known after apply)

This can be done by opening and closing the plan output with ```.


stale bot commented Oct 1, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Oct 1, 2021

stale bot commented Oct 9, 2021

This issue has been automatically closed because it has not had recent activity since being marked as stale.

stale bot closed this as completed Oct 9, 2021

github-actions bot commented:

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 17, 2022