
"root_volume_size" has no effect on self-managed Windows node groups #1195

Closed
1 of 4 tasks
yafanasiev opened this issue Jan 25, 2021 · 3 comments · Fixed by #1401

Comments

@yafanasiev

I have issues

The root EBS volume of self-managed Windows node groups does not change when setting "root_volume_size".

I'm submitting a...

  • bug report
  • feature request
  • support request - read the FAQ first!
  • kudos, thank you, warm fuzzy

What is the current behavior?

When "root_volume_size" is added to the node group configuration, a new EBS volume is attached to the EC2 instance alongside the root volume.

If this is a bug, how to reproduce? Please include a code sample if relevant.

{
      name                          = "windows-nodes"
      platform                      = "windows"
      asg_max_size                  = "4"
      asg_desired_capacity          = "4"
      asg_min_size                  = "2"
      subnets                       = REDACTED
      additional_security_group_ids = [REDACTED]
      key_name                      = "REDACTED"
      instance_type                 = "t3a.large"
      cpu_credits                   = "unlimited"
      protect_from_scale_in         = true
      kubelet_extra_args            = "--register-with-taints=\"os=windows:NoSchedule\""
      root_volume_size              = "150"
}

What's the expected behavior?

Setting this parameter should resize the root volume rather than attach an additional one.

Environment details

  • Affected module version: 12.0.0
  • OS: macOS Big Sur
  • Terraform version: v0.14.4

Any other relevant info

The EBS volume is mounted under /dev/xvda, as in the Linux EKS AMIs, instead of /dev/sda1 as it should be. Changing the configuration to

      ...
      root_volume_size              = "150"
      root_block_device_name        = "/dev/sda1"

fixes the issue. I believe the problem is in

local.workers_group_defaults["root_block_device_name"]

which always takes the root device name from the Linux AMI, as specified in:

root_block_device_name = data.aws_ami.eks_worker.root_device_name # Root device name for workers. If non is provided, will assume default AMI was used.
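A minimal sketch of the direction a fix could take, assuming the module adds a separate lookup for the Windows EKS-optimized AMI (the data source name and the AMI name filter below are illustrative assumptions, not the module's actual identifiers): the root device name would be resolved per platform instead of always coming from the Linux AMI.

# Illustrative only: look up the Windows EKS-optimized worker AMI so its
# root device name (/dev/sda1) can be used as the default for Windows
# worker groups instead of the Linux AMI's /dev/xvda.
data "aws_ami" "eks_worker_windows" {
  filter {
    name   = "name"
    values = ["Windows_Server-2019-English-Core-EKS_Optimized-*"]
  }

  most_recent = true
  owners      = ["amazon"]
}

# Hypothetical per-platform default (sketch, not the module's code):
# root_block_device_name = platform == "windows" ? data.aws_ami.eks_worker_windows.root_device_name : data.aws_ami.eks_worker.root_device_name

Until something like this lands, the explicit root_block_device_name = "/dev/sda1" override shown above works around the problem.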

@stale

stale bot commented Apr 25, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@barryib
Member

barryib commented May 28, 2021

@yafanasiev does #1401 solve your issue?

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 21, 2022