Commit e9d9476

chore: update documentation, incorporate feedback suggestions

1 parent 84a6cf8 commit e9d9476

File tree

13 files changed: +123 -99 lines changed


.github/images/security_groups.svg

+1

README.md

+50
@@ -168,6 +168,55 @@ module "eks" {
}
```

## Module Design Considerations

### General Notes

While the module is designed to be flexible and support as many use cases and configurations as possible, there is a limit to what first class support can be provided without over-complicating the module. Below is a list of general notes on the design intent captured by this module, which hopefully explains some of the decisions that are, or will be, made in terms of what is added/supported natively by the module:

- Despite the addition of the Windows Subsystem for Linux (WSL), containerization technology is very much a suite of Linux constructs, and therefore Linux is the primary OS supported by this module. In addition, due to the first class support provided by AWS, Bottlerocket OS and Fargate Profiles are also natively supported by this module. This module does not attempt to prevent the use of Windows based nodes; however, users will need to put in additional effort to operate Windows based nodes when using the module. Users can refer to the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html) for further details. What does this mean:
  - AWS EKS Managed Node Groups default to `linux` as the `platform`, but `bottlerocket` is also supported by AWS (`windows` is not supported by AWS EKS Managed Node Groups)
  - AWS Self Managed Node Groups also default to `linux`, and the default AMI used is the latest AMI for the selected Kubernetes version. If you wish to use a different OS or AMI, you will need to opt in to the necessary configurations to ensure the correct AMI is used in conjunction with the necessary user data to ensure the nodes are launched and joined to your cluster successfully.
- AWS EKS Managed Node Groups are currently the preferred route over Self Managed Node Groups for compute nodes. Both operate very similarly - both are backed by autoscaling groups and launch templates deployed and visible within your account. However, AWS EKS Managed Node Groups provide a better, more "managed service" user experience and therefore take precedence over Self Managed Node Groups. That said, there are currently inherent limitations as AWS continues to roll out additional feature support similar to the level of customization you can achieve with Self Managed Node Groups. When requesting added feature support for AWS EKS Managed Node Groups, please ensure you have verified that the feature(s) are 1) supported by AWS and 2) supported by the Terraform AWS provider before submitting a feature request.
- Due to the plethora of tooling and the different ways of configuring a cluster, cluster configuration is intentionally left out of the module in order to keep it simple for a broader user base. Previous module versions provided support for managing the aws-auth configmap via the Kubernetes Terraform provider using the now deprecated aws-iam-authenticator; this is no longer included in the module. This module strictly focuses on the infrastructure resources to provision an EKS cluster as well as any supporting AWS resources - how the internals of the cluster are configured and managed is up to users and is outside the scope of this module. An output attribute, `aws_auth_configmap_yaml`, is provided to help bridge this transition. Please see the various examples provided, where this attribute is used to ensure that self managed node groups or external node groups have their IAM roles appropriately mapped to the aws-auth configmap (a minimal sketch follows this list). How users elect to manage the aws-auth configmap is left to their choosing.
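
As a minimal sketch of that last point (this is not the module's prescribed workflow; the `local_file` resource and file name are illustrative assumptions), the output can simply be rendered to disk and applied out-of-band:

```hcl
# Write the module's aws-auth output to a file so it can be applied separately,
# e.g. `kubectl apply -f aws-auth.yaml`
resource "local_file" "aws_auth" {
  content  = module.eks.aws_auth_configmap_yaml
  filename = "${path.module}/aws-auth.yaml"
}
```
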
### User Data & Bootstrapping

There are a multitude of possible configurations for how module users require their user data to be configured. In order to better support the various combinations - from the simple, out of the box support provided by the module to full customization of the user data using a template provided by users - the user data has been abstracted out to its own module. Users can see the various methods of using and providing user data through the [user data examples](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data), as well as more detailed information on the design and possible configurations via the [user data module itself](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules/_user_data).

In general (tl;dr):
- AWS EKS Managed Node Groups
  - `linux` platform (default) -> user data is prepended to the AWS provided bootstrap user data (bash/shell script) when using the AWS EKS provided AMI; otherwise users need to opt in via `enable_bootstrap_user_data` and use the module provided user data template, or provide their own user data template, to bootstrap nodes to join the cluster
  - `bottlerocket` platform -> user data is merged with the AWS provided bootstrap user data (TOML file) when using the AWS EKS provided AMI; otherwise users need to opt in via `enable_bootstrap_user_data` and use the module provided user data template, or provide their own user data template, to bootstrap nodes to join the cluster
- Self Managed Node Groups
  - `linux` platform (default) -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template
  - `bottlerocket` platform -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template
  - `windows` platform -> the user data template (PowerShell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template

Module provided default templates can be found under the [templates directory](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/templates).
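
For example, a rough sketch of the `bottlerocket` opt-in path described above (the node group name, AMI ID, and TOML settings are illustrative only; attribute names follow the examples in this repository):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "example"
  cluster_version = "1.21"
  # VPC/subnet configuration omitted for brevity

  eks_managed_node_groups = {
    bottlerocket_custom = {
      # custom AMI, so opt in to the module provided user data template
      ami_id   = "ami-0ff61e0bcfc81dc94"
      platform = "bottlerocket"

      create_launch_template     = true
      enable_bootstrap_user_data = true

      # merged into the AWS provided bootstrap user data (TOML)
      bootstrap_extra_args = <<-EOT
        [settings.kernel]
        lockdown = "integrity"
      EOT
    }
  }
}
```
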
### Security Groups

- Cluster Security Group
  - This module by default creates a cluster security group ("additional" security group when viewed from the console) in addition to the default security group created by the AWS EKS service. This "additional" security group allows users to customize inbound and outbound rules via the module as they see fit
  - The default inbound/outbound rules provided by the module are derived from the [AWS minimum recommendations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in addition to NTP and HTTPS public internet egress rules (without these, the traffic shows up in VPC flow logs as rejects - the rules are used for clock sync and for downloading necessary packages/updates)
  - The minimum inbound/outbound rules are provided for cluster and node creation to succeed without errors, but users will most likely need to add the necessary port and protocol for node-to-node communication (this is user specific, based on how nodes are configured to communicate across the cluster)
  - Users have the ability to opt out of the security group creation and instead provide their own externally created security group, if so desired
  - The security group that is created is designed to handle the bare minimum communication necessary between the control plane and the nodes, as well as any external egress to allow the cluster to launch successfully without error
  - Users also have the option to supply additional, externally created security groups to the cluster via the `cluster_additional_security_group_ids` variable

- Node Group Security Group(s)
  - Each node group (EKS Managed Node Group and Self Managed Node Group) by default creates its own security group. By default, this security group does not contain any additional security group rules. It is merely an "empty container" that offers users the ability to opt in to any additional inbound or outbound rules as necessary
  - Users also have the option to supply their own, and/or additional, externally created security group(s) to the node group via the `vpc_security_group_ids` variable (a minimal sketch follows this list)
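
As a minimal sketch of these opt-in points (the security group rule shown is an illustrative placeholder for whatever node-to-node communication your workloads actually require, and `module.vpc` is assumed to exist elsewhere in the configuration):

```hcl
resource "aws_security_group" "additional" {
  name_prefix = "eks-additional-"
  vpc_id      = module.vpc.vpc_id # assumes a VPC module elsewhere

  # placeholder rule - allow all traffic between instances carrying this group
  ingress {
    description = "node-to-node communication"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    self        = true
  }
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster configuration ...

  # attached to the cluster control plane ("additional" cluster security groups)
  cluster_additional_security_group_ids = [aws_security_group.additional.id]

  self_managed_node_groups = {
    default = {
      # attached to the node group instances
      vpc_security_group_ids = [aws_security_group.additional.id]
    }
  }
}
```
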
The security groups created by this module are depicted in the image shown below along with their default inbound/outbound rules:

<p align="center">
  <img src="https://raw.githubusercontent.com/terraform-aws-modules/terraform-aws-eks/master/.github/images/security_groups.svg" alt="Security Groups" width="100%">
  <!-- TODO - Delete this line below before merging -->
  <img src=".github/images/security_groups.svg" alt="Security Groups" width="100%">
</p>
## Notes

- Setting `instance_refresh_enabled = true` will recreate your worker nodes without draining them first. It is recommended to install [aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler) for proper node draining. See the [instance_refresh](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/instance_refresh) example provided.
@@ -305,6 +354,7 @@ Full contributing [guidelines are covered here](https://github.com/terraform-aws
|------|-------------|------|---------|:--------:|
| <a name="input_cloudwatch_log_group_kms_key_id"></a> [cloudwatch\_log\_group\_kms\_key\_id](#input\_cloudwatch\_log\_group\_kms\_key\_id) | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | `string` | `null` | no |
| <a name="input_cloudwatch_log_group_retention_in_days"></a> [cloudwatch\_log\_group\_retention\_in\_days](#input\_cloudwatch\_log\_group\_retention\_in\_days) | Number of days to retain log events. Default retention - 90 days | `number` | `90` | no |
+ | <a name="input_cluster_additional_security_group_ids"></a> [cluster\_additional\_security\_group\_ids](#input\_cluster\_additional\_security\_group\_ids) | List of additional, externally created security group IDs to attach to the cluster control plane | `list(string)` | `[]` | no |
| <a name="input_cluster_additional_security_group_rules"></a> [cluster\_additional\_security\_group\_rules](#input\_cluster\_additional\_security\_group\_rules) | List of additional security group rules to add to the cluster security group created | `map(any)` | `{}` | no |
| <a name="input_cluster_addons"></a> [cluster\_addons](#input\_cluster\_addons) | Map of cluster addon configurations to enable for the cluster. Addon name can be the map keys or set with `name` | `any` | `{}` | no |
| <a name="input_cluster_enabled_log_types"></a> [cluster\_enabled\_log\_types](#input\_cluster\_enabled\_log\_types) | A list of the desired control plane logs to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | `list(string)` | <pre>[<br> "audit",<br> "api",<br> "authenticator"<br>]</pre> | no |

examples/complete/main.tf

+18-16
@@ -48,9 +48,9 @@ module "eks" {

  # Self Managed Node Group(s)
  self_managed_node_group_defaults = {
-   launch_template_default_version = true
-   vpc_security_group_ids = [aws_security_group.additional.id]
-   iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]
+   update_launch_template_default_version = true
+   vpc_security_group_ids = [aws_security_group.additional.id]
+   iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]
  }

  self_managed_node_groups = {
@@ -120,17 +120,19 @@ module "eks" {
    GithubRepo = "terraform-aws-eks"
    GithubOrg = "terraform-aws-modules"
  }
+
  taints = {
    dedicated = {
      key = "dedicated"
      value = "gpuGroup"
      effect = "NO_SCHEDULE"
    }
  }
- # TODO - this is throwing an error
- # update_config = {
- #   max_unavailable_percentage = 50 # or set `max_unavailable`
- # }
+
+ update_config = {
+   max_unavailable_percentage = 50 # or set `max_unavailable`
+ }
+
  tags = {
    ExtraTag = "example"
  }
@@ -200,10 +202,10 @@ module "self_managed_node_group" {
    module.eks.cluster_security_group_id,
  ]

- create_launch_template = true
- launch_template_name = "separate-self-mng"
- launch_template_default_version = true
- instance_type = "m5.large"
+ create_launch_template = true
+ launch_template_name = "separate-self-mng"
+ update_launch_template_default_version = true
+ instance_type = "m5.large"

  tags = merge(local.tags, { Separate = "self-managed-node-group" })
}
@@ -266,23 +268,23 @@ locals {
  kind = "Config"
  current-context = "terraform"
  clusters = [{
-   name = "${module.eks.cluster_id}"
+   name = module.eks.cluster_id
    cluster = {
-     certificate-authority-data = "${module.eks.cluster_certificate_authority_data}"
-     server = "${module.eks.cluster_endpoint}"
+     certificate-authority-data = module.eks.cluster_certificate_authority_data
+     server = module.eks.cluster_endpoint
    }
  }]
  contexts = [{
    name = "terraform"
    context = {
-     cluster = "${module.eks.cluster_id}"
+     cluster = module.eks.cluster_id
      user = "terraform"
    }
  }]
  users = [{
    name = "terraform"
    user = {
-     token = "${data.aws_eks_cluster_auth.this.token}"
+     token = data.aws_eks_cluster_auth.this.token
    }
  }]
})

examples/eks_managed_node_group/main.tf

+19-20
@@ -69,9 +69,9 @@ module "eks" {
  ami_type = "BOTTLEROCKET_x86_64"
  platform = "bottlerocket"

- create_launch_template = true
- launch_template_name = "bottlerocket-custom"
- launch_template_default_version = true
+ create_launch_template = true
+ launch_template_name = "bottlerocket-custom"
+ update_launch_template_default_version = true

  # this will get added to what AWS provides
  bootstrap_extra_args = <<-EOT
@@ -87,9 +87,9 @@ module "eks" {
  ami_id = "ami-0ff61e0bcfc81dc94"
  platform = "bottlerocket"

- create_launch_template = true
- launch_template_name = "bottlerocket-custom"
- launch_template_default_version = true
+ create_launch_template = true
+ launch_template_name = "bottlerocket-custom"
+ update_launch_template_default_version = true

  # use module user data template to boostrap
  enable_bootstrap_user_data = true
@@ -171,16 +171,15 @@ module "eks" {
  }
]

- # TODO - this is throwing an error
- # update_config = {
- #   max_unavailable_percentage = 50 # or set `max_unavailable`
- # }
+ update_config = {
+   max_unavailable_percentage = 50 # or set `max_unavailable`
+ }

- create_launch_template = true
- launch_template_name = "eks-managed-ex"
- launch_template_use_name_prefix = true
- description = "EKS managed node group example launch template"
- launch_template_default_version = true
+ create_launch_template = true
+ launch_template_name = "eks-managed-ex"
+ launch_template_use_name_prefix = true
+ description = "EKS managed node group example launch template"
+ update_launch_template_default_version = true

  ebs_optimized = true
  vpc_security_group_ids = [aws_security_group.additional.id]
@@ -270,23 +269,23 @@ locals {
  kind = "Config"
  current-context = "terraform"
  clusters = [{
-   name = "${module.eks.cluster_id}"
+   name = module.eks.cluster_id
    cluster = {
-     certificate-authority-data = "${module.eks.cluster_certificate_authority_data}"
-     server = "${module.eks.cluster_endpoint}"
+     certificate-authority-data = module.eks.cluster_certificate_authority_data
+     server = module.eks.cluster_endpoint
    }
  }]
  contexts = [{
    name = "terraform"
    context = {
-     cluster = "${module.eks.cluster_id}"
+     cluster = module.eks.cluster_id
      user = "terraform"
    }
  }]
  users = [{
    name = "terraform"
    user = {
-     token = "${data.aws_eks_cluster_auth.this.token}"
+     token = data.aws_eks_cluster_auth.this.token
    }
  }]
})

examples/irsa_autoscale_refresh/charts.tf

+2-2
@@ -52,7 +52,7 @@ resource "helm_release" "cluster_autoscaler" {
  }

  depends_on = [
-   module.eks
+   module.eks.cluster_id
  ]
}

@@ -166,7 +166,7 @@ resource "helm_release" "aws_node_termination_handler" {
  }

  depends_on = [
-   module.eks
+   module.eks.cluster_id
  ]
}

examples/irsa_autoscale_refresh/main.tf

+10-12
@@ -43,10 +43,10 @@ module "eks" {
  max_size = 5
  desired_size = 1

- instance_types = ["m5.large", "m5n.large", "m5zn.large", "m6i.large", ]
- create_launch_template = true
- launch_template_name = "refresh"
- launch_template_default_version = true
+ instance_type = "m5.large"
+ create_launch_template = true
+ launch_template_name = "refresh"
+ update_launch_template_default_version = true

  instance_refresh = {
    strategy = "Rolling"
@@ -86,23 +86,23 @@ locals {
  kind = "Config"
  current-context = "terraform"
  clusters = [{
-   name = "${module.eks.cluster_id}"
+   name = module.eks.cluster_id
    cluster = {
-     certificate-authority-data = "${module.eks.cluster_certificate_authority_data}"
-     server = "${module.eks.cluster_endpoint}"
+     certificate-authority-data = module.eks.cluster_certificate_authority_data
+     server = module.eks.cluster_endpoint
    }
  }]
  contexts = [{
    name = "terraform"
    context = {
-     cluster = "${module.eks.cluster_id}"
+     cluster = module.eks.cluster_id
      user = "terraform"
    }
  }]
  users = [{
    name = "terraform"
    user = {
-     token = "${data.aws_eks_cluster_auth.this.token}"
+     token = data.aws_eks_cluster_auth.this.token
    }
  }]
})
@@ -159,7 +159,5 @@ module "vpc" {
    "kubernetes.io/role/internal-elb" = 1
  }

- tags = merge(local.tags,
-   { "kubernetes.io/cluster/${local.name}" = "shared" }
- )
+ tags = local.tags
}

examples/self_managed_node_group/main.tf

+8-9
@@ -117,10 +117,9 @@ module "eks" {
    GithubOrg = "terraform-aws-modules"
  }

- # TODO - this is throwing an error
- # update_config = {
- #   max_unavailable_percentage = 50 # or set `max_unavailable`
- # }
+ update_config = {
+   max_unavailable_percentage = 50 # or set `max_unavailable`
+ }

  create_launch_template = true
  launch_template_name = "self-managed-ex"
@@ -222,23 +221,23 @@ locals {
  kind = "Config"
  current-context = "terraform"
  clusters = [{
-   name = "${module.eks.cluster_id}"
+   name = module.eks.cluster_id
    cluster = {
-     certificate-authority-data = "${module.eks.cluster_certificate_authority_data}"
-     server = "${module.eks.cluster_endpoint}"
+     certificate-authority-data = module.eks.cluster_certificate_authority_data
+     server = module.eks.cluster_endpoint
    }
  }]
  contexts = [{
    name = "terraform"
    context = {
-     cluster = "${module.eks.cluster_id}"
+     cluster = module.eks.cluster_id
      user = "terraform"
    }
  }]
  users = [{
    name = "terraform"
    user = {
-     token = "${data.aws_eks_cluster_auth.this.token}"
+     token = data.aws_eks_cluster_auth.this.token
    }
  }]
})
