
feat: Added ability to pass different subnets for fargate and the cluster #1527

Merged
merged 10 commits into terraform-aws-modules:master on Sep 6, 2021

Conversation

DayneD89
Contributor

PR o'clock

Description

Problem

Fargate profiles fail to launch in public subnets, but if external load balancers are used the cluster may need public subnets attached.
Currently the module takes a single input, var.subnets, to populate both. I need the ability to attach public subnets to the cluster while only private subnets are used for the Fargate profiles.

Changes

Added var.fargate_subnets as a list(string) variable that defaults to []. Updated the README to reflect the change. If it is not set, the module behaves as it does today, so this is not a breaking change. If it is set, it is used for the Fargate profiles instead of var.subnets.
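
For illustration, a minimal sketch of the intended use case (the module.vpc name and subnet outputs are assumptions borrowed from the example diff later in this thread, not part of the module itself):

```hcl
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "example"
  cluster_version = "1.20"
  vpc_id          = module.vpc.vpc_id

  # Both public and private subnets are attached to the cluster so that
  # internet-facing load balancers can be placed in the public ones.
  subnets = concat(module.vpc.private_subnets, module.vpc.public_subnets)

  # Fargate profiles launch only into the private subnets.
  fargate_subnets = module.vpc.private_subnets
}
```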

@DayneD89 DayneD89 changed the title Added ability to pass different subnets for fargate and the cluster feat: Added ability to pass different subnets for fargate and the cluster Aug 11, 2021
@DayneD89
Contributor Author

I can add this option for node_groups as well if needed (or turn fargate_subnets into worker_subnets and use that for either) if people want, but currently there is no way to use this module with Fargate profiles and an external load balancer, which is why I focused on that case.

@DayneD89
Contributor Author

Any comments on this? This is currently preventing me from using this module, as I have a cluster that runs entirely on Fargate but still requires some internet-facing services. That is impossible while the subnet variable is used for both the Fargate profiles and the cluster subnets. I'm happy to make changes to get this feature in, but at the moment it is a show stopper for this use case.

@antonbabenko
Member

Looks good to me, but let me think about this until Monday to understand what else we should do (if anything) regarding this feature (e.g., maybe update an example).

@DayneD89
Contributor Author

DayneD89 commented Sep 6, 2021

I've added it to the example, just in case.

@DayneD89
Contributor Author

DayneD89 commented Sep 6, 2021

Any update on this? This is currently blocking us.

@antonbabenko
Member

@DayneD89 Sorry, I forgot to shortlist it 10 days ago. I will finish this PR now.

```diff
@@ -61,7 +61,8 @@ module "eks" {
   source          = "../.."
   cluster_name    = local.cluster_name
   cluster_version = "1.20"
-  subnets         = module.vpc.private_subnets
+  subnets         = concat(module.vpc.private_subnets, module.vpc.public_subnets)
```
Contributor

Can you adjust this example a bit? It is generally considered bad practice to put things in a public subnet on EKS.

Contributor Author

@DayneD89 DayneD89 Sep 6, 2021

Why would it be bad practice? Attaching public subnets is a requirement for exposing services through a public-facing load balancer. The Fargate profiles are still private, as they should be, but allowing external access to some EKS services is the use case I see for this change.

Contributor

In general, best practice is to put all workers in private subnets; publicly facing load balancers are created automatically in public subnets, with the backend members staying in private subnets.

Something like this:
[architecture diagram]

To place a load balancer in a public subnet, you don't need to have the EKS control plane endpoint there.

Contributor Author

So in my case all of my workers would be in private subnets; that is what the Fargate profiles use, and Fargate would fail if I tried to put it in a public subnet. This change allows for private workers while letting the EKS cluster have public subnets attached, which the AWS ingress controller requires: when asked to provide external access to a service running on Fargate, it looks for a public subnet attached to the cluster for the load balancer it creates.

In that diagram, where would the cluster be? If it only covers the Fargate section, then load balancers would have to be created separately from deployments and linked together afterwards, rather than using an ingress controller to handle ingress creation based on annotations. If the cluster in the diagram includes the load balancer, then I don't see a way for the ingress controller to create it in a public subnet without that subnet being attached to the cluster.

Contributor

Best practice for the cluster and workers is to put them in private subnets.
You can try deploying https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/basic/main.tf and then push, for example, https://artifacthub.io/packages/helm/bitnami/nginx to Kubernetes, which will create a LoadBalancer service type. You will then see that the load balancer is publicly available while your workers remain in private subnets behind the NAT.
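
As a rough sketch of that suggestion, the chart could be installed through the Terraform helm provider so it creates a Service of type LoadBalancer (assumes a helm provider already configured against the cluster; service.type is the chart's standard value key):

```hcl
# Sketch only: install bitnami/nginx and expose it via a cloud load balancer.
resource "helm_release" "nginx" {
  name       = "nginx"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx"

  # Ask Kubernetes for a LoadBalancer service, which the cloud provider
  # fulfils by creating a load balancer in front of the private workers.
  set {
    name  = "service.type"
    value = "LoadBalancer"
  }
}
```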

```diff
@@ -61,7 +61,8 @@ module "eks" {
   source          = "../.."
   cluster_name    = local.cluster_name
   cluster_version = "1.20"
-  subnets         = module.vpc.private_subnets
+  subnets         = concat(module.vpc.private_subnets, module.vpc.public_subnets)
+  fargate_subnets = module.vpc.private_subnets
```
Contributor

A better example would be to use slice(module.vpc.private_subnets, 0, 2) for subnets and module.vpc.private_subnets[2] (or similar) for the Fargate subnets.
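
For reference, a rough sketch of that suggested layout (assuming the VPC exposes at least three private subnets; the single Fargate subnet is wrapped in a list because fargate_subnets is a list(string)):

```hcl
module "eks" {
  source          = "../.."
  cluster_name    = local.cluster_name
  cluster_version = "1.20"

  # First two private subnets for the cluster itself.
  subnets = slice(module.vpc.private_subnets, 0, 2)

  # A separate private subnet dedicated to the Fargate profiles.
  fargate_subnets = [module.vpc.private_subnets[2]]
}
```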

Contributor Author

I can change this if required, at least for the example. In your example, though, what would be the point of having different subnets for Fargate and the cluster, if not to allow external access? Wouldn't you want all of the cluster's private subnets to be used by the Fargate profiles, in which case you wouldn't set fargate_subnets at all and it would fall back to the subnets value anyway?

Member

For the purpose of the example, it is better to show usage of both subnets and fargate_subnets but for real setups, it can be different.

Contributor

I thought the motivation was to separate the Fargate subnets from the node subnets (this can be a common use case for organisations that apply different standards per subnet, such as NACLs, routing through firewalls, etc.).

But if all you really need is to create a Kubernetes service of type LoadBalancer, then you don't need this PR at all.

The logic behind creating LoadBalancer-type services in EKS is described here: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html

Contributor Author

That is exactly what we are using. Note that to create external load balancers with the ingress controller, the public subnets need to be tagged and attached to the cluster. If they are not, the ingress controller errors, saying it failed to find a suitable subnet. With the current module you cannot attach public subnets to the cluster while using Fargate, because the module would then try to attach those public subnets to the Fargate profile, which is not allowed (or desirable).
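
For context, a sketch of the subnet tagging the AWS load balancer/ingress controller looks for when placing internet-facing load balancers (the vpc module layout and local.cluster_name are assumptions for illustration; the tag keys are the standard discovery tags):

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = "example"
  cidr            = "10.0.0.0/16"
  azs             = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # Lets the controller discover public subnets for internet-facing
  # load balancers.
  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  # Equivalent tags for internal load balancers in private subnets.
  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}
```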

Contributor

That probably depends on which ingress controller you are using. Nginx just creates a Kubernetes LoadBalancer service type, and the cloud provider then creates the required load balancer automatically.

@antonbabenko antonbabenko merged commit 4a7678d into terraform-aws-modules:master Sep 6, 2021
@antonbabenko
Member

v17.15.0 has just been released.

@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 13, 2022