feat: Added ability to pass different subnets for fargate and the cluster #1527
Conversation
I can add this option for node_groups as well if needed (or rename fargate_subnets to worker_subnets and use it for either), but currently there is no way to use this module with Fargate profiles and an external load balancer, which is why I focused on that case.
Any comments on this? This is currently preventing me from using this module: I have a cluster that runs completely on Fargate but requires some internet-facing services, which is impossible while the subnet variable is used for both the Fargate profiles and the cluster subnets. I'm happy to make changes to get this feature in, but at the moment this is a show stopper for my use case.
Looks good to me, but let me think about this until Monday to understand what else we should do (if anything) for this feature (e.g., maybe update an example).
I've added it to the example, just in case.
Any update on this? This is currently blocking us.
@DayneD89 Sorry, I forgot to shortlist it 10 days ago. I will finish this PR now.
examples/fargate/main.tf (Outdated)

```diff
@@ -61,7 +61,8 @@ module "eks" {
   source          = "../.."
   cluster_name    = local.cluster_name
   cluster_version = "1.20"
-  subnets         = module.vpc.private_subnets
+  subnets         = concat(module.vpc.private_subnets, module.vpc.public_subnets)
```
Can you fix this example a little? It is quite bad practice to put things in a public subnet on EKS.
Why would it be bad practice? It is a requirement for allowing a public-facing load balancer for services. The Fargate profiles are still private, as they should be, but allowing external access to some EKS services is what I see as the use case for this change.
In general, best practice is to put all workers inside private subnets; publicly facing load balancers will be created automatically in public subnets with their backend members in private subnets.
To place a load balancer into a public subnet, you don't need the EKS control plane endpoint to be there.
So in my case all of my workers would be in private subnets; that's the fargate_profiles. Fargate would fail if I tried to put it in a public subnet. This change allows for private workers while the EKS cluster has public subnets attached, which the AWS ingress controller requires: when asked to provide external access to a service running on Fargate, it will look for a public subnet for the load balancer it creates.
In that diagram, where would the cluster be? If it were only the Fargate section, then LBs would have to be made separately from deployments and then linked together, rather than letting an ingress controller handle creation of ingress based on annotations. If the cluster in the diagram includes the load balancer, then I don't see a way for the ingress controller to create it in a public subnet without that subnet being attached to the cluster.
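For context, the AWS ingress controller discovers candidate subnets through well-known Kubernetes tags. A hedged sketch of how the example's VPC could be tagged, assuming the terraform-aws-modules/vpc/aws module and a local.cluster_name matching the example (names and CIDRs elided, not from this PR):

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  # ... name, cidr, azs, and subnet CIDRs elided ...

  # Public subnets: candidates for internet-facing load balancers.
  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  # Private subnets: candidates for internal load balancers and Fargate pods.
  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}
```

With the public subnets tagged kubernetes.io/role/elb and attached to the cluster, the controller can place internet-facing load balancers there while Fargate pods stay on the private tier.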
Best practice for the cluster and workers is to put them into private subnets.
You can try deploying https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/basic/main.tf and then pushing, for example, https://artifacthub.io/packages/helm/bitnami/nginx to Kubernetes, which will create a LoadBalancer service type. You will then see that the load balancer is publicly available while your workers remain in private subnets behind the NAT.
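As a concrete version of that suggestion, a minimal sketch using the Helm provider; it assumes the provider is already configured against the example cluster, and relies on the chart's default service type being LoadBalancer:

```hcl
# Deploy the Bitnami nginx chart; its service defaults to type LoadBalancer,
# so the cloud provider provisions a load balancer for it automatically.
resource "helm_release" "nginx" {
  name       = "nginx"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx"
}
```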
examples/fargate/main.tf (Outdated)

```diff
@@ -61,7 +61,8 @@ module "eks" {
   source          = "../.."
   cluster_name    = local.cluster_name
   cluster_version = "1.20"
-  subnets         = module.vpc.private_subnets
+  subnets         = concat(module.vpc.private_subnets, module.vpc.public_subnets)
+  fargate_subnets = module.vpc.private_subnets
```
A better example would be to use slice(module.vpc.private_subnets, 0, 2) for subnets and module.vpc.private_subnets[2] for the Fargate subnets, or similar.
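Spelled out, that suggestion would make the relevant lines of the example's module "eks" block look roughly like this (the index is wrapped in a list since fargate_subnets is a list(string), and at least three private subnets are assumed):

```hcl
  # Two private subnets for the cluster itself...
  subnets         = slice(module.vpc.private_subnets, 0, 2)
  # ...and a third reserved for the Fargate profiles.
  fargate_subnets = [module.vpc.private_subnets[2]]
```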
I can change this if required, at least for the example. In your example, though, what would be the point of having different subnets for Fargate and the cluster, if not to allow external access? Wouldn't you want all of the cluster's private subnets to be used by the Fargate profiles? In that case you wouldn't set fargate_subnets at all, causing the subnets value to be used for the Fargate profiles anyway.
For the purpose of the example, it is better to show usage of both subnets and fargate_subnets, but real setups can differ.
I thought the motivation was to separate Fargate subnets from node subnets (this can be a common use case for organisations that apply different standards to subnets, like NACLs, routing through firewalls, etc.).
But if you just need to create a Kubernetes service of type LoadBalancer, then you don't need this PR at all.
The logic behind creating LoadBalancer-type services in EKS is described here: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
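For illustration, a LoadBalancer-type service expressed with the Kubernetes Terraform provider; the name, selector, and ports are placeholders, not part of this PR:

```hcl
resource "kubernetes_service" "example" {
  metadata {
    name = "example"
    annotations = {
      # Request an internal (private-subnet) load balancer instead of a
      # public one; omit this annotation for an internet-facing LB.
      "service.beta.kubernetes.io/aws-load-balancer-internal" = "true"
    }
  }

  spec {
    type = "LoadBalancer"

    # Route traffic to pods labeled app=example.
    selector = {
      app = "example"
    }

    port {
      port        = 80
      target_port = 8080
    }
  }
}
```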
That is exactly what we are using. Note that to create external LBs with the ingress controller, the public subnets need to be labeled and attached to the cluster. If they are not, the ingress controller will error, saying it failed to find a suitable subnet. In the current module, you cannot attach public subnets to the cluster while using Fargate, because it will then try to attach those public subnets to the Fargate profile, which is not allowed (or desirable).
That probably depends on the ingress controller you are using. Nginx just creates a Kubernetes LoadBalancer service type, and the required load balancer is created automatically through the cloud provider.
v17.15.0 has just been released.
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
PR o'clock
Description
Problem
Fargate profiles fail to launch in public subnets, but when using external load balancers the cluster may need public subnets attached.
Currently the module takes a single input, var.subnets, to populate both. I need the ability to attach public subnets to the cluster while using only the private subnets for Fargate profiles.
Changes
Added var.fargate_subnets as a list(string) that defaults to []. Updated the README to reflect the change. If this is not set, the module behaves as it does now, so this is not a breaking change. If it is set, it will be used for the Fargate profiles instead of var.subnets.
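A minimal sketch of the intended usage, mirroring the updated Fargate example; the VPC module and its outputs are assumptions for illustration:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "example"
  cluster_version = "1.20"
  vpc_id          = module.vpc.vpc_id

  # The cluster gets both tiers so an ingress controller can find a public
  # subnet for internet-facing load balancers.
  subnets = concat(module.vpc.private_subnets, module.vpc.public_subnets)

  # Fargate profiles stay on the private tier; they fail to launch in
  # public subnets.
  fargate_subnets = module.vpc.private_subnets
}
```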