Node-to-node ports lower than 1025 not authorized, causing issues with ingress controllers with Service type LoadBalancer (AWS) #2359
Comments
This is to be expected. It is not known what ports users will run applications on, and this access will need to be added by users. You can see more here: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/network_connectivity.md
Hello @bryantbiggs, thank you for your answer. So you assume that any user who deploys this module and wants to use ingress controllers will need to add custom rules on the node security group (from itself) to allow the reserved ports 80 (optionally) and 443. However, in the latest v19 releases, you added specific rules for Karpenter, the ALB Controller, Gatekeeper, and more recently metrics-server. So I'm wondering why not for ingress controllers 🤔
@jbdelpech I might be wrong, but for me the issue wasn't that my ingress was listening on port 80, but that the target service was listening on port 80.
So my assumption is that @bryantbiggs is saying that users can choose whatever ports they want for their services, and if you allow using privileged ones (0-1024), it's up to you to add the extra ingress rules.
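For anyone landing here, a minimal sketch of how those extra rules can be added through the module's `node_security_group_additional_rules` input (the rule keys and descriptions below are made up for illustration; ports 80/443 are the ones from this thread):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  # ... the rest of your cluster configuration ...

  node_security_group_additional_rules = {
    # Hypothetical map keys; any unique keys work.
    ingress_self_http = {
      description = "Node to node HTTP (ingress controller backends)"
      protocol    = "tcp"
      from_port   = 80
      to_port     = 80
      type        = "ingress"
      self        = true # source is the node security group itself
    }
    ingress_self_https = {
      description = "Node to node HTTPS (ingress controller backends)"
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      type        = "ingress"
      self        = true
    }
  }
}
```

Because the rules are passed through the module, they share the cluster's lifecycle and avoid the separate-security-group lifecycle issues mentioned in the next comment.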
In prior versions, we received a lot of feedback that users were frustrated at having to create their own security groups, plus copy over the security group rules of interest, in order to have a better security posture. The position we have taken starting with v18 of the module is to remove all access except the rules that are absolutely required for a cluster to come up successfully (meaning the cluster and its components, not a cluster running a user's application).

This approach, in my opinion, is the lesser of two evils, because users can now start from a strong security posture and open up only the access they require; it is impossible to start with the access open and try to close it down afterwards (hence the need for users to create their own security groups, which causes lifecycle issues, etc.).

That said, in v19 we added a parameter, enabled by default, that enables common/recommended rules for things that are typically found on a cluster, such as ALB/NGINX controllers, Karpenter, etc. I do not have a great rule as to what should or should not be added to this list of recommended rules, but potentially it could accommodate the 80/443 rules listed above.
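For completeness, a sketch of toggling that v19 parameter. I believe the input is named `node_security_group_enable_recommended_rules` and defaults to `true`, but treat the exact name as an assumption and verify it against the module's variables.tf:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  # ... the rest of your cluster configuration ...

  # Assumed input name for the v19 "recommended rules" toggle described above;
  # set to false to opt out of the common/recommended rules.
  node_security_group_enable_recommended_rules = true
}
```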
Closing, since I believe the question has been answered. If not, please feel free to respond below.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Since we upgraded the module from v18 to v19, we have had issues with our ingress controllers. Here is our configuration:
- A load balancer that is listening on ports 80 and 443.
- Nodes A and B are InService in our load balancer.
- Nodes C and D are OutOfService in our load balancer.

I manually added ports 80 and 443 as an ingress rule on the node security group (from self), and all instances are now InService.
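In Terraform terms, that manual console change corresponds roughly to the following standalone rules (a sketch; it assumes the module's `node_security_group_id` output and that the load balancer's health-check traffic arrives from the nodes' own security group):

```hcl
# Reproduces the manual fix: allow node-to-node traffic on ports 80 and 443.
resource "aws_security_group_rule" "node_to_node" {
  for_each = toset(["80", "443"])

  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = tonumber(each.value)
  to_port                  = tonumber(each.value)
  security_group_id        = module.eks.node_security_group_id
  source_security_group_id = module.eks.node_security_group_id
  description              = "Node to node on port ${each.value} (manual fix from this issue)"
}
```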
My questions:
Versions
Module version [Required]: >= v19.0.4
Terraform version: v1.3.6
Provider version(s): v4.47.0
Reproduction Code [Required]
Steps to reproduce the behavior (a minimal Terraform sketch follows the steps):
Deploy an EKS cluster with managed node groups and at least 2 nodes.
Deploy an ingress controller in the cluster with a Service of type LoadBalancer and 1 pod.
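A minimal sketch of the cluster side of the reproduction (the name, Kubernetes version, and VPC/subnet references are placeholders, not the reporter's actual configuration):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  cluster_name    = "repro"  # placeholder name
  cluster_version = "1.24"   # any currently supported version
  vpc_id          = var.vpc_id     # assumed to exist
  subnet_ids      = var.subnet_ids # assumed to exist

  # Two nodes, so the ingress controller pod can land on only one of them.
  eks_managed_node_groups = {
    default = {
      min_size     = 2
      max_size     = 2
      desired_size = 2
    }
  }
}
```

Then install an ingress controller (e.g. ingress-nginx via Helm) whose Service is of type LoadBalancer and scale it to a single replica, so that its pod runs on only one of the two nodes.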
Expected behavior
All nodes are InService in the load balancer's instance list.
Actual behavior
Only the nodes where ingress controller pods are deployed are InService in the load balancer's instance list.
Terminal Output Screenshot(s)
Additional context