
Node-to-node ports lower than 1025 not being authorized causes issues with ingress controllers with Service type LoadBalancer (AWS) #2359

Closed
jbdelpech opened this issue Dec 19, 2022 · 6 comments


@jbdelpech

Description

Since we upgraded the module from v18 to v19, we have had issues with our ingress controllers. Here is our configuration:

  • We are deploying an EKS cluster with one managed node group with 4 EC2 instances
  • We have 1 ingress controller with a Service of type LoadBalancer listening on ports 80 and 443
  • So we have 1 load balancer that registers our 4 EC2 instances
  • Our ingress controller is deployed as a Deployment with 2 pods, on nodes A and B.

Nodes A and B are InService in our load balancer.
Nodes C and D are OutOfService in our load balancer.

I manually added ports 80 and 443 as ingress rules on the node security group from itself, and all instances are now InService.
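
For context, a rough Terraform equivalent of that manual workaround could look like the sketch below (the resource names and the way the node security group is referenced are my assumptions, not our actual code):

resource "aws_security_group_rule" "node_to_node_http" {
  description       = "Node to node ingress on port 80 (ingress controller backend pods)"
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 80
  to_port           = 80
  self              = true
  security_group_id = module.eks.node_security_group_id
}

resource "aws_security_group_rule" "node_to_node_https" {
  description       = "Node to node ingress on port 443"
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 443
  to_port           = 443
  self              = true
  security_group_id = module.eks.node_security_group_id
}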

My questions:

  • Is there a point in authorizing reserved ports between nodes of the node security group?
  • Should users of this module manage this kind of exception by using custom security group rules?

Versions

  • Module version [Required]: >= v19.0.4

  • Terraform version: v1.3.6

  • Provider version(s): v4.47.0

Reproduction Code [Required]

Steps to reproduce the behavior:

Deploy an EKS cluster with a managed node group with at least 2 nodes.
Deploy an ingress controller in the cluster with a Service of type LoadBalancer and 1 pod.
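
A minimal sketch of the kind of configuration described above (module inputs, version numbers and names here are illustrative assumptions, not the reporter's actual code):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  cluster_name    = "example"
  cluster_version = "1.24" # assumed version

  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids

  eks_managed_node_groups = {
    default = {
      min_size     = 2
      max_size     = 4
      desired_size = 4
    }
  }
}

The ingress controller is then installed in the cluster with a Service of type LoadBalancer listening on ports 80 and 443.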

Expected behavior

All nodes that joined the cluster are InService in the load balancer's registered instances.

Actual behavior

Only the nodes where ingress controller pods are deployed are InService in the load balancer's registered instances.

Terminal Output Screenshot(s)

Additional context

@jbdelpech jbdelpech reopened this Dec 20, 2022
@bryantbiggs
Member

This is to be expected. It is not known what ports users will run applications on, so this access will need to be added by users. You can see more here: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/network_connectivity.md
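
For illustration, the kind of module-level rules the linked document describes could look like the following sketch, using the module's node_security_group_additional_rules input (the rule map key names are arbitrary):

module "eks" {
  # ... existing configuration ...

  node_security_group_additional_rules = {
    ingress_self_http = {
      description = "Node to node HTTP (ingress controller backend pods)"
      protocol    = "tcp"
      from_port   = 80
      to_port     = 80
      type        = "ingress"
      self        = true
    }
    ingress_self_https = {
      description = "Node to node HTTPS"
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      type        = "ingress"
      self        = true
    }
  }
}

This is the module-level equivalent of the manual 80/443 rules described in the issue body.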

@jbdelpech
Author

Hello @bryantbiggs, thank you for your answer.

So, you assume that any user who deploys this module and wants to use ingress controllers will need to add custom rules on the node security group from itself to allow the reserved ports 80 (optionally) and 443.

However, in the latest v19 releases, you added specific rules for Karpenter, the ALB controller, Gatekeeper and, more recently, metrics-server. So I'm wondering why not for ingress controllers 🤔

@arthurio

@jbdelpech I might be wrong but for me the issue wasn't that my ingress was listening on port 80, but that the target service was listening on port 80.

kubectl get ing
NAME      CLASS    HOSTS                  ADDRESS                                                               PORTS   AGE
xxx-api   <none>   xxx.example.com   k8s-xxxx-xxxxx-123456-123456.us-east-1.elb.amazonaws.com   80      370d
kubectl get svc
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
xxx-api                   NodePort    172.20.141.146   <none>        80:32682/TCP   370d

So my assumption is that @bryantbiggs is saying that users can choose whatever ports they want for their services, and that if you choose privileged ones (0 - 1024), it's up to you to add extra ingress rules.
It caught me by surprise too, because I think a lot of people will use ports 80 and/or 443 for their services and I don't really see a reason not to use the range 0 - 65535 instead.
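
For completeness, the broader approach suggested here (allowing all node-to-node traffic rather than individual ports) might be expressed with the same module input, roughly like this sketch (the rule key name is arbitrary):

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node ingress on all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
  }

This trades the tighter default posture for convenience, which is the trade-off discussed in the following comment.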

@bryantbiggs
Member

I think a lot of people will use ports 80 and/or 443 for their services and I don't really see a reason not to use the range 0 - 65535 instead.

In prior versions, we received a lot of feedback stating that users were frustrated that they had to create their own security groups, plus copy over the security group rules of interest, in order to have a better security posture. The position we have taken starting with v18 of the module is to remove all access except for only those rules that are absolutely required for a cluster to come up successfully (meaning the cluster and its components, not a cluster with a user's application). This approach, in my opinion, is the lesser of two evils because users can now start from a strong security posture and only open up the access they require - it is impossible to start with the access open and try to close that access down (hence the need for users to create their own security groups, which causes lifecycle issues, etc.)

That said, in v19 we added a parameter, enabled by default, that enables common/recommended rules for things that are typically found on a cluster - things like ALB/NGINX controllers, Karpenter, etc. I do not have a great "rule" as to what should or should not be added to this list of recommended rules, but potentially this is something that could accommodate the 80/443 rules listed above
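
For reference, I believe the default parameter referenced here is node_security_group_enable_recommended_rules (the variable name is my assumption); toggling it would look roughly like:

module "eks" {
  # ... existing configuration ...

  # Enabled by default in v19; controls the common/recommended node security group rules
  node_security_group_enable_recommended_rules = true
}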

@bryantbiggs
Member

Closing since I believe the question has been answered. If not, please feel free to respond below.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 10, 2023