
Fix race condition in strict mode and annotate_pod_ip=true #322

Merged
6 commits merged into aws:main on Oct 24, 2024

Conversation

@Pavani-Panakanti (Contributor)

Issue #, if available:

When ANNOTATE_POD_IP=false and NETWORK_POLICY_ENFORCING_MODE=strict --> everything works fine

ipamd sends a request to the Network Policy Agent (NPA) to attach probes to the new pod, and NPA returns success after attaching them. ipamd then marks the pod as ready and running. Once the pod is ready, the network policy controller discovers it and adds it to the policy endpoints. When NPA reconciles the policy endpoints, it notices the probes are already attached and simply updates the maps accordingly.

When ANNOTATE_POD_IP=true and NETWORK_POLICY_ENFORCING_MODE=strict --> issue

With ANNOTATE_POD_IP=true, the controller can discover the pod and add it to a policy endpoint even before the pod is ready and running. In this case we hit a race condition: two goroutines try to attach probes to the same pod at the same time, one triggered by the ipamd request and the other by the reconcile-policy-endpoint request. Because both goroutines attach probes concurrently, the internal data structures are not updated correctly, and as a result network policies are not applied to the right map IDs. The racy pattern is sketched below.
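For illustration, a minimal, hypothetical sketch of this unsynchronized check-then-attach pattern (the names `probesAttached` and `attachProbes` are invented for the example, not the agent's actual identifiers):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// probesAttached records which pods already have probes attached.
// sync.Map keeps individual reads and writes safe, but the
// check-then-act sequence below is still racy across goroutines.
var probesAttached sync.Map

func attachProbes(podID, caller string) {
	if _, done := probesAttached.Load(podID); done {
		return // probes already attached, nothing to do
	}
	// Both goroutines can reach this point before either one records
	// its result, so both proceed to attach probes to the same pod.
	time.Sleep(10 * time.Millisecond) // stand-in for the actual attach work
	probesAttached.Store(podID, true)
	fmt.Printf("%s: attached probes for %s\n", caller, podID)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); attachProbes("tester1", "ipamd request") }()
	go func() { defer wg.Done(); attachProbes("tester1", "reconcile request") }()
	wg.Wait() // typically prints twice: both goroutines attached probes
}
```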

Description of changes:

Introduce a per-pod lock that is held while attaching probes to the pod: take the lock, attach the probes, update all the data structures, and only then release the lock. Even if two goroutines arrive at the same time, the second one waits on the lock and starts only after the first has finished attaching the probes. When the second goroutine then checks whether probes are already attached, the answer is yes and it does nothing. A sketch of this approach follows.
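A minimal sketch of the per-pod locking described above, reusing the invented names from the earlier sketch; `podLocks` and the surrounding layout are illustrative, not the agent's actual code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	podLocks       sync.Map // pod identifier -> *sync.Mutex, created lazily per pod
	probesAttached sync.Map // pod identifier -> true once probes are attached
)

func attachProbes(podID, caller string) {
	l, _ := podLocks.LoadOrStore(podID, &sync.Mutex{})
	lock := l.(*sync.Mutex)

	lock.Lock() // the second goroutine blocks here until the first finishes
	defer lock.Unlock()

	if _, done := probesAttached.Load(podID); done {
		fmt.Printf("%s: probes already attached for %s, nothing to do\n", caller, podID)
		return
	}
	// Attach the probes and update all data structures while still
	// holding the lock, so the second goroutine sees a consistent state.
	time.Sleep(10 * time.Millisecond) // stand-in for the actual attach work
	probesAttached.Store(podID, true)
	fmt.Printf("%s: attached probes for %s\n", caller, podID)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); attachProbes("tester1", "ipamd request") }()
	go func() { defer wg.Done(); attachProbes("tester1", "reconcile request") }()
	wg.Wait() // exactly one goroutine attaches; the other sees "already attached"
}
```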

Testing:

Simulated the race condition by injecting sleeps to widen the race window, as sketched below.
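The idea is a delay injected in the probe-attach path, as the "Waiting for 7 secs" lines in the logs suggest; the placement shown here is illustrative:

```go
// Illustrative: pause inside the probe-load path so the reconcile
// goroutine arrives while the ipamd-triggered attach is still in
// flight (this matches the "Waiting for 7 secs" log lines below).
time.Sleep(7 * time.Second)
```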
Logs before the change:

"level":"info","ts":"2024-10-23T06:48:15.213Z","logger":"controllers.policyEndpoints","caller":"controllers/policyendpoints_controller.go:316","msg":"Processing Pod: ","name:":"tester1-6b498cbd59-zqwbw","namespace:":"test","podIdentifier: ":"tester1-6b498cbd59-test"}
{"level":"info","ts":"2024-10-23T06:48:15.213Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:462","msg":"AttacheBPFProbes for","pod":"tester1-6b498cbd59-zqwbw"," in namespace":"test"," with hostVethName":"eni81e6d8fec9f"}
{"level":"info","ts":"2024-10-23T06:48:15.213Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:682","msg":"Load the eBPF program"}
{"level":"info","ts":"2024-10-23T06:48:15.213Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:684","msg":"In loadBPFProgram, Waiting for 7 secs for enforceNpToPod to complete attaching probes"}
{"level":"info","ts":"2024-10-23T06:48:19.290Z","logger":"rpc-handler","caller":"rpc/rpc_handler.go:59","msg":"Received Enforce Network Policy Request for Pod","Name":"tester1-6b498cbd59-zqwbw","Namespace":"test"}
{"level":"info","ts":"2024-10-23T06:48:19.290Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:770","msg":"No map instance found"}
{"level":"info","ts":"2024-10-23T06:48:19.290Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:462","msg":"AttacheBPFProbes for","pod":"tester1-6b498cbd59-zqwbw"," in namespace":"test"," with hostVethName":"eni81e6d8fec9f"}
{"level":"info","ts":"2024-10-23T06:48:19.290Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:682","msg":"Load the eBPF program"}
{"level":"info","ts":"2024-10-23T06:48:19.323Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:698","msg":"Prog Info: ","Pin Path: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_ingress"}
{"level":"info","ts":"2024-10-23T06:48:19.324Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:702","msg":"PinPath for this pod: ","PinPath: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_ingress"}
{"level":"info","ts":"2024-10-23T06:48:19.324Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:705","msg":"Prog Load Succeeded","for ":"ingress","progFD: ":19}
{"level":"info","ts":"2024-10-23T06:48:19.324Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:574","msg":"Attempting to do an Ingress Attach ","with progFD: ":19}
{"level":"info","ts":"2024-10-23T06:48:19.327Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:473","msg":"Successfully attached Ingress TC probe for","pod: ":"tester1-6b498cbd59-zqwbw"," in namespace":"test"}
{"level":"info","ts":"2024-10-23T06:48:19.327Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:682","msg":"Load the eBPF program"}
{"level":"info","ts":"2024-10-23T06:48:19.349Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:698","msg":"Prog Info: ","Pin Path: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_egress"}
{"level":"info","ts":"2024-10-23T06:48:19.349Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:702","msg":"PinPath for this pod: ","PinPath: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_egress"}
{"level":"info","ts":"2024-10-23T06:48:19.349Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:705","msg":"Prog Load Succeeded","for ":"egress","progFD: ":21}
{"level":"info","ts":"2024-10-23T06:48:19.349Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:608","msg":"Attempting to do an Egress Attach ","with progFD: ":21}
{"level":"info","ts":"2024-10-23T06:48:19.349Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:488","msg":"Successfully attached Egress TC probe for","pod: ":"tester1-6b498cbd59-zqwbw"," in namespace":"test"}
{"level":"info","ts":"2024-10-23T06:48:19.349Z","logger":"controllers.policyEndpoints","caller":"controllers/policyendpoints_controller.go:773","msg":"Active policies available against","podIdentifier":"tester1-6b498cbd59-test"}
{"level":"info","ts":"2024-10-23T06:48:19.349Z","logger":"rpc-handler","caller":"rpc/rpc_handler.go:81","msg":"Active policies present against this pod and this is a new Pod to the local node, configuring firewall rules...."}
{"level":"info","ts":"2024-10-23T06:48:19.349Z","logger":"rpc-handler","caller":"rpc/rpc_handler.go:83","msg":"Waiting for 5 secs before enforcing policies againts pod"}
{"level":"info","ts":"2024-10-23T06:48:22.247Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:698","msg":"Prog Info: ","Pin Path: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_ingress"}
{"level":"info","ts":"2024-10-23T06:48:22.247Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:702","msg":"PinPath for this pod: ","PinPath: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_ingress"}
{"level":"info","ts":"2024-10-23T06:48:22.247Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:705","msg":"Prog Load Succeeded","for ":"ingress","progFD: ":23}
{"level":"info","ts":"2024-10-23T06:48:22.247Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:574","msg":"Attempting to do an Ingress Attach ","with progFD: ":23}
{"level":"info","ts":"2024-10-23T06:48:22.248Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:473","msg":"Successfully attached Ingress TC probe for","pod: ":"tester1-6b498cbd59-zqwbw"," in namespace":"test"}
{"level":"info","ts":"2024-10-23T06:48:22.248Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:682","msg":"Load the eBPF program"}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:698","msg":"Prog Info: ","Pin Path: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_egress"}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:702","msg":"PinPath for this pod: ","PinPath: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_egress"}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:705","msg":"Prog Load Succeeded","for ":"egress","progFD: ":25}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:608","msg":"Attempting to do an Egress Attach ","with progFD: ":25}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:488","msg":"Successfully attached Egress TC probe for","pod: ":"tester1-6b498cbd59-zqwbw"," in namespace":"test"}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"controllers.policyEndpoints","caller":"controllers/policyendpoints_controller.go:329","msg":"Successfully attached required eBPF probes for","pod:":"tester1-6b498cbd59-zqwbw","in namespace":"test"}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:726","msg":"Pod has an Ingress hook attached. Update the corresponding map","progFD: ":23,"mapName: ":"ingress_map"}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T06:48:22.292Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:982","msg":"Current L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:984","msg":"Total L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:785","msg":"ID of map to update: ","ID: ":175}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:740","msg":"Pod has an Egress hook attached. Update the corresponding map","progFD: ":25,"mapName: ":"egress_map"}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:982","msg":"Current L4 entry count for catch all entry: ","count: ":1}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:984","msg":"Total L4 entry count for catch all entry: ","count: ":1}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"utils/utils.go:182","msg":"L4 values: ","protocol: ":6,"startPort: ":443,"endPort: ":0}
{"level":"info","ts":"2024-10-23T06:48:22.293Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:785","msg":"ID of map to update: ","ID: ":176}

Logs after the change:

{"level":"info","ts":"2024-10-23T07:07:02.756Z","logger":"controllers.policyEndpoints","caller":"controllers/policyendpoints_controller.go:316","msg":"Processing Pod: ","name:":"tester1-6b498cbd59-h24t2","namespace:":"test","podIdentifier: ":"tester1-6b498cbd59-test"}
{"level":"info","ts":"2024-10-23T07:07:02.756Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:452","msg":"Got the lock and going ahead with attach probes"}
{"level":"info","ts":"2024-10-23T07:07:02.756Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:464","msg":"AttacheBPFProbes for","pod":"tester1-6b498cbd59-h24t2"," in namespace":"test"," with hostVethName":"eni44d70feca88"}
{"level":"info","ts":"2024-10-23T07:07:02.756Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:684","msg":"Load the eBPF program"}
{"level":"info","ts":"2024-10-23T07:07:02.756Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:686","msg":"In loadBPFProgram, Waiting for 7 secs for enforceNpToPod to complete attaching probes"}
{"level":"info","ts":"2024-10-23T07:07:06.828Z","logger":"rpc-handler","caller":"rpc/rpc_handler.go:59","msg":"Received Enforce Network Policy Request for Pod","Name":"tester1-6b498cbd59-h24t2","Namespace":"test"}{"level":"info","ts":"2024-10-23T07:07:06.828Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:772","msg":"No map instance found"}
{"level":"info","ts":"2024-10-23T07:07:09.800Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"Prog Info: ","Pin Path: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_ingress"}
{"level":"info","ts":"2024-10-23T07:07:09.801Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:704","msg":"PinPath for this pod: ","PinPath: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_ingress"}
{"level":"info","ts":"2024-10-23T07:07:09.801Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:707","msg":"Prog Load Succeeded","for ":"ingress","progFD: ":19}{"level":"info","ts":"2024-10-23T07:07:09.801Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:576","msg":"Attempting to do an Ingress Attach ","with progFD: ":19}
{"level":"info","ts":"2024-10-23T07:07:09.803Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:475","msg":"Successfully attached Ingress TC probe for","pod: ":"tester1-6b498cbd59-h24t2"," in namespace":"test"}
{"level":"info","ts":"2024-10-23T07:07:09.803Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:684","msg":"Load the eBPF program"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"Prog Info: ","Pin Path: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_egress"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:704","msg":"PinPath for this pod: ","PinPath: ":"/sys/fs/bpf/globals/aws/programs/tester1-6b498cbd59-test_handle_egress"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:707","msg":"Prog Load Succeeded","for ":"egress","progFD: ":21}{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:610","msg":"Attempting to do an Egress Attach ","with progFD: ":21}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:490","msg":"Successfully attached Egress TC probe for","pod: ":"tester1-6b498cbd59-h24t2"," in namespace":"test"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:495","msg":"Done with attach probes"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"controllers.policyEndpoints","caller":"controllers/policyendpoints_controller.go:329","msg":"Successfully attached required eBPF probes for","pod:":"tester1-6b498cbd59-h24t2","in namespace":"test"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:728","msg":"Pod has an Ingress hook attached. Update the corresponding map","progFD: ":19,"mapName: ":"ingress_map"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:986","msg":"Total L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:452","msg":"Got the lock and going ahead with attach probes"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:787","msg":"ID of map to update: ","ID: ":183}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:759","msg":"Pod already has Ingress Probe attached - ","Name: ":"tester1-6b498cbd59-h24t2","Namespace: ":"test"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:763","msg":"Pod already has Egress Probe attached - ","Name: ":"tester1-6b498cbd59-h24t2","Namespace: ":"test"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:464","msg":"AttacheBPFProbes for","pod":"tester1-6b498cbd59-h24t2"," in namespace":"test"," with hostVethName":"eni44d70feca88"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:495","msg":"Done with attach probes"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"controllers.policyEndpoints","caller":"controllers/policyendpoints_controller.go:773","msg":"Active policies available against","podIdentifier":"tester1-6b498cbd59-test"}
{"level":"info","ts":"2024-10-23T07:07:09.829Z","logger":"rpc-handler","caller":"rpc/rpc_handler.go:81","msg":"Active policies present against this pod and this is a new Pod to the local node, configuring firewall rules...."}
{"level":"info","ts":"2024-10-23T07:07:09.830Z","logger":"rpc-handler","caller":"rpc/rpc_handler.go:83","msg":"Waiting for 5 secs before enforcing policies againts pod"}
{"level":"info","ts":"2024-10-23T07:07:09.830Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:742","msg":"Pod has an Egress hook attached. Update the corresponding map","progFD: ":21,"mapName: ":"egress_map"}
{"level":"info","ts":"2024-10-23T07:07:09.830Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:07:09.830Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:984","msg":"Current L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-10-23T07:07:09.830Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:986","msg":"Total L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-10-23T07:07:09.830Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:07:09.830Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:787","msg":"ID of map to update: ","ID: ":184}

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@Pavani-Panakanti requested a review from a team as a code owner on October 23, 2024, 17:25
@orsenthil (Contributor)

The logic and the code behind the lock look good to me.

  • If the newer log messages that show the attach-probes lock taking effect appear in the agent log output, please share them.

Another comment: looking at the code itself, I do not see how setting ANNOTATE_POD_IP to true leads to the race, but irrespective of that, the lock to avoid the race seems alright.

We have to run the cyclonus tests to ensure that we aren't introducing any regression with this change.

@Pavani-Panakanti (Contributor, Author)

Thanks for the review @orsenthil. I added logs to the description. In the logs before the change, you can see the race condition where attach probes happens twice. In the logs after the change, you can see one thread taking the lock while the second waits on it until the first completes.

Regarding ANNOTATE_POD_IP, I added some details to the description. In strict mode, IPAMD calls the network policy agent to attach probes. NPA attaches the probes and returns success to IPAMD, which then marks the pod as ready and running. The NP controller discovers the pod only after it enters the ready state.

When ANNOTATE_POD_IP=true, the NP controller discovers the pod even before it enters the ready state; customers usually set this to true for fast NP reconciliation. In this case we hit the race condition: the NPA goroutine started by the IPAMD call is still attaching the eBPF probes and has not finished updating all internal data structures when the NP controller's reconcile request arrives. The reconcile goroutine checks whether probes are already attached, finds that they are not, and tries to attach them itself. Both goroutines now run in parallel, and the end result is that the internal data structures are not updated correctly.

@Pavani-Panakanti requested review from jayanthvn and a team on October 24, 2024, 16:44
@Pavani-Panakanti (Contributor, Author)

Completed running the cyclonus tests on the change. All tests passed.

@orsenthil (Contributor) left a comment:

LGTM

@orsenthil orsenthil merged commit b0942ec into aws:main Oct 24, 2024
4 checks passed