
Checkov run with Prisma API arguments causes OOM #6968

Open

zagr0 opened this issue Jan 24, 2025 · 4 comments

zagr0 commented Jan 24, 2025

Describe the issue
We use Prisma Cloud and run Checkov jobs with GitLab Kubernetes runners on a VM with 4 CPU cores and 16 GB of RAM against our infrastructure repository (a monorepo containing Terraform plans, Ansible automations, Helm charts, and Kustomize configurations). When we pass the Prisma API URL and access key arguments so that results are reported to Prisma, the Checkov job always crashes: it is terminated by the OOM killer because the checkov process consumes all available memory on the node. The interesting thing is that if we run Checkov without the Prisma integration, it runs fine and is not OOM killed; there is no such memory consumption. Without the Prisma arguments it takes ~4-5 GB of RAM, which is still quite a lot.

Examples
Runs with Prisma, OOM:

  checkov -d . --repo-id our/repo-id --branch branch_name --prisma-api-url https://api.prismacloud.io --bc-api-key XXXXXXXXXXXXXX::YYYYYYYYYYYYYY --use-enforcement-rules -o junitxml

But runs well without Prisma:

  checkov -d . -o junitxml

Exception Trace
There are no Checkov errors; it just eats all the memory and gets killed:

ERROR: Job failed (system failure): Error in container build: exit code: 137, reason: 'OOMKilled'

Desktop (please complete the following information):

  • Runs in GKE 1.30
  • Checkov Version 3.2.356

Additional context
Not sure, but this probably started happening with v3; we did not face the issue before.
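
For what it's worth, a simple way to compare the peak memory of the two runs directly (assuming GNU time is available in the runner image; paths and keys are the same placeholders as above) would be:

  # peak RSS without the Prisma integration
  /usr/bin/time -v -o no-prisma.log checkov -d . -o junitxml > /dev/null
  # same scan with the Prisma arguments; if the container itself is OOM-killed,
  # this second log may never be written
  /usr/bin/time -v -o prisma.log checkov -d . --repo-id our/repo-id --branch branch_name \
    --prisma-api-url https://api.prismacloud.io --bc-api-key XXXXXXXXXXXXXX::YYYYYYYYYYYYYY \
    --use-enforcement-rules -o junitxml > /dev/null
  # "Maximum resident set size" is GNU time's peak memory figure
  grep 'Maximum resident set size' no-prisma.log prisma.log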

zagr0 added the crash label Jan 24, 2025
tsmithv11 (Collaborator) commented

Hey @zagr0, since you are a Prisma Cloud customer, can you work with support to get a case opened for this? Then engineering can take a look.

robinsmidsrod commented

Any movement on this issue?

After some trial and error I found out that running the following frameworks on our big FluxCD monorepo is what causes the OOM, so I've just had to skip them, otherwise we always get a crash:

  --skip-framework ansible \
  --skip-framework argo_workflows \
  --skip-framework bitbucket_pipelines \
  --skip-framework github_actions \
  --skip-framework gitlab_ci \
  --skip-framework json \
  --skip-framework kubernetes \
  --skip-framework kustomize \
  --skip-framework yaml \

It should at least help you narrow things down to which scanners are causing the problems.
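
In case it helps with the debugging, a loop along these lines (assuming GNU time is installed; the framework names are just the ones I had to skip) should report the peak RSS per framework and pin down which scanner blows up:

  for fw in ansible argo_workflows bitbucket_pipelines github_actions gitlab_ci json kubernetes kustomize yaml; do
    # scan with only one framework enabled and record GNU time's verbose stats
    /usr/bin/time -v -o "time-$fw.log" checkov -d . --framework "$fw" -o junitxml > /dev/null 2>&1
    echo "$fw: $(grep 'Maximum resident set size' "time-$fw.log")"
  done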

robinsmidsrod commented

And when I look at that list from a bit of a distance, it dawns on me that they are all based on scanning JSON and/or YAML files. Is it possible that you read the content of all those structured files into one big (in-memory) buffer instead of allocating memory one file at a time? And if that buffer is also duplicated per framework, it could cause this kind of issue.
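
As a rough sanity check on that theory, the combined on-disk size of the structured files gives a lower bound on what a load-everything-at-once approach would have to hold in memory, and parsed YAML/JSON usually expands to several times its file size once it becomes Python objects:

  # total size of all YAML/JSON files in the repo (GNU find + du)
  find . -type f \( -name '*.yaml' -o -name '*.yml' -o -name '*.json' \) -print0 \
    | du -ch --files0-from=- | tail -n 1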

zagr0 (Author) commented Feb 18, 2025

Hi @robinsmidsrod, from my side I have opened an additional case with Prisma Cloud Support; they are still investigating.
It seems the issue for us is also related to our ArgoCD and Kustomize configuration directory. It appeared after the v2 -> v3 upgrade, probably when the JSON/YAML parser was changed.
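
As a temporary workaround on our side we are considering excluding that directory from the scan. If I read the docs correctly, --skip-path accepts a regular expression, so something along these lines should keep the rest of the repo covered (the directory name is just an example):

  checkov -d . --skip-path "argocd/" --repo-id our/repo-id --branch branch_name \
    --prisma-api-url https://api.prismacloud.io --bc-api-key XXXXXXXXXXXXXX::YYYYYYYYYYYYYY \
    --use-enforcement-rules -o junitxml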
