Command terminated by signal 9 due to OOM (Out of Memory) #1509
Can the grype team help?
Hi @edgeinterrupts, thanks for the report! I don't know the answer off the top of my head, so I've added the
👋 The grype config can also profile the process; if you can generate a pprof report for us, we can take a look:

```yaml
dev:
  profile-mem: true
```

I ran some large images locally. The largest memory report for that was

If you're passing an image as the input to grype in the above example, it might be that a certain cataloger of syft is causing the memory usage to spike. If we get the specific pprof report then we'll be able to get more specifics =)
@spiffcs thanks, I will try to break things into two steps: 1) syft sbom -> json, 2) cat the file to grype.
Hi @edgeinterrupts, sounds good. If you run into more trouble, please let us know and we can take a look at the profiling report.
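The two-step workflow mentioned above can be sketched as follows (the image name and SBOM filename are placeholders, not from the original thread; the `command -v` guard just skips the steps if the tools aren't installed):

```shell
IMAGE="alpine:3.19"   # hypothetical image; substitute your own
SBOM="sbom.json"      # placeholder output path

# Step 1: catalog the image once with syft, writing a JSON SBOM to a file.
# Step 2: point grype at the SBOM file (sbom: scheme) instead of the image,
#         so grype skips re-cataloging the filesystem.
if command -v syft >/dev/null 2>&1 && command -v grype >/dev/null 2>&1; then
  syft "$IMAGE" -o json > "$SBOM"
  grype "sbom:$SBOM"
fi
```

Splitting the work this way also makes it easier to see whether the memory spike happens during cataloging (syft) or during matching (grype).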
Hi @spiffcs, is the
If so, where can I find the output? For context, I am running grype on a 15 GB image, and the peak memory footprint I am observing is ~10 GB. I am trying to understand whether I can do anything to reduce grype's memory consumption.
👋 Hey @sfc-gh-dbasavin - what version of grype are you currently using? I think the config may have changed a bit when grype upgraded its config values:
If you run
The reason I asked what version you're using is that we recently merged a new syft (consumed by grype) that improved performance:
Are you using the latest version and still seeing an issue?
Hi @spiffcs, thank you very much for the quick response! I am using the most recent version I found on the Releases page:
So yes, I am still seeing this issue with the latest version of grype. Actually, the issue can be seen when running grype on a public image from Matlab (link), so feel free to check for yourself. Here is the command that results in ~10 GB memory usage at peak:
Edit: I was able to generate the pprof report. Here it is for illustrative purposes:
I am not sure what to make of it.
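For readers wondering how to read such a report: pprof profiles can be inspected with the standard Go tooling. A minimal sketch, assuming Go is installed and `mem.pprof` is the (placeholder) path of the profile that was written:

```shell
PROFILE="mem.pprof"   # placeholder; use the path your grype run actually wrote

# -top summarizes the heaviest allocation sites on the terminal;
# -http serves an interactive web UI (flame graph, call graph).
if command -v go >/dev/null 2>&1 && [ -f "$PROFILE" ]; then
  go tool pprof -top "$PROFILE"
  go tool pprof -http=:8080 "$PROFILE"   # blocks while serving the UI
fi
```

The top entries in the `-top` output usually point at the cataloger or data structure responsible for the bulk of the allocations.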
@sfc-gh-dbasavin - Thanks a million for the reproducible image and the report. We'll try to come back to grype performance tuning when we have a bit more time; we're tied up with other tasks at the moment.
Hey @spiffcs, another update from me. I did more testing today, and it looks like one possible cause of high memory consumption is that grype/syft does something special when scanning images with a lot of files. To confirm, I created a test image with 3 million dummy files; the total image size was only 30 MB. Despite its small size, grype/syft took 10 minutes to scan that image, and peak memory consumption was 12 GB, as reported by my Mac's Activity Monitor. The pprof report for that test looks a bit different from the one above, though:
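For reference, a many-files test image like the one described can be sketched with a Dockerfile along these lines (the base image, directory names, and file counts are illustrative, not the reporter's exact setup):

```dockerfile
# Illustrative: ~3 million empty files in a tiny image.
# 3000 directories x 1000 files keeps any single directory manageable.
FROM alpine:3.19
RUN mkdir /manyfiles && cd /manyfiles && \
    for d in $(seq 1 3000); do \
      mkdir "$d"; \
      for f in $(seq 1 1000); do : > "$d/$f"; done; \
    done
```

Building an image like this takes a while, but it isolates "many files" from "large files" as the variable under test.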
I hope that information is useful to you. We are trying to figure out how to make grype consume less memory (so it doesn't get killed by the Linux OOM killer), regardless of the size/shape of the target image, but no luck so far. If you have any ideas whatsoever, please do let me know.
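One knob that may help in cases like this (an assumption worth verifying against the docs for your grype version, not advice from the maintainers in this thread): grype's config supports an `exclude` list of glob patterns, which skips matching for paths you know are irrelevant. A sketch of a `.grype.yaml`:

```yaml
# Sketch: skip scanning directories known to hold only bulk data.
# The path below is hypothetical, for illustration only.
exclude:
  - "/opt/matlab/**"
```

Exclusions reduce the set of files grype considers, which can lower both runtime and peak memory on file-heavy images.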
I have a k8s pod in an EKS cluster with a 1 GB memory limit. I run grype to scan container images using a local cache DB with GRYPE_DB_AUTO_UPDATE=false, and I tried to scan large images (1300+ MB). When I run `/usr/bin/time -v grype "image-name" -c config.yaml`, the command is terminated by signal 9; when I searched for this error, I found it is an OOM (Out of Memory) issue.
How much memory does grype need to have reserved for this task?
Is there any way to limit grype's resource usage?
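For context, the pod limit described above corresponds to a Kubernetes container spec fragment like the following (a sketch; the limit is enforced by Kubernetes itself, since grype has no built-in memory cap):

```yaml
# Fragment of a pod/container spec matching the 1 GB setup described above.
resources:
  requests:
    memory: "1Gi"
  limits:
    memory: "1Gi"   # exceeding this gets the process OOM-killed (signal 9)
```

Raising the `limits.memory` value is the direct fix; reducing grype's actual footprint (e.g. via SBOM-based scanning or path exclusions) is the alternative.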