Memory leak in 2.2 and 2.1 (2.0 is not affected) using Docker #10200
Comments
A couple of observations: I can make it OOM much quicker if I set MobyLinux to only use 2GB rather than 4GB of RAM.
I commented on the other issue but I meant to comment here. This is not supported in 2.2: https://github.com/dotnet/coreclr/issues/18971
@sebastienros - setting the limits to 512MB hasn't helped; same problem. I will have to give 3.0 preview a try to see if it helps.
@sebastienros - can I get this re-opened please? I have tried 3.0 preview 4, no difference; the container still OOMs. I'm using the preview 4 'stretch-slim' images. I've got `<ServerGarbageCollection>false</ServerGarbageCollection>` set, and this in my compose: `mem_reservation: 512m`. Still the only thing that works is going back to Core 2.0.
@PrefabPanda Make sure ServerGC is not running by logging it out on startup. I would also like to see the limit doubled again. Any chance you can attach a debugger to get some GC stats?
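One way to do the check suggested above (a minimal sketch, not taken from the thread; in an ASP.NET Core app this line would go at the top of `Program.Main`) is to log `GCSettings.IsServerGC` at startup:

```csharp
using System;
using System.Runtime;

// Minimal startup check: prints which GC mode the runtime actually picked,
// regardless of what the project file or environment variables request.
class GcModeCheck
{
    static void Main()
    {
        Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");
        Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
    }
}
```

This is useful here because, as noted later in the thread, the `ServerGarbageCollection` setting may not take effect without a clean rebuild, so logging the effective mode is more reliable than trusting the project file.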
Reopening as it might also repro on preview 4. @PrefabPanda why didn't you use preview 5? You would have to specifically set the preview4 tag to get it; the latest docker image and current default for 3.0 is preview5.
@sebastienros - I originally was trying preview 5, but was getting an error when it attempted to start debugging. It looks like the default images VS2019 wants to use aren't preview 5 compatible: mcr.microsoft.com/dotnet/core/aspnet:3.0-stretch-slim (they give me an error saying they can only find preview 4). So I dropped -stretch-slim and -stretch from the tag and now it runs.

@DAllanCarr - Sadly I've repeated the test using a 512MB and 1GB container; same OOM problem. I have left the test app doing workstation GC (I have noticed that the setting doesn't take unless you do a clean solution). When running the tests I've got the debugger attached already. I'll attempt to grab some of the output.
I reproduced with the same docker image and memory limit settings. The results are the following:
As you can see, everything went normally: there was not a single bad response or socket error during the run. Here is the Dockerfile I used.
Here is the full source of the application I ran. For my own notes, the command line used to benchmark:
Thank you @sebastienros - I will try your sample project and report back. I notice you are using a different SDK image, so I'll be trying that too.
@sebastienros - I have tried your suggestion of turning HTTPS off; sadly, no difference. Though I can't see your comment now, so I assume you've deleted it. For reference, here is my code again with HTTPS turned off.
I have started running our reliability tests on HTTPS to detect a leak, and they are very stable, so it's probably not a leak or the memory would keep growing. I have a test that runs for a full week, which will give us even more information.

What it could be, though, is that the internal cache used for loading certificates, and any other pools (arrays, string builders, ...), require a fixed amount of memory which can't be released by the GC, and this might be over 128 MB. I will try to figure out at what point the docker images start failing on my side when paging is disabled.

Just to be clear, your Docker settings do disable paging, which is a good way to actually test memory limits. And I could also repro the issue with this setting at 128 MB.
Have you ever reproed it without the debugger attached?
I'm experiencing a similar issue in .NET Core 2.2, but I'm not using Docker. Should this be the thread that I follow? The previous thread here is now closed: #1976
The current recommendation is that you figure out how much memory is necessary based on your actual scenario. I can't reproduce any memory leak, even after running the application on HTTPS for a complete week. Proof of a memory leak would be a memory profile showing which instances are leaking over time.
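For anyone wanting to produce that kind of evidence, one common approach (a sketch, assuming the .NET diagnostic global tools for .NET Core 3.0+ are installed and `<pid>` is the app's process id; the commands below are illustrative, not from the thread) is:

```shell
# Install the diagnostic global tools (once)
dotnet tool install --global dotnet-counters
dotnet tool install --global dotnet-gcdump

# Watch GC heap size, allocation rate, and gen sizes live
dotnet-counters monitor --process-id <pid> System.Runtime

# Capture a GC heap dump; take two dumps some time apart and
# compare object counts to see which types are growing
dotnet-gcdump collect --process-id <pid>
```

Comparing two heap dumps taken a few minutes apart is what distinguishes a true leak (instance counts of some type growing without bound) from a fixed-size cache or pool warming up.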
@sebastienros actually your article on .NET Core garbage collection helped a lot. I believe for me it was a combination of properly disposing EF DbContext and understanding GC when it comes to short-lived objects.
Apologies, I've lost track of the thread a little bit. It looks like there isn't a clear actionable work item for the servers right now, so I'm moving this to discussions for now. Please feel free to let me know (tag me, etc.) if that changes!
Me too.
@crauadams Please file a new issue of your own. This one will probably be closed as we can't repro any leak.
Closing as per @sebastienros's comment. @crauadams please do file a new issue if you have data and/or a repro for us to look at!
Description:
RAM use in Docker container keeps climbing until an OOM is triggered. Does not happen in 2.0.
Steps to reproduce the behavior:
```yaml
mem_reservation: 128m
mem_limit: 256m
memswap_limit: 256m
cpus: 1
```
(will need to change the yml version to 2.4 to support the memory limits)
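For context, those keys sit under a service entry in the compose file; a minimal sketch (the service name and image below are placeholders, not from the issue):

```yaml
version: "2.4"   # the 2.4 schema is needed for these memory/cpu keys
services:
  webapp:
    image: myapp:latest        # placeholder image name
    mem_reservation: 128m      # soft limit
    mem_limit: 256m            # hard limit; exceeding it triggers the OOM killer
    memswap_limit: 256m        # equal to mem_limit, so no swap is allowed
    cpus: 1
```

Setting `memswap_limit` equal to `mem_limit` is what disables paging, which (as noted above) is a good way to actually test memory limits.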
e.g.
curl -X GET "https://localhost:44329/api/values/7" -H "accept: text/plain" --insecure
Expected behavior:
GC to release memory and/or respect Docker RAM limits