Memory Leak on ridiculously simple repo #7874
I only found this out recently, but you can use the Chrome console to debug Node scripts! You can try using it to profile Jest while it's running and dig into the issue. I believe the command is:
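(The command itself didn't survive in this copy of the thread; the commonly recommended invocation for attaching Chrome DevTools to Jest is roughly the following.)

```bash
# Start Jest under the Node inspector, paused until a debugger attaches,
# then open chrome://inspect in Chrome and take heap snapshots from DevTools.
node --inspect-brk ./node_modules/.bin/jest --runInBand --logHeapUsage
```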
Do I understand correctly that using the workaround to force GC runs makes the heap size remain constant? In that case it's not really a memory leak, just V8 deciding not to run the GC because there is enough memory available. If I try running the repro with a 50MB heap size (`node --max_old_space_size=50 node_modules/.bin/jest --logHeapUsage --runInBand --config=jest.config.js`), the tests still complete successfully, supporting this assumption.
@milesj I ran through some memory dumps but couldn't make much sense of them. I'm not too experienced with chasing leaks, and I didn't want to point in the wrong direction without something solid to go on. @jeysal you are right, of course! The thing is, our tests freeze in the middle of running because (I assume, and could be wrong) we run out of memory. After spending a lot of time trying to figure this out, I found #7274. It seemed to me from that discussion that the behaviour I encountered here is not intended. wdyt @SimenB?
Bueller?
My tests are also leaking massively on CI, but the exact same setup locally doesn't really leak (much, at least). It's so bad that I'm considering disabling tests on CI until I can make sense of what the difference is besides the OS. ):
Hey guys! I simplified the memory leak case to a single file which runs tautological tests and eventually throws an exception due to a memory leak. I'm not sure how to move forward with this... help? @SimenB @jeysal @milesj
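(For illustration, a single-file repro of the kind described above might look like the sketch below; the test count and file name are made up.)

```js
// leak.test.js - generates a large number of trivial tests in a single file;
// Jest has to keep every test's state around until the whole file has finished.
for (let i = 0; i < 10000; i++) {
  test(`tautology ${i}`, () => {
    expect(true).toBe(true);
  });
}
```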
Similar here: Jest + ts-jest, simple tests climb over 1GB of memory and eventually crash.
Crashes for us too.
@javinor For a test file containing a ridiculous number of tests, I'm not sure there's much we can do; we have to keep the test objects around until the test file is finished. This is the heap while the tests are running:
FYI @scotthovestadt is currently working on holistically improving the memory efficiency of Jest, so improvements are coming (some of them in the next minor version).
I wonder, why isn't it possible for Jest to spawn a separate process for each test file, which would guarantee that memory is freed? OK, it can be slower, of course, but in my case it's much better to be slower than to crash from out-of-memory and be blocked from using Jest altogether... Maybe an option? Or a separate "runner" (not sure if I understand the architecture and terminology right)? Is it architecturally possible? Or will Node experimental workers solve it?
I've made a few improvements to memory in the next release. I also have a future plan to improve memory in a couple of ways:
The problem with your suggestion of just spawning a new worker for each test is that it would be very slow. A better suggestion along the same lines would be to monitor the memory usage of the processes and auto-restart them at some threshold. I have some concerns about that in general; I'd rather always fix memory leaks than paper them over, but if a PR did that I would accept it. Let me know if the release next week helps with the problems you've been experiencing.
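(For anyone reading this later: more recent Jest releases ship a threshold-based worker restart much like the one described above, via the `workerIdleMemoryLimit` config option. A minimal sketch, assuming Jest 29 or newer where this option is available:)

```js
// jest.config.js - restart a worker process once its idle memory exceeds the limit
module.exports = {
  // accepts an absolute value such as '512MB' or a percentage of system memory
  workerIdleMemoryLimit: '512MB',
};
```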
@scotthovestadt thanks for the info! I'll definitely check with the next release. My actual issue is reported here: #8247
Thanks for the responses guys! I think I can break this down into two different problems:
We're running thousands of tests, each creating a relatively big setup, so we get bitten twice. The original screenshot shows the consumption growing from test file to test file, hinting at a leak between tests; I have a few guesses as to why this happens, but nothing solid yet. The exception I referred to later, as far as I can tell, really has to do with what @jeysal pointed out: having a large number of tests in the file. In our case we have only hundreds of tests, but with a very large setup. I'll try to provide a better reproduction of this. I'll update after the next release, when I get to poke around a bit more and see the additional fixes in action. Thanks guys!
There must be something else wrong, because I'm currently using Jest v23.6 and everything works fine: no memory leaks, no anything. If I upgrade to the latest Jest, then the memory leaks start to happen, but only on the GitLab CI runner. Works fine locally.
New release is out: https://github.com/facebook/jest/releases/tag/v24.6.0
Meh, it's still leaking in my setup ):
After updating to 24.6.0, we are seeing a similar issue running our CI tests. When logging the heap usage, we see an increase in memory usage after each test file.
This should help: #8282. Will be released soon.
How soon? )':
For those reading along at home, this went out in 24.8.0.
This would also be a huge breaking change.
Also encountering this. Originally I thought it was some circular dependency in my source code, but maybe...
@unional if you're on Circle, make sure ... EDIT: To be clear, you should proactively specify ...
@Supernats thanks. I think I did have that set during the failure; currently I'm running it with ... But it still fails once in a while:
I have Jest 24.8.0 and #8282 doesn't seem to help. Also, pleaaaaaaase fix this ...
Yes, I've been following this thread for a long time, since it still fails for us; in ~10% of cases the run ends with "out of memory" on CircleCI 2GB RAM instances.
I've run some tests with various configurations. Hope it helps someone.
Versions of Axios 1.x are causing issues with Jest (axios/axios#5101). Jest 28 and 29, where that issue is resolved, have other issues surrounding memory leaks (jestjs/jest#7874). Allow `>=0.25.0` for applications that cannot upgrade Jest at this moment.
From a preliminary run or two, it looks to me like going back to Node 16.10 resolves these errors for us as well.
All the info on the regression that specifically affects Node >= 16.11 can be found in this issue: #11956
Just spent about 2 days figuring out how to overcome this, until I discovered #11956. TL;DR: the regression was introduced in Node 16.11 and fixed in 21.1.
In case anyone stumbles across this and wants a simple solution: Node 20.10.0 contains a fix for this.
Reading the linked issue, it says 21.1, but the fix may well have already been backported to 20.x, leaving 18.x still waiting for a fix 🤷. For our team, switching our CI builds to 21.x did the trick, even if this might introduce runtime confusion 😉.
I'm currently investigating memory leaks in Jest (after the #11956 fix), so I was interested in this case. I followed these steps:
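(The step list itself isn't preserved here; roughly, the reproduction amounted to duplicating the trivial test file many times and watching the heap, along the lines of the sketch below. The file name and count are illustrative.)

```bash
# Duplicate the tautological test file into many test files, then run Jest
# in a single process and log heap usage per file to watch the growth.
for i in $(seq 1 651); do cp tautology.test.js "copy-$i.test.js"; done
npx jest --runInBand --logHeapUsage
```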
The initial execution of the 30 test files barely reached 100MB at the last file, so I added more duplicates, totaling 651 files. The tests slowly climbed toward a heap size of close to 1GB, but just before that, the heap was cleaned back down to the minimum (56MB):

Results
I think it's safe to conclude that the initial issue is resolved. That's not to say there aren't memory leaks, only that they are not reproducible with a simple repo. For information on my progress with the other leaks, see #15215.
adding ...
Do you know if there is an update on this? I'm facing this problem while upgrading to Expo version 50, as we have a significant test set and some tests with a considerable heap size.

Example:

Current setup:

My current concerns:
Update: original heap size with useFakeTimers
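(For readers following the fake-timer angle raised below: a minimal per-file setup for this kind of comparison, using the standard Jest timer API, would look like this.)

```js
// Enable fake timers for every test in the file and restore real timers afterwards,
// so pending timer callbacks don't keep work queued between tests.
beforeEach(() => {
  jest.useFakeTimers();
});

afterEach(() => {
  jest.useRealTimers();
});
```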
Our setup is a bit over 3000 test files, sharded across 8 or 12 Jest shards, so we are running 250-400 test files in each shard. These run in parallel on shared hardware. Measuring performance is really difficult in this environment because the run times are so inconsistent. Our main concern was reducing memory usage, though. We run our suite with:
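(The actual invocation isn't preserved here; an illustrative sharded run, assuming Jest 28+ where `--shard` is available, looks something like the following.)

```bash
# One of N CI jobs runs one shard of the suite; the shard index would come from the CI provider.
npx jest --shard=3/8 --maxWorkers=50% --logHeapUsage
```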
These are the additional settings that helped with memory.
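(The settings list itself didn't survive in this copy of the thread; purely as an illustration, with values that are assumptions rather than the poster's actual numbers, V8 flags like these can be passed to Jest and its worker processes via NODE_OPTIONS.)

```bash
# Cap the old space and shrink the semi-space (new generation) so GC runs more aggressively.
NODE_OPTIONS='--max-old-space-size=2048 --max-semi-space-size=64' npx jest --logHeapUsage
```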
Read more about `max-semi-space-size`.
@CoryDanielson, great share. Nevertheless, I have used the commands you suggested, running 17 chunks concurrently with parallel runs within each chunk. I noticed the issue generally comes from how asynchronous code and memory allocation interact, as explained above. Your other suggestions are interesting, so I will try them next week and provide feedback. Do you use asynchronous testing syntax for your tests, and do you use Jest fake timers?
@EduardoAC We have ~3100 test files.
I know run time is not entirely related to memory, but our most problematic/flaky tests are component tests (React Testing Library) with lots of async calls, and a few that have large ... Are you running 17 chunks often? We use ...
You guys do an awesome job and we all appreciate it! 🎉
🐛 Bug Report
On a work project we discovered a memory leak choking our CI machines. Going down the rabbit hole, I was able to recreate the memory leak using Jest alone. Running many test files causes a memory leak. I created a stupid simple repo with only Jest installed and 40 tautological test files.

I tried a number of solutions from #7311 but to no avail. I couldn't find any solutions in the other memory related issues, and this seems like the most trivial repro I could find.
Workaround :'(
We run tests with the `--expose-gc` flag and add this to each test file:
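(The snippet itself isn't preserved in this copy of the thread; the usual shape of this workaround, assuming Jest is launched through node with `--expose-gc`, is shown below.)

```js
// Force a garbage collection after every test so heap growth between files stays bounded.
// global.gc is only defined when Node is started with --expose-gc.
afterEach(() => {
  if (global.gc) {
    global.gc();
  }
});
```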
To Reproduce
Steps to reproduce the behavior:
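(The numbered steps are missing from this copy of the issue; roughly, using the repo linked below, the reproduction amounts to the following.)

```bash
git clone https://github.com/javinor/jest-memory-leak
cd jest-memory-leak
npm install
# Run all 40 tautological test files in a single process and print heap usage per file;
# the reported heap size grows from file to file.
npx jest --logHeapUsage --runInBand
```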
Expected behavior
Each test file should take the same amount of memory (give or take)
Link to repl or repo (highly encouraged)
https://github.com/javinor/jest-memory-leak
Run `npx envinfo --preset jest` and paste the results here: