zlib deflate results in a memory leak #8871
Are you sure the node version is the same between the two?
Yes. I've also recreated the issue on v4.5 and 5.12.
@jasonmcaffee have you experimented with the sync version?
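For reference, a minimal sketch of the synchronous variant mentioned above: zlib.deflateSync compresses on the calling thread instead of queueing work on the libuv threadpool, so no backlog of pending requests can build up, at the cost of blocking the event loop.

```js
const zlib = require('zlib');

const payload = Buffer.from(JSON.stringify({ some: 'data' }));

// Each call blocks until the buffer is compressed; nothing is queued,
// so no backlog of pending requests accumulates in the threadpool.
let compressed;
for (let i = 0; i < 30000; ++i) {
  compressed = zlib.deflateSync(payload);
}
console.log('deflated size:', compressed.length, 'bytes');
```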
The loop is creating 30,000 concurrent zlib.deflate requests that are handed off to a threadpool; i.e., it's creating a giant backlog. They are dispatched eventually, and that is why memory goes down again on OS X. On Linux (or more precisely, with glibc), something else happens. When you trace the allocations, you'll see that Node.js uses malloc/new and glibc translates those overwhelmingly to brk system calls. The problem is that the "program break" goes up, but a few allocations with longer lifetimes make it impossible for the break to go down again. There is no real memory leak; it's ordinary - but rather catastrophic - memory fragmentation. IOW, can confirm, but Node.js is not the root cause. Maybe we can sidestep it by using a custom allocator like jemalloc, but that has a lot of ramifications; it's not something we can do lightly.
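One way to sidestep the backlog described above is to cap how many deflate calls are in flight at once instead of firing all 30,000 in a single tick. A minimal sketch; the helper name and the limit of 16 are arbitrary choices for illustration, not anything from Node core:

```js
const zlib = require('zlib');
const { promisify } = require('util');

const deflate = promisify(zlib.deflate);
const payload = Buffer.from(JSON.stringify({ some: 'data' }));

// Hypothetical helper: run `total` deflate calls, but keep at most
// `limit` of them queued in the threadpool at any time.
async function deflateWithLimit(total, limit) {
  let started = 0;

  async function worker() {
    while (started < total) {
      started++;
      await deflate(payload);
    }
  }

  await Promise.all(Array.from({ length: limit }, worker));
}

deflateWithLimit(30000, 16).then(() => {
  console.log('done, rss:', process.memoryUsage().rss);
});
```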
Thanks @bnoordhuis for the breakdown and explanation.
FWIW, we just fixed a problem that had similar symptoms, and our solution was to disable transparent huge pages. I'd encourage you to investigate and see if a) they are turned on, and b) if disabling them fixes your problem.
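For anyone unsure how to check the first point: on Linux the current THP mode is exposed under /sys and can be read directly from Node. A small sketch; the path below is the usual location, though some distros place it elsewhere:

```js
// Prints something like "always [madvise] never"; the bracketed entry
// is the active mode. Linux-only.
const fs = require('fs');

try {
  const mode = fs.readFileSync(
    '/sys/kernel/mm/transparent_hugepage/enabled', 'utf8');
  console.log('THP mode:', mode.trim());
} catch (err) {
  console.log('Could not read THP setting:', err.message);
}
```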
Could this be an issue if multiple clients are triggering async deflate calls within a few ms of each other? Should people just avoid deflate() altogether?
Does this go much further than just deflate()? Could this be affecting most async calls which allocate memory on Linux?
Does anyone know which versions of node do not have this bug? We are seeing this on the latest Node 8 but also on all versions of 6 that I sampled. I run the code snippet above in node invoked with --expose-gc and call global.gc() manually.
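For context, a minimal sketch of that kind of check: start the process with --expose-gc (which is what makes global.gc available), then periodically force a collection and watch RSS against the JS heap. If RSS stays high even though heapUsed drops, the memory is being held outside the V8 heap.

```js
// Run with: node --expose-gc <script>  (the script name is up to you)
setInterval(() => {
  global.gc();
  const { rss, heapUsed } = process.memoryUsage();
  console.log(
    `rss=${(rss / 1048576).toFixed(1)} MB ` +
    `heapUsed=${(heapUsed / 1048576).toFixed(1)} MB`);
}, 5000);
```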
@TylerBrock Did you read through this issue and the linked issues? If you did, then you already know the answer. |
I did, but I'm still not sure, so I apologize if I missed something. People have suggested it could be the new zlib, the new V8, or something in crypto (possibly related to zlib), but going back to versions of node with older zlib and V8 (checked via process.versions) yielded similar results. Would you mind summarizing where we are at?
From #8871 (comment):
In other words, this is not a Node.js, or zlib, or V8 issue, but rather caused by how the system memory allocator works. |
Disabling transparent huge pages (or setting it to madvise) may also help. |
Must be something different on my end because THP is already set to madvise. Thanks for the clarification.
@TylerBrock Did you come to any solution? Disabling THP on our end gave us a little bit of headroom, but it's to the point where a machine blows up every day or so.
@STRML We just went back to node 6. Things run for months and months without leaking memory. It's a bummer though; I'd love to use node 8. It's much faster, more memory efficient (when not leaking), and has async/await. What I don't understand is why memory fragmentation is an issue now. Did the allocation strategy or allocator change between 6 and 8?
Not in Node – possibly in V8, but I don’t think it would have effects like this. Fwiw, I couldn’t reproduce measurable differences in behaviour/memory usage using the original issue comment script here.
That's interesting @TylerBrock, I wouldn't have expected the version to matter. I see the same results in all versions I tested (>= 4). I have noticed that
@puzpuzpuz It’s certainly worth experimenting with – does it fix the issue here for you? If yes, I’d say go for a PR :)
@addaleax yes, it seems to mitigate the issue (see #8871 (comment)). According to my experiment, each zlib instance occupies around 87 KB when the original script is run. I'll try to polish the patch and submit a fix PR.
I've submitted #34048, which implements lazy zlib initialization when possible. Behavior of the fix for the original script is described here: #8871 (comment). The difference with the latest version can be seen with the following script:

```js
const zlib = require('zlib');

const message = {
  some: "data"
};
const payload = Buffer.from(JSON.stringify(message));

setInterval(() => {
  for (let i = 0; i < 30000; ++i) {
    zlib.deflate(payload, () => { });
  }
}, 2000);
```

Any feedback, testing on other platforms, and code reviews are appreciated.
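For readers who want the intuition behind lazy initialization, here is a conceptual sketch only; the class and method names are made up and do not mirror the internals changed by #34048. The idea is to defer allocating the native zlib state until work actually starts, so that thousands of queued-but-not-yet-running requests don't each pin a fully initialized context (roughly 87 KB each, per the measurement above).

```js
const zlib = require('zlib');

// Illustrative only: a wrapper that creates its native deflate stream
// on first use rather than at construction time.
class LazyDeflate {
  #stream = null;

  write(chunk, callback) {
    if (this.#stream === null) {
      this.#stream = zlib.createDeflate(); // allocated lazily, on demand
    }
    this.#stream.write(chunk, callback);
  }

  end() {
    if (this.#stream !== null) this.#stream.end();
  }
}

// Constructing many of these is cheap until they are actually used.
const instances = Array.from({ length: 1000 }, () => new LazyDeflate());
console.log('created', instances.length, 'lazy wrappers');
```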
Closing this one as #34048 has landed. Please let me know if you still face the same behavior, so that I can reopen the issue.
@puzpuzpuz It seems that #34048 only mitigates the issue but does not fix it. Am I right? I'm using this test on Ubuntu 18.04:

```js
'use strict';

const util = require('util');
const zlib = require('zlib');

const deflate = util.promisify(zlib.deflate);
const payload = Buffer.from(JSON.stringify({ some: 'data' }));

async function testLeak() {
  const promises = [];

  for (let i = 0; i < 30000; i++) {
    promises.push(deflate(payload));
  }

  await Promise.all(promises);
}

async function main() {
  console.time('testLeak');

  for (let i = 0; i < 10; i++) {
    await testLeak();
  }

  console.timeEnd('testLeak');
}

main()
  .then(function () {
    setTimeout(function () {
      console.log('Memory:', process.memoryUsage());
    }, 10000);
  })
  .catch(console.error);
```

and the memory usage grows linearly with the number of times testLeak() is called.
@lpinca That may be true, as I did some experiments with the fix applied to the reproducer script, but I would appreciate any feedback from the community. I'm going to reopen this issue, if necessary, once we get some feedback; feel free to reopen it now if you think that's warranted. The fix seems to decrease the fragmentation, but it doesn't get rid of it completely, so it's probably correct to say that it "mitigates" the issue rather than "fixes" it. I've also tried to run your snippet on the latest build (which includes the fix):
And on b0b52b2 (which doesn't include the fix):
The RSS is noticeably lower with the fix, yet it's still relatively large when compared with the heap size.
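A quick way to see that gap between RSS and heap for yourself is to fire one burst of deflate calls and print process.memoryUsage() after the work has drained. A sketch; the 15-second delay is arbitrary, just long enough for the backlog to clear on most machines:

```js
const zlib = require('zlib');

const payload = Buffer.from(JSON.stringify({ some: 'data' }));

for (let i = 0; i < 30000; ++i) {
  zlib.deflate(payload, () => { });
}

setTimeout(() => {
  const { rss, heapUsed, external } = process.memoryUsage();
  // heapUsed and external drop back down once the callbacks have run;
  // on glibc, rss can stay much higher - that gap is the fragmentation.
  console.log('rss      :', (rss / 1048576).toFixed(1), 'MB');
  console.log('heapUsed :', (heapUsed / 1048576).toFixed(1), 'MB');
  console.log('external :', (external / 1048576).toFixed(1), 'MB');
}, 15000);
```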
Yes, I get similar results on Linux. With the mitigation patch:
Without the mitigation patch:
On Windows the memory usage is stable at ~70 MB with or without the mitigation fix. I should try with jemalloc on Linux. I think it makes sense to reopen.
I was also using Ubuntu 18.04 with glibc 2.27.
OK, reopening it then.
@lpinca I've tried that with jemalloc 3.6.0-11 and here is the output:
Try using - zlib-bug-inflate
That's without the mitigation patch, right? I also tried with jemalloc and there is an order of magnitude difference (~40 MB with the patch and ~400 MB without it).
@lpinca no, that's with the patch. I may be doing something wrong, as I don't see an order of magnitude difference.
Nvm, it seems it was me who did something wrong:
It seems jemalloc is a working workaround.
Better than that, I mean that with jemalloc there is no leak at all.
Strictly speaking, memory fragmentation is not a leak, and the allocator should be able to get rid of it once the allocation rate drops and fewer allocated memory chunks remain. If jemalloc doesn't show noticeable fragmentation in the discussed scenario, that's great. As a summary, it would be great to get some feedback from the community on v14.7.0 (which includes #34048), and on v14.7.0 (or any previous version) running with jemalloc.
Yes "leak" is not the correct term in this case. My point is that with jemalloc memory usage remains pretty much stable regardless of how many times the |
How does this look in 2023 with Node 18+? There have been no comments on this issue for 3 years, so I'm wondering if it's safe to use permessage-deflate to compress websockets; I was pointed to this issue.
I don’t know. We’ve moved to another solution. You would have to test it to be sure.
I've run the test from #8871 (comment). During the test, the process memory usage consistently oscillated between around 600 MB and 6.4 GB.
I'm using the graylog2 package for logging, and we ran into significant memory leak issues.
After tracking it down, I found that zlib.deflate is the source of the issue.
The issue is magnified when running the code inside Docker with the latest node distribution.
Running the reproduction code (the zlib.deflate loop shown earlier in this thread) on my MacBook Pro results in memory spiking to ~3 GB, then being released down to 600 MB.
Running the same code in the latest node Docker distro results in memory spiking to ~3 GB, and it is never released.
This has resulted in our Docker containers crashing due to memory exhaustion.