Memory leak when calling fs.write() in long synchronous loop #11289
Comments
I think this is probably expected. You're continuously queuing up write requests to the thread pool, which has a fixed number of threads available. If you write synchronously (…)
@mscdex I understand that some structures may accumulate in memory as long as the buffer is not flushed, but note that the memory leak is still present AFTER everything has been written to the file and the file was closed. From:
What does "unsafe" mean here? Should the documentation be more specific?
Quite possibly a variation on #8871 (comment) (summary: memory fragmentation, not a memory leak) but could also be #11077.
Is this close-able at this point or should this remain open?
I don't think there is anything actionable. I'll close it out. Thanks for the bug report, @aalexgabi.
I think it's actionable if it's an issue with fragmentation. @aalexgabi's gist should work with Node. Sure, it's pushing the edge with so many temporary buffers, but it's a reasonable stress test. Node should be able to handle these kinds of edge cases without being fragile. This could be fixed through a better allocator.
There are two things here:
Point 1 is not a bug but a peculiarity: since the test case doesn't pass a callback to fs.write(), it uses a deprecated calling pattern. A non-deprecated calling pattern shows #8871 (comment), memory fragmentation. You say it "could be fixed through a better allocator"; I say that's not our bug to fix. Take it up with glibc, or experiment with (…). EDIT: To be clear, a custom allocator is not out of the question, but for other reasons than as a workaround for a glibc deficiency.
Yes, this is the thing.
Perhaps jemalloc would be another way forward.
Antirez has some comments on moving to jemalloc from glibc a few years back: http://oldblog.antirez.com/post/everything-about-redis-24.html
Also, Jason Evans from 2011 (https://www.facebook.com/notes/facebook-engineering/scalable-memory-allocation-using-jemalloc/480222803919):
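For anyone wanting to experiment along these lines, allocator swaps are typically done at process launch. This is a sketch; the library path and availability are assumptions and vary by distro:

```shell
# Tune glibc: cap the number of malloc arenas (can reduce fragmentation).
MALLOC_ARENA_MAX=2 node app.js

# Or preload jemalloc without rebuilding node (path varies by system).
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 node app.js
```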
I'm not rabidly opposed to jemalloc (libc interop issues aside), but the "other reasons" I alluded to are exploiting known facts about allocations; e.g., that most memory is allocated and freed on the main thread (no synchronization needed), or that memory for a fs request should come from a different bin because it stays around longer than the temp storage used for flattening a string. That kind of thing.
@aalexgabi Yes, I mentioned that in #11289 (comment).
I have the same issue with writeFileSync.
To be honest, I'm disappointed with the closure of this issue. The first "real world" example that comes to mind is a logger used by a service with a highly variable workload. During intense workload, if the CPU is faster than the disk for a given period of time, all the queued log write calls will leak memory that is never recovered in the lifetime of the service. In a way this means that Node applications are "required" to be restarted/killed from time to time if there is no way to implement a back-pressure mechanism for handling IO writes.
I think I found a memory leak when using fs.write() from a long synchronous loop. Here is the script that I used to reproduce:
With node v0.12.9:
After the process closed the file, 1.1 GB is still used by the process:
The file contains all the lines:
The file is 12 MB, so I would expect the node process to eventually take around 70 MB of memory, given that it takes 17 MB with an empty VM:
If I don't call the gc manually I get:
With node v7.5.0:
I have also noticed that v7.5.0 is several times slower when writing to file.