buffer: support multiple pools for Buffer.allocUnsafe #30611
Comments
I think this could be implemented relatively easily as an npm module? It wouldn't really have to do more than allocate a large buffer using …
Yes, that could be an npm module. But I'm certain that it's a natural addition to the standard API, which already implements such a pool, yet it's global and its configuration is really inconvenient for libraries. So, having it in the core will be a valuable addition. Nevertheless, I'll add this option to the Alternatives section. Thanks!
I know @BridgeAR has a proposal to improve this situation without introducing a new API.
@mcollina Although, I can imagine a totally different API that could be introduced in Node core. I'm speaking of a "classical" pool with buffer reuse. Such an API could reduce the amount of garbage, a problem that isn't solved by the mechanism we have inside of …
I think …
That's an interesting suggestion. Thanks for bringing it up! I've looked at …

I have some doubts about the performance of this approach. For large source buffers, the cost of bookkeeping all slices in the …

Potentially, this pooling mechanism could be implemented in the existing core API (the global pool for …).

If you think that this pool with …
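For illustration, the "classical" reuse-based pool being discussed could look roughly like the following userland sketch. This is a toy, and all names here (`BufferPool`, `acquire`, `release`) are hypothetical, not an actual or proposed core API:

```javascript
// Toy sketch of a "classical" reuse-based buffer pool: released buffers
// are handed out again instead of being left to the garbage collector.
// All names are hypothetical illustrations, not a real API.
class BufferPool {
  constructor(bufSize, capacity) {
    this.bufSize = bufSize;   // fixed size of every buffer in this pool
    this.capacity = capacity; // max number of buffers kept for reuse
    this.free = [];           // released buffers available for reuse
  }
  acquire() {
    // Reuse a released buffer when possible; otherwise allocate a new one.
    return this.free.pop() ?? Buffer.allocUnsafe(this.bufSize);
  }
  release(buf) {
    // Keep the buffer for reuse instead of producing garbage.
    if (buf.length === this.bufSize && this.free.length < this.capacity) {
      this.free.push(buf);
    }
  }
}

// Usage: a buffer returned to the pool is handed out again.
const pool = new BufferPool(4096, 16);
const buf = pool.acquire();
pool.release(buf);
console.log(pool.acquire() === buf); // true: the same buffer was reused
```

The design choice worth noting is the one raised above: unlike the slicing pool in core, a reuse-based pool must track releases, which is exactly the bookkeeping cost being questioned.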
I just opened a PR with my approach. It switches the current static pool to become dynamic instead.
@BridgeAR In this issue, I'm advocating for library authors who want to make sure that the performance of their libraries is not degraded by a misbehaving library. Also, it can be combined with your dynamic pool approach. Moreover, if both features become available in the standard API, Node could start using a dedicated dynamic pool for its own purposes.
It would indeed not be ideal if any third-party library changed the default. That should only be done by the "actual application".

I advocate against multiple pools for the following reason: having a single pool takes advantage of the pool as often as possible without increasing the overall memory footprint a lot. With multiple distinct pools, it's neither possible for the application to prevent any pool usage in a simple way (because it may have very tight memory constraints), nor is the memory actually used ideally (quite a bit of each pool will be unused while it's not in the hot code path, and even at that point, multiple pools could be in use, which increases the overall memory usage).

We have multiple APIs that are not meant to be changed by library authors, only by the application itself. This applies to pretty much all global default values. It would be great to fix these in general, but increasing the overall memory usage by defining multiple pools does not sound ideal to me.
Sounds like an enhancement for the standard documentation.
I get your point. In the end, it sounds like the npm ecosystem could benefit from a buffer pool library that not only implements …

Please feel free to close this issue.
Fwiw, I’m still a fan of the “actual pooling” approach laid out above. If we close this, I’d open a new issue about it, as it really seems like a good way forward here. |
I like the idea to have a way to reuse memory that won't be used anymore instead of allocating new parts. Is that going in the direction of what you originally had in mind (just by using a completely separate pool)? |
@addaleax
@BridgeAR …
Closing this one. Created #30683 to follow-up on the pooling mechanism. |
The problem
At the moment, `Buffer.allocUnsafe` uses a global pool (strictly speaking, it's not a "classical" object pool, but let's use this terminology), i.e. a pre-allocated internal buffer which is then sliced for newly allocated buffers, when possible. The size of the pool may be configured by setting the `Buffer.poolSize` property (8KB by default). Once the current internal buffer fills up, a new one is allocated by the pool.

Here is the related fragment of the official documentation:
The problem with this approach is that the pool is global, so its size can't be configured by a Node.js library. Of course, a library can change `Buffer.poolSize`, but that may clash with another library or with the application, which may also change it. Also, if the application changes this setting to a sub-optimal value, it may impact the performance of libraries used within the application.

Many libraries, especially client libraries, use `Buffer.allocUnsafe` on their hot path. So, a predictable and optimal pool size is critical for their performance. Thus, it would be great to have a way to isolate libraries from each other and from the application.

The solution
The idea is to introduce the concept of multiple identified pools for `Buffer.allocUnsafe`. See the snippet:

With this change, libraries are free to use their own dedicated pool with a size that they find optimal.
Optionally, another function, `Buffer.destroyUnsafePool`, could be added. This function would destroy a pool, so that its buffer could be GCed. But I don't think it's a must-have, as with the current API it's not possible to destroy the global pool either.

Note that this change is backward-compatible, thus it won't impact the current API and its behavior.
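The proposal's original snippet did not survive in this copy, but the effect of a dedicated identified pool can be emulated in userland today by mimicking core's slicing behavior. A minimal sketch, where the `UnsafePool` name and shape are assumptions for illustration, not the actual proposed signature:

```javascript
// Userland emulation of a dedicated pool: pre-allocate an internal buffer
// and slice it, mimicking Buffer.allocUnsafe's pooling but isolated from
// the global Buffer.poolSize. The UnsafePool name is hypothetical.
class UnsafePool {
  constructor(poolSize) {
    this.poolSize = poolSize;
    this.pool = Buffer.allocUnsafe(poolSize);
    this.offset = 0;
  }
  allocUnsafe(size) {
    // Large requests bypass the pool, as in core.
    if (size >= this.poolSize >>> 1) return Buffer.allocUnsafe(size);
    if (this.offset + size > this.poolSize) {
      // The current internal buffer filled up: allocate a fresh one.
      this.pool = Buffer.allocUnsafe(this.poolSize);
      this.offset = 0;
    }
    const buf = this.pool.subarray(this.offset, this.offset + size);
    this.offset += size;
    return buf;
  }
}

// A library keeps its own pool with a size it finds optimal:
const myPool = new UnsafePool(64 * 1024);
const a = myPool.allocUnsafe(256);
const b = myPool.allocUnsafe(256);
console.log(a.buffer === b.buffer); // true: both slices share the pool's buffer
```

A core implementation could expose the same behavior behind pool identifiers, so libraries would not each have to carry such a helper.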
I'd like to hear some positive feedback from Node collaborators first, so that I can proceed with the implementation.
Alternatives
This could be implemented as an npm module. But I'm certain that it's a natural addition to the standard API, which already implements such a pool, yet it's global and its configuration is really inconvenient for libraries. So, having it in the core will be a valuable addition.
Another thing to mention is that the default size doesn't seem large enough (at least for certain use cases) and could be increased (see #27121). But this won't help with the isolation.