
Memory leak introduced in v10.15.3 #26667

Closed
anthony-tuininga opened this issue Mar 14, 2019 · 46 comments

@anthony-tuininga
Contributor

  • Version: v10.15.3
  • Platform: Linux
  • Subsystem: napi

Our module, which has been migrated to N-API, is experiencing a memory leak in version 10.15.3 but not in version 10.15.2. I verified that reverting the changes made in this pull request eliminates the memory leak. Clearly that isn't the right solution, but hopefully it will point in the right direction!

@addaleax added the node-api (Issues and PRs related to the Node-API) label Mar 14, 2019
@addaleax
Member

Is there any chance you could share a reproduction?

@anthony-tuininga
Contributor Author

I have a reproduction that works with our node-oracledb driver -- but if you don't have an Oracle Database handy that won't help much. If you do I can share the code that causes the issue; otherwise, it will take some time to develop a test case that demonstrates the issue!

@addaleax
Member

Depending on the complexity, looking at the code might help, because we know we’re looking for code that uses napi_create_reference() et al?

/cc @mhdawson as the PR author

@anthony-tuininga
Contributor Author

The code for the driver can be found here. The code that is most likely the source of the issue can be found here. Specifically, it acquires the value of a reference (for the constructor), creates an instance and then calls napi_wrap() to associate a structure with the newly created object.
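
For reference, that pattern looks roughly like the sketch below (a minimal illustration with hypothetical names such as my_finalize and create_wrapped_instance, not the actual node-oracledb source; error checking omitted):

#include <node_api.h>
#include <stdlib.h>

// Hypothetical sketch of the reference/instance/wrap pattern described above.
static void my_finalize(napi_env env, void* data, void* hint) {
  free(data);  // release the native structure when the JS object is collected
}

static napi_value create_wrapped_instance(napi_env env, napi_ref constructor_ref) {
  napi_value constructor, instance;
  // acquire the constructor previously stored in a reference
  napi_get_reference_value(env, constructor_ref, &constructor);
  // create a new instance of the class
  napi_new_instance(env, constructor, 0, NULL, &instance);
  // associate a native structure with the newly created object;
  // my_finalize should run when the object is garbage collected
  void* native_data = calloc(1, 64);
  napi_wrap(env, instance, native_data, my_finalize, NULL, NULL);
  return instance;
}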

@anthony-tuininga
Contributor Author

I've attached a zip file containing a simple module that creates an instance and wraps it:
issue_26667.zip. The pattern suggests that there is a race condition involved and that it only occasionally leaks -- but if you do it often enough it becomes obvious. Hopefully this is sufficient to track down the issue. After building the module, you can run the script as follows:

NODE_PATH=build/Release node --expose-gc demo.js

This will create a file called stats.txt which contains a dump of the number of iterations processed as well as the RSS and heap size.

@addaleax
Member

The pattern suggests that there is a race condition involved and that only occasionally it leaks -- but if you do it often enough it becomes obvious. Hopefully this is sufficient to track down the issue.

@anthony-tuininga Are you saying that only occasionally there is a leak visible when running the program, or that you need many iterations within a single process in order to see the leak?

Because I’m having a hard time seeing an actual leak here – at the very least, the finalizer callback always seems to be called for exactly 25000 objects each iteration (so, the expected amount), at least locally…

@anthony-tuininga
Contributor Author

anthony-tuininga commented Mar 16, 2019

What I meant was that the amount of memory lost is not sufficient to indicate that memory is being lost each iteration. Instead memory must be lost on some iterations only. Unfortunately, I ran both 10.15.2 and 10.15.3 with my simple module and discovered that both leak memory, albeit very slowly. I ran each of the loops 4 billion times and heap memory increased about 4 MB in both cases and RSS about 10 MB. So my example doesn't demonstrate the problem. :-(

But there is a marked difference between the two versions when using the node-oracledb driver test case and the difference goes away if I revert the pull request I mentioned earlier. I'll see if I can find a better test case next week.

@ChALkeR added the memory (Issues and PRs related to memory management or memory footprint) label Mar 16, 2019
@anthony-tuininga
Contributor Author

anthony-tuininga commented Mar 18, 2019

Ok. I created a cut-down version of the node-oracledb module which has all of the Oracle bits ripped out. It demonstrates the problem. With Node.js 10.15.2 memory stabilises after around 350,000 iterations, but with Node.js 10.15.3 it increases by around 100 bytes/iteration.

issue_26667.zip

Once unzipped and built, run with the following command:

NODE_PATH=. node --expose-gc demo.js

@anthony-tuininga
Contributor Author

@addaleax, with the new test case are you able to replicate the issue?

@mhdawson
Member

This is what I see running your example on master:
Num Iters,RSS,Heap Total,Heap Used,Ext
25000,84611072,43147264,2840488,9146
50000,82440192,39477248,2716024,9146
75000,84344832,38952960,2722368,9146
100000,84377600,39477248,2724104,9146
125000,86728704,39477248,2724104,9146
150000,89051136,39477248,2724104,9146
175000,91439104,39477248,2724240,9146
200000,94646272,38428672,2725080,9146
225000,96698368,38952960,2725080,9146
250000,99041280,39477248,2713416,9146
275000,101924864,38428672,2725080,9146
300000,104325120,38428672,2725088,9146
325000,106901504,39477248,2725088,9146
350000,109899776,38428672,2725088,9146
375000,111890432,38428672,2725088,9146
400000,114360320,38952960,2725088,9146
425000,116314112,39477248,2725088,9146
450000,119160832,39477248,2725088,9146
475000,121499648,38952960,2713424,9146
500000,124059648,39477248,2725088,9146
525000,126201856,38952960,2725088,9146
550000,128806912,38952960,2713424,9146
575000,131485696,38952960,2725088,9146
600000,134086656,38952960,2726368,9146
625000,136589312,39477248,2714704,9146
650000,139042816,38428672,2726368,9146
675000,141357056,38952960,2726368,9146


Is that similar to what you were seeing on 10.15.3?

@anthony-tuininga
Contributor Author

anthony-tuininga commented Mar 20, 2019

Yes. If you run with Node.js 10.15.2 you will see that the RSS remains relatively stable.

Num Iters,RSS,Heap Total,Heap Used,Ext
25000,136855552,42713088,4252656,8272
50000,140029952,43237376,4256360,8272
75000,141082624,43761664,4264688,8272
100000,142774272,44285952,4251600,8272
125000,147193856,44810240,4263944,8272
150000,147931136,45334528,4252536,8272
175000,148590592,45334528,4264080,8272
200000,148500480,45334528,4264120,8272
225000,148443136,45334528,4252744,8272
250000,148815872,45858816,4264136,8272
275000,149032960,45858816,4264184,8272
300000,149032960,45858816,4252744,8272
325000,149032960,45858816,4264152,8272
350000,149037056,45858816,4264120,8272
375000,149037056,45858816,4252744,8272
400000,149037056,45858816,4252728,8272
425000,149045248,45858816,4264168,8272
450000,149045248,45858816,4252760,8272
475000,149045248,45858816,4264184,8272
500000,149045248,45858816,4252760,8272
525000,148520960,45334528,4264152,8272
550000,148656128,45334528,4264136,8272
575000,148709376,45334528,4264136,8272
600000,148873216,45334528,4264152,8272
625000,148852736,45858816,4252776,8272
650000,149315584,45858816,4264104,8272
675000,149315584,45858816,4264152,8272

@mhdawson
Member

I'll run with valgrind to see if anything obvious shows up.

@mhdawson
Member

@hashseed one thing I thought of is that I think the check in #24494 assumes that the Finalizer called by the gc will run on the main thread as opposed to some other thread. Can you confirm that is a valid assumption?

@addaleax
Member

@mhdawson Yes, Finalizer callbacks run on the main thread.

@hashseed
Member

Yup. Finalizers need to run on the main thread.

@mhdawson
Member

mhdawson commented Mar 21, 2019

After 575000 iterations with

NODE_PATH=. valgrind --leak-check=full node --expose-gc demo.js >valgrindresults 2>&1

Num Iters,RSS,Heap Total,Heap Used,Ext
25000,301633536,42622976,2649888,9078
50000,313131008,40001536,2626792,9110
75000,341643264,41050112,2631344,9110
100000,357126144,41050112,2634272,9110
125000,368504832,40001536,2634272,9110
150000,382361600,40001536,2634272,9110
175000,412848128,40525824,2634408,9110
200000,428564480,41050112,2635248,9110
225000,438902784,40001536,2635264,9110
250000,451485696,40525824,2635264,9110
275000,478711808,41050112,2635264,9110
300000,490356736,40001536,2635264,9110
325000,505950208,40525824,2635264,9110
350000,517595136,40525824,2635264,9110
375000,555458560,40525824,2635264,9110
400000,567783424,40001536,2635264,9110
425000,582340608,40525824,2635264,9110
450000,608468992,41050112,2635264,9110
475000,619352064,40525824,2635264,9110
500000,636342272,40525824,2635264,9110
525000,648298496,40525824,2635264,9110
550000,672575488,40001536,2635264,9110
575000,687759360,40001536,2635264,9110
==24888== HEAP SUMMARY:
==24888==     in use at exit: 66,032,860 bytes in 612,932 blocks
==24888==   total heap usage: 14,117,119 allocs, 13,504,187 frees, 12,227,047,596 bytes allocated
.
.
.

==24888== LEAK SUMMARY:
==24888==    definitely lost: 0 bytes in 0 blocks
==24888==    indirectly lost: 0 bytes in 0 blocks
==24888==      possibly lost: 47,237 bytes in 583 blocks
==24888==    still reachable: 65,985,623 bytes in 612,349 blocks
==24888==         suppressed: 0 bytes in 0 blocks
==24888== Reachable blocks (those to which a pointer was found) are not shown.
==24888== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==24888==
==24888== For counts of detected and suppressed errors, rerun with: -v
==24888== ERROR SUMMARY: 1188705 errors from 442 contexts (suppressed: 0 from 0)

@mhdawson
Member

Looking at the details in the valgrind results, nothing stands out, and it only reports "possibly lost: 47,237 bytes in 583 blocks".

@mhdawson
Member

I'll try a run with --show-leak-kinds=all to see if anything interesting is shown for reachable blocks.

@anthony-tuininga
Contributor Author

I ran valgrind with Node.js 10.15.2 and 10.15.3 for 250,000 iterations using this script.

Here are the results for the different versions:
valgrind-10.15.2.txt
valgrind-10.15.3.txt

You'll note that the one for 10.15.3 shows leaks in napi_create_reference() whereas the one for 10.15.2 does not mention it at all. Hope this helps!

@mhdawson
Member

With --show-leak-kinds=all I do see the following:

==29106== 24,803,016 bytes in 442,911 blocks are still reachable in loss record 1,986 of 1,987
==29106==    at 0x4C2B0E0: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==29106==    by 0x94D451: napi_create_reference (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==    by 0x98527C6: njsBaton_create (in /home/mhdawson/check2/issue_26667/build/Release/issue_26667.node)
==29106==    by 0x98550C5: njsUtils_createBaton (in /home/mhdawson/check2/issue_26667/build/Release/issue_26667.node)
==29106==    by 0x98543DF: njsSodaCollection_insertOneAndGet (in /home/mhdawson/check2/issue_26667/build/Release/issue_26667.node)
==29106==    by 0x9464A4: v8impl::(anonymous namespace)::FunctionCallbackWrapper::Invoke(v8::FunctionCallbackInfo<v8::Value> const&) (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==    by 0x3F8B80EC72C0: ???
==29106==    by 0x3F8B80EC7833: ???
==29106==    by 0x3F8B80ED3D3A: ???
==29106==    by 0x19C0128: ??? (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==    by 0x19E7B0D: ??? (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==    by 0x19A6B86: ??? (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==
==29106== 29,665,888 bytes in 529,748 blocks are still reachable in loss record 1,987 of 1,987
==29106==    at 0x4C2B0E0: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==29106==    by 0x94D451: napi_create_reference (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==    by 0x98527C6: njsBaton_create (in /home/mhdawson/check2/issue_26667/build/Release/issue_26667.node)
==29106==    by 0x98550C5: njsUtils_createBaton (in /home/mhdawson/check2/issue_26667/build/Release/issue_26667.node)
==29106==    by 0x98543DF: njsSodaCollection_insertOneAndGet (in /home/mhdawson/check2/issue_26667/build/Release/issue_26667.node)
==29106==    by 0x9464A4: v8impl::(anonymous namespace)::FunctionCallbackWrapper::Invoke(v8::FunctionCallbackInfo<v8::Value> const&) (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==    by 0x3F8B80EC72C0: ???
==29106==    by 0x3F8B80EC7833: ???
==29106==    by 0x3F8B80ECE51A: ???
==29106==    by 0x19C0128: ??? (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==    by 0x19E7B0D: ??? (in /home/mhdawson/newpull/land/node/out/Release/node)
==29106==    by 0x19A6B86: ??? (in /home/mhdawson/newpull/land/node/out/Release/node)

@mhdawson
Member

Staring at the code, I don't see how the reference could avoid being deleted either when Delete is called or when the finalizer runs. I guess the next step would be to try to identify whether the finalizers have been run or not. If there is an option to print information about what's in the finalization queue for V8, that would be helpful.

@hashseed
Member

@ulan @mlippautz

@mhdawson
Member

Added a printf to show the number of References created versus the number of times the finalizer was called (a schematic sketch of this kind of instrumentation follows the counts below). It shows that the gap between References created and finalizer invocations is slowly growing:

ReferenceCount: 120615, Finalize count: 100000
Processed 25000 iterations...
ReferenceCount: 245215, Finalize count: 200000
Processed 50000 iterations...
ReferenceCount: 363529, Finalize count: 300000
Processed 75000 iterations...
ReferenceCount: 481237, Finalize count: 400000
Processed 100000 iterations...
ReferenceCount: 610009, Finalize count: 500000
ReferenceCount: 726385, Finalize count: 600000
Processed 125000 iterations...
ReferenceCount: 844669, Finalize count: 700000
Processed 150000 iterations...
ReferenceCount: 962929, Finalize count: 800000
Processed 175000 iterations...
ReferenceCount: 1080861, Finalize count: 900000
Processed 200000 iterations...
ReferenceCount: 1210063, Finalize count: 1000000
ReferenceCount: 1326451, Finalize count: 1100000
Processed 225000 iterations...
ReferenceCount: 1444675, Finalize count: 1200000
Processed 250000 iterations...
ReferenceCount: 1562929, Finalize count: 1300000
Processed 275000 iterations...
ReferenceCount: 1681177, Finalize count: 1400000
Processed 300000 iterations...
ReferenceCount: 1810015, Finalize count: 1500000
ReferenceCount: 1926411, Finalize count: 1600000
Processed 325000 iterations...
ReferenceCount: 2044675, Finalize count: 1700000
Processed 350000 iterations...
ReferenceCount: 2163493, Finalize count: 1800000
Processed 375000 iterations...
ReferenceCount: 2281773, Finalize count: 1900000
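
For illustration, counter instrumentation of this kind could look like the following sketch (schematic only: the helper names and call sites are hypothetical, and this is not the actual patch that produced the numbers above):

#include <atomic>
#include <cstdint>
#include <cstdio>

// Schematic counters; in the experiment described above they would be bumped
// from the reference-creation path and from the finalize callback
// (hypothetical call sites, indicated only in comments).
static std::atomic<uint64_t> reference_count{0};
static std::atomic<uint64_t> finalize_count{0};

inline void NoteReferenceCreated() {  // e.g. call where a Reference is constructed
  ++reference_count;
}

inline void NoteFinalizeRan() {       // e.g. call where the finalizer actually runs
  uint64_t f = ++finalize_count;
  if (f % 100000 == 0)
    fprintf(stderr, "ReferenceCount: %llu, Finalize count: %llu\n",
            (unsigned long long)reference_count.load(),
            (unsigned long long)f);
}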

@mhdawson
Member

mhdawson commented Mar 22, 2019

From the instrumentation I added, it looks to me that despite this being called:

   _persistent.SetWeak(
          this, FinalizeCallback, v8::WeakCallbackType::kParameter);

FinalizeCallback is not invoked in some percentage of the calls (maybe 25%).

Using this check in Delete

 if (reference->RefCount() != 0 || (reference->_delete_self) || (reference->_finalize_ran)) {

instead of

 if ((reference->_delete_self) || (reference->_finalize_ran)) {

This might be a good optimization, and it does remove the leak; however, it would likely just hide whatever the real problem is. I could not easily reproduce the original problem that #24494 addressed in order to validate that the change would be safe (the test case did not recreate the crash with the parts I'd backed out; maybe a fuller backout would replicate it).
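
Roughly, the proposed check would sit in the reference Delete logic along the lines of the sketch below (a simplified paraphrase, not the exact node_api.cc source):

// Simplified sketch: delete the reference immediately when it is safe to do
// so, otherwise mark it so the deferred finalization path deletes it later.
static void Delete(Reference* reference) {
  if ((reference->RefCount() != 0) ||   // the extra check proposed above
      (reference->_delete_self) ||
      (reference->_finalize_ran)) {
    delete reference;
  } else {
    // finalizer has not run yet; defer deletion to the weak-callback path
    reference->_delete_self = true;
  }
}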

Unfortunately, I'm on vacation next week and I have a lot to get done tomorrow in advance of being away, so I won't be able to investigate further for a while.

The real question is why FinalizeCallback is not being called after

  _persistent.SetWeak(
          this, FinalizeCallback, v8::WeakCallbackType::kParameter);

is called, in some percentage of the calls. I think the code related to that is mostly in the Node.js Persistent and V8 persistent handling, as opposed to anything in N-API itself.

EDIT: updated to clarify that it is only some % of the time where FinalizeCallback is not called.

@mhdawson
Member

@anthony-tuininga, if you have time, could you check:

  1. whether, when you back out n-api: handle reference delete before finalize #24494, the test js-native-api/test_reference fails
  2. if it does fail in 1), whether restoring n-api: handle reference delete before finalize #24494 and updating the check in Delete to be if ((reference->RefCount() != 0) || (reference->_delete_self) || (reference->_finalize_ran)) makes the js-native-api/test_reference test pass or fail.

It would at least confirm whether it's an optimization worth considering.

@anthony-tuininga
Contributor Author

anthony-tuininga commented Mar 22, 2019

Yes, if I back out #24494 the test js-native-api/test_reference fails. And if I add in the extra check after restoring #24494 it still passes. And yes, with that modified code the memory leak also appears to be gone. :-)

@mhdawson
Member

mhdawson commented Apr 2, 2019

@anthony-tuininga thanks for checking that out. I just got back from vacation last week. I think I'll submit a PR with that optimization, as it would be good to reduce pressure on the gc. I think we'd still want to look at the root cause of the problem, as the optimization would otherwise just cover it up.

@anthony-tuininga
Contributor Author

You're welcome. And discovering the root cause of the problem would definitely be a good idea! Let me know if there is anything else I can do to help.

@mhdawson
Member

mhdawson commented Apr 4, 2019

If you have time to help instrument/debug at the Node.js/V8 level, that would help move things along faster. If that is the case, let me know and we can get together to work on some specific next steps.

@anthony-tuininga
Contributor Author

I suppose that depends on how much time is required, but I can spare a few hours if that will help. :-)

@mhdawson
Member

mhdawson commented Apr 5, 2019

PR for the optimization: #27085. It needs an update; once I add it, @anthony-tuininga, I'm hoping you can help validate it.

mhdawson added a commit to mhdawson/io.js that referenced this issue Apr 5, 2019
nodejs#24494 fixed a crash
but resulted in increased stress on gc finalization. A leak
was reported in nodejs#26667 which
we are still investigating. As part of this investigation I
realized we can optimize to reduce amount of deferred finalization.
Regardless of the root cause of the leak this should be a
good optimization. It also resolves the leak for the case being
reported in nodejs#26667. The OP in 26667 has confirmed that he can
still recreate the original problem that 24494 fixed and that
the fix still works with this optimization
@mhdawson
Member

mhdawson commented Apr 5, 2019

@anthony-tuininga, it would be great if you could recheck with the updated content of the PR in #27085 on 10.x and your test case. The change should have the same result as what you tested, but it's good to be sure.

@anthony-tuininga
Contributor Author

Will do.

@anthony-tuininga
Contributor Author

I have verified that a copy of 10.15.3, patched with the optimization PR #27085, resolves the memory leak.

@mhdawson
Member

mhdawson commented Apr 8, 2019

@anthony-tuininga, I plan to land #27085 in master tomorrow.

@anthony-tuininga
Contributor Author

Great. Thanks!

danbev pushed a commit that referenced this issue Apr 9, 2019
#24494 fixed a crash
but resulted in increased stress on gc finalization. A leak
was reported in #26667 which
we are still investigating. As part of this investigation I
realized we can optimize to reduce amount of deferred finalization.
Regardless of the root cause of the leak this should be a
good optimization. It also resolves the leak for the case being
reported in #26667. The OP in 26667 has confirmed that he can
still recreate the original problem that 24494 fixed and that
the fix still works with this optimization

PR-URL: #27085
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
@orgads
Contributor

orgads commented Aug 15, 2019

Was this fixed in v10, or only in v12?

@mhdawson
Member

I believe it landed in master before 12.x was cut, so 12.x has #27085 as well.

@orgads
Contributor

orgads commented Aug 15, 2019

But this is a regression in 10, right? What's the backporting policy?

@rolftimmermans

rolftimmermans commented Oct 8, 2019

I am running into this issue or something very closely related. I am using buffers with external memory. The fix does not seem to help – I am testing this on Node.js v12.10.0 on macOS.

I have modified the test case to demonstrate this behaviour with napi_create_external_buffer(). The finalizer is never run at all!

issue_26667_buffer.zip
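
For context, the call being exercised is roughly the following (a minimal sketch with hypothetical names, not the exact code in the attached zip; error checking omitted):

#include <node_api.h>
#include <stdlib.h>

// Hypothetical sketch: allocate external memory, hand it to a Node.js Buffer,
// and free it from the finalizer.
static void free_external(napi_env env, void* data, void* hint) {
  free(data);  // expected to run when the buffer is garbage collected
}

static napi_value make_external_buffer(napi_env env, napi_callback_info info) {
  const size_t size = 64 * 1024;
  void* data = malloc(size);
  napi_value buffer;
  napi_create_external_buffer(env, size, data, free_external, NULL, &buffer);
  return buffer;
}

Calling something like make_external_buffer() in a tight loop is what exposes the behaviour described above.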

These are my results:

Num Iters,RSS,Heap Total,Heap Used,Ext
2500000,630390784,37289984,1959768,786747
5000000,939368448,38866944,2062784,786211
7500000,1264635904,40177664,2052776,786211
10000000,1591042048,40701952,2052880,786211
12500000,1916497920,40964096,2053160,786211
15000000,2242945024,41488384,2053136,786211
17500000,2568470528,42274816,2119496,786211
20000000,2893910016,42799104,2053896,786211
22500000,2926735360,43323392,2054176,786211
25000000,2865909760,43585536,2054152,786211

However, the finalizers are run if we occasionally allow the event loop to continue with setImmediate(). See the modified demo.js: demo_modified.js.zip. This is quite clear from the log; memory usage stabilises pretty quickly:

Num Iters,RSS,Heap Total,Heap Used,Ext
2500000,485388288,37027840,1963424,786266
5000000,717176832,39129088,2054568,785730
7500000,686436352,40177664,2055376,785730
10000000,704942080,40964096,2055880,785730
12500000,708456448,41750528,2056104,785730
15000000,713523200,42012672,2056104,785730
17500000,713523200,42012672,2056104,785730
20000000,715087872,42536960,2056104,785730
22500000,718864384,42274816,2056104,785730
25000000,708702208,42536960,2056104,785730

I hope this helps!

@bnoordhuis
Member

@rolftimmermans That's working as expected. Your code has a finalizer and those are always deferred to the next event loop tick:

node/src/node_api.cc, lines 38 to 59 at 064e111:

static void FinalizeBufferCallback(char* data, void* hint) {
  std::unique_ptr<BufferFinalizer, Deleter> finalizer{
      static_cast<BufferFinalizer*>(hint)};
  finalizer->_finalize_data = data;
  node::Environment* node_env =
      static_cast<node_napi_env>(finalizer->_env)->node_env();
  node_env->SetImmediate(
      [finalizer = std::move(finalizer)](node::Environment* env) {
        if (finalizer->_finalize_callback == nullptr) return;
        v8::HandleScope handle_scope(finalizer->_env->isolate);
        v8::Context::Scope context_scope(finalizer->_env->context());
        finalizer->_env->CallIntoModuleThrow([&](napi_env env) {
          finalizer->_finalize_callback(
              env,
              finalizer->_finalize_data,
              finalizer->_finalize_hint);
        });
      });
}

(Note that SetImmediate() call.)

@rolftimmermans

@bnoordhuis Thanks for the quick response. Is there any way to release memory earlier, or is this just the way things are expected to work with external buffers? Seems like it should be possible to release the memory sooner... :/

@bnoordhuis
Member

@rolftimmermans It's by design. Finalizers can call into JS land. If n-api didn't defer the callback, it'd be pretty easy to create a cb→js→cb→js→etc. stack overflow.

@rolftimmermans

It seems the behaviour I saw only applies to napi_create_external_buffer() and not to napi_create_external_arraybuffer(). The latter works fine and leaks no memory in similar circumstances.

I'll open a new issue to address this.

@bnoordhuis
Member

That's because it doesn't defer the callback:

static void SecondPassCallback(const v8::WeakCallbackInfo<Reference>& data) {
  Reference* reference = data.GetParameter();
  if (reference->_finalize_callback != nullptr) {
    reference->_env->CallIntoModuleThrow([&](napi_env env) {
      reference->_finalize_callback(
          env,
          reference->_finalize_data,
          reference->_finalize_hint);
    });
  }
  // this is safe because if a request to delete the reference
  // is made in the finalize_callback it will defer deletion
  // to this block and set _delete_self to true
  if (reference->_delete_self) {
    Delete(reference);
  } else {
    reference->_finalize_ran = true;
  }
}

I'd say that's an oversight, possibly due to a misunderstanding of how the two GC passes work.

@mhdawson
Member

I think this can be closed out as the issue was addressed, and 10.x is also now out of service. Please let me know if that was not the right thing to do.
