src: use STL containers instead of v8 values for static module data #24384
Conversation
Also @hashseed, as I understand it there is no way to use the cache if node is started with v8 flags, which leads to a mismatch of the flag hash in the serialized code cache data - is that correct?
src/node_native_module.cc (outdated):

    return out;
  }

  Local<Set> ToJsSet(Local<Context> context, const std::set<std::string>& in) {
Hm, maybe this one could better be implemented as a ToV8Value() overload in util.h? It fits in with the others pretty well, I think (the downside being that we always use UTF-8 there unconditionally…)
@addaleax The current implementation is only capable of converting std::set<const std::string> to an array... not exactly what we want here, but I can leave a TODO.
src/node_native_module.h (outdated):

  namespace node {
  namespace native_module {

  // The native (C++) side of the native module compilation.
  typedef typename std::map<std::string, UnionBytes> NativeModuleRecordMap;
  typedef typename std::map<std::string, std::string> NativeModuleHashMap;
btw, did you consider using std::unordered_map and std::unordered_set over std::map/std::set? I don’t know if we need to worry a lot about the performance tradeoffs here, but I personally tend to use the unordered versions when I can.
I don't think there is a reason not to use the unordered versions... good idea, I'll change that.
Wow! 😮 Looks great.
@joyeecheung could you gist the new
@refack If I remove the static char data, they look like this: https://gist.github.com/joyeecheung/9b41e0b9309d9492270ba23a92e2733d
Thanks for the reviews, updated. CI: https://ci.nodejs.org/job/node-test-pull-request/18673/
IMO they look much better than before 😄
So... interestingly enough, this uncovers a circular dependency. I opened #24396 to fix this.
  env->set_native_modules_code_cache_hash(native_modules_code_cache_hash);

  Local<Object> NativeModuleLoader::GetSourceObject(
      Local<Context> context) const {
    return MapToObject(context, source_);
Is the return value cached / should it be cached? Right now it seems like every time internalBinding('native_module').source is accessed, a new object is created.
It's not cached, but that does not really matter since this is only used by tools/generate_code_cache.js; the native module loader in the binary only gets to call into a C++ function that can access the std::unordered_map directly.
So... interesting: https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/275
https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/276
I tried to google around and it turns out that's not too surprising: given that we only have < 200 items, and that we only insert items and don't touch them once the map is initialized, hashing the strings could be slower than searching in the tree - see e.g. https://stackoverflow.com/questions/4846798/why-would-map-be-much-faster-than-unordered-map (this almost tempted me to try implementing a trie... nah)
CI (rebased on master): https://ci.nodejs.org/job/node-test-pull-request/18724/
LGTM other than some fairly superficial comments.
tools/generate_code_cache.js (outdated):

    const cachedData = getCodeCache(key);
  - if (!cachedData.length) {
  + if (!(cachedData instanceof Uint8Array)) {
Use isUint8Array from util.types?
tools/generate_code_cache.js (outdated):

      console.error(`Failed to generate code cache for '${key}'`);
      process.exit(1);
    }

  - const length = cachedData.length;
  + const length = cachedData.byteLength;
This change should be unneeded, right?
@TimothyGu Yeah, just so that it's clearer, since this length is only used to calculate a string displayed in KB/MB etc. human-readable format.
I'll switch this to const size = cachedData.byteLength - I think that makes this a bit more self-explanatory and less ambiguous.
src/node.cc (outdated):

  - // TODO(joyeecheung): use NativeModule::Compile
  + // TODO(joyeecheung): use NativeModuleLoader::Compile
    // We duplicate the string literals here since once we refactor the boostrap
nit: bootstrap
src/node_native_module.cc (outdated):

    env->SetMethod(target, "compileFunction", NativeModule::CompileFunction);
    env->SetMethod(target, "compileCodeCache", NativeModule::CompileCodeCache);
    CHECK(target
              ->SetAccessor(env->context(),
I’d make this a method, as it signals that getting the value is a relatively expensive operation (and that the returned object is different every time this is called). Ditto for GetCacheUsage().
Instead of putting the source code and the cache in v8::Objects, put them in per-process std::maps. This has the following benefits:

- It's slightly lighter in weight compared to storing things on the v8 heap. Also it may be slightly faster, since the previous v8::Object is already in dictionary mode - though the difference is very small given that the number of native modules is limited.
- The source and code cache generation templates are now much simpler since they just initialize static arrays and manipulate STL constructs. They are also easier to debug from the C++ side, especially early in the bootstrap process when no inspector can be attached.
- The static native module data can be accessed independently of any Environment or Isolate, and it's easy to look them up from the C++ side.
- It's now impossible to mutate the source code used to compile native modules from the JS land since it's completely separate from the v8 heap. We can still get the constant strings from process.binding('natives'), but that's all.

A few drive-by fixes:

- Remove DecorateErrorStack in LookupAndCompile - we don't need to capture the exception to decorate when we encounter errors during native module compilation, as those errors should be syntax errors and v8 is able to decorate them well. We use CompileFunctionInContext, so there is no need to worry about wrappers either.
- The code cache could be rejected when node is started with v8 flags. Instead of aborting in that case, simply keep a record in the native_module_without_cache set.
- Refactor js2c.py a bit, reduce code duplication, and inline Render() to make the one-byte/two-byte special treatment easier to read.
Fixed a few more nits, rebased, and squashed the fixup into one. CI: https://ci.nodejs.org/job/node-test-pull-request/18758/
Landed in 7778c03, thanks!
Checklist
- make -j4 test (UNIX) or vcbuild test (Windows) passes