
Native modules API: the FFI approach #10

orangemocha opened this issue Oct 24, 2015 · 8 comments

@orangemocha
Contributor

The current API for native modules exposes the entire v8 API to native module developers. Even if NAN is used to insulate module code from v8 API changes, it does nothing to shield modules from changes in the Node ABI, which still require recompiling the module.

I think that an FFI-based approach has the potential to provide what's needed to implement the vast majority of native modules out there. The idea is to marshal only basic types back and forth between JavaScript and C/C++. Since those types can hopefully be defined in standard terms, and they don't expose any engine-specific features or implementation details, the interface can stay consistent across engine versions and even across multiple engines. Native modules would have to be rewritten to expose their functionality through this marshaling layer, and they wouldn't have access to v8 constructs. Note that arbitrary JavaScript objects would probably not be supported across the interface, because they risk exposing engine-specific implementation details. Instead, native modules following this model will likely need a JavaScript portion that maps the module's JavaScript-style API to calls into the native portion using only simple types.
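
To illustrate the shape of this (a minimal sketch; the function and its name are invented for illustration, not part of any real Node API), the native side would expose only functions over simple types, with a JavaScript shim layering the module's object-style API on top:

#include <cstdint>

// Hypothetical native portion: only C types cross the boundary, so
// nothing here depends on v8 or any other particular engine. A small
// JavaScript wrapper would map the module's public API onto calls
// like this one.
extern "C" int32_t stats_sum(const int32_t* values, int32_t count)
{
    int32_t sum = 0;
    for (int32_t i = 0; i < count; ++i)
        sum += values[i];
    return sum;
}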

I am hypothesizing that the vast majority of modules could be rewritten using this approach, and that the only ones that couldn't are those designed to expose engine-specific features (e.g. v8-profiler). Those will naturally need to continue to support specific engines, and remain exposed to changes in the engine.

There is a widespread perception in the community that an FFI solution would be too slow to be of general use. I think the cause of this perception might be that the node-ffi module is known to introduce a lot of overhead. I haven't had a chance to study the node-ffi implementation, but I am guessing that it uses a reflection-based approach to do the marshaling, which may be the cause of the overhead. A template-based approach was suggested by @geoffkizer at nodejs/nan#349 (comment), which showed that the overhead can be very small. My experience with other platforms that use this approach (e.g. .NET) also leads me to believe that the overhead can be reasonable and that the approach should be feasible.
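
For context on where that overhead comes from: node-ffi is built on libffi, which constructs calls at runtime from a signature description. A minimal sketch of that style of call (standard libffi usage, not node-ffi's actual internals):

#include <ffi.h>
#include <cstdio>

static int add(int a, int b) { return a + b; }

int main()
{
    // Describe the callee's signature at runtime: (int32, int32) -> int32.
    ffi_cif cif;
    ffi_type* argTypes[2] = { &ffi_type_sint32, &ffi_type_sint32 };
    if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint32, argTypes) != FFI_OK)
        return 1;

    int a = 2, b = 3;
    void* argValues[2] = { &a, &b };
    ffi_arg result;  // integral returns are widened to ffi_arg

    // Every invocation goes through this generic dispatcher; that
    // per-call indirection is what a compile-time binding avoids.
    ffi_call(&cif, FFI_FN(add), &result, argValues);
    std::printf("%d\n", (int)result);  // prints 5
    return 0;
}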

I am raising this issue so that at least we don't dismiss this possibility. It would be useful for this group to prove or disprove that this can be an effective solution.

One of the open questions in my mind would be how to support the array/buffer type in a portable and performant way.
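
One hypothetical direction for that (purely a sketch, not something proposed in the thread): represent buffers at the boundary as a borrowed pointer-plus-length view, so the engine-specific ArrayBuffer type never crosses the interface:

#include <cstddef>
#include <cstdint>

// Engine-neutral view of a contiguous byte range. The engine keeps
// ownership of the memory; the native side must not retain the
// pointer beyond the duration of the call.
struct NativeBufferView
{
    uint8_t* data;
    size_t   length;
};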

/cc @robpaveza (Chakra)

@TooTallNate

I've done a lot of work trying to integrate FFI into core, but the discussion needs a 🔥 to reignite it.
See: https://github.com/nodejs/node/issues?utf8=✓&q=label%3Affi

@robpaveza

The approach that @orangemocha was mentioning would be something like a compile-time FFI, built around template metaprogramming. I haven't done something like this with C++ (although I know it can be accomplished, because the Windows Runtime Library does it for COM class registration). I have done something like it with C#, though, and can give an example of how I've done it in the past.

There is a particular database file format that contains very few types - int32, float32, bools, and strings. The strings are in a blob at the end of the file, so the inline value in the row is an index into the string blob. All columns are 32-bit, so in order to comprehend a table from disk, you only need to know its row length and the schema. My initial pass at this was to say, "Here's a DbcTable<SomeObject>," and have the internal guts examine SomeObject at runtime and use the runtime reflection infrastructure (PropertyInfo.SetValue(theObject, theValue, null)), which is slow for a large number of invocations. As I understand FFI presently, that would be roughly equivalent: you specify details to the FFI infrastructure, and it creates marshaling stubs. (That's how it's been explained to me; I haven't dug in yet, so if I'm incorrect here, my apologies.)

My next revision of this was to create a runtime dynamic function that did this. It would do the same thing that the reflection infrastructure did, but only do it once, and then compile a function that would thunk those accesses directly. It improved performance by an order of magnitude.

What we're proposing is that the template library can, for the most part, eliminate all of these steps, and have the fast path emitted directly by the C++ compiler. We imagine that it would be done initially for JavaScript primitive types (int, double, string, null, undefined) and then we could grow it. Thus, instead of hand-writing the code that marshals each value, you would use a template to do something like this:

class MyNodeModule {
public:
    // Marshaling of the int arguments and return value would be
    // generated by the template library from this signature.
    static int add(int a, int b) { return a + b; }
};

NodeDeclareHostModuleFunction(L"add", MyNodeModule::add);

The NodeDeclareHostModuleFunction reference is a #define that would be the entry point into the template system. The same fast paths that are manually authored by app developers today would be used by the template library; they'd just be implicitly included by the templates.

@ianwjhalliday

@orangemocha @ofrobots here is the template machinery that @geoffkizer proposed and experimented with.

It relies on C++11 variadic templates. This example only works for int32_t return and parameter types, but the only work remaining is to add ValueConverter implementations for the rest of the native types for which support is desired.

As was discussed in nodejs/nan#349, a JavaScript layer on top of the native APIs exposed in this way would be necessary to handle unexpected argument types, optional arguments, and polymorphic APIs.
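
To make the shape of that machinery concrete, here is a minimal self-contained reconstruction built around the fragments visible in the diff below; the struct name FunctionDescriptor, the template headers, and the registration example are guesses, and argument-type checking is omitted:

#include <v8.h>
#include <cstdint>

template <typename T>
struct ValueConverter;  // specialized once per supported native type

template <>
struct ValueConverter<int32_t>
{
    static int32_t JSToNative(v8::Local<v8::Value> value)
    {
        return value->Int32Value();  // assumes the caller passed a number
    }

    static void SetReturnValue(const v8::FunctionCallbackInfo<v8::Value>& callInfo,
                               int32_t result)
    {
        callInfo.GetReturnValue().Set(result);
    }
};

// Pairs a native argument type with its position in the callback's
// argument list.
template <typename ArgType_, unsigned int index_>
struct ArgumentDescriptor
{
    typedef ArgType_ ArgType;
    static const unsigned int index = index_;
};

// Expands one ValueConverter call per argument via a C++11 parameter
// pack, so the marshaling code is emitted directly by the compiler.
template <typename ReturnType, typename... ArgumentDescriptors>
struct FunctionDescriptor
{
    template <ReturnType (*nativeFunction)(typename ArgumentDescriptors::ArgType...)>
    static void BindNativeFunction(const v8::FunctionCallbackInfo<v8::Value>& callInfo)
    {
        ReturnType result = nativeFunction(
            ValueConverter<typename ArgumentDescriptors::ArgType>::JSToNative(
                callInfo[ArgumentDescriptors::index])...);
        ValueConverter<ReturnType>::SetReturnValue(callInfo, result);
    }
};

// Usage: BindNativeFunction<&Add> is an ordinary v8::FunctionCallback,
// e.g. v8::FunctionTemplate::New(isolate, &AddBinding::BindNativeFunction<&Add>).
static int32_t Add(int32_t a, int32_t b) { return a + b; }

typedef FunctionDescriptor<int32_t,
                           ArgumentDescriptor<int32_t, 0>,
                           ArgumentDescriptor<int32_t, 1> > AddBinding;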

@ofrobots

@ianwjhalliday that template trickery is great. I really like the simplicity (from a consumer's point of view). I had to make some tweaks to make this work with clang, but it does work really well.

--- TemplateBinding.h.orig  2016-03-14 13:56:45.000000000 -0700
+++ TemplateBinding.h   2016-03-14 13:56:29.000000000 -0700
@@ -22,11 +22,11 @@
 // Descriptor templates
 // These templates capture the metadata for calling a native function and the implementation that does it.

-template <typename ArgType, unsigned int index>
+template <typename ArgType_, unsigned int index_>
 struct ArgumentDescriptor
 {
-    typedef ArgType ArgType;
-    static const unsigned int index = index;
+    typedef ArgType_ ArgType;
+    static const unsigned int index = index_;
 };

 // Static function descriptor
@@ -37,7 +37,7 @@
     template <ReturnType(*nativeFunction)(typename ArgumentDescriptors::ArgType...)>
     static void BindNativeFunction(const v8::FunctionCallbackInfo<v8::Value>& callInfo)
     {
-        ReturnType result = nativeFunction(ValueConverter<ArgumentDescriptors::ArgType>::JSToNative(callInfo[ArgumentDescriptors::index])...);
+        ReturnType result = nativeFunction(ValueConverter<typename ArgumentDescriptors::ArgType>::JSToNative(callInfo[ArgumentDescriptors::index])...);
         ValueConverter<ReturnType>::SetReturnValue(callInfo, result);
     }
 };

I also want to explore LuaJIT-style (JIT-supported) FFI mechanisms. One of the problems that current modules using node-ffi have to deal with is the fact that most Windows users do not have a C++ compiler available at npm-install time.

Getting a JIT-supported bindings generator to work would be a lot more work, however, compared to the template approach above.

@ofrobots

It turns out that V8 already has a simple lightweight binding layer implementation that is based on a similar template metaprogramming approach: https://code.google.com/p/chromium/codesearch#chromium/src/gin/README
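
For a taste of gin's consumer-facing API (based on how it is used in Chromium around this time; treat the exact signatures as approximate):

#include "gin/object_template_builder.h"

static int Add(int a, int b) { return a + b; }

v8::Local<v8::ObjectTemplate> BuildModuleTemplate(v8::Isolate* isolate)
{
    // gin deduces the argument/return marshaling from Add's C++
    // signature, much like the TemplateBinding.h machinery above.
    return gin::ObjectTemplateBuilder(isolate)
        .SetMethod("add", &Add)
        .Build();
}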

@ianwjhalliday

Good find. Looks like quality precedent for the approach. Would we want to use it directly in node and node modules? If its public API remains stable across V8 API changes, then this brings a lot of the benefit that we're seeking. The only downside to using it directly would be that it won't help move away from code directly tied to V8.

@ofrobots
Copy link

> Good find. Looks like quality precedent for the approach. Would we want to use it directly in node and node modules?

Not necessarily. For example, it could be used as a starting point for something more neutral, once we have an idea of what approach to use.

@Qard
Member

Qard commented Mar 17, 2016

Certainly there's a lot less API surface area when you don't need to use V8 types.
