
[WIP] events: use Map for events and limit copying #17074

Closed
wants to merge 2 commits

Conversation

apapirovski
Member

@apapirovski apapirovski commented Nov 16, 2017

Instead of creating a dictionary object, use Map to store all the event handlers.

Instead of copying on each emit, set a flag that indicates an emit is in progress on that particular event name and copy only if events are added or removed.

Some other assorted changes to make this work. The one question mark is the change to _stream_readable.js, as I know there are some other issues around that (given that it's also an npm package).
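
For readers skimming the diff, the core idea reads roughly like this minimal sketch (illustrative only; property names such as emitting are assumptions, not the actual patch):

    // eventName -> handler function, or array of handlers
    const events = new Map();

    function emit(type, ...args) {
      const handlers = events.get(type);
      if (handlers === undefined) return false;
      if (typeof handlers === 'function') {
        handlers(...args);
      } else {
        // Mark the array as "in use" instead of cloning it up front.
        handlers.emitting = true;
        for (let i = 0; i < handlers.length; i++)
          handlers[i](...args);
        handlers.emitting = false;
      }
      return true;
    }

    function addListener(type, listener) {
      const existing = events.get(type);
      if (existing === undefined) {
        events.set(type, listener);
      } else if (typeof existing === 'function') {
        events.set(type, [existing, listener]);
      } else if (existing.emitting) {
        // An emit on this exact event is in flight: copy before mutating so
        // the running emit keeps iterating over the old array.
        const copy = existing.slice();
        copy.push(listener);
        events.set(type, copy);
      } else {
        existing.push(listener);
      }
    }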

events/ee-add-remove.js n=250000                     13.37 %   ***  4.747420e-60
events/ee-emit.js listeners=1 argc=0 n=2000000        1.72 %     *  3.735939e-02
events/ee-emit.js listeners=1 argc=10 n=2000000       0.03 %        9.727581e-01
events/ee-emit.js listeners=1 argc=2 n=2000000       -1.91 %     *  2.632552e-02
events/ee-emit.js listeners=1 argc=4 n=2000000        2.86 %    **  1.118489e-03
events/ee-emit.js listeners=10 argc=0 n=2000000      34.01 %   ***  1.645287e-91
events/ee-emit.js listeners=10 argc=10 n=2000000     18.01 %   ***  4.968277e-81
events/ee-emit.js listeners=10 argc=2 n=2000000      28.41 %   ***  1.278091e-93
events/ee-emit.js listeners=10 argc=4 n=2000000      30.05 %   ***  1.464278e-117
events/ee-emit.js listeners=5 argc=0 n=2000000       33.64 %   ***  1.940727e-91
events/ee-emit.js listeners=5 argc=10 n=2000000      21.51 %   ***  3.178913e-78
events/ee-emit.js listeners=5 argc=2 n=2000000       29.03 %   ***  2.043200e-81
events/ee-emit.js listeners=5 argc=4 n=2000000       31.52 %   ***  6.935736e-90
events/ee-listener-count-on-prototype.js n=50000000  15.93 %   ***  1.132573e-21
events/ee-listeners-many.js n=5000000                -2.36 %   ***  9.635218e-06
events/ee-listeners.js n=5000000                     -6.98 %   ***  1.357890e-31
events/ee-once.js n=20000000                         96.04 %   ***  1.081295e-27

(Also thanks to @addaleax for the help with the domains part after I spent half a day trying to make v8::Map work... life saver!)

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines
Affected core subsystem(s)

events, src, test

@apapirovski apapirovski added events Issues and PRs related to the events subsystem / EventEmitter. performance Issues and PRs related to the performance of Node.js. refactor to ES6+ labels Nov 16, 2017
@nodejs-github-bot nodejs-github-bot added the lib / src Issues and PRs related to general changes in the lib or src directory. label Nov 16, 2017
@jasnell jasnell added the semver-major PRs that contain breaking changes and should be released in the next major version. label Nov 16, 2017
@targos
Member

targos commented Nov 16, 2017

I think we should put the Map under a private symbol if we do that.
I'm generally in favor of this (maybe I'm biased; I tried to do it two years ago :D) if it's semver-major. Maybe we should even keep _events for backwards compatibility as a lazy Proxy or something...
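
Something along these lines, presumably (a rough sketch of the lazy-Proxy idea, assuming the real storage is a Map behind a symbol; the actual implementation would need considerably more traps):

    const kEvents = Symbol('kEvents');  // assumed name for the private Map

    function createEventsProxy(emitter) {
      const map = emitter[kEvents];
      // Dictionary-like view over the Map for legacy _events consumers.
      return new Proxy(Object.create(null), {
        get(target, key) { return map.get(key); },
        set(target, key, value) { map.set(key, value); return true; },
        has(target, key) { return map.has(key); },
        deleteProperty(target, key) { return map.delete(key); },
        ownKeys() { return [...map.keys()]; },
        getOwnPropertyDescriptor(target, key) {
          if (!map.has(key)) return undefined;
          return { value: map.get(key), writable: true,
                   enumerable: true, configurable: true };
        }
      });
    }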

@jasnell
Member

jasnell commented Nov 16, 2017

While I'm overall quite favorable to this approach, this is code that we need to be very careful with. There is a non-trivial amount of userland code that could be broken by this. We'll need to do some investigation into the impact. Marking semver-major accordingly.

@@ -48,12 +48,15 @@ function prependListener(emitter, event, fn) {
// userland ones. NEVER DO THIS. This is here only because this code needs
// to continue to work with older versions of Node.js that do not include
// the prependListener() method. The goal is to eventually remove this hack.
- if (!emitter._events || !emitter._events[event])
+ if (!emitter._events || !emitter._events.has(event)) {
Member

Ping @mcollina @nodejs/streams ... please review this

Member Author

FWIW, I don't think this is correct, but I need some input on what the current relationship between the npm package & Node is. That original block of code seems to be there only defensively for the npm version; I don't actually know whether it's relevant in the current version of Node.

lib/domain.js Outdated
@@ -76,7 +76,6 @@ function Domain() {

Domain.prototype.members = undefined;


Member

nit: unrelated change?

@apapirovski
Member Author

I think we should put the Map under a private symbol if we do that.

I'm not 100% certain but that might present other problems in relation to accessing it from C++, which we need to do in a few cases.

Maybe we should even keep _events for backwards compatibility as a lazy Proxy or something...

We might be able to make it a proxy anyway. I'm going to look into this.

@apapirovski
Member Author

Also, for reference: most of the emit performance improvements come from using arrayClone less. The reason for switching to Map was that the change otherwise introduced a 7% regression in adding/removing events.

@jasnell
Member

jasnell commented Nov 16, 2017

Exposing _events as a lazy proxy would also be semver-major, of course, but it's definitely a possibility to reduce the likelihood of breaking things. I am very much +1 on making this change, I just don't want to sign off just yet, at least until we have a better understanding of what might break.

@apapirovski apapirovski force-pushed the patch-ee-perf branch 2 times, most recently from 5449f3d to a6fd6f1 Compare November 16, 2017 17:01
@targos
Member

targos commented Nov 16, 2017

Let's start with a CITGM check: https://ci.nodejs.org/view/Node.js-citgm/job/citgm-smoker/1077/

@mcollina
Member

I'm -1 on landing this without a strategy for readable-stream@1.0.x and readable-stream@1.1.x.
Those two cumulatively had 25 million downloads per month as of May, and they will break hard if this happens.
readable-stream@2 has this https://github.com/nodejs/readable-stream/blob/d6c391d7101946c8b8b97914fc46fd3322c450d1/lib/_stream_readable.js#L89-L101 to deal with this specific problem.
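
For context, the linked readable-stream@2 workaround is roughly this shape (paraphrased sketch, not a verbatim quote of the linked lines):

    function prependListener(emitter, event, fn) {
      // Use the real method where the emitter implementation has it (Node 6+);
      // otherwise fall back to poking _events directly as a compatibility hack.
      if (typeof emitter.prependListener === 'function')
        return emitter.prependListener(event, fn);
      if (!emitter._events || !emitter._events[event])
        emitter.on(event, fn);
      else if (Array.isArray(emitter._events[event]))
        emitter._events[event].unshift(fn);
      else
        emitter._events[event] = [fn, emitter._events[event]];
    }

The direct _events[event] accesses in the fallback are exactly what stops working once _events is no longer a plain dictionary; the typeof guard is what keeps readable-stream@2 safe on newer Node versions, and it is the piece that would need backporting to the 1.x lines.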

Since the last time this issue was brought up, I have thought a lot about how to solve this problem.
We need a strategy to make this change happen:

  1. backport the change in https://github.com/nodejs/readable-stream/blob/d6c391d7101946c8b8b97914fc46fd3322c450d1/lib/_stream_readable.js#L89-L101 to readable-stream@1.0.x and readable-stream@1.1.x.
  2. release those lines, and check the download numbers by version with npm.
  3. deprecate all previous readable-stream releases; this will annoy users.
  4. Node 10 comes out with this patch if usage of the offending versions goes down, thanks to semver and the deprecation warning.

@nodejs/tsc @nodejs/streams what do you think?

@addaleax
Member

I don't see what the fuss is. If we provide a backwards-compatible proxy for _events, what's the issue with that? Why would that even be semver-major, except for maybe the perf impact (but we should not get into the habit of considering the perf of internal APIs a guarantee)?

@mcollina
Member

mcollina commented Nov 16, 2017

I'm 👍 with _events as a proxy allocated on demand. However, I'm not 100% sure we can cover all cases just via the proxy; we know of one major incompatibility, but there may be others. The semver-major label stands, but using the proxy solves the compatibility issue with readable-stream@1.0 and readable-stream@1.1.

lib/events.js Outdated
}

if (type === 'error') {
events._hasErrorListener = true;
Contributor

@mscdex mscdex Nov 16, 2017

I don't think we should be adding new, underscored properties to EventEmitter instances. Let's use symbols if we absolutely need new private properties...

Member Author

I think the problem is that we need to get this from C++ so the alternative is a symbol that's declared in C++ and shared with JS? I'm not too familiar with those parts so open to feedback/corrections.

Member

@apapirovski Sorry, as you know I was on mobile, I probably should have given more context.

What I had in mind was having this tracking be specific to Domains, not all EventEmitters; you could add 'newListener' and 'removeListener' handlers to keep track of whether an error listener is present, then update a property directly on the Domain object.

And, in case I wasn't clear on it: that still means we're doing invalid stuff in ShouldAbortOnUncaughtException(); getting properties isn't allowed either way – it's just going to work as long as we don't invoke any runnable code, I guess. (But fixing that is definitely out of scope for this PR. 😄)
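
Roughly what that suggestion could look like (a sketch under the assumption of a simple counter property on the Domain instance; the property name is made up):

    const EventEmitter = require('events');
    const util = require('util');

    function Domain() {
      EventEmitter.call(this);
      // C++ can read this plain property instead of inspecting _events.
      this._errorListenerCount = 0;
      this.on('newListener', (type) => {
        if (type === 'error') this._errorListenerCount++;
      });
      this.on('removeListener', (type) => {
        if (type === 'error') this._errorListenerCount--;
      });
    }
    util.inherits(Domain, EventEmitter);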

Member

I agree.

Member Author

This is what happens when you almost lose your mind to JS not working in that chunk of C++ code and forget how to write code... 😆 Thanks @addaleax, that's a much cleaner solution.

lib/events.js Outdated
@@ -435,13 +453,27 @@ function listenerCount(type) {
}

EventEmitter.prototype.eventNames = function eventNames() {
- return this._eventsCount > 0 ? Reflect.ownKeys(this._events) : [];
+ return this._events !== undefined && this._events.size ?
+   [...this._events.keys()] : [];
Contributor

Did you benchmark this change? Is the spread slower than manually looping and filling an array?

Member Author

It might be; honestly, I wasn't sure whether this is performance-intensive code, and it works quite fast as is. Happy to dive in deeper once I get the Proxy issue sorted.
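
For comparison, the manual-loop variant being hinted at would look something like this (untested sketch; whether it actually beats the spread is exactly what would need benchmarking):

    EventEmitter.prototype.eventNames = function eventNames() {
      const events = this._events;
      if (events === undefined || events.size === 0)
        return [];
      // Fill a preallocated array from the Map iterator instead of spreading.
      const names = new Array(events.size);
      let i = 0;
      for (const key of events.keys())
        names[i++] = key;
      return names;
    };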

lib/events.js Outdated
// Adding the second element, need to change to array.
existing = prepend ? [listener, existing] : [existing, listener];
events.set(type, existing);
} else if (existing.emitting) {
Contributor

Perhaps this check should be more specific to avoid array cloning if we're adding a listener for a different event than what is being emitted.

Member Author

So existing actually refers to the handlers for that specific event, so it only clones if we're emitting on that exact event.

@mscdex
Contributor

mscdex commented Nov 16, 2017

So how much of the change in performance is related to delaying array cloning vs the change to Map? I'm not yet convinced Map is generally fast enough.

@apapirovski
Member Author

@mscdex emit is entirely about the cloning; once is slightly more about cloning and partially about Map; the others are all about Map.

@apapirovski
Member Author

apapirovski commented Nov 16, 2017

Will have an update later tonight or tomorrow with the other bits that are needed to make this work properly (Proxy, etc.). Thanks for the reviews everyone!

(Also, I'm pretty sure Proxy can account for almost 100% of use cases of _events. I don't think this HAS to be semver-major. IMO we should just make sure to test lots via CITGM and I'll try to dig into what's out there in the wild.)

@mscdex
Contributor

mscdex commented Nov 16, 2017

I'm also curious as to why Proxy usage seems to be more ok for some folks for _events but not for _readableState?

@TimothyGu
Member

TimothyGu commented Nov 16, 2017

Also, I'm pretty sure Proxy can account for almost 100% of use cases of _events.

While I welcome a Proxy-based approach, as a person who has used Proxies extensively I have to say it is more work than one might expect at first. One has to be very careful with Proxies, and it is very likely that more hooks than expected will need to be implemented. With special care, the end result can get very darn close to what one would get with this._events = Object.create(null). I will try to detail some of the pitfalls below. But I am not the best judge of how closely we should try to duplicate the original _events semantics (i.e. "all semantics" vs. "semantics that affect whether a popular module works or not"; I'm going for the former here), or how much deviation from the original behavior we should allow within the bounds of semver-minor.

  1. A special challenge is non-configurable properties, which are mandated by the spec to be unforgeable through Proxy. This means that

    1. if a non-configurable property is Object.defineProperty'd, the defineProperty hook must define a non-configurable property on the target object, throw an exception, or return false (i.e. throw an exception if the calling code is in strict mode, silently ignore in sloppy mode); otherwise a TypeError is unconditionally thrown by Proxy internals
    2. if a non-configurable property exists on the target object, it must be visible through the accessor hooks; otherwise a TypeError is unconditionally thrown.

    Non-configurable properties are probably only a minor inconvenience in the grand scheme of things, since defining a property as non-configurable more or less interferes with EventEmitter internals anyway, but by no means is this the only pitfall with Proxies (see the small demo after this list).

  2. Non-enumerable properties

    const assert = require('assert');
    const EventEmitter = require('events');

    const ee = new EventEmitter();
    ee.on('ev1', () => {});
    Object.defineProperty(ee._events, 'ev2', {
      enumerable: false, // <<<
      configurable: true,
      writable: true,
      value: () => {}
    });
    ee._events.ev3 = () => {};
    
    assert.strictEqual(Object.getOwnPropertyDescriptor(ee._events, 'ev1').enumerable, true);
    assert.strictEqual(Object.getOwnPropertyDescriptor(ee._events, 'ev2').enumerable, false);
    assert.strictEqual(Object.getOwnPropertyDescriptor(ee._events, 'ev3').enumerable, true);

    I don't think we'll have to worry about users defining non-writable properties, since doing so would interfere with the normal operation of EventEmitter, though some tests would certainly be welcome. However, note the invariants that come with implementing a defineProperty hook.

  3. Prototype

    const ee = new EventEmitter();
    assert.strictEqual(Object.getPrototypeOf(ee._events), null);
    Object.setPrototypeOf(ee._events, Object.prototype);
    assert.strictEqual(Object.getPrototypeOf(ee._events), Object.prototype); // <<<
    assert.strictEqual(ee._events.hasOwnProperty, Object.prototype.hasOwnProperty); // <<<
  4. Property enumeration order

    const ee = new EventEmitter();
    const sym = Symbol();
    
    ee.on(sym, () => {});
    ee.on('-1', () => {});
    ee.on('01', () => {});
    ee.on('1.0', () => {});
    ee.on('4294967295', () => {});
    ee.on('4294967294', () => {});
    ee.on('first', () => {});
    ee.on('second', () => {});
    ee.on('1', () => {});
    ee.on('0', () => {});
    
    // I suspect V8 is implementing the ECMAScript spec incorrectly here,
    // since all "integer indices" [0, 2**53) should come first per spec, while V8 moves
    // only "array indices" [0, 2**32 - 1) to the front.
    assert.deepStrictEqual(Reflect.ownKeys(ee._events),
                           [ '0', '1', '4294967294',
                             '-1', '01', '1.0', '4294967295', 'first', 'second', sym ]);
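
To make the non-configurability invariant from item 1 concrete, here is a small standalone demo (my own example, not from the thread): hiding a non-configurable own property of the Proxy target makes the Proxy itself throw.

    const target = {};
    Object.defineProperty(target, 'fixed', {
      value: 1, writable: false, enumerable: true, configurable: false
    });

    const proxy = new Proxy(target, {
      getOwnPropertyDescriptor(t, key) {
        // Pretend 'fixed' does not exist. Because it is non-configurable on
        // the target, the Proxy internals detect the lie and throw.
        if (key === 'fixed') return undefined;
        return Reflect.getOwnPropertyDescriptor(t, key);
      }
    });

    try {
      Object.getOwnPropertyDescriptor(proxy, 'fixed');
    } catch (err) {
      console.log(err instanceof TypeError);  // true
    }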

@TimothyGu
Member

@mscdex

I'm also curious as to why Proxy usage seems to be more ok for some folks for _events but not for _readableState?

Can you elaborate?

But just guessing here: _readableState has a fairly fixed structure and can benefit more from hidden-class optimization, while _events is used more as a generic hash table and is thus better suited for a Proxy. And I would expect _readableState to be more performance-sensitive to begin with, but I can of course be totally wrong.

@mscdex
Contributor

mscdex commented Nov 16, 2017

And I would expect _readableState to be more performance-sensitive to begin with, but I can of course be totally wrong

Not to derail this PR, but from what I understood that would have only affected userland code directly using _readableState (which would go through the Proxy, much like what was proposed earlier in this PR) and not node core itself, which would not go through the Proxy.

@addaleax
Member

@mscdex Imo the main difference is that _readableState is something you needed to use for a few things, while on the events side that was barely ever necessary & we actually were more inclined to fill the gaps that did exist (e.g. prependListener).

@mcollina
Member

I'm good with a proxy here because it will be used only by very old code, and created on demand. This code is already not in use in newer versions of modules, and I think we might want to runtime-deprecate _events access.

BTW, I still think that we should force userland to upgrade. I think the time is right, given that the major problem is readable-stream, which we control, and that by May all supported Node versions will have prependListener (it was introduced in Node 6). I think deprecating unsupported versions of readable-stream is feasible, and we can do a patch version with the fix on all the old lines.

@apapirovski apapirovski added the wip Issues and PRs that are still a work in progress. label Nov 18, 2017
@apapirovski
Member Author

I'm labeling this as a work in progress, as it requires a lot of further work & testing with the Proxy. There are still some strange failures in the Mocha package related to Socket.io which don't make complete sense to me.

I will continue to commit and push to this branch so everyone can review as they want to, but I will be opening a new PR once all the work is done. I'm also going to try & split up the work into individual commits before opening the new PR.

@apapirovski
Member Author

apapirovski commented Nov 18, 2017

@apapirovski apapirovski force-pushed the patch-ee-perf branch 2 times, most recently from c0a23f3 to 88f0a14 Compare November 19, 2017 19:37
@apapirovski
Member Author

apapirovski commented Nov 20, 2017

CI: https://ci.nodejs.org/job/node-test-pull-request/11561/
CitGM: https://ci.nodejs.org/view/Node.js-citgm/job/citgm-smoker/1092/
Benchmark CI: https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/44/

Just testing some of the latest proxy changes. I'm starting to write up documentation of everything that has changed and will post an updated PR when done.

Based on local testing this seems to survive CitGM without any hitches and also passes test suites for readable-stream 1.x & a few other relevant modules.

@apapirovski
Member Author

@refack @AndreasMadsen @mscdex Could you look at these results and provide any thoughts?

My local benchmark results on a Mac (100 runs for each node version, both built today on top of master):

 events/ee-add-remove.js n=250000                         5.66 %        ***  3.610752e-26
 events/ee-emit.js listeners=1 argc=0 n=2000000           6.40 %        ***  5.607456e-11
 events/ee-emit.js listeners=1 argc=10 n=2000000          0.14 %             8.386123e-01
 events/ee-emit.js listeners=1 argc=2 n=2000000          -0.59 %             4.384320e-01
 events/ee-emit.js listeners=1 argc=4 n=2000000           4.34 %        ***  2.346464e-09
 events/ee-emit.js listeners=10 argc=0 n=2000000         33.50 %        ***  5.746277e-98
 events/ee-emit.js listeners=10 argc=10 n=2000000        16.67 %        ***  9.518193e-84
 events/ee-emit.js listeners=10 argc=2 n=2000000         26.18 %        *** 4.551248e-106
 events/ee-emit.js listeners=10 argc=4 n=2000000         29.63 %        *** 1.136846e-122
 events/ee-emit.js listeners=5 argc=0 n=2000000          31.85 %        ***  4.585584e-96
 events/ee-emit.js listeners=5 argc=10 n=2000000         20.10 %        ***  4.542830e-85
 events/ee-emit.js listeners=5 argc=2 n=2000000          26.71 %        ***  2.096648e-79
 events/ee-emit.js listeners=5 argc=4 n=2000000          29.07 %        ***  3.538013e-91
 events/ee-listener-count-on-prototype.js n=50000000     15.14 %        ***  2.994930e-23
 events/ee-listeners-many.js n=5000000                   -2.71 %        ***  3.513043e-05
 events/ee-listeners.js n=5000000                        -8.36 %        ***  1.189482e-40
 events/ee-once.js n=20000000                            59.39 %        *** 2.167662e-160

Benchmark CI (200 runs):

 events/ee-add-remove.js n=250000                        -0.50 %             2.716769e-01
 events/ee-emit.js listeners=10 argc=0 n=2000000         27.52 %        *** 7.236391e-245
 events/ee-emit.js listeners=10 argc=10 n=2000000        17.72 %        *** 6.292272e-129
 events/ee-emit.js listeners=10 argc=2 n=2000000         11.88 %        ***  4.056191e-89
 events/ee-emit.js listeners=10 argc=4 n=2000000         17.84 %        ***  2.210516e-63
 events/ee-emit.js listeners=1 argc=0 n=2000000          -6.50 %        ***  2.551013e-22
 events/ee-emit.js listeners=1 argc=10 n=2000000         -5.24 %        ***  3.871810e-25
 events/ee-emit.js listeners=1 argc=2 n=2000000          -8.32 %        ***  6.688618e-49
 events/ee-emit.js listeners=1 argc=4 n=2000000          -6.08 %        ***  1.060804e-23
 events/ee-emit.js listeners=5 argc=0 n=2000000          24.66 %        *** 5.893166e-223
 events/ee-emit.js listeners=5 argc=10 n=2000000         17.62 %        *** 5.825293e-168
 events/ee-emit.js listeners=5 argc=2 n=2000000          11.67 %        *** 2.275768e-108
 events/ee-emit.js listeners=5 argc=4 n=2000000          15.30 %        *** 2.218590e-132
 events/ee-listener-count-on-prototype.js n=50000000      6.88 %        ***  9.542946e-09
 events/ee-listeners.js n=5000000                        -9.34 %        ***  2.343139e-76
 events/ee-listeners-many.js n=5000000                   -1.11 %             7.794460e-02
 events/ee-once.js n=20000000                            55.93 %        *** 2.356821e-240

The biggest difference is in the listeners=1 benchmarks. That gap is pretty substantial, given the confidence level it's claiming.

I'm guessing this might be related to some underlying differences in the exact compiler used & GC on the different systems? Or am I missing something else?

Also, this makes me wonder whether we should generally have more than one benchmark machine to account for potential differences between Linux, OS X & Windows...

@refack
Contributor

refack commented Nov 21, 2017

AFAICT the results on your local machine for listeners=1 are a little noisy. This could happen if n is too small: the effect might fall below the measurement accuracy. I think this is the case:

d:\code\node$ node benchmark/events/ee-emit.js listeners=1 argc=0 n=200000000
events\ee-emit.js listeners=1 argc=0 n=200000000: 42,230,049.05880847

d:\code\node$ node benchmark/events/ee-emit.js listeners=1 argc=0 n=20000000
events\ee-emit.js listeners=1 argc=0 n=20000000: 42,127,973.06216074

d:\code\node$ node benchmark/events/ee-emit.js listeners=1 argc=0 n=2000000
events\ee-emit.js listeners=1 argc=0 n=2000000: 31,118,889.79626385

d:\code\node$ node benchmark/events/ee-emit.js listeners=1 argc=0 n=2000000
events\ee-emit.js listeners=1 argc=0 n=2000000: 35,411,159.2354667

d:\code\node$ node benchmark/events/ee-emit.js listeners=1 argc=0 n=2000000
events\ee-emit.js listeners=1 argc=0 n=2000000: 33,333,746.671792064

@apapirovski
Member Author

apapirovski commented Nov 21, 2017

@refack yeah the results for many arguments get noisy but

events/ee-emit.js listeners=1 argc=0 n=2000000 6.40 % *** 5.607456e-11

vs

events/ee-emit.js listeners=1 argc=0 n=2000000 -6.50 % *** 2.551013e-22

But also, I've never seen that result fall below about 2.5% on my machine, in any of the tries with 100+ runs.

Edit: Here's a quick check with 200 runs on my machine: events/ee-emit.js listeners=1 argc=0 n=2000000 4.74 % *** 5.324332e-21... As I said, the difference is pretty obvious but it's not at all obvious why.

@refack
Contributor

refack commented Nov 21, 2017

The p-value only means that the chance of finding the observed difference by accident (by getting only "extreme" results) is negligibly small.

If the actual measurement is very small (which is what I saw locally for n=2e6), 6% can be an extremely small quantity. In that case other computations, outside of the area of interest, can have a significant effect on the measurements.
My best guess in this case is that compilation of the code affects the results. Back-of-envelope computation: 6% * 2e6 ≈ 1e5, so it's not unreasonable that compiling the code costs as much as 100,000 calls to an already-compiled ee.emit('dummy', true, 5, 10, false) with 1 noop listener ee.on('dummy', function() {}).
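
Plugging in the throughput from the runs above, that estimate is at least plausible (my own rough arithmetic, not from the thread):

    // ~42M emits/sec measured above, n = 2e6 iterations per benchmark run:
    const opsPerSec = 42e6;
    const n = 2e6;
    const runTimeMs = (n / opsPerSec) * 1000;   // ≈ 48 ms per run
    const sixPercentMs = runTimeMs * 0.06;      // ≈ 2.9 ms
    // A few milliseconds of one-off work (compilation, GC) per run is enough
    // to explain a ~6% swing at this n.
    console.log(runTimeMs.toFixed(1), sixPercentMs.toFixed(1));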

This is one of the reasons the benchmark machine was created: so that we have a stable, independent, and reproducible environment. It doesn't mean it's the one and only truth, just something we can all compare against. It's not as if real code runs ee.emit 2,000,000 times in a loop, right?

tl;dr eliminating FUD from benchmarks is an art not a science 🤷‍♂️

@apapirovski
Member Author

@refack Makes sense. 👍

@AndreasMadsen
Member

@apapirovski trust the CI results. It is almost impossible not to get interference from other processes on your own machine.

@apapirovski
Member Author

@AndreasMadsen The CI results seem less trustworthy in this case though. In particular, ee-emit is skewed by roughly 1 in 10 runs being about 1.75x faster on the old version (where the rest are equivalent). I've also spun up a separate instance on Joyent with SmartOS that was reporting results similar to what I'm seeing locally.

@apapirovski
Member Author

Glad there were some disparities between my machine and the CI, as it led me to start experimenting with our events benchmarks and tweaking them for further coverage. That led to the discovery that Map performance is significantly worse as more keys are added. So, back to making this work without a Map. I've made a copy of the current code to revisit later once newer V8 versions land; perhaps something changes then.

@refack
Contributor

refack commented Nov 22, 2017

We should have a rule of thumb that n needs to be large enough that a test run takes longer than 3s (3 is arbitrary, but it should definitely be > 1)
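
Using the throughput seen earlier in the thread, that rule of thumb implies a much larger n for ee-emit than the current 2e6 (rough illustration; the 42M ops/sec figure is taken from the runs above):

    const opsPerSec = 42e6;              // ee-emit listeners=1 argc=0, from above
    const minSeconds = 3;                // proposed minimum run time
    const minN = opsPerSec * minSeconds; // ≈ 1.26e8, versus the current n = 2e6
    console.log(minN);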

@apapirovski apapirovski force-pushed the patch-ee-perf branch 4 times, most recently from 74de637 to cc0f1de Compare November 26, 2017 00:20
If more than one handler is bound to an event type, avoid copying
it on emit and instead set a flag that regulates whether to make
a copy when a listener is added or removed.

Do not use delete or Object.create(null) when removing listeners,
instead set to undefined and modify other methods to account
for this.
Refactor lib & src code to eliminate all deep reaches into the
internal _events dictionary object; instead use available APIs
and add an extra unwrap argument to listeners().
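
As a rough illustration of what an extra unwrap argument to the internal listeners helper could mean (my own guess at the shape, not the actual diff): when unwrapping, once() wrappers are replaced by the original listener they hold.

    // Hypothetical sketch only; names and storage shape are assumptions.
    function listeners(emitter, type, unwrap) {
      const events = emitter._events;
      const handler = events !== undefined ? events.get(type) : undefined;
      if (handler === undefined)
        return [];
      if (typeof handler === 'function')
        return unwrap && handler.listener !== undefined ?
          [handler.listener] : [handler];
      const copy = handler.slice();
      return unwrap ? copy.map((fn) => fn.listener || fn) : copy;
    }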
@AndreasMadsen
Member

AndreasMadsen commented Nov 26, 2017

We should have a rule of thumb that n needs to be large enough that a test run takes longer than 3s (3 is arbitrary, but it should definitely be > 1)

I think I just need to find the time to integrate some calculations similar to those I made in #16888 (comment), such that the benchmark tools report what improvement is measurable with one * (it will, of course, depend on --runs). Seconds don't really indicate anything, as it entirely depends on the variance. In some cases we also don't care about micro-optimizations; we just have benchmarks to check that we didn't completely break performance.
