perf: improving performance #528
```diff
@@ -1,11 +1,41 @@
-// parse out just the options we care about so we always get a consistent
-// obj with keys in a consistent order.
-const opts = ['includePrerelease', 'loose', 'rtl']
-const parseOptions = options =>
-  !options ? {}
-  : typeof options !== 'object' ? { loose: true }
-  : opts.filter(k => options[k]).reduce((o, k) => {
-    o[k] = true
-    return o
-  }, {})
-module.exports = parseOptions
+const var1 = Object.freeze({ includePrerelease: true, loose: true, rtl: true });
+const var2 = Object.freeze({ includePrerelease: true, loose: true });
+const var3 = Object.freeze({ includePrerelease: true, rtl: true });
+const var4 = Object.freeze({ includePrerelease: true });
+const var5 = Object.freeze({ loose: true, rtl: true });
+const var6 = Object.freeze({ loose: true });
+const var7 = Object.freeze({ rtl: true });
+const emptyOpts = Object.freeze({});
```
Comment on lines +1 to +8:

typically freezing objects makes them much slower, so it's worth benchmarking this. Also, all of these should be …

+1 on trying out …

Yeah, freezing has never changed performance in anything I've ever tried, unfortunately. Honestly, I think the way forward in optimizing things is not going to be down to tricks like freezing or looking for the fastest LRU cache; it's going to be overall algorithmic changes to the code.

The idea of freezing is just to prevent modification of the object; it's not meant to gain performance but to save memory allocation. With the older version, I did a test that ended with 8.7 MB of memory usage, and with this freeze version it was consistent at 6.7 MB.

I think @ljharb's point was more about using …

@kurtextrem Like #528 (comment), if we do it inline it can be slower, so as not to break the tests, I think it's better not to include …

It would be fine to change the tests here, to be clear. I'm very surprised doing it inline is slower; in which engines is that true? Note that we're not just concerned with Node here, this package is used in browsers as well.

@ljharb The benchmark site I've used (from the other thread) doesn't work for me in Firefox, so I can only say it's definitely much slower in Chromium, Safari and Node (that caught me by surprise too, I assume …)

@kurtextrem The main problem would be if we were returning the object, but since we run it once, the op/s is not a problem. It would only be a problem if it slowed down property access, and I don't think that is the case.
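The truncated replies above are contrasting `Object.create(null)` with inline `{ __proto__: null }` literals. A minimal sketch (not code from the PR) showing that both forms produce prototype-free objects:

```javascript
// Two ways to build an options object with no prototype chain.
// Lookups on these never consult Object.prototype, which also
// guards against prototype pollution.
const viaCreate = Object.create(null);
viaCreate.loose = true;

// An inline literal with __proto__: null sets the prototype at
// creation time instead of mutating it afterwards.
const viaLiteral = { __proto__: null, loose: true };

console.log(Object.getPrototypeOf(viaCreate));  // null
console.log(Object.getPrototypeOf(viaLiteral)); // null
console.log(viaCreate.loose, viaLiteral.loose); // true true
```

The debate in the thread is only about which form the engines optimize better, not about semantics; both are observably equivalent here.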
```diff
+const parseOptions = options => {
+  if (!options) return emptyOpts;
```
there's no way this is faster than …

why shouldn't it? Object allocation (and cleanup later) isn't free.

Yeah, the idea is to reduce memory allocation.

Have you benchmarked it? There are a great many optimizations people attempt to make in JS that are counterintuitively slower than the idiomatic approach.

@ljharb Yes, see benchmark-memory.js. Run the benchmark with the following command: … The difference shows up at the end of the text file: without Object.freeze it ended up with …, and with it … About the pure performance: Without … With …
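The PR's actual benchmark-memory.js script is not shown in this thread. A rough sketch (hypothetical, assumed names) of how such a heap comparison can be made with `process.memoryUsage()`:

```javascript
// Hypothetical sketch of a heap-usage comparison; not the PR's
// actual benchmark-memory.js, which is not reproduced here.
const FROZEN = Object.freeze({ loose: true });

function measure(label, makeOptions) {
  if (global.gc) global.gc(); // run with: node --expose-gc memory-sketch.js
  const before = process.memoryUsage().heapUsed;
  const kept = [];
  for (let i = 0; i < 100000; i++) kept.push(makeOptions());
  const after = process.memoryUsage().heapUsed;
  console.log(label, ((after - before) / 1024 / 1024).toFixed(2), 'MB');
  return kept; // keep references alive so the GC can't reclaim them mid-run
}

measure('fresh objects:', () => ({ loose: true })); // allocates 100k objects
measure('frozen singleton:', () => FROZEN);         // allocates no new objects
```

The singleton case should report near-zero growth, which is the effect the 6.7 MB vs 8.7 MB numbers above are describing.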
@ljharb Like @kurtextrem mentioned in the other comment, the inline static null proto is slower, and we ended up with 8.7 MB. Code:

```js
const parseOptions = options => {
  if (!options) return Object.create(null);
  if (typeof options !== 'object') return { __proto__: null, loose: true };
  const opts = Object.create(null);
  if (options.includePrerelease) opts.includePrerelease = true;
  if (options.loose) opts.loose = true;
  if (options.rtl) opts.rtl = true;
  return opts;
};
```

Perf:

```
satisfies(1.0.6, 1.0.3||^2.0.0) x 341,924 ops/sec ±0.42% (93 runs sampled)
satisfies(1.0.6, 2.2.2||~3.0.0) x 357,034 ops/sec ±0.13% (97 runs sampled)
satisfies(1.0.6, 2.3.0||<4.0.0) x 384,200 ops/sec ±0.54% (92 runs sampled)
satisfies(1.0.6, 1.0.3||^2.0.0, {"includePrelease":true}) x 328,873 ops/sec ±0.98% (96 runs sampled)
satisfies(1.0.6, 2.2.2||~3.0.0, {"includePrelease":true}) x 345,029 ops/sec ±0.67% (96 runs sampled)
satisfies(1.0.6, 2.3.0||<4.0.0, {"includePrelease":true}) x 367,659 ops/sec ±0.37% (95 runs sampled)
satisfies(1.0.6, 1.0.3||^2.0.0, {"includePrelease":true,"loose":true}) x 316,462 ops/sec ±0.43% (94 runs sampled)
satisfies(1.0.6, 2.2.2||~3.0.0, {"includePrelease":true,"loose":true}) x 323,678 ops/sec ±0.90% (93 runs sampled)
satisfies(1.0.6, 2.3.0||<4.0.0, {"includePrelease":true,"loose":true}) x 339,436 ops/sec ±0.94% (96 runs sampled)
satisfies(1.0.6, 1.0.3||^2.0.0, {"includePrelease":true,"loose":true,"rtl":true}) x 287,875 ops/sec ±4.03% (88 runs sampled)
satisfies(1.0.6, 2.2.2||~3.0.0, {"includePrelease":true,"loose":true,"rtl":true}) x 314,779 ops/sec ±1.12% (94 runs sampled)
satisfies(1.0.6, 2.3.0||<4.0.0, {"includePrelease":true,"loose":true,"rtl":true}) x 332,332 ops/sec ±0.25% (96 runs sampled)
```

Thanks for clarifying. What I meant was adding an inline `__proto__: null` to those static frozen object declarations.

https://esbench.com/bench/64271fec6c89f600a57022e8 as mentioned, it does not appear to be faster to initialize …

hm, I can't seem to run the benchmarks on iOS Safari. Have they been run in every browser, not just Chrome? What about in Node, where perf differs from Chrome at times?

@ljharb About freeze with proto null: … Soooo slow, I don't even know why.
```diff
+  if (typeof options !== 'object') return var6;
+
+  if (options.includePrerelease) {
+    if (options.loose && options.rtl) {
+      return var1;
+    }
+
+    if (options.loose) {
+      return var2;
+    }
+
+    if (options.rtl) {
+      return var3;
+    }
+
+    return var4;
+  } else if (options.loose) {
+    if (options.rtl) {
+      return var5;
+    }
+
+    return var6;
+  } else if (options.rtl) {
+    return var7;
+  } else {
+    return emptyOpts;
+  }
+};
+module.exports = parseOptions;
```
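A condensed sketch (not the PR's exact code) of the idea in the diff above: all eight option combinations exist as shared frozen singletons, so `parseOptions` allocates nothing after module load and equivalent inputs are referentially equal:

```javascript
// Condensed restatement of the PR's approach: index the eight
// possible combinations by a small bitmask and return shared
// frozen singletons instead of allocating per call.
const COMBOS = [
  {},
  { includePrerelease: true },
  { loose: true },
  { includePrerelease: true, loose: true },
  { rtl: true },
  { includePrerelease: true, rtl: true },
  { loose: true, rtl: true },
  { includePrerelease: true, loose: true, rtl: true },
].map(o => Object.freeze(o));

const parseOptions = options => {
  if (!options) return COMBOS[0];
  // legacy behaviour: any truthy non-object means { loose: true }
  if (typeof options !== 'object') return COMBOS[2];
  return COMBOS[
    (options.includePrerelease ? 1 : 0) |
    (options.loose ? 2 : 0) |
    (options.rtl ? 4 : 0)
  ];
};

// Equivalent inputs share one object, and it cannot be mutated:
console.log(parseOptions({ loose: true }) === parseOptions({ loose: true })); // true
console.log(Object.isFrozen(parseOptions(null)));                             // true
```

The if/else chain in the diff and this table-lookup version pick from the same eight singletons; the lookup form just makes the "one object per combination" invariant easier to see.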
You may also want to try a handwritten key such as: … which is what we do in the TypeScript compiler. Not sure if it's faster than what's here, but it could be interesting to test.
It doesn't hurt the performance: after 1B ops/s, it almost has no effect choosing one or the other. The only drawback is increasing the lru-cache key by 6 characters instead of having it as 1. If the npm team prefers readability over a tiny cache key, I can change the implementation.
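For context, a hedged sketch (function names invented here, not from the PR) of the cache-key trade-off being described: a spelled-out key is readable, while a packed key adds only one character per cached range:

```javascript
// Hypothetical sketch of the two cache-key styles under discussion.
const readableKey = (range, options) =>
  range +
  (options.includePrerelease ? ':includePrerelease' : '') +
  (options.loose ? ':loose' : '') +
  (options.rtl ? ':rtl' : '');

// Pack the same three booleans into a single digit (0-7).
const compactKey = (range, options) =>
  range + ':' + (
    (options.includePrerelease ? 1 : 0) |
    (options.loose ? 2 : 0) |
    (options.rtl ? 4 : 0)
  );

console.log(readableKey('>=1.2.3', { loose: true })); // '>=1.2.3:loose'
console.log(compactKey('>=1.2.3', { loose: true }));  // '>=1.2.3:2'
```

Either way the key is stable for equivalent option bags, which is what the LRU cache needs; the difference is only key length and readability.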
This is actually a good use case for a bit flag. It's a bunch of booleans, and bitwise operations are super fast. And while bit ops are often frowned upon as being dense or obscure, if organized properly they're much more maintainable than a series of if/else statements and hoping that you captured each case.
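A minimal sketch of the bit-flag idea (constants are hypothetical; this is not the code from the linked h4ad-forks commit):

```javascript
// Hypothetical bit-flag encoding of the three boolean options.
const INCLUDE_PRERELEASE = 1 << 0;
const LOOSE = 1 << 1;
const RTL = 1 << 2;

const optionsToFlags = options =>
  (options.includePrerelease ? INCLUDE_PRERELEASE : 0) |
  (options.loose ? LOOSE : 0) |
  (options.rtl ? RTL : 0);

// Checks become single bitwise tests instead of property lookups:
const isLoose = flags => (flags & LOOSE) !== 0;

const flags = optionsToFlags({ loose: true, rtl: true });
console.log(flags);          // 6
console.log(isLoose(flags)); // true
// The number itself can double as a compact cache key, and every
// combination of the three booleans is one of the values 0-7.
```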
I don't think the suggestion is to change Options, but to use a different way of constructing a key.

That being said, I think it should be possible to internally store flags for these and then reconstruct the options bag on whichever methods request it. Then, internally, things can use the fast check and pass a number around. Spitballing, though.
Yes, I realized that a few seconds after writing it. And yes, using the flags internally and then exposing the options makes sense; I'll try using a getter to compute the options and internally use just the bit flag.
Thinking further, the user already has to pass the object in, so you can probably just keep that one and not reconstruct it.
@jakebailey I tried, but it would break a lot of tests if I did.

I put the version of parseOptions with bit flags here: h4ad-forks@6c7a68a

@isaacs Take a look and see if this PR is worth pushing; I made a lot of changes. Also, I introduced one small breaking change: numbers passed to `parseOptions` are now considered valid options. So instead of re-parsing by passing the object around, I use the options as bit flags to construct the new objects used inside the functions.

About performance: no penalties, and I assume it got a bit faster because I now parse the options as an object once and afterwards only parse them again as a number, which saves some comparisons.
The only downside I notice when using the bitmask is that the memory usage (ref: #528 (comment)) is the same as allocating the objects at runtime (8.7 MB). I don't know why this happens, but I want to point out that it's not as memory efficient as Object.freeze.