util: improve readability of normalizeEncoding #10439
Conversation
The performance impact is mixed (inputs taken from the […]). The benchmark infra is not ready to do internal module benchmarks; not sure if putting this on the external […] is appropriate.

```js
'use strict';
const common = require('../common.js');
const normalizeEncoding = require('util')._normalizeEncoding;

const inputs = ['', 'utf8', 'utf-8', 'UTF-8',
  'UTF8', 'Utf8', 'uTf-8', 'utF-8', 'ucs2',
  'UCS2', 'utf16le', 'utf-16le', 'UTF-16LE', 'UTF16LE',
  'binary', 'BINARY', 'latin1', 'base64', 'BASE64',
  'hex', 'HEX', 'foo', 1, false, undefined, []];

const bench = common.createBenchmark(main, {
  index: Object.keys(inputs).map((key) => parseInt(key)),
  n: [1e5]
});

function main(conf) {
  const index = conf.index;
  const n = conf.n | 0;
  const input = inputs[index] || '';
  common.v8ForceOptimization(normalizeEncoding, 'utf8');
  bench.start();
  for (var i = 0; i < n; i += 1)
    normalizeEncoding(input);
  bench.end(n);
}
```

See benchmark results:

```
                                               improvement significant      p.value
util/normalize-encoding.js n=100000 index=0        -9.32 %         *** 2.594961e-04
util/normalize-encoding.js n=100000 index=1       -13.30 %         *** 1.148799e-04
util/normalize-encoding.js n=100000 index=10       -6.37 %           * 1.817863e-02
util/normalize-encoding.js n=100000 index=11       -3.93 %             1.070895e-01
util/normalize-encoding.js n=100000 index=12       -1.12 %             5.513625e-01
util/normalize-encoding.js n=100000 index=13        1.21 %             5.917431e-01
util/normalize-encoding.js n=100000 index=14       10.52 %         *** 2.672550e-05
util/normalize-encoding.js n=100000 index=15       -3.98 %           * 1.027313e-02
util/normalize-encoding.js n=100000 index=16        2.20 %             3.329777e-01
util/normalize-encoding.js n=100000 index=17       14.93 %         *** 7.643643e-07
util/normalize-encoding.js n=100000 index=18       -3.97 %           * 1.168600e-02
util/normalize-encoding.js n=100000 index=19       26.33 %         *** 5.501596e-17
util/normalize-encoding.js n=100000 index=2       -19.27 %         *** 4.589156e-10
util/normalize-encoding.js n=100000 index=20       -3.80 %           * 1.572212e-02
util/normalize-encoding.js n=100000 index=21        4.02 %           * 3.897232e-02
util/normalize-encoding.js n=100000 index=22       27.05 %         *** 3.436555e-19
util/normalize-encoding.js n=100000 index=23      -10.16 %         *** 4.974043e-06
util/normalize-encoding.js n=100000 index=24      -16.14 %         *** 1.010719e-06
util/normalize-encoding.js n=100000 index=25        6.56 %         *** 1.258479e-05
util/normalize-encoding.js n=100000 index=3        -0.35 %             8.880709e-01
util/normalize-encoding.js n=100000 index=4        -2.04 %             2.544653e-01
util/normalize-encoding.js n=100000 index=5        -2.06 %             2.306798e-01
util/normalize-encoding.js n=100000 index=6        -1.51 %             3.367092e-01
util/normalize-encoding.js n=100000 index=7        -0.31 %             8.605040e-01
util/normalize-encoding.js n=100000 index=8       -11.23 %         *** 6.449605e-06
util/normalize-encoding.js n=100000 index=9        -0.16 %             9.223133e-01
```
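For reading these result tables: the stars in the `significant` column follow the usual R significance codes emitted by compare.R. A tiny helper (illustrative only, not part of the benchmark suite) makes the mapping explicit:

```javascript
// Map a p-value to the significance stars printed by compare.R
// (standard R significance codes: smaller p-value, more stars).
function stars(p) {
  if (p < 0.001) return '***';
  if (p < 0.01) return '**';
  if (p < 0.05) return '*';
  return '';
}
```

So `index=0` above, with p = 2.59e-04, gets `***`, while `index=11` at p = 0.107 gets no marker at all.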
About the benchmark: adding […]

Re-post benchmark results after making those changes and let's see where we stand.
@mscdex I've discovered that internal modules can be benchmarked if you pass the […] flag. Below are the new results; these are quite different from the previous ones. I find it a little strange that the cases with the most regressions are the ones that should be left untouched by the first line ([…]).

See benchmark results:

```
                                                        improvement significant      p.value
util/normalize-encoding.js n=1000000 input=""               -47.80 %        *** 9.666964e-30
util/normalize-encoding.js n=1000000 input="1"               33.18 %        *** 1.848349e-25
util/normalize-encoding.js n=1000000 input="BASE64"           0.99 %            1.996811e-01
util/normalize-encoding.js n=1000000 input="BINARY"          -0.52 %            6.046694e-01
util/normalize-encoding.js n=1000000 input="HEX"             -1.17 %            5.406674e-01
util/normalize-encoding.js n=1000000 input="UCS2"             4.44 %        *** 1.386089e-06
util/normalize-encoding.js n=1000000 input="UTF-16LE"         0.50 %            5.726062e-01
util/normalize-encoding.js n=1000000 input="UTF-8"            0.70 %            4.408422e-01
util/normalize-encoding.js n=1000000 input="UTF16LE"          0.51 %            6.153607e-01
util/normalize-encoding.js n=1000000 input="UTF8"             4.80 %        *** 4.519778e-07
util/normalize-encoding.js n=1000000 input="Utf8"             2.25 %          * 1.944775e-02
util/normalize-encoding.js n=1000000 input="[]"              10.21 %        *** 2.153273e-16
util/normalize-encoding.js n=1000000 input="base64"           0.28 %            8.131951e-01
util/normalize-encoding.js n=1000000 input="binary"           0.13 %            9.215771e-01
util/normalize-encoding.js n=1000000 input="false"          -54.78 %        *** 2.363131e-38
util/normalize-encoding.js n=1000000 input="foo"              1.90 %            7.937867e-02
util/normalize-encoding.js n=1000000 input="hex"             -3.71 %         ** 2.582364e-03
util/normalize-encoding.js n=1000000 input="latin1"          -2.51 %            1.176542e-01
util/normalize-encoding.js n=1000000 input="uTf-8"            0.94 %            3.604232e-01
util/normalize-encoding.js n=1000000 input="ucs2"            -2.02 %            2.298876e-01
util/normalize-encoding.js n=1000000 input="undefined"      -56.19 %        *** 1.454327e-27
util/normalize-encoding.js n=1000000 input="utF-8"            1.91 %            2.511025e-01
util/normalize-encoding.js n=1000000 input="utf-16le"        -2.60 %            6.766127e-02
util/normalize-encoding.js n=1000000 input="utf-8"           -3.06 %            1.346169e-01
util/normalize-encoding.js n=1000000 input="utf16le"         -4.05 %          * 1.658632e-02
util/normalize-encoding.js n=1000000 input="utf8"           -11.40 %        *** 1.452832e-08
```
BTW: I'll try to submit another PR to make it possible to use comments to indicate the flags (like what we have in tests, […]).
@joyeecheung The benchmark runner should already be passing that flag to all benchmarks, assuming you're referring to […] benchmarks.
The […]
@mscdex Thanks for the advice, I'll change the benchmark code to see if bumping […] helps.
So, I've bumped the […]. One thing I notice is that the control graph looks much cleaner in the previous implementation. I'll spend a bit more time on this to see if this is a V8 bug.
(force-pushed from ab27388 to fccd403)
So I've changed the benchmarks a bit to feed more than one type of input in each run, and added some code to avoid quirks in micro benchmarks. The performance hit is much smaller when the inputs are fed in groups (notice the […]).

See benchmark results:

```
                                                             improvement significant      p.value
util/normalize-encoding.js n=100000 input=""                     -19.94 %        *** 1.057313e-14
util/normalize-encoding.js n=100000 input="1"                     24.89 %        *** 3.630534e-16
util/normalize-encoding.js n=100000 input="BASE64"                -2.73 %            1.072942e-01
util/normalize-encoding.js n=100000 input="BINARY"                -6.57 %         ** 4.003795e-03
util/normalize-encoding.js n=100000 input="HEX"                   -4.36 %         ** 4.988303e-03
util/normalize-encoding.js n=100000 input="UCS2"                  -0.41 %            8.503550e-01
util/normalize-encoding.js n=100000 input="UTF-16LE"              -3.93 %          * 2.017258e-02
util/normalize-encoding.js n=100000 input="UTF-8"                 -3.39 %            6.457016e-02
util/normalize-encoding.js n=100000 input="UTF16LE"               -4.98 %         ** 1.586828e-03
util/normalize-encoding.js n=100000 input="UTF8"                  -1.70 %            2.326543e-01
util/normalize-encoding.js n=100000 input="Utf8"                  -3.09 %            1.222550e-01
util/normalize-encoding.js n=100000 input="[]"                     5.97 %        *** 5.884119e-06
util/normalize-encoding.js n=100000 input="base64"                -2.97 %            1.532578e-01
util/normalize-encoding.js n=100000 input="binary"                -5.56 %         ** 2.266815e-03
util/normalize-encoding.js n=100000 input="false"                -25.80 %        *** 5.718812e-23
util/normalize-encoding.js n=100000 input="foo"                   -2.59 %            1.255741e-01
util/normalize-encoding.js n=100000 input="group_common"          -1.94 %            2.243914e-01
util/normalize-encoding.js n=100000 input="group_misc"             0.21 %            8.978335e-01
util/normalize-encoding.js n=100000 input="group_uncommon"         7.07 %        *** 1.097578e-05
util/normalize-encoding.js n=100000 input="group_upper"           -1.90 %            1.982519e-01
util/normalize-encoding.js n=100000 input="hex"                   -0.57 %            7.268864e-01
util/normalize-encoding.js n=100000 input="latin1"                -4.75 %          * 1.667714e-02
util/normalize-encoding.js n=100000 input="uTf-8"                 -3.49 %          * 2.158359e-02
util/normalize-encoding.js n=100000 input="ucs2"                 -13.06 %        *** 1.131472e-10
util/normalize-encoding.js n=100000 input="undefined"            -22.77 %        *** 2.384209e-14
util/normalize-encoding.js n=100000 input="utF-8"                 -3.45 %          * 3.785455e-02
util/normalize-encoding.js n=100000 input="utf-16le"              -5.99 %         ** 7.161187e-03
util/normalize-encoding.js n=100000 input="utf-8"                -10.35 %        *** 4.765813e-08
util/normalize-encoding.js n=100000 input="utf16le"               -9.41 %        *** 2.643782e-08
util/normalize-encoding.js n=100000 input="utf8"                 -12.76 %        *** 7.559608e-09
```
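The grouped runs mentioned above can be sketched roughly like this. The group names and their contents here are illustrative assumptions, not the exact benchmark code from the PR:

```javascript
// Sketch: instead of hammering one fixed input, rotate through a group of
// inputs so V8's inline caches see several string shapes per call site,
// which is closer to real-world usage than a monomorphic micro benchmark.
const groups = {
  group_common: ['utf8', 'utf-8', 'ucs2', 'latin1'],   // assumed contents
  group_upper: ['UTF8', 'UTF-8', 'UCS2', 'BASE64'],    // assumed contents
  group_misc: ['foo', '', undefined, false],           // assumed contents
};

function runGroup(fn, inputs, n) {
  let last;
  for (let i = 0; i < n; i++) {
    // Cycle through the group on each iteration.
    last = fn(inputs[i % inputs.length]);
  }
  return last;
}
```

With a setup like this, a single `group_common` run exercises the fast path with mixed (but all valid, lowercase) spellings in one loop.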
Most likely the main issue is the extra function calls; the loop is what avoids that in the original implementation. I'm still concerned about the performance degradation.
@mscdex From what I've seen in the HIR it's probably not about the function call itself (they would be inlined anyway), but about […].

See HIR, old:

```
begin_HIR
0 0 v339 BlockEntry type:Tagged pos:2_48 <|@
0 0 v340 Simulate id=37 type:Tagged pos:2_48 <|@
0 0 v341 StackCheck t310 changes[NewSpacePromotion] type:Tagged pos:2_48 <|@
0 0 v345 Simulate id=60 var[3] = t573, push t329 type:Tagged pos:2_63 <|@
0 0 t347 CheckHeapObject t329 pos:2_63 <|@
0 0 t348 CheckInstanceType internalized_string t329 pos:2_63 <|@
0 0 v350 CompareObjectEqAndBranch t329 t324 goto (B63, B40) type:Tagged pos:2_63 <|@
end_HIR
```

New:

```
begin_HIR
0 0 v328 BlockEntry type:Tagged pos:2_56 <|@
0 4 t329 LoadContextSlot t310[9] type:Tagged pos:2_56 <|@
0 0 t330 CheckValue t329 0x30917856f799 type:Tagged pos:2_56 <|@
0 0 t332 ArgumentsObject t3 t308 type:Tagged pos:3_0 <|@
0 0 v333 Simulate id=-1 push t329, push t3, push t308 type:Tagged pos:3_0 <|@
0 0 v334 EnterInlined normalize type:Tagged pos:3_0 <|@
0 0 v338 Simulate id=24 var[3] = t578, var[2] = t310, push t308 type:Tagged pos:3_10 <|@
0 0 t340 CheckHeapObject t308 pos:3_10 <|@
0 0 t341 CheckInstanceType internalized_string t308 pos:3_10 <|@
0 0 v343 CompareObjectEqAndBranch t308 t324 goto (B75, B38) type:Tagged pos:3_10 <|@
end_HIR
```

Since that bug is not going to be fixed anytime soon, I'll try to use do-while instead.

EDIT: that bug seems to be worked on as another V8 bug. But anyways...
(force-pushed from fccd403 to d02dedb)
I have updated the code to use a while loop instead, with some comments on top (can't comment in the function because then it will be over […]).

See benchmark results:

```
                                                             improvement significant      p.value
util/normalize-encoding.js n=100000 input=""                       1.48 %            0.0829767494
util/normalize-encoding.js n=100000 input="1"                     -0.46 %            0.6432198748
util/normalize-encoding.js n=100000 input="BASE64"                -0.40 %            0.6207269559
util/normalize-encoding.js n=100000 input="BINARY"                -0.31 %            0.6789723120
util/normalize-encoding.js n=100000 input="HEX"                   -1.38 %            0.1882772287
util/normalize-encoding.js n=100000 input="UCS2"                  -1.40 %            0.1642200780
util/normalize-encoding.js n=100000 input="UTF-16LE"               0.21 %            0.9227927195
util/normalize-encoding.js n=100000 input="UTF-8"                 -0.28 %            0.7744125407
util/normalize-encoding.js n=100000 input="UTF16LE"               -2.54 %         ** 0.0049099170
util/normalize-encoding.js n=100000 input="UTF8"                   0.41 %            0.7577822666
util/normalize-encoding.js n=100000 input="Utf8"                  -1.62 %          * 0.0166714587
util/normalize-encoding.js n=100000 input="[]"                     0.12 %            0.8908627898
util/normalize-encoding.js n=100000 input="base64"                -0.34 %            0.7674064864
util/normalize-encoding.js n=100000 input="binary"                -1.28 %            0.2938272124
util/normalize-encoding.js n=100000 input="false"                  0.06 %            0.9413756246
util/normalize-encoding.js n=100000 input="foo"                   -2.31 %        *** 0.0001253098
util/normalize-encoding.js n=100000 input="group_common"           0.01 %            0.9897426571
util/normalize-encoding.js n=100000 input="group_misc"            -2.79 %         ** 0.0041665424
util/normalize-encoding.js n=100000 input="group_uncommon"        -0.15 %            0.8362969697
util/normalize-encoding.js n=100000 input="group_upper"           -2.34 %         ** 0.0043682302
util/normalize-encoding.js n=100000 input="hex"                    1.29 %            0.3013361526
util/normalize-encoding.js n=100000 input="latin1"                -0.44 %            0.7223575589
util/normalize-encoding.js n=100000 input="uTf-8"                 -1.35 %            0.5255336214
util/normalize-encoding.js n=100000 input="ucs2"                   0.31 %            0.7994968617
util/normalize-encoding.js n=100000 input="undefined"             -0.17 %            0.9115657865
util/normalize-encoding.js n=100000 input="utF-8"                 -0.30 %            0.7290963708
util/normalize-encoding.js n=100000 input="utf-16le"              -2.80 %        *** 0.0009771347
util/normalize-encoding.js n=100000 input="utf-8"                  1.05 %            0.2556790884
util/normalize-encoding.js n=100000 input="utf16le"               -1.10 %            0.4710466247
util/normalize-encoding.js n=100000 input="utf8"                   0.76 %            0.3382814236
```
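The while-loop approach described above has roughly this shape. This is a sketch of the technique (match the encoding as-is, lowercase once on a miss, then retry), not necessarily the exact committed code:

```javascript
// Sketch of the lowercase-and-retry while loop: common already-lowercase
// spellings ('utf8', 'hex', ...) hit the switch on the first pass without
// ever calling toLowerCase(); anything else gets exactly one retry.
function normalizeEncoding(enc) {
  if (!enc) return 'utf8';
  let retried = false;
  while (true) {
    switch (enc) {
      case 'utf8':
      case 'utf-8':
        return 'utf8';
      case 'ucs2':
      case 'ucs-2':
      case 'utf16le':
      case 'utf-16le':
        return 'utf16le';
      case 'latin1':
      case 'binary':
        return 'latin1';
      case 'base64':
      case 'ascii':
      case 'hex':
        return enc;
      default:
        if (retried) return undefined; // unknown encoding
        // Coerce non-strings ('' + enc) and lowercase, then loop once more.
        enc = ('' + enc).toLowerCase();
        retried = true;
    }
  }
}
```

For example, `normalizeEncoding('UTF-8')` misses on the first pass, retries as `'utf-8'`, and returns `'utf8'`, while `normalizeEncoding('foo')` exhausts its single retry and returns `undefined`.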
(force-pushed from d02dedb to a9681a1)
@mscdex Does this LGTY?
I'm still concerned about the performance. Most of the results are not marked as highly significant, and the ones that are show a performance regression. I'm just not sure it's worth it just to change the loop used. Did you try benchmarking current master against itself to see if similar results emerge?
@mscdex I ran them again and this time there is no significant p value... strange...

Against this PR:

```
                                                             improvement significant    p.value
util/normalize-encoding.js n=100000 input=""                       0.76 %            0.78083843
util/normalize-encoding.js n=100000 input="1"                      0.67 %            0.55243180
util/normalize-encoding.js n=100000 input="BASE64"                 1.58 %            0.15997022
util/normalize-encoding.js n=100000 input="BINARY"                 0.68 %            0.63394713
util/normalize-encoding.js n=100000 input="HEX"                   -0.04 %            0.97719285
util/normalize-encoding.js n=100000 input="UCS2"                  -1.31 %            0.17303227
util/normalize-encoding.js n=100000 input="UTF-16LE"              -1.44 %            0.44686975
util/normalize-encoding.js n=100000 input="UTF-8"                  1.21 %            0.52612697
util/normalize-encoding.js n=100000 input="UTF16LE"               -0.91 %            0.68579514
util/normalize-encoding.js n=100000 input="UTF8"                   0.04 %            0.98501700
util/normalize-encoding.js n=100000 input="Utf8"                  -1.17 %            0.28905432
util/normalize-encoding.js n=100000 input="[]"                    -0.16 %            0.91167510
util/normalize-encoding.js n=100000 input="base64"                 1.77 %            0.48811676
util/normalize-encoding.js n=100000 input="binary"                 1.15 %            0.57701763
util/normalize-encoding.js n=100000 input="false"                  0.25 %            0.86446726
util/normalize-encoding.js n=100000 input="foo"                    1.99 %            0.23020110
util/normalize-encoding.js n=100000 input="group_common"          -1.24 %            0.39849210
util/normalize-encoding.js n=100000 input="group_misc"             1.13 %            0.38549956
util/normalize-encoding.js n=100000 input="group_uncommon"        -0.87 %            0.55730142
util/normalize-encoding.js n=100000 input="group_upper"            0.12 %            0.93350800
util/normalize-encoding.js n=100000 input="hex"                   -0.28 %            0.81607127
util/normalize-encoding.js n=100000 input="latin1"                 0.08 %            0.96807529
util/normalize-encoding.js n=100000 input="uTf-8"                  0.56 %            0.69379026
util/normalize-encoding.js n=100000 input="ucs2"                  -2.05 %            0.24612020
util/normalize-encoding.js n=100000 input="undefined"             -2.05 %            0.43810156
util/normalize-encoding.js n=100000 input="utF-8"                 -2.56 %            0.06979828
util/normalize-encoding.js n=100000 input="utf-16le"              -1.58 %            0.21587500
util/normalize-encoding.js n=100000 input="utf-8"                  4.39 %            0.29529870
util/normalize-encoding.js n=100000 input="utf16le"                0.92 %            0.57924116
util/normalize-encoding.js n=100000 input="utf8"                   2.30 %            0.11971079
```

Against master:

```
                                                             improvement significant    p.value
util/normalize-encoding.js n=100000 input=""                       1.89 %            0.51563721
util/normalize-encoding.js n=100000 input="1"                     -3.95 %            0.07231164
util/normalize-encoding.js n=100000 input="BASE64"                 0.02 %            0.98998222
util/normalize-encoding.js n=100000 input="BINARY"                 0.39 %            0.81495762
util/normalize-encoding.js n=100000 input="HEX"                    2.35 %            0.12803082
util/normalize-encoding.js n=100000 input="UCS2"                  -1.47 %            0.45288292
util/normalize-encoding.js n=100000 input="UTF-16LE"               0.06 %            0.96956473
util/normalize-encoding.js n=100000 input="UTF-8"                  0.11 %            0.91848772
util/normalize-encoding.js n=100000 input="UTF16LE"               -1.04 %            0.50024308
util/normalize-encoding.js n=100000 input="UTF8"                   0.12 %            0.93320841
util/normalize-encoding.js n=100000 input="Utf8"                  -1.20 %            0.57602093
util/normalize-encoding.js n=100000 input="[]"                     1.85 %            0.09944438
util/normalize-encoding.js n=100000 input="base64"                 1.62 %            0.44022027
util/normalize-encoding.js n=100000 input="binary"                 0.89 %            0.48502009
util/normalize-encoding.js n=100000 input="false"                 -1.37 %            0.57340827
util/normalize-encoding.js n=100000 input="foo"                   -1.14 %            0.54335820
util/normalize-encoding.js n=100000 input="group_common"           1.69 %            0.30794851
util/normalize-encoding.js n=100000 input="group_misc"             1.82 %            0.14879427
util/normalize-encoding.js n=100000 input="group_uncommon"         0.07 %            0.95924675
util/normalize-encoding.js n=100000 input="group_upper"            1.72 %            0.20659181
util/normalize-encoding.js n=100000 input="hex"                    3.19 %            0.05784523
util/normalize-encoding.js n=100000 input="latin1"                -3.32 %            0.08724869
util/normalize-encoding.js n=100000 input="uTf-8"                  0.18 %            0.89152942
util/normalize-encoding.js n=100000 input="ucs2"                   1.32 %            0.54429200
util/normalize-encoding.js n=100000 input="undefined"             -0.18 %            0.94830130
util/normalize-encoding.js n=100000 input="utF-8"                 -1.96 %            0.40376120
util/normalize-encoding.js n=100000 input="utf-16le"               1.57 %            0.32425724
util/normalize-encoding.js n=100000 input="utf-8"                 -0.07 %            0.97425237
util/normalize-encoding.js n=100000 input="utf16le"               -2.09 %            0.26998988
util/normalize-encoding.js n=100000 input="utf8"                   0.97 %            0.70922856
```
and increasing […]
While I'm running the benchmarks, I just took a look at the asm; it doesn't seem to change though (just one more block without generated code). HIR of that extra B38:

[…]

ASM:

[…]
@AndreasMadsen any ideas why there is no significance when comparing with the same binary (especially since it's not performing network requests or similar I/O)?
I don't understand the question. You are expecting significant results when comparing master against master? I would not expect that, since the performance should be the same.

It is quite easy to get false positives just by running something in the background; unfortunately it doesn't have to be I/O. This is why nodejs/benchmarking#58 is so important.
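The multiple-comparisons point can be made concrete: even with identical binaries, chance alone will produce some small p-values. With roughly 30 benchmark configurations (the number in the tables above) and the usual 0.05 threshold:

```javascript
// Under the null hypothesis p-values are uniformly distributed, so with
// k independent comparisons the chance of at least one false positive
// at significance level alpha is 1 - (1 - alpha)^k.
const k = 30;       // ~number of benchmark configurations above
const alpha = 0.05; // the one-star threshold
const pAtLeastOneFalsePositive = 1 - Math.pow(1 - alpha, k);
console.log(pAtLeastOneFalsePositive.toFixed(3)); // 0.785
```

So seeing a starred row or two in a master-vs-master comparison is expected, which is why a quiet, dedicated benchmark machine matters.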
Use the word "confidence" to indicate the confidence level of the p value so it's easier to understand. With this change, more stars in the output of compare.R mean a higher confidence level (lower significance level). PR-URL: #10737 Refs: #10439 Reviewed-By: Anna Henningsen <anna@addaleax.net> Reviewed-By: James M Snell <jasnell@gmail.com> Reviewed-By: Andreas Madsen <amwebdk@gmail.com>
* Improve readability of util.normalizeEncoding and add some comments
* Add a benchmark for util.normalizeEncoding
(force-pushed from 267386d to f329486)
@mscdex I am going to land this if there are no more concerns.
Landed in 34bf31e
* Improve readability of util.normalizeEncoding and add some comments
* Add a benchmark for util.normalizeEncoding

PR-URL: #10439
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Andreas Madsen <amwebdk@gmail.com>
Checklist

- make -j4 test (UNIX) or vcbuild test (Windows) passes

Affected core subsystem(s)

util

Description of change

Replace the redundant for loop with if-else. EDIT: there is an inevitable performance hit when rewriting the loop into if-else with a helper function. Rewritten to use a slightly more readable while loop with comments.

Also add a benchmark for util.normalizeEncoding.