feat(bench-trial): new package to benchmark performance #192
Conversation
packages/bench-trial/runner.js
Outdated
@@ -0,0 +1,333 @@
#!/usr/bin/env node --expose-gc
so many, many, many tests for this file that they didn't fit in this PR; I will try to send them via USB to each maintainer of DataPoint
packages/bench-trail/runner.js
Outdated
  return Math.round(bytes / 1024 * 100) / 100
}

function runGC (val) {
All the memory-related code is not being used ATM; I left it here because it took me a while to figure some things out. Will look back into it when I have more time to reintegrate it.
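For reference, here is a minimal sketch of what those unused memory helpers might look like, assuming the script runs under `node --expose-gc` so that `global.gc` is available; the `toKilobytes` name is hypothetical and only stands in for the rounding helper visible in the diff above.

```js
// Hypothetical sketch of the unused memory helpers (not the exact PR code).

// Convert a byte count, e.g. process.memoryUsage().heapUsed,
// to kilobytes rounded to two decimal places.
function toKilobytes (bytes) {
  return Math.round(bytes / 1024 * 100) / 100
}

// Trigger a garbage collection pass when the process was started with
// --expose-gc, then return the value untouched so it can be chained.
function runGC (val) {
  if (typeof global.gc === 'function') {
    global.gc()
  }
  return val
}
```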
test performance
packages/bench-trial/runner.js
Outdated
}

function runTest (suite) {
  const isASync = !!suite.async
`isAsync`? (also in the `runBenchmark` function)
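A small sketch of the suggested rename, where `runSyncTest` and `runAsyncTest` are hypothetical placeholders for whatever the runner actually calls on each branch:

```js
// Hypothetical sketch of the rename; the helper names are placeholders.
function runTest (suite) {
  const isAsync = !!suite.async // was `isASync`; the same rename applies in runBenchmark
  return isAsync
    ? runAsyncTest(suite) // placeholder for the asynchronous path
    : runSyncTest(suite)  // placeholder for the synchronous path
}
```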
This is awesome!
I provided some suggestions for rewording the README, so I'm requesting changes mostly for that. There are a couple of comments in the code itself too, but nothing big.
Stoked to start using this!
packages/bench-trial/README.md
Outdated
While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code I found out a couple of things:

- **Ambiguous results**: tests would throw different results every time I ran the test, specially if the difference was minimal. If I ran the same test multiple times the numbers changed, as time went by I would get more and more operations per second on each benchmark. This would only happen if I ran the tests consecutively, the reason of this might be related to the v8 engine warming up and optimizing the code the more I ran it. If I let some time to cool off, the tests would go back down on operations per second.
This is a rewrite of the line, so copying and pasting might be easier (if you want to keep it):
Ambiguous results: I noticed that the same benchmark tests were returning different results every time they were run. If they were re-run consecutively I would get more operations per second on each benchmark. I believe the reason may be related to the v8 engine warming up and optimizing the code the more it ran, since if I let some time to "cool off" the operations per second for each test would decrease. These ambiguous results meant having to manually repeat tests to try to ensure some consistency.
packages/bench-trial/README.md
Outdated
While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code I found out a couple of things:

- **Ambiguous results**: tests would throw different results every time I ran the test, specially if the difference was minimal. If I ran the same test multiple times the numbers changed, as time went by I would get more and more operations per second on each benchmark. This would only happen if I ran the tests consecutively, the reason of this might be related to the v8 engine warming up and optimizing the code the more I ran it. If I let some time to cool off, the tests would go back down on operations per second.
- **Reliable Execution**: more than once I made changes on the code being tested and never did I notice the change I had made was not even executing correctly. So the results I was getting were really unreliable.
This is a rewrite of the line, so copying and pasting might be easier (if you want to keep it):
Unreliable Test Execution: Occasionally I made changes to the code being benchmarked and would overlook that it was not executing correctly, further compounding the issue of making the results unreliable.
packages/bench-trial/README.md
Outdated
## Solution

- **Ambiguous results**: Run benchmark more than once to get median and average results, because the test will run multiple times the code will get optimized, using the median we can get more reliable results.
This is a rewrite of the line, so copying and pasting might be easier (if you want to keep it):
Clear, consistent results: By running benchmark tests more than once we can get median and average results and get a bigger picture with less fluctuation. Because the tests will run multiple times in succession, the code will get optimized by the engine, and we can use the median time as a more consistent and stable metric.
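As a loose illustration of that idea (not the runner's actual code), repeating a benchmark several times and reporting the average alongside the median could look like this, where `runOnce` is a hypothetical callback that returns one ops/sec sample:

```js
// Hypothetical sketch: run the same benchmark several times in a row and
// keep every ops/sec sample so a median and an average can be reported
// instead of a single, possibly unwarmed, measurement.
function collectSamples (runOnce, iterations = 5) {
  const samples = []
  for (let i = 0; i < iterations; i++) {
    samples.push(runOnce()) // each call returns one ops/sec number
  }
  const average = samples.reduce((sum, n) => sum + n, 0) / samples.length
  const sorted = samples.slice().sort((a, b) => a - b)
  const median = sorted[Math.floor(sorted.length / 2)]
  return { samples, average, median }
}
```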
packages/bench-trial/README.md
Outdated
## Solution

- **Ambiguous results**: Run benchmark more than once to get median and average results, because the test will run multiple times the code will get optimized, using the median we can get more reliable results.
- **Reliable Execution**: Run a simple assertion tests on each suite before the actual benchmark runs, this helps us make sure our test are executing correctly.
This is a rewrite of the line, so copying and pasting might be easier (if you want to keep it):
Reliable Tests to Execute: By running simple assertion tests on each suite before the actual benchmark runs we can be sure our tests are executing correctly.
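A rough sketch of that assertion step, using Node's built-in `assert`; the `benchmark` and `expected` property names here are assumptions for illustration rather than bench-trial's actual suite shape:

```js
// Hypothetical sketch: run the code under test once and check its output
// before handing the suite to benchmark.js.
const assert = require('assert')

function verifySuite (suite) {
  const result = suite.benchmark() // execute the benchmarked function once
  assert.deepStrictEqual(result, suite.expected) // throws if the code is broken
  return true
}
```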
packages/bench-trial/README.md
Outdated
### Writing your benchmark suites

The file you provide to bench-trial should export an `array` of suites, each suite is an object in the form of:
The file provided to `bench-trial` should export an `array` of test suites, each test suite is an object in the form of:
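For illustration, a minimal suites file under that wording might look like the sketch below; the property names (`name`, `async`, `test`, `benchmark`) and the way the `test` helper is imported are assumptions pieced together from the fragments visible in this PR, not a definitive API:

```js
// Hypothetical suites file for bench-trial; property names are assumptions.
const { test } = require('bench-trial')

const iterate = () => [1, 2, 3].map(n => n * 2)

module.exports = [
  {
    name: 'array-iteration',        // label reported for this suite
    async: false,                   // flag the runner checks via !!suite.async
    test: test(iterate, [2, 4, 6]), // assert iterate() returns the expected value
    benchmark: iterate              // function handed to benchmark.js
  }
]
```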
packages/bench-trial/README.md
Outdated
test(test:function, value:*)
```

to write your manual test see the manual test example below
To write your own tests see the "manual test" example below
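And a loose sketch of a hand-written ("manual") test that skips the `test(fn, expected)` helper and does its own assertion; the shape of the function is an assumption, not necessarily what examples/manual-tests.js does:

```js
// Hypothetical manual test: perform the assertion yourself.
const assert = require('assert')

function manualTest () {
  const result = [1, 2, 3].reduce((sum, n) => sum + n, 0)
  assert.strictEqual(result, 6) // throws if the code under test misbehaves
  return true
}
```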
packages/bench-trial/package.json
Outdated
"email": "acatl.pacheco@viacom.com" | ||
}, | ||
"devDependencies": { | ||
"benchmark": "2.1.4", |
Is there a reason this is pinned to this specific version?
packages/bench-trial/runner.js
Outdated
function middle (values) {
  var len = values.length
  var half = Math.floor(len / 2)
`var`s?
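Under that suggestion the helper could drop `var` for `const`; a sketch that also reconstructs the rest of the body, since the diff above is truncated (the sorting step is an assumption about how the median is taken):

```js
// Sketch of the median helper with const instead of var; the body below
// is a generic median calculation, not necessarily the PR's exact code.
function middle (values) {
  const sorted = values.slice().sort((a, b) => a - b)
  const len = sorted.length
  const half = Math.floor(len / 2)
  return len % 2 ? sorted[half] : (sorted[half - 1] + sorted[half]) / 2
}
```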
@paulmolluzzo applied your feedback, thanks!!!
packages/bench-trial/README.md
Outdated
While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code I found out a couple of things:

- **Consistency**: I noticed that the same benchmark tests were returning different results every time they executed. If they were re-run consecutively, I would get more operations per second on each benchmark. I believe the reason may be related to the v8 engine warming up and optimizing the code the more it ran, since if I let some time to "cool off" the operations per second for each test would decrease. These ambiguous results meant having to repeat tests to ensure some consistency.
The title should have stayed as **Ambiguous Results**. This is describing the problem.
Two tiny things.
packages/bench-trial/README.md
Outdated
While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code I found out a couple of things:

- **Consistency**: I noticed that the same benchmark tests were returning different results every time they executed. If they were re-run consecutively, I would get more operations per second on each benchmark. I believe the reason may be related to the v8 engine warming up and optimizing the code the more it ran, since if I let some time to "cool off" the operations per second for each test would decrease. These ambiguous results meant having to repeat tests to ensure some consistency.
- **Reliable Execution**: Occasionally I made changes to the benchmarked code and would overlook that it was not executing correctly, further compounding the issue of making the results unreliable.
This should be **Unreliable Execution**. "Reliable Execution" isn't a problem 😉
packages/bench-trial/README.md
Outdated
Test synchronous code [example](examples/array-iteration.js)
Test asynchronous code [example](examples/async-example.js)
Write manual test sync/asynchronous code [example](examples/manual-tests.js)
Can we put bullets on these please? They currently get squished into a single line.
This is awesome! 🎉
Thanks for putting up with my comments on the README. 🙊
What: new performance benchmark tool
Why: to test changes we make to data-point
How: created a new module that runs multiple iterations of benchmarkjs tests
Checklist: