feat(bench-trial): new package to benchmark performance #192
**File: README.md** (new file, 94 lines)
# Bench trial

> Runs one or multiple benchmark tests

## Install

```bash
npm install -g bench-trial
```

## Usage

```bash
bench-trial benchmarks/map-helper-vs-entity.js -i 5
```

## TL;DR

Runs one or more Benchmark.js suites enough times to produce less ambiguous results, and includes basic assertion tests to make sure each suite executes reliably.
## Why?

While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code, I found a couple of things:

- **Ambiguous Results**: I noticed that the same benchmark tests returned different results every time they executed. If they were re-run consecutively, I would get more operations per second on each run. I believe the reason may be related to the V8 engine warming up and optimizing the code the longer it runs, since if I let the code "cool off" for some time, the operations per second for each test would decrease again. These ambiguous results meant having to repeat tests to ensure some consistency.
- **Reliable Execution**: Occasionally I made changes to the benchmarked code and would overlook that it was not executing correctly, further compounding the unreliability of the results.
## Solution

- **Consistency**: By running each benchmark suite more than once, we can take the median and average of the results and get a bigger picture with less fluctuation. Because the tests run multiple times in succession, the code gets optimized by the engine, and we can use the median as a more consistent and stable metric.

- **Tests for reliable execution**: By running simple assertion tests on each suite before the actual benchmark runs, we can be sure our tests execute correctly.
## API

```bash
bench-trial <file> [-i <iterations>] [-s]
```

- `-i, --iterations <iterations>` number of times to run each suite; defaults to 10 if not provided.
- `-s, --skip-tests` if provided, skips the assertion tests.
### Writing your benchmark suites

The file provided to **bench-trial** should export an `array` of test suites. Each test suite is an object in the form of:

```
{
  async: boolean,
  name: string,
  test: function,
  benchmark: function
}
```

| Property | Type | Description |
|:---|:---|:---|
| *async* | `Boolean` | set to `true` when the test and benchmark functions are asynchronous |
| *name* | `String` | name that describes the test you are running |
| *test* | `function` | function that runs an assertion test against the result of the code you want to benchmark |
| *benchmark* | `function` | function passed to the Benchmark.js Suite that actually runs the benchmark |
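Putting the pieces together, a minimal suite file could look like the following sketch (the `double` function and its expected value are invented for illustration; the `test` helper is the one exported by bench-trial):

```js
const { test } = require('bench-trial')

// Hypothetical code under test
function double () {
  return 21 * 2
}

module.exports = [
  {
    async: false,
    name: 'double',
    test: test(double, 42), // verifies double() returns 42 before benchmarking
    benchmark: double
  }
]
```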
#### Sync vs Async

- Synchronous methods are simple methods that are expected to return a value.
- Asynchronous methods work a bit differently from Benchmark.js async methods: bench-trial expects async methods to follow the Node.js [error-first callback](https://nodejs.org/api/errors.html#errors_error_first_callbacks) convention.
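For illustration, here is the same operation written in both styles (a sketch; the function names are invented):

```js
// Synchronous: simply return the value
function addSync () {
  return 1 + 1
}

// Asynchronous: accept an error-first callback and invoke it as (error, value)
function addAsync (done) {
  setImmediate(() => {
    done(null, 1 + 1)
  })
}
```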
#### Testing

bench-trial provides a convenience method that accepts the function to execute and a value to check against the result of the code you are testing. It takes care of sync vs async depending on how you set the `async` flag.

```js
test(test:function, value:*)
```

To write a manual test instead, see the manual-tests example below.
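For example, one `test` call covers either style when paired with the suite's `async` flag (a sketch using invented functions):

```js
const { test } = require('bench-trial')

function addSync () { return 1 + 1 }
function addAsync (done) { setImmediate(() => done(null, 1 + 1)) }

module.exports = [
  { async: false, name: 'add-sync', test: test(addSync, 2), benchmark: addSync },
  { async: true, name: 'add-async', test: test(addAsync, 2), benchmark: addAsync }
]
```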
## Examples

- Test synchronous code: [example](examples/array-iteration.js)
- Test asynchronous code: [example](examples/async-example.js)
- Write manual tests for sync/asynchronous code: [example](examples/manual-tests.js)
## Acknowledgements

This tool is a wrapper around [benchmarkjs](https://benchmarkjs.com), so all credit related to the benchmarking itself really goes to them.

Thanks to [Paul Molluzzo](https://github.com/paulmolluzzo) for coming up with the name **bench-trial**!

## <a name="contributing">Contributing</a>

Please read [CONTRIBUTING.md](https://github.com/ViacomInc/data-point/blob/master/CONTRIBUTING.md) for details on our code of conduct and the process for submitting pull requests to us.

## <a name="license">License</a>

This project is licensed under the Apache License, Version 2.0 - see the [LICENSE](LICENSE) file for details.
**File: examples/array-iteration.js** (new file, 43 lines; name inferred from the README's Examples links)
```js
const { test } = require('bench-trial')

// Shared fixture: a 100-element array and the expected length of its joined string
const array = Array(100).fill('foo')
const expected = array.join('').length

function forLoop () {
  let result = ''
  for (let index = 0; index < array.length; index++) {
    result = result + array[index]
  }

  const length = result.length
  result = ''
  return length
}

function whileLoop () {
  let result = ''
  let index = 0
  while (index !== array.length) {
    result = result + array[index]
    index++
  }

  const length = result.length
  result = ''
  return length
}

module.exports = [
  {
    async: false,
    name: 'while-loop',
    test: test(whileLoop, expected),
    benchmark: whileLoop
  },
  {
    async: false,
    name: 'for-loop',
    test: test(forLoop, expected),
    benchmark: forLoop
  }
]
```
**File: examples/async-example.js** (new file, 30 lines; name inferred from the README's Examples links)
```js
const { test } = require('bench-trial')

const expected = true

// Resolves a promise, then reports the value via an error-first callback
function testPromise (done) {
  Promise.resolve(true).then(() => {
    done(null, true)
  })
}

// Produces the same result via a zero-delay timeout
function testSetTimeOut (done) {
  setTimeout(() => {
    done(null, true)
  }, 0)
}

module.exports = [
  {
    async: true,
    name: 'promise',
    test: test(testPromise, expected),
    benchmark: testPromise
  },
  {
    async: true,
    name: 'timeout',
    test: test(testSetTimeOut, expected),
    benchmark: testSetTimeOut
  }
]
```
**File: examples/manual-tests.js** (new file, 33 lines; name inferred from the README's Examples links)
```js
const assert = require('assert')

const expected = 2

function addsNumbersSync () {
  return 1 + 1
}

function addsNumbersAsync (done) {
  Promise.resolve().then(() => {
    done(null, 1 + 1)
  })
}

module.exports = [
  {
    async: false,
    name: 'addsNumbersSync',
    // Manual sync test: assert directly instead of using the test() helper
    test: () => assert.deepEqual(addsNumbersSync(), expected),
    benchmark: addsNumbersSync
  },
  {
    async: true,
    name: 'addsNumbersAsync',
    // Manual async test: call done(null) on success, done(err) on failure
    test: done => {
      addsNumbersAsync((e, val) => {
        assert.deepEqual(val, expected)
        done(null)
      })
    },
    benchmark: addsNumbersAsync
  }
]
```
**File: lib/index.js** (new file, 54 lines; path per the `main` entry in package.json)
```js
const assert = require('assert')

// Wraps a sync method in an assertion against the expected value
function testSync (method, expected) {
  return () => assert.deepEqual(method(), expected)
}

// Wraps an error-first async method: propagates errors, asserts the value
function testAsync (method, expected) {
  return done => {
    return method((err, value) => {
      if (err) {
        return done(err)
      }
      try {
        assert.deepEqual(value, expected)
        done(null, value)
      } catch (e) {
        done(e)
      }
    })
  }
}

// Convenience wrapper: bundles a method with its expected value; per the
// README, sync vs async is resolved from the suite's `async` flag
function test (method, expected) {
  return {
    test: method,
    expected
  }
}

test.sync = testSync
test.async = testAsync

// Sync benchmarks are passed to Benchmark.js as-is
function benchmarkSync (method) {
  return method
}

// Adapts an error-first async method to Benchmark.js deferred mode
function benchmarkAsync (method) {
  return deferred => {
    return method((err, value) => {
      if (err) {
        throw err
      }
      deferred.resolve()
    })
  }
}

module.exports = {
  test,
  benchmark: {
    sync: benchmarkSync,
    async: benchmarkAsync
  }
}
```
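Judging from these exports, `test.sync` and `test.async` produce the same plain assertion functions written by hand in examples/manual-tests.js, so a suite could use them directly (a sketch, not taken from this diff):

```js
const { test } = require('bench-trial')

function addsNumbersSync () {
  return 1 + 1
}

module.exports = [
  {
    async: false,
    name: 'addsNumbersSync',
    // Equivalent to the manual `() => assert.deepEqual(addsNumbersSync(), 2)`
    test: test.sync(addsNumbersSync, 2),
    benchmark: addsNumbersSync
  }
]
```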
**File: package.json** (new file, 22 lines)
```json
{
  "name": "bench-trial",
  "version": "2.0.0",
  "description": "Runs one or multiple benchmark tests",
  "main": "./lib/index.js",
  "license": "Apache-2.0",
  "engines": {
    "node": ">=6"
  },
  "bin": {
    "bench-trial": "runner.js"
  },
  "author": {
    "name": "Acatl Pacheco",
    "email": "acatl.pacheco@viacom.com"
  },
  "dependencies": {
    "benchmark": "2.x",
    "commander": "2.x",
    "chalk": "2.x"
  }
}
```