feat(bench-trial): new package to benchmark performance #192

Merged · 12 commits · Jan 31, 2018

---

`packages/bench-trial/README.md` (94 additions)
# Bench trial

> Runs one or multiple benchmark tests

## Install

```bash
npm install -g bench-trial
```

## Usage

```bash
bench-trial benchmarks/map-helper-vs-entity.js -i 5
```

## TL;DR

Runs one or more Benchmark.js tests enough times to reduce ambiguity in the results, and includes basic assertion testing to make sure each test executes correctly.

## Why?

While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code, I found a couple of things:

- **Ambiguous results**: The same benchmark tests returned different results on every execution. When re-run consecutively, each benchmark reported more operations per second; this is likely the V8 engine warming up and optimizing the code as it runs, since after some time to "cool off" the operations per second would decrease again. These ambiguous results meant repeating tests to get any consistency.
- **Unreliable execution**: Occasionally I changed the benchmarked code and overlooked that it no longer executed correctly, which made the results even less reliable.

## Solution

- **Consistency**: By running each benchmark more than once, we can compute median and average results and get a bigger picture with less fluctuation. Because the tests run multiple times in succession, the engine optimizes the code, and the median becomes a more consistent and stable metric.

- **Reliable execution**: By running simple assertion tests on each suite before the actual benchmark runs, we can be sure our tests are executing correctly.

## API

```bash
bench-trial <file> [-i <iterations>] [-s]
```

- `-i --iterations <iterations>` number of iterations; defaults to 10 if not provided.
- `-s --skip-tests` if provided, skips the assertion tests.

### Writing your benchmark suites

The file provided to **bench-trial** should export an `array` of test suites; each test suite is an object of the form:

```js
{
  async: boolean,
  name: string,
  test: function,
  benchmark: function
}
```

| Property | Type | Description |
|:---|:---|:---|
| *async* | `Boolean` | Whether the suite is asynchronous (see Sync vs Async below) |
| *name* | `String` | Name that describes the test you are running |
| *test* | `Function` | Function that runs an assertion test against the result of the code you want to benchmark |
| *benchmark* | `Function` | Function passed to the Benchmark.js Suite that actually runs the benchmark |

#### Sync vs Async

- Synchronous methods are simple methods that return a value.
- Asynchronous methods differ a bit from Benchmark.js async methods: bench-trial expects them to follow Node's [error-first callback](https://nodejs.org/api/errors.html#errors_error_first_callbacks) convention.
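As a sketch, an asynchronous benchmark function under this convention receives a callback and invokes it with `(error, value)`. The function name and value here are made up for illustration:

```javascript
// Error-first callback style: the first argument is the error
// (null on success), the second is the result value.
function readAnswer (done) {
  setTimeout(() => {
    done(null, 42)
  }, 0)
}

// A caller (such as the bench-trial runner) consumes it like this:
readAnswer((err, value) => {
  if (err) throw err
  console.log(value) // prints 42
})
```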

#### Testing

bench-trial provides a convenience method that accepts the function to execute and a value to check against its result. It takes care of sync vs async execution depending on how you set the suite's `async` flag.

```js
test(method:function, expected:*)
```

To write your tests manually, see the manual test example below.

## Examples

- Test synchronous code: [example](examples/array-iteration.js)
- Test asynchronous code: [example](examples/async-example.js)
- Write manual tests for sync/async code: [example](examples/manual-tests.js)

## Acknowledgements

This tool is a wrapper around [benchmarkjs](https://benchmarkjs.com), so all credit for the benchmarking itself goes to them.

Thanks to [Paul Molluzzo](https://github.com/paulmolluzzo) for coming up with the name **bench-trial**!

## <a name="contributing">Contributing</a>

Please read [CONTRIBUTING.md](https://github.com/ViacomInc/data-point/blob/master/CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.

## <a name="license">License</a>

This project is licensed under the Apache License Version 2.0 - see the [LICENSE](LICENSE) file for details.
---

`packages/bench-trial/examples/array-iteration.js` (43 additions)

```js
const { test } = require('bench-trial')

const array = Array(100).fill('foo')
const expected = array.join('').length

function forLoop () {
  let result = ''
  for (let index = 0; index < array.length; index++) {
    result = result + array[index]
  }

  const length = result.length
  result = ''
  return length
}

function whileLoop () {
  let result = ''
  let index = 0
  while (index !== array.length) {
    result = result + array[index]
    index++
  }

  const length = result.length
  result = ''
  return length
}

module.exports = [
  {
    async: false,
    name: 'while-loop',
    test: test(whileLoop, expected),
    benchmark: whileLoop
  },
  {
    async: false,
    name: 'for-loop',
    test: test(forLoop, expected),
    benchmark: forLoop
  }
]
```
---

`packages/bench-trial/examples/async-example.js` (30 additions)

```js
const { test } = require('bench-trial')

const expected = true

function testPromise (done) {
  Promise.resolve(true).then(() => {
    done(null, true)
  })
}

function testSetTimeOut (done) {
  setTimeout(() => {
    done(null, true)
  }, 0)
}

module.exports = [
  {
    async: true,
    name: 'promise',
    test: test(testPromise, expected),
    benchmark: testPromise
  },
  {
    async: true,
    name: 'timeout',
    test: test(testSetTimeOut, expected),
    benchmark: testSetTimeOut
  }
]
```
---

`packages/bench-trial/examples/manual-tests.js` (33 additions)

```js
const assert = require('assert')

const expected = 2

function addsNumbersSync () {
  return 1 + 1
}

function addsNumbersAsync (done) {
  Promise.resolve().then(() => {
    done(null, 1 + 1)
  })
}

module.exports = [
  {
    async: false,
    name: 'addsNumbersSync',
    test: () => assert.deepEqual(addsNumbersSync(), expected),
    benchmark: addsNumbersSync
  },
  {
    async: true,
    name: 'addsNumbersAsync',
    test: done => {
      addsNumbersAsync((e, val) => {
        assert.deepEqual(val, expected)
        done(null)
      })
    },
    benchmark: addsNumbersAsync
  }
]
```
---

`packages/bench-trial/lib/index.js` (54 additions)

```js
const assert = require('assert')

function testSync (method, expected) {
  return () => assert.deepEqual(method(), expected)
}

function testAsync (method, expected) {
  return done => {
    return method((err, value) => {
      if (err) {
        return done(err)
      }
      try {
        assert.deepEqual(value, expected)
        done(null, value)
      } catch (e) {
        done(e)
      }
    })
  }
}

function test (method, expected) {
  return {
    test: method,
    expected
  }
}

test.sync = testSync
test.async = testAsync

function benchmarkSync (method) {
  return method
}

function benchmarkAsync (method) {
  return deferred => {
    return method((err, value) => {
      if (err) {
        throw err
      }
      deferred.resolve()
    })
  }
}

module.exports = {
  test,
  benchmark: {
    sync: benchmarkSync,
    async: benchmarkAsync
  }
}
```
---

`packages/bench-trial/package.json` (22 additions)

```json
{
  "name": "bench-trial",
  "version": "2.0.0",
  "description": "Runs one or multiple benchmark tests",
  "main": "./lib/index.js",
  "license": "Apache-2.0",
  "engines": {
    "node": ">=6"
  },
  "bin": {
    "bench-trial": "runner.js"
  },
  "author": {
    "name": "Acatl Pacheco",
    "email": "acatl.pacheco@viacom.com"
  },
  "dependencies": {
    "benchmark": "^2.1.4",
    "commander": "^2.13.0",
    "chalk": "^2.3.0"
  }
}
```