
feat(bench-trial): new package to benchmark performance #192

Merged: 12 commits into ViacomInc:master from acatl:bench-trail, Jan 31, 2018

Conversation

@acatl (Collaborator) commented Jan 29, 2018

What: new performance benchmark tool

Why: to test changes we make to data-point

How: created a new module that runs multiple iterations of benchmarkjs tests

Checklist:

  • Has Breaking changes N/A
  • Documentation
  • Tests
  • Ready to be merged
  • Added username to all-contributors list
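The "multiple iterations of benchmarkjs tests" idea can be sketched roughly like this. This is an illustrative stand-in, not the actual bench-trial implementation (which wraps benchmarkjs); `timeOnce`, `runIterations`, and the fixed run count are assumptions for the sketch:

```javascript
// Sketch: run a benchmark function many times per iteration and collect
// one ops/sec sample per iteration, so the samples can be aggregated later.
function timeOnce (fn) {
  const runs = 1000
  const start = process.hrtime.bigint()
  for (let i = 0; i < runs; i++) fn()
  const elapsedNs = Number(process.hrtime.bigint() - start)
  return runs / (elapsedNs / 1e9) // operations per second
}

function runIterations (fn, iterations) {
  const results = []
  for (let i = 0; i < iterations; i++) {
    results.push(timeOnce(fn))
  }
  return results
}

// Example: collect 5 ops/sec samples for a trivial array iteration
const samples = runIterations(() => [1, 2, 3].map(x => x * 2), 5)
console.log(samples.length) // 5
```

Collecting several samples is what lets the tool report a median and average instead of a single, possibly warmed-up, number.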

@acatl acatl added this to the v2 - new features milestone Jan 29, 2018
@acatl acatl self-assigned this Jan 29, 2018
@@ -0,0 +1,333 @@
#!/usr/bin/env node --expose-gc
@acatl (Collaborator, Author) commented:
so many many many tests for this file they didn't fit in this PR, I will try to send it via USB to each maintainer of DataPoint

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling 6f45729 on acatl:bench-trail into bb6c7bc on ViacomInc:master.

@coveralls commented Jan 29, 2018

Coverage Status

Coverage remained the same at 100.0% when pulling 5a89cd0 on acatl:bench-trail into 485f4ab on ViacomInc:master.

return Math.round(bytes / 1024 * 100) / 100
}

function runGC (val) {
@acatl (Collaborator, Author) commented:
All the memory-related code is unused at the moment. I left it here because it took me a while to figure some things out; I will look back into it when I have more time to reintegrate it.
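For reference, the memory helpers appear to combine the `--expose-gc` flag from the shebang with `process.memoryUsage()`. A minimal sketch of that idea (the `toKb` rounding mirrors the snippet above; the rest, including `heapUsedKb`, is an assumption):

```javascript
// Sketch: force a garbage collection (only available when node is started
// with --expose-gc, as in the shebang) and read heap usage in KB.
function toKb (bytes) {
  return Math.round(bytes / 1024 * 100) / 100
}

function heapUsedKb () {
  // global.gc only exists under --expose-gc; guard so this runs either way
  if (typeof global.gc === 'function') {
    global.gc()
  }
  return toKb(process.memoryUsage().heapUsed)
}

console.log(heapUsedKb() > 0) // true
```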

@acatl acatl changed the title feat(bench-trail): new package to benchmark performance feat(bench-trial): new package to benchmark performance Jan 29, 2018
}

function runTest (suite) {
const isASync = !!suite.async
@raingerber (Contributor) commented Jan 30, 2018:
isAsync? (also in runBenchmark function)
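The `async` flag this casing refers to presumably decides whether a suite's test is invoked with a completion callback or called directly. A hedged sketch of that branch (names and the callback convention are illustrative, not the actual bench-trial code):

```javascript
// Sketch: dispatch a suite's test as sync or async based on its flag,
// using a node-style (err, result) callback for both paths.
function runTest (suite, done) {
  const isAsync = !!suite.async // the review suggests this casing over `isASync`
  if (isAsync) {
    suite.test(done) // async tests receive the callback
  } else {
    done(null, suite.test()) // sync tests return their result directly
  }
}

// usage:
runTest({ async: false, test: () => 42 }, (err, result) => {
  console.log(result) // 42
})
```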

@paulmolluzzo (Collaborator) left a comment:
This is awesome!

I provided some suggestions for rewording the README, so I'm requesting changes mostly for that. There are a couple of comments in the code itself too, but nothing big.

Stoked to start using this!


While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code I found out a couple of things:

- **Ambiguous results**: tests would throw different results every time I ran the test, specially if the difference was minimal. If I ran the same test multiple times the numbers changed, as time went by I would get more and more operations per second on each benchmark. This would only happen if I ran the tests consecutively, the reason of this might be related to the v8 engine warming up and optimizing the code the more I ran it. If I let some time to cool off, the tests would go back down on operations per second.
Collaborator commented:
This is a rewrite of the line, so copy and pasting might be easier (if you want to keep them):

Ambiguous results: I noticed that the same benchmark tests were returning different results every time they were run. If they were re-run consecutively I would get more operations per second on each benchmark. I believe the reason may be related to the v8 engine warming up and optimizing the code the more it ran, since if I let some time to "cool off" the operations per second for each test would decrease. These ambiguous results meant having to manually repeat tests to try to ensure some consistency.

- **Reliable Execution**: more than once I made changes on the code being tested and never did I notice the change I had made was not even executing correctly. So the results I was getting were really unreliable.
Collaborator commented:
This is a rewrite of the line, so copy and pasting might be easier (if you want to keep them):

Unreliable Test Execution: Occasionally I made changes to the code being benchmarked and would overlook that it was not executing correctly, further compounding the issue of making the results unreliable.


## Solution

- **Ambiguous results**: Run benchmark more than once to get median and average results, because the test will run multiple times the code will get optimized, using the median we can get more reliable results.
Collaborator commented:
This is a rewrite of the line, so copy and pasting might be easier (if you want to keep them):

Clear, consistent results: By running benchmark tests more than once we can get median and average results and get a bigger picture with less fluctuation. Because the tests will run multiple times in succession, the code will get optimized by the engine, and we can use the median time as a more consistent and stable metric.

- **Reliable Execution**: Run a simple assertion tests on each suite before the actual benchmark runs, this helps us make sure our test are executing correctly.
Collaborator commented:
This is a rewrite of the line, so copy and pasting might be easier (if you want to keep them):

Reliable Tests to Execute: By running simple assertion tests on each suite before the actual benchmark runs we can be sure our tests are executing correctly.


### Writing your benchmark suites

The file you provide to bench-trial should export an `array` of suites, each suite is an object in the form of:
Collaborator commented:
The file provided to bench-trial should export an array of test suites, each test suite is an object in the form of:

test(test:function, value:*)
```

to write your manual test see the manual test example below
Collaborator commented:
To write your own tests see the "manual test" example below

"email": "acatl.pacheco@viacom.com"
},
"devDependencies": {
"benchmark": "2.1.4",
Collaborator commented:
Is there a reason this is pinned to this specific version?


function middle (values) {
var len = values.length
var half = Math.floor(len / 2)
Collaborator commented:
vars?
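Addressing the `vars?` note, the median helper could be written with `const` along these lines (a sketch that completes the truncated snippet above; the sort step is an assumption, in case the input samples are not already ordered):

```javascript
// Median of a numeric array, using const per the review comment.
function middle (values) {
  const sorted = values.slice().sort((a, b) => a - b)
  const len = sorted.length
  const half = Math.floor(len / 2)
  // odd length: middle element; even length: mean of the two middle elements
  return len % 2 ? sorted[half] : (sorted[half - 1] + sorted[half]) / 2
}

console.log(middle([3, 1, 2])) // 2
console.log(middle([4, 1, 3, 2])) // 2.5
```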

@acatl (Collaborator, Author) commented Jan 31, 2018:

@paulmolluzzo applied your feedback, thanks!!!

@acatl acatl modified the milestone: v2 - new features Jan 31, 2018

While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code I found out a couple of things:

- **Consistency**: I noticed that the same benchmark tests were returning different results every time they executed. If they were re-run consecutively, I would get more operations per second on each benchmark. I believe the reason may be related to the v8 engine warming up and optimizing the code the more it ran, since if I let some time to "cool off" the operations per second for each test would decrease. These ambiguous results meant having to repeat tests to ensure some consistency.
Collaborator commented:
The title should have stayed as **Ambiguous Results**. This is describing the problem.

@paulmolluzzo (Collaborator) left a comment:
Two tiny things.

While running [benchmarkjs](https://benchmarkjs.com) to compare different versions of code I found out a couple of things:

- **Consistency**: I noticed that the same benchmark tests were returning different results every time they executed. If they were re-run consecutively, I would get more operations per second on each benchmark. I believe the reason may be related to the v8 engine warming up and optimizing the code the more it ran, since if I let some time to "cool off" the operations per second for each test would decrease. These ambiguous results meant having to repeat tests to ensure some consistency.
- **Reliable Execution**: Occasionally I made changes to the benchmarked code and would overlook that it was not executing correctly, further compounding the issue of making the results unreliable.
Collaborator commented:
This should be **Unreliable Execution**. "Reliable Execution" isn't a problem 😉


Test synchronous code [example](examples/array-iteration.js)
Test asynchronous code [example](examples/async-example.js)
Write manual test sync/asynchronous code [example](examples/manual-tests.js)
Collaborator commented:
Can we put a bullet on these please? They currently get squished to a single line.

@paulmolluzzo (Collaborator) left a comment:
This is awesome! 🎉

Thanks for putting up with my comments on the README. 🙊

@acatl acatl merged commit a704568 into ViacomInc:master Jan 31, 2018
@acatl acatl deleted the bench-trail branch January 31, 2018 15:07
victorsingh pushed a commit to victorsingh/data-point that referenced this pull request Feb 13, 2018