Benchmarking #261
Conversation
Excellent start 💯
Yeah, definitely 👍 Are you planning on adding similar tooling in this PR too? (Styled components?)
Love this <3 This will help us make decisions backed by meaningful data
I'd think we need to benchmark the functions that are likely to be called the most often
When we compare with other libraries, does it make sense to only compare "similar" API's? For example, the
No strong preference. Would they help keep the benchmarks more readable? If so, I'm all for it ✨
Works for me 👌
I'm happy for it to be in the repo for now, unless there's a strong reason for it to be abstracted
This pull request adds benchmarking abilities to the project using Continuous Benchmark with BenchmarkJS.
Highlights
Benchmarks run in *.benchmark.ts files, similar to how tests run in *.test.ts files. Performance regressions that exceed 25% above the current threshold will result in an automatic warning added as a comment to a PR. Performance regressions that double the current threshold trigger a workflow failure.
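As a sketch of how those two thresholds could be wired up in CI, the fragment below assumes the benchmark-action/github-action-benchmark action and its alert-threshold / fail-threshold inputs; it is illustrative, not the exact configuration in this PR:

```yaml
# Hypothetical workflow step (not this PR's actual configuration).
# alert-threshold: comment on the PR at 125% of the stored baseline
#   (i.e. a 25% regression).
# fail-threshold: fail the workflow at 200% of the baseline
#   (i.e. a doubled regression).
- name: Store and compare benchmark results
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'benchmarkjs'
    output-file-path: output.txt
    github-token: ${{ secrets.GITHUB_TOKEN }}
    comment-on-alert: true
    alert-threshold: '125%'
    fail-threshold: '200%'
```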
Usage
Example (packages/core/benchmarks/index.benchmark.ts):
Benchmarking Purposes
These benchmarks can serve at least two purposes.
First, benchmarks can compare the performance of Stitches with similar tooling. To be most meaningful, they should demonstrate the strong and weak areas of that tooling. While Stitches may already be the “fastest” ⚡, developers would likely appreciate benchmarks that show where other tooling does well and where it can still improve.
Second, benchmarks can compare the performance of internal functions, in order to measure improvements and regressions in individual pull requests and even between releases. These are especially helpful for identifying the most computationally intensive areas of the library.
Requirements
Like test scripts written to a tests directory, benchmark scripts could be written to a benchmarks directory. Initially, these scripts would benchmark internal logic, again mimicking the tests.
Questions
Should we add helpers like describe() and test()?
Should benchmark files be named *.benchmark.ts?
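To make the describe()/test() question concrete, here is one hypothetical shape such helpers could take: plain wrappers around a naive timing loop. The names and the timing approach are assumptions for illustration only, not this PR's API (the PR uses Benchmark.js, which handles sampling and statistics itself):

```typescript
// Hypothetical describe()/bench() helpers, an illustrative sketch only.
// Real benchmarks would delegate timing to Benchmark.js rather than
// using this crude fixed-iteration loop.
type BenchCase = { name: string; fn: () => void };

function describe(
  label: string,
  register: (bench: (name: string, fn: () => void) => void) => void
): string[] {
  const cases: BenchCase[] = [];
  register((name, fn) => cases.push({ name, fn }));

  const report: string[] = [];
  for (const { name, fn } of cases) {
    const iterations = 10_000;
    const start = process.hrtime.bigint();
    for (let i = 0; i < iterations; i++) fn();
    const perOpNs = Number(process.hrtime.bigint() - start) / iterations;
    report.push(`${label} > ${name}: ${perOpNs.toFixed(1)} ns/op`);
  }
  report.forEach((line) => console.log(line));
  return report;
}

describe('array search', (bench) => {
  bench('indexOf', () => { [1, 2, 3].indexOf(2); });
  bench('includes', () => { [1, 2, 3].includes(2); });
});
```

One upside of wrappers like these is grouping related cases under one label, much like test suites; the downside raised in the conversation is whether the extra layer actually keeps benchmarks more readable.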