[Merged by Bors] - Only run benchmarks on PRs when a label is set #2114
Conversation
Nice! This will definitely speed up our CI process by a lot :)
Codecov Report

```
@@            Coverage Diff             @@
##             main    #2114      +/-   ##
==========================================
- Coverage   43.66%   43.62%   -0.04%
==========================================
  Files         217      217
  Lines       19644    19673      +29
==========================================
+ Hits         8578     8583       +5
- Misses      11066    11090      +24
==========================================
```

Continue to review full report at Codecov.
I saw the test conformance comment and it occurred to me that it would be good to conditionally run parts of the CI only when there have been changes on

True. I think there is some discussion needed on this, as some changes outside of the Rust crates probably also have to be tested. For example, if we update the 262 submodule, we should probably run CI normally.

The thing is that a change to, for example, the Unicode dependency could make some seemingly unrelated things fail, and performance could change almost anywhere. So it's difficult to know when it makes sense or not.

That is true, but I would argue that those cases should not be the norm, and we still have the full benchmarks on main. If any change unexpectedly affects performance, we can still track it down to the commit. IMO the benefits of getting faster feedback on PRs outweigh the negatives.
I’m assuming this works at the point when you add the label?
Yes, when you add the label it gets triggered for the first time. Any push after that also triggers them, as before.
Looks good to me :)
bors r+
This changes our CI benchmarks to only run when the label `run-benchmark` is set on the PR. The motivation is to reduce the time spent waiting on benchmarks while working on PRs. It also saves some CI minutes, which is always good. When we spot changes that we suspect impact performance, we can add the `run-benchmark` label to the PR and the benchmarks will run.
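The label gating described above can be sketched as a GitHub Actions workflow. This is an illustrative sketch only: the file name, job name, and steps are assumptions, not the exact workflow in this repository, but the `labeled` trigger type and the `contains(...)` label check are the standard Actions mechanisms for this pattern.

```yaml
# .github/workflows/benchmark.yml (hypothetical file name)
name: Benchmarks

on:
  pull_request:
    # `labeled` is included so adding the label triggers a first run;
    # the other types re-run benchmarks on subsequent pushes.
    types: [opened, synchronize, reopened, labeled]

jobs:
  benchmark:
    # Skip the whole job unless the PR carries the `run-benchmark` label.
    if: contains(github.event.pull_request.labels.*.name, 'run-benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: cargo bench
```

With the job-level `if`, unlabeled PRs show the benchmark job as skipped rather than failing, so it does not block merges.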
Pull request successfully merged into main. Build succeeded.
This changes the trigger type for PR benchmarks back to the default (`opened`, `synchronize`, `reopened`). As part of #2114 I added the `labeled` trigger type. This caused the benchmarks to run whenever the `run-benchmark` label was present and another label was added. For example, in #2116 I added the `run-benchmark` label while creating the PR. The benchmarks were then triggered six times: once for the PR creation (`opened`) and once for each of the five labels that I initially added to the PR. The only drawback is that the benchmarks are not triggered when we just add the label, but unfortunately I don't have a clever idea on how to achieve that right now. We will have to add the label and then trigger the run via a `synchronize` event (a push).
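The revert boils down to removing `labeled` from the trigger list while keeping the label check on the job. Again a sketch under the same assumptions as above (file and job names are hypothetical):

```yaml
on:
  pull_request:
    # Back to the default trigger types; `labeled` is removed so that
    # adding unrelated labels no longer re-runs the benchmarks.
    types: [opened, synchronize, reopened]

jobs:
  benchmark:
    # The label gate itself stays in place: a labeled PR still needs a
    # push (`synchronize`) to kick off the first benchmark run.
    if: contains(github.event.pull_request.labels.*.name, 'run-benchmark')
```

The trade-off is exactly the one described above: fewer redundant runs at the cost of needing a push after labeling.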