Add UI for comparing backend results #2010
Conversation
Exciting!
Here are a few comments/questions
});
}
const record = benchmarkMap.get(key);
if (comparison.backend === "llvm") {
We normalize the case here somewhere, I guess?
We don't; the name comes from here.
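If normalization ever did become necessary, a defensive check could look like this minimal sketch (the `Comparison` shape and the `isLlvm` helper are hypothetical illustrations, not code from this PR):

```ts
// Hypothetical sketch, not the PR's actual code: the `Comparison` shape
// is assumed here for illustration.
interface Comparison {
  backend: string;
}

function isLlvm(comparison: Comparison): boolean {
  // Normalize casing so "LLVM", "Llvm" and "llvm" would all match.
  return comparison.backend.toLowerCase() === "llvm";
}
```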
Thanks for the comments, I went through them and resolved them.
There are a few cases where we have before/after assumptions on the compare page, and I don't know if we need to do anything here now:
These are all useful in understanding where the gains/losses are, but it's likely a bit of work, and maybe it's fine not to do anything yet.

Probably not a big deal, but when using this we'll also need to remember to use the "after" commit in the compare UI when doing actual single-commit comparisons on rustc-perf. Otherwise there won't be results for both backends on the baseline commit, until we profile Cranelift on each merge.

There's also an interesting case that may be more common for the foreseeable future: are broken benchmarks only shown when the two commits are different? I say "common" because cg_clif currently doesn't support LTO, so your recent try build shows broken benchmarks here but not on the single-commit comparison.

(At some point some of the exa benchmarks were showing as -100% for me, but I can't reproduce this anymore: after some reloads, the failed benchmark rows seem to be gone, so it was likely some transient caching issue.)
Yes, it seems so, because it only shows "newly broken" benchmarks (rustc-perf/site/src/comparison.rs, line 780 in 72658ac).
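To illustrate the behaviour being discussed (a hedged TypeScript sketch of the idea; the actual logic lives in Rust in site/src/comparison.rs, and the types here are assumptions):

```ts
// Hypothetical sketch of a "newly broken" filter: a benchmark is reported
// only if it errors in the new run but not in the old one. When both sides
// of the comparison come from the same commit, the error sets are identical,
// so nothing shows up as newly broken.
interface RunResult {
  error?: string;
}

function isNewlyBroken(before?: RunResult, after?: RunResult): boolean {
  const brokenBefore = before?.error !== undefined;
  const brokenAfter = after?.error !== undefined;
  return brokenAfter && !brokenBefore;
}
```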
Tbh, I consider this functionality to be a hack useful for the specific use case of comparing the LLVM and Cranelift backends, rather than something that works universally in all situations. So things like detailed results and charts are out of scope, IMO.
These shouldn't have data, since they're broken benchmarks in this run IIUC (and incidentally, toggling "Display raw data" filters these rows out). I'm not sure exactly when it happens, but for historical purposes, this is how it looks when it does happen:
🚀
This PR adds a new comparison mode to the compile result compare page. This mode allows users to compare results between the LLVM and Cranelift backends when the same commit is compared against itself.
This was IMO the easiest way to introduce this comparison mode. Getting it working on the backend would require non-trivial changes, and while this is arguably a bit of a hack, I think it should work fine for the use case we need it for. In the future, this could be generalized to multiple backends, but for now it was not worth the hassle.
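As a rough illustration of how such a client-side pairing can work (a sketch under assumed names and data shapes, not this PR's actual implementation; it echoes the `benchmarkMap.get(key)` fragment quoted earlier): when both commits are identical, results can be keyed by test case and split by backend, so the LLVM result fills the "before" column and the Cranelift result the "after" column.

```ts
// Hypothetical sketch: pair results for the same test case by backend.
// All names and shapes here are assumptions for illustration.
interface TestResult {
  benchmark: string;
  profile: string;
  scenario: string;
  backend: "llvm" | "cranelift";
  value: number;
}

interface BackendPair {
  llvm?: number;      // shown as the "before" side
  cranelift?: number; // shown as the "after" side
}

function pairByBackend(results: TestResult[]): Map<string, BackendPair> {
  const map = new Map<string, BackendPair>();
  for (const result of results) {
    const key = `${result.benchmark};${result.profile};${result.scenario}`;
    const entry = map.get(key) ?? {};
    if (result.backend === "llvm") {
      entry.llvm = result.value;
    } else {
      entry.cranelift = result.value;
    }
    map.set(key, entry);
  }
  return map;
}
```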
How to test:
$ rustup component add rustc-codegen-cranelift-preview --toolchain nightly
$ cargo run --bin collector -- bench_local `rustup +nightly which rustc` --include serde --profiles Debug --backends Llvm,Cranelift
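Once the benchmarks finish, the results can presumably be inspected by running the site locally (rustc-perf's `site` binary) and opening the compare page with the same artifact selected on both sides, which is what activates the new backend comparison mode described above.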