Add a Regularly Updated Collection of Benchmark Results #334
ericsnowcurrently started this conversation in Ideas
Replies: 3 comments
-
Thanks for writing down a clear plan! I just have one request -- can we make the data available both as JSON and in markdown table form? The latter is easier to review; the former is handy if we want to apply additional tools to it (even …
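As a rough illustration of the markdown half of that request, here is a minimal sketch using pyperf's Python API to render a results file as a table (the input filename is hypothetical, and the column choice is just one possibility):

```python
# Sketch: render a pyperf results file as a markdown table.
import pyperf

suite = pyperf.BenchmarkSuite.load(
    "cpython-3.10.2-3571d41577-linux-cc12888f9b4b.json")

print("| benchmark | mean (s) | stddev (s) |")
print("| --- | --- | --- |")
for bench in suite.get_benchmarks():
    print(f"| {bench.get_name()} | {bench.mean():.6f} | {bench.stdev():.6f} |")
```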
-
@mdboom, this is likely informative. Most notably, the work to support cron jobs for benchmarking won't be substantial, but it isn't trivial either.
-
A public collection of benchmark results would be valuable for the faster-cpython project, as well as the Python community as a whole. The results would need to be produced regularly and uploaded to a publicly accessible location. This would be independent of presentation (e.g. codespeed, which is used by speed.python.org), though some level of interoperability between the collection and codespeed would be nice.
My plan is to first run the benchmarks manually and upload them to this repo. After that I'll look into setting up a cron job and then maybe some further tooling to make life easier. We'll also work on expanding our platform/build matrix.
Key Requirements
Plan
phase 1: manual runs/uploads
Run benchmarks manually and upload the results. Automation will likely be in place well before all of the manual runs below have been done.
Only our dedicated Linux benchmarking host will be used.
Releases
(Also, all future 3.8, 3.9, and 3.10 bug fix releases.)
phase 2: automation
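The details here are still open; as a rough sketch (all paths, names, and the schedule below are hypothetical, not decided), the automated phase could be a cron-driven script that runs the suite and pushes the results to this repo:

```python
#!/usr/bin/env python3
# Hypothetical nightly benchmark job, e.g. driven by a crontab entry like:
#   0 3 * * * /srv/bench/run_nightly.py
import subprocess
from pathlib import Path

REPO = Path("/srv/bench/ideas")        # a clone of faster-cpython/ideas (hypothetical path)
RESULTS = REPO / "benchmark-results"

def main():
    out = RESULTS / "latest-run.json"  # a real name would follow the scheme under "Uploads"
    # Run the pyperformance suite (it benchmarks the Python that runs it by default).
    subprocess.run(["pyperformance", "run", "-o", str(out)], check=True)
    # Commit and push the new results file.
    subprocess.run(["git", "-C", str(REPO), "add", str(out)], check=True)
    subprocess.run(["git", "-C", str(REPO), "commit", "-m", "Add benchmark results"],
                   check=True)
    subprocess.run(["git", "-C", str(REPO), "push"], check=True)

if __name__ == "__main__":
    main()
```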
phase 3
Uploads
location: https://github.com/faster-cpython/ideas/tree/main/benchmark-results/
For now, we will only upload the JSON file produced by pyperf. The filename will be unique and meaningful to humans:
<impl>-<release>-<commit>-<host>-<compat ID>.json
where <compat ID> is a hash derived from the metadata found in the JSON file. Examples:
/benchmark-results/cpython-main-ada6b4cf8d-linux-cc12888f9b4b.json
/benchmark-results/cpython-3.11a5-838b51a079-linux-cc12888f9b4b.json
/benchmark-results/cpython-3.10.2-3571d41577-linux-cc12888f9b4b.json
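The exact derivation of the compat ID is left open above; the sketch below only illustrates the shape of the scheme, and the metadata fields it hashes are illustrative guesses, not a decided format:

```python
# Sketch: assemble a results filename of the proposed form.
import hashlib
import json

def compat_id(metadata, digest_len=12):
    # Hash a stable subset of the pyperf run metadata (field choice is hypothetical).
    keys = ("platform", "cpu_model_name", "python_compiler")
    blob = json.dumps({k: metadata.get(k) for k in keys}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()[:digest_len]

def results_filename(impl, release, commit, host, metadata):
    return f"{impl}-{release}-{commit[:10]}-{host}-{compat_id(metadata)}.json"

with open("results.json") as f:        # a pyperf suite file
    meta = json.load(f)["metadata"]
print(results_filename("cpython", "3.10.2", "3571d41577", "linux", meta))
```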
Open Questions