Here are some recent and important revisions. Complete list of results.
Most recent pystats on main (0ac40ac)
| date | fork/ref | hash/flags | vs. 3.12.6 | vs. 3.13.0rc2 | vs. base |
|---|---|---|---|---|---|
| 2024-12-19 | python/39e69a7cd54d44c9061d | 39e69a7 | 1.102x | 1.058x | |
| 2024-12-19 | python/39e69a7cd54d44c9061d | 39e69a7 (NOGIL) | 1.143x | 1.172x | 1.216x |
| 2024-12-18 | python/f802c8bf872ab882d305 | f802c8b (NOGIL) | 1.181x | 1.208x | 1.251x |
| 2024-12-18 | python/f802c8bf872ab882d305 | f802c8b | 1.107x | 1.064x | |
| 2024-12-18 | python/b92f101d0f19a1df3205 | b92f101 (NOGIL) | 1.163x | 1.192x | 1.236x |
| 2024-12-18 | python/b92f101d0f19a1df3205 | b92f101 | 1.105x | 1.064x | |
| 2024-12-16 | python/cfeaa992ba9bad9be268 | cfeaa99 | 1.118x | 1.072x | |
| 2024-12-16 | python/cfeaa992ba9bad9be268 | cfeaa99 (NOGIL) | 1.158x | 1.186x | 1.240x |
| date | fork/ref | hash/flags | vs. 3.12.6 | vs. 3.13.0rc2 | vs. base |
|---|---|---|---|---|---|
| 2024-12-20 | Yhg1s/optimise_recursive_c | ddb794a (NOGIL) | 1.185x | 1.212x | 1.003x |
| 2024-12-20 | python/78ffba4221dcb2e39fd5 | 78ffba4 (NOGIL) | 1.188x | 1.215x | |
| 2024-12-20 | mpage/gh_115999_load_attr_ | b868363 | 1.094x | 1.055x | 1.008x |
| 2024-12-20 | mpage/gh_115999_load_attr_ | 1b787b3 (NOGIL) | 1.086x | 1.115x | 1.122x |
| 2024-12-19 | mpage/gh_115999_load_attr_ | 3876bc7 (NOGIL) | 1.091x | 1.120x | 1.116x |
| 2024-12-19 | mpage/gh_115999_load_attr_ | 3876bc7 | 1.094x | 1.055x | 1.008x |
| 2024-12-19 | python/39e69a7cd54d44c9061d | 39e69a7 | 1.084x | 1.046x | |
| 2024-12-19 | python/39e69a7cd54d44c9061d | 39e69a7 (NOGIL) | 1.189x | 1.215x | 1.246x |
| 2024-12-18 | python/f802c8bf872ab882d305 | f802c8b (NOGIL) | 1.218x | 1.243x | 1.266x |
| 2024-12-18 | python/f802c8bf872ab882d305 | f802c8b | 1.076x | 1.037x | |
| 2024-12-18 | nascheme/gh_115999_specialize | 9015a3f | 1.082x | 1.043x | 1.000x |
| 2024-12-18 | nascheme/gh_115999_specialize | 9015a3f (NOGIL) | 1.192x | 1.218x | 1.023x |
| 2024-12-18 | python/b92f101d0f19a1df3205 | b92f101 (NOGIL) | 1.214x | 1.240x | 1.269x |
| 2024-12-18 | python/b92f101d0f19a1df3205 | b92f101 | 1.085x | 1.046x | |
| 2024-12-17 | mpage/gh_115999_specialize | 40f5577 (NOGIL) | 1.189x | 1.215x | 1.027x |
| 2024-12-17 | python/329165639f9ac00ba64f | 3291656 (NOGIL) | 1.211x | 1.236x | 1.265x |
| 2024-12-17 | python/329165639f9ac00ba64f | 3291656 | 1.082x | 1.043x | |
| 2024-12-19 | corona10/gh_115999_BINARY_SUB | 47b80b4 (NOGIL) | 1.221x | 1.246x | 1.012x |
| 2024-12-19 | corona10/gh_115999_BINARY_SUB | 47b80b4 | 1.062x | 1.024x | 1.003x |
| 2024-12-18 | corona10/gh_115999_BINARY_SUB | 6ef74ac | 1.060x | 1.022x | 1.005x |
| 2024-12-18 | corona10/gh_115999_BINARY_SUB | 6ef74ac (NOGIL) | 1.222x | 1.247x | 1.011x |
| 2024-12-17 | nascheme/gh_115999_specialize | 699f4e9 | 1.079x | 1.040x | 1.005x |
| 2024-12-17 | nascheme/gh_115999_specialize | 699f4e9 (NOGIL) | 1.201x | 1.227x | 1.017x |
| 2024-12-17 | corona10/gh_115999_BINARY_SUB | 3aa9426 | 1.061x | 1.022x | 1.004x |
| 2024-12-17 | corona10/gh_115999_BINARY_SUB | 3aa9426 (NOGIL) | 1.226x | 1.250x | 1.006x |
| 2024-12-16 | python/cfeaa992ba9bad9be268 | cfeaa99 | 1.088x | 1.049x | |
| 2024-12-16 | python/cfeaa992ba9bad9be268 | cfeaa99 (NOGIL) | 1.212x | 1.237x | 1.269x |
| 2024-12-13 | nascheme/gh_115999_specialize | 4c484ab | 1.078x | 1.040x | 1.006x |
\* indicates that the exact same version of pyperformance was not used.
Each ratio is the improvement of the geometric mean of the key merged benchmarks, computed with `pyperf compare_to`. The results have a resolution of 0.01 (1%).
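As a rough local equivalent, a comparison of two pyperformance result files can be produced like this (the file names below are placeholders, not files published by this repository):

```sh
# Compare a branch's pyperformance results against a baseline run.
# "baseline.json" and "my_branch.json" are placeholder file names.
python -m pyperf compare_to baseline.json my_branch.json --table
```

The geometric mean that `pyperf` prints at the end of its table is the kind of figure reported in the columns above, restricted there to the key benchmarks.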
Visit the benchmark action and click the "Run Workflow" button.
The available parameters are:

- `fork`: The fork of CPython to benchmark. If benchmarking a pull request, this would normally be your GitHub username.
- `ref`: The branch, tag or commit SHA to benchmark. If a SHA, it must be the full SHA, since finding it by a prefix is not supported.
- `machine`: The machine to run on. One of `linux-amd64` (default), `windows-amd64`, `darwin-arm64` or `all`.
- `benchmark_base`: If checked, the base of the selected branch will also be benchmarked. The base is determined by running `git merge-base upstream/main $ref` (see the sketch after this list).
- `pystats`: If checked, collect the pystats from running the benchmarks.
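A minimal sketch of how that merge base is found, assuming a local CPython clone with the canonical repository configured as the `upstream` remote and `my_branch` standing in for the ref being benchmarked:

```sh
# Fetch upstream's main, then report the common ancestor of main and the
# branch being benchmarked ("my_branch" is a placeholder ref).
git fetch upstream main
git merge-base upstream/main my_branch
```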
To watch the progress of the benchmark, select it from the benchmark action page. It may be canceled from there as well. To show only your benchmark workflows, select your GitHub ID from the "Actor" dropdown.
When the benchmarking is complete, the results are published to this repository and will appear in the complete table. Each set of benchmarks will have:

- The raw `.json` results from pyperformance.
- Comparisons against important reference releases, as well as the merge base of the branch if `benchmark_base` was selected. These include:
  - A markdown table produced by `pyperf compare_to`.
  - A set of "violin" plots showing the distribution of results for each benchmark.
The most convenient way to get results locally is to clone this repo and `git pull` from it.
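For example (the URL below is a placeholder for this repository's actual clone URL):

```sh
# Clone the results repository once, then pull to pick up newly published
# results. The URL is a placeholder.
git clone https://github.com/<org>/<results-repo>.git
cd <results-repo>
git pull

# Optionally inspect one of the raw pyperformance result files locally
# (the path is a placeholder).
python -m pyperf stats path/to/result.json
```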
To automate benchmarking runs, it may be more convenient to use the GitHub CLI. Once you have `gh` installed and configured, you can run benchmarks by cloning this repository and then, from inside it, running:
`gh workflow run benchmark.yml -f fork=me -f ref=my_branch`
Any of the parameters described above are available at the command line using the `-f key=value` syntax.
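For example, a run that also selects every machine and benchmarks the merge base might look like this (`me` and `my_branch` remain placeholders for your fork and ref):

```sh
# Benchmark a branch on all machines and also benchmark its merge base.
gh workflow run benchmark.yml \
  -f fork=me \
  -f ref=my_branch \
  -f machine=all \
  -f benchmark_base=true
```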
To collect Linux perf sampling profile data for a benchmarking run, run the `_benchmark` action and check the `perf` checkbox. Follow this with a run of the `_generate` action to regenerate the plots.
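If driving this from the GitHub CLI as well, something along the following lines should work. Note that the workflow file names (`_benchmark.yml`, `_generate.yml`) and the boolean `perf` input are assumptions based on the action names above; verify them against the repository's workflow files first.

```sh
# Assumed workflow file names and "perf" input -- check .github/workflows
# before relying on this sketch. "me" and "my_branch" are placeholders.
gh workflow run _benchmark.yml -f fork=me -f ref=my_branch -f perf=true
gh workflow run _generate.yml
```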
This repo is licensed under the BSD 3-Clause License, as found in the LICENSE file.