Benchmarking

In order to achieve a target block time, for example 6 seconds per block, only a limited amount of computation can be executed within each block. When writing a pallet function, the developer is responsible for accounting for its computational cost, which is expressed as a weight. The process of determining that cost is called benchmarking.
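As a rough illustration, here is a minimal sketch of what a FRAME benchmark can look like using the frame_benchmarking v2 macros. The pallet, extrinsic name (set_value), storage item (Value), and mock module are hypothetical placeholders rather than code from this repository.

```rust
//! benchmarking.rs for a hypothetical pallet exposing a `set_value` extrinsic.
#![cfg(feature = "runtime-benchmarks")]

use super::*;
use frame_benchmarking::v2::*;
use frame_system::RawOrigin;

#[benchmarks]
mod benchmarks {
    use super::*;

    // Each #[benchmark] function measures one extrinsic: the framework runs
    // the #[extrinsic_call] repeatedly and derives a weight from the measured
    // execution time and database accesses.
    #[benchmark]
    fn set_value() {
        let caller: T::AccountId = whitelisted_caller();

        #[extrinsic_call]
        set_value(RawOrigin::Signed(caller), 42u32);

        // Verify the call took the expected (worst-case) path, so the
        // measurement is not of an early-exit error case.
        assert_eq!(Value::<T>::get(), Some(42u32));
    }

    impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
}
```

Benchmarks can also take complexity parameters (for example n: Linear<1, 1_000>) so that the generated weight scales with the size of the input.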

Release Benchmarks

  • Benchmarks are re-run for each release using Option 2: "Referenced Hardware" below
  • Release benchmarks are not merged back into main
  • Benchmarks should still be run on your branches as needed so that PRs highlight changes in weights

Running Benchmarks

There are two options for running benchmarks:

  • Locally on a developer laptop
  • Remotely on referenced hardware in the cloud

1. Locally

❗ DO NOT commit weights yourself!

make benchmarks

It can be helpful to temporarily lower the number of iterations when testing benchmarks locally to decrease the time it takes to run them.
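One way to do that, if you invoke the Substrate benchmarking CLI directly instead of the make target, is to pass low --steps and --repeat values. The binary name, pallet, and exact flag set below are assumptions that vary by project and toolchain version (check the Makefile for the real invocation), and weights produced this way must never be committed.

```sh
# Hypothetical quick local sanity run -- NOT for generating committable weights.
# Binary name, pallet, and required flags are assumptions; additional flags
# (e.g. --chain or --output) may be needed. See the Makefile for how
# `make benchmarks` actually invokes the benchmark CLI.
./target/release/frequency benchmark pallet \
  --pallet pallet_msa \
  --extrinsic "*" \
  --steps 5 \
  --repeat 2
```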

2. Referenced Hardware (GitHub Actions)


❗ ATTENTION: DO NOT commit weights yourself! They will be auto-committed by the CI job upon completion.


To trigger running benchmarks on referenced hardware in the cloud:

  1. Push your branch to GitHub.
  2. Request that a core developer run the benchmarks.
  3. A core developer will need to go to the Benchmarks Run Action:
    • Select Run Workflow
    • Input the branch name
    • Enter all or specify which pallets to benchmark (a command-line alternative is sketched after this list)
  4. Wait for the weights commit to be added to the branch. If a PR is open, the PR checks will then run against that commit as well.
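For core developers who prefer the GitHub CLI over the web UI, the trigger might look like the following. The workflow file name and input name are assumptions (check .github/workflows/ in the repository for the real ones); the web UI steps above remain the documented path.

```sh
# Hypothetical gh CLI trigger for the benchmark workflow -- the workflow file
# name and the `pallets` input are assumptions; verify them in .github/workflows/.
gh workflow run run-benchmarks.yml --ref my-feature-branch -f pallets=all
```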

Resources for Understanding and Writing Benchmarks