Optimize execution loop #293

Merged: 13 commits into main from insn-cache on May 23, 2024
Conversation

karmacoma-eth (Collaborator)

Completely driven by flamegraphs: I kept squeezing until the fetch-decode-execute loop almost disappeared from the profiles (which are now dominated by other things, like calls to resolve_address_alias).

Tested on morpho blue, 24.8% faster:

    before: check_fee(bytes4,address) (paths: 38, time: 21.82s (paths: 21.82s, models: 0.00s), bounds: [])
    after:  check_fee(bytes4,address) (paths: 39, time: 16.41s (paths: 16.41s, models: 0.00s), bounds: [])

Avoids duplicating work:

- loading pgm[pc] to know the current instruction (a chunk lookup + indexing into the code bytevec)
- loading pgm[pc+1:pc+1+operand_size] for PUSHn instructions (every time the instruction is executed, which can be expensive)
- loading pgm[pc] again to know the next_pc

Instead, we do this eagerly once and remember the result.
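For illustration, here is a minimal sketch of the decode-once-and-memoize idea (the names Instruction and DecodedProgram are hypothetical, not the actual halmos classes):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Instruction:
        opcode: int
        operand: bytes  # PUSHn immediate bytes, empty for other opcodes
        pc: int
        next_pc: int

    class DecodedProgram:
        """Decode each instruction at most once, memoized by pc."""

        def __init__(self, code: bytes):
            self.code = code
            self._cache: dict[int, Instruction] = {}

        def __getitem__(self, pc: int) -> Instruction:
            insn = self._cache.get(pc)
            if insn is None:
                opcode = self.code[pc]
                # PUSH1..PUSH32 are 0x60..0x7f; operand size is opcode - 0x5f
                operand_size = opcode - 0x5F if 0x60 <= opcode <= 0x7F else 0
                operand = self.code[pc + 1 : pc + 1 + operand_size]
                insn = Instruction(opcode, operand, pc, pc + 1 + operand_size)
                self._cache[pc] = insn
            return insn

With something like this, the execution loop fetches insn = pgm[pc] once per step and then reads insn.opcode, insn.operand, and insn.next_pc from the cached object instead of re-indexing into the code bytevec.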

Plus some extra goodies:

- support for MCOPY (see the sketch after this list)
- an explicit Message.call_scheme; without it, we rendered the full deploy output in traces at high verbosity when the intent was to just display the number of bytes
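For context, MCOPY (EIP-5656) copies a region of memory to another offset. Below is a concrete, non-symbolic sketch of its semantics (not the halmos implementation, which operates on symbolic memory):

    def mcopy(memory: bytearray, dst: int, src: int, length: int) -> None:
        # EIP-5656 MCOPY: copy `length` bytes from offset `src` to offset `dst`.
        # Overlapping regions behave like memmove: the source is read first.
        if length == 0:
            return
        end = max(dst + length, src + length)
        if end > len(memory):
            memory.extend(b"\x00" * (end - len(memory)))  # zero-filled memory expansion
        memory[dst : dst + length] = bytes(memory[src : src + length])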

karmacoma-eth requested a review from daejunpark on May 22, 2024
karmacoma-eth (Collaborator, Author)

Tested on Farcaster, 11% faster:

# before
[PASS] check_Invariants_PostMigration(bytes4,address) (paths: 641, time: 195.43s (paths: 195.43s, models: 0.00s), bounds: [])

# after
[PASS] check_Invariants_PostMigration(bytes4,address) (paths: 637, time: 173.95s (paths: 173.95s, models: 0.00s), bounds: [])

Review threads (all resolved): src/halmos/sevm.py, src/halmos/utils.py, .github/workflows/test-external.yml
karmacoma-eth merged commit 337adba into main on May 23, 2024
53 checks passed
karmacoma-eth deleted the insn-cache branch on May 23, 2024