
[wasm] New jiterpreter backbranch scan pass #98530

Merged · kg merged 4 commits into dotnet:main on Feb 19, 2024

Conversation

@kg kg (Member) commented Feb 15, 2024

Interpreted SequenceEqual got slower in WASM (according to browser-bench) after #96315, which was my fault: the back-branch implementation in the jiterpreter is too complicated. It's still going to be too complicated, but this PR should fix the regression. Specifically, the interpreter and the jiterpreter were disagreeing slightly on where a back-branch was pointing; the disagreement would not cause a crash or incorrect behavior, but it would cause the jiterpreter to skip compiling the backward branch.

  • Remove the C code responsible for generating the backward_branch_offsets table stored in InterpMethod that was used by the jiterpreter.
  • Add a new pre-pass to the jiterpreter that does a quick opcode scan to find back-branch targets on the fly when a trace is actually being compiled (a rough sketch of the idea follows after the summary below).
  • Reimplement interpreter branch decoding in a central place in the jiterpreter, and generalize it based on opcode info tables instead of a bunch of hand-written decode logic.
  • Remove some dead options and dead code from the jiterpreter TypeScript.
  • Fix an enum that got out of sync.
  • Minor debug logging improvements.

Overall this should ensure that the jiterpreter is able to handle back-branches for any branch opcode it supports (it still doesn't support some of the more obscure branch opcodes, I think). I verified that it no longer fails to generate back-branches for the benchmarks in question, but I can't easily verify whether this might cause regressions in other traces, since the newly-visible back-branches could make some traces grow too large.
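
To make the pre-pass concrete, here is a minimal sketch of the idea, assuming a helper-based shape rather than the actual dotnet/runtime code (isBranch, getOpcodeSizeU16, and getBranchDisplacement are hypothetical stand-ins for whatever the jiterpreter uses internally):

```typescript
// Hedged sketch of the back-branch scan pre-pass; an assumed shape, not the real
// implementation. It does one linear walk over the method's interp opcode stream and
// records every target of a backward branch before the trace generator runs.
function scanForBackBranchTargets(
    opcodes: Uint16Array,                                             // the method's interp opcode stream
    isBranch: (op: number) => boolean,                                // does this opcode branch?
    getOpcodeSizeU16: (op: number) => number,                         // opcode size in u16 slots
    getBranchDisplacement: (ops: Uint16Array, ip: number) => number   // signed displacement in u16 slots
): Set<number> {
    const targets = new Set<number>();
    let ip = 0;
    while (ip < opcodes.length) {
        const op = opcodes[ip];
        if (isBranch(op)) {
            const target = ip + getBranchDisplacement(opcodes, ip);
            // A displacement that lands at or before the branch itself is a back-branch,
            // so remember its target; the trace generator can then check membership in
            // this set instead of consulting a table precomputed on the C side.
            if (target <= ip)
                targets.add(target);
        }
        ip += getOpcodeSizeU16(op); // advance to the next opcode
    }
    return targets;
}
```

Doing this on the fly, only when a trace is actually being compiled, is what allows the backward_branch_offsets table to be dropped from InterpMethod, per the first bullet above.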

kg added 3 commits February 15, 2024 14:06
…jiterpreter-trace-generator as a prepass

Unify branch decoding to use mono_jiterp_get_opcode_info
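
The "Unify branch decoding" commit title above refers to routing all branch decoding through an opcode-info query instead of per-opcode hand-written logic. A minimal sketch of that shape, assuming hypothetical OpcodeInfoType/OperandKind values and operand encodings (only the mono_jiterp_get_opcode_info name comes from the commit title itself):

```typescript
// Hedged sketch only: one central routine that classifies an opcode via an opcode-info
// table and decodes its branch displacement. The enum members and operand encodings
// below are illustrative assumptions, not quoted from the runtime.
enum OpcodeInfoType { Length = 0, OperandKind = 1 }
enum OperandKind { NotABranch = 0, ShortBranch = 1, LongBranch = 2 }

function getBranchDisplacement(
    opcodes: Uint16Array,
    ip: number,
    // e.g. a cwraps binding of mono_jiterp_get_opcode_info, injected here to keep the sketch self-contained
    getOpcodeInfo: (opcode: number, infoType: OpcodeInfoType) => number
): number | undefined {
    const op = opcodes[ip];
    switch (getOpcodeInfo(op, OpcodeInfoType.OperandKind)) {
        case OperandKind.ShortBranch:
            // signed 16-bit displacement stored in the first operand slot
            return (opcodes[ip + 1] << 16) >> 16;
        case OperandKind.LongBranch:
            // signed 32-bit displacement assembled from two u16 operand slots
            return (opcodes[ip + 1] | (opcodes[ip + 2] << 16)) | 0;
        default:
            return undefined; // not a branch opcode (or not one this sketch decodes)
    }
}
```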
@kg kg added the arch-wasm WebAssembly architecture label Feb 15, 2024
@ghost ghost commented Feb 15, 2024

Tagging subscribers to 'arch-wasm': @lewing
See info in area-owners.md if you want to be subscribed.


@kg kg (Member, Author) commented Feb 16, 2024

The System.Text.Json (S.T.J) tests in browser-on-windows are OOMing near the end of the run, which is probably not caused by this PR, but I'll look into it anyway. My guess is that the test suite is just really heavy and has some sort of leak, since the failure happens near the end of the run and doesn't happen on Linux.

@kg kg marked this pull request as ready for review February 16, 2024 07:25
@kg kg (Member, Author) commented Feb 16, 2024

The STJ tests do look like they have some sort of leak: they hit ~1 GB of browser tab memory usage by the time they reach the 3 MB write tests, and the tab process peaks at around 1.2 GB. But that's still not remotely up against the normal limits of a browser tab process on a modern x64 machine, and I don't think I caused that to regress.

My guess is that something else is eating up memory on the build VMs, probably the msbuild/roslyn compiler server instances.

@BrzVlad BrzVlad (Member) left a comment


Any idea what the actual problem with the previous offset computation was?

@kg kg (Member, Author) commented Feb 19, 2024

Any idea what the actual problem with the previous offset computation was?

It looks like the offsets it calculated were specifically wrong when the branch target was right next to a jiterpreter entry point.
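
Purely as an illustration of that failure mode (the offsets below are made up, and the set-membership check is only an assumed shape of the old backward_branch_offsets lookup): a small disagreement over the target offset makes the lookup miss, so the jiterpreter silently declines to compile the back-branch rather than crashing or misbehaving.

```typescript
// Hypothetical numbers: the old C pass recorded one offset for the loop head, but when
// a jiterpreter entry point sat right at the loop head, the trace generator computed a
// slightly different target, so the lookup missed and the branch was treated as an
// unsupported back-branch (no crash, just a trace that bails instead of looping).
const backwardBranchOffsets = new Set<number>([12]);  // offset recorded ahead of time
const branchTargetSeenByTraceGenerator = 14;          // target computed while compiling the trace

const backBranchWasSkipped = !backwardBranchOffsets.has(branchTargetSeenByTraceGenerator);
console.log(backBranchWasSkipped); // true: this is the regression the new scan pass fixes
```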

@kg kg merged commit 85679d2 into dotnet:main Feb 19, 2024
111 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Mar 21, 2024