JIT: build pred lists before inlining #81000
Conversation
Move pred list building up a bit further. Note that this impacts both the root method and all inlinees, since we run a number of the initial phases on both. Since inlinees build their basic blocks and flow edges in the root compiler's memory pool, the only work required to unify the pred lists for a successful inline is to get things right at the boundaries. And for failed inlines there is no cross-referencing, so we can just let the new pred lists leak away (like we already do for the inlinee blocks). Contributes towards dotnet#80193.
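The boundary fix-up described above can be illustrated with a minimal, hypothetical model (these are not RyuJIT's actual types or helpers): each block keeps a pred list built in one pass over successor edges, and splicing an inlinee into the root flowgraph only requires patching preds at the call-site boundary, because both methods' blocks live in the same memory pool.

```cpp
#include <cassert>
#include <vector>

// Hypothetical minimal model of a flowgraph block with a pred list.
struct Block {
    int num;                       // like bbNum
    std::vector<Block*> succs;     // outgoing flow edges
    std::vector<Block*> preds;     // incoming edges, filled by BuildPredLists
};

// Build pred lists for every block in a method (root or inlinee alike):
// one pass over each block's successors.
void BuildPredLists(std::vector<Block*>& blocks) {
    for (Block* b : blocks) b->preds.clear();
    for (Block* b : blocks)
        for (Block* s : b->succs) s->preds.push_back(b);
}

// After a successful inline, only the edges at the call-site boundary need
// their preds unified; the inlinee's interior pred lists are already correct.
void SpliceInlinee(Block* callBlock, Block* retBlock,
                   Block* inlineeEntry, Block* inlineeExit) {
    callBlock->succs = {inlineeEntry};
    inlineeEntry->preds.push_back(callBlock);
    inlineeExit->succs = {retBlock};
    retBlock->preds = {inlineeExit};
}
```

For a failed inline the inlinee blocks (and their pred lists) are simply never referenced again, so nothing needs to be torn down.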
Tagging subscribers to this area: @JulieLeeMSFT, @jakobbotsch, @kunalspathak
@EgorBo PTAL. Should be zero diff because I take pains to renumber after inlining. At some point we should think about how to address the implicit dependence some phases have on bbNums.
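The "renumber after inlining" point above can be sketched as follows (a hypothetical helper, not the JIT's actual routine): once inlinee blocks are spliced into the root method's block list, reassigning dense, ordered numbers restores the invariant that later phases implicitly assume about bbNums, which is why the change can be zero-diff.

```cpp
#include <cassert>
#include <vector>

// Hypothetical block carrying only its number.
struct Block {
    int bbNum;
};

// Reassign dense, in-order numbers after inlinee blocks have been spliced in,
// so phases that implicitly key off bbNum ordering see the same numbering
// they would have seen before this change.
void RenumberBlocks(std::vector<Block*>& blocks) {
    int num = 1;
    for (Block* b : blocks) {
        b->bbNum = num++;
    }
}
```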
Hmm, crossgen is unhappy.
We were running post-phase checks on failed inlines. Not surprisingly, they found things they didn't like. We can actually bail out of the phase list once an inline fails (which it can during importation), so I added code for this too -- it might actually give us a bit of a TP improvement.
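The early bail-out described here can be sketched with a toy phase driver (illustrative names only, not RyuJIT's actual phase machinery): once an inline attempt fails, the remaining phases and their post-phase checks are skipped entirely, since the inlinee's IR will never be used.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical mini-compiler: runs phases in order and stops as soon as an
// inline attempt is marked failed, skipping later phases and their checks.
struct MiniCompiler {
    bool inlineFailed = false;
    std::vector<std::string> phasesRun;

    using Phase = std::pair<std::string, std::function<void(MiniCompiler&)>>;

    void RunPhases(const std::vector<Phase>& phases) {
        for (const auto& [name, body] : phases) {
            body(*this);
            phasesRun.push_back(name);
            if (inlineFailed) {
                // Bail: no point transforming or checking a dead inlinee.
                return;
            }
        }
    }
};
```

Skipping the post-phase checks is what both fixes the crossgen failures (the checks were flagging half-built failed-inline IR) and yields the possible throughput win.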
Diffs show a roughly 0.2% TP regression when optimizing. A bit more costly than I'd expect; let me poke at it a bit.
Looks like it is just the extra cost of building pred lists for inlinees, which we need to pay anyway.
Probably some more places need |
MinOpts is not affected. I think we might be able to win back some of the lost perf, say by making
/azp run runtime-coreclr jitstress, runtime-coreclr libraries-jitstress |
Azure Pipelines successfully started running 2 pipeline(s). |
Detailed TP diff using @SingleAccretion's pin tool for the benchmarks collection.
The coreclr jitstress failure is #80666. The libraries jitstress failures are not ones I've seen before, but they also seem unrelated. Both are on Linux arm runs:
and
My understanding is that on 32-bit we have sort-of-random asserts from