fix for invalid caching of CodeInfo from typeinf_ext #51872
Conversation
When inference decided it was not necessary to cache the object, it also skipped all of the work required to make the code valid, which typeinf_ext depends upon. This resulted in caching invalid data, causing effects tests to break unpredictably. This change ensures that we always update the InferenceState with the final result (when must_be_codeinf is true), so that typeinf_ext can get the correct results out of it for internal codegen use. Previously we were disregarding that flag in some cases. Fixes one of the issues uncovered in #51860
Tests are looking green, although there also seems to be a long backlog before everything finishes building.
Merging somewhat early since it should get CI green again, although it doesn't fix the bug with accessing IPO effects that caused #51860.
LGTM. I'll be making the needed adjustments to the packages in the ecosystem impacted by this change, like JET and Cthulhu.
opt.min_world = first(valid_worlds)
opt.max_world = last(valid_worlds)
IIUC, this is the critical change that allows typeinf_ext to always use CodeInfo with correct information valid for caching.
inferred_result.min_world = first(valid_worlds)
inferred_result.max_world = last(valid_worlds)
It's great to see this function now focused solely on code transformation, without also mutating additional metadata to make the result valid for caching. That part in particular was very specific to native compilation, and so wasn't necessarily required for external abstract interpreters.
Yeah, that was my thinking too. It feels like a bit of a hack to pull the final result out of the InferenceFrame object, but that is what our tools currently expect, so we do need to make sure the final results get populated there. This is still not fully correct, as we have too much of the logic for caching implemented only in Julia, instead of having all customers call jl_get_codeinst_for_src and let that function deal with extracting all of the backedges, world ages, and IPO information (which currently happens only when Julia/inference is allocating a cache entry, but not when C/codegen is choosing to allocate a cache entry).
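To make the invariant behind this fix concrete, here is a minimal, hypothetical Julia sketch. The struct and function names below are illustrative stand-ins, not the actual Base compiler internals: the point is only that the valid world bounds get stamped onto the result unconditionally, rather than (as before the fix) only on the path that also writes a cache entry.

```julia
# Illustrative sketch: world-age bounds must be set on the result
# regardless of whether we decide to cache it.

struct WorldRange
    min_world::UInt
    max_world::UInt
end

mutable struct SketchResult
    src::Any
    min_world::UInt
    max_world::UInt
    SketchResult(src) = new(src, UInt(0), UInt(0))
end

function finish!(result::SketchResult, valid_worlds::WorldRange, cache::Bool)
    # The buggy behavior was equivalent to guarding these two assignments
    # behind `if cache`, leaving uncached results with stale world ages
    # that typeinf_ext-style consumers then trusted.
    result.min_world = valid_worlds.min_world
    result.max_world = valid_worlds.max_world
    if cache
        # a real implementation would insert into the code cache here (omitted)
    end
    return result
end
```

Usage: `finish!(SketchResult(nothing), WorldRange(UInt(5), UInt(9)), false)` yields a result whose min_world/max_world are 5 and 9 even though nothing was cached, which is the property the uncached path previously violated.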