type-inference workq #15300
Conversation
Amazing 💯 !
For things like this, it would be good to also have a compile-time benchmark suite (cc @jrevels).
@@ -7,7 +7,7 @@ endof(t::Tuple) = length(t)
 size(t::Tuple, d) = d==1 ? length(t) : throw(ArgumentError("invalid tuple dimension $d"))
 getindex(t::Tuple, i::Int) = getfield(t, i)
 getindex(t::Tuple, i::Real) = getfield(t, convert(Int, i))
-getindex(t::Tuple, r::AbstractArray) = tuple([t[ri] for ri in r]...)
+getindex(t::Tuple, r::AbstractArray) = tuple(Any[t[ri] for ri in r]...)
why is this necessary?
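For context, a small illustration of what the change above does (an editor's sketch, not the PR author's answer to the question): in the Julia of this era, the element type of an untyped comprehension was itself computed by type inference, whereas the `Any[...]` form fixes the element type up front — which matters in code that inference itself bootstraps on.

```julia
# Both forms produce the same tuple here; the difference is that the
# untyped comprehension's element type is chosen by type inference,
# while the Any[...] form fixes it to Any without consulting inference.
t = (1, 2.0, "three")
r = [1, 3]

untyped = [t[ri] for ri in r]     # element type chosen by inference
typed   = Any[t[ri] for ri in r]  # element type fixed to Any

tuple(typed...)                   # (1, "three")
```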
Awesome!! 👍
@@ -291,11 +291,9 @@ function code_typed(f::ANY, types::ANY=Tuple; optimize=true)
     for x in _methods(f,types,-1)
         linfo = func_for_method_checked(x, types)
         if optimize
-            (tree, ty) = Core.Inference.typeinf(linfo, x[1], x[2], linfo,
-                                                true, true)
+            (tree, ty) = Core.Inference.typeinf(linfo, x[1], x[2], true)
I recently made the exact same change to the signature of typeinf on jb/linear. That's encouraging :)
Having a parameter that was always true has confused me for quite some time.
(::Type{Matrix})(m::Integer, n::Integer) = Array{Any}(m, n)
(::Type{Matrix})() = Array{Any}(0, 0)
(::Type{Array{T}}){T}(m::Integer) = Array{T,1}(Int(m))
(::Type{Array{T}}){T}(m::Integer, n::Integer) = Array{T,1}(Int(m), Int(n))
Array{T,2}
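The reviewer is pointing at the dimension parameter on the last line of the hunk: a constructor taking two sizes should build `Array{T,2}`, not `Array{T,1}`. A quick illustration in present-day syntax (an editor's sketch, not code from the PR):

```julia
# With the dimension parameter written correctly, two integer sizes
# produce a two-dimensional array (a Matrix), not a vector.
a = Array{Int,2}(undef, 2, 3)   # modern `undef` syntax; the PR predates it
size(a)            # (2, 3)
a isa Matrix{Int}  # true
```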
Sounds great!
5d1bf6a to ff1e1e6
note: directly inferring linfo in pure_eval_call is unnecessary, since abstract_call_gf_by_type would add this edge anyway, and it can be harmful, since there was no logic here to prevent unbounded recursion in the type signature. Fixes #9770
since in the common case the call graph is a simple DAG, it is faster and more efficient to infer immediately rather than building the edges graph and repeating the method lookup process after the backedge finishes
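The "unbounded recursion in the type signature" mentioned in the commit note can be pictured with a toy definition like the following (an editor's illustration, not the actual test case from #9770):

```julia
# Each call wraps its argument in one more tuple layer, so the chain of
# call signatures f(Int), f(Tuple{Int}), f(Tuple{Tuple{Int}}), ... never
# repeats. Eagerly inferring every callee would recurse forever unless
# inference widens or limits such growing signatures.
f(x) = f((x,))
```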
Since CI was killing this before it could finish, I layered several optimizations on top of the original design. Now CI is happy and it should be good to merge.
Were there any regressions without the optimizations you added?
Judging by the timeouts, yes. I saw some tests run slower and some faster when running an earlier version of this locally.
Yes, I think CI is right up against its memory limit, and some tests cause it to start swapping heavily. These optimizations are faster, but mostly they retain a lot less temporary memory during inference.
Observed the following on this branch after the latest updates:
The interaction here is rather complex, but recursing on a frame already in typeinf_frame causes the state upon return to that function to be a bit convoluted. Detecting frame.inferred at that point and aborting was also correct, but requires a bit of code duplication at each abstract_interpret call to check and return quickly. Making the processing state part of inworkq rather than stuck should be a bit more robust against future edits to this code.
I would like to see comments clearly explaining the roles of typeinf_edge, typeinf_loop, and typeinf_frame. I see the code for limiting the number of exception handlers is commented out. Are we sure it's no longer needed? If so then the code should be removed. There is also a commented-out piece in typeinf_loop.
Ok, this seems to be ready to go. My only comments (above) are very minor and can be addressed post-merge.
The results of a
I'm a bit out of my depth here as gdb doesn't catch any signals from the error. Does anyone have any insight on why this is occurring, or have advice on how to further debug this issue?
Catch the llvm error with The error happens when generating native code for
Same as #14735?
Yeah, likely. Somehow this only happens for me after this branch, but not on a week-old master.
This might have the same underlying cause as #14735, but that one used to manifest only during precompilation of packages that use PyCall, and not in PyCall itself, as I see now. That is, I could use PyPlot and Sympy before the change in this pull request just by commenting out the
As a workaround,
Now live at http://juliacomputing.com/blog/2016/04/04/inference-convergence.html (part 1 of maybe more than 1).
This is a fantastic blog post, thanks for writing it. I love the animation illustrating progress on the simple inference example. A couple of comments:
Are you referring just to "When asked for the return type of a function, inference first checks the list in the .tfunc field of the LambdaInfo method definition for whether the return type has been computed previously or is currently being computed (the .tfunc list is a dictionary that maps from a call signature to either the in-process InferenceState object or a pair of inferred function body and return type)." (which, upon rereading, I agree is atrocious), or was there more in that section that deserves revisiting?
That's far and away the main one. A bit lower you talk about the
A couple of additional issues (distinct from the "internal implementation" concern):
Thanks. I made a handful of updates (mostly adding image captions and moving references to tfunc out of the main text). A stepwise example of Julia's cycle resolution implementation seems fairly annoying to animate (since it involves moving lots of boxes between lots of groups), and so I think this would require more of a YouTube-level animation to be comprehensible. I'm not quite willing to put that level of effort into it right now.
I don't blame you one bit for capping the effort level there. It's already very useful as it is. inference.jl has always been difficult to grok for me, and this helps.
This changes type inference to use a queue. Benefits thereof include:
TODO:
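As a rough mental model of the queue-based design the title refers to (an editor's sketch of a generic worklist fixed-point loop, not the actual inference.jl code — all names here are hypothetical):

```julia
# Generic worklist fixed point: frames wait on a queue; when one frame's
# result improves, every caller recorded as a backedge is re-enqueued,
# and the loop runs until nothing changes.
function run_workq(update!, frames, backedges)
    workq = collect(keys(frames))
    while !isempty(workq)
        f = pop!(workq)
        if update!(frames, f)                     # true if frame f's result changed
            append!(workq, get(backedges, f, Symbol[]))
        end
    end
    return frames
end

# Toy use: propagate a "max over callees" value until stable.
frames = Dict(:a => 1, :b => 5, :c => 0)
backedges = Dict(:b => [:a], :a => [:c])          # :a depends on :b; :c on :a
function update!(fr, f)
    old = fr[f]
    f === :a && (fr[f] = max(fr[f], fr[:b]))
    f === :c && (fr[f] = max(fr[f], fr[:a]))
    return fr[f] != old
end
run_workq(update!, frames, backedges)             # frames[:c] converges to 5
```

The point of the re-enqueue-on-change rule is that the final result is independent of the order frames happen to be processed in, which is what makes a flat queue a viable replacement for direct recursion.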