
NativeInterpreter vs custom AbstractInterpreters giving different inference results #55638

Open
willtebbutt opened this issue Aug 30, 2024 · 3 comments


willtebbutt (Contributor) commented Aug 30, 2024

This was discussed on Slack, but I've opened an issue here at @oscardssmith's suggestion.

The issue is that type inference for the example below (correctly) infers Vector{Float64} when using the NativeInterpreter, while the Cthulhu.jl and (soon-to-be-renamed) Tapir.jl interpreters both infer AbstractVector. Technically this isn't "incorrect", it's just a less precise result, but it would certainly be nice to figure out what we're doing wrong in Cthulhu.jl and Tapir.jl that produces this degraded result.

using Cthulhu, Tapir

# Specify function + args.
fargs = (Base._mapreduce_dim, Base.Fix1(view, [5.0, 4.0]), vcat, Float64[], [1:1, 2:2], :)
tt = typeof(fargs)

# Construct the relevant interpreters.
native_interp = Core.Compiler.NativeInterpreter();
cthulhu_interp = Cthulhu.CthulhuInterpreter();
tapir_interp = Tapir.TapirInterpreter();

# The NativeInterpreter correctly infers the return type, Vector{Float64}.
julia> Base.code_ircode_by_type(tt; optimize_until=nothing, interp=native_interp)
1-element Vector{Any}:
362 1 ─ %1 = %new(Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}, _2, $(QuoteNode(Base.BottomRF{typeof(vcat)}(vcat))))::Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}
    │   %2 = invoke Base._foldl_impl(%1::Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}, _4::Vector{Float64}, _5::Vector{UnitRange{Int64}})::Vector{Float64}
    └──      return %2
     => Vector{Float64}

# Inference loses precision (AbstractVector).
julia> Base.code_ircode_by_type(tt; optimize_until=nothing, interp=cthulhu_interp)
1-element Vector{Any}:
    1 ──       nothing::Nothing
    2 ──       nothing::Nothing
    3 ──       nothing::Nothing
    4 ──       goto #5
    5 ──       nothing::Nothing
    6 ──       nothing::Nothing
    7 ──       goto #8
    8 ──       goto #9
362 9 ──       goto #10                           │╻╷╷     mapfoldl_impl
    10 ─       nothing::Nothing
    11 ─       nothing::Nothing
    12 ─ %12 = %new(Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}, _2, $(QuoteNode(Base.BottomRF{typeof(vcat)}(vcat))))::Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}
    └───       goto #13
    13 ─       goto #14
    14 ─       goto #15
    15 ─       goto #16
    16 ─       goto #17
    17 ─       goto #18                           │││┃       ComposedFunction
    18 ─       goto #19
    19 ─ %20 = invoke Base.foldl_impl(%12::Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}, _4::Vector{Float64}, _5::Vector{UnitRange{Int64}})::AbstractVector
    └───       goto #20
    20 ─       return %20
     => AbstractVector

# Inference loses precision (AbstractVector).
julia> Base.code_ircode_by_type(tt; optimize_until=nothing, interp=tapir_interp)
1-element Vector{Any}:
362 1 ─ %1 = %new(Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}, _2, $(QuoteNode(Base.BottomRF{typeof(vcat)}(vcat))))::Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}
    │   %2 = invoke Base.foldl_impl(%1::Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}}, _4::Vector{Float64}, _5::Vector{UnitRange{Int64}})::AbstractVector
    └──      return %2
     => AbstractVector

I've not dug into the detail of type inference yet, so I'm not sure where this discrepancy might be coming from. Would greatly appreciate any advice on how to get started debugging this!
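
One possible first step (a sketch only; the signature below is simply transcribed from the invoke statements in the IR above, and I haven't verified that this is where the precision is actually lost) would be to repeat the same comparison on the inner Base.foldl_impl call:

# Hypothetical narrowing-down step: infer the inner call from the IR above
# with each interpreter and compare the inferred return types.
inner_tt = Tuple{
    typeof(Base.foldl_impl),
    Base.MappingRF{Base.Fix1{typeof(view), Vector{Float64}}, Base.BottomRF{typeof(vcat)}},
    Vector{Float64},
    Vector{UnitRange{Int64}},
}

Base.code_ircode_by_type(inner_tt; optimize_until=nothing, interp=native_interp)
Base.code_ircode_by_type(inner_tt; optimize_until=nothing, interp=cthulhu_interp)
Base.code_ircode_by_type(inner_tt; optimize_until=nothing, interp=tapir_interp)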

Reproducibility:

julia> versioninfo()
Julia Version 1.10.5
Commit 6f3fdf7b362 (2024-08-27 14:19 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: macOS (x86_64-apple-darwin22.4.0)
  CPU: 12 × Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, skylake)
Threads: 6 default, 0 interactive, 3 GC (on 12 virtual cores)
Environment:
  JULIA_NUM_THREADS = 6

julia> Pkg.status()
Status `/private/var/folders/z7/0fkyw8ms795b7znc_3vbvrsw0000gn/T/jl_9hQ5av/Project.toml`
  [f68482b8] Cthulhu v2.14.0
  [07d77754] Tapir v0.2.42

Associated Cthulhu.jl issue.
Associated Tapir.jl issue.

Note: It's also an issue in Enzyme.jl, but I figured links to two packages were probably sufficient!

Keno (Member) commented Aug 31, 2024

but it would certainly be nice to figure out what we're doing wrong in Cthulhu.jl and Tapir.jl to see this degraded result.

Probably nothing. There's a known limitation of inference where a previously cached result can improve the precision of a later-inferred result, even if the recursion heuristics would have cut off inference for said later query. It's just not worth doing the work to determine whether you're in this case.
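
A rough way to probe whether cache state is the culprit (a sketch; whether code_ircode_by_type populates and then reuses the custom interpreter's local inference cache like this is an assumption, not something verified here) is to pre-infer the inner call on the same interpreter instance, reusing inner_tt from above, and then re-run the outer query:

# Sketch: warm the custom interpreter's cache with the inner call, then
# re-run the outer query on the same interpreter instance. If the cached
# inner result is picked up before the recursion heuristic fires, the outer
# result may come back as Vector{Float64}.
Base.code_ircode_by_type(inner_tt; optimize_until=nothing, interp=cthulhu_interp)
Base.code_ircode_by_type(tt; optimize_until=nothing, interp=cthulhu_interp)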

willtebbutt (Contributor, Author) commented

Thanks for the response @Keno -- is there anything written down about this?

I'm interested, and would quite like to get to the bottom of this, because it's causing problems for both Enzyme.jl and Tapir.jl (see the issues linked above, and the issues which have since referenced this one) -- it seems to cause rather innocuous-looking things to fail to infer, which has a serious impact on performance.

Keno (Member) commented Sep 5, 2024

I think there are various issues about it, but regardless, the thing to figure out is not why there is a difference between Base and the custom absint, but what is causing the custom absint's inference to be limited; then either improve the heuristic or override it for the particular case.
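
One concrete place such an override could plug in (a hedged sketch; none of the parameters shown is known to be the heuristic that fires for this example): many inference limits live in Core.Compiler.InferenceParams, and interpreters are typically constructed with an InferenceParams instance, so relaxing a limit and re-running the query is one way to probe which heuristic is responsible.

# Hedged sketch: relax a tunable inference limit and re-run the query.
# max_methods is just an example of a limit that exists in InferenceParams
# on Julia 1.10; it is not known to be the heuristic at fault here.
relaxed_params = Core.Compiler.InferenceParams(; max_methods = 8)
relaxed_interp = Core.Compiler.NativeInterpreter(; inf_params = relaxed_params)
Base.code_ircode_by_type(tt; optimize_until=nothing, interp=relaxed_interp)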
