
inference: backward constraint propagation from call signatures #55229

Merged
aviatesk merged 1 commit into master from avi/callsig-backprop on Jul 25, 2024

Conversation

aviatesk
Member

@aviatesk aviatesk commented Jul 24, 2024

This PR implements another (limited) backward analysis pass in abstract interpretation; it exploits signatures of matching methods and refines types of slots.

Here are a couple of examples where these changes improve the inference accuracy:

> generic function example
```julia
addint(a::Int, b::Int) = a + b
@test Base.infer_return_type((Any,Any,)) do a, b
    c = addint(a, b)
    return a, b, c # now the compiler understands `a::Int`, `b::Int`
end == Tuple{Int,Int,Int}
```

> `typeassert` example
```julia
@test Base.infer_return_type((Any,)) do a
    a::Int
    return a # now the compiler understands `a::Int`
end == Int
```

Unlike `Conditional`-constrained type propagation, this type refinement information isn't encoded within any lattice element; rather, it is propagated via the newly added field `frame.curr_stmt_change` of `frame::InferenceState`.
For now this commit exploits refinement information available from call signatures of generic functions and `typeassert`.
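To make the mechanism concrete, here is a minimal sketch (not taken from the PR; `takes_string` and `f` are hypothetical names invented for illustration) of the same signature-based refinement applied to a different concrete type. If the only matching method of a call requires a `String`, then on any path where the call returns without erroring the argument must have been a `String`, so the backward pass should refine the slot type after the call:

```julia
# Hypothetical example: the sole method of `takes_string` accepts only a
# `String`, so a successful call implies its argument was a `String`.
takes_string(s::String) = length(s)

function f(x)
    n = takes_string(x)  # after this call, inference can refine `x::String`
    return x, n
end

# With this PR, the return type should be inferred as `Tuple{String,Int}`
# rather than `Tuple{Any,Int}`.
Base.infer_return_type(f, (Any,))
```

This is directly analogous to the `addint` example above, just with a `String`-typed signature instead of `Int`.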


@jishnub
Contributor

jishnub commented Jul 24, 2024

Closes #55115

@aviatesk
Member Author

@nanosoldier runbenchmarks("inference", vs=":master")

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.

@aviatesk aviatesk force-pushed the avi/callsig-backprop branch from da15892 to 0b253e9 Compare July 24, 2024 18:26
@aviatesk
Member Author

@nanosoldier runbenchmarks("inference", vs=":master")

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.

@aviatesk
Member Author

@nanosoldier runbenchmarks("inference", vs=":master")

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.

@aviatesk aviatesk force-pushed the avi/callsig-backprop branch from ed79684 to b618a9a Compare July 25, 2024 08:32
@aviatesk
Member Author

@nanosoldier runbenchmarks("inference", vs=":master")

@aviatesk
Member Author

aviatesk commented Jul 25, 2024

Ok, I fixed all inference performance regressions, and this should be ready to go now.

@aviatesk aviatesk force-pushed the avi/callsig-backprop branch from b618a9a to 4f25149 Compare July 25, 2024 09:29
@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.

@aviatesk
Member Author

@nanosoldier runbenchmarks("inference", vs=":master")

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.

@aviatesk
Member Author

Going to merge. Post-merge reviews would be very welcome, but we don't need to wait for reviews before merging.

@aviatesk aviatesk merged commit c14fe35 into master Jul 25, 2024
7 of 8 checks passed
@aviatesk aviatesk deleted the avi/callsig-backprop branch July 25, 2024 14:11
@nsajko
Contributor

nsajko commented Jul 28, 2024

This doesn't seem like it's been completely effective yet: #55278

lazarusA pushed a commit to lazarusA/julia that referenced this pull request Aug 17, 2024
inference: backward constraint propagation from call signatures (JuliaLang#55229)

- closes JuliaLang#37866
- fixes JuliaLang#38274
- closes JuliaLang#55115
Labels
compiler:inference Type inference