Proposed semantics for implicit vectorization of primitives #56481
Labels: compiler:llvm, compiler:simd, design
@gbaraldi requested a writeup of the design I had for vectorized primitives.
I have no particular plans to implement this myself anytime soon, so this is
up for grabs.
Background
A longstanding issue (#21454 is the earliest I can find in Julia, but it's been discussed in various places) is that while LLVM is reasonably good at emitting vector code for straight-line and looped code of arithmetic primitives, we do not permit LLVM to perform any vectorization of julia functions. This most prominently shows up with the trig functions and `exp`, and in hyper-optimized code it can become the final bottleneck. It has never been a priority, because there are feasible workarounds (replacing the julia versions with LLVM intrinsics, hacking LLVM itself, rewriting the code to explicitly vectorize), but they are all very annoying.
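For concreteness, this is the kind of loop affected (a sketch; the elementwise arithmetic here is something LLVM can already vectorize, but the call to `sin`, being an ordinary julia function, stays scalar):

```julia
# The multiply/add in this loop can be vectorized by LLVM, but the call to
# `sin` (an ordinary julia function) currently blocks full vectorization.
function scaled_sin!(out::Vector{Float64}, x::Vector{Float64})
    @inbounds @simd for i in eachindex(out, x)
        out[i] = 2.0 * sin(x[i]) + 1.0
    end
    return out
end
```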
Fixing this issue really has three parts:
For this issue, what I'm talking about is #2 (though a little bit of #1 is required to make it work). Fixing #2 would probably resolve 90% of the cases in which this issue is the final performance bottleneck. The primary reason for this is that various groups have already assembled libraries of hand-vectorized implementations of most common math functions (SLEEF, SVML, etc.). LLVM does in fact have the ability to replace calls to e.g. `llvm.sin` by these vectorized versions. However, Julia does not use either this capability or the LLVM primitives (at least by default; various people have turned it on for performance at various points). There are several reasons for this:
- In the past we have had trouble controlling the scalar case for these intrinsics, which sometimes goes to system math libraries.
- LLVM will sometimes constant fold these in ways that are not legal according to our semantics.
- The list of vectorizable intrinsics is not extensible and requires a compiler upgrade to extend. This is in general counter to our philosophy of making things non-special.
- Unless explicitly annotated otherwise, we generally want code to be reproducible across systems.
- Our effects system does not understand this replacement, requiring either pessimization or non-soundness.
The basic gist is that we would like a mechanism that explicitly lets the user indicate that the vectorization is permissible and lets the effects system properly analyze all possible cases without pessimization.
Proposed design
The proposed design is the following (semantically):
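As a rough sketch (the concrete definitions below are illustrative, not a finalized API; in a real implementation `semantic_shuffle` and `semantic_unshuffle` would be compiler-known intrinsics rather than ordinary julia functions):

```julia
# Sketch only: illustrative definitions, not a finalized API.

# Zero-sized token pairing a `semantic_shuffle` with its `semantic_unshuffle`.
struct ShuffleOrder end

# A tuple whose width the optimizer is permitted to change to any of the
# widths listed in `Spec`, e.g. Spec = (1, 4, 8).
struct OpaqueVecTuple{Spec, T, N}
    tup::NTuple{N, T}
end

# Non-:consistent "intrinsics". To the interpreter they are no-ops (they just
# wrap/unwrap the tuple); the optimizer, however, has license to regroup the
# wrapped values into any width allowed by `Spec`.
function semantic_shuffle(::Val{Spec}, tup::NTuple{N, T}) where {Spec, N, T}
    return OpaqueVecTuple{Spec, T, N}(tup), ShuffleOrder()
end

function semantic_unshuffle(ovt::OpaqueVecTuple{Spec, T, N}, ::ShuffleOrder) where {Spec, T, N}
    return ovt.tup
end
```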
The key new intrinsics here are `semantic_shuffle`/`semantic_unshuffle`. These are (non-`:consistent`) intrinsics that give the optimizer license to expand `tup` to any of the sizes in `Spec`. `ShuffleOrder` is a zero-sized token. The interpreter semantics are for `semantic_shuffle` and `semantic_unshuffle` to be no-ops. Here's a usage example:
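This sketch builds on the definitions above; `vsin4`/`vsin8` are hypothetical stand-ins for whatever vectorized kernels the user wants to expose (SLEEF or SVML bindings, hand-written Julia SIMD code, etc.):

```julia
# Hypothetical vectorized kernels; scalar placeholders so the example runs.
vsin4(t::NTuple{4, Float64}) = ntuple(i -> sin(t[i]), Val(4))
vsin8(t::NTuple{8, Float64}) = ntuple(i -> sin(t[i]), Val(8))

# Dispatcher over the widths permitted by Spec = (1, 4, 8). Whatever the
# compiler wants to assume about this function, it must prove across all of
# these branches.
function vectorized_sin(ovt::OpaqueVecTuple{(1, 4, 8), Float64, N}) where {N}
    t = ovt.tup
    res = N == 4 ? vsin4(t) :
          N == 8 ? vsin8(t) :
          ntuple(i -> sin(t[i]), N)   # scalar and any other permitted width
    return OpaqueVecTuple{(1, 4, 8), Float64, N}(res)
end

# Scalar entry point. The shuffle/unshuffle pair is a no-op as written, but
# gives the optimizer license to batch several neighboring calls into one
# call of `vectorized_sin` at width 4 or 8.
function mysin(x::Float64)
    ovt, order = semantic_shuffle(Val((1, 4, 8)), (x,))
    res = vectorized_sin(ovt)
    return semantic_unshuffle(res, order)[1]
end
```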
This by itself gives the optimizer license to combine several `sin` calls into one vectorized `sin` (under appropriate effect assumptions, in particular `:nothrow` and `:effect_free`, but crucially not `:consistent`). To further improve things, codegen should recognize `OpaqueVecTuple` in the signature and generate (LLVM-level) specialized versions for the vector lengths specified in `Spec` with the appropriate ABI.

Some additional work is then required to teach LLVM to perform this operation in the vectorizer. However, crucially,
this is entirely user-defined. It does not matter whether the vector library is some external library of primitives
or whether the vector intrinsics are written in Julia. The semantics are well defined and the compiler can continue to
optimize, so long as it proves whatever information it wants correct across all branches of the dispatcher.
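For illustration, this is the kind of rewrite the license grants (using the sketch definitions above, at width 4; purely conceptual, showing the permitted semantics rather than actual compiler output):

```julia
# As written by the user: four independent scalar calls.
function user_code(x1::Float64, x2::Float64, x3::Float64, x4::Float64)
    return mysin(x1), mysin(x2), mysin(x3), mysin(x4)
end

# A rewrite the optimizer is now permitted to perform: the shuffle intrinsics
# allow regrouping the four inputs into one width-4 call of the dispatcher.
function as_if_vectorized(x1::Float64, x2::Float64, x3::Float64, x4::Float64)
    ovt, order = semantic_shuffle(Val((1, 4, 8)), (x1, x2, x3, x4))
    return semantic_unshuffle(vectorized_sin(ovt), order)
end
```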