SLP vectorization causes 1.75x slowdown with march=skylake-avx512 due to vgatherdps #70259
Comments
You mention skylake-avx512 (Xeon) in the title (and refer to an i9-9960X) but use skylake (desktop) in the command-line args - which are you targeting? https://godbolt.org/z/7j8YWenfM
Oops, I should have used -march=skylake-avx512 in my clang invocations, but it doesn't seem to matter much. Doing that reduces the runtime of the version with SLP vectorization on from 4.2s to 4.1s. With SLP vectorization off it's still 2.4s. For reference here's the skylake-avx512 version of the inner loop (SLP vectorization on, unrolling off for brevity):
To update my llvm-mca comment: llvm-mca thinks the skylake-avx512 version is fast (481 cycles for 100 iterations).
Another datapoint: the problem does not occur on Zen 4 (march=znver4). The asm is basically the same as above, but the vgatherdps version produced with SLP vectorization on is indeed slightly faster (1.8s vs 2.0s).
Zen4 (and Zen3 for that matter) have much faster gather than Zen1/2 - we should probably enable the 'TuningFastGather' flag for them (currently this is only used on Intel Skylake and later). Unfortunately I think we need to keep that flag on for Skylake, as for build_vector-style patterns gather is still the better way - but for this ticket it might be as simple as: we don't have great cost numbers to compare the gather+vectorstore vs scalarload+scalarstore sequences.
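For concreteness, here is a rough sketch (not taken from the thread; the names and AVX2 intrinsics are chosen purely for illustration) of the two code shapes that cost comparison is about: one vgatherdps feeding a single vector store versus eight scalar load/store pairs.

```cpp
// Illustrative only: the two lowerings the cost model has to weigh.
// Assumes AVX2 is available; function names are made up for this sketch.
#include <immintrin.h>

// (a) vector form: one vgatherdps plus one vector store
void permute_gather(float *dst, const float *src, __m256i idx) {
  __m256 v = _mm256_i32gather_ps(src, idx, /*scale=*/4);
  _mm256_storeu_ps(dst, v);
}

// (b) scalar form: eight independent load/store pairs
void permute_scalar(float *dst, const float *src, const int *idx) {
  for (int i = 0; i < 8; ++i)
    dst[i] = src[idx[i]];
}
```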
Assuming you are on Linux, can you please check the status of the gather-related microcode mitigation? If you are running an up-to-date kernel, by default it will load a microcode update upon boot, which turns gather insns into slow-but-secure microcoded uops. This would probably explain the poor gather performance. If you have root, you can benchmark this by temporarily disabling the mitigation via a kernel boot parameter in the GRUB config. If this is indeed the source of the slowdown, I am not sure what llvm could possibly do about it, other than split all affected targets into two variants.
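The exact file the commenter asked to check was not preserved in this copy of the thread. Assuming this refers to the Gather Data Sampling ("Downfall") mitigation, one way to check its status on a kernel that reports it is to read the corresponding sysfs entry; a tiny C++ reader (C++ only to match the other sketches here, plain `cat` works just as well) might look like:

```cpp
// Sketch: print the status of the Gather Data Sampling (GDS / "Downfall")
// mitigation, assuming the kernel exposes it under /sys (6.5+ or a distro
// backport). A "Mitigation: Microcode" style answer means gathers run as
// slow microcoded uops.
#include <fstream>
#include <iostream>
#include <string>

int main() {
  std::ifstream f("/sys/devices/system/cpu/vulnerabilities/gather_data_sampling");
  std::string status;
  if (f && std::getline(f, status))
    std::cout << status << '\n';
  else
    std::cout << "entry not present (kernel too old to report GDS?)\n";
}
```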
@TiborGY I believe the
Yes, it looks like I have the microcode mitigation for gather! -mno-gather fixes it without me having to turn off SLP vectorization. The reason I asked the original question is for a downstream compiler, rather than trying to optimize a specific piece of code. I guess I'll add +prefer-no-gather as a function attribute if I don't know for sure the target is not one of the affected processors; the slowdown from using gather on affected processors is much larger than the speed-up from using it on unaffected processors. I will leave this issue open in case you want to figure out how to handle this differently in LLVM, but feel free to close if this is a won't-fix.
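For the downstream-compiler use case, here is a minimal sketch of what attaching that feature could look like if the frontend builds functions through LLVM's C++ API; the helper name and the feature-string handling are illustrative assumptions, not something specified in this thread.

```cpp
// Sketch: prefer scalar loads over gathers for a generated function by
// appending +prefer-no-gather to its "target-features" attribute.
// Assumes a recent LLVM that knows the prefer-no-gather feature.
#include "llvm/IR/Function.h"
#include <string>

void preferNoGather(llvm::Function &fn) {
  // Keep whatever features the frontend already set and append ours.
  std::string features =
      fn.getFnAttribute("target-features").getValueAsString().str();
  if (!features.empty())
    features += ",";
  features += "+prefer-no-gather";
  fn.addFnAttr("target-features", features);
}
```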
Although fwiw, given that my command-line flags targeted skylake-avx512 specifically, I think llvm should assume by default that the microcode mitigation is in effect, and have an option to enable gather emission for people who like to live life on the edge.
@phoebewang Should we disable TuningFastGather on pre-AlderLake Intel targets (and maybe x86-64-v4)? |
I think the SLP cost model might be wrong for vector gathers on skylake.
Consider the following code which repeatedly permutes an array:
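(The original snippet was not preserved in this copy of the issue. The sketch below is a hypothetical reconstruction of the kind of kernel described: an 8-element permutation written as straight-line indexed loads and contiguous stores so SLP vectorization can act on it, driven repeatedly from a loop.)

```cpp
// Hypothetical reconstruction (the original code was not preserved here).
// f applies an 8-element permutation: eight indexed loads from src written
// to contiguous slots of dst. On Skylake, SLP can turn the loads into a
// vgatherdps and the stores into a single vector store.
void f(float *dst, const float *src, const int *idx) {
  dst[0] = src[idx[0]];
  dst[1] = src[idx[1]];
  dst[2] = src[idx[2]];
  dst[3] = src[idx[3]];
  dst[4] = src[idx[4]];
  dst[5] = src[idx[5]];
  dst[6] = src[idx[6]];
  dst[7] = src[idx[7]];
}

// Repeatedly permute back and forth so the gather cost dominates the runtime.
void bench(float *a, float *b, const int *idx, long iters) {
  for (long i = 0; i < iters; ++i) {
    f(b, a, idx);
    f(a, b, idx);
  }
}
```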
Compiled with top-of-tree clang with -march=skylake -O3, it takes about 4.2 seconds to run on my i9-9960X. Compiled with -march=skylake -O3 -fno-slp-vectorization, it takes 2.4 seconds. The only salient difference in the assembly is that SLP vectorization has packed the eight stores in f into a gather intrinsic. Here's the inner loop assembly with SLP vectorization on (with unrolling off for brevity):

and here it is with SLP vectorization off:
Interestingly, llvm-mca has the right idea. It says the version with SLP vectorization on is 2310 cycles per 100 iterations, and the version with it off is 813 cycles per 100 iterations.