Enable the llvm offload build configuration #131527
Conversation
argh, I thought the draft state would prevent bors from rolling.
Failed to set assignee to
Force-pushed from c1679ba to 36c458d
r? @Kobzol
Generally, offload is the LLVM feature for running code on the GPU, and it is part of my project goal, so this is just a first PR to warm up. Do you want any changes? This implicitly also enables OpenMP, since LLVM offload is a feature that developed out of OpenMP.
Edit: updated to include a
This PR changes how LLVM is built. Consider updating src/bootstrap/download-ci-llvm-stamp.
Force-pushed from 36c458d to b1f21b8
@@ -84,6 +84,9 @@
 # Whether to build Enzyme as AutoDiff backend.
 #enzyme = false
+
+# Whether to build LLVM with support for its GPU offload runtime.
+#offload = false
Maybe call it gpu-offload? Or does it also support offloading to other hardware like programmable network controllers and flash drives?
Only compute devices, so no controllers or flash drives, but also non-GPU hardware:
https://clang.llvm.org/docs/OffloadingDesign.html#openmp-offloading
supports [..] target offloading to several different architectures such as NVPTX, AMDGPU, X86_64, Arm, and PowerPC.
But then again, GPUs are surely the most popular use case (though afaik there is hope to support other coprocessors too).
I don't feel strongly either way, and for some coprocessors like TPUs we probably want MLIR instead of offload anyway, which would be a follow-up project.
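For reference, opting into the new option from a local bootstrap config could look like the sketch below. The [llvm] section placement is an assumption based on where the hunk above lands (next to `enzyme`); only the `offload` key itself comes from this PR.

# config.toml: a minimal sketch, assuming the option sits under [llvm]
[llvm]
# Build LLVM with its GPU offload runtime; per the discussion above,
# this implicitly enables the OpenMP runtime as well.
offload = true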
☔ The latest upstream changes (presumably #131934) made this pull request unmergeable. Please resolve the merge conflicts.
ping @Kobzol
Oh, this had a conflict and was set to
Force-pushed from b1f21b8 to e2d3f5a
@Kobzol Thanks, done.
@bors r+
…, r=Kobzol Enable the llvm offload build configuration Tracking: - rust-lang#131513
💔 Test failed - checks-actions
That looks... spurious, but also concerning (heap corruption error?). Let's retry. @bors retry
☀️ Test successful - checks-actions
Finished benchmarking commit (17f8215): comparison URL.

Overall result: ❌ regressions - no action needed

@rustbot label: -perf-regression

Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

Max RSS (memory usage)
This benchmark run did not return any relevant results for this metric.

Cycles
This benchmark run did not return any relevant results for this metric.

Binary size
This benchmark run did not return any relevant results for this metric.

Bootstrap: 782.08s -> 784.653s (0.33%)
Tracking: rust-lang#131513