Unconditional errors result in dynamic invocations #649
Comments
Fascinating. So the actual bug is that you're invoking `shfl_sync` incorrectly. @aviatesk Is this the throw-block deoptimization? I tested this on 1.11. Can we disable this deoptimization so that compilation succeeds and the user gets a chance to see the error being generated at run time?
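For context, here is a minimal CPU-side sketch (my own illustration, not code from this issue; `takes_int32` and `bad` are hypothetical names) of how a call with no applicable method is left as a generic call in the typed IR, which is the kind of call GPUCompiler later reports as an unsupported dynamic invocation:

```julia
# Hypothetical helper: it only has a method for Int32, so calling it with a
# Float64 can only ever throw a MethodError at run time.
takes_int32(x::Int32) = x + one(Int32)

bad(x::Float64) = takes_int32(x)

# The typed IR keeps the call as a generic (dynamic) call whose return type is
# inferred as Union{}, i.e. "this always throws":
code_typed(bad, Tuple{Float64})
```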
Holy shit! That was a 3-hour torment for me until your response. Thanks a lot, and yeah, the suggestion would have helped me.
Yes, since
Hmm, we should already do that: GPUCompiler.jl/src/interface.jl, lines 254 to 260 (at 09b4708).
That does seem to be the case... Taking a look at the inference results from the `GPUInterpreter` might reveal something?
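One way to look at those inference results from the host side (my suggestion; the thread does not prescribe a specific tool, and `kernel()` below stands in for whatever kernel fails to compile) is CUDA.jl's reflection macros:

```julia
using CUDA

# Compile (but don't launch) the kernel and print the device-side typed IR,
# highlighting type-unstable or dynamic calls as seen by the GPU compiler.
CUDA.@device_code_warntype @cuda launch=false kernel()
```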
I might have discovered another area where dynamic invocations potentially occur.

```julia
using CUDA

function kernel()
    c_shared = CuDynamicSharedArray(Float32, 2)
    if threadIdx().x == 1
        c_shared[1] = 0.0
        c_shared[2] = 0.0
    end
    sync_threads()
    if threadIdx().x % 2 == 0
        CUDA.atomic_add!(pointer(c_shared, 1), 1.0)
    else
        CUDA.atomic_add!(CUDA.pointer(c_shared, 2), 1.0)
    end
    sync_threads()
    return
end

@cuda threads=10 blocks=1 shmem=sizeof(Float32)*2 kernel()
```

```
Reason: unsupported dynamic function invocation (call to atomic_add!)
Stacktrace:
 [1] kernel
   @ ~/test.jl:26
Reason: unsupported dynamic function invocation (call to atomic_add!)
Stacktrace:
 [1] kernel
   @ ~/test.jl:28
```
Yeah, that's expected behavior of the low-level interface. Only the high-level `@atomic` interface converts values to the element type for you.
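To make the kernel above compile, I believe (this is my assumption, not a confirmed fix from the thread) it is enough to replace the `atomic_add!` lines inside `kernel()` with a value whose type matches the Float32 shared memory, or to go through the high-level atomic macro:

```julia
# Option 1: pass a Float32 literal so atomic_add! has a matching method.
CUDA.atomic_add!(pointer(c_shared, 1), 1.0f0)

# Option 2: use the high-level macro, which handles the conversion.
CUDA.@atomic c_shared[1] += 1.0
```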
Yes, that's expected. But should it lead to a dynamic invocation? My question is why, in this case, an unconditional error turns into a dynamic invocation.
I guess you could interpret calling a function with the wrong arguments as throwing an unconditional MethodError, but generally I find it less surprising that doing so results in a dynamic invocation, whereas in the case of an explicit, unconditional `error` call it would be surprising.
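A small sketch of that distinction as I understand it (my own example, and the described behavior is an assumption rather than output from this thread): an explicit `error` call is a statically resolved throw, while a call with no matching method can only be dispatched dynamically.

```julia
using CUDA

# An explicit, unconditional throw: the call target (error) is fully known,
# so nothing about it is dynamic.
function explicit_error_kernel()
    error("this kernel always throws")
    return
end

# A call with no applicable method: the only possible outcome is a MethodError,
# and the unresolved call is what gets reported as a dynamic invocation.
takes_int32(x::Int32) = x

function method_error_kernel()
    takes_int32(1.0)
    return
end

# @cuda explicit_error_kernel()   # expected to compile; errors at run time
# @cuda method_error_kernel()     # expected to fail GPU compilation/validation
```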
Describe the bug
Any use of `shfl_sync` throws an error saying `shfl_recurse` is a dynamic function.

To reproduce
The Minimal Working Example (MWE) for this bug:
Attempting to do a stream compaction:
Manifest.toml
Expected behavior
Expected behavior is that the shuffle function doesn't throw an error, and all zeros in `a` get removed when moved to `b`.
Version info
Details on Julia:
Details on CUDA: