Remove dummy hooks for kernels #1700
Conversation
LGTM. It reduces the possibility of missing a kernel implementation.
nit
IMO this removes a rather significant feature: being able to swap out backend binaries to replace dummies with actual implementations. With a bit of clean-up of the ABI, this would allow us to ship binaries for all backends, compiled by different compilers.
This is not a 'feature' we are using right now, and I highly doubt we would ever use it.
I'm not sure how to understand this. We can already do this if multiple backends are enabled, which requires having the different compilers available on the same system.
I was planning to start using it for Julia binary distribution in Yggdrasil, making the CUDA and ROCm support controllable.
Can you explain that a bit more? Were you building only the backend libraries on different systems, then bringing them onto one system and linking them together?
Also, shouldn't that still be possible? On the system where you build core and link everything together, you need to enable all backends, and then you should be able to use the libraries from the other systems.
No, if you build Ginkgo with all backends enabled, then executing a binary linking against Ginkgo requires all backend runtime libraries (for CUDA/HIP/SYCL) to be available, unless we want to start implementing our own dynamic linking, which we absolutely should not attempt. Without the stubs, we lose the main reason why we split things up into separate libraries in the first place.
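To make the stub mechanism discussed here more concrete, below is a minimal single-file sketch; all names (`cuda::axpy`, `BUILD_CUDA_STUB`) are invented for illustration and are not Ginkgo's actual interfaces. In the real setup, the stub and the actual backend would live in separate shared libraries exporting the same symbols, so which library is present decides whether the backend kernels work, without relinking the application against the core.

```cpp
// Hypothetical sketch of a "dummy hook" (stub) mechanism, compressed into one
// file with a preprocessor toggle standing in for the choice of shared library.
#include <iostream>
#include <stdexcept>

// Shared kernel declaration, as it would appear in a common header.
namespace cuda {
void axpy(double alpha, const double* x, double* y, int n);
}

// Toggle mimicking whether the stub library or the real backend is linked.
#define BUILD_CUDA_STUB 1

#if BUILD_CUDA_STUB
// Stub library: the symbol exists so everything links, but using it fails.
namespace cuda {
void axpy(double, const double*, double*, int)
{
    throw std::runtime_error("CUDA backend was not compiled");
}
}
#else
// Real backend library: an actual implementation (a plain loop stands in for
// a device kernel in this sketch).
namespace cuda {
void axpy(double alpha, const double* x, double* y, int n)
{
    for (int i = 0; i < n; ++i) y[i] += alpha * x[i];
}
}
#endif

int main()
{
    double x[2] = {1.0, 2.0}, y[2] = {0.0, 0.0};
    try {
        cuda::axpy(2.0, x, y, 2);
        std::cout << y[0] << ' ' << y[1] << '\n';
    } catch (const std::exception& e) {
        std::cout << "kernel unavailable: " << e.what() << '\n';
    }
}
```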
@upsj I'm still struggling to understand your point, since I'm not really familiar with linking. As far as I understand it, with the current implementation you would be able to swap out the dummy backend libraries for real implementations on the target system.
This is different from building Ginkgo with all backends enabled, since other runtime libraries, e.g. the CUDA runtime, would be needed for backends which are not available on the target system. Does that summarize the issue you mentioned?
Yes, by linking against a shared library (e.g.
@upsj Then I get your point. I will close this PR as I agree that this would be a nice feature to have.
This PR removes the need for dummy kernel hooks. By using `if constexpr`, kernels for backends that are not built are simply skipped, so no implementation needs to be provided.
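As a rough illustration of the `if constexpr` approach described above (a minimal sketch with invented names, not Ginkgo's actual dispatch code): inside a template, the branch discarded by `if constexpr` is never instantiated, so a kernel for a backend that was not built only needs a declaration, and no dummy definition has to be linked in.

```cpp
// Sketch: dispatching to a backend kernel guarded by a compile-time flag.
// In a real build system, the flag would be set per backend at configure time.
#include <iostream>

constexpr bool cuda_built = false;  // pretend the CUDA backend is disabled

namespace cuda {
// Declaration only; no definition (and no dummy hook) is required when the
// backend is disabled, because the call below is never instantiated.
void axpy(double alpha, const double* x, double* y, int n);
}

namespace reference {
// Always-available fallback implementation.
inline void axpy(double alpha, const double* x, double* y, int n)
{
    for (int i = 0; i < n; ++i) y[i] += alpha * x[i];
}
}

template <typename T>
void run_axpy(T alpha, const T* x, T* y, int n)
{
    if constexpr (cuda_built) {
        cuda::axpy(alpha, x, y, n);  // discarded branch, never instantiated
    } else {
        reference::axpy(alpha, x, y, n);
    }
}

int main()
{
    double x[3] = {1, 2, 3}, y[3] = {0, 0, 0};
    run_axpy(2.0, x, y, 3);
    std::cout << y[0] << ' ' << y[1] << ' ' << y[2] << '\n';  // prints 2 4 6
}
```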