
CUDA runtime api #262

Merged: 56 commits merged into coreylowman:main on Dec 17, 2024

Conversation

@psmyth94 (Contributor) commented Jun 20, 2024

Hello,

I started working on an implementation of the CUDA runtime API since I saw some interest in it (#200). I managed to translate the runtime API equivalents for most of the functions in the driver API, except for context and module management, which cudart handles automatically. Below are some of the issues/limitations:

  • No cudaOccupancyMaxPotentialBlockSize.*
    • bindgen isn't able to generate bindings for templated static inline functions (see their list of unsupported features).
  • Cannot use cudaLaunchKernel via FFI
    • Using cudaLaunchKernel via FFI bindings in Rust isn't possible AFAIK, because the CUDA runtime expects the specific binary layout that nvcc's host-side stubs produce in order to find and launch compiled kernels. Rust's FFI doesn't generate that layout, so the runtime can't resolve and execute the kernel functions.
    • The only way to launch kernels through the runtime API is by calling precompiled wrappers that use the CUDA <<<...>>> syntax, which is what I did with testkernel.cu (see the sketch after this list).
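As a rough sketch of that wrapper pattern (the wrapper name, its signature, and the build setup are illustrative assumptions, not the actual contents of testkernel.cu):

```rust
use std::os::raw::{c_float, c_int};

// Precompiled by nvcc from a .cu file that contains the actual
// `<<<grid, block>>>` launch, e.g.:
//
//   extern "C" void launch_test_kernel(float *out, int n) {
//       test_kernel<<<(n + 255) / 256, 256>>>(out, n);
//   }
//
// The Rust side only ever sees a plain C function.
extern "C" {
    fn launch_test_kernel(out: *mut c_float, n: c_int);
}

/// Launch the precompiled kernel on an already-allocated device buffer.
///
/// # Safety
/// `dev_out` must be a valid device pointer to at least `n` floats,
/// allocated through the runtime API (e.g. cudaMalloc).
unsafe fn run(dev_out: *mut c_float, n: usize) {
    launch_test_kernel(dev_out, n as c_int);
}
```

The .cu wrapper would be compiled and linked via a build script, so the runtime still sees the kernel registration that nvcc emits.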

I only have bindings for cuda-12050 and 12020 so far. I want to gauge community interest in this implementation before investing further time.

Edit: the issue with cudaLaunchKernel has been fixed; please read below.

@psmyth94 changed the title from Cudart to CUDA runtime api on Jun 21, 2024
@ya0guang

Hi Patrick, I'm interested in your proposal! I'm trying to add support for CUDA 12.4, but I don't understand how to generate a source file like sys_12040.rs using the bindgen.sh script. I see this line: CUDART_VERSION=$(cat tmp.rs | grep "CUDART_VERSION" | awk '{ print $6 }' | sed 's/.$//'). What is expected to be in tmp.rs?

Thanks for your effort in porting the CUDA runtime!

@ya0guang

Sorry, there was a problem with my rustfmt. The bindgen script now works on my side with CUDA 12.4 and Ubuntu 22.04, and all of the runtime tests pass. I'll dive deeper into it, thanks!

@coreylowman (Owner)

> Cannot use cudaLaunchKernel via FFI

Yeah, this was the main problem I ran into as well. I think with Rust we would always have to rely on the driver API to call kernels, which is why I chose to go with the driver API at the time.

Do we gain anything by adding the runtime API?

At the very least, we should document this shortcoming.

@ya0guang

My understanding is that the runtime API works at a higher level and is easier for developers to deal with. From the CUDA developer guide:

> The runtime API eases device code management by providing implicit initialization, context management, and module management. This leads to simpler code, but it also lacks the level of control that the driver API has.

I believe an alternative to cudaLaunchKernel is to launch the kernel with the driver API's cuLaunchKernel, since a launch cannot be implemented directly through the runtime API call. Another project for CUDA API remoting, cricket, takes this approach: https://github.com/RWTH-ACS/cricket/blob/ce8fdf7d4f4df696cf65c6d35926a76443d18f28/cpu/cpu-server-runtime.c#L875
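As a rough illustration of that workaround in Rust, here is a sketch that goes through cudarc's existing driver-side API (which ultimately calls cuLaunchKernel); the method names reflect my understanding of cudarc's current driver::safe and nvrtc modules and are an assumption, not code from this PR:

```rust
use cudarc::driver::{CudaDevice, LaunchAsync, LaunchConfig};
use cudarc::nvrtc::compile_ptx;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Memory and other setup could go through runtime bindings, but the
    // launch itself is delegated to the driver API (cuLaunchKernel) here.
    let dev = CudaDevice::new(0)?;

    let ptx = compile_ptx(
        "extern \"C\" __global__ void scale(float *x, int n) {
             int i = blockIdx.x * blockDim.x + threadIdx.x;
             if (i < n) x[i] *= 2.0f;
         }",
    )?;
    dev.load_ptx(ptx, "scale_mod", &["scale"])?;
    let f = dev.get_func("scale_mod", "scale").unwrap();

    let n: usize = 1024;
    let mut x = dev.htod_copy(vec![1.0f32; n])?;
    unsafe { f.launch(LaunchConfig::for_num_elems(n as u32), (&mut x, n as i32)) }?;

    let host = dev.dtoh_sync_copy(&x)?;
    assert_eq!(host[0], 2.0);
    Ok(())
}
```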

@ahem commented Jul 6, 2024

If you are trying to gauge community interest, then I at least would be super interested in this. I am working on a project that provides Rust support for a large-ish in-house C/C++ project that is built exclusively with the runtime API. Since many of the types are interchangeable with the driver API, this is doable already, but it would of course be much easier with direct bindings to the runtime API!

We are still using CUDA 11, though...

@psmyth94 (Contributor Author) commented Jul 8, 2024

Hey @ahem, that's awesome to hear. As for CUDA 11, I don't think the code will differ much. The only difference is that I don't believe the primary context is initialized after cudaSetDevice() in CUDA 11; we can use a no-op call to cudaFree(0) to initialize it, though (sketched below). I can test this once I set up an environment with CUDA 11.8.
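A minimal sketch of that initialization sequence, assuming the bindgen-generated runtime bindings live under a path like cudarc::runtime::sys (the module path is an assumption about this PR's layout):

```rust
use std::ffi::c_void;

// Assumed location of the bindgen-generated runtime bindings in this PR.
use cudarc::runtime::sys;

/// Select a device and make sure its primary context exists.
unsafe fn init_device(device: i32) {
    // On CUDA 12, cudaSetDevice alone initializes the primary context.
    sys::cudaSetDevice(device);

    // On CUDA 11, force primary-context creation with a no-op free of a null
    // pointer; cudaFree(nullptr) is defined to succeed without doing anything.
    sys::cudaFree(std::ptr::null_mut::<c_void>());

    // Error handling omitted; the exact cudaError variant names depend on the
    // bindgen output.
}
```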

@psmyth94 (Contributor Author)

CUDA versions from 11.4.0 to 12.5.0 have been added, and I didn't encounter any issues in the tests. I also added a module in result called version that checks which versions of the runtime API and driver API are currently in use (instead of checking via features).
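For reference, those checks only need the two version queries the runtime API already exposes; a minimal sketch of the underlying calls (the actual function names inside result::version may differ):

```rust
use std::os::raw::c_int;

// Hand-written declarations for illustration; in the PR these come from the
// bindgen-generated runtime sys module and link against libcudart.
extern "C" {
    fn cudaRuntimeGetVersion(runtime_version: *mut c_int) -> c_int;
    fn cudaDriverGetVersion(driver_version: *mut c_int) -> c_int;
}

fn main() {
    let (mut runtime, mut driver) = (0, 0);
    unsafe {
        cudaRuntimeGetVersion(&mut runtime);
        cudaDriverGetVersion(&mut driver);
    }
    // Versions are encoded as 1000 * major + 10 * minor, e.g. 12050 for 12.5.
    println!("runtime = {runtime}, driver = {driver}");
}
```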

Note, though, that while my tests on 11.x used the 11.x bindings for both the driver and runtime APIs, my libcuda.so was still 12.5.0, so this hasn't been properly tested on NVIDIA drivers that ship CUDA 11.x.

@therishidesai

I am very interested in the work here. I previously tried to use cust, but it doesn't support pitched memory, and unfortunately the project doesn't look actively maintained.

I was looking at the diff and I don't see support for pitched memory or page-locked memory. Both would be useful to have if more people are going to use this crate (both are sketched below).
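For context, these map to two runtime-API allocators; a minimal sketch of how they would be called through raw bindings (hand-written declarations for illustration, not code from this PR):

```rust
use std::os::raw::{c_int, c_void};

extern "C" {
    // Runtime-API allocators; in cudarc these would come from the generated sys module.
    fn cudaMallocPitch(dev_ptr: *mut *mut c_void, pitch: *mut usize, width: usize, height: usize) -> c_int;
    fn cudaMallocHost(ptr: *mut *mut c_void, size: usize) -> c_int;
    fn cudaFree(dev_ptr: *mut c_void) -> c_int;
    fn cudaFreeHost(ptr: *mut c_void) -> c_int;
}

fn main() {
    unsafe {
        // Pitched 2D allocation: each row is padded so rows start on aligned boundaries.
        let (mut dev, mut pitch) = (std::ptr::null_mut(), 0usize);
        cudaMallocPitch(&mut dev, &mut pitch, 640 * 4, 480); // 640 floats wide, 480 rows

        // Page-locked (pinned) host allocation: enables faster, async host<->device copies.
        let mut host = std::ptr::null_mut();
        cudaMallocHost(&mut host, 1 << 20);

        cudaFreeHost(host);
        cudaFree(dev);
    }
}
```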

@coreylowman (Owner)

I think my biggest concern here is just the amount of shared code between driver & runtime that would need to be maintained together.

If we are maintaining the exact same API between driver::safe::CudaDevice and runtime::safe::CudaDevice, I'm wondering if there's a better way to do all of this without having copies of everything (although having copies does keep things less complex, since we don't need extra abstractions).

I guess I'm wondering: for downstream crates, what are the differences when using driver vs runtime?

If there aren't really any differences, we could probably just stick to exposing sys/result for runtime, but not expose a safe API for runtime at all.

Thoughts?

@therishidesai

At least for memory, I think cudarc should take some inspiration from the cust::memory module. It would be nice to have a safe Rust way of managing all forms of CUDA memory.

I do agree with @coreylowman that the current work seems to be a lot like the unsafe API.

@psmyth94 (Contributor Author) commented Sep 7, 2024

> I think my biggest concern here is just the amount of shared code between driver & runtime that would need to be maintained together.
>
> If we are maintaining the exact same API between driver::safe::CudaDevice and runtime::safe::CudaDevice, I'm wondering if there's a better way to do all of this without having copies of everything (although having copies does keep things less complex, since we don't need extra abstractions).
>
> I guess I'm wondering: for downstream crates, what are the differences when using driver vs runtime?
>
> If there aren't really any differences, we could probably just stick to exposing sys/result for runtime, but not expose a safe API for runtime at all.
>
> Thoughts?

Yeah, I agree there's a lot of code copying. I looked into a more integrated approach, but it would require a lot of abstraction through traits, like having CudaBlasLT accept a CudaDevice from either runtime or driver. Personally, I'm not a fan of that much abstraction. My reasoning for keeping it separate is that even if you update the driver API, it won't cause integration issues with the runtime. Managing the abstraction would be more cumbersome IMO (in the short term at least).

From what I gather, users seem to prefer using this as a standalone module rather than integrating it with the rest.

As for whether to include a safe API: I think it's more practical (and less time-consuming) to stick with sys/result for now. We can always accept contributions toward a safe API later on.

> At least for memory, I think cudarc should take some inspiration from the cust::memory module. It would be nice to have a safe Rust way of managing all forms of CUDA memory.

Yeah, I agree. While page-locked memory isn't exclusive to cudart, it does make things easier.

> I do agree with @coreylowman that the current work seems to be a lot like the unsafe API.

The result module is a thin wrapper around sys, which is why it has that similarity.

@psmyth94 (Contributor Author)

Had to step away from this for a bit. I removed the safe API and made changes to pass all the checks. Should be good to go now.

@HenrikStenbyAndresen commented Nov 9, 2024

I'm interested in this PR. Some of the runtime support for memory, streams, etc. would be a really nice quality-of-life improvement.

Edit: I can't see any page-locked host memory allocation. Is that on purpose or an oversight?

@zachcp commented Dec 12, 2024

+1 to the ability to bind the runtime API. It would definitely simplify distribution.

@coreylowman merged commit d8b0656 into coreylowman:main on Dec 17, 2024