
cudaPackages: wrapCCWith without overrideCC results in linking the wrong libstdc++ #225493

Open
SomeoneSerge opened this issue Apr 9, 2023 · 2 comments


SomeoneSerge commented Apr 9, 2023

Describe the bug

The context, copied pretty much as-is from Matrix: cudaPackages.cuda_nvcc builds CUDA programs and, behind the scenes, uses gcc for that. Which gcc is used is controlled by the CMAKE_CUDA_HOST_COMPILER variable.

If we just set CMAKE_CUDA_HOST_COMPILER to cudaPackages.cudatoolkit.cc (which currently equals wrapCCWith { cc = gcc11Stdenv.cc.cc; libcxx = gcc12Stdenv.cc.cc.lib; }), the build still ends up linking gcc11's libstdc++. That is explicitly not what we are trying to do, and it results in broken builds.

If we instead make a customStdenv = overrideCC stdenv (wrapCCWith { ... }) (currently cudaPackages.backendStdenv, to be renamed to cudaStdenv) and use its mkDerivation to build the CUDA program, it correctly links gcc12's libstdc++.

This is unexpected, because the actual host compiler (e.g. the one used for .cc and .cpp files) in the derivation is still gcc12. Gcc11 is only visible to the build through the aforementioned CMAKE_CUDA_HOST_COMPILER, and it is wrapped anyway.
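The two setups can be sketched roughly as follows. This is a hedged illustration, not the actual nixpkgs code: `wrapCCWith` and `overrideCC` are real nixpkgs functions, but the surrounding names (`nvccHostCC`, the derivation attributes) are assumptions based on the description above.

```nix
# Sketch only, assuming the wrapCCWith call described above.
{ pkgs }:
let
  # A compiler wrapper pairing gcc11 as the compiler with gcc12's libstdc++.
  nvccHostCC = pkgs.wrapCCWith {
    cc = pkgs.gcc11Stdenv.cc.cc;
    libcxx = pkgs.gcc12Stdenv.cc.cc.lib;
  };

  # Case 2: make that wrapper the *default* compiler of an stdenv
  # (roughly what cudaPackages.backendStdenv does).
  cudaStdenv = pkgs.overrideCC pkgs.stdenv nvccHostCC;
in
{
  # Case 1: plain stdenv; the wrapper only reaches the build via CMake.
  # This is the variant that ends up linking gcc11's libstdc++.
  case1 = pkgs.stdenv.mkDerivation {
    name = "cuda-example-case1";
    CMAKE_CUDA_HOST_COMPILER = "${nvccHostCC}/bin/g++";
  };

  # Case 2: build with the custom stdenv; links gcc12's libstdc++.
  case2 = cudaStdenv.mkDerivation { name = "cuda-example-case2"; };
}
```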

Steps To Reproduce

Another version of the same example: https://gist.github.com/SomeoneSerge/cc47a75885f8b9ea267456675e115cfe#file-vanilla-log-L59

Expected behavior

Both builds link gcc12's libstdc++ and succeed.


Additional context

wrapCCWith practically just adds the chosen libstdc++ to propagatedBuildInputs, so we may be abusing the function.
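If that reading is right, the effect should be visible from a `nix repl` session along these lines. This is a hedged sketch: the attribute name `propagatedBuildInputs` on the wrapper derivation is my assumption, not verified against cc-wrapper.

```nix
# Hypothetical nix repl transcript illustrating the claim above;
# attribute names are assumptions and may differ in nixpkgs.
nix-repl> pkgs = import <nixpkgs> { }
nix-repl> cc = pkgs.wrapCCWith {
            cc = pkgs.gcc11Stdenv.cc.cc;
            libcxx = pkgs.gcc12Stdenv.cc.cc.lib;
          }
nix-repl> cc.propagatedBuildInputs  # expected to mention gcc12's lib output
```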

Notify maintainers

@NixOS/cuda-maintainers


FRidh commented Apr 10, 2023

In case 1), do you add the wrapped CC to your build inputs? It seems like you then get two cc's in your environment (one via stdenv, the other as an explicit dependency), and the one you wanted is second in the list. Did you check with nix-shell or nix develop how the environment variables look?
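For what it's worth, one way to do that inspection might be the following; this assumes flakes are enabled, and the flake attribute is taken from the reproduction flake linked in this thread.

```shell
# Assumption: flakes enabled; attribute name from the repro flake.
nix develop github:SomeoneSerge/nix-findcudatoolkit-cmake#use-cudatoolkit-root-wrong-stdenv \
  --command sh -c 'echo "CMAKE_CUDA_HOST_COMPILER=$CMAKE_CUDA_HOST_COMPILER"; type -p gcc g++'
```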

I haven't worked much with custom stdenv so I don't think I can provide much input here.

@SomeoneSerge

> In case 1), you add the wrapped CC to your build inputs?

At least not explicitly, no. I do add cuda_nvcc to nativeBuildInputs, and it has a setup hook which sets CMAKE_CUDA_HOST_COMPILER.

E.g.

❯ nix show-derivation --derivation github:SomeoneSerge/nix-findcudatoolkit-cmake#use-cudatoolkit-root-wrong-stdenv | rg '[Bb]uildInputs'
      "buildInputs": "/nix/store/4yg62g4gn66s9603rzdqnjv39lk8z2n4-cppzmq-4.9.0 /nix/store/g426pr0njmk1yinm24wjg0n9vck24ccj-zeromq-4.3.4",
      "nativeBuildInputs": "/nix/store/hs7nplvi947aqhqgja8v44iq64a0zhdk-auto-add-opengl-runpath-hook /nix/store/qlz2nfb0p7lbm22nqf9i9p35fz5z1xcj-cmake-3.25.3 /nix/store/nhsagnmqxvwcwlcvmlhljxxa0j0bw142-cuda_nvcc-11.7.64",
      "propagatedBuildInputs": "",
      "propagatedNativeBuildInputs": "",
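Such a setup hook might look roughly like this. This is a hypothetical sketch, not the actual hook shipped in cudaPackages.cuda_nvcc; the name `nvccSetupHook` and the `ccFullPath` variable are illustrative assumptions.

```shell
# Hypothetical sketch of a setup hook that points CMake's CUDA host
# compiler at the wrapped gcc instead of the stdenv default. The real
# hook in cudaPackages may differ.
nvccSetupHook() {
  # In a real hook, ccFullPath would be substituted with the store path
  # of the wrapped compiler at build time; we fall back to a plain
  # "g++" here purely for illustration.
  export CMAKE_CUDA_HOST_COMPILER="${ccFullPath:-g++}"
}
nvccSetupHook
echo "CMAKE_CUDA_HOST_COMPILER=$CMAKE_CUDA_HOST_COMPILER"
```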

Projects
Status: 🔮 Roadmap