tch::Cuda::is_available() returns false using local libtorch 1.8 for CUDA 11.1 #329
Comments
Looks like if I unset […]
Thanks for reporting this issue; I just pushed a (hacky) fix that should hopefully help with this.
Thanks for the quick reply and fix! Ah, I see: let's hope that makes it to cargo stable soon. That fix seemed to do the trick! I am now getting […]
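The fix itself is not quoted in this thread, so for background only: a common way to keep the linker from discarding `libtorch_cuda` when no symbol from it is referenced directly is to wrap the library in `--no-as-needed` from a build script. The sketch below illustrates that general technique under the assumption that `LIBTORCH` points at the local libtorch install; it is not necessarily the change that was pushed, and `cargo:rustc-link-arg` only works on Cargo versions where that directive is stable.

```rust
// Hypothetical build.rs excerpt (not the actual torch-sys script): force
// libtorch_cuda to stay in the binary's DT_NEEDED list even though no symbol
// from it is referenced directly by the Rust code.
use std::{env, path::PathBuf};

fn main() {
    // Search path for the local libtorch; torch-sys normally sets this up,
    // it is repeated here only to keep the sketch self-contained.
    let lib_dir = PathBuf::from(env::var("LIBTORCH").expect("LIBTORCH not set")).join("lib");
    println!("cargo:rustc-link-search=native={}", lib_dir.display());
    // Keep the three directives adjacent so the final linker line reads
    // `--no-as-needed -ltorch_cuda --as-needed`: GNU ld only disables
    // as-needed behaviour for libraries that appear between the two flags.
    println!("cargo:rustc-link-arg=-Wl,--no-as-needed");
    println!("cargo:rustc-link-arg=-ltorch_cuda");
    println!("cargo:rustc-link-arg=-Wl,--as-needed");
}
```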
With the latest change, linkage fails against libtorch compiled for CUDA 10.2: […] because it doesn't have these libraries. Maybe I should switch to libtorch with CUDA 11.1. Hopefully it doesn't have the same regressions for convolutions as PyTorch 1.7.1 with CUDA 11 had.
Ah, it's a bummer that this depends on the CUDA version. Anyway, I just pushed a small tweak that only links these libs if the files are present; hopefully that gets 10.2 working.
Works like a charm, thanks!
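To illustrate the "only link if the file is present" idea: the snippet below is a hypothetical build.rs excerpt, not the actual torch-sys build script. It assumes `LIBTORCH` points at the libtorch directory and that the split CUDA 11 libraries are named `libtorch_cuda_cu.so` and `libtorch_cuda_cpp.so`, while the CUDA 10.2 build ships a single `libtorch_cuda.so`.

```rust
// Hypothetical build.rs excerpt: emit link directives only for the CUDA
// libraries whose .so files actually exist, so the same script works for both
// the CUDA 10.2 and the split CUDA 11.x distributions of libtorch.
use std::{env, path::PathBuf};

fn main() {
    let lib_dir = PathBuf::from(env::var("LIBTORCH").expect("LIBTORCH not set")).join("lib");
    println!("cargo:rustc-link-search=native={}", lib_dir.display());
    for lib in &["torch_cuda", "torch_cuda_cu", "torch_cuda_cpp"] {
        if lib_dir.join(format!("lib{}.so", lib)).exists() {
            println!("cargo:rustc-link-lib={}", lib);
        }
    }
}
```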
So now I am not entirely sure that this is working. Running the basics example works just fine, but if I try to run the reinforcement learning example, I am getting a lot of linker errors.
(then there are lots and lots of linker flags; I'll just share the first and last few of them)
EDIT: Should I make a separate issue? It seems to be correctly trying to link to libtorch_cuda_cpp.so, but is having issues doing so.
I fixed it. There must have been some issue with trying to link with […]. Sorry for the false alarm!
Glad that you got it to work; closing this issue for now, but feel free to re-open if you notice more issues.
I am on Arch Linux, Rust/cargo 1.50, and have set both `LIBTORCH` and `LD_LIBRARY_PATH` according to the README. `tch-rs` is on the latest commit to master, commit 25ac21d. I am running the example with `cargo run --example basics`. I get `false` returned by both `tch::Cuda::is_available()` and `tch::Cuda::cudnn_is_available()`.

I have triple-checked that the path for `LIBTORCH` is correct, and it must be correct as everything builds fine and libtorch was not downloaded inside of `tch-rs`. I know that CUDA is installed correctly as well, because using the `python-pytorch-cuda` package from the Arch community repo (which is on PyTorch 1.8) I can use CUDA tensors just fine and `torch.cuda.is_available()` returns `True`.

I should note that my version of CUDA is 11.2, though I haven't seen that cause any issues with PyTorch 1.8 in Python (which apparently was built for CUDA 11.1).

Any suggestions? Are others having this issue? I saw #291, but I am not building in release, so I think this is a different problem.
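For anyone reproducing this, a minimal check roughly equivalent to what the basics example reports is sketched below; it assumes a `tch` version from this era where these functions return plain values.

```rust
// Minimal CUDA visibility check for tch-rs. Build with LIBTORCH and
// LD_LIBRARY_PATH pointing at the local libtorch, then compare the output
// with torch.cuda.is_available() from the Python side.
fn main() {
    println!("CUDA available:    {}", tch::Cuda::is_available());
    println!("cuDNN available:   {}", tch::Cuda::cudnn_is_available());
    println!("CUDA device count: {}", tch::Cuda::device_count());
}
```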