"cublas status: not initialized" error during multi-threaded GPU inference with the C-API library #5669
Labels: User (used to mark user questions)
QingshuChen changed the title on Nov 16, 2017 from ""cublas status: not initialized" error during inference with the C-API library" to ""cublas status: not initialized" error during multi-threaded GPU inference with the C-API library".
@Xreki:
We need to add a paddle_init_cuda interface to the inference API.

@QingshuChen I created PR #5773 to fix it. Please help check it. Thanks!
Using the CUDA 8.0 libpaddle_capi_shared.so downloaded from the website with a fully connected network, the following error appears during forward. What causes this, and how can it be fixed?
The failing code in hl_matrix_mul is:
```c
stat = CUBLAS_GEMM(t_resource.handle,
                   CUBLAS_OP_N,
                   CUBLAS_OP_N,
                   dimN, dimM, dimK,
                   &alpha,
                   B_d, ldb,
                   A_d, lda,
                   &beta,
                   C_d, ldc);
```
Here t_resource.handle is a null pointer: t_resource is a thread_local variable, and it was never initialized in the worker thread that calls forward.