-
PyTorch should use all CPU cores by default. For CPU execution, you can try this great C++ implementation from @ggerganov. For GPU execution: it's hard to run DNN models on Radeon...
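To confirm PyTorch actually sees all your cores, here's a quick stdlib-only check. The environment variables below are read by PyTorch's OpenMP/MKL backends at import time, so they need to be set before `torch` (and hence Whisper) is imported; `torch.set_num_threads()` also exists if you'd rather adjust it after import. The values here are just a sketch, not tuning advice:

```python
import os

# How many logical cores this machine exposes to Python
cores = os.cpu_count() or 1
print(f"Logical cores visible to Python: {cores}")

# PyTorch sizes its CPU thread pools from these variables when it is
# imported, so set them before importing torch / launching Whisper.
os.environ["OMP_NUM_THREADS"] = str(cores)
os.environ["MKL_NUM_THREADS"] = str(cores)
```

If the printed core count matches what you expect from your hardware but transcription still pins only one core, the bottleneck is likely elsewhere (e.g. a single-threaded build of the BLAS backend).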
-
FYI, from the ROCm FAQ:
https://docs.amd.com/bundle/ROCm-Installation_FAQ/page/Frequently_Asked_Questions.html
-
I have Whisper running on a multicore Mac Pro (2012) under Mojave. Video card is Radeon RX 580.
As far as I understand, Whisper will only use the CPU (in FP32) for such a configuration, and will not make use of the GPU. I've used it several times with the tiny and base models and it works well. It's just very slow.
I came across this promising thread about faster CPU execution.
Is there anything else I can do to speed it up, or to take better advantage of the hardware I have? Are there any other forks that might let Whisper use the Radeon on a pre-macOS 11 Mac? Or a parallelized version that could make use of multiple cores?
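One coarse-grained way to use multiple cores, independent of any fork: split the audio into chunks (e.g. with ffmpeg) and transcribe them concurrently. A minimal sketch follows; `transcribe_chunk` is a hypothetical placeholder for a real Whisper call, and threads are used on the assumption that PyTorch's heavy ops release the GIL while they run:

```python
from concurrent.futures import ThreadPoolExecutor

def transcribe_chunk(path):
    # Hypothetical placeholder -- a real version might do something like:
    #   model = whisper.load_model("base")
    #   return model.transcribe(path, fp16=False)["text"]
    return f"[transcript of {path}]"

def transcribe_parallel(chunk_paths, workers=4):
    # Fan the pre-split chunks out to worker threads; map() preserves
    # input order, so transcripts come back in chunk order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(transcribe_chunk, chunk_paths))

print(transcribe_parallel(["chunk_00.wav", "chunk_01.wav"]))
```

Two caveats: each worker loading its own model multiplies memory use, and chunk boundaries can cut words in half, so it's common to overlap chunks slightly and stitch the transcripts afterwards.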