After testing on my M1 Mac Mini, I moved over to my gaming PC, where I believe PyTorch can see CUDA and should use the GPU to transcribe faster. But for a much longer test of a ~2 hour file, I am not seeing either the GPU or the CPU max out: GPU usage is in the single digits, CPU is maybe 10%. I understand it's working through the file in sliding 30-second windows, but I'd expect to see something light up to 100% on my PC while doing all this. Neither the GPU nor the CPU seems very taxed during transcription. I'm invoking it in Python like the example, like so:
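Something along these lines, following the README example (the model size and file name here are just placeholders):

```python
import whisper

model = whisper.load_model("medium")            # picks up CUDA automatically if available
result = model.transcribe("two_hour_episode.mp3")
print(result["text"])
```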
Replies: 1 comment 2 replies
I think "GPU utilization" on Windows only measures graphics work. Looking at its power consumption etc. in a hardware monitor is much more helpful to see if it's "doing something".
Each window's output is used as context for the next, and within a window, after the encoder "listens" to the whole segment, the decoder predicts character probabilities one at a time, based on the characters selected before. So a lot of the work can't be parallelized, short of transcribing multiple recordings in parallel. One thing that is parallelized within windows is keeping a list of …
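If you have several recordings queued up, one way to keep the GPU busier is to transcribe them in separate processes, each with its own model copy, assuming there is enough VRAM for that; a rough sketch (paths and model size are placeholders):

```python
from concurrent.futures import ProcessPoolExecutor

import whisper

FILES = ["ep1.mp3", "ep2.mp3", "ep3.mp3"]  # placeholder paths


def transcribe_one(path: str) -> str:
    # Each worker loads its own model copy onto the GPU, so this only
    # pays off if VRAM allows a few copies side by side.
    model = whisper.load_model("base", device="cuda")
    return model.transcribe(path)["text"]


if __name__ == "__main__":  # the guard is required for multiprocessing on Windows
    with ProcessPoolExecutor(max_workers=2) as pool:
        for path, text in zip(FILES, pool.map(transcribe_one, FILES)):
            print(path, "->", text[:80])
```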