Blenderbot + GPU #3682
-
Hi there. Sorry if this has an answer somewhere else, but I've not been able to track it down. I'm playing with Blenderbot as per the docs (https://parl.ai/projects/recipes/) via interactive.py. As expected, the larger models are quite slow, and I'm trying to work out whether my GPU is being used. Just wondering: do I need to do anything to make interactive.py use the GPU for the Blender models, or will it use it by default without further action? Do I perhaps need to configure PyTorch globally to use the GPU? I watch the GPU usage while I talk to the bot and I'm not seeing it spike when it's thinking. Thanks, Lee
Replies: 1 comment, 3 replies
-
You should make sure you have PyTorch installed with GPU support. The GPU will then be used by default unless the --no-cuda option is passed. If you run "nvidia-smi" while the model is loaded, you should see a large amount of GPU memory in use. On a reasonably modern system, response times for the 2.7B model should be around 2 s with a GPU and 12 s without.
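Before digging into ParlAI flags, it's worth confirming that your PyTorch build can see the GPU at all. A minimal sketch (assuming a standard PyTorch install; if this prints False, you likely have the CPU-only wheel and no ParlAI option will help):

```python
import torch

# True only if PyTorch was built with CUDA support AND a driver/GPU is visible.
available = torch.cuda.is_available()
print(f"CUDA available: {available}")

if available:
    # Name of the first visible device, e.g. "NVIDIA GeForce RTX 3080".
    print(f"Device: {torch.cuda.get_device_name(0)}")
```

If this prints False, reinstall PyTorch with a CUDA-enabled build for your driver version, then re-run interactive.py and check nvidia-smi again.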