I think I understand. You are suggesting training the model on a GPU and then moving it to the CPU to make predictions.
This is already possible (I did it with the CPU version trained on SQuAD), but it is not very transparent. After training the reader on a GPU, you only need to run:
qa_pipeline.reader.model.to('cpu')
But we could also implement a method inside QAPipeline that does this, say to_cpu(), as sketched below.
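A minimal sketch of what such a method could look like, assuming QAPipeline exposes a `reader` whose `model` is a PyTorch module; the `device` attribute used to keep the reader in sync is an assumption, not something confirmed above:

```python
import torch

class QAPipeline:
    # ... existing __init__, fit, predict, etc. ...

    def to_cpu(self):
        """Move the reader's model to CPU so predictions can run without a GPU."""
        self.reader.model.to('cpu')
        # If the reader tracks the device it predicts on, keep it in sync
        # (this attribute is assumed, not taken from the original code).
        if hasattr(self.reader, 'device'):
            self.reader.device = torch.device('cpu')
        return self
```

Usage would then be a single call after training, e.g. `qa_pipeline.to_cpu()`, instead of reaching into `qa_pipeline.reader.model` directly.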
Originally posted by @andrelmfarias in #133 (comment)