How to use FP16 model precision for inference? #1835
jerin-scalers-ai started this conversation in General
Replies: 2 comments 5 replies
- `fp16` is `True` by default. See |
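A minimal sketch of that default behaviour, assuming the `openai-whisper` package, a local `audio.mp3` file, and a CUDA-capable GPU (on CPU, whisper warns and falls back to FP32):

```python
import whisper

# Load a Whisper model; the checkpoint weights themselves are stored in FP32.
model = whisper.load_model("base")

# transcribe() forwards decoding options; fp16 defaults to True, so the
# computation runs in half precision on GPU. Passing it explicitly just
# makes the intent visible.
result = model.transcribe("audio.mp3", fp16=True)
print(result["text"])
```

Note this controls the precision of the decoding computation; it does not convert the stored weights to FP16.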
- For truly float16 (half-precision weights), use other interfaces like |
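To illustrate what running "truly float16" costs in precision, here is a small standalone sketch (Python standard library only, unrelated to the whisper API) that round-trips values through IEEE 754 half precision using `struct`'s `'e'` format code:

```python
import struct

def roundtrip_fp16(x: float) -> float:
    # Pack x into IEEE 754 half precision (format code 'e') and unpack it
    # again, exposing the rounding that FP16 storage applies.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 has an 11-bit significand, so 0.1 is stored only approximately,
# and integers above 2048 are no longer exactly representable.
print(roundtrip_fp16(0.1))     # ~0.0999755859375
print(roundtrip_fp16(2049.0))  # rounds to 2048.0
```

This is the trade-off behind FP16 inference: roughly half the memory and bandwidth of FP32, at the cost of about three decimal digits of precision.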
- I believe the default model precision of Whisper models is FP32, so how can I use an FP16 model with the openai-whisper package?