Standalone executables of OpenAI's Whisper & Faster-Whisper for those who don't want to bother with Python.
Executables are compatible with Windows 7 x64 and above.
Meant to be used in command-line interface or Subtitle Edit.
Faster-Whisper is much faster than OpenAI's Whisper, and it requires less RAM/VRAM.
whisper-faster.exe "D:\videofile.mkv" --language=English --model=medium
whisper-faster.exe --help
Run your command-line interface as Administrator.
Don't copy the programs to Windows system folders!
The programs automatically run on the GPU if CUDA is detected.
For decent transcription quality, use a model no smaller than medium.
A guide on running command-line programs: https://www.youtube.com/watch?v=A3nwRCV-bTU
Examples of how to batch-process multiple files: Purfview#29
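As a minimal sketch of batch processing, a `for` loop in the Windows command prompt can feed every matching file to the executable (this assumes whisper-faster.exe is in the current folder or on PATH):

```
:: Transcribe every .mkv file in the current folder, one after another.
for %f in (*.mkv) do whisper-faster.exe "%f" --language=English --model=medium
```

Note: inside a .bat script the loop variable must use doubled percent signs (`%%f`); the single `%f` form is for interactive use.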
By default, subtitles are created in the current folder.
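To write subtitles somewhere else, the original Whisper CLI accepts an `--output_dir` option, and this build appears to mirror it; treat the flag name as an assumption and confirm with `--help`:

```
:: Hypothetical example: write the subtitle file to D:\Subs instead of the current folder.
whisper-faster.exe "D:\videofile.mkv" --language=English --model=medium --output_dir="D:\Subs"
```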
Needs 'FFmpeg.exe' in PATH, or copy it to Whisper's folder. [Subtitle Edit downloads FFmpeg automatically.]
Some defaults are tweaked for movie transcription and for portability.
Shows a progress bar in the title bar of the command-line interface. [Or it can be printed with -pp.]
By default it looks for models in the same folder, in a path like this -> _models\faster-whisper-medium.
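As an illustration, the expected on-disk layout next to the executable would look roughly like this (the file names follow the usual faster-whisper model format and may differ slightly per model; verify against what is actually downloaded):

```
whisper-faster.exe
_models\
    faster-whisper-medium\
        model.bin
        config.json
        tokenizer.json
        vocabulary.txt
```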
Models are downloaded automatically or can be downloaded manually from: https://huggingface.co/guillaumekln
large is mapped to the large-v2 model.
beam_size=1: can speed up transcription by ~40%. [In my tests it had an insignificant impact on accuracy.]
compute_type: test different types to find the fastest for your hardware. [Use --verbose to see all supported types.]
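To compare compute types, one approach is to time the same short clip once per supported type, e.g. (type names such as int8 and float16 come from CTranslate2 and vary by device; the flag spelling `--compute_type` is assumed from the parameter name, so check `--help`):

```
:: Run the same clip with two candidate types and compare the timings
:: reported with --verbose; keep whichever type is fastest on your hardware.
whisper-faster.exe "D:\videofile.mkv" --model=medium --verbose --compute_type=int8
whisper-faster.exe "D:\videofile.mkv" --model=medium --verbose --compute_type=float16
```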