bugfix for #5 (whitespace between words was sometimes lost)
upgrade to latest NLTK and Stanza versions, including the new "punkt_tab" model
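
If sentence splitting complains about missing tokenizer data after the upgrade, the new model can be fetched once via the standard NLTK downloader (a minimal sketch; newer NLTK releases ship the punkt data as the separate "punkt_tab" package):

```python
import nltk

# NLTK 3.9+ moved the punkt tokenizer data into a new "punkt_tab" package;
# download it once so sentence tokenization works after the upgrade
nltk.download("punkt_tab")
```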
allow offline environments for Stanza
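
For reference, this is how Stanza itself supports offline use: fetch the models once while online, then build the pipeline with download_method=None so no network lookups are attempted (a minimal sketch of the Stanza API; how RealtimeTTS wires this up internally may differ):

```python
import stanza

# one-time, while online: fetch the English models to ~/stanza_resources
stanza.download("en")

# later, in the offline environment: skip all network lookups
nlp = stanza.Pipeline("en", processors="tokenize", download_method=None)
print([s.text for s in nlp("Hello world. Offline tokenization works.").sentences])
```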
add support for async streams (preparation for async support in RealtimeTTS)
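
A minimal sketch of what this enables, assuming feed() can now consume an async generator (the exact entry point may differ; SystemEngine and play() are part of the existing API):

```python
import asyncio
from RealtimeTTS import TextToAudioStream, SystemEngine

async def token_stream():
    # stand-in for tokens arriving from an async LLM client
    for token in ["Hello ", "from ", "an ", "async ", "stream."]:
        yield token
        await asyncio.sleep(0.05)

async def main():
    stream = TextToAudioStream(SystemEngine())
    stream.feed(token_stream())           # assumption: feed() accepts an async generator
    await asyncio.to_thread(stream.play)  # keep the event loop free while audio plays

asyncio.run(main())
```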
dependency upgrades to latest versions (coqui-tts 0.24.2 ➡️ 0.24.3, elevenlabs 1.11.0 ➡️ 1.12.1, openai 1.52.2 ➡️ 1.54.3)
added load_balancing parameter to CoquiEngine
on a fast machine with a realtime factor well below 1, inference runs much faster than it needs to
this parameter lets inference run at a realtime factor closer to 1: you still get streaming voice inference, but GPU load drops to the minimum needed to produce chunks in realtime
running LLM inference in parallel is now faster, because TTS takes less of the load (see the sketch below)
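
A minimal sketch of enabling this, assuming load_balancing is a boolean constructor argument on CoquiEngine (check the engine docstring for the exact name and any companion tuning parameters):

```python
from RealtimeTTS import TextToAudioStream, CoquiEngine

# load_balancing=True throttles inference toward a realtime factor of ~1,
# trading idle GPU headroom for lower load (useful when an LLM shares the GPU)
engine = CoquiEngine(load_balancing=True)

stream = TextToAudioStream(engine)
stream.feed("Streaming synthesis still works; the GPU just isn't saturated.")
stream.play()
```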