
v0.4.10

@KoljaB released this 07 Nov 14:17
  • new stream2sentence version 0.2.7
    • bugfix for #5 (whitespace between words was sometimes lost)
    • upgrade to the latest NLTK and Stanza versions, including the new "punkt_tab" model
    • allow offline environments for Stanza
    • added support for async streams (preparation for async in RealtimeTTS); see the first sketch after this list
  • dependency upgrades to the latest versions (Coqui TTS 0.24.2 ➡️ 0.24.3, elevenlabs 1.11.0 ➡️ 1.12.1, openai 1.52.2 ➡️ 1.54.3)
  • added load_balancing parameter to the Coqui engine; see the second sketch after this list
    • on a fast machine with a real-time factor well below 1, inference runs far faster than necessary
    • with this parameter the engine synthesizes at a real-time factor closer to 1, so you still get streaming voice inference, but GPU load drops to the minimum needed to produce chunks in real time
    • if you run LLM inference in parallel, it now gets faster because TTS puts less load on the GPU
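
A minimal sketch of feeding an async token stream into the new async support, assuming the async entry point in stream2sentence 0.2.7 is named `generate_sentences_async` (check the package docs for the exact name):

```python
# Sketch only: generate_sentences_async is an assumed name for the async API.
import asyncio
from stream2sentence import generate_sentences_async  # assumed entry point

async def llm_token_stream():
    # Stand-in for an async LLM token stream (hypothetical data).
    for token in ["Hello ", "world. ", "This ", "is ", "a ", "test. "]:
        yield token
        await asyncio.sleep(0.05)

async def main():
    # Sentences are yielded as soon as they are complete.
    async for sentence in generate_sentences_async(llm_token_stream()):
        print(sentence)  # hand each sentence to the TTS engine here

asyncio.run(main())
```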
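
And a minimal sketch of the new `load_balancing` option; only the parameter name itself is confirmed by these notes, the surrounding RealtimeTTS usage (CoquiEngine, TextToAudioStream) follows the library's usual pattern:

```python
# Sketch only: load_balancing=True throttles synthesis toward a
# real-time factor of ~1 instead of running the GPU flat out.
from RealtimeTTS import TextToAudioStream, CoquiEngine

def text_feed():
    yield "With load balancing enabled, the GPU only works as hard "
    yield "as needed to keep the audio chunks coming in real time."

if __name__ == "__main__":
    engine = CoquiEngine(load_balancing=True)
    stream = TextToAudioStream(engine)
    stream.feed(text_feed()).play()
    engine.shutdown()
```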