Spawn Start Method? #20
I got the same issue with streaming; normal operation works just fine:

(xtts) ➜ python -m xtts_api_server -hs 0.0.0.0 -v 2.0.3 -d cuda:1

but with streaming mode it fails:

(xtts) ➜ python -m xtts_api_server -hs 0.0.0.0 -v 2.0.3 -d cuda:1 --streaming-mode

The improved streaming mode yields the same issues.
Guys, I honestly have no idea what the problem could be. I'm using the RealtimeTTS library, and I've run into this myself in Colab but never found a way to fix it.
I spent some time looking into this today. In seeking out places where CUDA usage might span multiple processes, I found a spot in the RealtimeTTS code where it creates a pipe and forks a process to do the synthesis part. I'm guessing that's the part CUDA is unhappy with. If so, possible solutions would be to confine all CUDA calls to synthesize_process, switch to the "spawn" start method, or eliminate the subprocess altogether. All would require changes to the RealtimeTTS code.
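For anyone following along, here is a minimal, self-contained sketch of that idea (not the actual RealtimeTTS code; synthesize_worker and the tiny model are stand-ins): keep every CUDA call inside the child process and start that child with the "spawn" method instead of the default fork.

```python
import multiprocessing as mp

def synthesize_worker(conn, device):
    # Stand-in for the real synthesis worker: CUDA is touched only in this child process.
    import torch
    dev = device if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(4, 4).to(dev)          # placeholder for loading the TTS model
    while True:
        text = conn.recv()
        if text is None:                           # shutdown signal from the parent
            break
        with torch.no_grad():
            audio = model(torch.randn(1, 4, device=dev))
        conn.send(audio.cpu().numpy())             # send plain data back, never CUDA tensors

if __name__ == "__main__":
    ctx = mp.get_context("spawn")                  # "spawn" instead of fork
    parent_conn, child_conn = ctx.Pipe()
    proc = ctx.Process(target=synthesize_worker, args=(child_conn, "cuda:0"), daemon=True)
    proc.start()
    parent_conn.send("hello")                      # pretend this is a sentence to synthesize
    print(parent_conn.recv().shape)                # (1, 4) placeholder "audio"
    parent_conn.send(None)
    proc.join()
```

With spawn the child starts a fresh interpreter, so it never inherits the parent's CUDA context, which is exactly what fork trips over.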
Oh neat, I just learned that pytorch offers torch.multiprocessing. I switched the engine's process creation over to it, and then made the corresponding change inside the worker. Those two changes got realtime working for me, although I haven't tested it extensively.
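To make the shape of such a swap concrete, here is a simplified sketch under my own assumptions (it is not the actual diff or PR; the worker is hypothetical):

```python
import torch
import torch.multiprocessing as mp   # drop-in replacement for the stdlib multiprocessing module

def worker(conn):
    # The model / CUDA context would be created only here, inside the spawned child.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    conn.send(str(torch.zeros(1, device=device).device))

if __name__ == "__main__":
    # Force "spawn" so the worker does not inherit CUDA state through fork.
    mp.set_start_method("spawn", force=True)
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=worker, args=(child_conn,))
    proc.start()
    print(parent_conn.recv())         # e.g. "cuda:0" when a GPU is available
    proc.join()
```

torch.multiprocessing mirrors the stdlib API, so Process/Pipe code mostly carries over unchanged; it also registers reducers for sharing tensors between processes if that is ever needed.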
I submitted a RealtimeTTS PR with essentially the same fix mentioned above.
Great, thanks for the hint and for the PR in RealtimeTTS. Since I'm using a copy of that project here, I've already fixed my own copy and released the update.
I'm going to close this one now. Your continued efforts and maintenance are truly appreciated. |
Hello, do you have time to take a look at this one below?
~/xtts$ python -m xtts_api_server -d cuda:1 --streaming-mode-improve
/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
2023-12-13 16:58:24.724 | WARNING | xtts_api_server.server::53 - 'Streaming Mode' has certain limitations, you can read about them here https://github.com/daswer123/xtts-api-server#about-streaming-mode
2023-12-13 16:58:24.724 | INFO | xtts_api_server.server::56 - You launched an improved version of streaming, this version features an improved tokenizer and more context when processing sentences, which can be good for complex languages like Chinese
2023-12-13 16:58:24.724 | INFO | xtts_api_server.server::58 - Load model for Streaming
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/jay/miniconda3/envs/xtts/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/jay/miniconda3/envs/xtts/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/xtts_api_server/RealtimeTTS/engines/coqui_engine.py", line 304, in _synthesize_worker
    logging.exception(f"Error initializing main coqui engine model: {e}")
  File "/home/jay/miniconda3/envs/xtts/lib/python3.10/logging/__init__.py", line 2113, in exception
    error(msg, *args, exc_info=exc_info, **kwargs)
  File "/home/jay/miniconda3/envs/xtts/lib/python3.10/logging/__init__.py", line 2105, in error
    root.error(msg, *args, **kwargs)
  File "/home/jay/miniconda3/envs/xtts/lib/python3.10/logging/__init__.py", line 1506, in error
    self._log(ERROR, msg, args, **kwargs)
TypeError: Log._log() got an unexpected keyword argument 'exc_info'