Texts with multiple line spacings are voiced with NVDA + down arrow and voices crack #11061
Comments
Hi. Which synthesizer is in use here? I can confirm a small crackle when using eSpeak, but I'm not sure whether continuous reading of long text can cause this in eSpeak under normal circumstances, since I don't use it regularly.
The synthesizer is eSpeak.
I can reproduce this only with eSpeak, and even then it depends on which variant you are using. For example, with the variant "Quincy" there is much less crackling than with the variant "Robert". This issue might also be caused by the Sonic library used by eSpeak as the underlying driver; reporting it to the developer of Sonic might help as well.
As much as I tried, I had no luck breaking Eloquence, Windows OneCore, or eSpeak set to Persian (which is based on British English), which, IMO, means good luck this time around.
This may be improved by #11024, which has a build you can test with. Note that this build still has some issues; the PR is still only a draft.
OK, I will report this issue to the eSpeak project and write the result here.
Hi NVDA team, I hope you are well. I have reported the issue to the eSpeak project; the issue is here.
This might also be related to #7769.
Simpler STR:
Note that this doesn't happen with 2019.2.1.
Hi @tspivey, I finished the last test. The sound cracks in 2020.2. Unfortunately, I often have to run 2019.2 as a portable copy. And excuse my English :) sound, audio, voice...
Could you please test with the latest alpha version, which was released today?
Test results: My example still breaks.
cc @jcsteh, maybe you have some thoughts on this?
This is related to the indexes (or marks) sent to the synthesiser for cursor tracking, accurate synchronisation of sounds and synth changes, etc. Say all uses these to mark each line for cursor tracking. However, it did this in 2019.2 as well. There are a couple of possibilities:
1. eSpeak has a bug in the way it handles these indexes which causes the glitch.
2. NVDA changed how (or how often) it sends indexes to eSpeak somewhere after 2019.2.
It's also possible it's a combination of 1 and 2; i.e. this isn't an eSpeak bug, but we are sending two indexes very close together for some reason and that causes a buffer underrun. That raises the question of why we're sending indexes so close together. Either way, I think it's worth looking at the markup being sent to eSpeak.
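To make the indexing idea above concrete, here is a rough, self-contained sketch of a say-all style sequence with one index per line. The IndexCommand class and buildSayAllSequence function are simplified stand-ins made up for illustration; they are not NVDA's actual speech framework.

```python
from dataclasses import dataclass
from typing import List, Union


@dataclass
class IndexCommand:
    """Simplified stand-in for a synth index/mark used for cursor tracking."""
    index: int


SpeechSequence = List[Union[str, IndexCommand]]


def buildSayAllSequence(lines: List[str]) -> SpeechSequence:
    """Mark each line with an index so the cursor can follow the speech."""
    sequence: SpeechSequence = []
    for number, line in enumerate(lines):
        sequence.append(IndexCommand(number))
        sequence.append(line)
    return sequence


if __name__ == "__main__":
    # A blank line yields two indexes with almost no audio between them,
    # which is the "two indexes very close together" case described above.
    text = [
        "First paragraph of the document.",
        "",
        "Second paragraph, after a blank line.",
    ]
    for item in buildSayAllSequence(text):
        print(item)
```

If the synthesiser (or the audio callback handling discussed later in this thread) struggles when two such marks arrive back to back, a short or empty line is exactly where a buffer underrun would show up.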
Hi, not that I’m opposed to this (we had a long discussion about Cython a while back), but I think it would be better to focus on one thing at a time (mental health takes priority). Thanks.
Actually, NVDA uses only a single background thread for synths and audio. I very much doubt the GIL is the bottleneck here. Of course, if you can prove otherwise, that's good info to have and solutions can then be considered... but let's work out the root cause before diving into solutions that may well not fix the problem.
I'm not sure about that. When I test with Python's performance profiler, I can see a 30-40 ms delay for some functions. I will look into it further and let you know once I move to the city.
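For reference, a minimal sketch of how such a per-function timing could be captured with Python's standard cProfile and pstats modules; processSpeech is a made-up placeholder, not an actual NVDA function.

```python
import cProfile
import io
import pstats
import time


def processSpeech():
    # Made-up placeholder standing in for whichever NVDA function is being measured.
    time.sleep(0.035)  # simulate roughly 35 ms of work


profiler = cProfile.Profile()
profiler.enable()
for _ in range(10):
    processSpeech()
profiler.disable()

# Report cumulative time per function, slowest first.
output = io.StringIO()
pstats.Stats(profiler, stream=output).sort_stats("cumulative").print_stats(10)
print(output.getvalue())
```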
NVDA's existing audio output code (nvwave) is largely very old and uses WinMM, a very old legacy Windows audio API. It is also written in pure Python, contains quite a few threading locks necessitated by WinMM, and parts of it have become rather difficult to reason about. There are several known stability and audio glitching issues that are difficult to solve with the existing code.

Description of user facing changes

At the very least, this fixes audio glitches at the end of some utterances as described in #10185 and #11061. I haven't noticed a significant improvement in responsiveness on my system, but my system is also very powerful. It's hard to know whether the stability issues (e.g. #11169) are fixed or not. Time will tell as I run with this more.

Description of development approach

1. The bulk of the WASAPI implementation is written in C++. The WASAPI interfaces are easy to access in C++ and difficult to access in Python. In addition, this allows for the best possible performance, given that we regularly and continually stream audio data.
2. The WinMM code fired callbacks by waiting for the previous chunk to finish playing before sending the next chunk, which could result in buffer underruns (glitches) if callbacks were close together (#10185 and #11061). In contrast, the WASAPI code uses the audio playback clock to fire callbacks independent of data buffering, eliminating glitches caused by callbacks.
3. The WinMM WavePlayer class is renamed to WinmmWavePlayer. The WASAPI version is called WasapiWavePlayer. Rather than having a common base class, this relies on duck typing. I figured it didn't make sense to have a base class given that WasapiWavePlayer will likely replace WinmmWavePlayer altogether at some point.
4. WavePlayer is set to one of these two classes during initialisation based on a new advanced configuration setting. WASAPI defaults to disabled.
5. WasapiWavePlayer.feed can take a ctypes pointer and size instead of a Python bytes object. This avoids the overhead of additional memory copying and Python objects in cases where we are given a direct pointer to memory anyway, which is true for most (if not all) speech synthesisers.
6. For compatibility, WinmmWavePlayer.feed supports a ctypes pointer as well, but it just converts it to a Python bytes object.
7. eSpeak and oneCore have been updated to pass a ctypes pointer to WavePlayer.feed.
8. When playWaveFile is used asynchronously, it now feeds audio on the background thread, rather than calling feed on the current thread. This is necessary because the WASAPI code blocks once the buffer (400 ms) is full, rather than having variable sized buffers. Even with the WinMM code, playWaveFile could block for a short time (#10413). This should improve that also.
9. WasapiWavePlayer supports associating a stream with a specific audio session, which allows that session to be separately configurable in the system Volume Mixer. NVDA tones and wave files have been split into a separate "NVDA sounds" session. WinmmWavePlayer has a new setSessionVolume method that can be used to set the volume of a session. This at least partially addresses #1409.
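As an illustration of points 3 to 6 above (duck-typed players chosen at initialisation, with feed accepting either bytes or a raw pointer), here is a minimal, self-contained sketch. The class and function names, the useWasapi flag, and the print-based "write" are all assumptions for demonstration; they do not mirror NVDA's real nvwave code.

```python
import ctypes
from typing import Optional, Union


class WasapiWavePlayerSketch:
    """Illustrative player that could hand a raw pointer straight to native code."""

    def feed(self, data: Union[bytes, ctypes.c_void_p], size: Optional[int] = None) -> None:
        if isinstance(data, bytes):
            self._write(data, len(data))
        else:
            # Pointer path: no extra copy into a Python bytes object is needed.
            self._write(data, size)

    def _write(self, data, size: int) -> None:
        print(f"WASAPI sketch: writing {size} bytes")


class WinmmWavePlayerSketch:
    """Illustrative legacy player that only understands bytes objects."""

    def feed(self, data: Union[bytes, ctypes.c_void_p], size: Optional[int] = None) -> None:
        if not isinstance(data, bytes):
            # Compatibility shim: copy the pointed-to memory into a bytes object.
            data = ctypes.string_at(data, size)
        print(f"WinMM sketch: writing {len(data)} bytes")


def makePlayer(useWasapi: bool):
    # Duck typing: callers only rely on the shared feed() signature.
    return WasapiWavePlayerSketch() if useWasapi else WinmmWavePlayerSketch()


if __name__ == "__main__":
    buf = ctypes.create_string_buffer(b"\x00\x01\x02\x03", 4)
    ptr = ctypes.cast(buf, ctypes.c_void_p)
    for flag in (True, False):
        player = makePlayer(flag)
        player.feed(ptr, size=ctypes.sizeof(buf))
        player.feed(b"\x00\x01\x02\x03")
```

The real implementation described in the PR does the heavy lifting in C++ and blocks once its 400 ms buffer is full; this toy version does not attempt to model either of those behaviours.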
Reopening since WASAPI is not enabled by default anymore (#15172).
Reintroduces #14697. Closes #10185. Closes #11061. Closes #11615.

Summary of the issue

WASAPI usage should be re-enabled by default on alpha so wider testing can occur.

Description of user facing changes

WASAPI is re-enabled; refer to #14697 for the benefits.

Description of development approach

Change the feature flag default value to enabled.
Steps to reproduce:
Actual behavior:
Voices crack in some places.
Expected behavior:
System configuration
NVDA installed/portable/running from source:
installed
NVDA version:
2019.3.1
Windows version:
Windows 10 Pro, 64-bit, build 18363.778
Name and version of other software in use when reproducing the issue:
Other information about your system:
Other questions
Does the issue still occur after restarting your computer?
Yes
Have you tried any other versions of NVDA? If so, please report their behaviors.
Tested with versions after 2019.3.1; the sound still cracks.
If add-ons are disabled, is your problem still occurring?
Yes
Did you try to run the COM registry fixing tool in NVDA menu / tools?
Yes