
v2.0.0

@drfeinberg drfeinberg released this 15 Jun 17:32

VoiceLab

Automated Reproducible Acoustical Analysis
Voice Lab is automated voice-analysis software. It lets you measure, manipulate, and visualize many voices at once without hand-tuning analysis parameters, and it saves all of your data, analysis parameters, manipulated voices, and full-colour spectrograms and power spectra at the press of a button.

Version 2.0.0

License

VoiceLab is licensed under the MIT license. See the LICENSE file for more information.

Cite Voicelab

If you use VoiceLab in your research, please cite it:

  • Feinberg, D. (2022). VoiceLab: Software for Fully Reproducible Automated Voice Analysis. Proc. Interspeech 2022, 351-355.
  • Feinberg, D. R., & Cook, O. (2020). VoiceLab: Automated Reproducible Acoustic Analysis. PsyArXiv.

Installation instructions:

  • Install from pip using Python 3.9-3.11
    • pip install voicelab
  • To install on Windows, download the .exe file from the releases page.
    • Run the voicelab.exe file
  • To install on macOS, download the .zip file from the releases page.
    • Unzip the file and run the VoiceLab app
  • To install on Ubuntu (standalone):
    • Download the voicelab executable from the releases page
    • Make it executable: chmod +x voicelab
    • Run it: voicelab or ./voicelab
    • If it fails to start, you may need to install a dependency: sudo apt-get install libxcb-xinerama0

Changes from 1.3.1 to 2.0

New Features

  • MeasureAlphaNode measures the alpha ratio

    • The alpha ratio is the ratio of low-frequency energy to high-frequency energy in the spectrum
    • I wrote it from scratch in NumPy
  • Pitch-corrected RMS Energy (Voice Sauce) (see Bug fixes below)

  • Pitch-corrected Equivalent Continuous Sound Level (Leq)

  • New viewport window for LPC spectra
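The alpha ratio described above can be sketched with a few lines of NumPy. This is an illustrative sketch, not VoiceLab's actual implementation: the band edges (`low_hz`, `split_hz`, `high_hz`) are common conventions I have assumed, not values taken from the VoiceLab source.

```python
import numpy as np

def alpha_ratio(signal, sr, low_hz=50.0, split_hz=1000.0, high_hz=5000.0):
    """Ratio of low-band to high-band spectral energy, in dB.

    Band edges are illustrative assumptions, not VoiceLab's exact values.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)    # bin frequencies in Hz
    low = power[(freqs >= low_hz) & (freqs < split_hz)].sum()
    high = power[(freqs >= split_hz) & (freqs <= high_hz)].sum()
    return 10.0 * np.log10(low / high)

# Example: a strong 200 Hz tone plus a weak 2 kHz tone.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200 * t) + 0.1 * np.sin(2 * np.pi * 2000 * t)
print(alpha_ratio(x, sr))  # positive dB: low band dominates
```

A voice with more high-frequency energy (e.g. a breathier or brighter source) would yield a lower, possibly negative, value.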

Bug fixes

  • Major bugfix affecting all users of Energy in VoiceLab and Voice Sauce

The Voice Sauce documentation states that it calculates RMS energy, but the source code instead calculates the total energy in each pitch-dependent frame. The Energy value that VoiceLab inherited from Voice Sauce was therefore not scaled for window length, and so was not pitch-independent. Why does this matter?

Lower-pitched voices have longer wavelengths, and therefore more energy per frame, than higher-pitched voices. Voice Sauce tries to correct for this by setting the window length to a few pitch periods. But because it sums the energy in each frame and never divides by the number of samples (the "mean" step of an RMS calculation), no pitch correction actually happens at the frame level. If you then take the mean or RMS of Voice Sauce's Energy output, you are dividing total energy by the number of frames in the sound. Higher-pitched sounds have shorter wavelengths, so more of them fit into a fixed time period; if all of your sounds are exactly the same length, this division happens to correct for pitch. That correction is not automatic, though: longer sounds also have more frames, so in general the measure is confounded with duration.

To fix this I have implemented an RMS calculation at every frame, as described in the Voice Sauce manual. The values are now much closer to those given by Praat; the remaining differences come from the pitch-dependent frame length. I have removed the old mean-energy calculation, and if you use RMS energy as a single value, it is now the RMS across all frames. If you want the old calculation, it is available in all of the older versions of VoiceLab.
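The difference between the two calculations can be sketched with fixed-length frames (the real algorithm uses pitch-dependent frame lengths; this is a simplified illustration, not VoiceLab's code):

```python
import numpy as np

def frame_total_energy(x, frame_len):
    """Old behaviour: sum of squared samples per frame (grows with frame length)."""
    n = len(x) // frame_len
    frames = x[: n * frame_len].reshape(n, frame_len)
    return (frames ** 2).sum(axis=1)

def frame_rms(x, frame_len):
    """Fixed behaviour: root-mean-square per frame (independent of frame length)."""
    n = len(x) // frame_len
    frames = x[: n * frame_len].reshape(n, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100 * t)  # constant-amplitude sine

# A frame of "two pitch periods" is 320 samples at 100 Hz but only 160 at 200 Hz.
# Doubling the frame length doubles the total energy, while RMS is unchanged:
print(frame_total_energy(x, 320).mean())  # ~160: twice the 160-sample value
print(frame_rms(x, 320).mean())           # ~0.707 regardless of frame length
```

This is why summed per-frame energy depends on pitch (and duration), whereas per-frame RMS reflects only amplitude.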

If you have published results using this algorithm in Voice Sauce or in older versions of VoiceLab, or plan to, I recommend re-running your Energy measurements and using the new values if Energy is critical to your findings.

  • Fixed spectrograms and spectra
    • You can now see them in the boxes and you can expand them

API is no longer supported until further notice

If you clone the GitHub repo and look in the tests, you can see how to use the API; however, it is not supported at this time. I did, however, update the example documentation. I have also started writing a test suite, so you can see how to prepare nodes by modifying that code.

Contact

David Feinberg: feinberg@mcmaster.ca

Documentation

https://voice-lab.github.io/VoiceLab