
SpectroGAN

Express emotion through images

This repository contains the code for SpectroGAN, the final project for USC EE599 (Deep Learning for Engineers), Spring 2020.

GitHub link: https://github.com/hegde95/GAN-for-speech-spectrogram

Objective

The main objective is to apply style transfer to speech spectrograms in order to change the emotion conveyed in the speech.

Recent studies have successfully shown how style transfer can map images from one domain to another. In this project we apply the same technique to embed emotions in spectrogram images. The end goal is to show that speech recorded with the connotation of one emotion can be converted to another emotion without changing the content/information conveyed in the speech.

Methodology

-- Data set:

For this project we chose the RAVDESS data set. It contains lexically matched statements in a neutral North American accent, spoken with the emotions anger, calm, disgust, fearful, happy, neutral, sad, and surprised. The cleaned and re-arranged data can be found here. For this project, we chose to convert audio from "calm" to the other emotions. The entire set of npz files can be found at these links (a short download sketch follows the list):

calm2surprised - https://drive.google.com/uc?id=15HlogMsEX9juzL1j7HqweDQv9F5tJFuG

calm2sad - https://drive.google.com/uc?id=15HlO9YvZjMtbcEiXajfE9uqmrVS0Unep

calm2happy - https://drive.google.com/uc?id=153PIrQEk_agKiUOP5cujrVyGjnxDqKhd

calm2fearful - https://drive.google.com/uc?id=14scuVs2nlNH29DIWecrNrCcwNAVR0orG

calm2disgust - https://drive.google.com/uc?id=14s7kWrDQP61X9QXYDV-W3W4YIukJs_55

calm2anger - https://drive.google.com/uc?id=14q4aZseMCQO_xbbmX-JRsbRSlGX9bB3E

fearful2surprised - https://drive.google.com/uc?id=167zknyKgV5r8qO_fLbbFLT1A76WwTWiL
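
Below is a minimal sketch of fetching and inspecting one of these archives with the gdown package. The download call is standard gdown usage; the array names inside each npz file are not documented here, so the sketch prints them rather than assuming any particular key.

```python
# Download one paired-emotion archive and list its contents.
# Requires: pip install gdown numpy
import gdown
import numpy as np

url = "https://drive.google.com/uc?id=15HlogMsEX9juzL1j7HqweDQv9F5tJFuG"  # calm2surprised
out = "calm2surprised.npz"
gdown.download(url, out, quiet=False)

# The key names depend on how the archive was saved, so inspect
# them before indexing into the file.
data = np.load(out)
for name in data.files:
    print(name, data[name].shape)
```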


-- Data Conversion:

The source and target data in this project are .wav files, but our GANs work on images, so each audio clip must be converted to a spectrogram and back.

  1. Audio to Image: To convert the audio to spectrograms, we sampled the audio at 16000 Hz and performed an STFT with an FFT length of 512 and a hop length of 256. The source audio files were also trimmed so as to obtain a spectrogram of size 257 X 257. This image was padded with zeros to get a 260 X 260 array, which is the input and output of our GAN.
  2. Image to Audio: To convert the generated spectrograms back to audio, we applied the Griffin-Lim algorithm to the clipped image, making sure that the FFT length and hop length used in the inverse STFT were the same as before. (A code sketch of this round trip follows.)
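
The round trip can be reproduced with librosa using the parameters above (16 kHz sampling, FFT length 512, hop length 256). This is an illustrative sketch, not the project's exact conversion script; the file names are placeholders.

```python
# Round trip: .wav -> 260 x 260 spectrogram -> .wav, using the STFT
# parameters described above. A sketch with librosa/soundfile, not the
# project's exact code. Only the magnitude is kept, which discards
# phase; Griffin-Lim estimates the phase on the way back.
import librosa
import numpy as np
import soundfile as sf

SR, N_FFT, HOP = 16000, 512, 256

# Audio to image: trim so the STFT has exactly 257 frames; a
# 512-point FFT gives 257 frequency bins.
y, _ = librosa.load("input.wav", sr=SR)
y = y[: 256 * HOP]                                           # 65536 samples -> 257 frames
mag = np.abs(librosa.stft(y, n_fft=N_FFT, hop_length=HOP))   # shape (257, 257)
spec = np.pad(mag, ((0, 3), (0, 3)))                         # zero-pad to (260, 260)

# Image to audio: clip the padding off, then invert the magnitude
# spectrogram with Griffin-Lim at the same hop length (the FFT
# length, 512, is inferred from the 257 frequency bins).
mag_out = spec[:257, :257]
y_out = librosa.griffinlim(mag_out, n_iter=60, hop_length=HOP)
sf.write("output.wav", y_out, SR)
```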

-- CycleGANs:

For our project we attempted to implement a CycleGAN, as this architecture has been shown to perform well on style-transfer tasks. Also, to be independent of input size (and therefore of FFT length), we use a PatchGAN model for our discriminator network. This code was based on this link.
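
For reference, here is a minimal PatchGAN-style discriminator in PyTorch. It is a sketch of the general architecture, not our exact network; the channel widths and layer count are assumptions.

```python
# PatchGAN discriminator sketch (PyTorch). Because the network is
# fully convolutional, it emits a grid of per-patch real/fake scores
# instead of a single scalar, so it accepts spectrograms of any size.
# Channel widths and depth here are illustrative assumptions.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()

        def block(c_in, c_out, stride):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1),
                nn.InstanceNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            block(64, 128, stride=2),
            block(128, 256, stride=2),
            block(256, 512, stride=1),
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),  # patch scores
        )

    def forward(self, x):
        return self.net(x)

# A 1-channel 260 x 260 spectrogram yields a 30 x 30 grid of scores:
d = PatchDiscriminator()
print(d(torch.randn(1, 1, 260, 260)).shape)  # torch.Size([1, 1, 30, 30])
```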

Documentation

Here is the link to our presentation

Here is a link to our report

Here is a video showing a demo: https://www.youtube-nocookie.com/embed/qP9sjOJIR-4

Results

Below are a couple of input and output audio files with the corresponding spectrograms, tested with different models. (Click on an image to hear the audio.)

Audio from same data set:

The following are results for 3, 6, and 9 ResNet blocks in the transformer, trained for 100 epochs:

Emotion "Dogs are sitting by the door" (3) "Dogs are sitting by the door" (6) "Dogs are sitting by the door" (9)
Neutral (Original) Input Neutral speech Input Calm speech Input Calm speech
Angry Output Angry speech Output Angry speech Output Angry speech

We found that 3 ResNet blocks performed poorly, with neither noticeable emotion transfer nor faithful reconstruction of the original audio. The model with 6 ResNet blocks performed better, with satisfactory emotion transfer and reconstruction. Although 9 ResNet blocks gave very good results in terms of emotion transfer, the reconstructed audio suffered from noise, and the model was also computationally expensive. Hence we decided to proceed with 6 ResNet blocks, a good compromise between style transfer, noise, and computational cost.

The following are results for 260 X 260 and 520 X 520 spectrograms, trained for 100 epochs:

Emotion "Dogs are sitting by the door" (260 X 260) "Dogs are sitting by the door" (520 X 520)
Calm (Original) Input Neutral speech Input Neutral speech
Angry Output Angry speech Output Angry speech

It is evident that the model which generated 260 X 260 spectrograms reconstructed audio better than the 520 X 520 model. This finding also helped us reduce computation in further experiments.

After the above experimentation, we found that the models showed peak performance at the epoch counts denoted next to each emotion below. These are the results of applying emotion transfer to the same audio files:

Emotion "Kids are talking by the door" "Dogs are sitting by the door" "Dogs are sitting by the door" "Dogs are sitting by the door"
Calm (Original) Input Calm speech Input Calm speech Input Calm speech Input Calm speech
Surprised(30000) Output Surprised speech Output Surprised speech Output Surprised speech Output Surprised speech
Fearful(30000) Output Fearful speech Output Fearful speech Output Fearful speech Output Fearful speech
Anger(30000) Output Angry speech Output Angry speech Output Angry speech Output Angry speech
Disgust(30000) Output Disgust speech Output Disgust speech Output Disgust speech Output Disgust speech
Happy(30000) Output Happy speech Output Happy speech Output Happy speech Output Happy speech
Sad(30000) Output Sad speech Output Sad speech Output Sad speech Output Sad speech

From the above table we see that two conversions, calm to fearful and calm to surprised, give the best emotion transfer. There were also noticeable characteristic changes in the harmonic structure of the input speech.

Unseen audio from same data set:

A few audio files from the data set were held back for testing and optimizing our model. The spectrograms shown below were generated for these unseen input audio files.

Emotion "Kids are talking by the door" "Dogs are sitting by the door"
Calm (Original) Input Calm speech Input Calm speech
Angry Output Angry speech Output Angry speech
Fearful Output Fearful speech Output Fearful speech

Same script by unseen actor:

The spectrograms below show the performance of the model on an audio file of the same script, spoken by an actor not in the data set. The model performs well even on this unseen data.

Emotion "Dogs are sitting by the door"
Calm (Original) Input Calm speech
Angry Output Angry speech
Fearful Output Fearful speech

Lexically similar script by unseen actor:

The following results demonstrate the ability of our model to transfer emotions on audio clips of unseen actors speaking lexically similar sentences.

Emotion "This project is fun" "Three plus one equals four"
Calm (Original) Input Calm speech Input Calm speech
Angry Output Angry speech Output Angry speech
Fearful Output Fearful speech Output Fearful speech

Different language by unseen actor:

We also experimented with audio clips of unseen actors speaking in different languages (Hindi and Kannada). The model did not produce sufficient style transfer on these clips. However, it was still able to reconstruct the audio of an unseen language without much noise.

Emotion "Gaadi waala aya ghar se kachra nikal" "Konegu project mugithu"
Calm (Original) Input Calm speech Input Calm speech
Angry Output Angry speech Output Angry speech
Fearful Output Fearful speech Output Fearful speech

Contributors

Ashwin Telagimathada Ravi - https://www.linkedin.com/in/ashwin-tr/
