We experiment with the LJSpeech dataset. Download and unzip LJSpeech.
```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```
Assume the path to the dataset is `~/datasets/LJSpeech-1.1`.
Assume the path to the Tacotron2 generated mels is `../tts0/output/test`.
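For example, you can sanity-check both locations before launching the recipe; the paths below simply follow the assumptions above and may differ on your machine.
```bash
# Quick check that the assumed dataset and mel directories exist.
ls ~/datasets/LJSpeech-1.1/metadata.csv
ls ../tts0/output/test
```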
Run the command below to
- source path.
- preprocess the dataset.
- train the model.
- synthesize wavs from mels.

```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage. For example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
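Similarly, you can run a consecutive range of stages by giving different `--stage` and `--stop-stage` values. The mapping of stage numbers to steps is defined in `run.sh`, so the numbers below are only an illustration.
```bash
# Illustration only: run stages 1 through 2 consecutively
# (check run.sh for what each stage number actually does).
./run.sh --stage 1 --stop-stage 2
```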
`./local/preprocess.sh` preprocesses the dataset.
```bash
./local/preprocess.sh ${preprocess_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${preprocess_path} ${train_output_path}
```
The training script takes the following command-line arguments.
- `--data` is the path of the training dataset.
- `--output` is the path of the output directory.
- `--ngpu` is the number of gpus to use; if `ngpu == 0`, use cpu.

If you want distributed training, set a larger `--ngpu` (e.g. 4). Note that distributed training with cpu is not supported yet.
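As a concrete illustration, the underlying training script could be invoked directly with the arguments above. This is only a sketch: the directory names `preprocessed_ljspeech` and `exp/waveflow` are placeholders, not repository defaults.
```bash
# Sketch: direct call to the training script with the arguments described above.
# The --data and --output paths are placeholder names.
python3 ${BIN_DIR}/train.py \
    --data=preprocessed_ljspeech \
    --output=exp/waveflow \
    --ngpu=1
```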
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from mels.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${input_mel_path} ${train_output_path} ${ckpt_name}
```
Synthesize waveform.
- We assume the `--input` is a directory containing several mel spectrograms (log magnitude) in `.npy` format.
- The output would be saved in the `--output` directory, containing several `.wav` files, each with the same name as the mel spectrogram does.
- `--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extension name `.pdparams` is not included here.
- `--ngpu` is the number of gpus to use; if `ngpu == 0`, use cpu.
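For illustration, the underlying synthesis script might be called directly as below. The input directory follows the Tacotron2 assumption above; the output directory and checkpoint path are placeholders, and the `.pdparams` extension is omitted from `--checkpoint_path` as described above.
```bash
# Sketch: direct call with the arguments described above. The output directory
# and checkpoint path are placeholders, not repository defaults.
python3 ${BIN_DIR}/synthesize.py \
    --input=../tts0/output/test \
    --output=output/wavs \
    --checkpoint_path=exp/waveflow/checkpoints/step-2000000 \
    --ngpu=1
```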
A pretrained model with residual channels equal to 128 can be downloaded here: waveflow_ljspeech_ckpt_0.3.zip.
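A minimal sketch of using the downloaded checkpoint, assuming the zip unpacks into a directory containing a `.pdparams` parameter file (the exact layout and file names inside the archive may differ):
```bash
# Sketch: unpack the pretrained checkpoint and inspect its contents.
# The unpacked directory name is an assumption about the archive layout.
unzip waveflow_ljspeech_ckpt_0.3.zip
ls waveflow_ljspeech_ckpt_0.3/
# Pass the contained parameter file (without the .pdparams extension)
# as --checkpoint_path when synthesizing.
```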