Yet another PyTorch implementation of Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. This project is highly based on the works listed in the references below, with some modifications to improve the speed and performance of both training and inference.

Requirements:
- Python >= 3.5.2
- torch >= 1.0.0
- numpy
- scipy
- pillow
- inflect
- librosa
- Unidecode
- matplotlib
- tensorboardX
Currently only the LJ Speech dataset is supported. You can modify `hparams.py` for different sampling rates. In `hparams.py`, `prep` decides whether to preprocess all utterances before training or to preprocess them on the fly, and `pth` specifies the path where the preprocessed data are stored.
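As a rough illustration, the relevant entries in `hparams.py` look something like the sketch below; apart from `prep` and `pth`, the attribute names and values here are assumptions, so check the actual file for the authoritative settings.

```python
# Sketch of the preprocessing-related entries in hparams.py.
# `prep` and `pth` are described above; `sample_rate` is assumed here
# (LJ Speech audio is recorded at 22.05 kHz).
class HParams:
    sample_rate = 22050   # change this when training on other datasets
    prep = True           # True: preprocess all utterances before training
                          # False: preprocess utterances on the fly
    pth = 'features/'     # hypothetical path for storing preprocessed data
```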
- For training Tacotron2, run the following command.

```bash
python3 train.py \
  --data_dir=<dir/to/dataset> \
  --ckpt_dir=<dir/to/models>
```
- If you have multiple GPUs, try `distributed.launch`.

```bash
python -m torch.distributed.launch --nproc_per_node <NUM_GPUS> train.py \
  --data_dir=<dir/to/dataset> \
  --ckpt_dir=<dir/to/models>
```

Note that the training batch size will become `<NUM_GPUS>` times larger.
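A rough sketch of the boilerplate a script launched this way typically runs (this mirrors the standard `torch.distributed.launch` pattern, not necessarily this repo's exact `train.py`):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# torch.distributed.launch starts one process per GPU and exposes the
# process's rank (newer launchers set the LOCAL_RANK environment variable).
local_rank = int(os.environ.get('LOCAL_RANK', 0))
torch.cuda.set_device(local_rank)
dist.init_process_group(backend='nccl')

model = torch.nn.Linear(80, 80).cuda()       # stand-in for the Tacotron2 model
model = DDP(model, device_ids=[local_rank])
# Each process consumes its own batch, which is why the effective batch
# size grows by a factor of <NUM_GPUS>.
```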
- For training using a pretrained model, run the following command.

```bash
python3 train.py \
  --data_dir=<dir/to/dataset> \
  --ckpt_dir=<dir/to/models> \
  --ckpt_pth=<pth/to/pretrained/model>
```
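Under the hood, `--ckpt_pth` amounts to restoring the saved state dicts before training continues; a minimal sketch, assuming common checkpoint key names (`model`, `optimizer`, `iteration`) that may not match this repo's exact format:

```python
import torch

model = torch.nn.Linear(80, 80)                  # stand-in for Tacotron2
optimizer = torch.optim.Adam(model.parameters())

# Hypothetical resume logic; the checkpoint keys are assumptions.
ckpt = torch.load('pth/to/pretrained/model', map_location='cpu')
model.load_state_dict(ckpt['model'])
optimizer.load_state_dict(ckpt['optimizer'])
start_iteration = ckpt.get('iteration', 0)       # resume counting from here
```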
- For using Tensorboard (optional), run the following command.

```bash
python3 train.py \
  --data_dir=<dir/to/dataset> \
  --ckpt_dir=<dir/to/models> \
  --log_dir=<dir/to/logs>
```
You can find alignment images and synthesized audio clips during training. The text to synthesize can be set in `hparams.py`.
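These artifacts are the kind of things `tensorboardX` logs directly; a hedged sketch of how such logging is commonly wired up (tags, shapes, and values below are dummies, not what `train.py` actually emits):

```python
import numpy as np
from tensorboardX import SummaryWriter

writer = SummaryWriter('dir/to/logs')

# Dummy attention alignment logged as a grayscale image (decoder steps x
# encoder steps); real values would come from the model's attention weights.
alignment = np.random.rand(100, 200)
writer.add_image('alignment', alignment, global_step=1000, dataformats='HW')

# Dummy one-second clip at 22.05 kHz; values are expected in [-1, 1].
audio = np.random.uniform(-1, 1, (1, 22050)).astype(np.float32)
writer.add_audio('synthesized', audio, global_step=1000, sample_rate=22050)
writer.close()
```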
- For synthesizing wav files, run the following command.

```bash
python3 inference.py \
  --ckpt_pth=<pth/to/model> \
  --img_pth=<pth/to/save/alignment> \
  --npy_pth=<pth/to/save/mel> \
  --wav_pth=<pth/to/save/wav> \
  --text=<text/to/synthesize>
```
You can download pretrained models from Releases. The hyperparameters used for training are also included there. All models were trained using 8 GPUs.
A vocoder is not implemented, but the model is compatible with WaveGlow and HiFi-GAN. Check the Colab demo for more information.
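For instance, the mel spectrogram saved via `--npy_pth` can be vocoded with NVIDIA's published WaveGlow torch.hub entry point; the sketch below assumes this repo's mel format matches what that WaveGlow expects, which is not guaranteed, so verify the normalization before trusting the output.

```python
import numpy as np
import torch
from scipy.io.wavfile import write

# Load the mel spectrogram saved by inference.py (--npy_pth);
# 'mel.npy' is a placeholder filename.
mel = torch.from_numpy(np.load('mel.npy')).float()
if mel.dim() == 2:                         # (n_mels, T) -> (1, n_mels, T)
    mel = mel.unsqueeze(0)

# NVIDIA's pretrained WaveGlow from torch.hub; compatibility with this
# repo's mel scaling is an assumption.
waveglow = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                          'nvidia_waveglow')
waveglow = waveglow.remove_weightnorm(waveglow).cuda().eval()

with torch.no_grad():
    audio = waveglow.infer(mel.cuda())

write('out.wav', 22050, audio[0].cpu().numpy())  # LJ Speech sample rate
```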
This project is highly based on the works below.