Checkpoint path: synthesizer\saved_models\mandarin\mandarin.pt
Loading training data from: I:\视频剪辑\配音人家\数据处理\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Using device: cuda
Loading weights at synthesizer\saved_models\mandarin\mandarin.pt
Traceback (most recent call last):
File "D:\MockingBird-main\synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "D:\MockingBird-main\synthesizer\train.py", line 121, in train
model.load(weights_fpath, device, optimizer)
File "D:\MockingBird-main\synthesizer\models\base.py", line 51, in load
self.load_state_dict(checkpoint["model_state"], strict=False)
File "C:\Users\zhuhero\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Tacotron:
size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([75, 512]) from checkpoint, the shape in current model is torch.Size([70, 512]).
#37
What's the '_characters' for your new model 'my_run8_25k.pt'?
It seems to use a new character set.
RuntimeError: Error(s) in loading state_dict for Tacotron:
size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([75, 512]) from checkpoint, the shape in current model is torch.Size([70, 512]).
size mismatch for gst.stl.attention.W_query.weight: copying a param with shape torch.Size([512, 256]) from checkpoint, the shape in current model is torch.Size([512, 512]).
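Worth noting: `strict=False` in `load_state_dict` only tolerates missing or unexpected keys; it still raises on shape mismatches, which is why the call in `base.py` fails anyway. A minimal sketch of filtering out mismatched tensors before loading, using a bare `nn.Embedding` as a stand-in for the encoder embedding (the names here are illustrative, not MockingBird's actual code):

```python
import torch
import torch.nn as nn

# Stand-in for the current model: 70 symbols, 512-dim embeddings.
model = nn.Embedding(70, 512)

# Simulated checkpoint state trained with 75 symbols.
ckpt_state = {"weight": torch.randn(75, 512)}

# strict=False does NOT skip shape mismatches, so drop them explicitly.
own_state = model.state_dict()
compatible = {k: v for k, v in ckpt_state.items()
              if k in own_state and v.shape == own_state[k].shape}
skipped = sorted(set(ckpt_state) - set(compatible))

model.load_state_dict(compatible, strict=False)
print("skipped due to shape mismatch:", skipped)
```

Keep in mind that any skipped tensor stays randomly initialized, so this only gets training to start; the real fix is making `_characters` in `utils\symbols.py` match the character set the checkpoint was trained with.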
Checkpoint path: synthesizer\saved_models\mandarin\mandarin.pt
Loading training data from: I:\视频剪辑\配音人家\数据处理\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Using device: cuda
Initialising Tacotron Model...
WARNING: you are using compatible mode due to wrong symbols length, please modify variable _characters in
utils\symbols.py
Loading the json with %s
{'sample_rate': 16000, 'n_fft': 800, 'num_mels': 80, 'hop_size': 200, 'win_size': 800, 'fmin': 55, 'min_level_db': -100, 'ref_level_db': 20, 'max_abs_value': 4.0, 'preemphasis': 0.97, 'preemphasize': True, 'tts_embed_dims': 512, 'tts_encoder_dims': 256, 'tts_decoder_dims': 128, 'tts_postnet_dims': 512, 'tts_encoder_K': 5, 'tts_lstm_dims': 1024, 'tts_postnet_K': 5, 'tts_num_highways': 4, 'tts_dropout': 0.5, 'tts_cleaner_names': ['basic_cleaners'], 'tts_stop_threshold': -3.4, 'tts_schedule': [[2, 0.001, 10000, 12], [2, 0.0005, 15000, 12], [2, 0.0002, 20000, 12], [2, 0.0001, 30000, 12], [2, 5e-05, 40000, 12], [2, 1e-05, 60000, 12], [2, 5e-06, 160000, 12], [2, 3e-06, 320000, 12], [2, 1e-06, 640000, 12]], 'tts_clip_grad_norm': 1.0, 'tts_eval_interval': 500, 'tts_eval_num_samples': 1, 'tts_finetune_layers': [], 'max_mel_frames': 900, 'rescale': True, 'rescaling_max': 0.9, 'synthesis_batch_size': 16, 'signal_normalization': True, 'power': 1.5, 'griffin_lim_iters': 60, 'fmax': 7600, 'allow_clipping_in_normalization': True, 'clip_mels_length': True, 'use_lws': False, 'symmetric_mels': True, 'trim_silence': True, 'speaker_embedding_size': 256, 'silence_min_duration_split': 0.4, 'utterance_min_duration': 1.6, 'use_gst': True, 'use_ser_for_gst': True}
Trainable Parameters: 0.000M
Loading weights at synthesizer\saved_models\mandarin\mandarin.pt
Traceback (most recent call last):
File "D:\MockingBird-main\synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "D:\MockingBird-main\synthesizer\train.py", line 121, in train
model.load(weights_fpath, device, optimizer)
File "D:\MockingBird-main\synthesizer\models\base.py", line 51, in load
self.load_state_dict(checkpoint["model_state"], strict=False)
File "C:\Users\zhuhero\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Tacotron:
size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([75, 512]) from checkpoint, the shape in current model is torch.Size([70, 512]).
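The mismatch (75 vs. 70 rows in `encoder.embedding.weight`) means the `mandarin.pt` checkpoint was trained with a 75-entry `_characters`, while the local `utils\symbols.py` defines only 70 symbols. You can read the expected vocabulary size straight out of the checkpoint before editing `_characters`. A sketch below uses an in-memory dummy checkpoint with the shape from the traceback; in practice you would `torch.load` the real path from the log instead:

```python
import io
import torch

# In practice:
#   ckpt = torch.load(r"synthesizer\saved_models\mandarin\mandarin.pt", map_location="cpu")
# Here a dummy checkpoint stands in for the real file.
buf = io.BytesIO()
torch.save({"model_state": {"encoder.embedding.weight": torch.zeros(75, 512)}}, buf)
buf.seek(0)

ckpt = torch.load(buf, map_location="cpu")
num_symbols = ckpt["model_state"]["encoder.embedding.weight"].shape[0]
print("checkpoint expects", num_symbols, "symbols")
```

The local `_characters` string must contain exactly that many entries (in the same order as when the checkpoint was trained) for the embedding shapes to line up.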