The problem I encountered during CMatchASR training: training with train.yaml gave a word error rate of 22% on libriadapt_en_us_clean_matrix. I then used the model.loss.best checkpoint saved at that point as the value of the load_pretrained_model parameter, and ran MMD domain adaptation training with libriadapt_en_us_clean_pseye as the target data. After 39 epochs of training I got train loss 310.2868, dev loss 302.119, test loss 223.363, and a test WER of 147.
I made some changes in the code. When decoding, an error occurred saying that model.loss.best could not be found. I did not find any code that saves model.loss.best, so I added it right after the line that saves snapshot.ep.{epoch}; see the code below for details.
Another change: during MMD training, the following error occurred:

File "/root/model/NeuralSpeech/CMatchASR/utils.py", line 60, in load_pretrained_model
    model.load_state_dict(dst_state)
RuntimeError: Error(s) in loading state_dict for UDASpeechTransformer: Unexpected key(s) in state_dict: "model", "optimizer".

So I changed the call to model.load_state_dict(dst_state, strict=False).
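One thing worth checking about that workaround: the "Unexpected key(s): 'model', 'optimizer'" message suggests the snapshot file holds a wrapper dict rather than a bare model state_dict. In that case strict=False makes the call succeed while loading no weights at all, leaving the adaptation model randomly initialized, which could by itself explain the very high WER. A minimal sketch of unwrapping the checkpoint instead; the {"model": ..., "optimizer": ...} layout is an assumption inferred from the error message, and the toy nn.Linear stands in for UDASpeechTransformer:

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for UDASpeechTransformer.
model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters())

# Mimic the assumed snapshot format written by torch_save(model, path, optimizer=optimizer):
torch.save({"model": model.state_dict(),
            "optimizer": optimizer.state_dict()}, "snapshot.pt")

# Loading this dict directly into the model triggers
# "Unexpected key(s) in state_dict: 'model', 'optimizer'";
# strict=False would suppress the error but load NO weights at all.
ckpt = torch.load("snapshot.pt", map_location="cpu")
state = ckpt["model"] if "model" in ckpt else ckpt  # unwrap the model entry

fresh = nn.Linear(4, 2)       # freshly initialized copy
fresh.load_state_dict(state)  # strict=True now succeeds

# The unwrapped load restored the original weights exactly.
assert torch.equal(fresh.weight, model.weight)
assert torch.equal(fresh.bias, model.bias)
```

If the repository's torch_save really does nest the states this way, the matching fix belongs in load_pretrained_model in utils.py rather than at the call site.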
What else do I need to check?
save_path = f"{args.outdir}/model.loss.best"  # specified in the source code
torch_save(model, f"{args.outdir}/snapshot.ep.{epoch}", optimizer=optimizer)
# save model.loss.best when the mean test loss improves
test_loss = sum(test_stats['loss_lst']) / len(test_stats['loss_lst'])
if test_loss <= min(test_losses):
    torch_save(model, save_path, optimizer=optimizer)
    early_stop = 0
test_losses.append(test_loss)
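One caveat about the saving snippet above: if test_losses starts out empty, min(test_losses) raises a ValueError on the first epoch, and the current loss is compared before it is appended, so the first checkpoint may never be written. A self-contained sketch that tracks a running minimum instead; the variable names and the dummy per-epoch losses are illustrative, not from the repository:

```python
import math

best_loss = math.inf  # running minimum over epochs, safe on epoch 1
early_stop = 0

def mean(xs):
    return sum(xs) / len(xs)

# Dummy per-epoch loss lists standing in for test_stats['loss_lst'].
for epoch, loss_lst in enumerate([[3.0, 2.0], [1.5, 2.5], [2.0, 3.0]], 1):
    test_loss = mean(loss_lst)
    if test_loss <= best_loss:
        best_loss = test_loss
        early_stop = 0
        # torch_save(model, save_path, optimizer=optimizer)  # save best here
        print(f"epoch {epoch}: new best loss {test_loss:.3f}")
    else:
        early_stop += 1
```

With the dummy losses above, the model would be saved after epochs 1 and 2 (means 2.5 and 2.0) and skipped after epoch 3.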