
Unreasonable WER obtained by MMD training in CMatch #3

Open
suuuuju opened this issue May 12, 2023 · 1 comment

Comments

suuuuju commented May 12, 2023

I ran into a problem during CMatch ASR training. Training with train.yaml, I obtained a 22% word error rate (WER) on libriadapt_en_us_clean_matrix. I then used the model.loss.best checkpoint saved from that run as the value of the load_pretrained_model parameter and ran MMD domain adaptation training with libriadapt_en_us_clean_pseye as the target data. After 39 epochs I got train loss 310.2868, dev loss 302.119, test loss 223.363, and a test WER of 147%.

suuuuju (Author) commented May 12, 2023

I made two changes in the code.

First, decoding failed with an error that model.loss.best could not be found. I could not find any code that saves model.loss.best, so I added it right after the line that saves snapshot.ep.{epoch}; see the code below for details.

Second, during MMD training, loading the pretrained model raised:

File "/root/model/NeuralSpeech/CMatchASR/utils.py", line 60, in load_pretrained_model
    model.load_state_dict(dst_state)
RuntimeError: Error(s) in loading state_dict for UDASpeechTransformer: Unexpected key(s) in state_dict: "model", "optimizer".

So I changed it to model.load_state_dict(dst_state, strict=False).
What else do I need to check?

save_path = f"{args.outdir}/model.loss.best"  # path specified in the source code
torch_save(model, f"{args.outdir}/snapshot.ep.{epoch}", optimizer=optimizer)

# Added: save model.loss.best whenever the mean test loss reaches a new minimum
mean_loss = sum(test_stats['loss_lst']) / len(test_stats['loss_lst'])
if mean_loss <= min(test_losses):
    torch_save(model, save_path, optimizer=optimizer)
    early_stop = 0
test_losses.append(mean_loss)
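One possible cause of the 147% WER: `strict=False` silently skips every key that does not match, so if the whole checkpoint dict (with top-level `"model"` and `"optimizer"` keys) is passed to `load_state_dict`, no weights load at all and adaptation starts from random initialization. The error message suggests the snapshot is a wrapper dict rather than a bare state_dict, so unwrapping it first keeps strict checking. A minimal sketch, assuming `torch_save` stores `{"model": ..., "optimizer": ...}` as the traceback implies (`load_pretrained_model` here is a hypothetical stand-in for the helper in utils.py, not its actual signature):

```python
import torch


def load_pretrained_model(model, snapshot_path):
    """Load weights from a snapshot that may wrap them under a "model" key."""
    ckpt = torch.load(snapshot_path, map_location="cpu")
    # Unwrap the nested state_dict instead of passing strict=False,
    # which would silently drop every mismatched weight.
    state = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt
    # Strict by default: a genuine architecture mismatch now raises loudly
    # instead of producing a randomly initialized model.
    model.load_state_dict(state)
```

If keys still mismatch after unwrapping (e.g. a `module.` prefix from DataParallel), that points to a real architecture or naming difference worth fixing rather than suppressing with `strict=False`.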
