--nce #20

Open
drtonyr opened this issue Jan 12, 2024 · 2 comments

@drtonyr commented Jan 12, 2024

README.md refers to option --nce, for example python main.py --cuda --noise-ratio 10 --norm-term 9 --nce --train

example/utils.py does not have --nce in setup_parser()

Result:

think0 tonyr: python main.py --nce
usage: main.py [-h] [--data DATA] [--vocab VOCAB] [--min-freq MIN_FREQ]
               [--emsize EMSIZE] [--nhid NHID] [--nlayers NLAYERS] [--lr LR]
               [--bptt BPTT] [--concat] [--weight-decay WEIGHT_DECAY]
               [--lr-decay LR_DECAY] [--clip CLIP] [--epochs EPOCHS]
               [--batch-size N] [--dropout DROPOUT] [--seed SEED] [--cuda]
               [--log-interval N] [--save SAVE] [--loss LOSS]
               [--index-module INDEX_MODULE] [--noise-ratio NOISE_RATIO]
               [--norm-term NORM_TERM] [--train] [--tb-name TB_NAME] [--prof]
main.py: error: unrecognized arguments: --nce

It looks like --nce has been replaced by --loss nce and README.md should be updated.
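If so, the README examples presumably just need --nce swapped for --loss nce, e.g. (untested):

```sh
python main.py --cuda --noise-ratio 10 --norm-term 9 --loss nce --train
```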

It's not clear that the rest of the code still works. The run below has two issues:

think7 tonyr: python main.py --cuda --noise-ratio 10 --norm-term 9 --train --index-module gru --dropout 0
...
Falling into one layer GRU due to Index_GRU supporting
/usr/lib/python3/dist-packages/torch/nn/modules/rnn.py:71: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
  warnings.warn("dropout option adds dropout after all but last "
Training PPL 0.0:  11%|██▎                  | 227/2104 [00:02<00:17, 108.01it/s]

Firstly, there is the warning from rnn.py; secondly, the perplexities are all zero.

Moving on to NCE, the reported train PPL is very low and the valid PPL is very high.

...
Training PPL 34.3: 100%|████████████████████| 2104/2104 [00:32<00:00, 64.85it/s]
| end of epoch   1 | time: 33.14s |valid ppl   895.19
...
Training PPL 9.8: 100%|█████████████████████| 2104/2104 [00:32<00:00, 64.55it/s]
| end of epoch  40 | time: 33.25s |valid ppl   187.99
| End of training | test ppl   183.84
@Stonesjtu (Owner) commented Jan 15, 2024

Hi Tony,
Thanks for reporting. Here are line-by-line replies.

It looks like --nce has been replaced by --loss nce and README.md should be updated.

Yes

firstly is the warning from rnn.py,

Here we use a single-layer RNN, so the dropout config is ignored. Should be fixed later.
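For reference, PyTorch's nn.GRU only applies dropout between stacked layers, so it warns whenever a non-zero dropout is combined with num_layers=1. The fix is roughly a guard like this (hypothetical sizes, not the exact model code):

```python
import torch.nn as nn

num_layers, dropout = 1, 0.2  # the combination reported in the warning

# nn.GRU applies dropout only between stacked layers, so it warns when
# dropout > 0 and num_layers == 1; only pass dropout when it can act.
rnn = nn.GRU(
    input_size=200,    # hypothetical sizes
    hidden_size=200,
    num_layers=num_layers,
    dropout=dropout if num_layers > 1 else 0.0,
)
```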

secondly the perplexities are all zero.

Something must be wrong

Moving on to NCE, the reported train PPL is very low, the valid PPL very high.

The loss criterion differs between NCE training and evaluation (NCE vs. cross-entropy). The training PPL is only the perplexity over the noise samples and the positive samples, so by definition it should be lower than the real perplexity over the whole vocabulary.
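A toy sketch of why the two numbers live on different scales (assumed shapes and a uniform noise distribution, not the actual NCE module in this repo):

```python
import torch
import torch.nn.functional as F

vocab_size, noise_ratio = 10000, 10
scores = torch.randn(32, vocab_size)          # unnormalised model scores s(w|c)
targets = torch.randint(vocab_size, (32,))

# Evaluation: exact cross-entropy over the whole vocabulary -> the real PPL.
eval_ppl = torch.exp(F.cross_entropy(scores, targets))

# NCE training: each target is only contrasted with `noise_ratio` sampled
# noise words, so the resulting "PPL" is on a much smaller scale.
noise = torch.randint(vocab_size, (32, noise_ratio))
log_k_pn = torch.log(torch.tensor(noise_ratio / vocab_size))   # log(k * P_noise)
pos = scores.gather(1, targets.unsqueeze(1))                   # true-word scores
neg = scores.gather(1, noise)                                  # noise-word scores
nce_loss = (-F.logsigmoid(pos - log_k_pn).mean()
            - F.logsigmoid(-(neg - log_k_pn)).sum(1).mean())
train_ppl = torch.exp(nce_loss)   # not comparable to eval_ppl
```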

Looking forward to further discussion!

@drtonyr (Author) commented Jan 15, 2024

Hey, thanks for getting back to me so quickly.

I'm not really concerned about the argparse or dropout issues. This is the best public code for NCE in language modelling that I could find, which is a great achievement.

The zero perplexities are not something I can easily look into, and they are quite a blocker for someone like me just starting with the code.

I can help with the reported PPL under NCE. Firstly, for large tasks NCE will self-normalise; that is, \sum_i exp(x_i) will be about 1. When this happens you can report an approximate standard perplexity during training (the dev/test sets are much smaller, so it's good to report exact PPL there by normalising).
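A minimal sketch of that reporting scheme (hypothetical tensors, and assuming the scores really do self-normalise):

```python
import torch
import torch.nn.functional as F

# Unnormalised scores s(w|c) from an NCE-trained model (hypothetical shapes).
scores = torch.randn(32, 10000)
targets = torch.randint(10000, (32,))
s_target = scores.gather(1, targets.unsqueeze(1))

# If the model self-normalises (\sum_i exp(s_i) ≈ 1, i.e. log Z ≈ 0), then
# log p(w|c) ≈ s(w|c), so an approximate training perplexity is simply:
approx_ppl = torch.exp(-s_target.mean())

# On dev/test, normalise explicitly and report the exact perplexity:
exact_ppl = torch.exp(F.cross_entropy(scores, targets))
```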

It has been ten years since I really got into this, I hope I haven't forgotten too much.
