
A problem reproducing the results with the same seed #6

Open
Yefeiyang-luis opened this issue May 18, 2022 · 2 comments

Comments

@Yefeiyang-luis

Hello, when I run the code I find that I can't get the same loss or F1 score with a fixed seed.
For example, with seed=0 the F1 score comes out as 43.25, 43.20, 42.89, etc. across runs.
I printed the outputs and found that in the first fine-tuning round the model produces the same BERT output and loss, but after loss.backward() the BERT output and loss in the second round change slightly.
It's strange that every time I rerun the scripts the result changes even though the seed is fixed.
Is it because the loss function is too complex, or because the model's numerical precision is not high enough?

@Sarathismg
Collaborator

I am not sure if I have understood your problem fully, but a slight run-to-run variation is expected. Please refer to: https://pytorch.org/docs/stable/notes/randomness.html

Regarding model performance: Can you be more specific about which dataset, which script, and which support/test set you are using? I think I got a similar performance in repeated runs.
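(For reference, below is a minimal sketch of the determinism settings the linked PyTorch notes describe. The helper name `set_full_determinism` is made up here, `warn_only` requires a fairly recent PyTorch, and some CUDA kernels have no deterministic implementation at all, so a small amount of run-to-run drift can remain even with all of this enabled.)

```python
import os
import random

import numpy as np
import torch


def set_full_determinism(seed: int = 0) -> None:
    """Seed all RNGs and request deterministic kernels, per the PyTorch randomness notes."""
    # Seed every RNG the training loop may touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

    # Use deterministic cuDNN kernels and disable auto-tuning.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

    # Warn (or error, with warn_only=False) when a nondeterministic op is hit.
    torch.use_deterministic_algorithms(True, warn_only=True)

    # Needed for deterministic cuBLAS matmuls on CUDA >= 10.2; must be set
    # before the first cuBLAS call.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
```

Calling such a helper once at the top of the training script, before the model and data loaders are created, is the usual way to apply these settings.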

@Switchsyj

Hi, I got a similar issue on the few_nerd inter dataset provided here when I tried to train from scratch. I ran:
sh exec_container.sh inter 0 5 5 and ended up with different F1 scores of 55.17 and 45.28 across runs (seed=1, gpu=0 for both). Is this variance normal? (I set batch size=16 because of limited GPU memory.)

For simplicity, I evaluate on 'support_test_5_5/0' and test on 'query_test_5_5/0'.
In exec_container.sh, I adjusted line 21 to:

python src/container.py \
    --data_dir data/few-nerd/${G} \
    --labels-train data/few-nerd/${G}/labels_train.txt \
    --labels-test data/few-nerd/${G}/labels_test.txt \
    --config_name bert-base-uncased \
    --model_name_or_path bert-base-uncased \
    --saved_model_dir saved_models/few-nerd/${G}/${SAVED_MODEL_DIR} \
    --output_dir outputs/few-nerd/${G}/${finetune_loss}_${is_viterbi}_final/${G}-${way}-${shot}/0 \
    --support_path support_test_${way}_${shot}/0 \
    --test_path query_test_${way}_${shot}/0 \
    --n_shots ${shot} \
    --max_seq_length 128 \
    --embedding_dimension 128 \
    --num_train_epochs 1 \
    --train_batch_size 16 \
    --seed 1 \
    --do_predict \
    --select_gpu ${GPU} \
    --training_loss KL \
    --finetune_loss ${finetune_loss} \
    --evaluation_criteria euclidean_hidden_state \
    --learning_rate 5e-5 \
    --learning_rate_finetuning 5e-5 \
    --consider_mutual_O \
    --temp_trans 0.01 \
    --silent \
    --do_train
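(As a rough way to judge whether that spread is within normal run-to-run variance, one option, not part of the repository, is to repeat the run with a few different seeds and summarise the F1 scores; a small, hypothetical helper with the two values reported above as placeholders:)

```python
from statistics import mean, stdev

# F1 scores collected from repeated runs with different seeds.
# The two values below are the ones reported above; extend the list
# with additional runs to get a more reliable estimate.
f1_scores = [55.17, 45.28]

print(f"mean F1: {mean(f1_scores):.2f}")
print(f"std  F1: {stdev(f1_scores):.2f}")
```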
