
Inconsistency in Precision Between Dataset Generation and Training for gsm8k in SDFT Script #13

Open
dineshkh opened this issue Sep 24, 2024 · 1 comment

@dineshkh

In the gsm8k script ([sdft.sh](https://github.com/sail-sg/sdft/blob/bfb6c255fccdce7459235c20f19a3b9817a9cd5d/scripts/gsm8k/sdft.sh)), the distilled dataset is generated using fp16 precision, while the model is then trained on this dataset using bf16.

Shouldn't the precision format be consistent throughout the process?

- Generate the distilled dataset using fp16: [Line 35](https://github.com/sail-sg/sdft/blob/bfb6c255fccdce7459235c20f19a3b9817a9cd5d/scripts/gsm8k/sdft.sh#L35)
- Train on the distilled dataset using bf16: [Line 61](https://github.com/sail-sg/sdft/blob/bfb6c255fccdce7459235c20f19a3b9817a9cd5d/scripts/gsm8k/sdft.sh#L61)
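
For concreteness, here is a minimal sketch of the mismatch in Hugging Face transformers terms; the model path and training arguments below are placeholders, not the script's actual configuration:

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

MODEL = "path/to/seed-model"  # placeholder, not the script's actual checkpoint

# Dataset generation: the model runs inference in fp16 (Line 35 above).
gen_model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)

# Training: mixed precision is bf16 (Line 61 above), so forward/backward
# passes over the distilled dataset use bfloat16 arithmetic instead.
train_args = TrainingArguments(output_dir="out", bf16=True)
```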
@rickyang1114 (Collaborator)

Thanks for your interest! In an initial experiment, training with fp16 resulted in instabilities. Consequently, we adopted bf16 for training while continuing to use fp16 for inference. This approach has not led to any significant issues to date.
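
As background on why fp16 training destabilizes more easily than bf16: fp16 trades exponent range for mantissa bits, so large loss or gradient values overflow sooner. A quick PyTorch comparison (assuming only torch is installed) makes the gap concrete:

```python
import torch

# fp16 has 5 exponent bits, so values above ~65504 overflow to inf;
# bf16 keeps fp32's 8 exponent bits at the cost of mantissa precision,
# which is why bf16 training tends to be more numerically stable.
print(torch.finfo(torch.float16).max)   # 65504.0
print(torch.finfo(torch.bfloat16).max)  # ~3.39e38, same range as fp32
```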
