How to Instruction Tune with SFTTrainer? #426

Closed · jenkspt opened this issue Jun 9, 2023 · 9 comments · Fixed by #445

jenkspt commented Jun 9, 2023

With the SFTTrainer it's unclear to me how to instruction tune. I might be missing relevant details, but the examples I've seen look like they are fine-tuning on the prompt and response rather than just the response.

Specifically, I'm looking at:
https://github.com/lvwerra/trl/blob/main/examples/stack_llama/scripts/supervised_finetuning.py

Meanwhile, the Alpaca code explicitly creates a supervised dataset that trains only on responses:
https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py

Are there any examples for instruction tuning with SFTTrainer or am I just missing something?

younesbelkada commented Jun 13, 2023

Hi @jenkspt
Thanks for the issue. For the SFTTrainer you might be interested in first creating an instruction dataset, or using an existing one, and then passing that dataset to the trainer out of the box. Please see this example of how we used the SFT Trainer to fine-tune Falcon 7B/40B on the Guanaco dataset: https://gist.github.com/pacman100/1731b41f7a90a87b457e8c5415ff1c14
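For concreteness, here is a minimal sketch of that out-of-the-box usage (my own condensed version, not the code from the linked gist; the model name and hyperparameters below are placeholders):

```python
# Minimal sketch: fine-tune on an instruction dataset where the full
# prompt + response lives in the "text" field. By default the loss is
# computed over that entire field.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

model_name = "tiiuae/falcon-7b"  # placeholder: any causal LM works the same way
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # train on the whole prompt + response string
    max_seq_length=512,
)
trainer.train()
```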
Let me know if anything else is unclear.

jenkspt commented Jun 13, 2023

For example, take the dataset from the Falcon script, 'timdettmers/openassistant-guanaco'. The text entries contain turns prefixed with ### Human and ### Assistant. Does the SFTTrainer split on these to optimize only on the responses after ### Assistant, or does the SFTTrainer optimize on the entire 'text' field?
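For reference, a paraphrased sketch of what a single 'text' entry looks like (my own illustration, not an actual row from the dataset):

```python
# Illustrative only: one guanaco-style 'text' entry, with human and assistant
# turns concatenated into a single string.
example_text = (
    "### Human: How do I reverse a list in Python?"
    "### Assistant: You can use my_list[::-1], or call my_list.reverse() in place."
)
```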

younesbelkada commented:

@jenkspt
I see now. Per my understanding the SFTTrainer does not do that and optimizes on the entire text chunk, and from what I know (but maybe I am wrong) that is also how it is done in all instruction fine-tuned models.
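A small self-contained illustration of why the whole text contributes to the loss by default (my own sketch, not TRL internals): the standard causal-LM collator copies input_ids into labels, so prompt tokens are scored alongside response tokens.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any tokenizer works for the demo
tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token by default

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
batch = collator([tokenizer("### Human: hi ### Assistant: hello")])

# labels mirror input_ids, so the "### Human" part is not masked out of the loss
print(batch["labels"])
```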

jenkspt commented Jun 14, 2023

Dolly trains on completions only: https://github.com/databrickslabs/dolly/blob/master/training/trainer.py#L48-L77
and I'm pretty sure this is what the Stanford Alpaca code is doing as well: https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L127-L153
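A rough sketch of the completion-only idea those two repos use (my own simplification, not their exact code): set the labels of every token up to and including the response marker to -100 so the loss only covers the assistant's reply. The marker string and tokenizer below are assumptions for illustration.

```python
import torch
from transformers import AutoTokenizer

IGNORE_INDEX = -100
RESPONSE_MARKER = "### Assistant:"  # assumed marker, matching the guanaco-style text

def mask_prompt_tokens(text: str, tokenizer) -> dict:
    """Tokenize `text` and return input_ids plus labels with the prompt masked."""
    input_ids = torch.tensor(tokenizer(text, add_special_tokens=False)["input_ids"])
    labels = input_ids.clone()
    # Re-tokenize everything up to and including the marker to find its token length
    # (approximate: BPE boundaries can shift slightly at the split point).
    prompt = text.split(RESPONSE_MARKER)[0] + RESPONSE_MARKER
    prompt_len = len(tokenizer(prompt, add_special_tokens=False)["input_ids"])
    labels[:prompt_len] = IGNORE_INDEX  # no loss on prompt tokens
    return {"input_ids": input_ids, "labels": labels}

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer
batch = mask_prompt_tokens(
    "### Human: How are you?### Assistant: Doing well, thanks!", tokenizer
)
```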

younesbelkada commented:

I see, that makes sense, thanks a lot for the pointers!
It looks like it is a matter of adding a new data collator to SFTTrainer. Let me know if you want to give it a try and contribute to TRL! Otherwise I'm happy to do it.

PhilDakin commented:

Alpaca indicates they include an input field in roughly 40% of their training data here.

younesbelkada commented:

Hi everyone,

Thanks all for your pointers, I made #445, which hopefully will be merged soon.
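Once that lands, the completion-only collator should plug into the earlier setup roughly like this (a sketch based on the API that PR proposes; check the TRL docs for the exact signature):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM

dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

model_name = "tiiuae/falcon-7b"  # placeholder, as in the earlier sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Only tokens after the response template contribute to the loss.
collator = DataCollatorForCompletionOnlyLM("### Assistant:", tokenizer=tokenizer)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    data_collator=collator,
    packing=False,  # completion-only masking needs unpacked examples
)
trainer.train()
```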

vwxyzjn commented Jun 16, 2023

Hey @jenkspt, just saying hi :) It was great learning from your gpt jax implementation jenkspt/gpt-jax#2. Glad our paths crossed again.

jenkspt commented Jun 20, 2023

@vwxyzjn congrats on HuggingFace!
