Feature/peft compatible models #346
Conversation
…model into a composer one
Refactored the HF converter to a single function as suggested by @dakinggg. Tested it on my end and ran pre-commit successfully. I want to move forward and push the code updates to the hub.
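For context, a minimal sketch of what a single-function HF-to-Composer converter could look like, assuming Composer's `HuggingFaceModel` wrapper and PEFT's `get_peft_model`; the function name and the optional PEFT step are illustrative, not the PR's actual code:

```python
# Hypothetical sketch: load an HF causal LM, optionally apply a PEFT config,
# and wrap the result in a Composer model. Names are illustrative.
from composer.models import HuggingFaceModel
from transformers import AutoModelForCausalLM, AutoTokenizer

def convert_hf_to_composer(model_name: str, peft_config=None):
    """Load an HF causal LM, optionally apply PEFT, and wrap it for Composer."""
    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if peft_config is not None:
        from peft import get_peft_model
        model = get_peft_model(model, peft_config)  # attach LoRA adapters
    return HuggingFaceModel(model=model, tokenizer=tokenizer, use_logits=True)
```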
Tests are failing with …
Hello @danbider, could you share your YAMLs for MPT PEFT/LoRA training? Thanks.
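While the question asks for YAMLs, the PEFT side of such a config usually maps onto `peft.LoraConfig`; here is a hedged Python sketch of the equivalent setup (the hyperparameters and `target_modules` value are assumptions, not values from this PR's YAMLs):

```python
# Illustrative LoRA setup for an MPT-style causal LM; hyperparameters and
# target_modules are assumptions, not the actual values used in this PR.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b", trust_remote_code=True
)
lora_config = LoraConfig(
    r=8,                      # low-rank dimension
    lora_alpha=32,            # scaling factor
    lora_dropout=0.05,
    target_modules=["Wqkv"],  # MPT's fused attention projection (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapters train
```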
Co-authored-by: Sam Havens <samhavens@gmail.com>
Mostly LGTM! Left a few comments. Will approve on the next pass / with a review from an NLP person.
Co-authored-by: Mihir Patel <mihir.v.patel7@gmail.com>
As you have resolved everyone else's concerns, I will approve.
Is there an example of how to fine-tune with this?
Hey @chris-aeviator, I noticed that in the repository, LoRA currently only supports MPT models. Can we perform LoRA fine-tuning on other models such as LLaMA?
Edits needed to support a combo of Composer with HF/PEFT. Pipeline is: pass inputs_embeds args to model.forward().
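A minimal sketch of that inputs_embeds forward path, assuming a standard HF causal-LM interface (the actual Composer/PEFT plumbing in the PR is more involved):

```python
# Sketch: embed the tokens ourselves and feed the embeddings to forward()
# via inputs_embeds instead of input_ids. Assumes a standard HF causal-LM
# signature; gpt2 is used here purely as a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

batch = tokenizer("LoRA fine-tuning test", return_tensors="pt")
embeds = model.get_input_embeddings()(batch["input_ids"])

with torch.no_grad():
    out = model(inputs_embeds=embeds, attention_mask=batch["attention_mask"])
print(out.logits.shape)  # (batch, seq_len, vocab_size)
```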