
Feature/peft compatible models #346

Merged (28 commits) on Jun 27, 2023

Conversation

danbider
Contributor

Edits needed to support combining Composer with Hugging Face PEFT.

The pipeline is (see the sketch after this list):

  1. Load a Hugging Face model, e.g. mpt-7b.
  2. Use Hugging Face PEFT to add LoRA modules or adapter modules.
  3. Convert that PEFT model (already loaded in Python) into a Composer model (using the new conversion function added in this PR).
  4. Train in Composer (this required adding the inputs_embeds argument to model.forward()).
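
A minimal sketch of this pipeline, assuming the PR's conversion helper behaves roughly like Composer's generic HuggingFaceModel wrapper (used here as a stand-in, not the actual new function), with 'Wqkv' as the LoRA target module and a pre-built train_dataloader both assumed:

```python
# Sketch of the PEFT + Composer pipeline described above.
# Assumptions: transformers, peft, and composer are installed;
# HuggingFaceModel stands in for the conversion helper added in this PR;
# 'Wqkv' is MPT's attention projection module; train_dataloader exists.
from composer import Trainer
from composer.models import HuggingFaceModel
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1. Load a Hugging Face model, e.g. mpt-7b.
name = 'mosaicml/mpt-7b'
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(name)

# 2. Use PEFT to add LoRA modules.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=['Wqkv'],  # assumption: MPT attention projection
    lora_dropout=0.05,
    task_type='CAUSAL_LM',
)
peft_model = get_peft_model(model, lora_config)

# 3. Convert the PEFT model into a Composer model.
composer_model = HuggingFaceModel(peft_model, tokenizer=tokenizer)

# 4. Train in Composer (train_dataloader built elsewhere, e.g. an
#    llm-foundry finetuning dataloader).
trainer = Trainer(
    model=composer_model,
    train_dataloader=train_dataloader,
    max_duration='1ep',
)
trainer.fit()
```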

@danbider danbider requested a review from mvpatel2000 June 22, 2023 00:11
@danbider danbider requested a review from codestar12 June 23, 2023 18:38
@danbider
Contributor Author

Refactored the HF converter into a single function as suggested by @dakinggg. Tested it on my end and ran pre-commit successfully. I want to move forward and push the code updates to the hub.

@samhavens
Contributor

Tests are failing with

___________________ ERROR collecting tests/test_training.py ____________________
ImportError while importing test module '/llm-foundry/tests/test_training.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
llmfoundry/__init__.py:8: in <module>
    from llmfoundry.data import (ConcatTokensDataset,
llmfoundry/data/__init__.py:5: in <module>
    from llmfoundry.data.denoising import (MixtureOfDenoisersCollator,
llmfoundry/data/denoising.py:20: in <module>
    from llmfoundry.models import utils
llmfoundry/models/__init__.py:4: in <module>
    from llmfoundry.models.hf import (ComposerHFCausalLM, ComposerHFPrefixLM,
llmfoundry/models/hf/__init__.py:4: in <module>
    from llmfoundry.models.hf.hf_causal_lm import (ComposerHFCausalLM,
llmfoundry/models/hf/hf_causal_lm.py:10: in <module>
    import peft
E   ModuleNotFoundError: No module named 'peft'
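
One way to address this kind of failure (a sketch, not necessarily how this PR resolved it) is to ship peft as an optional extra and guard the import so test collection works without it installed:

```python
# setup.py (sketch): make peft an optional extra so the base install
# does not hard-require it.
extra_deps = {
    'peft': ['peft'],  # install with: pip install llm-foundry[peft]
}

# llmfoundry/models/hf/hf_causal_lm.py (sketch): guard the import so
# importing llmfoundry does not fail when peft is absent.
try:
    import peft
    _peft_installed = True
except ImportError:
    peft = None
    _peft_installed = False
```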

@c9o

c9o commented Jun 24, 2023

Hello @danbider, could you share your YAMLs for MPT PEFT/LoRA training? Thanks.

Co-authored-by: Sam Havens <samhavens@gmail.com>
@codestar12 codestar12 requested a review from samhavens June 26, 2023 14:54
Collaborator

@mvpatel2000 mvpatel2000 left a comment


Mostly LGTM! Left a few comments. Will approve on the next pass / with review from an NLP person.

Review comments were left on: setup.py, scripts/train/train.py, llmfoundry/models/mpt/modeling_mpt.py, llmfoundry/models/hf/hf_causal_lm.py, and llmfoundry/__init__.py.
Contributor

@codestar12 codestar12 left a comment


As you have resolved everyone else's concerns, I will approve.

@stoperro

Is there an example of how to fine-tune with this?

@chris-aeviator

@stoperro according to #416, just use the ordinary PEFT code (Hugging Face has ready-to-go PEFT notebooks), or with llm-foundry add the configuration shown in the attached screenshot.
[image: llm-foundry config snippet]
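
For reference, the "ordinary PEFT code" pattern from the Hugging Face notebooks looks roughly like this sketch (the 'Wqkv' target module for MPT is an assumption, not something stated in this thread):

```python
# Standard Hugging Face PEFT pattern (sketch).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b',
                                             trust_remote_code=True)
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=['Wqkv'],  # assumption for MPT
                    lora_dropout=0.05, task_type='CAUSAL_LM')
model = get_peft_model(model, config)
model.print_trainable_parameters()  # sanity check: only LoRA params train
```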

@palash04

Hey @chris-aeviator, I noticed that in the repository, LoRA currently only supports MPT models. Can we perform LoRA fine-tuning on other models such as LLaMA?

@dakinggg
Collaborator

@palash04 this is getting fixed in #435
