
replace_lora_weights_loftq() throws HFValidationError when the base model is loaded using a local path #2020

Closed
2 of 4 tasks
Anstinus opened this issue Aug 20, 2024 · 2 comments · Fixed by #2022

Comments

@Anstinus

System Info

transformers 4.44.0
peft 0.12.0

Who can help?

@BenjaminBossan @sayakpaul

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder
  • My own task or dataset (give details below)

Reproduction

I'm following the documentation to do LoftQ fine-tuning.
The following code works well when the base model is loaded by model name.
But when the base model is loaded from a local path, replace_lora_weights_loftq() throws an exception.

HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/usr/local/huggingface/Qwen2-0.5B-Instruct'. Use repo_type argument if needed.


from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, replace_lora_weights_loftq

### Loading model from cache works!
# model_path = "Qwen/Qwen2-0.5B-Instruct"
### Loading model from local path does not work!
model_path = "/usr/local/huggingface/Qwen2-0.5B-Instruct"

## load base model
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
base_model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    quantization_config=bnb_config,
)

## add peft model
# note: don't pass init_lora_weights="loftq" or loftq_config!
lora_config = LoraConfig(task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)
replace_lora_weights_loftq(peft_model)

Note that base_model itself works fine whether loaded by model name or from a local path (e.g. it can be used for inference). The exception is only thrown by replace_lora_weights_loftq().

Expected behavior

replace_lora_weights_loftq() should work regardless of how the base model is loaded (either by model name or by local path).

@BenjaminBossan
Member

Thanks for reporting. This should already work, try running replace_lora_weights_loftq(peft_model, model_path=model_path). However, I agree that the error message is not very helpful. I'll create a PR to improve this.
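For reference, the suggested workaround applied to the reproduction above would look like this (a minimal sketch; it assumes the same model_path variable and the peft_model built in the snippet from the issue):

# pass the local path explicitly via model_path, as suggested above
replace_lora_weights_loftq(peft_model, model_path=model_path)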

BenjaminBossan added a commit to BenjaminBossan/peft that referenced this issue Aug 20, 2024
Resolves huggingface#2020

If users want to use a local model, they need to pass the model_path
argument. The error message now says so.
@Anstinus
Author

Thanks, it works.

BenjaminBossan added a commit that referenced this issue Aug 21, 2024
…al model. (#2022)

Resolves #2020

If users want to use a local model, they need to pass the model_path
argument. The error message now says so.