Tasks
An officially supported task in the examples folder
My own task or dataset (give details below)
Reproduction
I'm following the documentation to do LoftQ fine-tuning.
The code below works well when the base model is loaded by model name, but when the base model is loaded from a local path, replace_lora_weights_loftq() throws an exception:
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/usr/local/huggingface/Qwen2-0.5B-Instruct'. Use repo_type argument if needed.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, replace_lora_weights_loftq

### Loading the model from the Hub cache works!
# model_path = "Qwen/Qwen2-0.5B-Instruct"
### Loading the model from a local path does not work!
model_path = "/usr/local/huggingface/Qwen2-0.5B-Instruct"

## load the base model in 4-bit
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
base_model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    quantization_config=bnb_config,
)

## add the PEFT model
# note: don't pass init_lora_weights="loftq" or loftq_config!
lora_config = LoraConfig(task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)
replace_lora_weights_loftq(peft_model)  # raises HFValidationError when model_path is a local path
Note that base_model itself works fine whether it is loaded by model name or from a local path (e.g. it can be used for inference). The exception is only thrown by replace_lora_weights_loftq().
Expected behavior
replace_lora_weights_loftq() should work regardless of how the base model is loaded (by model name or from a local path).
Thanks for reporting. This should already work; try running replace_lora_weights_loftq(peft_model, model_path=model_path). However, I agree that the error message is not very helpful. I'll create a PR to improve this.
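For completeness, a minimal sketch of the suggested workaround, assuming model_path may be the same local checkpoint directory used to load the base model (as the reply suggests); the comment about Hub resolution reflects my reading of the HFValidationError, not the peft docs:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, replace_lora_weights_loftq

model_path = "/usr/local/huggingface/Qwen2-0.5B-Instruct"  # local checkpoint directory

# load the 4-bit quantized base model from the local path
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
base_model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    quantization_config=bnb_config,
)

# attach a plain LoRA adapter (no init_lora_weights="loftq" / loftq_config)
lora_config = LoraConfig(task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)

# pass the local path explicitly so the original weights are read from disk
# instead of the path being treated as a Hub repo id
replace_lora_weights_loftq(peft_model, model_path=model_path)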
System Info
transformers 4.44.0
peft 0.12.0
Who can help?
@BenjaminBossan @sayakpaul