Diffusers 0.30.1: Merging LoRA adapters without modifying original model structure #9382
Is there a way to fully merge LoRA weights into the base model in newer versions of Diffusers, so that the model graph keeps its original, unmodified structure as it did in version 0.26.3?

Previous behavior (Diffusers 0.26.3):
Current behavior (Diffusers 0.30.1):
Before merging
After merging
I followed the recommendations in https://huggingface.co/docs/diffusers/en/using-diffusers/merge_loras but observed the same behavior as shown above.

Goal

My use case requires merging LoRA weights into the model in a way that preserves the original model structure.

Reproduce

Script to reproduce the above behavior:
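For context on what "fully merged" means here: a LoRA update is a low-rank delta `B @ A` folded directly into the base weight, so a true merge leaves a weight of exactly the original shape and no extra adapter modules. A minimal pure-Python sketch with toy numbers (illustrative only, not the diffusers internals):

```python
# Toy sketch of a LoRA merge: W' = W + scale * (B @ A).
# All names and values are illustrative, not diffusers internals.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [
        [sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
        for i in range(len(X))
    ]

W = [[1.0, 0.0], [0.0, 1.0]]   # base weight (2x2)
B = [[1.0], [2.0]]             # LoRA "up" factor (2x1, rank 1)
A = [[0.5, 0.5]]               # LoRA "down" factor (1x2)
scale = 1.0

delta = matmul(B, A)           # low-rank update, same shape as W
W_merged = [
    [W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
    for i in range(len(W))
]

print(W_merged)  # [[1.5, 0.5], [1.0, 2.0]] -- same shape as W, no extra modules
```

Because the delta is folded into `W` itself, nothing about the layer's structure needs to change; the question is how to get Diffusers to leave the graph in that state after merging.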
Replies: 3 comments
-
@sayakpaul @yiyixuxu any input on this?
-
You need to call `unload_lora_weights()` to completely unload them. Detailed guide: https://huggingface.co/docs/diffusers/main/en/using-diffusers/merge_loras
-
Thanks!
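In practice, the sequence suggested by the thread looks like the following. This is a sketch only (not run here, since it downloads model weights); the model id and LoRA path are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: model id and LoRA path below are placeholders.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

pipe.load_lora_weights("path/to/lora")  # attaches LoRA modules (changes the graph)
pipe.fuse_lora()                        # folds the LoRA deltas into the base weights
pipe.unload_lora_weights()              # removes the LoRA modules again

# After fuse + unload, the pipeline should contain only the original
# module types, with the fused weights baked in.
```

The key point is the order: `fuse_lora()` bakes the deltas into the base weights, and `unload_lora_weights()` then strips the adapter modules so the graph matches the original structure.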