Question about LoHa and LoKr support during inference. #6771
Replies: 2 comments 5 replies
-
It should be possible via `PeftModel.from_pretrained()`, but not via the Diffusers loading utilities. Let me elaborate on this a bit. Let's say you have conducted LoHa or LoKr training on SD/SDXL via PEFT. To run inference, you can then do `inference_model = PeftModel.from_pretrained(model, repo_name)`, where `model` is the base model you fine-tuned and `repo_name` points to the saved PEFT checkpoint. Once this is done, you can do `pipeline.unet = inference_model` (assuming you only conducted PEFT training on the UNet). Does this help?
-
Hi sayakpaul, thank you for the response. Does this mean a LoHa/LoKr that was not trained with PEFT will likely not work with this alternate method? This also implies the loading method affects the UNet but not the CLIP/text encoder layers?
-
Hello Diffusers team, great work on the LoRA and LyCORIS support.
I see that there are newer formats such as LoHa and LoKr, which are supported in PEFT for training. Furthermore, there are some LoHa LoRAs that can be used with SD 1.5 and SDXL. Is loading of the LoHa and LoKr formats supported when we load them via PEFT for inference?
Thank you!