Thank you for providing such a convenient tool. Can you provide an example of using a LoRA with SDXL? 😊
I used the following test LoRA call, but the Python kernel was killed.
I suspect the issue comes from setting `wtype="q4_0"` when your model is a safetensors model. Try removing that line or setting `wtype="default"`; the code should automatically assign the right model type. Something like this should work:
```python
from stable_diffusion_cpp import StableDiffusion

stable_diffusion = StableDiffusion(
    model_path="sd_xl_base_1.0.safetensors",
    vae_path="sdxl_vae.safetensors",
    # Weight type (options: default, f32, f16, q4_0, q4_1, q5_0, q5_1, q8_0)
    wtype="default",  # or remove this line
    lora_model_dir="lora_dir/",
)

# The LoRA is applied via the <lora:pytorch_lora_weights:1> tag in the prompt
prompt = "European, green coniferous tree, yellow coniferous tree, rock, creek, sunny day, pastel tones, 3D<lora:pytorch_lora_weights:1>"

output = stable_diffusion.txt_to_img(
    prompt,  # Prompt
    width=1024,
    height=1024,
    sample_steps=1,
    seed=-1,
)
output[0]
```
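If you want to write the result to disk, the returned images can be saved directly (assuming the default behaviour of `txt_to_img` returning PIL `Image` objects; the filename here is just an example):

```python
# Save the first generated image (txt_to_img returns a list of images)
output[0].save("sdxl_lora_output.png")
```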
If you intend to quantize the sd_xl_base_1.0 model, you can use the low-level API like this:
```python
import stable_diffusion_cpp.stable_diffusion_cpp as sd_cpp

sd_cpp.convert(
    "sd_xl_base_1.0.safetensors".encode("utf-8"),  # SafeTensors model path
    "sdxl_vae.safetensors".encode("utf-8"),  # VAE path
    "sd_xl_base_1.0.q4_0.gguf".encode("utf-8"),  # Output quantized GGUF model path
    sd_cpp.GGMLType.SD_TYPE_Q4_0,  # Quantization type
)
```
Then use the new quantized GGUF model in place of your safetensors model.
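For example, loading the quantized model would look something like this (a minimal sketch assuming the same paths as above; the weight type is picked up from the GGUF file, so `wtype` can be left out):

```python
from stable_diffusion_cpp import StableDiffusion

# Use the quantized GGUF model in place of the original safetensors model
stable_diffusion = StableDiffusion(
    model_path="sd_xl_base_1.0.q4_0.gguf",
    vae_path="sdxl_vae.safetensors",
    lora_model_dir="lora_dir/",  # note the quantized model + LoRA caveat below
)
```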
It's worth noting, however, that as far as I can tell, it isn't possible to use a quantized model with a LoRA without causing a "GGML_ASSERT" error. I believe this to be a stable-diffusion.cpp issue, as I get the same errors when using the original stable-diffusion.cpp CLI tool, and this issue has been raised in the stable-diffusion.cpp repo before: SDXL: LoRa problem
Even if the quantized model + LoRA combination were working, stable-diffusion.cpp doesn't recommend it and warns that "In quantized models when applying LoRA, the images have poor quality".