SDXL : LoRa problem #203
Comments
I had the same issue and addressed it in my pending pull request #200. From what I can tell, it is because SDXL LoRAs use a slightly different naming convention that the current code isn't set up to convert to the internally used one. Also, the existing memory allocated for the GGML graph is insufficient to accommodate adding an SDXL LoRA, so I had to bump that up as well.
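For illustration, here is a minimal sketch of the kind of key remapping this involves, assuming SDXL LoRA tensors arrive under flattened names like `lora_unet_input_blocks_4_1_proj_in.lora_up.weight`; the prefix, separator rule, and example name are assumptions for demonstration, not the actual conversion code from #200:

```cpp
// Illustrative sketch only -- not the real patch from PR #200, which
// handles many more cases. Shows the general idea: the loader matches
// LoRA tensors against model weights by a converted name, so each key
// has to be remapped from the exporter's convention first.
#include <iostream>
#include <string>

std::string convert_lora_name(std::string name) {
    // Strip the exporter's prefix (assumed to be "lora_unet_" here).
    const std::string prefix = "lora_unet_";
    if (name.rfind(prefix, 0) == 0) {
        name = name.substr(prefix.size());
    }
    // Keep the ".lora_up.weight" / ".lora_down.weight" suffix intact.
    std::string suffix;
    size_t dot = name.find('.');
    if (dot != std::string::npos) {
        suffix = name.substr(dot);
        name.resize(dot);
    }
    // Simplification: treat every remaining '_' as a path separator
    // and turn it into the '.' the internal lookup expects.
    for (char &c : name) {
        if (c == '_') c = '.';
    }
    return name + suffix;
}

int main() {
    std::cout << convert_lora_name(
                     "lora_unet_input_blocks_4_1_proj_in.lora_up.weight")
              << "\n"; // prints "input.blocks.4.1.proj.in.lora_up.weight"
}
```

The relevant failure mode is that a lookup keyed on the converted name silently skips any LoRA tensor whose name fails to convert, which matches the "lora not applied" symptoms reported in this thread.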
Now it is crashing with
probably because the LoRAs contain some f32 AND
Interesting, I'm not having that problem. What invocation are you using?
Forgot to mention the obvious: I am using a model that has been converted to q8_0. Maybe you can reproduce using
I assumed as much. Does this only happen when you use a quantized LoRA and not just the quantized model, or with both?
I am able to use --type q8_0 on an SDXL model and an SDXL LoRA without incident.
The model is always quantized. LoRAs can't be quantized right now. Sad, I thought I could get some memory savings, and I already deleted the .safetensors models <.<
Alright, in that case it sounds like a quantization issue distinct from this one. I propose that this issue be marked as resolved.
I tried with the new release master-48bcce4.
With
In my opinion, it might be better to close this issue and file that new problem as its own issue with a more descriptive name, so that other people hitting it, or those with a solution, can find it more easily; it does not seem to be related to the issue in the original post. That would also avoid readers of this issue never scrolling down to see that someone is in fact having the same problem they are.
I am still seeing some LoRAs not being applied, even with the fix from #200.
Have you verified that the corresponding tensor exists in the model you are using?
I used this model file a while ago and it had no issues, unless the UNET handling changed since then (unlikely). I am wondering if this is due to the change introduced with the PhotoMaker PR #179. @leejet did a nice job of consolidating vanilla LoRA and PhotoMaker LoRA.
I'm not familiar with anything to do with PhotoMaker, but I would recommend inspecting the model to make sure the corresponding tensor is in fact present; just because it didn't warn you about this before doesn't mean it wasn't an issue.
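One quick, tooling-independent way to do that check is to dump the tensor names from the model's header. This is a minimal sketch, not a tool from this repo; it assumes a standard .safetensors file (an 8-byte little-endian header length followed by JSON metadata keyed by tensor name) and a little-endian host:

```cpp
// Dumps the raw JSON header of a .safetensors file, which maps each
// tensor name to its dtype/shape/offsets; pipe through grep to search
// for the weight a LoRA targets.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char **argv) {
    if (argc != 2) {
        std::cerr << "usage: " << argv[0] << " model.safetensors\n";
        return 1;
    }
    std::ifstream f(argv[1], std::ios::binary);
    if (!f) {
        std::cerr << "failed to open " << argv[1] << "\n";
        return 1;
    }
    // First 8 bytes: little-endian length of the JSON header
    // (reading it this way assumes a little-endian host).
    uint64_t header_len = 0;
    f.read(reinterpret_cast<char *>(&header_len), sizeof(header_len));
    // Next header_len bytes: the JSON metadata itself.
    std::vector<char> header(header_len);
    f.read(header.data(), header_len);
    std::cout.write(header.data(), header_len);
    std::cout << "\n";
    return 0;
}
```

Then something like `./dump_header sd_xl_base_1.0.safetensors | grep proj_in` (file name hypothetical) shows whether the tensor in question is actually present.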
I found that commenting out these lines fixes my issue, but I assume it will not work for other models: lines 93 to 95 in 48bcce4.
I am using:
That particular addition was not from my pull request, and I would be curious to know the rationale behind it.
Fixed in the new release master-90e9178.
When I use the following command:
The LoRA model apparently cannot be used:
It's the same problem as #117 (comment).
The LoRA model ends up not being used at all.
I'm using the latest master-a469688 release.