Implement UniPC sampler #7710
Conversation
Looking great man! Edit: 28 it/s on a 4090, where I get around 34 with euler_a. This is insane considering how fast it resolves. Edit: Indeed doesn't work with v2.1-768, but it does work with the 512 v2.1 model.
I tried this out and it's shocking how fast it is, even separate from needing fewer steps.
I noticed the commit message about this sampler not supporting img2img yet. Anyone have any insight as to why that is and if it might be resolved?
Same reason why PLMS doesn't support img2img, as far as I understand: those samplers are missing some methods used by img2img.
This seems like a powerful sampler already, but I think we could make it work better with webui by further changing the example sampler.py from the original repo. Some ideas:
Thanks for the feedback, I would never have thought of the SD2.X check you mentioned; I edited it in.
should it work with --no-half only? |
From my understanding of the paper, UniPC can be applied to all DPM-based samplers. Would it maybe make more sense to have this as some sort of checkbox rather than as its own separate sampler (or eventually samplers, once it's implemented for the other samplers)?
Do we have permission to copy and paste code from that repo?
I filed an issue with the code owners here: wl-zhao/UniPC#4 |
Hi, thanks for your efforts! We are glad to see UniPC can also achieve good results in the stable-diffusion-webui project. By the way, UniPC has already been integrated into diffusers; you can also refer to the doc if you have any questions.
Works great aside from a few things:
This demo supports img2img: https://huggingface.co/spaces/wl-zhao/unipc_sdm. I could get impressive results with only 7 or 8 steps, given a well-defined prompt. If I only need one or two steps for img2img, this would speed up making animations quite a lot. Wish this was implemented already.
One request: handle borderline cases gracefully. Currently the UniPC sampler throws an assertion if the requested number of steps is below 3 (which is the default order). But a very common use case is to run through exactly such scenarios (investigating intermediate steps, running X/Y/Z grids with steps on one axis, etc.). If the number of steps is below the acceptable limit, simply use the lowest possible value. Really, use assertions only for critical runtime errors, not for something that can and should be handled automatically.
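The graceful fallback suggested above amounts to clamping the solver order to the requested step count instead of asserting. A minimal sketch (the function name and signature are hypothetical, not webui's actual API):

```python
def effective_order(requested_steps: int, default_order: int = 3) -> int:
    """Clamp the UniPC solver order so very low step counts
    (e.g. from an X/Y/Z grid over steps) don't trip an assertion.

    Hypothetical helper for illustration only.
    """
    if requested_steps < 1:
        # Zero or negative steps is a genuine misuse, worth a hard error.
        raise ValueError("at least one sampling step is required")
    # With fewer steps than the default order, fall back to the
    # highest order the step count can support.
    return min(default_order, requested_steps)
```

For example, a 1-step or 2-step run would silently use order 1 or 2, while 3 or more steps keep the default order 3.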
I don't quite understand why UniPC becomes DDIM when using hires fix; it also seems to introduce some noise? Edit: Solved, the noise comes from the upscale step.
Implement UniPC sampler
Describe what this pull request is trying to achieve.
Implement a new sampler, recently released here: https://github.com/wl-zhao/UniPC
Related: #7705
Additional notes and description of your changes
I adapted the code from their Stable Diffusion example. Some changes needed to be made to support multicond.
Environment this was tested in
Screenshots or videos of your changes