Implementation of DiffusionCLIP as an extension #3770
TuomasHeikkila
started this conversation in
Ideas
Replies: 1 comment
-
FYI: #2485
-
Hi,
First off, I'm very grateful for all the work that's been done here. Keep it up, proud of ya.
While cleaning out some old colabs I no longer use, I revisited some e4e StyleCLIP and StyleGAN colabs that were all the rage a couple of years back. The reason those were neat is that they were really good at making subtle, context-aware modifications to the source image without much hassle. Just describe the neutral vector, then describe the target vector, set the disentanglement threshold and manipulation strength, and Bob's your uncle: you get an output fine-tuned to your liking.
There's apparently a version of StyleCLIP that uses diffusion models: https://github.com/gwang-kim/DiffusionCLIP
Python is a language I've intentionally avoided for the longest time, and I'm fairly new to ML, so I can't even begin to estimate how difficult (or even possible in the first place) it would be to add DiffusionCLIP as an extension to the webui. Having access to a tool you could use to make slight modifications to your generations (e.g. adding facial expressions for emotions, which SD is pretty bad at) would be invaluable.