- However, I can't conceive, at present, of a way to modify an image so as to alter its depth map / segmentation in any predictable way 🤔
- Stable Diffusion v2 is out with one model specialised in depth-aware img2img: https://github.com/Stability-AI/stablediffusion
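  A rough sketch of trying that depth-conditioned checkpoint (not taken from the linked repo, which ships its own scripts): the snippet below assumes the Hugging Face diffusers library, the stabilityai/stable-diffusion-2-depth weights, a CUDA device, and a hypothetical input.png file.

  ```python
  import torch
  from PIL import Image
  from diffusers import StableDiffusionDepth2ImgPipeline

  # Load the depth-conditioned SD 2 checkpoint (downloads weights on first run).
  pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
      "stabilityai/stable-diffusion-2-depth",
      torch_dtype=torch.float16,
  ).to("cuda")

  init_image = Image.open("input.png").convert("RGB")  # hypothetical input file

  # The pipeline estimates a depth map from init_image internally and conditions
  # the UNet on it, so even high strength values tend to keep the spatial layout.
  result = pipe(
      prompt="a watercolor painting of the same scene",
      image=init_image,
      strength=0.8,
      num_inference_steps=50,
  ).images[0]
  result.save("depth_img2img.png")
  ```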
- Maybe that is nonsensical, I don't know... I just had the strange idea that img2img could eventually be driven by depth estimation and/or segmentation, just as it is driven by color when color correction is active, so as to "correct" the image's depth map and/or object segmentation at each step and better reflect the initial image. Even after thinking twice, I believe it could be useful in some circumstances, where one wishes to alter an image's look and feel quite aggressively (high denoising strength) but without modifying the image's "architecture" too much...
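  A very rough sketch of that idea (nothing that exists in the web UI today): estimate the depth of the intermediate img2img result at each step and nudge it back toward the depth of the initial image, much like color correction pulls the palette back. The snippet below assumes MiDaS from torch.hub as the depth estimator and a plain gradient step as the "correction"; the function names, placeholder tensors, and step size are made up for illustration.

  ```python
  import torch
  import torch.nn.functional as F

  # Hypothetical depth estimator: MiDaS small, loaded from torch.hub (needs timm).
  midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
  midas.eval()

  def estimate_depth(img: torch.Tensor) -> torch.Tensor:
      """img: (1, 3, H, W) in [0, 1]; returns a (1, h, w) relative depth map."""
      # Rough ImageNet-style normalization and resize; a real implementation
      # would use the official MiDaS transforms instead.
      mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
      std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
      x = F.interpolate((img - mean) / std, size=(256, 256),
                        mode="bilinear", align_corners=False)
      return midas(x)

  def depth_correction(current: torch.Tensor,
                       reference_depth: torch.Tensor,
                       step_size: float = 0.05) -> torch.Tensor:
      """Nudge `current` so its estimated depth drifts back toward
      `reference_depth`, the way color correction pulls the palette back."""
      current = current.detach().clone().requires_grad_(True)
      loss = F.mse_loss(estimate_depth(current), reference_depth)  # depth drift
      loss.backward()
      with torch.no_grad():
          corrected = current - step_size * current.grad
      return corrected.detach().clamp(0, 1)

  # Usage: compute the reference depth once from the initial image, then apply
  # depth_correction() to the decoded intermediate image at each denoising step.
  init_image = torch.rand(1, 3, 512, 512)    # placeholder for the real init image
  with torch.no_grad():
      reference_depth = estimate_depth(init_image)
  intermediate = torch.rand(1, 3, 512, 512)  # placeholder for a decoded latent
  intermediate = depth_correction(intermediate, reference_depth)
  ```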