Getting incorrect inpainting results #57

Open
jeffufu opened this issue Nov 19, 2024 · 10 comments

@jeffufu

jeffufu commented Nov 19, 2024

Hello! I just got this running. It's very cool! :)

I understand that #origin_image, #inpaint_prompt, and #base_prompt need to target the correct layer first.

Right now:

#origin_image: I set this to the main image I am inpainting on.

#inpaint_prompt: it's a text layer; right now I just write 'ancient temple, chinese' like in your video

#base_prompt: I write a short description of the image. (Is it possible to get the image from Photoshop and use Florence2 to write the base prompt?)

Question 1:

Right now I make a selection in Photoshop in a blank layer, and when I hit generate, I can tell it changed my selected area, but it doesn't match my prompt. It just looks like I did inpainting with very low denoise. I tried adjusting the denoise to be higher, but I seem to have trouble getting a unique generation. What am I doing wrong?

Question 2:

Do I have to manually select the layers (origin, inpaint prompt, base prompt) in ComfyUI to generate correctly each time?

Thank you!

@zombieyang
Owner

zombieyang commented Nov 19, 2024

Q0: Yes, you can change the workflow however you like, including adding Florence2. For the widget panel, I only use three kinds of nodes: GetDocument, GetLayerByID, and Primitive (there is a rough sketch at the end of this comment).

Q1: You can add some preview nodes in the webview to debug. Maybe the mask isn't coming through correctly?

Q2: No. Selecting in the widget panel should be fine.

Have you tried other workflows, and do they work well? For example, the simple SDXL and Flux ones.
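
To make the wiring concrete, here is a rough sketch of how those three node kinds could be connected, written as a ComfyUI API-style prompt dict in Python. The node names come from the list above, but the exact class_type strings, input names, and output indices are guesses, not sdppp's real definitions:

```python
import json

# Illustrative only: node ids, input names, and output indices are assumptions,
# not sdppp's actual schema. ["1", 0] means "output 0 of node 1".
prompt = {
    "1": {"class_type": "GetDocument",  "inputs": {}},
    "2": {"class_type": "GetLayerByID", "inputs": {"document": ["1", 0], "layer_id": "#origin_image"}},
    "3": {"class_type": "GetLayerByID", "inputs": {"document": ["1", 0], "layer_id": "#inpaint_prompt"}},
    "4": {"class_type": "Primitive",    "inputs": {"value": "ancient temple, chinese"}},
}
print(json.dumps(prompt, indent=2))
```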

@jeffufu
Author

jeffufu commented Nov 19, 2024

> Q0: Yes, you can change the workflow however you like, including adding Florence2. For the widget panel, I only use three kinds of nodes: GetDocument, GetLayerByID, and Primitive.
>
> Q1: You can add some preview nodes in the webview to debug. Maybe the mask isn't coming through correctly?
>
> Q2: No. Selecting in the widget panel should be fine.
>
> Have you tried other workflows, and do they work well? For example, the simple SDXL and Flux ones.

Flux - SDPPP Inpaint.json

Hi! I managed to get it working, but it wasn't with the provided workflow.

I am not a programmer, but I knew something wasn't working with the sampler/denoise (for me), so I took an inpainting-with-mask workflow I had, took the SDPPP parts from your workflow (getting the image, base prompt, etc.), and combined them. I shared the workflow above. I added Florence2 so I don't have to type the base prompt.

I am a professional designer and think there could be some simple changes to make this much easier to use. Right now it's a little awkward to type a layer name and then go back to Comfy to make sure the right layer is selected for the prompt. As it is, it's easier for me to put the image directly into Comfy and bring it back to Photoshop.

I know you are doing this on your own time, so I am not expecting you to do what I suggest. I think it's a cool tool and wanted to share my thoughts as someone who uses Photoshop and SD for work :)

@jeffufu jeffufu closed this as completed Nov 19, 2024
@zombieyang
Owner

Yes, sometimes the layer selection doesn't take effect. I will test it more and fix it in 1.5.1. I'm also considering adding a button to quickly set the layer to the currently selected layer.

@jeffufu
Author

jeffufu commented Nov 20, 2024

> Yes, sometimes the layer selection doesn't take effect. I will test it more and fix it in 1.5.1. I'm also considering adding a button to quickly set the layer to the currently selected layer.

Oh! That's a really great idea. It would save me a lot of time if I could do that. Right now I am switching to ComfyUI (easier than opening the small plugin window) to adjust my prompt and select the layer.

If I could do that + type my prompt directly into the Photoshop plugin, it would be so great :D Maybe a custom Positive Prompt node could be displayed in the Photoshop plugin window?
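
A minimal sketch of what such a prompt node could look like, assuming ComfyUI's usual custom-node conventions — the class name and category are made up here, and whether sdppp would actually surface this widget in the Photoshop panel is an open question:

```python
# Hypothetical example node; not part of sdppp. It exposes one multiline
# text widget and returns the text, so a panel could display and edit it.
class PositivePromptText:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"text": ("STRING", {"multiline": True, "default": ""})}}

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("prompt",)
    FUNCTION = "get_text"
    CATEGORY = "examples/prompt"

    def get_text(self, text):
        return (text,)

NODE_CLASS_MAPPINGS = {"PositivePromptText": PositivePromptText}
NODE_DISPLAY_NAME_MAPPINGS = {"PositivePromptText": "Positive Prompt (text)"}
```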

Also, for me I have many workflows saved:
[screenshot]
It would be great if I could maybe mark which ones I use often so I don't have to search for the ones specific to SDPPP.

Thanks for the great work! I'm planning to use your nodes with https://github.com/erosDiffusion/ComfyUI-enricos-nodes/tree/master, sending the image I quickly edit in Photoshop into that workflow.

@jeffufu
Author

jeffufu commented Nov 20, 2024

> Yes, sometimes the layer selection doesn't take effect. I will test it more and fix it in 1.5.1. I'm also considering adding a button to quickly set the layer to the currently selected layer.

Also, I noticed that when I use Get Image, it takes the entire image of the layer/group I select, even the parts that aren't visible:
[screenshot]

Left is what I started with in Photoshop, right is what it looks like when I pull the image. This makes it a little hard to use, because sometimes I move images around and don't want to lose the pixels. Not a big problem, but I wanted to share it. You can ignore it if you want!

@zombieyang
Owner

> Oh! That's a really great idea. It would save me a lot of time if I could do that. Right now I am switching to ComfyUI (easier than opening the small plugin window) to adjust my prompt and select the layer.
>
> If I could do that + type my prompt directly into the Photoshop plugin, it would be so great :D Maybe a custom Positive Prompt node could be displayed in the Photoshop plugin window?
>
> Also, I have many workflows saved: [screenshot] It would be great if I could mark which ones I use often so I don't have to search for the ones specific to SDPPP.
>
> Thanks for the great work! I'm planning to use your nodes with https://github.com/erosDiffusion/ComfyUI-enricos-nodes/tree/master, sending the image I quickly edit in Photoshop into that workflow.

ComfyUI allows you to bookmark workflows. I'll show bookmarked ones with higher priority in the next version.

I saw your Reddit post and openAI workflow, great work!

@zombieyang
Owner

> Also, I noticed that when I use Get Image, it takes the entire image of the layer/group I select, even the parts that aren't visible: [screenshot]
>
> Left is what I started with in Photoshop, right is what it looks like when I pull the image. This makes it a little hard to use, because sometimes I move images around and don't want to lose the pixels. Not a big problem, but I wanted to share it. You can ignore it if you want!

Do you mean part of the image is outside the canvas?

@jeffufu
Author

jeffufu commented Nov 21, 2024

> ComfyUI allows you to bookmark workflows. I'll show bookmarked ones with higher priority in the next version.
>
> I saw your Reddit post and openAI workflow, great work!

That would be great! Yes, I do bookmark my workflows so that would work.

And yes, for the other point: when I use Generative Fill in Photoshop, or if I drop a larger image in, there are often pixels outside of the canvas. It seems your node picks up the entire image, not just the visible canvas. The problem is that when I generate with that larger image and send it back to Photoshop, it gets resized. I'm not sure if this is easy to fix, though.

@zombieyang
Owner

zombieyang commented Nov 24, 2024

Forgot to fix this in 1.5.1.
I'll reopen it to remind myself.

@zombieyang zombieyang reopened this Nov 24, 2024
@zombieyang
Owner

The out-of-canvas issue:
I think it's because you linked the boundary input to a layer boundary output?
I hadn't thought about this before, but maybe returning the entire image is the right behavior when the boundary is specified as the layer boundary. If someone wants exactly what they see on the canvas, the document boundary should be used instead.

Or did you not use the layer_bound at all? Then that would be a bug I haven't reproduced yet.
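
For readers following along, here is a rough sketch of the difference, assuming the layer's pixels arrive as a NumPy array plus an offset relative to the document origin (the helper name and signature are made up for illustration; this is not sdppp's code). A layer boundary would simply return the full layer pixels, while a document boundary would clip them to the canvas:

```python
import numpy as np

def clip_layer_to_document(layer_pixels, layer_offset, doc_size):
    """Hypothetical helper: keep only the part of a layer visible on the canvas.

    layer_pixels: (H, W, C) array of the full layer, including off-canvas parts.
    layer_offset: (left, top) of the layer relative to the document origin.
    doc_size: (doc_w, doc_h) of the document / canvas.
    """
    doc_w, doc_h = doc_size
    left, top = layer_offset
    h, w = layer_pixels.shape[:2]

    # Start from an empty canvas-sized image.
    canvas = np.zeros((doc_h, doc_w, layer_pixels.shape[2]), dtype=layer_pixels.dtype)

    # Intersect the layer rectangle with the canvas rectangle.
    x0, y0 = max(left, 0), max(top, 0)
    x1, y1 = min(left + w, doc_w), min(top + h, doc_h)
    if x1 > x0 and y1 > y0:
        canvas[y0:y1, x0:x1] = layer_pixels[y0 - top:y1 - top, x0 - left:x1 - left]
    return canvas
```

With the layer boundary nothing is clipped, so the off-canvas pixels come along and the result no longer matches the document size — which would also explain the resize on the way back.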
