Getting incorrect inpainting results #57
Q0: Yes, you can change the workflow as you like, whether that's Florence2 or something else. For the widget panel, all I use is three kinds of node.
Q1: You can add some preview nodes in the webview to debug. Maybe the mask is not coming through correctly?
Q2: No, I think selecting in the widget panel is fine. Have you tried other workflows, and do they work well? For example, the simple SDXL and Flux ones.
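The Q1 suggestion (preview the mask to see whether it arrived correctly) can also be checked outside ComfyUI. This is an illustrative sketch only, not part of the plugin; it assumes Pillow and NumPy are installed and uses a synthetic mask in place of one exported from the workflow:

```python
from PIL import Image
import numpy as np

def mask_coverage(mask_img):
    """Fraction of pixels that are 'active' (brighter than mid-gray)
    in a grayscale inpainting mask."""
    arr = np.array(mask_img.convert("L"))
    return float((arr > 127).mean())

# Synthetic example: a 100x100 mask with a 20x20 white square "selection".
demo_arr = np.zeros((100, 100), dtype=np.uint8)
demo_arr[40:60, 40:60] = 255
demo = Image.fromarray(demo_arr, mode="L")

cov = mask_coverage(demo)
print(f"Mask covers {cov:.1%} of the image")  # prints "Mask covers 4.0% of the image"
```

A coverage of 0% means the Photoshop selection never reached the workflow; near 100% usually means the mask is inverted. Either would explain inpainting that looks like a very low denoise pass.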
Hi! I managed to get it working, but not with the provided workflow. I am not a programmer, but I knew something wasn't working with the sampler/denoise (for me), so I took an inpainting-with-mask workflow I had, took parts from your SDPPP nodes (getting the image, base prompt, etc.), and combined them. I shared the workflow. I added Florence2 so I don't have to type the base prompt. I am a professional designer and think a few simple changes could make this much easier to use. Right now it's a little awkward to type a layer name and then go back to Comfy to make sure I have the right layer selected for the prompt. If it stays like this, it's easier for me to put the image directly into Comfy and bring it back to Photoshop. I know you are doing this in your own time, so I am not expecting you to implement what I suggest. I think it's a cool tool and wanted to share my thoughts as someone who uses Photoshop and SD for work :)
Yes, sometimes the layer selection does not take effect; I will test it more and fix it in 1.5.1. I'm also considering adding a button to quickly set the layer to the currently selected layer.
Oh! That's a really great idea. It would save me a lot of time if I could do that. Right now I am switching to ComfyUI (easier than opening the small plugin window) to adjust my prompt and select the layer. If I could do that, plus type my prompt directly into the Photoshop plugin, it would be great :D Maybe a custom Positive Prompt node, so it can be displayed in the Photoshop plugin window? Also, I have many workflows saved. Thanks for the great work! I'm planning to use your nodes with https://github.com/erosDiffusion/ComfyUI-enricos-nodes/tree/master, sending images I quickly edit in Photoshop into that workflow.
ComfyUI allows you to bookmark a workflow; I'll show bookmarked workflows as a high priority in the next version. I saw your reddit post and openAI workflow, great work!
That would be great! Yes, I do bookmark my workflows, so that would work. On the other point: when I use Generative Fill in Photoshop, or if I drop in a larger image, there are often pixels outside of the canvas. It seems your node picks up the entire image, not just the visible canvas. The problem is that when I generate with that larger image and send it back to Photoshop, it gets resized. I'm not sure if this is easy to fix, though.
Forgot to fix this in 1.5.1.
The out-of-canvas issue: or actually, you did not use the
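The out-of-canvas behavior described above (the node picking up the whole layer, then the result coming back resized) amounts to cropping the layer to the document bounds before generation. This is a hypothetical sketch of that idea using Pillow, not the plugin's actual code; the layer offset and canvas size would come from the Photoshop document:

```python
from PIL import Image

def crop_to_canvas(layer_img, layer_x, layer_y, canvas_w, canvas_h):
    """Paste the layer at its document offset onto a canvas-sized image,
    discarding any pixels that fall outside the visible canvas."""
    canvas = Image.new("RGBA", (canvas_w, canvas_h), (0, 0, 0, 0))
    # Pillow clips pastes at negative or out-of-range offsets automatically.
    canvas.paste(layer_img, (layer_x, layer_y))
    return canvas

# Example: a 300x300 layer placed at (-50, -50) over a 200x200 canvas.
layer = Image.new("RGBA", (300, 300), (255, 0, 0, 255))
visible = crop_to_canvas(layer, -50, -50, 200, 200)
print(visible.size)  # prints "(200, 200)"
```

Sending only the canvas-sized image to the workflow would keep the generated result at the document's resolution, so nothing gets resized on the way back to Photoshop.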
Hello! I just got this running. It's very cool! :)
I understand that #origin_image, #inpaint_prompt, and #base_prompt need to target the correct layer first.
Right now:
#origin_image: I set this to the main image I am inpainting on.
#inpaint_prompt: it's a text layer; right now I just write 'ancient temple, chinese' like in your video.
#base_prompt: I write a short description of the image. (is it possible to get the image from Photoshop, and use Florence2 to write base prompt?)
Question 1:
Right now I make a selection in Photoshop on a blank layer, and when I hit generate, I can tell it changed my selected area, but it doesn't match my prompt. It just looks like I did inpainting with a very low denoise. I tried setting the denoise higher, but I seem to have trouble getting a distinct generation. What am I doing wrong?
Question 2:
Do I have to manually select the layers (origin, inpaint prompt, base prompt) in ComfyUI to generate correctly each time?
Thank you!