Native InstantID support for ComfyUI.
This extension differs from the many already available as it doesn't use diffusers but instead implements InstantID natively and fully integrates with ComfyUI.
If you like my work and wish to see updates and new features please consider sponsoring my projects.
Not to mention the documentation and video tutorials. For example, check my ComfyUI Advanced Understanding videos on YouTube, part 1 and part 2.
The only way to keep the code open and free is by sponsoring its development. The more sponsorships the more time I can dedicate to my open source projects.
Please consider a Github Sponsorship or PayPal donation (Matteo "matt3o" Spinelli). For sponsorships of $50+, let me know if you'd like to be mentioned in this readme file, you can find me on Discord or matt3o 🐌 gmail.com.
- 2024/02/27: Added noise injection in the negative embeds.
- 2024/02/26: Fixed a small but nasty bug. Results will be different and you may need to lower the CFG.
- 2024/02/20: I refactored the nodes so they are hopefully easier to use. This is a breaking update; the previous workflows won't work anymore.
In the `examples` directory you'll find some basic workflows.
**🎥 Introduction to InstantID features**
Upgrade ComfyUI to the latest version!
Download or git clone this repository into the `ComfyUI/custom_nodes/` directory or use the Manager.
InstantID requires `insightface`; you need to add it to your libraries together with `onnxruntime` and `onnxruntime-gpu`.
The InsightFace model is antelopev2 (not the classic buffalo_l). Download the models (for example from here or here), unzip them and place them in the `ComfyUI/models/insightface/models/antelopev2` directory.
The main model can be downloaded from HuggingFace and should be placed into the `ComfyUI/models/instantid` directory. (Note that the model is called ip_adapter as it is based on the IPAdapter.)
You also need a ControlNet; place it in the ComfyUI controlnet directory.
Remember that at the moment this is SDXL only.
The training data is full of watermarks. To prevent them from showing up in your generations, use a resolution slightly different from 1024×1024 (or the other standard resolutions); for example, 1016×1016 works pretty well.
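To illustrate the tip above, here is a tiny hypothetical helper (not part of this extension) that nudges a side length down by a multiple of 8 until it no longer matches a common training size; the list of "standard" sizes is an assumption for the example:

```python
# Hypothetical helper illustrating the resolution tip; not part of
# the InstantID extension. The set of standard sizes is assumed.
STANDARD_SIZES = {512, 768, 1024, 1152, 1216, 1344, 1536}

def nudge(side: int, step: int = 8) -> int:
    """Shift a side length down by `step` pixels until it no longer
    matches a common training resolution (e.g. 1024 -> 1016)."""
    while side in STANDARD_SIZES:
        side -= step
    return side

print(nudge(1024))  # 1016
print(nudge(768))   # 760
```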
It's important to lower the CFG to at least 4-5, or you can use the `RescaleCFG` node.
The person is posed based on the keypoints generated from the reference image. You can use a different pose by sending an image to the `image_kps` input.
The default InstantID implementation tends to really burn the image. I find that by injecting noise into the negative embeds we can mitigate the effect and also increase the likeness to the reference. The default Apply InstantID node automatically injects 35% noise; if you want to fine-tune the effect, you can use the Advanced InstantID node.
This is still experimental and may change in the future.
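A minimal sketch of the noise-injection idea above, assuming the negative embeds are a plain array; the blending formula and the 0.35 default are illustrative, not the extension's exact code:

```python
import numpy as np

def inject_noise(neg_embeds, strength=0.35, seed=None):
    """Blend Gaussian noise into negative embeds.

    Illustrative sketch only: the real node's scaling may differ.
    With strength=0 the embeds are returned unchanged.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(neg_embeds.shape).astype(neg_embeds.dtype)
    return (1.0 - strength) * neg_embeds + strength * noise

# Dummy negative embeds (the shape is just an example).
neg = np.zeros((1, 77, 2048), dtype=np.float32)
noised = inject_noise(neg, strength=0.35, seed=0)
print(noised.shape)  # (1, 77, 2048)
```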
You can add more controlnets to the generation. An example workflow for depth controlnet is provided.
It's possible to style the composition with IPAdapter. An example is provided.
Multi-ID is supported, but the workflow is a bit complicated and generation is slower. I'll check if I can find a better way of doing it. The "hackish" workflow is provided in the example directory.
There's an advanced InstantID node available; it lets you set the weights for the InstantID model and the ControlNet separately, and it also includes a noise injection option. It might be helpful for fine-tuning.
The InstantID model influences the composition by about 25%; the rest comes from the ControlNet.
The noise helps reduce the "burn" effect.
It works very well with SDXL Turbo/Lightning. Best results are obtained with community checkpoints.
It's only thanks to generous sponsors that the whole community can enjoy open and free software. Please join me in thanking the following companies and individuals!
- RunComfy (ComfyUI Cloud)