Jiwen Yu1, Yinhuai Wang1, Chen Zhao2, Bernard Ghanem2, Jian Zhang1
1 Peking University, 2 KAUST
- News (2023-10-06): We successfully presented our work in Paris. Thank you all for the discussions with us! 😎
- News (2023-08-17): We have released the main code. Details of the ControlNet-related code can be found in `./CN`, while details of the human face and guided-diffusion-related code can be found in `./Face-GD`.
- News (2023-07-16): We have released the code for FreeDoM-SD-Style; detailed information can be found in the directory `./SD_style`.
- News (2023-07-14): 🎉🎉🎉 Congratulations on FreeDoM being accepted by ICCV 2023! Our open-source project is making progress, stay tuned for updates!
- release the camera-ready version of the paper and supplementary materials
- release the code for human face diffusion models and guided diffusion with various training-free guidances
- release the code for ControlNet with training-free face ID guidance and style guidance
- release the code for Stable Diffusion with training-free style guidance
FreeDoM is a simple but effective training-free method for generating results controlled by various conditions using unconditional diffusion models. Specifically, we use off-the-shelf pre-trained networks to construct a time-independent energy function, which measures the distance between the given condition and the intermediate generated image. We then compute the gradient of this energy and use it to guide the generation process. FreeDoM supports various conditions, including texts, segmentation maps, sketches, landmarks, face IDs, and style images. FreeDoM applies to different data domains, including human faces, images from ImageNet, and latent codes.
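At a high level, each reverse diffusion step first estimates the clean image from the current noisy sample, evaluates the energy between that estimate and the condition, and then nudges the sample along the negative energy gradient. Below is a minimal, hypothetical PyTorch sketch of one such guided step, not the released implementation; `denoiser`, `energy_fn`, `alphas_cumprod`, and `rho` are placeholder names used only for illustration:

```python
# Minimal sketch of FreeDoM-style energy-guided sampling (illustrative only;
# function and variable names are placeholders, not from the official code).
import torch

@torch.enable_grad()
def guided_step(x_t, t, denoiser, energy_fn, cond, alphas_cumprod, rho=1.0):
    """One DDIM-like reverse step with training-free energy guidance."""
    x_t = x_t.detach().requires_grad_(True)

    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.ones_like(a_t)

    eps = denoiser(x_t, t)                                  # predicted noise
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # clean-image estimate

    # Time-independent energy: distance between the given condition and the
    # intermediate clean estimate, measured by an off-the-shelf network.
    energy = energy_fn(cond, x0_pred)
    grad = torch.autograd.grad(energy.sum(), x_t)[0]

    # Unconditional DDIM update, then shift along the negative energy gradient.
    x_prev = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps
    x_prev = x_prev - rho * grad
    return x_prev.detach()
```

The time-travel strategy credited in the acknowledgements below can be layered on top of such a step for better sampling.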
Model Source | Data Domain | Resolution | Original Conditions | Additional Training-free Conditions | Sampling Time* (s/image) |
---|---|---|---|---|---|
SDEdit | aligned human face | | None | parsing maps, sketches, landmarks, face IDs, texts | ≈20s |
guided-diffusion | ImageNet | | None | texts, style images | ≈140s |
guided-diffusion | ImageNet | | class label | style images | ≈50s |
Stable Diffusion | general images | | texts | style images | ≈84s |
ControlNet | general images | | human poses, scribbles, texts | face IDs, style images | ≈120s |
*Sampling time was measured on a GeForce RTX 3090 GPU.
Our work stands on the shoulders of giants. We would like to thank the following projects, on which our code is based:
- open-source pre-trained diffusion models:
- (human face models) https://github.com/ermongroup/SDEdit
- (ImageNet models) https://github.com/openai/guided-diffusion
- (Stable Diffusion) https://github.com/CompVis/stable-diffusion
- (ControlNet) https://github.com/lllyasviel/ControlNet
- pre-trained networks for constructing the training-free energy functions:
- (texts, style images) https://github.com/openai/CLIP
- (face parsing maps) https://github.com/zllrunning/face-parsing.PyTorch
- (sketches) https://github.com/Mukosame/Anime2Sketch
- (face landmarks) https://github.com/cunjian/pytorch_face_landmark
- (face IDs) ArcFace (https://arxiv.org/abs/1801.07698)
- time-travel strategy for better sampling:
- (DDNM) https://github.com/wyhuai/DDNM
- (Repaint) https://github.com/andreas128/RePaint
We also highlight some recent works that share the similar idea of updating the clean intermediate results:
- concurrent conditional image generation methods:
- zero-shot image restoration methods:
If this work is helpful for your research, please consider citing the following BibTeX entry.
@inproceedings{yu2023freedom,
title={FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model},
author={Yu, Jiwen and Wang, Yinhuai and Zhao, Chen and Ghanem, Bernard and Zhang, Jian},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2023}
}