OSDFace: One-Step Diffusion Model for Face Restoration

Jingkai Wang, Jue Gong, Lin Zhang, Zheng Chen, Xing Liu, Hong Gu, Yutong Liu, Yulun Zhang, and Xiaokang Yang, "One-Step Diffusion Model for Face Restoration", 2024

[Project Page] [arXiv] [Supplementary] [Releases]

🔥🔥🔥 News

  • 2024-11-25: This repo is released.

Abstract: Diffusion models have demonstrated impressive performance in face restoration. Yet, their multi-step inference process remains computationally intensive, limiting their applicability in real-world scenarios. Moreover, existing methods often struggle to generate face images that are harmonious, realistic, and consistent with the subject’s identity. In this work, we propose OSDFace, a novel one-step diffusion model for face restoration. Specifically, we propose a visual representation embedder (VRE) to better capture prior information and understand the input face. In VRE, low-quality faces are processed by a visual tokenizer and subsequently embedded with a vector-quantized dictionary to generate visual prompts. Additionally, we incorporate a facial identity loss derived from face recognition to further ensure identity consistency. We further employ a generative adversarial network (GAN) as a guidance model to encourage distribution alignment between the restored face and the ground truth. Experimental results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics, generating high-fidelity, natural face images with high identity consistency.
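
The facial identity loss mentioned in the abstract can be pictured as a distance between face-recognition embeddings of the restored face and the ground truth. Below is a minimal sketch of that idea, assuming a PyTorch setup and a frozen face-recognition encoder; the function name, signature, and embedding model are illustrative assumptions, not the released OSDFace implementation.

    import torch
    import torch.nn.functional as F

    def facial_identity_loss(restored, ground_truth, face_encoder):
        """Sketch of an identity-consistency loss (names and shapes are assumptions).

        restored, ground_truth: (B, 3, H, W) face crops in the range the encoder
        expects; face_encoder: any frozen face-recognition network that maps
        images to identity embeddings (e.g., an ArcFace-style model).
        """
        with torch.no_grad():
            target_emb = F.normalize(face_encoder(ground_truth), dim=-1)
        restored_emb = F.normalize(face_encoder(restored), dim=-1)
        # 1 - cosine similarity: 0 when the identities match, up to 2 when opposite.
        return (1.0 - (restored_emb * target_emb).sum(dim=-1)).mean()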



⚒️ TODO

  • Release code and pretrained models

🔗 Contents

  • 🔎 Results
  • 📎 Citation
  • 💡 Acknowledgements

🔎 Results

We achieved state-of-the-art performance on synthetic and real-world datasets. Detailed results can be found in the paper.

Quantitative Comparisons

  • Results in Table 1 on the synthetic dataset (CelebA-Test) from the main paper.

  • Results in Table 2 on the real-world datasets (Wider-Test, LFW-Test, WebPhoto-Test) from the main paper.

Visual Comparisons

  • Results in Figure 5 on the synthetic dataset (CelebA-Test) from the main paper.

  • Results in Figure 6 on the real-world datasets (Wider-Test, LFW-Test, WebPhoto-Test) from the main paper.

More Comparisons on the Synthetic Dataset

  • Results in Figures 4, 5, and 6 on the synthetic dataset (CelebA-Test) from the supplementary material.

More Comparisons on Real-World Datasets

  • Results in Figures 7, 8, 9, and 10 on the real-world datasets (Wider-Test, LFW-Test, WebPhoto-Test) from the supplementary material.

📎 Citation

If you find the code helpful in your research or work, please cite the following paper(s).

    @article{wang2024osdface,
        title={One-Step Diffusion Model for Face Restoration},
        author={Wang, Jingkai and Gong, Jue and Zhang, Lin and Chen, Zheng and Liu, Xing and Gu, Hong and Liu, Yutong and Zhang, Yulun and Yang, Xiaokang},
        journal={arXiv preprint arXiv:2411.17163},
        year={2024}
    }

💡 Acknowledgements

[TBD]
