EVA3D: Compositional 3D Human Generation from 2D Image Collections

S-Lab, Nanyang Technological University
🤩 Accepted to ICLR 2023 as Spotlight

EVA3D is a high-quality, unconditional 3D human generative model that requires only 2D image collections for training.

(Sample results: RGB and geometry renders for two samples, novel pose generation, and latent space interpolation.)

📖 For more visual results, please check out our project page

🍻 Training and inference code released


📣 Updates

[02/2023] Inference code for SHHQ, UBCFashion, and AIST is released.

[02/2023] Training code for DeepFashion with our processed dataset is released.

[02/2023] Inference code (512x256 generation on DeepFashion) is released, including Colab and Hugging Face demos.

[01/2023] EVA3D is accepted to ICLR 2023 (Spotlight) 🥳!

🤟 Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{
    hong2023evad,
    title={{EVA}3D: Compositional 3D Human Generation from 2D Image Collections},
    author={Fangzhou Hong and Zhaoxi Chen and Yushi LAN and Liang Pan and Ziwei Liu},
    booktitle={International Conference on Learning Representations},
    year={2023},
    url={https://openreview.net/forum?id=g7U9jD_2CUr}
}

🖥️ Requirements

An NVIDIA GPU is required for this project. We have tested the inference code on NVIDIA RTX 2080 Ti, V100, A100, and T4; the training code has been tested on NVIDIA V100 and A100. We recommend using Anaconda to manage the Python environment.

conda create --name eva3d python=3.8
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
pip install -r requirements.txt
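After creating the environment, a quick sanity check can confirm that the key dependencies installed above actually resolve. This is a minimal sketch (the `check_env` helper is hypothetical, not part of the repository):

```python
import importlib.util

def check_env(packages=("torch", "torchvision", "pytorch3d")):
    """Map each package name to whether it can be found in this environment."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

if __name__ == "__main__":
    for pkg, found in check_env().items():
        print(f"{pkg}: {'OK' if found else 'MISSING'}")
```

If any package reports `MISSING`, re-run the corresponding conda/pip command before proceeding.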

🏃‍♀️ Inference

Download Models

The pretrained models and the SMPL model are needed for inference.

The following script downloads the pretrained models.

python download_models.py

Register and download the SMPL models here. Put the downloaded models in the folder smpl_models. Only the neutral model is needed. The folder structure should look like:

./
├── ...
└── smpl_models/
    ├── smpl/
        └── SMPL_NEUTRAL.pkl
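Before launching inference, a small helper can verify this layout (a sketch; the function name is hypothetical, the path is the one shown above):

```python
from pathlib import Path

def has_smpl_model(root="."):
    """Return True if smpl_models/smpl/SMPL_NEUTRAL.pkl exists under root."""
    return (Path(root) / "smpl_models" / "smpl" / "SMPL_NEUTRAL.pkl").is_file()

if __name__ == "__main__":
    print("SMPL model found" if has_smpl_model() else "SMPL model missing")
```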

Commands

We provide scripts for running inference with the models trained on DeepFashion, SHHQ, UBCFashion, and AIST.

bash scripts/demo_deepfashion_512x256.sh
bash scripts/demo_shhq_512x256.sh
bash scripts/demo_ubcfashion_512x256.sh
bash scripts/demo_aist_256x256.sh
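For convenience, the four demos above can be dispatched by dataset name. This tiny helper is a sketch (the script paths are taken from the commands above; the `demo_command` function is hypothetical):

```python
# Map dataset name -> demo script (paths as listed above).
DEMO_SCRIPTS = {
    "deepfashion": "scripts/demo_deepfashion_512x256.sh",
    "shhq": "scripts/demo_shhq_512x256.sh",
    "ubcfashion": "scripts/demo_ubcfashion_512x256.sh",
    "aist": "scripts/demo_aist_256x256.sh",
}

def demo_command(dataset):
    """Return the shell command for a dataset's demo script."""
    key = dataset.lower()
    if key not in DEMO_SCRIPTS:
        raise ValueError(f"unknown dataset: {dataset}")
    return f"bash {DEMO_SCRIPTS[key]}"
```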

🚋 Training

DeepFashion

Download SMPL Models & Processed Datasets

python download_models.py
python download_datasets.py

Commands

bash scripts/train_deepfashion_512x256.sh

Intermediate results are saved under checkpoint/train_deepfashion_512x256/volume_renderer/samples every 100 iterations. The first line presents inference images from the EMA generator. The second line presents one inference sample from the training generator and one sample from the training dataset.
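The EMA generator mentioned above keeps an exponential moving average of the training generator's weights, which typically yields smoother samples. The idea can be sketched as follows (a toy illustration on plain floats, not the repository's implementation; the decay value is an assumption):

```python
def ema_update(ema_weights, weights, decay=0.999):
    """Blend current weights into the EMA copy: ema = decay * ema + (1 - decay) * w."""
    return {k: decay * ema_weights[k] + (1.0 - decay) * weights[k]
            for k in ema_weights}
```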

To run inference with the trained models, please refer to the Inference section.

Support for more datasets coming soon...

🗞️ License

Distributed under the S-Lab License. See LICENSE for more information.

🙌 Acknowledgements

This study is supported by NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

This project is built on source codes shared by StyleSDF.
