# DreamScene

Haoran Li, Haolin Shi, Wenli Zhang, Wenjun Wu, Yong Liao, Lin Wang, Lik-hang Lee, Pengyuan Zhou

This repository contains the official implementation of *DreamScene: 3D Gaussian-based Text-to-3D Scene Generation via Formation Pattern Sampling*.

Project Page | arXiv

Note: we compress the preview animations below for faster loading.

Teaser prompts: "A DSLR photo of a ikea style bedroom, ikea style, IKEA" · "A DSLR photo of an autumn park" · "Gray land at the moon, black tranquil universe in the distance, Sci-fi style"

## News

- **2024-07-01**: Our paper has been accepted to ECCV 2024!

## TODO

- Release the code of Formation Pattern Sampling (FPS) for single-object generation.
- Release the code of the full DreamScene pipeline for scene generation, along with our demo video.
- Add more samples and tools for interactive layout generation.

## Getting Started

### Requirements

git clone https://github.com/DreamScene-Project/DreamScene.git
cd DreamScene

conda create -n dreamscene python=3.10
conda activate dreamscene

pip install -r requirements.txt -f https://download.pytorch.org/whl/cu118/torch_stable.html

git clone --recursive https://github.com/DreamScene-Project/comp-diff-gaussian-rasterization.git
git clone https://github.com/YixunLiang/simple-knn.git

pip install comp-diff-gaussian-rasterization/
pip install simple-knn/

# Follow https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"

# Install point-e
git clone https://github.com/crockwell/Cap3D.git
cd Cap3D/text-to-3D/point-e/
pip install -e .
mkdir point_e_model_cache
# Optional: initialize objects with a fine-tuned point-e checkpoint
wget https://huggingface.co/datasets/tiange/Cap3D/resolve/main/misc/our_finetuned_models/pointE_finetuned_with_825kdata.pth
mv pointE_finetuned_with_825kdata.pth point_e_model_cache/
# Set the parameter init_guided to pointe_825k in the configuration file

# or

wget https://huggingface.co/datasets/tiange/Cap3D/resolve/main/misc/our_finetuned_models/pointE_finetuned_with_330kdata.pth
mv pointE_finetuned_with_330kdata.pth point_e_model_cache/
# Set the parameter init_guided to pointe_330k in the configuration file
```
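After installation, a quick import check can confirm that the CUDA-enabled PyTorch build and PyTorch3D were picked up correctly. This is only an optional sanity check, not part of the DreamScene pipeline:

```bash
# Optional sanity check: PyTorch should report a CUDA-capable device and
# PyTorch3D should import without errors.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import pytorch3d; print(pytorch3d.__version__)"
```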

### Generate a Single Object

```bash
python main.py --object --config configs/objects/sample.yaml
```
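To generate your own object with Formation Pattern Sampling, one possible workflow is to copy the sample configuration and edit it. The exact keys (e.g. the text prompt and `init_guided`) are defined by `configs/objects/sample.yaml`, so the sketch below, including the `my_object.yaml` name, is only an illustration:

```bash
# Hypothetical workflow sketch: duplicate the sample config, edit its fields
# (see configs/objects/sample.yaml for the actual schema), then run FPS on it.
cp configs/objects/sample.yaml configs/objects/my_object.yaml
# ... edit configs/objects/my_object.yaml (prompt, init_guided, etc.) ...
python main.py --object --config configs/objects/my_object.yaml
```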

### Generate Entire Scenes

If your GPU has more than 40 GB of VRAM, you can run scene generation on a single card; otherwise, it is recommended to split the work across two cards:

```bash
CUDA_VISIBLE_DEVICES=0,1 python main.py --config configs/scenes/sample_indoor.yaml

CUDA_VISIBLE_DEVICES=2,3 python main.py --config configs/scenes/sample_outdoor.yaml
```
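To decide between the single-card and dual-card setups, you can check how much memory each GPU reports before launching (40 GB is roughly 40960 MiB). This is only a convenience check and not part of DreamScene itself:

```bash
# List each GPU's index, name, and total memory (MiB); prefer the
# single-card command only if one device reports well above 40960 MiB.
nvidia-smi --query-gpu=index,name,memory.total --format=csv
```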

## Acknowledgement

This work builds on many amazing research works and open-source projects. Thanks a lot to all the authors for sharing!

## Citation

If you find DreamScene useful in your research, please consider citing our paper:

```bibtex
@article{li2024dreamscene,
  title={DreamScene: 3D Gaussian-based Text-to-3D Scene Generation via Formation Pattern Sampling},
  author={Li, Haoran and Shi, Haolin and Zhang, Wenli and Wu, Wenjun and Liao, Yong and Wang, Lin and Lee, Lik-hang and Zhou, Pengyuan},
  journal={arXiv preprint arXiv:2404.03575},
  year={2024}
}
```