This repository contains the code and dataset that accompany our paper MIME: Human-Aware 3D Scene Generation.
Below you can find detailed instructions for training your own models, using our pretrained models, and performing the interactive tasks described in the paper.
If you find this work influential or helpful for your research, please consider citing:
```bibtex
@inproceedings{yi2022mime,
  title = {{MIME}: Human-Aware {3D} Scene Generation},
  author = {Yi, Hongwei and Huang, Chun-Hao P. and Tripathi, Shashank and Hering, Lea and Thies, Justus and Black, Michael J.},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2023}
}
```
- See docs/installation.md to install all the required packages and pretrained models.
- See docs/dataset.md to download the 3D-FRONT-Human datasets and learn how to add free-space and contact humans inside a 3D room.
Please download the following files from the download webpage; a sketch of how to unpack them into data/ follows the list below.
- Download the MIME_CKPT checkpoint and put it into data/CKPT.
- Download the other required code_data and put it into data.
- Download the preprocessed 3D-FRONT-HUMAN data preprocess_3DFRONTHUMAN_input.tar.gz and unzip it into data/preprocess_3DFRONTHUMAN_input.
- Download our preprocessed 3DFRONTHUMAN_relative_path_pkl.zip for 3D-FRONT-Human and unzip it into data/3DFRONTHUMAN_relative_path_pkl.
- Download our preprocessed body input input_refinement for running scene refinement and unzip it into data/input_refinement.
- Download several sample scenes from 3D-FRONT-HUMAN for visualization.
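Assuming the archive names listed above, the unpacking steps look roughly like the sketch below; the exact file names come from the download page, so adjust them if they differ (in particular, the archive name for the refinement input is assumed here):

```bash
# Sketch only: archive names follow the list above and may differ from the download page.
mkdir -p data/CKPT                                        # place MIME_CKPT here

# preprocessed 3D-FRONT-HUMAN network input
tar -xzf preprocess_3DFRONTHUMAN_input.tar.gz -C data/
# expected result: data/preprocess_3DFRONTHUMAN_input

# relative-path pickles for 3D-FRONT-Human
unzip 3DFRONTHUMAN_relative_path_pkl.zip -d data/
# expected result: data/3DFRONTHUMAN_relative_path_pkl

# body inputs for scene refinement (archive name assumed)
unzip input_refinement.zip -d data/
# expected result: data/input_refinement
```

If an archive does not already contain its top-level folder, create the target directory first and extract into it directly.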
You need to modify the following variables in env.sh:
```bash
# ! need to modify.
export PYTHONPATH=${change_to_python_path}:$PYTHONPATH
export CODE_ROOT_DIR=${change_to_code_path}
export DATA_ROOT_DIR=${change_to_original_3DFRONT_path}
```
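For reference, a filled-in env.sh could look like the sketch below; the concrete paths are placeholders and depend on where you cloned the code and where the original 3D-FRONT data is stored:

```bash
# Example values only -- replace with the paths on your machine.
export PYTHONPATH=/home/user/mime:$PYTHONPATH          # make this repository importable by Python
export CODE_ROOT_DIR=/home/user/mime                   # root of this code repository
export DATA_ROOT_DIR=/home/user/datasets/3D-FRONT      # location of the original 3D-FRONT dataset
```

Source the file (e.g. source env.sh) in each new shell before running the training, inference, or visualization steps described below.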
- See docs/inference.md to generate scenes from input humans.
- See docs/refinement.md to refine the scene layout with geometric human-scene interaction details.
- See docs/visualization.md to visualize the 3D-FRONT-HUMAN dataset and the distributions of object locations in our MIME-generated results.
- See docs/training.md to train your own models.
- See docs/evaluation.md to benchmark your pretrained models.
We thank Despoina Paschalidou and Wamiq Para for useful feedback on the reimplementation of ATISS,
Yuliang Xiu, Weiyang Liu, Yandong Wen, and Yao Feng for insightful discussions,
and Benjamin Pellkofer for IT support.
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B.
Our MIME architecture is built on top of ATISS.
MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon.
MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH.
JT has received research gift funds from Microsoft Research.