Code and instructions for our paper: House-GAN: Relational Generative Adversarial Networks for Graph-constrained House Layout Generation, ECCV 2020.
LIFULL HOME's database offers five million real floorplans, from which we retrieved 117,587. The database does not contain bubble diagrams. We used the floorplan vectorization algorithm [1] to generate the vector-graphics format, which we later converted into room bounding boxes and bubble diagrams. The vectorized floorplans used in this paper can be found here; note that this dataset does not include the original RGB images from the LIFULL dataset.
[1] Liu, C., Wu, J., Kohli, P., Furukawa, Y.: Raster-to-vector: Revisiting floorplan transformation, ICCV 2017.
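As a quick sanity check on the downloaded data, the following is a minimal sketch for loading and inspecting train_data.npy with NumPy. It assumes the file stores pickled Python objects (one entry per floorplan); the exact per-entry layout is not documented here and may differ from what is printed.

```python
# Minimal sketch: load and inspect the vectorized floorplan dataset.
# Assumes train_data.npy stores pickled Python objects (one entry per floorplan);
# the exact per-entry layout may differ.
import numpy as np

data = np.load("train_data.npy", allow_pickle=True)
print("number of floorplans:", len(data))

sample = data[0]
# Each entry is expected to bundle room types, room bounding boxes, and the
# bubble-diagram connectivity derived from the vectorized floorplan.
for i, field in enumerate(sample):
    print(f"field {i}: {type(field).__name__}")
```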
See requirements.txt to check the dependencies before running the code.
To run a pretrained model, follow these steps:
- Download the pretrained model and dataset here.
- Place them anywhere and rename the dataset to train_data.npy.
- Set the paths in variation_bbs_with_target_graph_segments_suppl.py to the folder containing train_data.npy and to the pretrained model (a hedged path-check sketch follows this list).
- Run python variation_bbs_with_target_graph_segments_suppl.py.
- Check the results in the output folder.
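Before editing the script, a quick pre-flight check like the sketch below can confirm that the files are where you expect. The variable names and the checkpoint filename are placeholders, not the repository's actual identifiers.

```python
# Hypothetical pre-flight check before pointing
# variation_bbs_with_target_graph_segments_suppl.py at your files.
# DATA_DIR, CHECKPOINT_PATH, and the file names below are placeholders.
from pathlib import Path

DATA_DIR = Path("/path/to/data_folder")            # folder containing train_data.npy
CHECKPOINT_PATH = Path("/path/to/pretrained.pth")  # downloaded pretrained model

assert (DATA_DIR / "train_data.npy").exists(), "rename the downloaded dataset to train_data.npy"
assert CHECKPOINT_PATH.exists(), "download the pretrained model first"
print("Paths look good; set them inside the script and run it.")
```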
See requirements.txt to check the dependencies before running the code.
To train a model from scratch, follow these steps:
- Download the dataset here.
- Place housegan_clean_data.npy anywhere and rename it to train_data.npy.
- Set the path in main.py to the folder containing train_data.npy.
- Run python main.py --target_set D --exp_folder exp_example. The target_set argument selects which portion of the graphs to hold out for cross-validation, where D means graphs of size 10-12 (see the sketch after this list).
- You may also want to customize the interval for probing the generator by setting sample_interval in main.py.
- Check the exps and checkpoint folders for intermediate outputs and checkpoints, respectively.
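To make the hold-out idea concrete, here is a hedged sketch of filtering the dataset by graph size. Only split D (graphs of size 10-12) is stated above, and the graph_size accessor is a placeholder rather than the repository's actual code.

```python
# Hedged sketch of the cross-validation hold-out implied by --target_set.
# Only split D (graphs of size 10-12) is stated in this README; graph_size()
# is a placeholder and not the repository's actual field layout.
import numpy as np

data = np.load("train_data.npy", allow_pickle=True)

def graph_size(entry):
    # Placeholder: assumes the first field of each entry lists the room nodes.
    return len(entry[0])

lo, hi = 10, 12  # target_set D per the README
train_split = [e for e in data if not (lo <= graph_size(e) <= hi)]
held_out    = [e for e in data if lo <= graph_size(e) <= hi]
print(f"training graphs: {len(train_split)}, held-out (D) graphs: {len(held_out)}")
```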
Please cite our paper if you find this code or data useful:
@inproceedings{nauata2020house,
title={House-gan: Relational generative adversarial networks for graph-constrained house layout generation},
author={Nauata, Nelson and Chang, Kai-Hung and Cheng, Chin-Yi and Mori, Greg and Furukawa, Yasutaka},
booktitle={European Conference on Computer Vision},
pages={162--177},
year={2020},
organization={Springer}
}
If you have any questions, feel free to contact me at nnauata@sfu.ca.
This research is partially supported by NSERC Discovery Grants, NSERC Discovery Grants Accelerator Supplements, DND/NSERC Discovery Grant Supplement, and Autodesk. We would like to thank architects and students for participating in our user study.