Generative View Synthesis: From Single-view Semantics to Novel-view Images. Tewodros Habtegebrial , Varun Jampani , Orazio Gallo , Didier Stricker
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS-2020)
The project page can be found here
This code was developed with Python 3.6 and requires the following packages:
scikit-image
torch # tested with version 1.7.1, but it might work with older versions as well
tqdm
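A minimal way to install these dependencies, assuming a standard pip-based environment (the repository does not ship an official installation script):

pip install scikit-image tqdm
pip install torch==1.7.1  # or another compatible version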
Download a pre-trained model for the CARLA dataset from here. Extract it to the folder pre_trained_models/carla
Download 34 sample scenes (from the CARLA dataset) for demo purposes from this link. Extract the dataset to the folder datasets/carla_samples
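As a rough sketch of the two setup steps above (the archive names below are placeholders, not the actual file names behind the download links):

mkdir -p pre_trained_models/carla datasets/carla_samples
# Hypothetical archive names; use whatever files the links above provide
unzip carla_model.zip -d pre_trained_models/carla
unzip carla_samples.zip -d datasets/carla_samples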
CUDA_VISIBLE_DEVICES=0 python demo.py \
--dataset=carla_samples \
--mode=demo \
--movement_type=circle \
--data_path=./datasets/carla_samples \
--output_path=./output/carla_samples \
--pre_trained_gvsnet=./pre_trained_models/carla/gvsnet_model.pt \
--style_path=./data/sample_styles/carla_1.png
A style image can be passed with the --style_path flag. If it is not given, the color image of the input view is used as the style image.
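For example, to use the input view's own colors as the style, simply omit the flag:

CUDA_VISIBLE_DEVICES=0 python demo.py \
--dataset=carla_samples \
--mode=demo \
--movement_type=circle \
--data_path=./datasets/carla_samples \
--output_path=./output/carla_samples \
--pre_trained_gvsnet=./pre_trained_models/carla/gvsnet_model.pt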
For downloading the datasets used in our experiments, please read the instructions here: datasets
Please check the scripts folder for the training scripts, and see options.py for a list of command-line arguments.
Recommended batch size and number of epochs for the SUN model: batch_size=12 or higher, num_epochs=30.
cd scripts
./train_sun_carla
Recommended batch size and number of epochs for the LTN+ADN model: batch_size=16 or higher, num_epochs=20.
cd scripts
./train_gvs_carla.sh
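The recommended values above can presumably be overridden through the arguments defined in options.py. The entry-point name and flag names in the following sketch (train.py, --batch_size, --num_epochs) are assumptions and should be verified against the scripts folder and options.py:

# Hypothetical invocation; check scripts/ and options.py for the actual entry point and flag names
python train.py --dataset=carla --batch_size=16 --num_epochs=20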
Acknowledgments: This repo builds upon the SPADE repository from NVIDIA