Deep 3D Mask Volume for View Synthesis of Dynamic Scenes

Official PyTorch Implementation of paper "Deep 3D Mask Volume for View Synthesis of Dynamic Scenes", ICCV 2021.

Kai-En Lin¹, Lei Xiao², Feng Liu², Guowei Yang¹, Ravi Ramamoorthi¹

¹University of California, San Diego, ²Facebook Reality Labs

Project Page | Paper | Supplementary Materials | Pretrained models | Dataset | Preprocessing script

Requirements

Install required packages

Make sure you have up-to-date NVIDIA drivers supporting CUDA 11.1 (CUDA 10.2 may also work, but you will need to change the cudatoolkit package in the environment file accordingly).

Run

conda env create -f environments.yml
conda activate video_viewsynth
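
After installation, you can sanity-check that PyTorch sees your GPU. This is just a quick verification, not part of the official setup:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"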

Usage

Rendering

  1. Download our pretrained checkpoint and testing data, and extract the contents to [path_to_data_directory]. It contains the frames and background folders, as well as poses_bounds.npy.
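
    For reference, poses_bounds.npy appears to follow the standard LLFF convention (the renderer is named render_llff_video.py): an N x 17 array in which each row is a flattened 3x5 pose matrix (a 3x4 camera-to-world matrix plus a [height, width, focal] column) followed by the near and far depth bounds. A minimal Python sketch for inspecting it, assuming that convention:

    import numpy as np

    data = np.load('poses_bounds.npy')        # shape (N, 17), one row per camera
    poses = data[:, :15].reshape(-1, 3, 5)    # 3x4 pose + [H, W, focal] column
    bounds = data[:, 15:]                     # near/far depth bounds per camera
    print(poses.shape, bounds.shape)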

  2. In configs, set up the data paths by editing render_video.txt (a sketch of the file is shown below).

    root_dir should point to the frames folder from step 1, and bg_dir should point to the background folder.

    out_dir can be your desired output folder.

    ckpt_path should be the pretrained checkpoint path.
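
    A sketch of what render_video.txt might contain; the exact syntax depends on the repo's config parser, so treat these lines as placeholders rather than verbatim contents:

    root_dir = [path_to_data_directory]/frames
    bg_dir = [path_to_data_directory]/background
    out_dir = [path_to_output_directory]
    ckpt_path = [path_to_checkpoint]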

  3. Run python render_llff_video.py --config [config_file_path]

    e.g. python render_llff_video.py --config ../configs/render_video.txt

  • (Optional) For your own data, run the preprocessing script prepare_data.sh first, then render with:

    sh render.sh [frame_folder] [starting_frame] [ending_frame] [output_folder_name]

    Make sure your data is in the following structure before running:

    [frame_folder] --- cam00 --- 00000.jpg
                    |         |- 00001.jpg
                    |         ...
                    |- cam01
                    |- cam02
                    ...
                    |- poses_bounds.npy
    

    e.g. sh render.sh ~/deep_3d_data/frames 0 20 qual
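
    If useful, here is a small hypothetical Python check (not part of the repo) that verifies a frame folder matches the layout above:

    import sys
    from pathlib import Path

    root = Path(sys.argv[1])                  # e.g. ~/deep_3d_data/frames
    assert (root / 'poses_bounds.npy').exists(), 'missing poses_bounds.npy'
    cams = sorted(p for p in root.iterdir() if p.is_dir() and p.name.startswith('cam'))
    assert cams, 'no cam** folders found'
    for cam in cams:
        print(f'{cam.name}: {len(sorted(cam.glob("*.jpg")))} frames')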

Training

Train MPI

  1. Download the RealEstate10K dataset and extract the frames. The scripts in the preprocessing folder can be used to generate the data, as sketched below.

    The order should be download_data.py -> extract_frames.py -> compress_data.py.

    Remember to change the paths in compress_data.py.
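
    As a rough sketch, the preprocessing pipeline looks like this (the exact arguments each script expects may differ; check the scripts themselves before running):

    cd preprocessing
    python download_data.py      # download RealEstate10K videos
    python extract_frames.py     # extract frames from the downloaded videos
    python compress_data.py      # pack frames for training (edit the paths first)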

  2. Change the paths in the config file train_realestate10k.txt

  3. Run

    cd train_mpi
    python train.py --config ../configs/train_realestate10k.txt
    

Train Mask

Once the MPI network is trained, we can use its checkpoint to train the 3D mask network.

  1. Download dataset

  2. Change the paths in the config file train_mask.txt

  3. Run

    cd train_mask
    python train.py --config ../configs/train_mask.txt
    

Citation

@inproceedings{lin2021deep,
    title = {Deep 3D Mask Volume for View Synthesis of Dynamic Scenes},
    author = {Kai-En Lin and Lei Xiao and Feng Liu and Guowei Yang and Ravi Ramamoorthi},
    booktitle = {ICCV},
    year = {2021},
}
