Video Frame Interpolation without Temporal Priors (a general method for blurry video interpolation)

Video Frame Interpolation without Temporal Priors (NeurIPS2020)

[Paper] [video]

How to run

Prerequisites

  • NVIDIA GPU + CUDA 9.0 + cuDNN 7.6.5
  • PyTorch 1.1.0

First, clone the project:

git clone https://github.com/yjzhang96/UTI-VFI 
cd UTI-VFI
mkdir pretrain_models

Download the pretrained model weights from Google Drive. Put the weights "SEframe_net.pth" and "refine_net.pth" into the directory "./pretrain_models"; put "model.ckpt" and "network-default.pytorch" into the directory "./utils".
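An optional sanity check before running anything: confirm the four weight files landed in the directories named above (paths taken from this README; the loop itself is just a convenience, not part of the repository):

```shell
# Report any of the four expected weight files that are missing
for f in pretrain_models/SEframe_net.pth pretrain_models/refine_net.pth \
         utils/model.ckpt utils/network-default.pytorch; do
  [ -f "$f" ] || echo "missing: $f"
done
```

If the loop prints nothing, all four files are in place.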

Dataset

Download the GoPro dataset with all of the high-frame-rate video frames from GOPRO_Large_all, and generate blurry videos for different exposure settings. You can generate the test datasets by running:

python utils/generate_blur.py
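Blur synthesis of this kind typically averages several consecutive sharp high-frame-rate frames to mimic one frame captured over a longer exposure window. A minimal sketch of the idea (the function name and exposure length here are illustrative, not `generate_blur.py`'s actual interface):

```python
import numpy as np

def synthesize_blur(sharp_frames, exposure=7):
    """Average `exposure` consecutive sharp frames to simulate one
    blurry frame captured over that exposure window.
    `sharp_frames`: sequence of HxWx3 uint8 frames."""
    stack = np.stack(sharp_frames[:exposure]).astype(np.float32)
    return stack.mean(axis=0).round().astype(np.uint8)

# Toy example: 7 flat 4x4 frames with increasing brightness
frames = [np.full((4, 4, 3), i * 30, dtype=np.uint8) for i in range(7)]
blurry = synthesize_blur(frames, exposure=7)  # every pixel is the mean, 90
```

Varying `exposure` (and the gap skipped between exposures) is what produces the different exposure settings mentioned above.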

Test

After preparing the test datasets, you can run the test using the following command:

sh run_test.sh

Note that to test the model on the GoPro dataset (which has ground truth to compare against), you need to set the argument "--test_type" to "validation". If you want to test the model on real-world videos (without ground truth), use "real_world" instead.
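The "--test_type" switch presumably gates whether metrics are computed against ground truth. A minimal sketch of how such a flag might be parsed and used (hypothetical argparse code, not the repository's actual implementation):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--test_type",
    choices=["validation", "real_world"],
    default="validation",
    help="'validation': GoPro data with ground truth (compute metrics); "
         "'real_world': real videos without ground truth (skip metrics)",
)

# Simulate passing the flag on the command line
args = parser.parse_args(["--test_type", "real_world"])
compute_metrics = args.test_type == "validation"  # False for real-world input
```

With "real_world", the script would only write interpolated frames; with "validation", it would additionally compare against the ground-truth frames.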

Citation

@inproceedings{Zhang2019video,
  title={Video Frame Interpolation without Temporal Priors},
  author={Zhang, Youjian and Wang, Chaoyue and Tao, Dacheng},
  booktitle={Advances in Neural Information Processing Systems},
  year={2020}
}

Acknowledgment

The code of the interpolation module borrows heavily from QVI.
