
Deep Video Deblurring Using Sharpness Features from Exemplars (Submitted to IEEE TIP, 2020)

Dependencies and Installation
  • Python 3 (Anaconda is recommended)
  • PyTorch 0.4.1
  • Linux (Tested on Ubuntu 18.04)
  • numpy
  • tqdm
  • imageio
  • matplotlib
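
A minimal environment setup, assuming the pip package names below (PyTorch 0.4.1 itself is best installed from the official archived wheels or via conda, matched to your CUDA version):

pip install numpy tqdm imageio matplotlib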
Dataset Preparation

We use the GOPRO_Su dataset to train our models. You can download it from here and put the dataset into 'train_dataset/'. The dataset should be organized in the following form:

|--dataset name
 |--train
  |--video 01
   |--input
    |--frame 01
    |--frame 02
    |...
   |--GT
    |--frame 01
    |--frame 02
    |...
  |--video 02
  ...
 |--val
 |--test
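
As a sanity check of this layout, the following minimal sketch (not the repository's actual data loader; the root path and the one-to-one file naming between 'input' and 'GT' are assumptions) pairs each blurry frame with its ground-truth frame:

# Sketch only: pair blurry frames with ground-truth frames from the layout above.
import os
from glob import glob

def collect_pairs(root, split='train'):
    """Return (blurry, sharp) frame path pairs for every video in a split."""
    pairs = []
    split_dir = os.path.join(root, split)
    for video in sorted(os.listdir(split_dir)):
        blur_dir = os.path.join(split_dir, video, 'input')
        gt_dir = os.path.join(split_dir, video, 'GT')
        for blur_path in sorted(glob(os.path.join(blur_dir, '*'))):
            gt_path = os.path.join(gt_dir, os.path.basename(blur_path))
            if os.path.exists(gt_path):  # assumes matching file names in input/ and GT/
                pairs.append((blur_path, gt_path))
    return pairs

# Example (hypothetical root path): pairs = collect_pairs('train_dataset/GOPRO_Su', 'train')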
Training
  • Download the FlowNet pretrained model from Baidu Drive (password:2gca) and put it into 'pretrained_model/'.
  • Prepare the dataset in the form described above.
  • Train the model. Hyperparameters such as the batch size, learning rate, and number of epochs can be set on the command line (see the option sketch after the command below):
python main.py --batch_size 4 --patch_size 256 --lr 1e-4 --epochs 500 --save_models
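
The sketch below summarizes the command-line interface implied by the commands in this README. The flag names mirror the README, but the default values and the exact option set are assumptions, not the repository's actual definitions in main.py:

# Assumed CLI sketch; not the repository's actual argument parser.
import argparse

parser = argparse.ArgumentParser(description='DVD-SFE training / testing')
parser.add_argument('--batch_size', type=int, default=4, help='mini-batch size')
parser.add_argument('--patch_size', type=int, default=256, help='training patch size')
parser.add_argument('--lr', type=float, default=1e-4, help='initial learning rate')
parser.add_argument('--epochs', type=int, default=500, help='number of training epochs')
parser.add_argument('--save_models', action='store_true', help='save intermediate checkpoints')
parser.add_argument('--pre_train', type=str, default='', help='path to a pretrained model')
parser.add_argument('--test_only', action='store_true', help='skip training and only run evaluation')
args = parser.parse_args()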
Testing
python main.py --pre_train model_path --test_only
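
For example, with a checkpoint saved at 'pretrained_model/DVD_SFE.pt' (a hypothetical path; substitute the location of your trained or downloaded model):

python main.py --pre_train pretrained_model/DVD_SFE.pt --test_only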
Testing your own data
  • Put your test data into 'inference/', organized in the same way as the provided exemplars.
  • Then run the following command:
python inference.py
Citation
@article{xiang2020DVD-SFE,
  title={Deep Video Deblurring Using Sharpness Features from Exemplars},
  author={Xiang, Xinguang and Wei, Hao and Pan, Jinshan},
  journal={IEEE Transactions on Image Processing},
  year={2020}
}
