
Real-Time Video Deblurring via Lightweight Motion Compensation
Official PyTorch Implementation of the PG 2022 Paper
Project | Paper | arXiv

This repo contains training and evaluation code for the following paper:

Real-Time Video Deblurring via Lightweight Motion Compensation
*Hyeongseok Son, *Junyong Lee, Sunghyun Cho, and Seungyong Lee (*equal contribution)
POSTECH
Pacific Graphics (PG) 2022 (special issue of the Computer Graphics Forum (CGF))

Getting Started

Prerequisites

Tested environment

Ubuntu 18.04, Python 3.8, PyTorch 1.12.1, CUDA 10.2–11.6 (see the install scripts below)

1. Environment setup

$ git clone https://github.com/codeslake/RealTime_VDBLR.git
$ cd RealTime_VDBLR

$ conda create -y --name RealTime_VDBLR python=3.8 && conda activate RealTime_VDBLR

# Install PyTorch (1.12.1, for example)
$ conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge

# Install required dependencies (run the script matching your CUDA version)
# for CUDA10.2
$ ./install/install_CUDA10.2.sh
# for CUDA11.1
$ ./install/install_CUDA11.1.sh
# for CUDA11.3
$ ./install/install_CUDA11.3.sh
# for CUDA11.6
$ ./install/install_CUDA11.6.sh
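After installation, a quick sanity check such as the one below (a minimal sketch, not part of this repo) confirms that PyTorch and CUDA are wired up correctly:

# verify the PyTorch/CUDA install (illustrative only)
import torch

print(torch.__version__)          # e.g. 1.12.1
print(torch.version.cuda)         # should match the installed CUDA toolkit
print(torch.cuda.is_available())  # True if a GPU is visible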

2. Datasets

Download and unzip datasets under [DATASET_ROOT]:

[DATASET_ROOT]
    ├── train_DVD
    ├── test_DVD
    ├── train_nah
    ├── test_nah
    └── REDS
        └── reds_lmdb
            ├── reds_info_train.pkl
            ├── reds_info_valid.pkl
            ├── reds_train
            ├── reds_train_gt
            ├── reds_valid
            └── reds_valid_gt

[DATASET_ROOT] can be modified with config.data_offset in ./configs/config.py.
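For example, the edit might look like the following (a hypothetical snippet; only the config.data_offset name comes from this README, the surrounding contents of ./configs/config.py are assumed):

# in ./configs/config.py (hypothetical edit)
config.data_offset = '/data/datasets'  # your [DATASET_ROOT]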

3. Pre-trained models

Download pretrained weights (OneDrive | Dropbox) and place them under ./ckpt/:

RealTime_VDBLR
├── ...
├── ckpt
│   ├── liteFlowNet.pytorch
│   ├── MTU#_DVD.pytorch
│   ├── MTU#_GoPro.pytorch
│   ├── MTU#_REDS.pytorch
│   └── ...
└── ...
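To confirm a download is intact before running evaluation, a checkpoint can be inspected along these lines (a minimal sketch; only the file name comes from the tree above, the checkpoint layout is an assumption):

# inspect a downloaded checkpoint (illustrative only)
import torch

state = torch.load('./ckpt/liteFlowNet.pytorch', map_location='cpu')
print(type(state))
if isinstance(state, dict):
    print(list(state.keys())[:5])  # peek at the first few entries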

Testing models of PG 2022

For the PSNRs and SSIMs reported in the paper, we follow Su et al. and adopt the approach of Koehler et al., which first aligns the two images with a global translation to account for the pixel-location ambiguity caused by blur.
Refer here for the evaluation code.
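As an illustration of the alignment idea only (the linked code is authoritative), the metric can be sketched as a brute-force search over integer global translations that keeps the best PSNR, assuming 8-bit images:

# translation-aligned PSNR (illustrative sketch, not the official metric)
import numpy as np

def psnr(a, b):
    # peak signal-to-noise ratio for 8-bit images
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def aligned_psnr(pred, gt, max_shift=10):
    # try every integer global translation of `pred` and keep the best PSNR
    h, w = gt.shape[:2]
    best = -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ys, ye = max(0, dy), min(h, h + dy)  # overlapping rows in gt
            xs, xe = max(0, dx), min(w, w + dx)  # overlapping cols in gt
            p = pred[ys - dy:ye - dy, xs - dx:xe - dx]
            g = gt[ys:ye, xs:xe]
            best = max(best, psnr(p, g))
    return best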

# [n]-stack MTUs evaluated on the [DVD|GoPro|REDS] dataset
./script_eval/MTU[n]_[dataset].py

# for 2-stack amp model,
./script_eval/MTU2_amp_[dataset].py

# for 10-stack large model,
./script_eval/MTU10_L_[dataset].py

# for example, the 4-stack model on the REDS dataset,
./script_eval/MTU4_REDS.py

Testing results will be saved in [LOG_ROOT]/PG2022_RealTime_VDBLR/[mode]/result/eval/[mode]_[epoch]/[data]/.

[LOG_ROOT] can be modified with config.log_offset in ./configs/config.py.
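Analogously, a hypothetical edit for the log path (only the config.log_offset name comes from this README):

# in ./configs/config.py (hypothetical edit)
config.log_offset = '/data/logs'  # your [LOG_ROOT]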

Options

  • --data: The dataset to evaluate: DVD | nah | REDS. Default: DVD (see the usage example below this list)
    • The data structure can be modified in the function set_eval_path(..) in ./configs/config.py.
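For instance, assuming the evaluation scripts forward command-line flags (the flag name comes from this README):

$ ./script_eval/MTU2_DVD.py --data DVD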

Wiki

Contact

Open an issue for any inquiries, or contact sonhs@postech.ac.kr or junyonglee@postech.ac.kr.

License

This software is made available under a CC BY-NC license, under the terms in the LICENSE file. Any exemptions to these terms require a license from the Pohang University of Science and Technology.

Citation

If you find this code useful, please consider citing:

@InProceedings{Son2022RTDeblur,
    author    = {Hyeongseok Son and Junyong Lee and Sunghyun Cho and Seungyong Lee},
    title     = {Real-Time Video Deblurring via Lightweight Motion Compensation},
    booktitle = {Pacific Graphics},
    year      = {2022},
}
