Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring

Huicong Zhang, Haozhe Xie, Hongxun Yao

Harbin Institute of Technology, S-Lab, Nanyang Technological University

Overview

Update

  • [2024/07/04] The training and testing code is released.
  • [2024/02/29] The repo is created.

Datasets

We use the GoPro and DVD datasets in our experiments, which are available below:

You can download the zip files and then extract them to the datasets folder.
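After extraction, the datasets folder is expected to contain one subfolder per dataset. As a minimal sketch (the archive names below are placeholders; use the actual file names you downloaded):

# Placeholder archive names; replace with the downloaded files.
mkdir -p datasets
unzip GoPro.zip -d datasets/
unzip DVD.zip -d datasets/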

Pretrained Models

You can download the pretrained models from here and put the weights in model_zoos.
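As a rough sketch (the weight file name below is a placeholder for whatever the download is called), the weights only need to be moved into the model_zoos folder:

# Placeholder file name; replace with the downloaded weights.
mkdir -p model_zoos
mv /path/to/downloaded_weights.pth model_zoos/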

Prerequisites

Clone the Code Repository

git clone https://github.com/huicongzhang/BSSTNet.git

Install Dependencies

conda create -n BSSTNet python=3.8
conda activate BSSTNet
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.7.1 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9/index.html
pip install -r requirements.txt
BASICSR_EXT=True python setup.py develop
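As an optional sanity check, you can confirm that the CUDA build of PyTorch and mmcv-full imported correctly before running anything (the first command should print 1.9.1+cu111 and True on a machine with a working GPU setup):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import torchvision; print(torchvision.__version__)"
python -c "import mmcv; print(mmcv.__version__)"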

Test

To test BSSTNet, you can simply use the following commands:

GoPro dataset

scripts/dist_test.sh 2 options/test/BSST/gopro_BSST.yml

DVD dataset

scripts/dist_test.sh 2 options/test/BSST/dvd_BSST.yml
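The first argument to dist_test.sh is presumably the number of GPUs used for distributed testing. Under that assumption (not verified against the script itself), a single-GPU run would look like:

scripts/dist_test.sh 1 options/test/BSST/gopro_BSST.yml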

Train

To train BSSTNet, you can simply use the following commands:

GoPro dataset

scripts/dist_train.sh 2 options/test/BSST/gopro_BSST.yml

DVD dataset

scripts/dist_train.sh 2 options/test/BSST/dvd_BSST.yml

Cite this work

@inproceedings{zhang2024bsstnet,
  title     = {Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring},
  author    = {Zhang, Huicong and 
               Xie, Haozhe and 
               Yao, Hongxun},
  booktitle = {CVPR},
  year      = {2024}
}

License

This project is open-sourced under the MIT license.

Acknowledgement

This project is based on BasicSR, ProPainter and Shift-Net.
