PyTorch code for "Spatio-Temporal Deformable Attention Network for Video Deblurring" (ECCV 2022).
We use the GoPro, DVD and BSD datasets in our experiments, which are available below:
You can download the pretrained model from here and put the weights in the weights folder.
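For example, assuming the downloaded checkpoint keeps the name used in the test command below (DVD_release.pth) and sits in your downloads directory (both are assumptions), placing it from the repository root could look like this:
mkdir -p weights                                  # create the weights folder if it does not exist
mv ~/Downloads/DVD_release.pth ./weights/         # move the downloaded checkpoint into place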
git clone https://github.com/huicongzhang/STDAN.git
conda create -n STDAN python=3.7
conda activate STDAN
conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=10.2 -c pytorch
cd STDAN
sh install.sh
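After install.sh finishes, a quick generic sanity check (not part of the official setup) is to confirm that the pinned PyTorch build is importable and sees your GPU:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"   # expect roughly: 1.10.1 True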
To train STDAN, you can simply use the following command:
python runner.py --data_path=/yourpath/DeepVideoDeblurring_Dataset/quantitative_datasets --data_name=DVD --phase=train
To test STDAN, you can simply use the following command:
python runner.py --data_path=/yourpath/DeepVideoDeblurring_Dataset/quantitative_datasets --data_name=DVD --phase=test --weights=./weights/DVD_release.pth
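The same pattern presumably carries over to the other datasets. A hedged sketch for GoPro is shown below; the dataset identifier (GoPro), the dataset path, and the checkpoint filename (GoPro_release.pth) are assumptions, so check the settings linked below for the exact values.
# hypothetical GoPro training run; the --data_name value and path layout are assumptions
python runner.py --data_path=/yourpath/GOPRO_Large --data_name=GoPro --phase=train
# hypothetical GoPro evaluation with a correspondingly named checkpoint
python runner.py --data_path=/yourpath/GOPRO_Large --data_name=GoPro --phase=test --weights=./weights/GoPro_release.pth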
More training and testing settings can be found here.
Some video results are shown here.
@inproceedings{zhang2022spatio,
  title={Spatio-Temporal Deformable Attention Network for Video Deblurring},
  author={Zhang, Huicong and Xie, Haozhe and Yao, Hongxun},
  booktitle={ECCV},
  year={2022}
}
This project is open-sourced under the MIT license.
This project is based on STFAN.