The official repository with the PyTorch implementation. Our paper can be downloaded from [arXiv].
Clone this repo:

```
git clone https://github.com/Francis0625/OmniSR.git
cd OmniSR
```
Dependencies:
- PyTorch > 1.10
- torchvision
- opencv-python
- Matplotlib 3.3.4
- pyyaml
- tqdm
- numpy
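If it helps, the Python dependencies can be installed with pip; this is only a sketch (choose the torch/torchvision build that matches your CUDA setup):

```
# Install the dependencies listed above; pick the torch build matching your CUDA version.
pip install "torch>1.10" torchvision opencv-python "matplotlib==3.3.4" pyyaml tqdm numpy
```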
- Download the pretrained models and copy them to `./train_logs/`:

| Settings | CKPT name | CKPT url |
|---|---|---|
| DIV2K | OmniSR_X2_DIV2K.zip | baidu cloud (passwd: sjtu), Google Drive |
| DF2K | OmniSR_X2_DF2K.zip | baidu cloud (passwd: sjtu), Google Drive |
| DIV2K | OmniSR_X3_DIV2K.zip | baidu cloud (passwd: sjtu), Google Drive |
| DF2K | OmniSR_X3_DF2K.zip | baidu cloud (passwd: sjtu), Google Drive |
| DIV2K | OmniSR_X4_DIV2K.zip | baidu cloud (passwd: sjtu), Google Drive |
| DF2K | OmniSR_X4_DF2K.zip | baidu cloud (passwd: sjtu), Google Drive |
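As an example, one of the downloaded archives could be extracted into `./train_logs/` like this (the internal layout of the zip is an assumption; keep whatever folder name the `-v` option below expects):

```
# Hypothetical example: extract the X4/DF2K checkpoint archive into ./train_logs/
unzip OmniSR_X4_DF2K.zip -d ./train_logs/
```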
- Download the benchmark datasets (baidu cloud (passwd: sjtu), Google Drive) and copy them to `./benchmark/`. If you want to generate the benchmarks yourself, please refer to the official repository of RCAN.
- Step 1: the following command evaluates the model with the Python script and reports its performance; the generated images are placed in `./SR`:

  ```
  python test.py -v "OmniSR_X4_DF2K" -s 994 -t tester_Matlab --test_dataset_name "Urban100"
  ```
- Step 2: please execute the `Evaluate_PSNR_SSIM.m` script in the root directory to obtain the results reported in the paper. Modify line 8 of `Evaluate_PSNR_SSIM.m` (`methods = {'OmniSR_X4_DF2K'};`) and line 10 (`dataset = {'Urban100'};`) to match the model/dataset evaluated above.
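If you prefer launching MATLAB from the shell, a possible (assumed) headless invocation of the evaluation script is:

```
# Run the evaluation script without opening the MATLAB desktop; assumes MATLAB is on your PATH.
matlab -nodisplay -nosplash -r "Evaluate_PSNR_SSIM; exit"
```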
- Step 1: please download the training dataset from DIV2K (`Train Data Track 1 bicubic downscaling x? (LR images)` and `Train Data (HR images)`), then set the dataset root path in line 8 of `./env/env.json`: `"DIV2K": "TO YOUR DIV2K ROOT PATH"`.
- Step 2: please download the benchmark datasets (baidu cloud (passwd: sjtu), Google Drive) and copy them to `./benchmark/`. If you want to generate the benchmarks yourself, please refer to the official repository of RCAN.
- Step 3: training with the DIV2K $\times 4$ dataset:

  ```
  python train.py -v "OmniSR_X4_DIV2K" -p train --train_yaml "train_OmniSR_X4_DIV2K.yaml"
  ```
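The other settings listed in the pretrained-model table are presumably trained the same way by swapping the version name and YAML file. For example (the X2 YAML filename below only follows the naming pattern above and is an assumption; please check the repository's YAML configs for the actual name):

```
# Assumed naming pattern for the x2/DIV2K setting; verify the YAML filename in the repo.
python train.py -v "OmniSR_X2_DIV2K" -p train --train_yaml "train_OmniSR_X2_DIV2K.yaml"
```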
`result.tex` is the corresponding TeX code for the result comparison.
This project is released under the Apache 2.0 license.
If this work helps your research, please cite the following paper:
```
@inproceedings{omni_sr,
  title     = {Omni Aggregation Networks for Lightweight Image Super-Resolution},
  author    = {Wang, Hang and Chen, Xuanhong and Ni, Bingbing and Liu, Yutian and Liu, Jinfan},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023}
}
```