
Omni Aggregation Networks for Lightweight Image Super-Resolution (OmniSR)

Accepted by CVPR 2023

The official PyTorch implementation.

Our paper can be downloaded from [arXiv].

Installation

Clone this repo:

git clone https://github.com/Francis0625/OmniSR.git
cd OmniSR

Dependencies:

  • PyTorch > 1.10
  • torchvision
  • opencv-python
  • Matplotlib 3.3.4
  • pyyaml
  • tqdm
  • numpy
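
For a pip-based setup, something like the following should cover these dependencies (a sketch: install PyTorch per the official instructions for your CUDA version, and pin other versions as needed):

pip install "torch>1.10" torchvision opencv-python "matplotlib==3.3.4" pyyaml tqdm numpy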

Preparation

  • Download the pretrained models and copy them to ./train_logs/:

Settings | CKPT name | CKPT url
DIV2K $\times 2$ | OmniSR_X2_DIV2K.zip | baidu cloud (passwd: sjtu), Google Drive
DF2K $\times 2$ | OmniSR_X2_DF2K.zip | baidu cloud (passwd: sjtu), Google Drive
DIV2K $\times 3$ | OmniSR_X3_DIV2K.zip | baidu cloud (passwd: sjtu), Google Drive
DF2K $\times 3$ | OmniSR_X3_DF2K.zip | baidu cloud (passwd: sjtu), Google Drive
DIV2K $\times 4$ | OmniSR_X4_DIV2K.zip | baidu cloud (passwd: sjtu), Google Drive
DF2K $\times 4$ | OmniSR_X4_DF2K.zip | baidu cloud (passwd: sjtu), Google Drive
  • Download the benchmark datasets (baidu cloud (passwd: sjtu), Google Drive) and copy them to ./benchmark/. If you want to generate the benchmarks yourself, please refer to the official repository of RCAN.
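
After unpacking, the directory layout should look roughly like this (a sketch; the exact folder names inside each archive may differ):

OmniSR/
├── train_logs/
│   ├── OmniSR_X4_DF2K/    (unzipped checkpoint folder; name assumed to match the zip)
│   └── ...
└── benchmark/
    ├── Urban100/
    └── ...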

Evaluate Pretrained Models

Example: evaluate the model trained with DF2K@X4:

  • Step 1: the following command reports the performance measured by the Python script; the generated SR images are saved to ./SR:
python test.py -v "OmniSR_X4_DF2K" -s 994 -t tester_Matlab --test_dataset_name "Urban100"
  • Step 2: run the Evaluate_PSNR_SSIM.m script in the root directory to obtain the results reported in the paper. Modify Line 8 of Evaluate_PSNR_SSIM.m, methods = {'OmniSR_X4_DF2K'};, and Line 10, dataset = {'Urban100'};, to match the model and dataset evaluated above.
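
For a quick sanity check without MATLAB, the sketch below computes PSNR under the standard SR evaluation protocol that Evaluate_PSNR_SSIM.m follows (Y channel of YCbCr, a border of scale pixels shaved). It is an illustration, not a replacement for the official script, and the image paths are hypothetical:

import cv2
import numpy as np

def rgb_to_y(img_bgr):
    # BT.601 luma in [16, 235], matching MATLAB's rgb2ycbcr convention
    # for uint8 images (cv2 loads images as BGR).
    img = img_bgr.astype(np.float64)
    return 16.0 + (65.481 * img[..., 2] + 128.553 * img[..., 1] + 24.966 * img[..., 0]) / 255.0

def psnr_y(sr_path, hr_path, scale=4):
    sr = rgb_to_y(cv2.imread(sr_path))
    hr = rgb_to_y(cv2.imread(hr_path))
    h, w = min(sr.shape[0], hr.shape[0]), min(sr.shape[1], hr.shape[1])
    sr, hr = sr[:h, :w], hr[:h, :w]      # crop to a common size
    sr = sr[scale:-scale, scale:-scale]  # shave a border of `scale` pixels
    hr = hr[scale:-scale, scale:-scale]
    mse = np.mean((sr - hr) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

print(psnr_y("./SR/img_001.png", "./benchmark/Urban100/HR/img_001.png"))  # hypothetical paths

SSIM is omitted here for brevity; the official script remains the reference for the paper's numbers.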

Training

  • Step 1: download the training dataset from DIV2K (Train Data Track 1 bicubic downscaling x? (LR images), where ? is your target scale, together with Train Data (HR images)), then set the dataset root path in ./env/env.json, Line 8: "DIV2K":"TO YOUR DIV2K ROOT PATH" (see the example after this list).

  • Step 2: download the benchmark datasets as described in Preparation above and copy them to ./benchmark/.

  • Step 3: train with the DIV2K $\times 4$ dataset:

python train.py -v "OmniSR_X4_DIV2K" -p train --train_yaml "train_OmniSR_X4_DIV2K.yaml"
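
For reference, the edited entry in ./env/env.json should end up looking something like this (the path is a hypothetical example):

"DIV2K": "/path/to/DIV2K"

Other settings presumably train the same way, with the -v name and YAML file matching the checkpoint names in the table above, e.g. (YAML file name assumed):

python train.py -v "OmniSR_X3_DIV2K" -p train --train_yaml "train_OmniSR_X3_DIV2K.yaml"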

Visualization

[Figure: performance comparison]

Results

[Figure: result comparison]

result.tex is the corresponding LaTeX code for the result comparison.

Related Projects

License

This project is released under the Apache 2.0 license.

To cite our paper

If this work helps your research, please cite the following paper:

@inproceedings{omni_sr,
  title      = {Omni Aggregation Networks for Lightweight Image Super-Resolution},
  author     = {Wang, Hang and Chen, Xuanhong and Ni, Bingbing and Liu, Yutian and Liu, Jinfan},
  booktitle  = {Conference on Computer Vision and Pattern Recognition},
  year       = {2023}
}
