
[ECCV'24] UDU-NET

Official PyTorch implementation of Unrolled Decomposed Unpaired Learning for Controllable Low-Light Video Enhancement.

Lingyu Zhu, Wenhan Yang, Baoliang Chen, Hanwei Zhu, Zhangkai Ni, Qi Mao, Shiqi Wang

[arXiv] [Poster] [Video] [Invited_Talk]

Overview

Obtaining paired low/normal-light videos with motion is considerably harder than obtaining paired still images, which makes unpaired learning a critical technical route. This paper works toward learning low-light video enhancement without paired ground truth. Compared with low-light image enhancement, enhancing low-light videos is more difficult due to the intertwined effects of noise, exposure, and contrast in the spatial domain, together with the need for temporal coherence.

To address this challenge, we propose the Unrolled Decomposed Unpaired Network (UDU-Net), which unrolls the optimization objective into a deep network that decomposes the signal into spatial- and temporal-related factors updated iteratively. We first formulate low-light video enhancement as a Maximum A Posteriori (MAP) estimation problem with carefully designed spatial and temporal visual regularization. By unrolling this problem, the optimization of the spatial and temporal constraints is decomposed into separate steps and updated in a stage-wise manner. From the spatial perspective, the designed Intra subnet leverages unpaired prior information from expert photographic retouching to adjust the statistical distribution of the output. In addition, we introduce a novel mechanism that integrates human perception feedback to guide network optimization and suppress over-/under-exposure. From the temporal perspective, the designed Inter subnet fully exploits temporal cues during progressive optimization, improving the temporal consistency of the enhanced results. Consequently, the proposed method achieves superior performance to state-of-the-art methods in video illumination, noise suppression, and temporal consistency across outdoor and indoor scenes.
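This README does not include the implementation details, but the stage-wise unrolling described above follows a recognizable pattern: each stage applies a spatial (Intra) update followed by a temporal (Inter) update on the current estimate. A minimal PyTorch sketch of that pattern, with all module internals as placeholders rather than the official UDU-Net layers:

import torch
import torch.nn as nn

class UnrolledEnhancerSketch(nn.Module):
    # Each stage alternates a spatial (Intra) refinement and a
    # temporal (Inter) refinement, mirroring the stage-wise
    # optimization described above. The plain convolutions below
    # are placeholders, not the actual UDU-Net subnets.
    def __init__(self, num_stages=4):
        super().__init__()
        self.intra_steps = nn.ModuleList(
            [nn.Conv2d(3, 3, 3, padding=1) for _ in range(num_stages)])
        self.inter_steps = nn.ModuleList(
            [nn.Conv2d(6, 3, 3, padding=1) for _ in range(num_stages)])

    def forward(self, frame, prev_frame):
        x = frame
        for intra, inter in zip(self.intra_steps, self.inter_steps):
            x = x + intra(x)  # spatial update on the current estimate
            x = x + inter(torch.cat([x, prev_frame], dim=1))  # temporal update
        return x

model = UnrolledEnhancerSketch()
out = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])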

Qualitative Performance

Quantitative Performance

TODO List

This repository is still under active construction:

  • Release training and testing codes
  • Release pretrained models
  • Clean the code

Public Dataset

We use the resized RGB images from the SDSD dataset.

Installation

We advise you to install Python 3 and PyTorch with Anaconda:

conda create --name py36 python=3.6
source activate py36
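
The commands above create the environment but do not install PyTorch itself; the exact versions the repo pins are not stated here, but a conda installation along these lines should work:

conda install pytorch torchvision -c pytorch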

Clone the repo and install the complementary requirements:

cd $HOME
git clone https://github.com/lingyzhu0101/UDU.git
cd UDU
pip install -r requirements.txt

Example Usage

Train

Train the model on the corresponding dataset with the command below; for example, to train on the outdoor subset of SDSD:

CUDA_VISIBLE_DEVICES=0 python main_step_single_stage_outdoor_abcd.py --mode train --version Video_outdoor_abcd --use_tensorboard True --is_test_psnr_ssim False --is_test_nima False 

Test

Test epoch xx on the corresponding dataset with the command below; for example, to test on the outdoor subset of SDSD:

CUDA_VISIBLE_DEVICES=0 python main_step_single_stage_outdoor_abcd.py --mode test --version Video_outdoor_abcd --use_tensorboard True --pretrained_model xx

We adopt PSNR and SSIM as comparison criteria to evaluate the spatial quality of enhanced video frames, based on the MATLAB (R2018b) implementations.
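The reported numbers come from the MATLAB implementations; as a rough cross-check, a Python approximation with scikit-image (whose results may deviate slightly from MATLAB, and which uses hypothetical file names here) might look like:

from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical file names; scikit-image results may differ slightly
# from the MATLAB (R2018b) implementations used for the paper.
ref = io.imread("gt_frame.png")
out = io.imread("enhanced_frame.png")

psnr = peak_signal_noise_ratio(ref, out, data_range=255)
ssim = structural_similarity(ref, out, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")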

Contact

Citation

If you find our work helpful, please consider citing:

@inproceedings{zhu2024unrolled,
  title={Unrolled Decomposed Unpaired Learning for Controllable Low-Light Video Enhancement},
  author={Zhu, Lingyu and Yang, Wenhan and Chen, Baoliang and Zhu, Hanwei and Ni, Zhangkai and Mao, Qi and Wang, Shiqi},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024}
}

Additional Link

We also recommend our Temporally Consistent Enhancer Network (TCE-Net). If you find our work helpful, please consider citing:

@article{zhu2024temporally,
  title={Temporally Consistent Enhancement of Low-Light Videos via Spatial-Temporal Compatible Learning},
  author={Zhu, Lingyu and Yang, Wenhan and Chen, Baoliang and Zhu, Hanwei and Meng, Xiandong and Wang, Shiqi},
  journal={International Journal of Computer Vision},
  pages={1--21},
  year={2024},
  publisher={Springer}
}

Acknowledgement

  • We adopt RAFT for optical flow estimation; the trained model can be downloaded from the RAFT link. We thank the authors for presenting such excellent work.
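
If you prefer not to download a separate checkpoint, torchvision also ships a pretrained RAFT. A minimal sketch of estimating flow between two consecutive frames (using torchvision's weights, which are not necessarily the checkpoint linked above):

import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval()

# Two consecutive frames; RAFT requires spatial dims divisible by 8.
frame1 = torch.rand(1, 3, 256, 256)
frame2 = torch.rand(1, 3, 256, 256)
frame1, frame2 = weights.transforms()(frame1, frame2)

with torch.no_grad():
    flow = model(frame1, frame2)[-1]  # RAFT returns a list of iterative refinements
print(flow.shape)  # torch.Size([1, 2, 256, 256])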
