This repository is the PyTorch implementation of the paper "Causal Context Adjustment Loss for Learned Image Compression" (NeurIPS 2024).
## Installation

Clone this repository:

```bash
git clone https://github.com/LabShuHangGU/CCA.git
```
Install CompressAI and the required packages:

```bash
pip install compressai tensorboard
```
## Dataset

Download OpenImages for training; Kodak, CLIC, and TESTIMAGES for evaluation.
## Training

A training script is provided for reference in `train.py`:
```bash
CUDA_VISIBLE_DEVICES='0' python -u ./train.py -d [path of training dataset] \
    -lr 1e-4 --cuda --beta 0.3 --epochs 65 --lr_epoch 56 --batch-size 8 \
    --save_path [path for storing the checkpoints] --save \
    --checkpoint [path of the pretrained checkpoint]
```
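The `--beta` flag presumably acts as the Lagrange multiplier that trades distortion against bitrate. Below is a minimal sketch of a CompressAI-style rate-distortion objective for orientation only; the `output` dictionary layout (`"x_hat"`, `"likelihoods"`) follows CompressAI conventions, and the paper's actual CCA loss adds a causal-context term that is not shown here:

```python
import torch
import torch.nn as nn

class RateDistortionLoss(nn.Module):
    """Sketch of a CompressAI-style rate-distortion objective.

    `beta` weights distortion against rate, mirroring the role assumed
    for the --beta flag of the training command.
    """

    def __init__(self, beta: float = 0.3):
        super().__init__()
        self.beta = beta
        self.mse = nn.MSELoss()

    def forward(self, output: dict, target: torch.Tensor) -> dict:
        n, _, h, w = target.size()
        num_pixels = n * h * w
        # Rate term: estimated bits of all latents, normalized to bits per pixel.
        bpp = sum(
            (-torch.log2(lk)).sum() for lk in output["likelihoods"].values()
        ) / num_pixels
        # Distortion term: MSE between reconstruction and original,
        # scaled to the 8-bit pixel range as is common in CompressAI.
        mse = self.mse(output["x_hat"], target)
        loss = self.beta * 255.0 ** 2 * mse + bpp
        return {"loss": loss, "bpp": bpp, "mse": mse}
```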
The script supports distributed training as well:
```bash
CUDA_VISIBLE_DEVICES='0,1' python -m torch.distributed.launch --nproc_per_node=[number of GPUs] \
    --master_port=29506 ./train.py -d [path of training dataset] \
    -lr 1e-4 --cuda --beta 0.3 --epochs 65 --lr_epoch 56 --batch-size 8 \
    --save_path [path for storing the checkpoints] --save \
    --checkpoint [path of the pretrained checkpoint]
```
## Evaluation

To evaluate a trained model on a test dataset:

```bash
CUDA_VISIBLE_DEVICES='0' python eval.py --checkpoint [path of the pretrained checkpoint] --data [path of test dataset] --cuda
```
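Evaluation for learned compression conventionally reports PSNR against bits per pixel. The helpers below sketch the standard formulas; they are generic, not taken from `eval.py`:

```python
import math
import torch

def psnr(x: torch.Tensor, x_hat: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between original and reconstruction."""
    mse = torch.mean((x - x_hat) ** 2).item()
    return 10.0 * math.log10(max_val ** 2 / mse)

def bits_per_pixel(strings, num_pixels: int) -> float:
    """Actual bitrate from the entropy-coded byte strings of one image."""
    return sum(len(s) for s in strings) * 8.0 / num_pixels
```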
## Pretrained Models

Note that we train a separate model for each rate point (controlled by `--beta`); the released checkpoints are listed below.

| Lambda (Link) | 0.3 | 0.85 | 1.8 | 3.5 |
| --- | --- | --- | --- | --- |
Other pretrained models will be released progressively.
## Results

Rate-distortion curves. Left: Kodak | Middle: CLIC | Right: TESTIMAGES.
For detailed rate-distortion data, please refer to `RD_data.json`.
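A small helper for consuming that file is sketched below. The per-dataset schema described in the docstring (parallel `bpp`/`psnr` lists per dataset) is an assumption, since the layout of `RD_data.json` is not documented here:

```python
import json
from pathlib import Path

def load_rd_data(path: str) -> dict:
    """Load rate-distortion records from a JSON file.

    Assumed (unverified) layout: a mapping from dataset name to parallel
    lists of bitrates and qualities, e.g.
    {"Kodak": {"bpp": [...], "psnr": [...]}}.
    """
    with Path(path).open() as f:
        return json.load(f)

def rd_pairs(record: dict):
    """Return (bpp, psnr) points sorted by bitrate, ready for plotting."""
    return sorted(zip(record["bpp"], record["psnr"]))
```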
## Citation

If you find this work useful, please consider citing:

```bibtex
@article{han2024causal,
  title={Causal Context Adjustment Loss for Learned Image Compression},
  author={Han, Minghao and Jiang, Shiyin and Li, Shengxi and Deng, Xin and Xu, Mai and Zhu, Ce and Gu, Shuhang},
  journal={arXiv preprint arXiv:2410.04847},
  year={2024}
}
```
## Related Repositories

- https://github.com/jmliu206/LIC_TCM
- https://github.com/megvii-research/NAFNet
- https://github.com/InterDigitalInc/CompressAI
## Contact

If you have any questions, feel free to contact me via email (minghao.hmh@gmail.com).