Code for the CVPR 2024 paper "Interactive Continual Learning: Fast and Slow Thinking".
Install the dependencies:

pip install -r requirements.txt
- For the CIFAR-10 and CIFAR-100 datasets, the script downloads them automatically.
- For the ImageNet-R dataset, refer to the following link: https://github.com/hendrycks/imagenet-r
- For MiniGPT-4, refer to the following GitHub repository: https://github.com/Vision-CAIR/MiniGPT-4
- For INF-MLLM, refer to the following GitHub repository: https://github.com/infly-ai/INF-MLLM
- For PureMM, refer to the following GitHub repository: https://github.com/Q-MM/PureMM
Clone the GitHub repository of the chosen MLLM into the utils/ directory, and place its pre-trained weight files in the location specified by its configuration, as sketched below.
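A minimal sketch of this step, assuming the repositories are cloned directly under utils/; the exact directory names and checkpoint locations are assumptions here and depend on each MLLM's own instructions:

```bash
# Hypothetical layout; follow each MLLM's own README for the weight files.
cd utils
git clone https://github.com/Vision-CAIR/MiniGPT-4
git clone https://github.com/infly-ai/INF-MLLM
git clone https://github.com/Q-MM/PureMM
# Download the pre-trained checkpoints from the links in each repository,
# then update the paths referenced in the corresponding configuration files
# so they point at the downloaded weights.
```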
Train and evaluate models through utils/main.py. For example, to train our model on Split CIFAR-10 with a fixed-size buffer of 500 samples, with PureMM included as System 2 when testing the last task, run:
python utils/main.py --model onlinevt --load_best_args --dataset seq-cifar10 --buffer_size 500 --csv_log --with_brain_vit --num_classes 10 --num_workers 12 --kappa 1 --lmbda 0.1 --delta 0.01 --k 5 --with_slow --slow_model PureMM
To compare against training without System 2, run:
python utils/main.py --model onlinevt --load_best_args --dataset seq-cifar10 --buffer_size 500 --csv_log --with_brain_vit --num_classes 10 --num_workers 12 --kappa 1 --lmbda 0.1 --delta 0.01 --k 5
More datasets and methods are supported. You can find the available options by running:
python utils/main.py --help
Please contact us or post an issue if you have any questions.
- Biqing Qi (qibiqing7@gmail.com)
- Junqi Gao (gjunqi97@gmail.com)
- Xinquan Chen (xinquanchen0117@gmail.com)
- Dong Li (arvinlee826@gmail.com)
@inproceedings{qi2024interactive,
title={Interactive continual learning: Fast and slow thinking},
author={Qi, Biqing and Chen, Xinquan and Gao, Junqi and Li, Dong and Liu, Jianxing and Wu, Ligang and Zhou, Bowen},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={12882--12892},
year={2024}
}