This repository contains the training and evaluation code, along with checkpoints, for the paper *DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets*, accepted at CVPR 2024.
- Clone the repository:

  ```bash
  git clone https://github.com/val-iisc/DeiT-LT.git
  ```
- The CIFAR-10 and CIFAR-100 datasets are downloaded automatically by the code. For ImageNet-LT and iNaturalist-2018, download the datasets from the given links.
- Create the conda environment and activate it:

  ```bash
  conda env create -f environment.yml
  conda activate deitlt
  ```
- Download the teacher models linked in the Results table.
- To train on a dataset, add the teacher checkpoint path to the `--teacher-path` argument in the corresponding training script, then run the script. For example:

  ```bash
  bash sh/train_c10_if100.sh
  bash sh/train_imagenetlt.sh
  ```
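For reference, the line inside a training script that needs editing typically resembles the sketch below. Only the `--teacher-path` flag is confirmed by this README; the entry point (`main.py`, as in the DeiT codebase this repository derives from) and all paths are placeholders shown for illustration:

```shell
# Hypothetical excerpt from sh/train_c10_if100.sh (entry point and paths
# are assumptions; --teacher-path must point at the downloaded teacher):
python main.py \
    --teacher-path ./checkpoints/teacher_c10_if100.pth \
    --output_dir ./output/deitlt_c10_if100
```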
- To evaluate a DeiT-LT checkpoint, run the eval script for the corresponding dataset, making sure the checkpoint path is passed to the `--resume` argument. For example:

  ```bash
  bash sh/eval_c10.sh
  bash sh/eval_imagenetlt.sh
  ```
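Likewise, the eval scripts are expected to carry the checkpoint via `--resume`. A hypothetical excerpt follows; only `--resume` is confirmed by this README, while the `--eval` flag and the paths are assumptions modeled on the DeiT codebase:

```shell
# Hypothetical excerpt from sh/eval_c10.sh (flags other than --resume and
# all paths are placeholders):
python main.py \
    --eval \
    --resume ./checkpoints/deitlt_c10_if100.pth
```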
| Dataset | Imbalance Factor | Overall | Head | Mid | Tail | Teacher path | Student path |
|---|---|---|---|---|---|---|---|
| CIFAR 10-LT | 100 | 87.5 | 94.5 | 84.1 | 85.0 | Link | Link |
| CIFAR 10-LT | 50 | 89.8 | 94.9 | 87.0 | 88.6 | Link | Link |
| CIFAR 100-LT | 100 | 55.6 | 72.8 | 55.4 | 31.4 | Link | Link |
| CIFAR 100-LT | 50 | 60.5 | 74.8 | 60.3 | 43.1 | Link | Link |
| ImageNet-LT | - | 59.1 | 66.6 | 58.3 | 40.0 | Link | Link |
| iNaturalist-2018 | - | 75.1 | 70.3 | 75.2 | 76.2 | Link | Link |
This codebase is heavily inspired by DeiT (ICML 2021). The concepts and methodologies adopted from DeiT have been instrumental in enabling us to push the boundaries of our research and development. We extend our sincerest thanks to the developers and contributors of DeiT.
DeiT-LT is an open-source project released under the MIT license. The codebase is derived from that of DeiT (ICML 2021), which is released under the Apache 2.0 license.
If you find this code or idea useful, please consider citing our work:
```bibtex
@InProceedings{Rangwani_2024_CVPR,
    author    = {Rangwani, Harsh and Mondal, Pradipto and Mishra, Mayank and Asokan, Ashish Ramayee and Babu, R. Venkatesh},
    title     = {DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {23396-23406}
}
```