This repository is the official implementation of CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification.

If you use the code and models from this repo, please cite our work. Thanks!

```
@inproceedings{chen2021crossvit,
title={{CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification}},
author={Chun-Fu (Richard) Chen and Rameswar Panda and Quanfu Fan},
booktitle={International Conference on Computer Vision (ICCV)},
year={2021}
}
```

## Image Classification

We provide models trained on ImageNet1K.
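
As a quick sanity check, here is a minimal Python sketch of loading one of these pretrained models, assuming the architectures in [models/crossvit.py](models/crossvit.py) are registered with `timm` as in DeiT-style repos; the exact loading API may differ.

```python
# Minimal sketch (assumption): load a pretrained CrossViT and classify a dummy image.
# Assumes models/crossvit.py registers its architectures with timm's model registry
# and that pretrained weights are downloadable via `pretrained=True`.
import torch
from timm.models import create_model

import models.crossvit  # noqa: F401  side effect: registers crossvit_* models (assumption)

model = create_model('crossvit_9_dagger_224', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)  # one dummy 224x224 RGB image
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # expected: torch.Size([1, 1000]) for ImageNet1K
```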

### Training

To train `crossvit_9_dagger_224` on ImageNet on a single node with 8 GPUs for 300 epochs, run:

```shell script
# The exact training command was collapsed in this diff view; the line below is
# reconstructed by analogy with the evaluation command further down (assumption).
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --model crossvit_9_dagger_224 --batch-size 128 --data-path /path/to/imagenet
```

Other model names can be found at [models/crossvit.py](models/crossvit.py).
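
If the CrossViT variants are registered with `timm`'s model registry (an assumption based on the repo's DeiT lineage), the available names can also be listed programmatically:

```python
# Hypothetical sketch: list registered CrossViT variants via timm (assumption).
import timm

import models.crossvit  # noqa: F401  side effect: registers crossvit_* models (assumption)

print(timm.list_models('crossvit*'))
```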

Distributed training is available via Slurm and `submitit`.

To train a `crossvit_9_dagger_224` model on ImageNet on 4 nodes with 8 GPUs each for 300 epochs:

```
python run_with_submitit.py --model crossvit_9_dagger_224 --data-path /path/to/imagenet --batch-size 128 --warmup-epochs 30
```

To evaluate a pretrained `crossvit_9_dagger_224` model:

```
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --model crossvit_9_dagger_224 --batch-size 128 --data-path /path/to/imagenet --eval --pretrained
```
