Michael Hobley, Victor Adrian Prisacariu.
Active Vision Lab (AVL), University of Oxford.
Updated code release corresponding to arXiv paper v2 with:
- Feature Backbone training and a single linear projection, increasing the accuracy of our method.
- Localisation from an additional head.
- FSC-133, a proposed dataset which removes errors, ambiguities, and repeated images from FSC-147.
- Released model checkpoints.
Initial code release, corresponding to arXiv paper v1.
We provide an environment.yml file to set up a conda environment:
git clone https://github.com/ActiveVisionLab/LearningToCountAnything.git
cd LearningToCountAnything
conda env create -f environment.yml
The data is the same as in Learning To Count Everything (Ranjan et al.), as are the annotation, image-class, and train/test/val split files we include.
Download the FSC-147 images and the precomputed density maps.
If you are not using data/, specify your data_path directory in configs/_DEFAULT.yml.
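For reference, a minimal sketch of what the data_path entry in configs/_DEFAULT.yml might look like; the exact key names and surrounding fields are assumptions, so check the released config file:

```yaml
# Hypothetical sketch of configs/_DEFAULT.yml; only data_path is
# named in this README, other keys may differ in the released config.
data_path: /path/to/your/data
```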
data/FSC-147
├── annotation_FSC147_384.json
├── ImageClasses_FSC147.txt
├── gt_density_map_adaptive_384_VarV2
│ ├── 2.npy
│ ├── 3.npy
│ ├── ...
├── images_384_VarV2
│ ├── 2.jpg
│ ├── 3.jpg
│ ├── ...
└── Train_Test_Val_FSC_147.json
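After downloading, the layout above can be sanity-checked with a short script. This is an illustrative sketch, not part of the repository: the helper name and returned dictionary are our own.

```python
from pathlib import Path

def fsc147_paths(root="data/FSC-147"):
    """Assemble the expected FSC-147 file layout under `root`.

    Illustrative helper (not part of the repo): it only builds the
    paths shown in the tree above so they can be checked with
    Path.exists() after download.
    """
    root = Path(root)
    return {
        "annotations": root / "annotation_FSC147_384.json",
        "classes": root / "ImageClasses_FSC147.txt",
        "density_maps": root / "gt_density_map_adaptive_384_VarV2",
        "images": root / "images_384_VarV2",
        "splits": root / "Train_Test_Val_FSC_147.json",
    }

paths = fsc147_paths()
missing = [name for name, p in paths.items() if not p.exists()]
print("missing:", missing)  # empty once the data is in place
```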
As discussed in the paper, we found FSC-147 contained 448 non-unique images. Some of the duplicated images appear with different associated counts, and/or in the training set and one of the validation or testing sets. We propose FSC-133, which removes these errors, ambiguities, and repeated images from FSC-147.
As FSC-133 is a subset of FSC-147, the images and precomputed density maps are the same as above. The annotations, class labels, and data splits have been updated.
data/FSC-133
├── annotation_FSC133_384.json
├── ImageClasses_FSC133.txt
├── gt_density_map_adaptive_384_VarV2
│ ├── 2.npy
│ ├── 3.npy
│ ├── ...
├── images_384_VarV2
│ ├── 2.jpg
│ ├── 3.jpg
│ ├── ...
└── Train_Test_Val_FSC_133.json
We provide example weights for our models trained on FSC-133.
logs/examples
├── counting.ckpt
└── localisation.ckpt
To train the counting network:
python main.py --config example_training
To train the localisation head (purely for visualisation), given a trained counting network:
python main.py --config example_localisation_training
To test a trained model on the validation set:
python main.py --config example_test --val
To test a trained model on the test set:
python main.py --config example_test --test
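Counting accuracy on FSC-147/FSC-133 is conventionally reported as MAE and RMSE over predicted counts. A minimal sketch of those metrics follows; this is our own helper for illustration, not the repository's evaluation code:

```python
import math

def count_errors(preds, gts):
    """Mean Absolute Error and Root Mean Squared Error between
    predicted and ground-truth object counts (the standard FSC-147
    metrics). Illustrative helper, not the repo's evaluation code."""
    assert len(preds) == len(gts) and preds
    abs_errs = [abs(p - g) for p, g in zip(preds, gts)]
    mae = sum(abs_errs) / len(abs_errs)
    rmse = math.sqrt(sum(e * e for e in abs_errs) / len(abs_errs))
    return mae, rmse

mae, rmse = count_errors([10.0, 52.0, 7.0], [12.0, 50.0, 7.0])
print(f"MAE={mae:.2f} RMSE={rmse:.2f}")  # MAE=1.33 RMSE=1.63
```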
To view the localisation via feature PCA, saved to output/pca:
python main.py --config example_pca_vis --val
To view the localisation head's predictions, saved to output/heatmap:
python main.py --config example_localisation_vis --val
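A feature-PCA visualisation of this kind typically projects per-pixel backbone features onto their top three principal components and renders them as RGB. The following is a self-contained sketch of that reduction; the function name, feature shape, and normalisation are our assumptions, not the repository's implementation:

```python
import numpy as np

def features_to_rgb(features):
    """Project per-pixel features of shape (H, W, C) onto their top-3
    principal components and rescale each to [0, 1] for display as an
    RGB image. Illustrative sketch, not the repo's implementation."""
    h, w, c = features.shape
    flat = features.reshape(-1, c)
    flat = flat - flat.mean(axis=0)          # centre each channel
    # top-3 right singular vectors = principal directions
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    proj = flat @ vt[:3].T                   # (H*W, 3)
    proj -= proj.min(axis=0)
    proj /= proj.max(axis=0) + 1e-8          # per-component rescale
    return proj.reshape(h, w, 3)

rgb = features_to_rgb(np.random.rand(16, 16, 64))
print(rgb.shape)  # (16, 16, 3), values in [0, 1]
```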
If you find the code or FSC-133 useful, please cite:
@article{hobley2022-LTCA,
title={Learning to Count Anything: Reference-less Class-agnostic Counting with Weak Supervision},
author={Hobley, Michael and Prisacariu, Victor},
journal = {Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition ({CVPR})},
year={2023}
}