[NeurIPS 2021] Do different tracking tasks require different appearance models?
[ArXiv] [Project Page]
UniTrack is a simple and Unified framework for addressing multiple tracking tasks.
Being a fundamental problem in computer vision, tracking has been fragmented into a multitude of different experimental setups. As a consequence, the literature has fragmented too, and novel approaches proposed by the community are usually specialized to fit only one specific setup. To understand to what extent this specialization is actually necessary, we present UniTrack, a solution that addresses multiple different tracking tasks within the same framework. All tasks share the same appearance model. UniTrack:
- Does NOT need training on a specific tracking task.
- Shows competitive performance on six out of the seven tracking tasks considered.
- Can be easily adapted to even more tasks.
- Can be used as an evaluation platform to test pre-trained self-supervised models.
Multi-Object Tracking demo for 80 COCO classes (YOLOX + UniTrack)
In this demo we run the YOLOX detector and perform MOT for the 80 COCO classes. Try the demo by:
python demo/mot_demo.py --classes cls1 cls2 ... clsN
where cls1 to clsN are the indices of the classes you would like to detect and track. See here for the index list. By default, all 80 classes are detected and tracked.
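For example, to detect and track only two classes (assuming the standard 0-indexed COCO ordering, in which 0 is person and 2 is car):

```shell
# Track only persons (index 0) and cars (index 2);
# indices follow the 80-class COCO list linked above
python demo/mot_demo.py --classes 0 2
```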
Single-Object Tracking demo for custom videos
python demo/sot_demo.py --config ./config/imagenet_resnet18_s3.yaml --input /path/to/your/video
In this demo, you are asked to annotate the target to be tracked by drawing a rectangle in the first frame of the video. The algorithm then tracks the target in the following frames without using an object detector.
We classify existing tracking tasks along four axes: (1) single or multiple targets; (2) targets specified by the user or by an automatic detector; (3) observation format (bounding box/mask/pose); (4) class-agnostic or class-specific (e.g. humans/vehicles). We mainly experiment on 5 tasks: SOT, VOS, MOT, MOTS, and PoseTrack. Task setups are summarized in the above figure.
The appearance model is the only learnable component in UniTrack. It should provide a universal visual representation, and is usually pre-trained on large-scale datasets in a supervised or unsupervised manner. Typical examples include ImageNet pre-trained ResNets (supervised) and recent self-supervised models such as MoCo and SimCLR (unsupervised).
Propagation and association are the two core primitives used in UniTrack to address a wide variety of tracking tasks (currently 7, but more can be added). Both use the features extracted by the pre-trained appearance model. For propagation, we adopt existing methods such as cross-correlation, DCF, and mask propagation. For association, we employ a simple algorithm as in JDE and develop a novel reconstruction-based similarity metric that allows comparing objects across shapes and sizes.
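As an illustration of the association primitive, the following is a minimal, simplified sketch of JDE-style appearance matching (not the repo's actual code, and not the reconstruction-based metric): a cosine-distance cost matrix between track and detection embeddings is solved with the Hungarian algorithm, and matches above a distance threshold are rejected.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, max_cost=0.5):
    """Match tracks to detections by appearance.

    Builds a cosine-distance cost matrix between existing track embeddings
    and new detection embeddings, solves it with the Hungarian algorithm,
    and keeps only pairs whose distance is below max_cost.
    """
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T  # cosine distance, in [0, 2]
    rows, cols = linear_sum_assignment(cost)
    # Unmatched detections would start new tracks; unmatched tracks persist.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]

rng = np.random.default_rng(0)
tracks = rng.normal(size=(3, 8))                 # 3 existing tracks
dets = np.vstack([tracks[2], tracks[0],          # 2 re-detections, plus
                  rng.normal(size=(1, 8))])      # 1 unrelated detection
matches = associate(tracks, dets)                # contains (0, 1) and (2, 0)
```

In UniTrack the embeddings come from the shared appearance model, and the reconstruction-based similarity replaces plain cosine distance so that objects of different shapes and sizes remain comparable.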
- Installation: Please check out docs/INSTALL.md
- Data preparation: Please check out docs/DATA.md
- Appearance model preparation: Please check out docs/MODELZOO.md
- Run evaluation on all datasets: Please check out docs/RUN.md
Below we show results of UniTrack with a simple ImageNet pre-trained ResNet-18 as the appearance model. More results can be found in RESULTS.md.
Single Object Tracking (SOT) on OTB-2015
Video Object Segmentation (VOS) on DAVIS-2017 val split
Multiple Object Tracking (MOT) on MOT-16 test set private detector track (Detections from FairMOT)
Multiple Object Tracking and Segmentation (MOTS) on MOTS challenge test set (Detections from COSTA_st)
Pose Tracking on PoseTrack-2018 val split (Detections from LightTrack)
Part of the code is borrowed from:
VideoWalk by Allan A. Jabri
SOT code by Zhipeng Zhang
@article{wang2021different,
author = {Wang, Zhongdao and Zhao, Hengshuang and Li, Ya-Li and Wang, Shengjin and Torr, Philip and Bertinetto, Luca},
title = {Do different tracking tasks require different appearance models?},
journal = {Thirty-Fifth Conference on Neural Information Processing Systems},
year = {2021},
}