A simple model to detect/track and re-identify individuals across different cameras/videos.
This project aims to track people across different videos recorded from different angles.
The framework relies on MOT (multi-object tracking) to track people and on ReID (re-identification) to keep their IDs consistent across videos. Tracking can be run with either YOLO_v3 or YOLO_v4 detections, and ReID relies on KaiyangZhou's Torchreid library.
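Conceptually, each video is first processed with detection and tracking, an appearance embedding is extracted for every track with Torchreid, and embeddings are then matched across videos so the same person keeps a single ID. The sketch below illustrates only that cross-camera matching step using cosine distance and Hungarian assignment; the function name, threshold, and use of SciPy are illustrative assumptions and are not taken from this repository's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_ids(feats_cam_a: np.ndarray, feats_cam_b: np.ndarray, max_dist: float = 0.3):
    """Match track embeddings from camera A to camera B by cosine distance.

    feats_cam_a: (n_a, d) array of L2-normalised appearance embeddings.
    feats_cam_b: (n_b, d) array of L2-normalised appearance embeddings.
    Returns (idx_a, idx_b) pairs whose distance is below max_dist.
    """
    # Cosine distance = 1 - cosine similarity (features assumed L2-normalised).
    cost = 1.0 - feats_cam_a @ feats_cam_b.T
    # Hungarian assignment gives the globally cheapest one-to-one matching.
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]

# Toy example: 3 tracks in camera A, 2 in camera B, 512-d embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 512)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(2, 512)); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(match_ids(a, b, max_dist=1.0))
```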
- Download Anaconda if it is not installed on your machine
- Clone the repository
git clone https://github.com/samihormi/Multi-Camera-Person-Tracking-and-Re-Identification
- Create a project environment
cd Multi-Camera-Person-Tracking-and-Re-Identification
conda create --name py37 python=3.7
conda activate py37
- Install dependencies
pip install -r requirements.txt
- Install torch and torchvision matching the CUDA version of your machine
conda install pytorch torchvision cudatoolkit -c pytorch
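A quick way to confirm that the install picked up a CUDA-enabled build (assuming PyTorch installed correctly) is:

```python
import torch
# True means PyTorch can see a CUDA-capable GPU; False means it will fall back to CPU.
print(torch.__version__, torch.cuda.is_available())
```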
- Convert the Darknet YOLO weights (in \model_data\weights\) to Keras models
- YOLO_v3
python convert_y3.py model_data\weights\yolov3.weights model_data\models\yolov3.h5
- YOLO_v4
python convert_y4.py model_data\weights\yolov4.weights model_data\models\yolov4.h5
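To sanity-check a conversion, you can try loading the resulting .h5 file; depending on your install this may be the standalone keras package rather than tensorflow.keras, and the path below is just the output of the command above.

```python
from tensorflow.keras.models import load_model

# Path produced by convert_y3.py / convert_y4.py above.
model = load_model('model_data/models/yolov3.h5', compile=False)
model.summary()
```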
- Download the Keras models for YOLO_v3 and YOLO_v4 and add them to \model_data\models\
- Download either one of the following Torchreid models (1, 2) and add them to \model_data\models\ (you might have to change the path in reid.py)
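For reference, a downloaded checkpoint is typically loaded through Torchreid's FeatureExtractor as sketched below; the model name and path are placeholders for whichever checkpoint you downloaded, and the actual loading code in reid.py may differ.

```python
from torchreid.utils import FeatureExtractor

# Placeholder checkpoint: point model_path at the file you placed in model_data/models/.
extractor = FeatureExtractor(
    model_name='osnet_x1_0',
    model_path='model_data/models/osnet_x1_0.pth',
    device='cuda',  # or 'cpu'
)

# Accepts a list of image paths (or numpy arrays) of person crops;
# returns an (N, feature_dim) tensor of appearance embeddings.
features = extractor(['person_crop_1.jpg', 'person_crop_2.jpg'])
print(features.shape)
```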
You can try out your own videos by running demo.py. Specify the paths of the input videos and the YOLO version you would like to use (v3 or v4). Under the directory \videos\output, the program will generate a video of the tracking as well as a video of the tracking and ReID (as shown in the example above).
python demo.py --videos videos\init\Double1.mp4 videos\init\Single1.mp4 --version v3
This model is built on top of the incredible work done in the following projects: