Farm Animal Tracking Project

Project for tracking farm animals. A sample video is available on YouTube.

Prerequisites

Installation

Download the repository and install the dependencies:

$ git clone https://github.com/burnpiro/farm-animal-tracking.git
$ cd farm-animal-tracking
$ pip install -r requirements.txt
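
Before moving on, you can sanity-check the environment (a minimal sketch; it assumes TensorFlow is among the dependencies in requirements.txt):

# Quick check that TensorFlow imports and sees your hardware.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices("GPU"))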

Download detection model weights

  1. Download the precompiled model weights from Google Drive
  2. Unzip the archive to model/detection_model

Download recognition model weights

  1. Download the precompiled model weights from Google Drive
  2. Unzip the archive to model/siamese/weights

Running

Detection

To visualize animal detection on a video, use:

$ python show_prediction.py

or, for an image:

$ python run_detection.py
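
If you want to call the detector from your own code, the following is a minimal sketch. It assumes the unzipped archive contains a TensorFlow SavedModel under model/detection_model/saved_model and follows TF Object Detection API conventions; the exact directory layout and output keys are assumptions, not guaranteed by this README:

# Minimal detection sketch (hypothetical paths and output keys,
# following TF Object Detection API conventions).
import tensorflow as tf

detect_fn = tf.saved_model.load("model/detection_model/saved_model")

image = tf.io.decode_jpeg(tf.io.read_file("pig.jpg"), channels=3)
inputs = tf.expand_dims(image, axis=0)  # shape [1, H, W, 3], dtype uint8

outputs = detect_fn(inputs)
print(outputs["detection_boxes"][0][:5])   # top boxes, normalized coords
print(outputs["detection_scores"][0][:5])  # confidence scores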

Tracking

To visualize animal tracking on a video, use:

$ python show_tracking.py --video=<path to video>

Dataset

The dataset for training the model can be obtained from the PSRG website.

Update 2023:

The dataset is no longer available through the university website. Please contact the university directly to access the data, as described in Issue 8.

EDA (Exploratory Data Analysis)

  • Run:
docker-compose -f eda/docker-compose.yaml up
  • Go to localhost:8001 and enter the token printed in the console

Siamese network

You can download the current best weights from Google Drive for MobileNetV2, EfficientNetB5, and ResNet101V2. Put them into ./model/siamese/weights and use the path as the --weights parameter.

Training

Make sure you have the cropped dataset in the ./data/cropped_animals folder. Please check the ./data/data_generator.py documentation for more info.

$ python train_siamese.py
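
For orientation, a siamese/embedding setup of this kind is typically wired as in the sketch below. This is an illustrative minimal version: the backbone, embedding size, and loss details are assumptions based on the weights listed above, not a copy of train_siamese.py.

# Illustrative siamese embedding model with a triplet loss
# (not the project's exact training code).
import tensorflow as tf

def build_embedding_model(embedding_dim=128):
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, pooling="avg"
    )
    out = tf.keras.layers.Dense(embedding_dim)(base.output)
    # L2-normalize so distances live on the unit hypersphere.
    out = tf.keras.layers.Lambda(
        lambda x: tf.math.l2_normalize(x, axis=1)
    )(out)
    return tf.keras.Model(base.input, out)

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor toward the positive, push it away from the negative.
    pos = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    neg = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    return tf.reduce_mean(tf.maximum(pos - neg + margin, 0.0))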

Generate embeddings for the test dataset and visualize them

Instead of running this script manually (it requires ~30GB of RAM), you can use the pre-generated train/test/concat files in ./data/visualization. Select two files with the same postfix, vecs-$1.tsv and meta-$1.tsv; it is important to use the same postfix, otherwise the lengths won't match.

$ python helpers/generate_siamese_emb_space.py

Options:

  • --datatype: either train or test (default: train); which dataset should be used for the embeddings
  • --weights: weights file name (default: siam-118_0.0633.h5) from the model/siamese/weights/MobileNetV2/ folder

This is going to produce two files:

  • vecs.tsv - list of embeddings for the test dataset
  • meta.tsv - list of labels for the embeddings

You can visualize these embeddings in the https://projector.tensorflow.org/ application. Just upload them as custom data (use the Load option).
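
The projector expects plain tab-separated files: one embedding per row in vecs.tsv and one matching label per row in meta.tsv. A minimal sketch of writing files in that layout (the arrays are stand-ins for real data):

# Write embeddings/labels in the TSV layout the projector expects.
import numpy as np

embeddings = np.random.rand(10, 128)          # stand-in for real vectors
labels = [f"pig_{i % 3}" for i in range(10)]  # stand-in for real labels

np.savetxt("vecs.tsv", embeddings, delimiter="\t")
with open("meta.tsv", "w") as f:
    f.write("\n".join(labels))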

  • Average class values - Video
  • Test day data - Video
  • Train all data - Video

Generate tracking data

$ cd data
$ python generate_tracking.py

This will produce tracking data from the videos so the model can be evaluated. Look for frames_tracking.json and pigs_tracking.json inside ./data/tracking/. For more details, check the Wiki.
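
To sanity-check the generated files, you can inspect them with the standard library; the exact schema is documented in the Wiki, so only a top-level peek is shown:

# Peek at the generated tracking data; see the Wiki for the schema.
import json

with open("data/tracking/pigs_tracking.json") as f:
    pigs = json.load(f)

print(type(pigs), len(pigs))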

Testing two images

You can specify the weights for the model. Please use the weights file marked with the lowest number (the loss value).

$ python test_siamese.py

Options:

  • --weights: weights file to load, e.g. siam-118_0.0633.h5
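
Whichever weights you pick, the comparison itself reduces to a distance between the two images' embedding vectors; for reference, a pure-NumPy illustration with stand-in values:

# Distance between two embeddings; smaller L2 / higher cosine = more similar.
import numpy as np

a = np.random.rand(128)  # embedding of image 1 (stand-in)
b = np.random.rand(128)  # embedding of image 2 (stand-in)

l2 = np.linalg.norm(a - b)
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"L2 distance: {l2:.4f}, cosine similarity: {cosine:.4f}")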