Deep Learning Tracker to track the trajectory of the selected object. Undergraduate work.


TDaryaT/DLT


Deep Learning Model for Visual Tracking

Undergraduate work.

Implementation of the DLT tracker in MATLAB, with testing of an algorithm for quantizing the weighting coefficients. Using this tracker, you can track the trajectory of an object across a set of images in real time.

Link to additional information.

Prerequisites

I advise you to read the article on the DLT tracker and the article on the weighting coefficient quantization algorithm before starting work.

You can also familiarize yourself with the tracker-comparison work and with the sequences designed specifically for such programs.

From here you can take sets of video frames for testing the tracker.

Getting Started

These instructions will get a copy of the project up and running on your local machine. This work was done on Linux.

Next, you need to edit "run_individual.m", namely:

  • Set the path to the selected test sequence, i.e. the folder containing the images (jpg or png format), via the "dataPath" variable:
dataPath = 'name_data_path';
  • If you want to use the GPU for faster results, change the "useGpu" variable to true:
useGpu = true;
  • If your sequence is already listed in the "switch" statement, change the title variable to its name:
title = 'name_folder\title'; 
  • If you want to test your own sequence, add its parameters to the switch statement, as in:
case 'name_folder_with_img'; p = [158 106 62 78 0]; 

    opt = struct('numsample',1000, 'affsig',[4, 4,.005,.00,.001,.00]);

where:

  • p = [px, py, sx, sy, theta]; - the location of the target in the first frame: px and py are the coordinates of the centre of the box; sx and sy are the width and height of the box (in the x and y dimensions, before rotation); theta is the rotation angle of the box.

  • 'numsample',1000 - the number of samples used in the condensation algorithm (particle filter).

  • 'affsig',[4,4,.02,.02,.005,.001] These are the standard deviations of the dynamics distribution, that is how much we expect the target object might move from one frame to the next. The meaning of each number is as follows:

    affsig(1) = x translation (pixels, mean is 0)

    affsig(2) = y translation (pixels, mean is 0)

    affsig(3) = x & y scaling

    affsig(4) = rotation angle

    affsig(5) = aspect ratio

    affsig(6) = skew angle
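To make the role of these options concrete, here is a minimal sketch of how a particle filter propagates samples using p, numsample, and affsig. The variable names and the state layout (including the /32 template-width normalization) are assumptions for illustration, not the repository's exact code:

```matlab
% Sketch: propagating particles with affsig noise (illustrative only;
% variable names and state normalization are assumed, not taken from the repo).
numsample = 1000;
affsig    = [4, 4, .005, .00, .001, .00];   % std dev of each affine parameter
p         = [158 106 62 78 0];              % initial box: [px py sx sy theta]

% Assumed affine state layout: [tx ty scale theta aspect skew],
% with scale normalized by an assumed template width of 32 pixels.
state = [p(1), p(2), p(3)/32, p(5), p(4)/p(3), 0];

% Replicate the state for every particle and add zero-mean Gaussian
% motion noise, one standard deviation per affine parameter.
param = repmat(state, numsample, 1);
param = param + randn(numsample, 6) .* repmat(affsig, numsample, 1);
```

Larger affsig entries let particles spread further per frame, which tolerates faster motion at the cost of more samples needed for the same accuracy.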

About changing the number of quantization levels

The quantization algorithm is applied during network initialization in the file "initDLT.m". To change the quantization level, replace the loaded file, substituting NUMBER with the desired number of levels:

 load quant_res_new_NUMBER;
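For intuition, a minimal sketch of uniform weight quantization to a given number of levels is shown below. This is illustrative only; the precomputed quant_res_new_* files in the repository may be produced by a different quantization scheme:

```matlab
% Sketch: uniform quantization of a weight matrix W to 'levels' levels
% (illustrative; not the repository's exact algorithm).
levels = 8;
W = randn(100, 100);                         % example weight matrix
wmin = min(W(:));  wmax = max(W(:));
step = (wmax - wmin) / (levels - 1);         % spacing between levels
Wq = round((W - wmin) / step) * step + wmin; % snap each weight to a level
```

Fewer levels shrink the model but increase the quantization error of each weight.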

Running by example

Download the Car4 sequence to your computer and follow all the instructions described above, i.e.:

  • In my case, the folder where the sequence is located is here:
dataPath = '/home/dasha/Desktop/диплом/individual_siq/';
  • The sequence's options are already in the "switch" statement:
title = 'Car4';  

Next, run "run_individual.m"; you will see the frame-by-frame output of the sequence, with the trajectory of the object marked by a red box.
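As a sketch of how such a red box can be drawn on a frame (the file name is hypothetical, and rotation is ignored for simplicity):

```matlab
% Sketch: overlaying the tracked box on a frame (hypothetical file name;
% the rotation angle theta is ignored in this sketch).
p = [158 106 62 78 0];                      % [px py sx sy theta]
img = imread('0001.jpg');                   % first frame of the sequence
imshow(img); hold on;
% rectangle expects [x y width height] of the top-left corner,
% so convert from the centre coordinates (px, py).
rectangle('Position', [p(1)-p(3)/2, p(2)-p(4)/2, p(3), p(4)], ...
          'EdgeColor', 'r', 'LineWidth', 2);
```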


Authors

  • **Tlepbergenova Darya** - MMF NSU 2020

See also the text of the undergraduate thesis on Overleaf.
