# Movement Trajectory Features
## Description

The MovementTrajectoryFeatures module is a feature-extraction module that can be used for computing basic temporal features of a given time series.
The MovementTrajectoryFeatures algorithm works as follows:
- Each new input sample (with a dimensionality of N) is added to a circular buffer (of length M)
- The buffer is segmented into K equal windows and the centroid of each window is computed
The features are then computed from these K * N centroid values. There are five feature modes to choose from:
- CENTROID_VALUE: In this feature mode, the features are simply the centroid values. The feature vector size will therefore be K * N (where K is the number of centroids and N is the number of input dimensions).
- NORMALIZED_CENTROID_VALUE: In this feature mode, the features are the centroid values, normalized by the minimum and maximum centroid values to a new range of [0 1]. The feature vector size will therefore be K * N.
- CENTROID_DERIVATIVE: In this feature mode, the features are the derivative values between centroid i and centroid i+1. The feature vector size will therefore be (K-1) * N.
- CENTROID_ANGLE_2D: In this feature mode, the features consist of a histogram of the angles between consecutive centroids. For this mode, your input signal must have an even dimensionality, and the assumption is that each pair of input variables is a pair of [x y] coordinates (or something similar). The feature vector size will be H * (N/2), where H is the number of histogram bins.
- CENTROID_ANGLE_3D: This feature mode is similar to CENTROID_ANGLE_2D, except that it is designed for triplets of 3D points. For this mode, the number of dimensions must be divisible by 3, and it is assumed that each triplet of input variables is a triplet of [x y z] coordinates (or something similar). The feature vector size will be H * (N/3), where H is the number of histogram bins.
## Applications

The MovementTrajectoryFeatures module is a good feature-extraction algorithm to use if you need to classify a temporal gesture but do not want to use a dedicated temporal classifier (such as Dynamic Time Warping or a Hidden Markov Model). The MovementTrajectoryFeatures module can be used with any of the non-temporal classifiers, such as the ANBC, KNN, or SVM algorithms.