By Alexander Mathis | Steffen Schneider | Jessy Lauer | Mackenzie Mathis
Here we provide the code used in the worked examples of our Primer on Deep Learning for Motion Capture.
Publication in Neuron: https://www.cell.com/neuron/fulltext/S0896-6273(20)30717-0
Preprint: https://arxiv.org/abs/2009.00564
Create the figure in Google Colaboratory:
Or in plain Python:
```
python Illustrating-Augmentations-FigurePipelineMontBlancBirds.py
python Illustrating-Augmentations-FigurePipelineMouse.py
```
This code creates images like this:
To simulate inattentive labeling, we corrupted the original dataset from Mathis et al., available on Zenodo. We corrupted 1, 5, 10, or 20% of the dataset (N = 1,066 images), either by swapping two labels or by removing one, then trained models on varying fractions of the data and evaluated them on the held-out test set (the rest of the data). The figures below show the percentage of correct keypoints (PCK) on the test set for the various conditions. The results from these experiments can be plotted as follows:
```
python CorruptionFigure.py
```
This creates figures like the following (when training with 10% of the data, as shown in the paper). Results for other training fractions (not shown in the paper) are also plotted.
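The corruption procedure itself is simple to sketch. The snippet below is a minimal, self-contained illustration (not the actual script from this repository): for a chosen fraction of images, it either swaps two keypoint labels or removes one. The data structure (image name mapping to bodypart coordinates) is an assumption made for illustration.

```python
import random

def corrupt_labels(labels, fraction, seed=0):
    """Simulate inattentive labeling: for `fraction` of the images,
    either swap two keypoint labels or remove one.

    `labels` maps image name -> {bodypart: (x, y)}; this structure is
    illustrative, not the format used by the repository's scripts.
    """
    rng = random.Random(seed)
    corrupted = {img: dict(kps) for img, kps in labels.items()}
    n_corrupt = round(fraction * len(corrupted))
    for img in rng.sample(sorted(corrupted), n_corrupt):
        kps = corrupted[img]
        parts = sorted(kps)
        if rng.random() < 0.5 and len(parts) >= 2:
            a, b = rng.sample(parts, 2)    # swap two labels
            kps[a], kps[b] = kps[b], kps[a]
        else:
            del kps[rng.choice(parts)]     # remove one label
    return corrupted
```

Training on such corrupted labels, with the rest of the pipeline unchanged, is what produces the degradation curves plotted above.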
Here we used the standard example dataset from the main repo: https://github.com/DeepLabCut/DeepLabCut/tree/master/examples/openfield-Pranav-2018-10-30 and trained with 3 different augmentation methods. To plot the results, run the following:
```
python Primer_AugmentationComparison-Figures.py
```
This creates the figures (in the folder ResultsComparison):

- 8A = ResultsComparison/LearningCurves.png
- 8B = ResultsComparison/ErrorCurves.png
- 8C, D = Stickfigures.png, Stickfiguresfirst500.png
- incl. the 8D frames in the folder "ResultsComparisonFrames"
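The augmentation pipelines compared above (imgaug, tensorpack, etc.) differ in which image transformations they apply, but all share one requirement: keypoints must be transformed together with the image. The sketch below illustrates this for a horizontal flip on a nested-list "image"; it is a toy stand-in for what the real augmentation packages do, not their implementation.

```python
def fliplr(image, keypoints):
    """Horizontally flip an image (a nested list of rows) and mirror
    the keypoint x-coordinates accordingly. A minimal illustration of
    keypoint-aware augmentation; real pipelines also handle rotation,
    scaling, cropping, etc.
    """
    width = len(image[0])
    flipped = [row[::-1] for row in image]
    # a pixel at column x maps to column width - 1 - x after the flip
    mirrored = {name: (width - 1 - x, y) for name, (x, y) in keypoints.items()}
    return flipped, mirrored
```

Note that for symmetric bodyparts a flip must also swap the labels themselves (e.g. "left ear" and "right ear"), which augmentation packages handle via an explicit symmetry mapping.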
We then evaluated this model (trained on only a single mouse and session, which is not recommended) on all other mice and sessions (data available on Zenodo). We noticed that with good augmentation (imgaug, tensorpack) the model generalizes to data from the same camera, but not to the higher-resolution camera (Fig 8E). We further illustrate active learning by adding a few frames from an experiment with the higher-resolution camera (Fig 8F); we found that this is sufficient to generalize reasonably well.
```
python Primer_RobustnessEvaluation-Figures.py
```
This creates the figures (in the folder ResultsComparison):

- 8E = ResultsComparison/PCKresults.png
- 8F = ResultsComparison/PCKresultsafterAugmentation.png
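The PCK metric reported in these figures is the fraction of predicted keypoints that fall within a distance threshold of the ground truth. A minimal sketch of the computation (the threshold value is a choice of the evaluation, not fixed by the metric; the list-of-coordinates interface here is an illustrative assumption):

```python
import math

def pck(predictions, ground_truth, threshold):
    """Percentage of correct keypoints: the fraction of predictions
    within `threshold` (e.g. pixels) of the ground truth.
    `predictions` and `ground_truth` are parallel lists of (x, y).
    """
    hits = sum(
        math.dist(p, g) <= threshold
        for p, g in zip(predictions, ground_truth)
    )
    return hits / len(predictions)
```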
Outlook
Do you want to contribute labeled data to the DeepLabCut project? Check out http://contrib.deeplabcut.org