
DeepLabCut

Welcome to the DeepLabCut repository, a toolbox for markerless tracking of body parts of animals performing various tasks in lab settings, such as trail tracking, reaching in mice, and various Drosophila behaviors during egg-laying (see Mathis et al. for details). There is, however, nothing specific that makes the toolbox applicable only to these tasks and/or species. The toolbox has already been successfully applied to rats, humans, various fish species, bacteria, leeches, various robots, and racehorses. Please check out www.mousemotorlab.org/deeplabcut for video demonstrations of automated tracking.

[Figure: githubfig-01-01.png]

This work utilizes the feature detectors (ResNet + readout layers) of one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., called DeeperCut, which inspired the name for our toolbox (see references below).
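To make the architecture concrete, here is a schematic sketch of the "ResNet + readout layers" idea. This is purely illustrative, written in Keras rather than the repository's actual implementation (which builds on a TensorFlow port of DeeperCut); the number of body parts and the layer hyperparameters are assumptions for the example. A ResNet backbone computes a spatially downsampled feature map, and a deconvolutional readout layer upsamples it into one score map per body part.

```python
import tensorflow as tf

NUM_BODYPARTS = 4  # illustrative: e.g. snout, left ear, right ear, tail base

def build_pose_net():
    # Input: an RGB video frame of arbitrary size.
    inputs = tf.keras.Input(shape=(None, None, 3))
    # ResNet backbone: the pretrained "feature detector".
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_tensor=inputs)
    features = backbone.output  # spatially downsampled feature map
    # Deconvolutional "readout" layer: upsamples the features into one
    # score map per body part, scoring how likely that part is at each location.
    scoremaps = tf.keras.layers.Conv2DTranspose(
        NUM_BODYPARTS, kernel_size=3, strides=2, padding="same",
        activation="sigmoid", name="part_scoremaps")(features)
    return tf.keras.Model(inputs, scoremaps)

model = build_pose_net()
model.summary()
```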

In our paper we demonstrate that these feature detectors can be trained with only a few labeled images to achieve human-level tracking accuracy for various body parts in lab tasks. Please check it out:

"DeepLabCut: markerless pose estimation of user-defined body parts with deep learning" by Alexander Mathis, Pranav Mamidanna, Kevin M. Cury, Taiga Abe, Venkatesh N. Murthy, Mackenzie W. Mathis* and Matthias Bethge*

News:

  • Our paper just appeared in Nature Neuroscience!
  • Ed Yong covered DeepLabCut and interviewed several users for The Atlantic.
  • All the documentation is now (also) organized in a website format!
  • We added a simplified installation procedure, including conda environments & a Docker container. See the Installation guide.
  • Thanks to Richard Warren for checking the compatibility of the code on Windows. It works!
  • We added "quick guides" for training and for the evaluation tools that we provide with the package. We still recommend becoming familiar with the code base via the demo (below) first.
  • We also have a Slack group for questions that you feel don't fit a GitHub issue (deeplabcut.slack.com) (please email Mackenzie at [email protected] to join!)

Overview:

A typical use case is:

A user has videos of an animal (or animals) performing a behavior and wants to extract the position of various body parts from images/video frames. Ideally these parts are visible to a human annotator, yet potentially difficult to extract by standard image processing methods due to changes in background, body articulation etc.

To solve this problem, one can train feature detectors in an end-to-end fashion (a code sketch follows the list below). In order to do so, one should:

  • label points of interest (e.g. joints, snout, etc.) in distinct frames (containing different poses, individuals, etc.)
  • train a deep neural network while holding out some labeled frames to check that it generalizes well
  • once the network is trained, use it to analyze videos quickly
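
As a concrete illustration, here is a minimal sketch of these steps with the deeplabcut Python package. The function names follow the later pip-installable project API and may differ in this snapshot of the repository; the project name, experimenter, and video paths are hypothetical.

```python
import deeplabcut

# Create a project around your videos; this writes a config.yaml.
config = deeplabcut.create_new_project(
    "reaching-task", "alex", ["/videos/mouse1.avi"])

# 1. Label points of interest in distinct frames.
deeplabcut.extract_frames(config)           # select diverse frames
deeplabcut.label_frames(config)             # GUI for clicking body parts

# 2. Train while holding out labeled frames to check generalization.
deeplabcut.create_training_dataset(config)  # makes a train/test split
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)         # errors on train & test frames

# 3. Use the trained network to analyze new videos quickly.
deeplabcut.analyze_videos(config, ["/videos/mouse2.avi"])
```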

The key result of our paper is that one typically requires just a few labeled frames to get excellent tracking results.

The general pipeline for DeepLabCut is:

Install --> Extract frames --> Label training data --> Train DeeperCut feature detectors --> Apply your trained network to unlabeled data --> Extract trajectories for analysis.

[Figure: deeplabcutFig-01.png]

Once one has a well-trained network, one can use it to analyze heaps of videos (see the Analysis tools). The network can also be retrained on frames where it makes errors; a sketch of this refinement loop follows below. A User guide is also available in website format.
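
A hedged sketch of that refinement loop, again assuming the pip-installable deeplabcut package's function names; the config path and video files are hypothetical.

```python
import deeplabcut

config = "/path/to/config.yaml"  # hypothetical project config
videos = ["/videos/session1.avi", "/videos/session2.avi"]

# Analyze heaps of videos with the trained network.
deeplabcut.analyze_videos(config, videos)

# Pull out frames where the network likely made errors, correct them in
# the labeling GUI, fold them back into the training set, and retrain.
deeplabcut.extract_outlier_frames(config, videos)
deeplabcut.refine_labels(config)
deeplabcut.merge_datasets(config)
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
```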

Installation guide and Hardware and Software Requirements:

Installation guide

Demo and detailed user instructions for training and testing the network:

User guide (detailed walk-through with labeled example data)

Quick guide for training a tailored feature detector network

Quick guide for evaluation of feature detectors (on train & test set)

User instructions for analyzing data (with a trained network):

Analysis guide: how to use a trained network to analyze videos
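
The analysis step saves the predicted trajectories as a pandas DataFrame in an HDF5 file, with a multi-level column index (scorer / body part / x, y, likelihood). A minimal sketch of loading them, assuming a hypothetical output file name and body-part label:

```python
import pandas as pd

# Hypothetical output file produced by the analysis step.
df = pd.read_hdf("mouse2DLC_resnet50_reaching.h5")

# Columns are a MultiIndex: (scorer, body part, coordinate).
scorer = df.columns.get_level_values(0)[0]
snout = df[scorer]["snout"]  # "snout" is an assumed body-part label

print(snout[["x", "y", "likelihood"]].head())

# Keep only confident detections before computing trajectories.
confident = snout[snout["likelihood"] > 0.9]
```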

Support:

If you are having issues, please let us know via the Issue Tracker. Please also consider checking the already-closed issues and the Frequently Asked Questions to see if your problem has already been addressed.

For questions, feel free to reach out to [[email protected]] or [[email protected]], or join our Slack user group (deeplabcut.slack.com; please email Mackenzie to join!).

Contribute:

DeepLabCut is an actively developing project and community contributions are welcome!

Code contributors:

Alexander Mathis, Mackenzie Mathis, and the DeeperCut authors for the feature detector code. Edits and suggestions by Jonas Rauber, Taiga Abe, Hao Wu, Jonny Saunders, Richard Warren and Brandon Forys. The feature detector code is based on Eldar Insafutdinov's TensorFlow implementation of DeeperCut. Please check out the following references for details:

References:

@inproceedings{insafutdinov2017cvpr,
    title = {ArtTrack: Articulated Multi-person Tracking in the Wild},
    author = {Eldar Insafutdinov and Mykhaylo Andriluka and Leonid Pishchulin and Siyu Tang and Evgeny Levinkov and Bjoern Andres and Bernt Schiele},
    booktitle = {CVPR'17},
    url = {http://arxiv.org/abs/1612.01465}
}

@inproceedings{insafutdinov2016eccv,
    title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
    author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele},
    booktitle = {ECCV'16},
    url = {http://arxiv.org/abs/1605.03170}
}

@article{Mathisetal2018,
    title = {DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
    author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
    journal = {Nature Neuroscience},
    year = {2018},
    url = {https://www.nature.com/articles/s41593-018-0209-y}
}

License:

This project is licensed under the GNU Lesser General Public License v3.0.

