Pose Estimation in Neurophysiology

Previous pose estimation methods required reflective markers placed on a subject, as well as multiple expensive high-frame-rate infrared cameras to triangulate position within a limited field. Recent advances in machine learning have made it possible to capture pose data with a single video camera alone. In particular, DeepLabCut (DLC) facilitates the use of pre-trained machine learning models for 2D and 3D non-invasive markerless pose estimation.
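As an illustration of what markerless pose estimation produces, the sketch below mimics the tabular layout DLC uses for 2D output: one row per video frame, with a three-level column index of (scorer, bodypart, coordinate), where each bodypart carries x, y, and a detection likelihood. The scorer name, bodyparts, and values here are synthetic, not output from a real model.

```python
import numpy as np
import pandas as pd

# Synthetic sketch of DLC-style 2D pose output: one row per frame,
# columns indexed by (scorer, bodypart, coordinate).
scorer = "DLC_resnet50_demo"  # hypothetical model/scorer name
bodyparts = ["snout", "left_ear", "right_ear"]
coords = ["x", "y", "likelihood"]
columns = pd.MultiIndex.from_product(
    [[scorer], bodyparts, coords],
    names=["scorer", "bodyparts", "coords"],
)

n_frames = 5
rng = np.random.default_rng(0)
poses = pd.DataFrame(
    rng.uniform(0, 1, size=(n_frames, len(columns))), columns=columns
)

# A common downstream step: keep only confident detections,
# e.g. frames where the snout likelihood exceeds a threshold.
snout = poses[scorer]["snout"]
confident_snout_xy = snout.loc[snout["likelihood"] > 0.9, ["x", "y"]]
print(confident_snout_xy)
```

Storing the likelihood alongside each coordinate lets analyses filter out low-confidence frames rather than treating every estimate as equally reliable.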

While some alternative tools are species-specific (e.g., DeepFly3D) or limited to 2D (e.g., DeepPoseKit), DLC demonstrates a diversity of use cases via its Model Zoo. Even compared to tools with similar functionality (e.g., SLEAP and DANNCE), DLC has more users, as measured by GitHub forks, and more citations (1,600 vs. 900). DLC's trajectory toward an industry standard is attributable to continued funding, extensive documentation, and both creator and peer support.

Key Partnerships

Mackenzie Mathis (Swiss Federal Institute of Technology Lausanne) is both a lead developer of DLC and a key advisor on DataJoint open source development as a member of the Scientific Steering Committee.

DataJoint is also partnered with a number of groups who use DLC as part of broader workflows. In these collaborations, members of the DataJoint team have interviewed researchers to understand their needs in experiment workflow, pipeline design, and interfaces.

These teams include:

Pipeline Development

Development of the Element began with an open-source repository shared by the Mathis team. We then identified common needs across our partnerships to offer the following features for single-camera 2D models:

The workflow handles training data as file sets stored within DLC's project directory, and the parameters of the configuration file are captured and preserved. Model evaluation permits direct comparison across models, and, when combined with upstream Elements, Element DeepLabCut can generate pose estimates for each session.
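The evaluation step above can be sketched as follows. The model names, config parameters, and pixel errors are hypothetical and do not reflect the real Element DeepLabCut schema; the point is that preserving each model's configuration alongside its evaluation metrics makes direct comparison a simple query.

```python
# Hypothetical records pairing each trained model's preserved config
# parameters with its evaluation metrics (values are illustrative).
models = [
    {"name": "resnet50_iter100k",
     "config": {"net_type": "resnet_50", "TrainingFraction": 0.95},
     "train_px_error": 2.1, "test_px_error": 4.8},
    {"name": "resnet50_iter200k",
     "config": {"net_type": "resnet_50", "TrainingFraction": 0.95},
     "train_px_error": 1.4, "test_px_error": 3.9},
    {"name": "mobilenet_iter200k",
     "config": {"net_type": "mobilenet_v2_1.0", "TrainingFraction": 0.95},
     "train_px_error": 2.9, "test_px_error": 5.6},
]

# Direct model comparison: select the model with the lowest
# held-out test error for use in downstream pose estimation.
best = min(models, key=lambda m: m["test_px_error"])
print(best["name"], best["config"]["net_type"], best["test_px_error"])
```

In the actual Element, such records live in database tables rather than Python dicts, so the same comparison can be expressed as a query and shared across a lab.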