Feb 16, 2026

Open-Field Video Tracking and Locomotion Classification (DeepLabCut Pipeline)

  • Cristian González-Cabrera¹
  • ¹LIN
Protocol Citation: Cristian González-Cabrera 2026. Open-Field Video Tracking and Locomotion Classification (DeepLabCut Pipeline). protocols.io https://dx.doi.org/10.17504/protocols.io.q26g7763kgwz/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Protocol status: Working
We use this protocol and it's working
Created: February 16, 2026
Last Modified: February 16, 2026
Protocol Integer ID: 243355
Keywords: framewise locomotion classification pipeline, locomotion classification, field video tracking, video tracking, deeplabcut, video frame, deeplabcut pipeline, video, sideways locomotion, track, stimulus time bins for each stimulation bout, body point, instantaneous movement vector
Funders Acknowledgements:
ASAP
Grant ID: 020505
Abstract
This protocol describes the video tracking and framewise locomotion classification pipeline used for open-field experiments. Videos were tracked in DeepLabCut (DLC) using three body points (nose, back, tail base). Tracks were filtered and smoothed, and each video frame was classified as forward, backward, or sideways locomotion based on the angle between the instantaneous movement vector and the body-axis orientation vector. Outputs were summarized in peri-stimulus time bins for each stimulation bout and exported for downstream analyses.
Guidelines
**Critical steps and notes**
- Use consistent video acquisition settings (30 fps, stable lighting) across sessions.
- Maintain the same DLC network and post-processing parameters across conditions.
- The speed gate and angle thresholds strongly affect classification; keep them fixed once defined.
- Inspect representative sessions visually (overlay tracks) to confirm tracking quality and correct orientation.
Materials
**Inputs**
- Overhead open-field video recordings (1080p; 30 fps).
- Stimulation timing (bout onset timestamps) to define peri-stimulus windows.
- DeepLabCut model trained to detect: nose, back, tail base.

**Software**
- DeepLabCut (DLC).
- Python or MATLAB for post-processing of DLC tracks (gap filling, smoothing, vector computations, binning).
Procedure
Track each video in DeepLabCut using three keypoints: nose, back, tail base.
Export x/y coordinates and likelihood values per frame for each keypoint.
Likelihood filter: retain frames with likelihood p ≥ 0.9 (per keypoint).
Gap-fill short dropouts in the coordinate traces (brief missing segments after likelihood filtering).
Smooth the coordinate traces using a 6-frame moving average.
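The filtering, gap-filling, and smoothing steps above can be sketched in Python as follows. The function name, the `max_gap` value of 5 frames (the protocol says only "short dropouts" without specifying a length), and the use of linear interpolation are illustrative assumptions; the likelihood cutoff (p ≥ 0.9) and 6-frame moving average follow the protocol.

```python
import numpy as np
import pandas as pd

def clean_trace(xy, likelihood, p_min=0.9, max_gap=5, win=6):
    """Likelihood-filter, gap-fill, and smooth one keypoint's x/y trace.

    xy:         (n_frames, 2) array of DLC coordinates for one keypoint
    likelihood: (n_frames,) array of DLC likelihood values
    """
    df = pd.DataFrame(xy, columns=["x", "y"])
    # Drop low-confidence detections (likelihood filter, p >= 0.9 per keypoint).
    df[likelihood < p_min] = np.nan
    # Linearly interpolate short dropouts only; longer gaps remain NaN.
    df = df.interpolate(limit=max_gap, limit_direction="both")
    # 6-frame moving average to smooth the trace.
    return df.rolling(win, min_periods=1, center=True).mean().to_numpy()
```

Applied per keypoint (nose, back, tail base) before any vector computations.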
Movement vector (per frame): the framewise displacement of the 'back' keypoint.
Body-axis orientation vector (per frame): the vector from the tail base to the back keypoint.
Only classify frames that exceed a minimum movement threshold (speed gate): displacement of the 'back' keypoint ≥ 0.2 units per frame.
Compute the angle between the movement vector and the body-axis orientation vector for each frame passing the speed gate.
Classify frames by angle thresholds:
Forward: <60°
Backward: >120°
Sideways: 60–120°
For sideways frames (60–120°), classify leftward vs rightward using the sign of the 2D cross product between vectors.
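A minimal sketch of the speed gate, angle computation, and framewise classification, assuming smoothed 'back' and tail-base traces as input. The thresholds (0.2 units/frame gate; 60°/120° angle cuts) follow the protocol; the mapping of cross-product sign to left vs. right is an assumption, since it depends on the image coordinate handedness (image y typically points down) and should be verified against overlaid tracks.

```python
import numpy as np

def classify_frames(back, tailbase, speed_gate=0.2):
    """Label each frame forward / backward / sideways_left / sideways_right / none.

    back, tailbase: (n_frames, 2) smoothed coordinate arrays.
    Frames below the speed gate are labeled 'none' (not classified).
    """
    # Movement vector: framewise displacement of 'back' (first frame has zero displacement).
    move = np.diff(back, axis=0, prepend=back[:1])
    # Body-axis orientation vector: tail base -> back.
    axis = back - tailbase
    speed = np.linalg.norm(move, axis=1)

    # Angle between movement and body-axis vectors, in degrees.
    dot = (move * axis).sum(axis=1)
    denom = speed * np.linalg.norm(axis, axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        ang = np.degrees(np.arccos(np.clip(dot / denom, -1.0, 1.0)))

    # Sign of the 2D cross product distinguishes leftward vs rightward sideways motion.
    cross = axis[:, 0] * move[:, 1] - axis[:, 1] * move[:, 0]

    labels = np.full(len(back), "none", dtype=object)
    gated = speed >= speed_gate
    labels[gated & (ang < 60)] = "forward"
    labels[gated & (ang > 120)] = "backward"
    side = gated & (ang >= 60) & (ang <= 120)
    labels[side & (cross > 0)] = "sideways_left"    # sign convention: an assumption
    labels[side & (cross <= 0)] = "sideways_right"
    return labels
```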
For each stimulation bout, extract a peri-stimulus window from −1 s to +4 s relative to bout onset.
Bin the peri-stimulus window into 200 ms bins.
Within each bin, compute the proportion of frames classified as forward, backward, and sideways (and optionally left/right sideways).
Compute these proportions per trial (per bout), then average within mouse to yield one per-mouse summary per condition.
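The peri-stimulus binning can be sketched as below, assuming per-frame labels from the classification step and bout onsets in seconds. The window (−1 s to +4 s), bin width (200 ms), and 30 fps come from the protocol; the function name, the folding of left/right into a single 'sideways' class, and the choice to leave incomplete edge bins as NaN are assumptions.

```python
import numpy as np

def peri_stimulus_proportions(labels, bout_onsets, fps=30,
                              t_pre=1.0, t_post=4.0, bin_s=0.2):
    """Per-bout proportions of each locomotion class in peri-stimulus bins.

    labels:      per-frame class labels for the whole session
    bout_onsets: stimulation bout onset times, in seconds
    Returns a dict mapping class -> (n_bouts, n_bins) array of proportions.
    """
    frames_per_bin = int(round(bin_s * fps))       # 6 frames per 200-ms bin at 30 fps
    n_bins = int(round((t_pre + t_post) / bin_s))  # 25 bins spanning -1 s .. +4 s
    classes = ("forward", "backward", "sideways")
    out = {c: np.full((len(bout_onsets), n_bins), np.nan) for c in classes}

    # Fold 'sideways_left'/'sideways_right' into 'sideways' for the 3-class summary.
    base = np.asarray([l.split("_")[0] for l in labels])
    for i, t0 in enumerate(bout_onsets):
        start = int(round((t0 - t_pre) * fps))
        for b in range(n_bins):
            seg = base[start + b * frames_per_bin : start + (b + 1) * frames_per_bin]
            if len(seg) == frames_per_bin:         # skip bins falling off the recording
                for c in classes:
                    out[c][i, b] = np.mean(seg == c)
    return out
```

Averaging the per-bout rows within each mouse then yields the one per-mouse summary per condition described above.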