What this is
A human pose dataset for AI training here means packaged files where each accepted frame is stored as structured data: normalized keypoints (typically MediaPipe-compatible body landmarks), optional temporal smoothing, quality scores, and metadata. You work primarily in JSON (line-delimited JSONL at scale) and can optionally export a COCO-like, keypoints-only JSON for tools that expect a single JSON document with images and annotations lists. Coordinates stay normalized unless your pipeline maps them to pixels.
Licensing and scope: see Dataset licensing & terms. Exports are transformative representations; they do not ship original video as the deliverable.
Examples (what you receive)
data.jsonl
One JSON object per line: frame id, timestamps, keypoints (x, y, z, visibility), optional motion features and layer summaries.
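Reading the JSONL export is a one-pass stream. A minimal sketch, assuming illustrative field names (`frame_id`, `keypoints`, `visibility`); check manifest.json for the exact schema your bundle ships:

```python
import json

def load_frames(path):
    """Stream frames from a line-delimited JSONL export.

    Field names here (frame_id, keypoints, visibility) are illustrative
    assumptions -- consult manifest.json for the authoritative schema.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:  # tolerate blank lines between records
                continue
            yield json.loads(line)

def visible_keypoints(frame, min_visibility=0.5):
    """Keep only keypoints whose visibility score clears a threshold."""
    return [kp for kp in frame.get("keypoints", [])
            if kp.get("visibility", 0.0) >= min_visibility]
```

Streaming line by line keeps memory flat even for multi-gigabyte exports, which is the point of choosing JSONL over a single JSON array.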
coco_keypoints.json (optional)
COCO-style keypoints-only bundle, when enabled, with annotations keyed to frame indices; useful for pipelines that ingest a single JSON file.
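COCO keypoints are conventionally stored as flat `[x1, y1, v1, x2, y2, v2, ...]` triplets in pixel units, while this bundle keeps coordinates normalized. A sketch of the mapping a downstream pipeline might apply, assuming per-keypoint dicts with `x`, `y`, `visibility` fields (the 0/1/2 visibility rounding below is an illustrative simplification, not part of the export):

```python
def to_pixels(x_norm, y_norm, width, height):
    """Map normalized [0, 1] coordinates into pixel space."""
    return x_norm * width, y_norm * height

def flatten_coco_keypoints(keypoints, width, height):
    """Flatten [{x, y, visibility}, ...] into COCO's [x1, y1, v1, ...] list.

    Visibility is collapsed to COCO's 2 (visible) / 1 (labeled, occluded)
    convention with a simple threshold -- an assumption for illustration.
    """
    flat = []
    for kp in keypoints:
        px, py = to_pixels(kp["x"], kp["y"], width, height)
        flat.extend([px, py, 2 if kp.get("visibility", 0.0) >= 0.5 else 1])
    return flat
```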
Manifests & QA
manifest.json, global_stats.json, export_quality_report.json, and runtime_config.json documenting filters and augmentation parameters.
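A quick sanity check before training is to confirm the QA and manifest files listed above are present and parseable. A minimal sketch (file names come from this export description; their internal fields vary per bundle, so nothing beyond valid JSON is assumed):

```python
import json
from pathlib import Path

# File names taken from the export description above.
EXPECTED = ["manifest.json", "global_stats.json",
            "export_quality_report.json", "runtime_config.json"]

def check_bundle(bundle_dir):
    """Report which expected manifest/QA files exist and parse them.

    Missing files are returned rather than raised, since bundle
    contents (e.g. the optional coco_keypoints.json) vary per export.
    """
    bundle = Path(bundle_dir)
    found, missing = {}, []
    for name in EXPECTED:
        path = bundle / name
        if path.exists():
            found[name] = json.loads(path.read_text())
        else:
            missing.append(name)
    return found, missing
```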
Free samples for format evaluation: GitHub, Hugging Face, Kaggle.
Use cases
Robotics
Train locomotion policies, humanoid controllers, or teleoperation models using consistent 2D/3D keypoint sequences and motion statistics. Bulk exports and documented augmentation ranges support sim-to-real experiments and reproducible training splits.
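Motion features for control tasks are often simple derived quantities over the keypoint sequence. A minimal sketch of finite-difference joint velocities, assuming each frame is a list of normalized `(x, y)` pairs with a fixed frame interval `dt` (both assumptions, not guarantees of the export schema):

```python
def joint_velocities(frames, dt):
    """Finite-difference velocities between consecutive frames.

    Assumes: frames is a list of per-frame keypoint lists, each
    keypoint an (x, y) pair in normalized units, sampled every dt
    seconds. Returns one velocity list per frame transition.
    """
    velocities = []
    for prev, cur in zip(frames, frames[1:]):
        velocities.append([((cx - px) / dt, (cy - py) / dt)
                           for (px, py), (cx, cy) in zip(prev, cur)])
    return velocities
```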
LLM vision & multimodal AI
Pair structured pose timelines with video frames in your own pipeline to build captioning, VQA, or action-description models. JSONL is easy to shard and join with text tokens for multimodal fine-tuning.
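Joining pose frames with text in your own pipeline amounts to a key-based merge. A minimal sketch, assuming a `frame_id` field on each JSONL record and a hypothetical caption lookup keyed the same way:

```python
import json

def join_poses_with_captions(pose_lines, captions):
    """Pair JSONL pose records with captions by frame id.

    'frame_id' and the captions dict are illustrative assumptions --
    use whatever join key your pipeline and annotation source share.
    """
    for line in pose_lines:
        frame = json.loads(line)
        frame_id = frame.get("frame_id")
        if frame_id in captions:
            yield {"pose": frame, "text": captions[frame_id]}
```

Because each record is one line, the same join shards cleanly: split data.jsonl by line ranges and run the merge per shard.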
Surveillance AI
Use pose-based features for crowd analytics, fall-detection research, or activity classification under your compliance framework. Exports emphasize abstract motion signals; always align deployment with local laws and your organization's privacy policy.
Ready to buy or scope a bundle?
Browse ready-made pose bundles or Dataset Lab frame quotas.