
MoSh: Motion and Shape Capture from Sparse Markers

2021-10-09


MoSh can fit surface models of humans, animals, and objects to marker-based motion capture (mocap) data with an accuracy second only to that of 4D scan data. Fitting is completely automatic, given labeled and clean mocap marker data. The labeling can also be automated using SOMA, whose code is also released here.
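
The snippet below is a minimal conceptual sketch of the underlying idea, fitting a parameterized template to observed 3D markers by minimizing the marker residual, and is not the moshpp implementation. Here the "model" is only a uniformly scaled rigid transform of a marker template, whereas MoSh optimizes full body shape and pose; all names and data in the sketch are illustrative, so consult the repository for the actual API.

```python
# Conceptual toy: recover model parameters from sparse markers by
# least-squares minimization of the marker residual (not moshpp code).
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def predict_markers(params, template):
    """Apply scale, rotation (axis-angle), and translation to template markers."""
    scale = params[0]
    rot = Rotation.from_rotvec(params[1:4])
    trans = params[4:7]
    return scale * rot.apply(template) + trans

def residuals(params, template, observed):
    """Per-coordinate difference between predicted and observed markers."""
    return (predict_markers(params, template) - observed).ravel()

# Synthetic "observed" frame generated from known parameters plus noise.
rng = np.random.default_rng(0)
template = rng.normal(size=(12, 3))                      # 12 sparse markers
true_params = np.array([1.1, 0.2, -0.1, 0.3, 0.5, -0.2, 1.0])
observed = predict_markers(true_params, template)
observed += rng.normal(scale=1e-3, size=observed.shape)  # measurement noise

init = np.array([1.0, 0, 0, 0, 0, 0, 0])                 # identity initialization
fit = least_squares(residuals, init, args=(template, observed))
print("recovered parameters:", np.round(fit.x, 3))
```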

Author(s): Nima Ghorbani
Department(s): Perceiving Systems
Research Project(s): Beyond Motion Capture
Publication(s): MoSh: Motion and Shape Capture from Sparse Markers
AMASS: Archive of Motion Capture as Surface Shapes
Authors: Nima Ghorbani
Maintainers: Nima Ghorbani
Release Date: 2021-10-09
Repository: https://github.com/nghorbani/moshpp