From gait and dance to martial arts, human movements provide rich, complex, yet coherent spatiotemporal patterns reflecting the characteristics of a group or an individual. We develop computer algorithms to automatically learn such quality-discriminative features from multimodal data. In this talk, I present a trilogy on learning from human movements:
(1) Gait analysis from video data: based on frieze patterns (the seven frieze groups), a video sequence of silhouettes is mapped into a pair of spatiotemporal patterns that are near-periodic along the time axis. A group-theoretic analysis of periodic patterns allows us to determine the dynamic time warping and affine scaling that align two gait sequences from similar viewpoints for human identification.
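The silhouette-to-pattern mapping above can be sketched as follows. This is a minimal illustration, not the talk's actual pipeline: it assumes each frame is a binary silhouette mask and forms the two spatiotemporal patterns by projecting each frame onto its rows and its columns, so a periodic gait yields patterns that are near-periodic along the time axis.

```python
import numpy as np

def silhouette_to_patterns(frames):
    """Map a silhouette video to a pair of spatiotemporal patterns.

    frames: array of shape (T, H, W) holding 0/1 silhouette masks.
    Returns a (H, T) vertical pattern and a (W, T) horizontal pattern;
    each column is one frame's row- or column-projection, so walking
    produces near-periodic texture along the time (horizontal) axis.
    """
    frames = np.asarray(frames)
    vertical = frames.sum(axis=2).T    # sum over columns -> (H, T)
    horizontal = frames.sum(axis=1).T  # sum over rows    -> (W, T)
    return vertical, horizontal
```

Aligning two such patterns for identification would then amount to finding the time warping and affine scaling that best superimpose them, which the group-theoretic analysis constrains.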
(2) Dance analysis and synthesis (mocap, music, ratings from Mechanical Turk workers): we explore the complex relationships between dance movements and, respectively, perceived dance quality and the dancer's gender. As a feasibility study, we construct a computational framework for an analysis-synthesis-feedback loop using a novel multimedia dance-texture representation for joint angular displacement, velocity and acceleration. Furthermore, we integrate crowdsourcing, music and motion-capture data, and machine-learning methods for dance segmentation, analysis and synthesis of new dancers. A quantitative validation of this framework on a motion-capture dataset of 172 dancers, evaluated by more than 400 independent online raters, demonstrates significant correlation between human perception and the algorithmically intended dance quality or gender of the synthesized dancers.
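The dance-texture representation for joint angular displacement, velocity and acceleration can be illustrated with a small sketch. This is an assumption-laden simplification of the representation named above, not the paper's implementation: it takes a (frames x joints) array of joint angles from mocap and stacks displacement from the mean pose with finite-difference velocity and acceleration.

```python
import numpy as np

def dance_texture(angles, fps=120.0):
    """Build a simple dance-texture feature from joint angles.

    angles: (T, J) array of joint angles over T mocap frames.
    Returns a (T-2, 3*J) array whose rows concatenate angular
    displacement (relative to the mean pose), velocity and
    acceleration; boundary frames are dropped as their finite
    differences are one-sided.
    """
    angles = np.asarray(angles, dtype=float)
    disp = angles - angles.mean(axis=0)           # displacement from mean pose
    vel = np.gradient(angles, 1.0 / fps, axis=0)  # angular velocity
    acc = np.gradient(vel, 1.0 / fps, axis=0)     # angular acceleration
    return np.hstack([disp, vel, acc])[1:-1]
```

Features of this kind could then feed segmentation, quality/gender analysis, or the synthesis of new dancers in the analysis-synthesis-feedback loop.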
(3) Tai Chi performance evaluation (mocap + video): I shall also discuss the feasibility of utilizing spatiotemporal synchronization and, ultimately, machine learning to evaluate Tai Chi routines performed by different subjects in our current project, "Tai Chi + Advanced Technology for Smart Health".
Biography: Yanxi Liu received her Ph.D. degree in computer science for group theory applications in robotics assembly planning from the University of Massachusetts (Amherst, MA, USA) under the direction of the late Robin Popplestone, and her postdoctoral training in robotics fine motion planning at LIFIA/IMAG (Grenoble, France). With a US NSF research-education fellowship award, Dr. Liu spent one year at DIMACS (the NSF Center for Discrete Mathematics and Theoretical Computer Science) before joining the Carnegie Mellon University (CMU) faculty, where she spent ten years in the Robotics Institute (RI). She is currently a full professor in the School of Electrical Engineering and Computer Science at Penn State University, where she co-directs the Lab for Perception, Action and Cognition (LPAC) and the Human Motion Lab for Taiji (Tai Chi) Research. During 2013-2014, she took a 15-month leave visiting Stanford University (Palo Alto, CA), Google (Mountain View, CA) and Microsoft Silicon Valley (Sunnyvale, CA). She is currently on sabbatical at ETH Zurich, Switzerland.