ACM Transactions on Graphics (Proc. SIGGRAPH) [conditionally accepted], 2017. Two first authors contributed equally. (article)
Designing and simulating realistic clothing is challenging and, while several methods have addressed the capture of clothing from 3D scans, previous methods have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the naked body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. The model allows us to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes. ClothCap provides a step towards virtual try-on with a technology for capturing, modeling, and analyzing clothing in motion.
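To make the retargeting idea concrete, the sketch below encodes a captured garment as offsets from the nearest vertices of an estimated body surface and re-applies those offsets on a new body with the same mesh topology. This only illustrates the body-relative representation the abstract alludes to; it is not the ClothCap pipeline itself, and the meshes, the nearest-vertex correspondence, and all function names are hypothetical.

```python
# A minimal sketch of garment retargeting via body-relative offsets.
# NOT the ClothCap method; purely illustrative, with hypothetical names.
import numpy as np

def nearest_body_vertex(garment_verts, body_verts):
    """For each garment vertex, return the index of the closest body vertex."""
    # (G, 1, 3) - (1, B, 3) -> (G, B) squared distances; argmin over body vertices.
    d2 = ((garment_verts[:, None, :] - body_verts[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def retarget_garment(garment_verts, source_body_verts, target_body_verts):
    """Transfer a captured garment to a new body of identical topology.

    The garment is encoded as offsets from its nearest source-body vertices,
    and those offsets are re-applied on the corresponding target-body vertices.
    """
    corr = nearest_body_vertex(garment_verts, source_body_verts)
    offsets = garment_verts - source_body_verts[corr]
    return target_body_verts[corr] + offsets

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    body_src = rng.normal(size=(500, 3))      # toy "source" body vertices
    body_tgt = body_src * 1.1                 # toy "target" body (scaled shape)
    garment = body_src[:200] + 0.02           # toy garment floating above the body
    print(retarget_garment(garment, body_src, body_tgt).shape)  # (200, 3)
```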
ACM Transactions on Graphics (Proc. SIGGRAPH) [conditionally accepted], 2017 (article)
Data-driven models of human pose and soft-tissue deformations can produce very realistic results. However, they only model the visible surface of the human body and thus cannot create skin deformation due to interactions with the environment. Physical simulation generalizes to external forces, but its parameters are difficult to control. In this paper we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based external layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft-tissue layer consists of a tetrahedral mesh that is simulated using FEM. The combination of both layers creates coherent and realistic full-body avatars that can be animated and generalize to external forces. The model parameters, namely the segmentation of the body into layers and the soft-tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft-tissue deformations, and the learned parameters faithfully reproduce these registrations. The resulting avatars produce realistic results for held-out sequences and react to external forces. Moreover, since all avatars share the same topology, physical properties can be retargeted from one avatar to another.
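The sketch below illustrates the two-layer structure in a highly simplified form: an inner layer posed kinematically each frame (standing in for the VSMPL-driven layer) and an outer layer whose vertices are integrated under a damped spring force toward their body-relative rest configuration, a crude stand-in for the paper's tetrahedral FEM. The stiffness, damping, and all other names are illustrative assumptions, not the learned model parameters.

```python
# A minimal sketch of a two-layer avatar: kinematic inner layer, simulated outer
# layer. A damped spring toward a body-relative rest offset stands in for the
# tetrahedral FEM used in the paper; all symbols here are hypothetical.
import numpy as np

class LayeredAvatar:
    def __init__(self, rest_inner, rest_outer, stiffness=50.0, damping=5.0):
        self.rest_offsets = rest_outer - rest_inner   # soft-tissue rest configuration
        self.stiffness = stiffness
        self.damping = damping
        self.x = rest_outer.copy()                    # current outer-layer positions
        self.v = np.zeros_like(rest_outer)            # outer-layer velocities

    def step(self, inner, external_force=0.0, dt=1.0 / 60.0):
        """Advance the outer layer one frame given the posed inner layer."""
        target = inner + self.rest_offsets            # where the tissue "wants" to sit
        force = (self.stiffness * (target - self.x)
                 - self.damping * self.v + external_force)
        self.v += dt * force                          # unit mass, semi-implicit Euler
        self.x += dt * self.v
        return self.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    inner0 = rng.normal(size=(100, 3))
    avatar = LayeredAvatar(inner0, inner0 + 0.01)
    for t in range(120):                              # inner layer translates; tissue lags, then settles
        inner = inner0 + np.array([0.005 * t, 0.0, 0.0])
        x = avatar.step(inner, external_force=np.array([0.0, -0.1, 0.0]))
    print(x.mean(axis=0))
```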
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, Spotlight (inproceedings)
We address the problem of estimating human body shape from 3D scans over time. Reliable estimation of 3D body shape is necessary for many applications including virtual try-on, health monitoring, and avatar creation for virtual reality. Scanning bodies in minimal clothing, however, presents a practical barrier to these applications. We address this problem by estimating body shape under clothing from a sequence of 3D scans. Previous methods that have exploited statistical models of body shape produce overly smooth shapes lacking personalized details. In this paper we contribute a new approach to recover not only an approximate shape of the person, but also their detailed shape. Our approach allows the estimated shape to deviate from a parametric model to fit the 3D scans. We demonstrate the method using high-quality 4D data as well as sequences of visual hulls extracted from multi-view images. We also make available a new high-quality 4D dataset that enables quantitative evaluation. Our method outperforms the previous state of the art, both qualitatively and quantitatively.
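The toy sketch below shows the general "fit a parametric model, then allow regularized free-form deviation" pattern on a synthetic linear shape space. It is not the paper's energy, which fuses many scans of a moving, clothed person and constrains the body to lie inside the clothing; every symbol below is a hypothetical placeholder.

```python
# A minimal sketch: (1) fit low-dimensional shape coefficients to a target point
# cloud, then (2) allow regularized per-vertex offsets to capture detail the
# shape space cannot express. Toy linear "body model"; all names hypothetical.
import numpy as np

rng = np.random.default_rng(0)
V, K = 300, 5
mean_shape = rng.normal(size=(V, 3))
shape_basis = rng.normal(size=(K, V, 3)) * 0.1
target = mean_shape + shape_basis[0] * 0.8 + rng.normal(size=(V, 3)) * 0.01

def model(beta):
    """Body surface predicted by the toy linear shape space."""
    return mean_shape + np.tensordot(beta, shape_basis, axes=1)

# Stage 1: fit shape coefficients beta (least squares in this linear toy model).
A = shape_basis.reshape(K, -1).T                     # (3V, K) design matrix
b = (target - mean_shape).ravel()
beta = np.linalg.lstsq(A, b, rcond=None)[0]

# Stage 2: per-vertex offsets D, shrunk toward zero so the result stays body-like.
lam = 0.5                                            # regularization weight
residual = target - model(beta)
D = residual / (1.0 + lam)                           # argmin_D ||r - D||^2 + lam ||D||^2
detailed = model(beta) + D

print(np.linalg.norm(target - model(beta)), np.linalg.norm(target - detailed))
```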
Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.