We humans live in a real world governed by the laws of physics: we apply and experience forces, such as gravity, in our daily interactions with the world. In this project we allow virtual humans to interact with a virtual world subject to the same laws. How would one's body shape deform in a collision with an object? What would our walking pattern look like if we weighed a few kilos more?
The goal in [ ] was, starting from surface observations of a human performing highly dynamic motions, to create a virtual avatar with physical properties that not only reproduces the observed motions, as well as new unseen ones, with high visual fidelity, but also generalizes to external perturbations such as increased gravity or collisions with objects. The problem is hard: which metric best matches the simulations to the real data? How can we overcome the lack of analytical gradients in the optimization problem? Which assumptions reduce the complexity of the problem without compromising the quality of the results? With a new layered representation of the human body, a hypothesis of spatial continuity of the physical parameters over the body, and a new optimization scheme, we solved this complex problem.
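To illustrate the kind of gradient-free fitting alluded to above, here is a minimal toy sketch (not the paper's method): a single "physical parameter" (the stiffness of a damped spring) is estimated by matching a simulated trajectory to observed data, using simple stochastic hill climbing since no analytical gradient of the simulator is available. All names and the model are hypothetical stand-ins for illustration.

```python
import random

def simulate(stiffness, steps=50, dt=0.01):
    """Toy damped spring: positions of a unit mass released from x = 1."""
    x, v = 1.0, 0.0
    traj = []
    for _ in range(steps):
        a = -stiffness * x - 0.5 * v   # spring force plus damping
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

def loss(stiffness, observed):
    """Sum of squared per-frame position errors (the matching metric)."""
    sim = simulate(stiffness)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

def fit_without_gradients(observed, k0=5.0, sigma=2.0, iters=200, seed=0):
    """Derivative-free search: perturb the parameter, keep improvements."""
    rng = random.Random(seed)
    best_k, best_l = k0, loss(k0, observed)
    for _ in range(iters):
        cand = best_k + rng.gauss(0.0, sigma)
        if cand <= 0:
            continue                   # stiffness must stay positive
        l = loss(cand, observed)
        if l < best_l:
            best_k, best_l = cand, l
            sigma *= 0.95              # shrink the search radius as we converge
    return best_k, best_l

# Synthetic "capture" data generated with the true stiffness 12.0.
observed = simulate(12.0)
k_hat, final_loss = fit_without_gradients(observed)
```

A real system would fit many coupled parameters over the whole body and would use a far more capable derivative-free optimizer, but the structure — simulate, compare to data under a chosen metric, perturb, keep improvements — is the same.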
The research question in [ ] was how to retarget an input motion onto a new body with different physical properties, e.g. taller, heavier, or thinner, so that the new morphology is taken into account. To obtain visually plausible simulations we propose a simplified representation of the human body and animate it using physically based retargeting. We develop a novel spacetime optimization approach that learns physical controllers and robustly adapts them to new bodies and constraints. The method automatically adapts the motion of a subject to the target body shape, respecting its physical properties and producing a different movement. This makes it easy to create a varied set of motions from a single mocap sequence simply by varying the characters.
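The core physical intuition behind retargeting to a new morphology can be sketched with a deliberately simple point-mass model (a toy illustration, not the spacetime optimizer described above): given a reference height trajectory for a jump, the forces needed to reproduce it scale with the body's mass, so a heavier character cannot simply reuse the original controls.

```python
def retarget_forces(ref, mass, g=9.81, dt=1.0 / 30.0):
    """Per-frame vertical forces a point mass needs to follow a reference
    height trajectory: f = m * (a + g), with a from finite differences."""
    forces = []
    for t in range(1, len(ref) - 1):
        a = (ref[t + 1] - 2.0 * ref[t] + ref[t - 1]) / dt ** 2
        forces.append(mass * (a + g))  # heavier body -> larger force
    return forces

# A short reference jump trajectory (heights in metres, hypothetical).
ref = [0.0, 0.1, 0.35, 0.5, 0.55, 0.5, 0.35, 0.1, 0.0]
forces_light = retarget_forces(ref, mass=60.0)
forces_heavy = retarget_forces(ref, mass=90.0)
```

Here every force for the 90 kg body is exactly 1.5 times the corresponding force for the 60 kg body; an actual spacetime optimization instead searches over the whole trajectory and control sequence at once, under joint limits and contact constraints, so the retargeted motion itself changes rather than just the forces.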