From just a few photos, a new technique creates realistic avatars that look and move like real animals
Filmmakers and computer game developers will soon have a new way of animating animals. A team led by researchers at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, has developed a technique that creates lifelike 3D models of almost any quadruped from photographs alone. These avatars can be animated to realistically imitate the movements of real animals. But this simple method of bringing animals to life on the computer is not only of interest to the entertainment industry. Many people have lost a beloved family pet; this technology can now bring them back to "life" as virtual 3D avatars. It could also benefit biologists working in species protection and help make people, children in particular, aware of the importance of biodiversity.
On March 1, Geiger took up the professorship for "Learning-based Computer Vision" at the renowned, more than 500-year-old University of Tübingen. Geiger continues to lead a research group at the MPI-IS in Tübingen.
The German Pattern Recognition Award is presented once a year to a young researcher, aged 35 or under, working in computer vision, pattern recognition, or machine learning. It is sponsored by Daimler AG and endowed with €5,000.
NVIDIA CEO Jensen Huang presented the NVAIL AI Labs with the very first Tesla V100 GPUs, based on NVIDIA's Volta architecture. MPI-IS is among the top centers working at the leading edge of deep learning in computer vision. As such, it is recognized by NVIDIA as one of its NVAIL labs, giving the MPI access to NVIDIA's best and latest technology. Huang unveiled the new GPUs at CVPR, saying that he wants to put them in the hands of researchers first.
Researchers at the Max Planck Institute for Intelligent Systems (MPI-IS) have developed technology to digitally capture clothing on moving people, turn it into a 3D digital form, and dress virtual avatars with it. This new technology makes virtual clothing try-on practical.
A breakthrough in our shared understanding, perception, and description of human body shape brings new alternatives to 3D body scanning
ANAHEIM, CALIFORNIA -- JULY 26, 2016 -- Researchers from the Max Planck Institute for Intelligent Systems and the University of Texas at Dallas revealed new crowdshaping technology at SIGGRAPH 2016 that creates accurate 3D body models from 2D photos using crowdsourced linguistic descriptions of body shape. The Body Talk system takes a single photo and produces a 3D body shape that looks like the person and is accurate enough to size clothing. It does this with the help of 15 volunteers, who rate the body shape in the photo using 30 words or fewer. The researchers believe this technology has applications in online shopping, gaming, virtual reality, and healthcare.
The FAUST dataset wins the "Dataset Award" at the Eurographics Symposium on Geometry Processing 2016. The award encourages and recognizes the distribution of high-quality datasets on which geometry processing algorithms are tested.