My research concerns learning models of perception and production of non-verbal communicative behavior. Such models can be used to create richer human-robot and human-avatar interaction, for medical diagnosis systems, and for contextual synthesis of different kinds of human behaviors, e.g., guiding synthesis of hand motion from body motion.
Peter Vincent Gehler
How can autonomous perception discover high-dimensional patterns in recorded data from our environment? I am approaching this question by working on structured computer vision tasks, such as Human Pose Estimation. I hope that insights from this area will improve our data analysis systems, so that they can assist us in better understanding our environment.
I am currently doing my Master's thesis project within the Robot Perception Group: Deep Reinforcement Learning-based On-board Visual-inertial Autonomous Landing of Miniature UAVs. Previously, I worked as a student assistant within the AirCap (Aerial outdoor motion capture) project. My duties included integrating sensors into the existing distributed system, maintaining the software and hardware repositories, and assembling the drones. During this time, I also completed a 7-week essay rotation: "An Overview on Neuromorphic Event-Based Visual Perception for Autonomous Robots".
Are there people out there? How do they move? What is their body shape? What are they wearing? For machines to interact with humans and the physical world, we need to train them to answer these questions. My research is focused on combining ideas from computer vision and machine learning to enable machines to perceive humans. During my Ph.D. I worked mostly on geometric modelling and articulated tracking from images.
My research focus is body representation. I am interested in basic theoretical frameworks and mechanisms, but also in disturbed body representation. To this end, I conduct behavioral studies in patients with eating disorders or obesity, but also in healthy people.
My research aims at understanding the world through the capture and analysis of heterogeneous data (MRI, CT, point clouds, images, ...) in order to create applied digital instruments that allow us, for example, to generate novel views of a scene, to infer human shape from a clothed scan, or to predict the amount of adipose tissue of a person from surface observations. To address this challenge, I adopt approaches from Computer Vision, Signal Processing, Computer Graphics, and Statistical Models. My research is often multi-disciplinary, as I need to combine knowledge from these different domains.
I work in the field of Virtual Humans and Affective Computing. I am interested in what makes us perceive an interacting agent as 'human'. Specifically, I am interested in affectivity and appearance. What do the shape, pose, movement, behavior, and style of a person or a virtual human tell us about them? How does interaction with each other affect our own actions? Furthermore, I study how individual factors (such as culture) influence this perception. Outside of research, I enjoy web technologies! I support the creation of websites for scientific data acquisition and dissemination related to 3D body shape, as well as web development for scientific experiments and perceptual studies.
One of the requirements for enabling machines to perceive and interact in a human environment is to accurately perceive humans and their activities. My research concerns different aspects of movement perception and modeling. Since completing my PhD, I have been focusing on human hand modeling, detection, and pose estimation.
My research interests are in motion estimation and scene understanding. In particular, I'm interested in exploring and modeling how the semantics and the motion of a scene are related.
My goal is to apply statistical human body models in various research domains such as psychology, cognitive science, and medicine. A primary goal is to make our body software accessible to more people. For this purpose, I interact with various research groups who need body data and software for their experiments. I manage these relationships and support the transfer of body shapes as needed.
The goal of my research is to understand the visual factors that contribute to one's representation of the physical body. I am using the novel technology of biometric-based avatars provided by the MPI for Intelligent Systems to study mechanisms related to body representation in healthy people and its distortions in individuals with eating and weight disorders.
I am interested in understanding humans in videos. In particular, I am exploring the use of synthetic images for learning human-related representations.
My current research focuses on representing the appearance of people in images and video sequences. I am particularly interested in 2D and 3D models that capture the variability in shape of articulated and deformable objects like the human body. My previous work focused on color image reproduction, multispectral color imaging, and the readability of colored text.