Department Talks

Depth, You, and the World

Talk
  • 10 September 2013 • 11:15:00
  • Jamie Shotton
  • Max Planck Haus Lecture Hall

Consumer-level depth cameras such as the Kinect have changed the landscape of 3D computer vision. In this talk we will discuss two approaches that both learn to directly infer correspondences between observed depth image pixels and 3D model points. These correspondences can then be used to drive an optimization of a generative model to explain the data. The first approach, the "Vitruvian Manifold", aims to fit an articulated 3D human model to a depth camera image, and extends our original Body Part Recognition algorithm used in Kinect. It applies a per-pixel regression forest to infer direct correspondences between image pixels and points on a human mesh model. This allows an efficient "one-shot" continuous optimization of the model parameters to recover the human pose. The second approach, "Scene Coordinate Regression", addresses the problem of camera pose relocalization. It uses a similar regression forest, but now aims to predict correspondences between observed image pixels and 3D world coordinates in an arbitrary 3D scene. These correspondences are again used to drive an efficient optimization of the camera pose to a highly accurate result from a single input frame.
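
As a rough sketch of the shared recipe behind both approaches (per-pixel regression of correspondences followed by a model-fitting optimization), the toy example below uses a random forest to predict a 3D point per pixel and then a closed-form rigid fit. The features, forest settings, and rigid-only optimization are illustrative stand-ins, not the actual Kinect pipeline:

```python
# Sketch: "predict correspondences, then optimize a model".
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy training data: per-pixel depth features -> 3D model/scene coordinates.
X_train = rng.normal(size=(5000, 16))        # stand-in depth-difference features
Y_train = rng.normal(size=(5000, 3))         # corresponding 3D model points
forest = RandomForestRegressor(n_estimators=20, max_depth=12).fit(X_train, Y_train)

# Test time: predict a 3D correspondence for every pixel of a new frame.
X_test = rng.normal(size=(200, 16))
pred_pts = forest.predict(X_test)            # (N, 3) predicted model coordinates

# Observed camera-space points for the same pixels (synthetic ground truth:
# the predictions moved by a known rotation and translation).
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
obs_pts = pred_pts @ R_true.T + np.array([0.1, 0.2, 0.3])

# "One-shot" optimization: a closed-form rigid fit (Kabsch) stands in for the
# articulated-pose / camera-pose energy minimization discussed in the talk.
mu_p, mu_o = pred_pts.mean(0), obs_pts.mean(0)
U, _, Vt = np.linalg.svd((pred_pts - mu_p).T @ (obs_pts - mu_o))
R = Vt.T @ U.T
if np.linalg.det(R) < 0:                     # enforce a proper rotation
    Vt[-1] *= -1
    R = Vt.T @ U.T
t = mu_o - R @ mu_p
print("recovered rotation:\n", R, "\nrecovered translation:", t)
```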


Efficient Algorithms for Semantic Scene Parsing

Talk
  • 09 September 2013 • 12:00:00
  • Raquel Urtasun
  • MPI Lecture Hall

Developing autonomous systems that are able to assist humans in everyday tasks is one of the grand challenges in modern computer science. Notable examples are personal robotics for the elderly and people with disabilities, as well as autonomous driving systems which can help decrease fatalities caused by traffic accidents. In order to perform tasks such as navigation, recognition and manipulation of objects, these systems should be able to efficiently extract 3D knowledge of their environment. In this talk, I'll show how Markov random fields provide a great mathematical formalism to extract this knowledge. In particular, I'll focus on a few examples, namely 3D reconstruction, 3D layout estimation, 2D holistic parsing and object detection, and show representations and inference strategies that allow us to achieve state-of-the-art performance as well as several orders of magnitude speed-ups.
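
For readers unfamiliar with the formalism, a minimal pairwise MRF for grid labeling looks roughly as follows; the random unaries, Potts smoothness term, and plain ICM inference are toy stand-ins for the representations and inference strategies of the talk:

```python
# Generic pairwise MRF for labeling a small grid (toy illustration).
import numpy as np

rng = np.random.default_rng(1)
H, W, L = 8, 8, 3                       # grid size and number of labels
unary = rng.random((H, W, L))           # per-pixel data costs
lam = 0.5                               # weight of the Potts smoothness term

labels = unary.argmin(axis=2)           # initialize with the best unary label

def energy_at(y, x, l):
    """Local energy of assigning label l to pixel (y, x)."""
    e = unary[y, x, l]
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < H and 0 <= nx < W:
            e += lam * (labels[ny, nx] != l)   # Potts penalty for disagreeing neighbors
    return e

# Iterated conditional modes: greedily re-label each pixel until convergence.
for _ in range(10):
    changed = False
    for y in range(H):
        for x in range(W):
            best = min(range(L), key=lambda l: energy_at(y, x, l))
            if best != labels[y, x]:
                labels[y, x] = best
                changed = True
    if not changed:
        break
print(labels)
```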


  • Sanja Fidler
  • MRZ

Object detection is one of the main challenges of computer vision. In the standard setting, we are given an image and the goal is to place bounding boxes around the objects and recognize their classes. In robotics, estimating additional information such as accurate viewpoint or detailed segmentation is important for planning and interaction. In this talk, I'll address detection in three scenarios: purely 2D, 3D from 2D, and 3D from 3D, and show how different types of information can be used to significantly boost the current state of the art in detection.
 


  • Karl Abson
  • MRZ Seminar Room

Motion capture and data-driven technologies have come very far over the past few years. In terms of human capture, the high volume of research in this area has led to very impressive results. Human motion can now be captured in real time, which, in the creative sectors, enables blockbuster films such as Avatar. Similarly, in the medical sector these techniques can be used to diagnose, analyse performance and avoid invasive procedures in tasks such as deformity correction. There is, however, very little research on motion capture of animals. While the technology for capturing animal motion exists, the method used is inefficient, unreliable and limited, as much manual work is required to turn blocked-out motions into acceptable results. How we move forward with a suitable procedure, however, is the major question. Do we extend the life of marker-based capture, or do we move towards the holy grail of markerless tracking? In this talk we look at a possible solution, suitable for both, through physically based simulation techniques. It is our belief that such techniques could help bridge the uncanny valley as far as marker-based capture is concerned, but also prove useful for markerless tracking.


Discriminative Non-blind Deblurring

Talk
  • 03 June 2013 • 13:00:00
  • Uwe Schmidt
  • MRZ seminar

Non-blind deblurring is an integral component of blind approaches for removing image blur due to camera shake. Even though learning-based deblurring methods exist, they have been limited to the generative case and are computationally expensive. To date, manually defined models are thus most widely used, though they limit the attained restoration quality. We address this gap by proposing a discriminative approach for non-blind deblurring. One key challenge is that the blur kernel in use at test time is not known in advance. To address this, we analyze existing approaches that use half-quadratic regularization. From this analysis, we derive a discriminative model cascade for image deblurring. Our cascade model consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields. We train our model by loss minimization and use synthetically generated blur kernels to generate training data. Our experiments show that the proposed approach is efficient and yields state-of-the-art restoration quality on images corrupted with synthetic and real blur.
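
As a rough illustration of the half-quadratic idea these approaches build on, the sketch below performs a single L2-regularized non-blind deconvolution step in the Fourier domain. It is a simplified stand-in for one stage of the learned Gaussian CRF cascade, with an assumed circular boundary and a hand-picked regularization weight:

```python
# One quadratic (L2-regularized) non-blind deconvolution step, solved in the
# Fourier domain -- a toy stand-in for a stage of the half-quadratic cascade.
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a blur kernel, center it at the origin, and return its DFT."""
    otf = np.zeros(shape)
    kh, kw = psf.shape
    otf[:kh, :kw] = psf
    otf = np.roll(otf, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(otf)

def deconv_l2(blurred, kernel, lam=1e-2):
    """x = argmin ||k*x - y||^2 + lam * ||grad x||^2 (circular boundary)."""
    K = psf2otf(kernel, blurred.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), blurred.shape)      # horizontal gradient
    Dy = psf2otf(np.array([[1.0], [-1.0]]), blurred.shape)    # vertical gradient
    num = np.conj(K) * np.fft.fft2(blurred)
    den = np.abs(K) ** 2 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))

# Toy usage: blur a random "image" with a box kernel, then deblur it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * psf2otf(kernel, img.shape)))
restored = deconv_l2(blurred, kernel)
print("RMSE blurred :", np.sqrt(np.mean((blurred - img) ** 2)))
print("RMSE restored:", np.sqrt(np.mean((restored - img) ** 2)))
```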


Interactive Variational Shape Modeling

Talk
  • 27 May 2013 • 11:15:00
  • Olga Sorkine-Hornung
  • Max Planck Haus Lecture Hall

Irregular triangle meshes are a powerful digital shape representation: they are flexible and can represent virtually any complex shape; they are efficiently rendered by graphics hardware; they are the standard output of 3D acquisition and routinely used as input to simulation software. Yet irregular meshes are difficult to model and edit because they lack a higher-level control mechanism. In this talk, I will survey a series of research results on surface modeling with meshes and show how high-quality shapes can be manipulated in a fast and intuitive manner. I will outline the current challenges in intelligent and more user-friendly modeling metaphors and will attempt to suggest possible directions for future work in this area.


3D vision in a changing world

Talk
  • 17 May 2013 • 09:15:00
  • Andrew Fitzgibbon
  • MPH Lecture Hall

3D reconstruction from images has been a tremendous success story of computer vision, with city-scale reconstruction now a reality. However, these successes apply almost exclusively in a static world, where the only motion is that of the camera. Even with the advent of real-time depth cameras, full 3D modelling of dynamic scenes lags behind the rigid-scene case, and for many objects of interest (e.g. animals moving in natural environments), depth sensing remains challenging. In this talk, I will discuss a range of recent work on modelling nonrigid real-world 3D shape from 2D images, for example building generic animal models from internet photo collections. While the state of the art depends heavily on dense point tracks from textured surfaces, it is rare to find suitably textured surfaces: most animals are limited in texture (think of dogs, cats, cows, horses, …). I will show how this assumption can be relaxed by incorporating the strong constraints given by the object's silhouette.
 


  • Gerard Pons-Moll
  • MPH Lecture Hall

Significant progress has been made over the last few years in estimating people's shape and motion from video, yet the problem remains unsolved. This is especially true in uncontrolled environments, such as people in the streets or the office, where background clutter and occlusions make the problem even more challenging.
The goal of our research is to develop computational methods that enable human pose estimation from video and inertial sensors in indoor and outdoor environments. Specifically, I will focus on one of our past projects, in which we introduce a hybrid human motion capture system that combines video input with sparse inertial sensor input. Employing a particle-based optimization scheme, our idea is to use orientation cues derived from the inertial input to sample particles from the manifold of valid poses. Additionally, we introduce a novel sensor noise model, based on the von Mises-Fisher distribution, to account for uncertainties. In this way, orientation constraints are naturally fulfilled and the number of particles needed can be kept very small. More generally, our method can be used to sample poses that fulfill arbitrary orientation or positional kinematic constraints. In the experiments, we show that our system can track even highly dynamic motions in an outdoor environment with changing illumination, background clutter, and shadows.
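
A toy sketch of how orientation cues can constrain the particle set: candidate orientations are weighted by an unnormalized von Mises-Fisher likelihood around the sensor reading and then resampled. The actual tracker, pose parameterization, and sensor calibration are of course more involved:

```python
# Toy illustration: weighting pose particles by a von Mises-Fisher
# orientation likelihood (stand-in for the sensor model in the talk).
import numpy as np

rng = np.random.default_rng(0)

def vmf_logpdf_unnormalized(x, mu, kappa):
    """Unnormalized log-density of the von Mises-Fisher distribution on the sphere."""
    return kappa * np.dot(x, mu)

# Measured limb orientation from an inertial sensor (unit vector) and its
# concentration parameter (higher kappa = more trusted sensor).
mu_sensor = np.array([0.0, 0.0, 1.0])
kappa = 20.0

# Candidate particles: unit vectors for the same limb's orientation, e.g.
# obtained by perturbing the previous pose estimate (random here).
particles = rng.normal(size=(500, 3))
particles /= np.linalg.norm(particles, axis=1, keepdims=True)

# Weight each particle by how well it agrees with the sensor orientation.
log_w = np.array([vmf_logpdf_unnormalized(p, mu_sensor, kappa) for p in particles])
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Resample: particles consistent with the inertial cue survive, so far fewer
# particles are needed than with blind sampling.
idx = rng.choice(len(particles), size=len(particles), p=w)
mean_dir = particles[idx].mean(axis=0)
print("mean resampled orientation:", mean_dir / np.linalg.norm(mean_dir))
```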


What Makes Big Visual Data Hard?

Talk
  • 29 April 2013 • 13:15:00
  • Alexei Efros
  • MPH Lecture Hall

There are an estimated 3.5 trillion photographs in the world, of which 10% have been taken in the past 12 months. Facebook alone reports 6 billion photo uploads per month. Every minute, 72 hours of video are uploaded to YouTube. Cisco estimates that in the next few years, visual data (photos and video) will account for over 85% of total internet traffic. Yet we currently lack effective computational methods for making sense of this mass of visual data. Unlike easily indexed content, such as text, visual content is not routinely searched or mined; it's not even hyperlinked. Visual data is the Internet's "digital dark matter" [Perona, 2010] -- it's just sitting there!

In this talk, I will first discuss some of the unique challenges that make Big Visual Data difficult compared to other types of content. In particular, I will argue that the central problem is the lack of a good measure of similarity for visual data. I will then present some of our recent work that aims to address this challenge in the context of visual matching, image retrieval and visual data mining. As an application of the latter, we used Google Street View data for an entire city in an attempt to answer that age-old question which has been vexing poets (and poets-turned-geeks): "What makes Paris look like Paris?"


  • Cristobal Curio

Studying the interface between artificial and biological vision is an area of research that has long been promoted. It seems promising that cognitive science can provide new ideas for interfacing computer vision and human perception, yet no established design principles exist. In the first part of my talk I am going to introduce the novel concept of 'object detectability'. Object detectability refers to a measure of how likely a human observer is to be visually aware of the location and presence of specific object types in a complex, dynamic, urban scene.

We have shown a proof of concept of how to maximize human observers' scene awareness in a dynamic driving context. Nonlinear functions are learnt from experimental samples of a combined feature vector of human gaze and visual features, mapping to object detectabilities. We obtain object detectabilities through a detection experiment simulating a proxy task of distracted real-world driving. In order to specifically enhance overall pedestrian detectability in a dynamic scene, the sum of individual detectability predictors defines a complex cost function that we seek to optimize with respect to human gaze. Results show significantly increased human scene awareness in hazardous test situations when comparing optimized gaze with random fixation. Thus, our approach can potentially help a driver save reaction time and resolve a risky maneuver. In our framework, the remarkable ability of the human visual system to detect specific objects in the periphery has been implicitly characterized by our perceptual detectability task and has thus been taken into account.
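
A rough sketch of this optimization, with hypothetical features, a generic regressor, and a brute-force search over candidate gaze points standing in for the learned detectability predictors and the optimizer used in the study:

```python
# Toy sketch: summed per-pedestrian detectability as a cost function over
# gaze position, maximized by brute force on a candidate grid.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_features(gaze, ped_pos):
    """Illustrative combined feature vector: gaze, pedestrian position, eccentricity."""
    return np.concatenate([gaze, ped_pos, [np.linalg.norm(gaze - ped_pos)]])

# Simulated experimental samples: (gaze, pedestrian position) -> detectability in [0, 1].
gazes = rng.uniform(0, 1, size=(2000, 2))
peds = rng.uniform(0, 1, size=(2000, 2))
X = np.array([make_features(g, p) for g, p in zip(gazes, peds)])
y = np.exp(-4.0 * np.linalg.norm(gazes - peds, axis=1))   # toy "ground truth"
detectability = RandomForestRegressor(n_estimators=50).fit(X, y)

# A frame with several pedestrians: choose the gaze point that maximizes the
# summed predicted detectability over all of them.
frame_peds = rng.uniform(0, 1, size=(3, 2))
grid = np.linspace(0.0, 1.0, 21)
candidates = np.stack(np.meshgrid(grid, grid), -1).reshape(-1, 2)
scores = [
    sum(detectability.predict(make_features(g, p)[None])[0] for p in frame_peds)
    for g in candidates
]
best_gaze = candidates[int(np.argmax(scores))]
print("recommended gaze point:", best_gaze)
```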

The framework may provide a foundation for future work to determine what kind of information a computer vision system should process reliably, e.g. certain pose or motion features, in order to optimally alert a driver in time-critical situations. Dynamic image data was taken from the Caltech Pedestrian database. I will conclude with a brief overview of recent work, including a new circular-output random regression forest for continuous object viewpoint estimation and a novel learning-based, monocular odometry approach based on robust LVMs and sensorimotor learning, offering stable 3D information integration. Last but not least, I will present results of a perception experiment quantifying emotion in estimated facial movement synergy components, which can be exploited to control the emotional content of 3D avatars in a perceptually meaningful way.

This work was done in particular with David Engel (now a Post-Doc at M.I.T.), Christian Herdtweck (a PhD student at MPI Biol. Cybernetics), and in collaboration with Prof. Martin A. Giese and Dr. Enrico Chiovetto, Center for Integrated Neuroscience, Tübingen.