Department Talks

Mind Games

IS Colloquium
  • 21 December 2018 • 11:00 - 12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, where agents need to imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and issues that concern inference, such as recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially observable Markov decision process, and will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and antisocial personality disorders.
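
A minimal sketch of the kind of recursive opponent modeling such games involve, here level-k reasoning in a one-shot trust game. All quantities (the transfer multiplier, action grids, the guilt-shaped trustee utility) are illustrative assumptions, not the model described in the talk.

```python
# Level-k recursive reasoning in a one-shot trust game (illustrative sketch).
import numpy as np

MULTIPLIER = 3.0                      # invested amount is tripled in transit
ACTIONS = np.linspace(0.0, 1.0, 11)   # candidate fractions to send / return

def trustee_value(returned, received, guilt=0.4):
    """Trustee utility: keep money, but pay a 'guilt' cost for returning little."""
    kept = received * (1.0 - returned)
    return kept - guilt * received * (0.5 - returned) ** 2

def best_response_trustee(received, guilt=0.4):
    """Trustee picks the return fraction maximizing her utility."""
    return ACTIONS[np.argmax([trustee_value(r, received, guilt) for r in ACTIONS])]

def investor_value(sent, trustee_model):
    """Investor utility given a model of the trustee's policy."""
    received = MULTIPLIER * sent
    returned = trustee_model(received)
    return (1.0 - sent) + returned * received

# A level-1 investor best-responds to a modeled level-0 trustee.
level0_trustee = lambda received: best_response_trustee(received, guilt=0.4)
best_send = ACTIONS[np.argmax([investor_value(s, level0_trustee) for s in ACTIONS])]
print(f"level-1 investor sends fraction {best_send:.1f}")
```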


  • Yao Feng
  • PS Aquarium

In this talk, I will present my understanding of 3D face reconstruction, modelling and applications from a deep learning perspective. In the first part of the talk, I will discuss the relationship between representations (point clouds, meshes, etc.) and network layers (CNNs, GCNs, etc.) in the face reconstruction task, then present my ECCV work PRN, which proposed a new representation that helps achieve state-of-the-art performance on face reconstruction and dense alignment tasks. I will also introduce my open-source project face3d, which provides examples for generating different 3D face representations. In the second part of the talk, I will discuss publications on integrating 3D techniques into deep networks, then introduce my upcoming work that implements this. In the third part, I will present how related tasks can promote each other in deep learning, including face recognition for the face reconstruction task and face reconstruction for the face anti-spoofing task. Finally, building on these three parts, I will present my plans for 3D face modelling and applications.
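
To make the representation question concrete, here is a rough sketch of the UV position-map idea behind PRN: each mesh vertex's 3D coordinates are recorded at its (u, v) texture location, so a standard image-to-image CNN can regress full 3D geometry. Array shapes, the nearest-neighbour splatting, and all names are simplifications for illustration.

```python
import numpy as np

def make_position_map(vertices, uv_coords, size=256):
    """vertices: (N, 3) 3D points; uv_coords: (N, 2) in [0, 1]."""
    pos_map = np.zeros((size, size, 3), dtype=np.float32)
    cols = np.clip((uv_coords[:, 0] * (size - 1)).astype(int), 0, size - 1)
    rows = np.clip((uv_coords[:, 1] * (size - 1)).astype(int), 0, size - 1)
    pos_map[rows, cols] = vertices   # nearest-neighbour splat; real code rasterizes
    return pos_map

def read_vertices(pos_map, uv_coords):
    """Recover per-vertex 3D positions by sampling the map at each (u, v)."""
    size = pos_map.shape[0]
    cols = np.clip((uv_coords[:, 0] * (size - 1)).astype(int), 0, size - 1)
    rows = np.clip((uv_coords[:, 1] * (size - 1)).astype(int), 0, size - 1)
    return pos_map[rows, cols]
```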

Organizers: Timo Bolkart


Generating Faces & Heads: Texture, Shape and Beyond.

Talk
  • 17 December 2018 • 11:00 - 12:00
  • Stefanos Zafeiriou
  • PS Aquarium

In the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of large amounts of visual data, it was shown that it is possible to produce excellent results in very challenging tasks such as visual object recognition, detection and tracking. Nevertheless, in certain tasks such as fine-grained object recognition (e.g., face recognition) it is very difficult to collect the amount of data that is needed. In this talk, I will show how, using DCNNs, we can generate highly realistic faces and heads and use them for training algorithms such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a trained, very powerful face recognition network can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. Finally, I will demonstrate how to create very lightweight networks for representing 3D face texture and shape structure by capitalising on intrinsic mesh convolutions.
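
For intuition, a generic graph-convolution layer over mesh vertices can stand in for the intrinsic mesh convolutions mentioned above; the exact operator used in the talk (e.g. spiral or spectral convolutions) may differ, and the weight shapes here are illustrative.

```python
import numpy as np

def mesh_conv(features, adjacency, weight_self, weight_neigh):
    """features: (V, F_in); adjacency: (V, V) binary; weights: (F_in, F_out)."""
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neigh_mean = (adjacency @ features) / degree   # average neighbour features
    return features @ weight_self + neigh_mean @ weight_neigh
```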

Organizers: Dimitrios Tzionas


  • Prof. Dr. Björn Ommer
  • PS Aquarium

Understanding objects and their behavior from images and videos is a difficult inverse problem. It requires learning a metric in image space that reflects object relations in the real world. This metric learning problem calls for large volumes of training data. While images and videos are easily available, labels are not, motivating self-supervised metric and representation learning. I will present a widely applicable strategy based on deep reinforcement learning to improve the surrogate tasks underlying self-supervision. Thereafter, the talk will cover the learning of disentangled representations that explicitly separate different object characteristics. Our approach is based on an analysis-by-synthesis paradigm and can generate novel object instances with flexible changes to individual characteristics such as their appearance and pose. It naturally addresses diverse applications in human and animal behavior analysis, a topic on which we collaborate intensively with neuroscientists. Time permitting, I will discuss the disentangling of representations from a wider perspective, including novel strategies for image stylization and for regularizing the latent space of generator networks.
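
A common building block for such self-supervised metric learning is a triplet loss, where the anchor and positive come from a surrogate task (e.g. temporal proximity in video) and negatives from elsewhere. This is a minimal generic sketch, not the group's specific formulation; the margin and normalization are illustrative choices.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Embeddings are L2-normalized so distances reflect the learned metric."""
    anchor, positive, negative = (F.normalize(x, dim=1)
                                  for x in (anchor, positive, negative))
    d_pos = (anchor - positive).pow(2).sum(dim=1)   # pull positives closer
    d_neg = (anchor - negative).pow(2).sum(dim=1)   # push negatives apart
    return F.relu(d_pos - d_neg + margin).mean()
```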

Organizers: Joel Janai


  • Yanxi Liu
  • Aquarium (N3.022)

Human pose stability analysis is key to understanding locomotion and the control of body equilibrium, with numerous applications in kinesiology, medicine and robotics. We propose and validate a novel approach that learns the dynamics from the kinematics of a human body to aid stability analysis. More specifically, we propose an end-to-end deep learning architecture to regress foot pressure from a human pose derived from video. We have collected and utilized a set of long (5+ minutes) choreographed Taiji (Tai Chi) sequences of multiple subjects with synchronized motion capture, foot pressure and video data. The derived human pose data and corresponding foot pressure maps are used jointly to train a convolutional neural network with a residual architecture, named “PressNET”. Cross-validation results show promising performance of PressNET, which significantly outperforms the baseline method under reasonable sensor noise ranges.
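
A hypothetical sketch of such a pose-to-pressure regressor with residual blocks is below; the layer widths, joint count, and insole resolution are assumptions for illustration, not the actual PressNET architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))   # skip connection = residual learning

class PressNetSketch(nn.Module):
    def __init__(self, n_joints=25, pressure_cells=2 * 31 * 21, width=512):
        super().__init__()
        self.inp = nn.Linear(n_joints * 2, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(3)])
        self.out = nn.Linear(width, pressure_cells)   # one value per insole cell
    def forward(self, pose):                          # pose: (batch, n_joints * 2)
        return self.out(self.blocks(torch.relu(self.inp(pose))))
```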

Organizers: Nadine Rueegg


Recognizing the Pain Expressions of Horses

Talk
  • 10 December 2018 • 14:00 - 15:00
  • Prof. Dr. Hedvig Kjellström
  • Aquarium (N3.022)

Recognition of pain in horses and other animals is important, because pain is a manifestation of disease and decreases animal welfare. Pain diagnostics for humans typically includes self-evaluation and localization of the pain with the help of standardized forms, and labeling of the pain by a clinical expert using pain scales. However, animals cannot verbalize their pain as humans can, and the use of standardized pain scales is challenged by the fact that animals such as horses and cattle, being prey animals, display subtle and less obvious pain behavior: it is simply beneficial for a prey animal to appear healthy, in order to lower the interest from predators. We work together with veterinarians to develop methods for automatic video-based recognition of pain in horses. These methods are typically trained with video examples of behavioral traits labeled with pain level and pain characteristics. This automated, user-independent system for recognition of pain behavior in horses will be the first of its kind in the world. A successful system might change the way we monitor and care for our animals.
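
One generic form such a video-based recognizer could take is per-frame CNN features pooled by a recurrent network into a clip-level pain prediction; the architecture below is a stand-in sketch, not the group's actual system.

```python
import torch
import torch.nn as nn

class VideoPainClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)
    def forward(self, frame_feats):        # (batch, time, feat_dim) CNN features
        _, h = self.gru(frame_feats)       # summarize the behavior over time
        return self.head(h[-1])            # pain / no-pain logits
```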


  • Umar Iqbal
  • PS Aquarium

In this talk, I will present an overview of my Ph.D. research towards articulated human pose estimation from unconstrained images and videos. In the first part of the talk, I will present an approach to jointly model multi-person pose estimation and tracking in a single formulation. The approach represents body joint detections in a video by a spatiotemporal graph and solves an integer linear program to partition the graph into sub-graphs that correspond to plausible body pose trajectories for each person. I will also introduce the PoseTrack dataset and benchmark, which is now the de facto standard for multi-person pose estimation and tracking. In the second half of the talk, I will present a new method for 3D pose estimation from a monocular image through a novel 2.5D pose representation. The new 2.5D representation can be reliably estimated from an RGB image. Furthermore, it allows exact reconstruction of the absolute 3D body pose up to a scaling factor, which can additionally be estimated if a prior on the body size is given. I will also describe a novel CNN architecture to implicitly learn heatmaps and depth maps for human body keypoints from a single RGB image.
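
To illustrate the 2.5D idea, the sketch below reconstructs 3D pose from 2D keypoints plus per-joint depths relative to a root joint, fixing the root's absolute depth from a known (normalized) bone length. This is a simplified reading with illustrative names; the paper solves the root depth in closed form, while here a coarse 1-D search stands in.

```python
import numpy as np

def backproject(p2d, z, K_inv):
    """Lift a 2D pixel with absolute depth z to a 3D camera-space point."""
    return z * (K_inv @ np.append(p2d, 1.0))

def reconstruct(p2d, rel_z, K, bone=(0, 1), bone_len=1.0):
    """p2d: (J, 2) pixels; rel_z: (J,) depths relative to root (rel_z[0] == 0)."""
    K_inv = np.linalg.inv(K)
    candidates = np.linspace(1.0, 20.0, 2000)   # coarse search over root depth
    def bone_err(z_root):
        a = backproject(p2d[bone[0]], z_root + rel_z[bone[0]], K_inv)
        b = backproject(p2d[bone[1]], z_root + rel_z[bone[1]], K_inv)
        return abs(np.linalg.norm(a - b) - bone_len)
    z_root = candidates[np.argmin([bone_err(z) for z in candidates])]
    return np.stack([backproject(p2d[j], z_root + rel_z[j], K_inv)
                     for j in range(len(p2d))])
```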

Organizers: Dimitrios Tzionas


  • Prof. Dr. Stefan Roth
  • N0.002

Supervised learning with deep convolutional networks is the workhorse of the majority of computer vision research today. While much progress has already been made by exploiting deep architectures with standard components, enormous datasets, and massive computational power, I will argue that it pays to scrutinize some of the components of modern deep networks. I will begin by looking at the common pooling operation and show how standard pooling layers can be replaced with a perceptually motivated alternative, with consistent gains in accuracy. Next, I will show how we can leverage self-similarity, a well-known concept from the study of natural images, to derive non-local layers for various vision tasks that boost discriminative power. Finally, I will present a lightweight approach to obtaining predictive probabilities in deep networks, allowing one to judge the reliability of a prediction.
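
A sketch of the generic non-local (self-similarity) block: every spatial position is updated with a similarity-weighted sum over all other positions. This follows the standard non-local formulation and is not necessarily the exact layer from the talk; the inner channel reduction is an illustrative choice.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels, inner=None):
        super().__init__()
        inner = inner or channels // 2
        self.theta = nn.Conv2d(channels, inner, 1)
        self.phi = nn.Conv2d(channels, inner, 1)
        self.g = nn.Conv2d(channels, inner, 1)
        self.out = nn.Conv2d(inner, channels, 1)
    def forward(self, x):                              # x: (B, C, H, W)
        B, C, H, W = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, inner)
        k = self.phi(x).flatten(2)                     # (B, inner, HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, inner)
        attn = torch.softmax(q @ k, dim=-1)            # pairwise similarities
        y = (attn @ v).transpose(1, 2).reshape(B, -1, H, W)
        return x + self.out(y)                         # residual connection
```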

Organizers: Michael Black


A fine-grained perspective onto object interactions

Talk
  • 30 October 2018 • 10:30 - 11:30
  • Dima Damen
  • N0.002

This talk argues for a fine-grained perspective on human-object interactions from video sequences. I will present approaches for understanding ‘what’ objects one interacts with during daily activities, ‘when’ we should label the temporal boundaries of interactions, ‘which’ semantic labels one can use to describe such interactions, and ‘who’ is better when contrasting people performing the same interaction. I will detail my group’s latest works on sub-topics related to: (1) assessing action ‘completion’ – when an interaction is attempted but not completed [BMVC 2018], (2) determining skill or expertise from video sequences [CVPR 2018], and (3) finding unequivocal semantic representations for object interactions [ongoing work]. I will also introduce EPIC-KITCHENS 2018, the recently released largest dataset of object interactions in people’s homes, recorded using wearable cameras. The dataset includes 11.5M frames fully annotated with objects and actions, based on unique annotations from the participants narrating their own videos, thus reflecting true intention. Three open challenges are now available on object detection, action recognition and action anticipation [http://epic-kitchens.github.io].

Organizers: Mohamed Hassan


Still, In Motion

Talk
  • 12 October 2018 • 11:00 - 12:00
  • Michael Cohen

In this talk, I will take an autobiographical approach to explain both where we have come from in computer graphics since the early days of rendering, and where we are going in this new world of smartphones and social media. We are at a point in history where the ability to express oneself with media is unparalleled. The ubiquity and power of mobile devices coupled with new algorithmic paradigms is opening new expressive possibilities weekly. At the same time, these new creative media (composite imagery, augmented imagery, short-form video, 3D photos) also offer unprecedented abilities to move freely between what is real and unreal. I will focus on the spaces in between images and video, and in between objective and subjective reality. Finally, I will close with some lessons learned along the way.