

2016


Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks

Jampani, V., Kiefel, M., Gehler, P. V.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 4452-4461, June 2016 (inproceedings)

Abstract
Bilateral filters are in widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we generalize the parametrization and, in particular, derive a gradient descent algorithm so the filter parameters can be learned from data. This derivation makes it possible to learn high-dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate their use in applications where a single filter application is desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for processing high-dimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the use of general forms of filters.
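As context for the fixed Gaussian baseline that the paper generalizes, a brute-force Gaussian bilateral filter on a grayscale image can be sketched as follows. This is the standard hand-tuned filter, not the paper's learned, permutohedral-lattice version, and the parameter names are illustrative:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force Gaussian bilateral filter on a 2D grayscale image.

    sigma_s: spatial bandwidth, sigma_r: range (intensity) bandwidth.
    The range term is what makes the filter edge-preserving.
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # spatial Gaussian weight: nearby pixels count more
            ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # range Gaussian weight: similar intensities count more
            wr = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

On a sharp step edge with a small `sigma_r`, the range term suppresses contributions from across the edge, so the step survives filtering while homogeneous regions are smoothed.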

project page code CVF open-access pdf supplementary poster Project Page [BibTex]


Occlusion boundary detection via deep exploration of context

Fu, H., Wang, C., Tao, D., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Occlusion boundaries contain rich perceptual information about the underlying scene structure. They also provide important cues in many visual perception tasks such as scene understanding, object recognition, and segmentation. In this paper, we improve occlusion boundary detection via enhanced exploration of contextual information (e.g., local structural boundary patterns, observations from surrounding regions, and temporal context), and in doing so develop a novel approach based on convolutional neural networks (CNNs) and conditional random fields (CRFs). Experimental results demonstrate that our detector significantly outperforms the state-of-the-art (e.g., improving the F-measure from 0.62 to 0.71 on the commonly used CMU benchmark). Last but not least, we empirically assess the roles of several important components of the proposed detector, so as to validate the rationale behind this approach.
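The F-measure quoted above (0.62 to 0.71 on the CMU benchmark) is the harmonic mean of boundary precision and recall. As a reminder of the metric itself (not the paper's boundary-matching protocol), it can be computed from raw match counts:

```python
def f_measure(tp, fp, fn):
    """F-measure (F1) from true-positive, false-positive, and
    false-negative counts: the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For boundary detection, `tp`/`fp`/`fn` are typically obtained by matching predicted boundary pixels to ground-truth boundaries within a small tolerance.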

pdf [BibTex]



Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Xie, J., Kiefel, M., Sun, M., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixel-wise annotation of images at very large scale is labor-intensive, and little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a probabilistic model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.
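The core geometric step in transferring 3D annotations into images is a standard pinhole projection; a minimal sketch of that step (the paper additionally reasons probabilistically about occlusion and label consistency, which this omits):

```python
import numpy as np

def project_to_image(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera.

    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation
    (world -> camera). Points behind the camera are not handled here.
    """
    cam = points_3d @ R.T + t        # world -> camera coordinates
    pix = cam @ K.T                  # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]  # perspective divide
```

Projecting the corners of a 3D bounding primitive this way yields its 2D footprint in each frame, which is the raw material for the label transfer.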

pdf suppmat Project Page [BibTex]



Appealing female avatars from 3D body scans: Perceptual effects of stylization

Fleming, R., Mohler, B., Romero, J., Black, M. J., Breidt, M.

In 11th Int. Conf. on Computer Graphics Theory and Applications (GRAPP), February 2016 (inproceedings)

Abstract
Advances in 3D scanning technology allow us to create realistic virtual avatars from full body 3D scan data. However, negative reactions to some realistic computer generated humans suggest that this approach might not always provide the most appealing results. Using styles derived from existing popular character designs, we present a novel automatic stylization technique for body shape and colour information based on a statistical 3D model of human bodies. We investigate whether such stylized body shapes result in increased perceived appeal with two different experiments: One focuses on body shape alone, the other investigates the additional role of surface colour and lighting. Our results consistently show that the most appealing avatar is a partially stylized one. Importantly, avatars with high stylization or no stylization at all were rated to have the least appeal. The inclusion of colour information and improvements to render quality had no significant effect on the overall perceived appeal of the avatars, and we observe that the body shape primarily drives the change in appeal ratings. For body scans with colour information, we found that a partially stylized avatar was most effective, increasing average appeal ratings by approximately 34%.
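One plausible reading of "partial stylization" in a statistical body model is a blend between the scan's shape coefficients and a style target. The parameterization below is hypothetical, not the paper's exact pipeline:

```python
import numpy as np

def blend_shape(beta_scan, beta_style, alpha):
    """Interpolate statistical-model shape coefficients.

    alpha=0 reproduces the scan, alpha=1 the fully stylized shape;
    intermediate alpha gives a partially stylized avatar.
    """
    beta_scan = np.asarray(beta_scan, dtype=float)
    beta_style = np.asarray(beta_style, dtype=float)
    return (1.0 - alpha) * beta_scan + alpha * beta_style
```

Under this reading, the experiments sweep `alpha` and find that intermediate values are rated most appealing, while the endpoints (no stylization, full stylization) are rated lowest.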

pdf Project Page [BibTex]



Deep Discrete Flow

Güney, F., Geiger, A.

Asian Conference on Computer Vision (ACCV), 2016 (conference) Accepted

pdf suppmat Project Page [BibTex]



Multi-Person Tracking by Multicuts and Deep Matching

(Winner of the Multi-Object Tracking Challenge ECCV 2016)

Tang, S., Andres, B., Andriluka, M., Schiele, B.

ECCV Workshop on Benchmarking Multiple Object Tracking, 2016 (conference)

PDF [BibTex]



Reconstructing Articulated Rigged Models from RGB-D Videos

Tzionas, D., Gall, J.

In European Conference on Computer Vision Workshops 2016 (ECCVW’16) - Workshop on Recovering 6D Object Pose (R6D’16), pages: 620-633, Springer International Publishing, 2016 (inproceedings)

Abstract
Although commercial and open-source software exist to reconstruct a static object from a sequence recorded with an RGB-D sensor, there is a lack of tools that build rigged models of articulated objects that deform realistically and can be used for tracking or animation. In this work, we fill this gap and propose a method that creates a fully rigged model of an articulated object from depth data of a single sensor. To this end, we combine deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow. The fully rigged model then consists of a watertight mesh, embedded skeleton, and skinning weights.
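A rigged model of this kind (watertight mesh, embedded skeleton, skinning weights) is typically deformed with linear blend skinning. A minimal sketch, independent of the paper's reconstruction pipeline:

```python
import numpy as np

def linear_blend_skinning(verts, weights, bone_transforms):
    """Deform a mesh with per-vertex bone weights.

    verts: (n, 3) rest-pose vertices,
    weights: (n, b) skinning weights, each row summing to 1,
    bone_transforms: (b, 4, 4) homogeneous per-bone transforms.
    """
    vh = np.hstack([verts, np.ones((len(verts), 1))])         # (n, 4) homogeneous
    per_bone = np.einsum('bij,nj->bni', bone_transforms, vh)  # each bone moves all verts
    blended = np.einsum('nb,bni->ni', weights, per_bone)      # weighted average per vertex
    return blended[:, :3]
```

The skinning weights recovered by the method plug directly into this formula, which is why the output model can be reposed for tracking or animation.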

pdf suppl Project's Website YouTube link (url) DOI [BibTex]



A Multi-cut Formulation for Joint Segmentation and Tracking of Multiple Objects

Keuper, M., Tang, S., Yu, Z., Andres, B., Brox, T., Schiele, B.

arXiv:1607.06317, 2016 (preprint)

PDF [BibTex]


1990


A model for the detection of motion over time

Black, M. J., Anandan, P.

In Proc. Int. Conf. on Computer Vision, ICCV-90, pages: 33-37, Osaka, Japan, December 1990 (inproceedings)

Abstract
We propose a model for the recovery of visual motion fields from image sequences. Our model exploits three constraints on the motion of a patch in the environment: i) Data Conservation: the intensity structure corresponding to an environmental surface patch changes gradually over time; ii) Spatial Coherence: since surfaces have spatial extent, neighboring points have similar motions; iii) Temporal Coherence: the direction and velocity of motion for a surface patch change gradually. The formulation of the constraints takes into account the possibility of multiple motions at a particular location. We also present a highly parallel computational model for realizing these constraints in which computation occurs locally, knowledge about the motion increases over time, and occlusion and disocclusion boundaries are estimated. An implementation of the model using a stochastic temporal updating scheme is described. Experiments with both synthetic and real imagery are presented.
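The three constraints can be combined into a single objective over a dense flow field. The toy discretization below is illustrative notation only, not the paper's exact formulation or its multiple-motion and stochastic treatment; the weights `lam_s` and `lam_t` are assumptions:

```python
import numpy as np

def motion_energy(I0, I1, u, v, u_prev, v_prev, lam_s=1.0, lam_t=0.5):
    """Toy energy combining the three constraints for flow (u, v)."""
    h, w = I0.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # i) data conservation: intensity at a point should persist along the flow
    #    (nearest-neighbour warp of I1 back to I0's grid)
    xw = np.clip((xx + u).round().astype(int), 0, w - 1)
    yw = np.clip((yy + v).round().astype(int), 0, h - 1)
    e_data = ((I0 - I1[yw, xw]) ** 2).sum()
    # ii) spatial coherence: neighbouring flow vectors should agree
    e_spatial = ((np.diff(u, axis=0) ** 2).sum() + (np.diff(u, axis=1) ** 2).sum()
                 + (np.diff(v, axis=0) ** 2).sum() + (np.diff(v, axis=1) ** 2).sum())
    # iii) temporal coherence: flow should change gradually between frames
    e_temporal = ((u - u_prev) ** 2 + (v - v_prev) ** 2).sum()
    return e_data + lam_s * e_spatial + lam_t * e_temporal
```

Quadratic penalties like these over-smooth at motion boundaries, which is precisely the limitation that the multiple-motion formulation and the stochastic updating scheme in the paper address.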

pdf [BibTex]



Constraints for the early detection of discontinuity from motion

Black, M. J., Anandan, P.

In Proc. National Conf. on Artificial Intelligence, AAAI-90, pages: 1060-1066, Boston, MA, 1990 (inproceedings)

Abstract
Surface discontinuities are detected in a sequence of images by exploiting physical constraints at early stages in the processing of visual motion. To achieve accurate early discontinuity detection we exploit five physical constraints on the presence of discontinuities: i) the shape of the sum of squared differences (SSD) error surface in the presence of surface discontinuities; ii) the change in the shape of the SSD surface due to relative surface motion; iii) the distribution of optic flow in a neighborhood of a discontinuity; iv) spatial consistency of discontinuities; v) temporal consistency of discontinuities. The constraints are described, and experimental results on sequences of real and synthetic images are presented. The work has applications in the recovery of environmental structure from motion and in the generation of dense optic flow fields.
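Constraints (i) and (ii) refer to the shape of the SSD error surface over candidate displacements. A minimal sketch of computing that surface for one patch (illustrative only, not the paper's detector; parameter names are assumptions):

```python
import numpy as np

def ssd_surface(ref, tgt, y, x, patch=3, search=2):
    """SSD error surface of a (2*patch+1)^2 patch from `ref` around (y, x),
    evaluated over a (2*search+1)^2 displacement window in `tgt`.

    A single sharp minimum suggests a coherent surface patch; multiple
    minima or an elongated valley hint at a motion discontinuity.
    Assumes the window stays inside the image bounds.
    """
    p = ref[y - patch:y + patch + 1, x - patch:x + patch + 1]
    surf = np.empty((2 * search + 1, 2 * search + 1))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            q = tgt[y + dy - patch:y + dy + patch + 1,
                    x + dx - patch:x + dx + patch + 1]
            surf[dy + search, dx + search] = ((p - q) ** 2).sum()
    return surf
```

Examining how this surface changes over time (constraint ii) is what allows discontinuities to be flagged early, before a full flow field has been committed to.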

pdf [BibTex]
