Peter Vincent Gehler (Project leader), Research Group Leader
Martin Kiefel, Alumni
Varun Jampani, Ph.D. Student

5 results

2015


Permutohedral Lattice CNNs

Kiefel, M., Jampani, V., Gehler, P.

In ICLR Workshop Track, May 2015 (inproceedings)

Abstract
This paper presents a convolutional layer that is able to process sparse input features. As an example, for image recognition problems this allows an efficient filtering of signals that do not lie on a dense grid (like pixel position) but are indexed by more general features (such as color values). The presented algorithm makes use of the permutohedral lattice data structure. The permutohedral lattice was introduced to efficiently implement a bilateral filter, a commonly used image processing operation. Its use allows for a generalization of the convolution type found in current (spatial) convolutional network architectures.

pdf link (url) Project Page [BibTex]
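
A minimal NumPy sketch of the kind of feature-space filtering this generalizes may help: a brute-force bilateral filter with Gaussian weights over feature distances. The function name, the Gaussian kernel, and the O(N^2) pairwise computation are illustrative assumptions; the paper's point is that the permutohedral lattice performs this operation efficiently and that the filter weights can be learned inside a convolutional network.

```python
# Brute-force sketch of bilateral (feature-space) filtering, the operation the
# permutohedral lattice approximates efficiently. Illustrative only; not the
# paper's implementation.
import numpy as np

def bilateral_filter_brute_force(features, signal, sigma=1.0):
    """Filter `signal` (N x C) with Gaussian weights given by pairwise
    distances between rows of `features` (N x D)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    weights = np.exp(-d2 / (2.0 * sigma ** 2))      # N x N affinities
    weights /= weights.sum(axis=1, keepdims=True)   # normalize per output
    return weights @ signal                         # filtered signal, N x C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(size=(200, 2))                 # pixel positions
    rgb = rng.uniform(size=(200, 3))                # color values
    feats = np.concatenate([xy, rgb], axis=1)       # general feature vector
    out = bilateral_filter_brute_force(feats, rgb, sigma=0.2)
    print(out.shape)                                # (200, 3)
```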

Consensus Message Passing for Layered Graphical Models

Jampani, V., Eslami, S., Tarlow, D., Kohli, P., Winn, J.

In Eighteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 38, pages: 425-433, JMLR Workshop and Conference Proceedings, May 2015 (inproceedings)

Abstract
Generative models provide a powerful framework for probabilistic reasoning. However, in many domains their use has been hampered by the practical difficulties of inference. This is particularly the case in computer vision, where models of the imaging process tend to be large, loopy and layered. For this reason bottom-up conditional models have traditionally dominated in such domains. We find that widely-used, general-purpose message passing inference algorithms such as Expectation Propagation (EP) and Variational Message Passing (VMP) fail on the simplest of vision models. With these models in mind, we introduce a modification to message passing that learns to exploit their layered structure by passing 'consensus' messages that guide inference towards good solutions. Experiments on a variety of problems show that the proposed technique leads to significantly more accurate inference results, not only when compared to standard EP and VMP, but also when compared to competitive bottom-up conditional models.

online pdf supplementary link (url) Project Page [BibTex]
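
The toy below only sketches the structure of the idea: alongside the usual messages, a bottom-up predictor contributes an extra "consensus" message that pulls inference toward a good solution before refinement. The scalar Gaussian model, the precision-weighted merge, and the median-based predictor are illustrative assumptions, not the layered vision models or learned predictors of the paper.

```python
# Schematic sketch of the consensus message passing idea on a toy scalar model.
# Illustrative assumptions throughout; not the paper's models or training.
import numpy as np

def gaussian_product(mu1, prec1, mu2, prec2):
    """Combine two Gaussian messages given as (mean, precision)."""
    prec = prec1 + prec2
    mu = (prec1 * mu1 + prec2 * mu2) / prec
    return mu, prec

def infer_latent(observations, noise_prec, consensus_predictor=None):
    """Posterior over a scalar latent observed under Gaussian noise. If a
    consensus predictor is given, its prediction enters as one extra
    Gaussian message (the 'consensus' message) with its own confidence."""
    mu, prec = 0.0, 1e-6                               # broad prior
    for y in observations:                             # standard messages
        mu, prec = gaussian_product(mu, prec, y, noise_prec)
    if consensus_predictor is not None:                # consensus message
        c_mu, c_prec = consensus_predictor(observations)
        mu, prec = gaussian_product(mu, prec, c_mu, c_prec)
    return mu, prec

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ys = 3.0 + rng.normal(scale=1.0, size=5)
    # stand-in for a learned bottom-up predictor: median with fixed precision
    consensus = lambda obs: (float(np.median(obs)), 2.0)
    print(infer_latent(ys, noise_prec=1.0))
    print(infer_latent(ys, noise_prec=1.0, consensus_predictor=consensus))
```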


2014


Advanced Structured Prediction

Nowozin, S., Gehler, P., Jancsary, J., Lampert, C.

Advanced Structured Prediction, pages: 432, Neural Information Processing Series, MIT Press, November 2014 (book)

Abstract
The goal of structured prediction is to build machine learning models that predict relational information that itself has structure, such as being composed of multiple interrelated parts. These models, which reflect prior knowledge, task-specific relations, and constraints, are used in fields including computer vision, speech recognition, natural language processing, and computational biology. They can carry out such tasks as predicting a natural language sentence, or segmenting an image into meaningful components. These models are expressive and powerful, but exact computation is often intractable. A broad research effort in recent years has aimed at designing structured prediction models and approximate inference and learning procedures that are computationally efficient. This volume offers an overview of this recent research in order to make the work accessible to a broader research community. The chapters, by leading researchers in the field, cover a range of topics, including research trends, the linear programming relaxation approach, innovations in probabilistic modeling, recent theoretical progress, and resource-aware learning.

publisher link (url) Project Page [BibTex]
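
As a generic, minimal illustration of predicting interrelated outputs (not an example taken from the book), the sketch below decodes a label sequence on a chain with Viterbi, so pairwise transition scores couple neighbouring labels instead of predicting each position independently; the random scores are placeholders.

```python
# Minimal structured prediction example: exact MAP decoding on a chain model
# (Viterbi). Scores are random placeholders, purely for illustration.
import numpy as np

def viterbi(unary, pairwise):
    """unary: T x K per-position label scores; pairwise: K x K transition
    scores. Returns the jointly highest-scoring label sequence."""
    T, K = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise + unary[t][None, :]   # K x K
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    labels = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        labels.append(int(back[t, labels[-1]]))
    return labels[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(viterbi(rng.normal(size=(6, 3)), rng.normal(size=(3, 3))))
```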


2013


Branch&Rank for Efficient Object Detection

Lehmann, A., Gehler, P., Van Gool, L.

International Journal of Computer Vision, Springer, December 2013 (article)

Abstract
Ranking hypothesis sets is a powerful concept for efficient object detection. In this work, we propose a branch&rank scheme that detects objects with often less than 100 ranking operations. This efficiency enables the use of strong and also costly classifiers like non-linear SVMs with RBF-χ² kernels. We thereby relieve an inherent limitation of branch&bound methods as bounds are often not tight enough to be effective in practice. Our approach features three key components: a ranking function that operates on sets of hypotheses and a grouping of these into different tasks. Detection efficiency results from adaptively sub-dividing the object search space into decreasingly smaller sets. This is inherited from branch&bound, while the ranking function supersedes a tight bound which is often unavailable (except for rather limited function classes). The grouping makes the system effective: it separates image classification from object recognition, yet combines them in a single formulation, phrased as a structured SVM problem. A novel aspect of branch&rank is that a better ranking function is expected to decrease the number of classifier calls during detection. We use the VOC’07 dataset to demonstrate the algorithmic properties of branch&rank.

pdf link (url) Project Page [BibTex]
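
The toy loop below sketches the search strategy the abstract describes: hypothesis sets sit in a priority queue ordered by a ranking function, the top-ranked set is split, and search stops once a single hypothesis remains, with the number of ranking calls counted. The interval-shaped hypotheses and the noisy set scorer are stand-ins for the paper's structured-SVM-trained ranker over detection windows.

```python
# Toy branch&rank loop: rank sets of hypotheses, always refine the best-ranked
# set, count ranking operations. Hypotheses and scorer are illustrative only.
import heapq
import numpy as np

def branch_and_rank(n_hypotheses, rank_fn, max_evals=100):
    """Search hypothesis indices [0, n) using only rank_fn calls on sets."""
    heap = [(-rank_fn(0, n_hypotheses), 0, n_hypotheses)]
    evals = 1
    while heap and evals < max_evals:
        _, lo, hi = heapq.heappop(heap)           # best-ranked set so far
        if hi - lo == 1:
            return lo, evals                      # single hypothesis left
        mid = (lo + hi) // 2
        for a, b in ((lo, mid), (mid, hi)):       # branch: split the set
            heapq.heappush(heap, (-rank_fn(a, b), a, b))  # rank the children
            evals += 1
    return None, evals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.normal(size=1024)                # hidden per-hypothesis scores
    rank = lambda lo, hi: scores[lo:hi].max() + rng.normal(scale=0.1)
    print("true argmax:", int(scores.argmax()))
    print("branch&rank result, ranking calls:", branch_and_rank(len(scores), rank))
```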


2011


Branch&Rank: Non-Linear Object Detection

(Best Impact Paper Prize)

Lehmann, A., Gehler, P., Van Gool, L.

In Proceedings of the British Machine Vision Conference (BMVC), pages: 8.1-8.11, (Editors: Jesse Hoey and Stephen McKenna and Emanuele Trucco), BMVA Press, September 2011, http://dx.doi.org/10.5244/C.25.8 (inproceedings)

video of talk pdf slides supplementary Project Page [BibTex]
