

2019


Motion Planning for Multi-Mobile-Manipulator Payload Transport Systems

Tallamraju, R., Salunkhe, D., Rajappa, S., Ahmad, A., Karlapalem, K., Shah, S. V.

15th IEEE International Conference on Automation Science and Engineering, IEEE, August 2019 (conference) Accepted

[BibTex]

Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

Ranjan, A., Jampani, V., Balles, L., Kim, K., Sun, D., Wulff, J., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems.

Paper link (url) Project Page [BibTex]
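The moderator's EM-like pixel assignment described in the abstract can be sketched in a few lines. Everything below is an illustrative stand-in, not the paper's implementation: the residuals, temperature, and soft-assignment rule are invented for the sketch.

```python
import numpy as np

# Hypothetical per-pixel photometric residuals from the two "competitors":
# one explaining pixels via depth + camera motion (static scene), one via
# optical flow (moving regions). Shapes: (H, W).
rng = np.random.default_rng(0)
res_static = rng.random((4, 4))
res_flow = rng.random((4, 4))

# E-step-like moderation: softly assign each pixel to the competitor with
# the lower residual (a stand-in for the learned moderator network).
temperature = 0.1
m = np.exp(-res_static / temperature)
m = m / (m + np.exp(-res_flow / temperature))  # m -> 1 where static wins

# Moderated training loss: each competitor is only penalized on the pixels
# assigned to it.
loss = np.mean(m * res_static + (1 - m) * res_flow)
```

The same mask doubles as the motion segmentation: thresholding `m` splits the image into static background and independently moving regions.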


Local Temporal Bilinear Pooling for Fine-grained Action Parsing

Zhang, Y., Tang, S., Muandet, K., Jarvers, C., Neumann, H.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Fine-grained temporal action parsing is important in many applications, such as daily activity understanding, human motion analysis, surgical robotics and others requiring subtle and precise operations in a long-term period. In this paper we propose a novel bilinear pooling operation, which is used in intermediate layers of a temporal convolutional encoder-decoder net. In contrast to other work, our proposed bilinear pooling is learnable and hence can capture more complex local statistics than the conventional counterpart. In addition, we introduce exact lower-dimension representations of our bilinear forms, so that the dimensionality is reduced with neither information loss nor extra computation. We perform intensive experiments to quantitatively analyze our model and show the superior performances to other state-of-the-art work on various datasets.

Code video demo pdf link (url) [BibTex]
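For context, the conventional (non-learnable) counterpart of the proposed pooling can be sketched as an averaged outer product of frame features over a sliding temporal window. Shapes, the window size, and the toy features are illustrative.

```python
import numpy as np

def local_bilinear_pool(x, window):
    """Conventional local temporal bilinear pooling (sketch).

    x: (T, D) per-frame features -> (T - window + 1, D*D) pooled features,
    each the windowed average of outer products (second-order statistics).
    """
    T, D = x.shape
    out = []
    for t in range(T - window + 1):
        w = x[t:t + window]          # (window, D) frames in this window
        B = w.T @ w / window         # (D, D) averaged outer product
        out.append(B.reshape(-1))
    return np.stack(out)

feats = np.arange(12, dtype=float).reshape(6, 2)  # toy 6-frame, 2-dim sequence
pooled = local_bilinear_pool(feats, window=3)
```

The paper's contribution replaces this fixed second-order statistic with a learnable bilinear form and an exact lower-dimensional representation of it.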

Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision

Sanyal, S., Bolkart, T., Feng, H., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
The estimation of 3D face shape from a single image must be robust to variations in lighting, head pose, expression, facial hair, makeup, and occlusions. Robustness requires a large training set of in-the-wild images, which by construction, lack ground truth 3D shape. To train a network without any 2D-to-3D supervision, we present RingNet, which learns to compute 3D face shape from a single image. Our key observation is that an individual’s face shape is constant across images, regardless of expression, pose, lighting, etc. RingNet leverages multiple images of a person and automatically detected 2D face features. It uses a novel loss that encourages the face shape to be similar when the identity is the same and different for different people. We achieve invariance to expression by representing the face using the FLAME model. Once trained, our method takes a single image and outputs the parameters of FLAME, which can be readily animated. Additionally we create a new database of faces “not quite in-the-wild” (NoW) with 3D head scans and high-resolution images of the subjects in a wide variety of conditions. We evaluate publicly available methods and find that RingNet is more accurate than methods that use 3D supervision. The dataset, model, and results are available for research purposes.

code pdf preprint link (url) [BibTex]

Learning joint reconstruction of hands and manipulated objects

Hasson, Y., Varol, G., Tzionas, D., Kalevatykh, I., Black, M. J., Laptev, I., Schmid, C.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Estimating hand-object manipulations is essential for interpreting and imitating human actions. Previous work has made significant progress towards reconstruction of hand poses and object shapes in isolation. Yet, reconstructing hands and objects during manipulation is a more challenging task due to significant occlusions of both the hand and object. While presenting challenges, manipulations may also simplify the problem since the physics of contact restricts the space of valid hand-object configurations. For example, during manipulation, the hand and object should be in contact but not interpenetrate. In this work, we regularize the joint reconstruction of hands and objects with manipulation constraints. We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and evaluate the model, we also propose a new large-scale synthetic dataset, ObMan, with hand-object manipulations. We demonstrate the transferability of ObMan-trained models to real data.

pdf suppl poster link (url) Project Page [BibTex]

Expressive Body Capture: 3D Hands, Face, and Body from a Single Image

Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A. A. A., Tzionas, D., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
To facilitate the analysis of human actions, interactions and emotions, we compute a 3D model of human body pose, hand pose, and facial expression from a single monocular image. To achieve this, we use thousands of 3D scans to train a new, unified, 3D model of the human body, SMPL-X, that extends SMPL with fully articulated hands and an expressive face. Learning to regress the parameters of SMPL-X directly from images is challenging without paired images and 3D ground truth. Consequently, we follow the approach of SMPLify, which estimates 2D features and then optimizes model parameters to fit the features. We improve on SMPLify in several significant ways: (1) we detect 2D features corresponding to the face, hands, and feet and fit the full SMPL-X model to these; (2) we train a new neural network pose prior using a large MoCap dataset; (3) we define a new interpenetration penalty that is both fast and accurate; (4) we automatically detect gender and the appropriate body models (male, female, or neutral); (5) our PyTorch implementation achieves a speedup of more than 8x over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to both controlled images and images in the wild. We evaluate 3D accuracy on a new curated dataset comprising 100 images with pseudo ground-truth. This is a step towards automatic expressive human capture from monocular RGB data. The models, code, and data are available for research purposes at https://smpl-x.is.tue.mpg.de.

video code pdf suppl poster link (url) Project Page [BibTex]


Capture, Learning, and Synthesis of 3D Speaking Styles

Cudeiro, D., Bolkart, T., Laidlaw, C., Ranjan, A., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation) takes any speech signal as input—even speech in languages other than English—and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de.

code Project Page video paper [BibTex]

Learning and Tracking the 3D Body Shape of Freely Moving Infants from RGB-D sequences

Hesse, N., Pujades, S., Black, M., Arens, M., Hofmann, U., Schroeder, S.

Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019 (article)

Abstract
Statistical models of the human body surface are generally learned from thousands of high-quality 3D scans in predefined poses to cover the wide variety of human body shapes and articulations. Acquisition of such data requires expensive equipment, calibration procedures, and is limited to cooperative subjects who can understand and follow instructions, such as adults. We present a method for learning a statistical 3D Skinned Multi-Infant Linear body model (SMIL) from incomplete, low-quality RGB-D sequences of freely moving infants. Quantitative experiments show that SMIL faithfully represents the RGB-D data and properly factorizes the shape and pose of the infants. To demonstrate the applicability of SMIL, we fit the model to RGB-D sequences of freely moving infants and show, with a case study, that our method captures enough motion detail for General Movements Assessment (GMA), a method used in clinical practice for early detection of neurodevelopmental disorders in infants. SMIL provides a new tool for analyzing infant shape and movement and is a step towards an automated system for GMA.

pdf Journal DOI [BibTex]

Perceptual Effects of Inconsistency in Human Animations

Kenny, S., Mahmood, N., Honda, C., Black, M. J., Troje, N. F.

ACM Trans. Appl. Percept., 16(1):2:1-2:18, February 2019 (article)

Abstract
The individual shape of the human body, including the geometry of its articulated structure and the distribution of weight over that structure, influences the kinematics of a person’s movements. How sensitive is the visual system to inconsistencies between shape and motion introduced by retargeting motion from one person onto the shape of another? We used optical motion capture to record five pairs of male performers with large differences in body weight, while they pushed, lifted, and threw objects. From these data, we estimated both the kinematics of the actions as well as the performer’s individual body shape. To obtain consistent and inconsistent stimuli, we created animated avatars by combining the shape and motion estimates from either a single performer or from different performers. Using these stimuli we conducted three experiments in an immersive virtual reality environment. First, a group of participants detected which of two stimuli was inconsistent. Performance was very low, and results were only marginally significant. Next, a second group of participants rated perceived attractiveness, eeriness, and humanness of consistent and inconsistent stimuli, but these judgements of animation characteristics were not affected by consistency of the stimuli. Finally, a third group of participants rated properties of the objects rather than of the performers. Here, we found strong influences of shape-motion inconsistency on perceived weight and thrown distance of objects. This suggests that the visual system relies on its knowledge of shape and motion and that these components are assimilated into an altered perception of the action outcome. We propose that the visual system attempts to resist inconsistent interpretations of human animations. Actions involving object manipulations present an opportunity for the visual system to reinterpret the introduced inconsistencies as a change in the dynamics of an object rather than as an unexpected combination of body shape and body motion.

publisher pdf DOI [BibTex]

Perceiving Systems (2016-2018)
Scientific Advisory Board Report, 2019 (misc)

pdf [BibTex]

Learning to Train with Synthetic Humans

Hoffmann, D., Tzionas, D., Black, M., Tang, S.

In 2019 (inproceedings)

Abstract
Neural networks need big annotated datasets for training. However, manual annotation can be too expensive or even unfeasible for certain tasks, like multi-person 2D pose estimation with severe occlusions. A remedy for this is synthetic data with perfect ground truth. Here we explore two variations of synthetic data for this challenging problem; a dataset with purely synthetic humans, as well as a real dataset augmented with synthetic humans. We then study which approach better generalizes to real data, as well as the influence of virtual humans in the training loss. We observe that not all synthetic samples are equally informative for training, while the informative samples are different for each training stage. To exploit this observation, we employ an adversarial student-teacher framework; the teacher improves the student by providing the hardest samples for its current state as a challenge. Experiments show that this student-teacher framework outperforms all our baselines.

[BibTex]


The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements

Pujades, S., Mohler, B., Thaler, A., Tesch, J., Mahmood, N., Hesse, N., Bülthoff, H. H., Black, M. J.

IEEE Transactions on Visualization and Computer Graphics, 25, pages: 1887-1897, IEEE, 2019 (article)

Abstract
Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and difficulty in making accurate tailoring measurements. We overcome these challenges by creating “The Virtual Caliper”, which uses VR game controllers to make simple measurements. First, we establish what body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to rapidly create avatars in fewer than five minutes. Not only is our approach more rapid than existing methods, it exports a metrically accurate 3D avatar model that is rigged and skinned.

Project Page IEEE Open Access PDF DOI [BibTex]
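The linearity observation at the heart of the method, that a handful of distance measurements are approximately linear in the body-shape coefficients, can be illustrated with a least-squares fit. All data below is synthetic, and `betas` merely stands in for SMPL shape parameters.

```python
import numpy as np

# Hypothetical linear map from shape coefficients to 4 distance measurements.
rng = np.random.default_rng(1)
A_true = rng.normal(size=(4, 3))
betas = rng.normal(size=(200, 3))   # training body shapes
meas = betas @ A_true.T             # their 4 distance measurements

# Fit the inverse map (measurements -> shape) by ordinary least squares.
W, *_ = np.linalg.lstsq(meas, betas, rcond=None)

# Predict the shape of an unseen body from its measurements alone.
new_meas = rng.normal(size=3) @ A_true.T
pred_betas = new_meas @ W
```

Because the synthetic relationship is exactly linear, the fitted map reproduces the training shapes; real measurements would only do so approximately.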

Active Perception based Formation Control for Multiple Aerial Vehicles

Tallamraju, R., Price, E., Ludwig, R., Karlapalem, K., Bülthoff, H. H., Black, M. J., Ahmad, A.

IEEE Robotics and Automation Letters, Robotics and Automation Letters, IEEE, 2019 (article) Accepted

Abstract
We present a novel robotic front-end for autonomous aerial motion-capture (mocap) in outdoor environments. In previous work, we presented an approach for cooperative detection and tracking (CDT) of a subject using multiple micro-aerial vehicles (MAVs). However, it did not ensure optimal view-point configurations of the MAVs to minimize the uncertainty in the person's cooperatively tracked 3D position estimate. In this article, we introduce an active approach for CDT. In contrast to cooperatively tracking only the 3D positions of the person, the MAVs can actively compute optimal local motion plans, resulting in optimal view-point configurations, which minimize the uncertainty in the tracked estimate. We achieve this by decoupling the goal of active tracking into a quadratic objective and non-convex constraints corresponding to angular configurations of the MAVs w.r.t. the person. We derive this decoupling using Gaussian observation model assumptions within the CDT algorithm. We preserve convexity in optimization by embedding all the non-convex constraints, including those for dynamic obstacle avoidance, as external control inputs in the MPC dynamics. Multiple real robot experiments and comparisons involving 3 MAVs in several challenging scenarios are presented.

[BibTex]

Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders

Ghosh, P., Losalka, A., Black, M. J.

In Proc. AAAI, 2019 (inproceedings)

Abstract
Susceptibility of deep neural networks to adversarial attacks poses a major theoretical and practical challenge. All efforts to harden classifiers against such attacks have seen limited success till now. Two distinct categories of samples against which deep neural networks are vulnerable, "adversarial samples" and "fooling samples", have been tackled separately so far due to the difficulty posed when considered together. In this work, we show how one can defend against them both under a unified framework. Our model has the form of a variational autoencoder with a Gaussian mixture prior on the latent variable, such that each mixture component corresponds to a single class. We show how selective classification can be performed using this model, thereby causing the adversarial objective to entail a conflict. The proposed method leads to the rejection of adversarial samples instead of misclassification, while maintaining high precision and recall on test data. It also inherently provides a way of learning a selective classifier in a semi-supervised scenario, which can similarly resist adversarial attacks. We further show how one can reclassify the detected adversarial samples by iterative optimization.

link (url) Project Page [BibTex]
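The reject-instead-of-misclassify behavior can be sketched with an isotropic-Gaussian stand-in for the learned latent mixture; the means, latent points, and threshold below are illustrative, not the paper's values.

```python
import numpy as np

# One unit-variance Gaussian latent component per class (toy 2-class, 2-D case).
means = np.array([[0.0, 0.0], [5.0, 5.0]])
threshold = -10.0  # reject when even the best component's log-density is below this

def classify(z):
    # Log-densities of latent code z under each class component.
    logp = -0.5 * np.sum((z - means) ** 2, axis=1) - np.log(2 * np.pi)
    k = int(np.argmax(logp))
    return k if logp[k] > threshold else None  # None = rejected, not misclassified

in_dist = classify(np.array([0.2, -0.1]))        # near class-0 mean -> class 0
adversarial = classify(np.array([20.0, -20.0]))  # far from all components -> rejected
```

An adversary must now move the latent code close to a wrong-class component to change the label, which conflicts with keeping the perturbation small.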


From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

2019, *equal contribution (conference) Submitted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.

arXiv [BibTex]
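The ex-post density estimation step can be sketched by fitting a simple density, here a single full-covariance Gaussian, to latent codes and sampling from it; the latent codes below are synthetic stand-ins for the codes a trained deterministic autoencoder would produce.

```python
import numpy as np

# Stand-in latent codes of the training data (a trained encoder would produce these).
rng = np.random.default_rng(2)
latents = rng.normal(loc=1.0, scale=0.5, size=(1000, 4))

# Ex-post density estimation: fit a Gaussian to the observed latent codes.
mu = latents.mean(axis=0)
cov = np.cov(latents, rowvar=False)

# Draw fresh latent codes from the fitted density; decoding them would yield
# new samples, restoring a generative mechanism without a VAE-style prior.
z_new = rng.multivariate_normal(mu, cov, size=16)
```

The paper uses richer estimators than a single Gaussian (e.g. mixtures), but the division of labor is the same: regularized deterministic autoencoding first, density fitting on the latent space afterwards.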

2009


Ball Joints for Marker-less Human Motion Capture

Pons-Moll, G., Rosenhahn, B.

In IEEE Workshop on Applications of Computer Vision (WACV), December 2009 (inproceedings)

pdf [BibTex]

Background Subtraction Based on Rank Constraint for Point Trajectories

Ahmad, A., Del Bue, A., Lima, P.

In pages: 1-3, October 2009 (inproceedings)

Abstract
This work deals with a background subtraction algorithm for a fish-eye lens camera having 3 degrees of freedom, 2 in translation and 1 in rotation. The core assumption in this algorithm is that the background is composed of a dominant static plane in the world frame. The novelty lies in developing a rank-constraint based background subtraction for the equidistant projection model, a property of the fish-eye lens. A detailed simulation result is presented to support the hypotheses explained in this paper.

link (url) [BibTex]

Parametric Modeling of the Beating Heart with Respiratory Motion Extracted from Magnetic Resonance Images

Pons-Moll, G., Crosas, C., Tadmor, G., MacLeod, R., Rosenhahn, B., Brooks, D.

In IEEE Computers in Cardiology (CINC), September 2009 (inproceedings)

[BibTex]

Computer cursor control by motor cortical signals in humans with tetraplegia

Kim, S., Simeral, J. D., Hochberg, L. R., Donoghue, J. P., Black, M. J.

In 7th Asian Control Conference, ASCC09, pages: 988-993, Hong Kong, China, August 2009 (inproceedings)

pdf [BibTex]

ISocRob-MSL 2009 Team Description Paper for Middle Sized League

Lima, P., Santos, J., Estilita, J., Barbosa, M., Ahmad, A., Carreira, J.

13th Annual RoboCup International Symposium 2009, July 2009 (techreport)

Abstract
This paper describes the status of the ISocRob MSL robotic soccer team as required by the RoboCup 2009 qualification procedures. Since its previous participation in RoboCup, the ISocRob team has carried out significant developments in various topics, the most relevant of which are presented here. These include self-localization, 3D object tracking and cooperative object localization, motion control and relational behaviors. A brief description of the hardware of the ISocRob robots and of the software architecture adopted by the team is also included.

[BibTex]

Denoising Fluorescence Endoscopy: A Motion-Compensated Temporal Recursive Video Filter with an Optimal Minimum Mean Square Error Parametrization

Stehle, T., Wulff, J., Behrens, A., Gross, S., Aach, T.

Abstract
Fluorescence endoscopy is an emerging technique for the detection of bladder cancer. A marker substance is brought into the patient's bladder which accumulates at cancer tissue. If a suitable narrow band light source is used for illumination, a red fluorescence of the marker substance is observable. Because of the low fluorescence photon count and because of the narrow band light source, only a small amount of light is detected by the camera's CCD sensor. This, in turn, leads to strong noise in the recorded video sequence. To overcome this problem, we apply a temporal recursive filter to the video sequence. The derivation of a filter function is presented, which leads to an optimal filter in the minimum mean square error sense. The algorithm is implemented as a plug-in for the real-time capable clinical demonstrator platform RealTimeFrame and it can process color videos with a resolution of 768×576 pixels at 50 frames per second.

pdf link (url) DOI [BibTex]
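A minimal fixed-weight version of such a temporal recursive filter, without motion compensation and without the paper's MMSE-optimal parametrization, looks like this; the static scene, noise level, and blend weight are illustrative.

```python
import numpy as np

# Synthetic static scene corrupted by strong per-frame noise.
rng = np.random.default_rng(3)
clean = np.full((50, 8, 8), 100.0)
noisy = clean + rng.normal(scale=10.0, size=clean.shape)

# Temporal recursive filter: blend the current noisy frame with the
# previously filtered frame. A fixed alpha stands in for the derived
# MMSE-optimal weighting.
alpha = 0.2
filtered = np.empty_like(noisy)
filtered[0] = noisy[0]
for t in range(1, len(noisy)):
    filtered[t] = (1 - alpha) * filtered[t - 1] + alpha * noisy[t]

noise_in = np.std(noisy[-1] - clean[-1])
noise_out = np.std(filtered[-1] - clean[-1])
```

For static content the steady-state noise variance shrinks by roughly alpha / (2 - alpha); motion compensation (as in the paper) is what keeps this averaging valid when the scene moves.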


Fields of Experts

Roth, S., Black, M. J.

International Journal of Computer Vision (IJCV), 82(2):205-229, April 2009 (article)

Abstract
We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach provides a practical method for learning high-order Markov random field (MRF) models with potential functions that extend over large pixel neighborhoods. These clique potentials are modeled using the Product-of-Experts framework that uses non-linear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field-of-Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with specialized techniques.

pdf pdf from publisher [BibTex]
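The form of the prior energy can be sketched with two fixed derivative filters and Student-t-style experts. In the actual model both the filters and the expert weights are learned from data; the hand-picked choices below only show the shape of the energy.

```python
import numpy as np

def foe_energy(img, alpha=(1.0, 1.0)):
    """Sketch of a Fields-of-Experts-style energy:
    E(x) = sum_i alpha_i * sum_cliques log(1 + 0.5 * r_i(x)^2),
    with r_i the responses of (here, fixed derivative) filters."""
    dx = img[:, 1:] - img[:, :-1]   # horizontal filter responses
    dy = img[1:, :] - img[:-1, :]   # vertical filter responses
    e = 0.0
    for a, r in zip(alpha, (dx, dy)):
        e += a * np.sum(np.log(1 + 0.5 * r ** 2))
    return e

smooth = np.ones((8, 8))
rng = np.random.default_rng(4)
noisy = smooth + rng.normal(scale=1.0, size=smooth.shape)
```

Used as a prior, this energy scores smooth images as more probable than noisy ones, which is what makes gradient-based denoising and inpainting with it possible.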

Classification of colon polyps in NBI endoscopy using vascularization features

Stehle, T., Auer, R., Gross, S., Behrens, A., Wulff, J., Aach, T., Winograd, R., Trautwein, C., Tischendorf, J.

In Medical Imaging 2009: Computer-Aided Diagnosis, 7260, (Editors: N. Karssemeijer and M. L. Giger), SPIE, February 2009 (inproceedings)

Abstract
The evolution of colon cancer starts with colon polyps. There are two different types of colon polyps, namely hyperplasias and adenomas. Hyperplasias are benign polyps which are known not to evolve into cancer and, therefore, do not need to be removed. By contrast, adenomas have a strong tendency to become malignant. Therefore, they have to be removed immediately via polypectomy. For this reason, a method to differentiate reliably adenomas from hyperplasias during a preventive medical endoscopy of the colon (colonoscopy) is highly desirable. A recent study has shown that it is possible to distinguish both types of polyps visually by means of their vascularization. Adenomas exhibit a large amount of blood vessel capillaries on their surface whereas hyperplasias show only few of them. In this paper, we show the feasibility of computer-based classification of colon polyps using vascularization features. The proposed classification algorithm consists of several steps: For the critical part of vessel segmentation, we implemented and compared two segmentation algorithms. After a skeletonization of the detected blood vessel candidates, we used the results as seed points for the Fast Marching algorithm which is used to segment the whole vessel lumen. Subsequently, features are computed from this segmentation which are then used to classify the polyps. In leave-one-out tests on our polyp database (56 polyps), we achieve a correct classification rate of approximately 90%.

DOI [BibTex]

One-shot scanning using De Bruijn spaced grids

Ulusoy, A., Calakli, F., Taubin, G.

In Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on, pages: 1786-1792, IEEE, 2009 (inproceedings)

Abstract
In this paper we present a new one-shot method to reconstruct the shape of dynamic 3D objects and scenes based on active illumination. In common with other related prior-art methods, a static grid pattern is projected onto the scene, a video sequence of the illuminated scene is captured, a shape estimate is produced independently for each video frame, and the one-shot property is realized at the expense of space resolution. The main challenge in grid-based one-shot methods is to engineer the pattern and algorithms so that the correspondence between pattern grid points and their images can be established very fast and without uncertainty. We present an efficient one-shot method which exploits simple geometric constraints to solve the correspondence problem. We also introduce De Bruijn spaced grids, a novel grid pattern, and show with strong empirical data that the resulting scheme is much more robust compared to those based on uniform spaced grids.

pdf link (url) DOI [BibTex]
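The combinatorial ingredient in the title can be made concrete: a De Bruijn sequence B(k, n) contains every length-n string over a k-ary alphabet exactly once (cyclically), which is what makes local stripe patterns of the projected grid uniquely identifiable. Below is the standard FKM (Lyndon-word concatenation) construction, independent of the paper's code.

```python
def de_bruijn(k, n):
    """Standard FKM construction of a De Bruijn sequence B(k, n)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])  # append the Lyndon word just built
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

s = de_bruijn(2, 3)  # length 2**3 = 8; every 3-bit window occurs once cyclically
```

Spacing grid lines according to such a sequence lets the correspondence between pattern grid points and their images be resolved from a small local neighborhood.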

An introduction to Kernel Learning Algorithms

Gehler, P., Schölkopf, B.

In Kernel Methods for Remote Sensing Data Analysis, pages: 25-48, 2, (Editors: Gustavo Camps-Valls and Lorenzo Bruzzone), Wiley, New York, NY, USA, 2009 (inbook)

Abstract
Kernel learning algorithms are currently becoming a standard tool in the area of machine learning and pattern recognition. In this chapter we review the fundamental theory of kernel learning. As the basic building block we introduce the kernel function, which provides an elegant and general way to compare possibly very complex objects. We then review the concept of a reproducing kernel Hilbert space and state the representer theorem. Finally we give an overview of the most prominent algorithms, which are support vector classification and regression, Gaussian Processes and kernel principal analysis. With multiple kernel learning and structured output prediction we also introduce some more recent advancements in the field.

link (url) DOI [BibTex]
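The basic building block the chapter reviews, a kernel function inducing a Gram matrix, can be sketched via kernel ridge regression, a close relative of the support vector regression and Gaussian-process methods it covers. The data and hyperparameters below are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel Gram matrix between row-sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy 1-D regression problem: learn sin(x) from 40 noisy-free samples.
rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0])

# Kernel ridge regression: solve (K + lam*I) alpha = y in the dual.
lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.array([[0.5]])
y_pred = rbf_kernel(X_test, X) @ alpha  # should land near sin(0.5)
```

Everything specific to the data enters only through kernel evaluations, which is the "elegant and general way to compare possibly very complex objects" the chapter emphasizes.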

Estimating human shape and pose from a single image

Guan, P., Weiss, A., Balan, A., Black, M. J.

In Int. Conf. on Computer Vision, ICCV, pages: 1381-1388, 2009 (inproceedings)

pdf video - mov 25MB video - mp4 10MB YouTube Project Page [BibTex]

On feature combination for multiclass object classification

Gehler, P., Nowozin, S.

In Proceedings of the Twelfth IEEE International Conference on Computer Vision, pages: 221-228, 2009, oral presentation (inproceedings)

project page, code, data GoogleScholar pdf DOI [BibTex]

Evaluating the potential of primary motor and premotor cortex for multidimensional neuroprosthetic control of complete reaching and grasping actions

Vargas-Irwin, C. E., Yadollahpour, P., Shakhnarovich, G., Black, M. J., Donoghue, J. P.

2009 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2009, Online (conference)

[BibTex]

Segmentation, Ordering and Multi-object Tracking Using Graphical Models

Wang, C., Gorce, M. D. L., Paragios, N.

In IEEE International Conference on Computer Vision (ICCV), 2009 (inproceedings)

pdf [BibTex]

Visual Object Discovery

Sinha, P., Balas, B., Ostrovsky, Y., Wulff, J.

In Object Categorization: Computer and Human Vision Perspectives, pages: 301-323, (Editors: S. J. Dickinson, A. Leonardis, B. Schiele, M.J. Tarr), Cambridge University Press, 2009 (inbook)

link (url) [BibTex]

Modeling and Evaluation of Human-to-Robot Mapping of Grasps

Romero, J., Kjellström, H., Kragic, D.

In International Conference on Advanced Robotics (ICAR), pages: 1-6, 2009 (inproceedings)

Pdf [BibTex]

An additive latent feature model for transparent object recognition

Fritz, M., Black, M., Bradski, G., Karayev, S., Darrell, T.

In Advances in Neural Information Processing Systems 22, NIPS, pages: 558-566, MIT Press, 2009 (inproceedings)

pdf slides [BibTex]

Automatic recognition of rodent behavior: A tool for systematic phenotypic analysis

Serre, T.*, Jhuang, H.*, Garrote, E., Poggio, T., Steele, A.

CBCL Paper #283 / MIT-CSAIL-TR #2009-052, MIT, 2009 (techreport)

pdf [BibTex]

Let the kernel figure it out; Principled learning of pre-processing for kernel classifiers

Gehler, P., Nowozin, S.

In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2836-2843, IEEE Computer Society, 2009 (inproceedings)

doi project page pdf [BibTex]

Monocular Real-Time 3D Articulated Hand Pose Estimation

Romero, J., Kjellström, H., Kragic, D.

In IEEE-RAS International Conference on Humanoid Robots, pages: 87-92, 2009 (inproceedings)

Pdf [BibTex]

Grasp Recognition and Mapping on Humanoid Robots

Do, M., Romero, J., Kjellström, H., Azad, P., Asfour, T., Kragic, D., Dillmann, R.

In IEEE-RAS International Conference on Humanoid Robots, pages: 465-471, 2009 (inproceedings)

Pdf Video [BibTex]

4D Cardiac Segmentation of the Epicardium and Left Ventricle

Pons-Moll, G., Tadmor, G., MacLeod, R. S., Rosenhahn, B., Brooks, D. H.

In World Congress of Medical Physics and Biomedical Engineering (WC), 2009 (inproceedings)

[BibTex]

Geometric Potential Force for the Deformable Model

Yeo, S. Y., Xie, X., Sazonov, I., Nithiarasu, P.

In The 20th British Machine Vision Conference, pages: 1-11, 2009 (inproceedings)

Abstract
We propose a new external force field for deformable models which can be conveniently generalized to high dimensions. The external force field is based on hypothesized interactions between the relative geometries of the deformable model and image gradients. The evolution of the deformable model is solved using the level set method. The dynamic interaction forces between the geometries can greatly improve the deformable model performance in acquiring complex geometries and highly concave boundaries, and in dealing with weak image edges. The new deformable model can handle arbitrary cross-boundary initializations. Here, we show that the proposed method achieves significant improvements when compared against existing state-of-the-art techniques.

[BibTex]

Left Ventricular Regional Wall Curvedness and Wall Stress in Patients with Ischemic Dilated Cardiomyopathy

Zhong, L., Su, Y., Yeo, S. Y., Tan, R. S., Ghista, D., Kassab, G.

American Journal of Physiology – Heart and Circulatory Physiology, 296(3):H573-84, 2009 (article)

Abstract
Geometric remodeling of the left ventricle (LV) after myocardial infarction is associated with changes in myocardial wall stress. The objective of this study was to determine the regional curvatures and wall stress based on three-dimensional (3-D) reconstructions of the LV using MRI. Ten patients with ischemic dilated cardiomyopathy (IDCM) and 10 normal subjects underwent MRI scan. The IDCM patients also underwent delayed gadolinium-enhancement imaging to delineate the extent of myocardial infarct. Regional curvedness, local radii of curvature, and wall thickness were calculated. The percent curvedness change between end diastole and end systole was also calculated. In the normal heart, a short- and long-axis two-dimensional analysis showed a 41 +/- 11% and 45 +/- 12% increase of the mean of peak systolic wall stress between basal and apical sections, respectively. However, 3-D analysis showed no significant difference in peak systolic wall stress from basal and apical sections (P = 0.298, ANOVA). LV shape differed between IDCM patients and normal subjects in several ways: LV shape was more spherical (sphericity index = 0.62 +/- 0.08 vs. 0.52 +/- 0.06, P < 0.05), curvedness at end diastole (mean for 16 segments = 0.034 +/- 0.0056 vs. 0.040 +/- 0.0071 mm(-1), P < 0.001) and end systole (mean for 16 segments = 0.037 +/- 0.0068 vs. 0.067 +/- 0.020 mm(-1), P < 0.001) was affected by infarction, and peak systolic wall stress was significantly increased at each segment in IDCM patients. The 3-D quantification of regional wall stress by cardiac MRI provides more precise evaluation of cardiac mechanics. Identification of regional curvedness and wall stresses helps delineate the mechanisms of LV remodeling in IDCM and may help guide therapeutic LV restoration.

[BibTex]

Polyp Segmentation in NBI Colonoscopy

Gross, S., Kennel, M., Stehle, T., Wulff, J., Tischendorf, J., Trautwein, C., Aach, T.

Abstract
Endoscopic screening of the colon (colonoscopy) is performed to prevent cancer and to support therapy. During intervention, colon polyps are located, inspected and, if need be, removed by the investigator. We propose a segmentation algorithm as a part of an automatic polyp classification system for colonoscopic Narrow-Band Imaging (NBI) images. Our approach includes multi-scale filtering for noise reduction, suppression of small blood vessels, and enhancement of major edges. Results of the subsequent edge detection are compared to a set of elliptic templates and evaluated. We validated our algorithm on our polyp database with images acquired during routine colonoscopic examinations. The presented results show the reliable segmentation performance of our method and its robustness to image variations.

link (url) DOI [BibTex]

Computational mechanisms for the recognition of time sequences of images in the visual cortex

Tan, C., Jhuang, H., Singer, J., Serre, T., Sheinberg, D., Poggio, T.

Society for Neuroscience, 2009 (conference)

pdf [BibTex]

Interactive Inverse Kinematics for Monocular Motion Estimation

Engell-Norregaard, M., Hauberg, S., Lapuyade, J., Erleben, K., Pedersen, K. S.

In The 6th Workshop on Virtual Reality Interaction and Physical Simulation (VRIPHYS), 2009 (inproceedings)

Conference site Paper site [BibTex]

A Comprehensive Grasp Taxonomy

Feix, T., Pawlik, R., Schmiedmayer, H., Romero, J., Kragic, D.

In Robotics: Science and Systems Workshop on Understanding the Human Hand for Advancing Robotic Manipulation, 2009 (inproceedings)

Pdf [BibTex]

Population coding of ground truth motion in natural scenes in the early visual system

Stanley, G., Black, M. J., Lewis, J., Desbordes, G., Jin, J., Alonso, J.

COSYNE, 2009 (conference)

[BibTex]

Level Set Based Automatic Segmentation of Human Aorta

Yeo, S. Y., Xie, X., Sazonov, I., Nithiarasu, P.

In International Conference on Computational & Mathematical Biomedical Engineering, pages: 242-245, 2009 (inproceedings)

[BibTex]

A Curvature-Based Approach for Left Ventricular Shape Analysis from Cardiac Magnetic Resonance Imaging

Yeo, S. Y., Zhong, L., Su, Y., Tan, R. S., Ghista, D.

Medical & Biological Engineering & Computing, 47(3):313-322, 2009 (article)

Abstract
It is believed that left ventricular (LV) regional shape is indicative of LV regional function, and cardiac pathologies are often associated with regional alterations in ventricular shape. In this article, we present a set of procedures for evaluating regional LV surface shape from anatomically accurate models reconstructed from cardiac magnetic resonance (MR) images. LV surface curvatures are computed using a local surface fitting method, which enables us to assess regional LV shape and its variation. Comparisons are made between normal and diseased hearts. It is illustrated that LV surface curvatures at different regions of the normal heart are higher than those of the diseased heart. Also, the normal heart experiences a larger change in regional curvedness during contraction than the diseased heart. It is believed that with a wide range of datasets being evaluated, this approach will provide a new and efficient way of quantifying LV regional function.

link (url) [BibTex]

In Defense of Orthonormality Constraints for Nonrigid Structure from Motion

Akhter, I., Sheikh, Y., Khan, S.

In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages: 2447-2453, 2009 (inproceedings)

Abstract
In factorization approaches to nonrigid structure from motion, the 3D shape of a deforming object is usually modeled as a linear combination of a small number of basis shapes. The original approach to simultaneously estimate the shape basis and nonrigid structure exploited orthonormality constraints for metric rectification. Recently, it has been asserted that structure recovery through orthonormality constraints alone is inherently ambiguous and cannot result in a unique solution. This assertion has been accepted as conventional wisdom and is the justification for many remedial heuristics in the literature. Our key contribution is to prove that orthonormality constraints are in fact sufficient to recover the 3D structure from image observations alone. We characterize the true nature of the ambiguity in using orthonormality constraints for the shape basis and show that it has no impact on structure reconstruction. We conclude from our experimentation that the primary challenge in using shape basis for nonrigid structure from motion is the difficulty in the optimization problem rather than the ambiguity in orthonormality constraints.

pdf [BibTex]

Dynamic distortion correction for endoscopy systems with exchangeable optics

Stehle, T., Hennes, M., Gross, S., Behrens, A., Wulff, J., Aach, T.

In Bildverarbeitung für die Medizin 2009, pages: 142-146, Springer Berlin Heidelberg, 2009 (inproceedings)

Abstract
Endoscopic images are strongly affected by lens distortion caused by the use of wide-angle lenses. In the case of endoscopy systems with exchangeable optics, e.g. in bladder endoscopy or sinus endoscopy, the camera sensor and the optics do not form a rigid system but can be shifted and rotated with respect to each other during an examination. This flexibility has a major impact on the location of the distortion centre, as it is moved along with the optics. In this paper, we describe an algorithm for the dynamic correction of lens distortion in cystoscopy which is based on a one-time calibration. For the compensation, we combine a conventional static method for distortion correction with an algorithm to detect the position and the orientation of the elliptic field of view. This enables us to estimate the position of the distortion centre according to the relative movement of camera and optics. Distortion correction for arbitrary rotation angles and shifts thus becomes possible without performing static calibrations for every possible combination of shifts and angles beforehand.

link (url) DOI [BibTex]

Three Dimensional Monocular Human Motion Analysis in End-Effector Space

Hauberg, S., Lapuyade, J., Engell-Norregaard, M., Erleben, K., Pedersen, K. S.

In Energy Minimization Methods in Computer Vision and Pattern Recognition, 5681, pages: 235-248, Lecture Notes in Computer Science, (Editors: Cremers, Daniel and Boykov, Yuri and Blake, Andrew and Schmidt, Frank), Springer Berlin Heidelberg, 2009 (inproceedings)

Publishers site Paper site PDF [BibTex]

Decoding visual motion from correlated firing of thalamic neurons

Stanley, G. B., Black, M. J., Desbordes, G., Jin, J., Wang, Y., Alonso, J.

2009 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2009 (conference)

[BibTex]
