2017


Detailed, accurate, human shape estimation from clothed 3D scan sequences

Zhang, C., Pujades, S., Black, M., Pons-Moll, G.

In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, Washington, DC, USA, July 2017, Spotlight (inproceedings)

Abstract
We address the problem of estimating human body shape from 3D scans over time. Reliable estimation of 3D body shape is necessary for many applications including virtual try-on, health monitoring, and avatar creation for virtual reality. Scanning bodies in minimal clothing, however, presents a practical barrier to these applications. We address this problem by estimating body shape under clothing from a sequence of 3D scans. Previous methods that have exploited statistical models of body shape produce overly smooth shapes lacking personalized details. In this paper we contribute a new approach to recover not only an approximate shape of the person, but also their detailed shape. Our approach allows the estimated shape to deviate from a parametric model to fit the 3D scans. We demonstrate the method using high quality 4D data as well as sequences of visual hulls extracted from multi-view images. We also make available a new high quality 4D dataset that enables quantitative evaluation. Our method outperforms the previous state of the art, both qualitatively and quantitatively.
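The key idea of letting the estimate deviate from a parametric model while staying close to it can be illustrated with a toy per-vertex objective (my own simplified sketch, not the authors' optimization; `refine_shape` and its closed form are assumptions for illustration):

```python
import numpy as np

def refine_shape(model_verts, scan_verts, lam=1.0):
    """Per-vertex refinement: let the estimate v deviate from the
    parametric model toward the scan, with a quadratic penalty keeping
    it near the model.  Minimizes, per vertex,
        ||v - scan||^2 + lam * ||v - model||^2
    which has the closed-form solution below."""
    return (scan_verts + lam * model_verts) / (1.0 + lam)
```

With a large `lam` the result collapses to the parametric model; with a small `lam` it follows the scan detail, mirroring the trade-off the abstract describes.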

arxiv_preprint video dataset pdf supplemental DOI Project Page [BibTex]



3D Menagerie: Modeling the 3D Shape and Pose of Animals

Zuffi, S., Kanazawa, A., Jacobs, D., Black, M. J.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pages: 5524-5532, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
There has been significant work on learning realistic, articulated, 3D models of the human body. In contrast, there are few such models of animals, despite many applications. The main challenge is that animals are much less cooperative than humans. The best human body models are learned from thousands of 3D scans of people in specific poses, which is infeasible with live animals. Consequently, we learn our model from a small set of 3D scans of toy figurines in arbitrary poses. We employ a novel part-based shape model to compute an initial registration to the scans. We then normalize their pose, learn a statistical shape model, and refine the registrations and the model together. In this way, we accurately align animal scans from different quadruped families with very different shapes and poses. With the registration to a common template we learn a shape space representing animals including lions, cats, dogs, horses, cows and hippos. Animal shapes can be sampled from the model, posed, animated, and fit to data. We demonstrate generalization by fitting it to images of real animals including species not seen in training.

pdf video Project Page [BibTex]



Optical Flow Estimation using a Spatial Pyramid Network

Ranjan, A., Black, M.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions; these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (< 1 pixel), a convolutional approach applied to pairs of warped images is appropriate. Third, unlike FlowNet, the learned convolution filters appear similar to classical spatio-temporal filters, giving insight into the method and how to improve it. Our results are more accurate than FlowNet on most standard benchmarks, suggesting a new direction of combining classical flow methods with deep learning.
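The coarse-to-fine scheme in the abstract (upsample the current flow, warp one image by it, predict a small residual update at each pyramid level) can be sketched as follows. This is an illustrative NumPy skeleton, not the authors' code; `update_fn` stands in for the trained per-level network, and the nearest-neighbour warping and pooling are simplifications:

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling (stand-in for pyramid construction)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample_flow(flow):
    # nearest-neighbour upsampling; flow vectors double with resolution
    return 2.0 * flow.repeat(2, axis=0).repeat(2, axis=1)

def warp(img, flow):
    # backward-warp img by flow (nearest neighbour for simplicity)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return img[y, x]

def spynet_like(im1, im2, update_fn, levels=3):
    # build image pyramids, finest level first
    p1, p2 = [im1], [im2]
    for _ in range(levels - 1):
        p1.append(downsample(p1[-1]))
        p2.append(downsample(p2[-1]))
    # coarse-to-fine: warp by the current flow, add a small residual update
    flow = np.zeros(p1[-1].shape + (2,))
    for l in reversed(range(levels)):
        if flow.shape[:2] != p1[l].shape:
            flow = upsample_flow(flow)
        warped = warp(p2[l], flow)
        flow = flow + update_fn(p1[l], warped, flow)  # one network per level
    return flow
```

Because each `update_fn` only ever sees residual motion left after warping, it needs to predict small (sub-pixel) updates, which is the point the abstract makes about per-level networks being small and convolutional.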

pdf SupMat project/code [BibTex]



Multiple People Tracking by Lifted Multicut and Person Re-identification

Tang, S., Andriluka, M., Andres, B., Schiele, B.

In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 3701-3710, IEEE Computer Society, Washington, DC, USA, July 2017 (inproceedings)

DOI Project Page [BibTex]



Video Propagation Networks

Jampani, V., Gadde, R., Gehler, P. V.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

pdf supplementary arXiv project page code Project Page [BibTex]



Generating Descriptions with Grounded and Co-Referenced People

Rohrbach, A., Rohrbach, M., Tang, S., Oh, S. J., Schiele, B.

In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 4196-4206, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

PDF DOI Project Page [BibTex]



Semantic Multi-view Stereo: Jointly Estimating Objects and Voxels

Ulusoy, A. O., Black, M. J., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Dense 3D reconstruction from RGB images is a highly ill-posed problem due to occlusions, textureless or reflective surfaces, as well as other challenges. We propose object-level shape priors to address these ambiguities. Towards this goal, we formulate a probabilistic model that integrates multi-view image evidence with 3D shape information from multiple objects. Inference in this model yields a dense 3D reconstruction of the scene as well as the existence and precise 3D pose of the objects in it. Our approach is able to recover fine details not captured in the input shapes while defaulting to the input models in occluded regions where image evidence is weak. Due to its probabilistic nature, the approach is able to cope with the approximate geometry of the 3D models as well as input shapes that are not present in the scene. We evaluate the approach quantitatively on several challenging indoor and outdoor datasets.

YouTube pdf suppmat Project Page [BibTex]



Deep representation learning for human motion prediction and classification

Bütepage, J., Black, M., Kragic, D., Kjellström, H.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though those methods use action-specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.

arXiv Project Page [BibTex]



Unite the People: Closing the Loop Between 3D and 2D Human Representations

Lassner, C., Romero, J., Kiefel, M., Bogo, F., Black, M. J., Gehler, P. V.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits “in-the-wild”. However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators at large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the-art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable at large scale. The data, code and models are available for research purposes.

arXiv project/code/data Project Page [BibTex]



System and method for simulating realistic clothing

Black, M. J., Guan, P.

June 2017, U.S. Patent 9,679,409 B2 (misc)

Abstract
Systems, methods, and computer-readable storage media for simulating realistic clothing. The system generates a clothing deformation model for a clothing type, wherein the clothing deformation model factors a change of clothing shape due to rigid limb rotation, pose-independent body shape, and pose-dependent deformations. Next, the system generates a custom-shaped garment for a given body by mapping, via the clothing deformation model, body shape parameters to clothing shape parameters. The system then automatically dresses the given body with the custom-shaped garment.

Google Patents pdf [BibTex]


Human Shape Estimation using Statistical Body Models

Loper, M. M.

University of Tübingen, May 2017 (thesis)

Abstract
Human body estimation methods transform real-world observations into predictions about human body state. These estimation methods benefit a variety of health, entertainment, clothing, and ergonomics applications. State may include pose, overall body shape, and appearance. Body state estimation is underconstrained by observations; ambiguity presents itself both in the form of missing data within observations, and also in the form of unknown correspondences between observations. We address this challenge with the use of a statistical body model: a data-driven virtual human. This helps resolve ambiguity in two ways. First, it fills in missing data, meaning that incomplete observations still result in complete shape estimates. Second, the model provides a statistically-motivated penalty for unlikely states, which enables more plausible body shape estimates. Body state inference requires more than a body model; we therefore build observation models whose output is compared with real observations. In this thesis, body state is estimated from three types of observations: 3D motion capture markers, depth and color images, and high-resolution 3D scans. In each case, a forward process is proposed which simulates observations. By comparing observations to the results of the forward process, state can be adjusted to minimize the difference between simulated and observed data. We use gradient-based methods because they are critical to the precise estimation of state with a large number of parameters. The contributions of this work include three parts. First, we propose a method for the estimation of body shape, nonrigid deformation, and pose from 3D markers. Second, we present a concise approach to differentiating through the rendering process, with application to body shape estimation. And finally, we present a statistical body model trained from human body scans, with state-of-the-art fidelity, good runtime performance, and compatibility with existing animation packages.

Official Version [BibTex]


Early Stopping Without a Validation Set

Mahsereci, M., Balles, L., Lassner, C., Hennig, P.

arXiv preprint arXiv:1703.09580, 2017 (article)

Abstract
Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization. To find a good point to halt the optimizer, a common practice is to split the dataset into a training and a smaller validation set to obtain an ongoing estimate of the generalization performance. In this paper we propose a novel early stopping criterion which is based on fast-to-compute, local statistics of the computed gradients and entirely removes the need for a held-out validation set. Our experiments show that this is a viable approach in the setting of least-squares and logistic regression as well as neural networks.
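The flavour of a validation-free, gradient-statistics stopping rule can be conveyed with a simplified signal-to-noise test: stop once the mini-batch gradient is statistically indistinguishable from zero. This is my own simplified variant for illustration, not the paper's exact evidence-based criterion:

```python
import numpy as np

def gradient_snr_stop(per_example_grads):
    """Noise-based early-stopping check on one mini-batch of gradients.

    per_example_grads: (S, D) array, S examples, D parameters.
    Returns True when the mean gradient is drowned by its own
    sampling noise, i.e. there is no reliable descent signal left.
    """
    S, _ = per_example_grads.shape
    g = per_example_grads.mean(axis=0)             # mini-batch gradient
    var = per_example_grads.var(axis=0, ddof=1)    # per-coordinate variance
    var = np.maximum(var, 1e-12)                   # numerical floor
    snr = S * g ** 2 / var                         # per-coordinate signal/noise
    return bool(snr.mean() < 1.0)
```

Everything needed is already computed during training (per-example gradients over the current batch), which is why such criteria remove the need for a held-out set.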

link (url) Project Page [BibTex]


Appealing Avatars from 3D Body Scans: Perceptual Effects of Stylization

Fleming, R., Mohler, B. J., Romero, J., Black, M. J., Breidt, M.

In Computer Vision, Imaging and Computer Graphics Theory and Applications: 11th International Joint Conference, VISIGRAPP 2016, Rome, Italy, February 27 – 29, 2016, Revised Selected Papers, pages: 175-196, Springer International Publishing, 2017 (inbook)

Abstract
Using styles derived from existing popular character designs, we present a novel automatic stylization technique for body shape and colour information based on a statistical 3D model of human bodies. We investigate whether such stylized body shapes result in increased perceived appeal with two different experiments: One focuses on body shape alone, the other investigates the additional role of surface colour and lighting. Our results consistently show that the most appealing avatar is a partially stylized one. Importantly, avatars with high stylization or no stylization at all were rated to have the least appeal. The inclusion of colour information and improvements to render quality had no significant effect on the overall perceived appeal of the avatars, and we observe that the body shape primarily drives the change in appeal ratings. For body scans with colour information, we found that a partially stylized avatar was perceived as most appealing.

publisher site pdf DOI [BibTex]



Learning to Filter Object Detections

Prokudin, S., Kappler, D., Nowozin, S., Gehler, P.

In Pattern Recognition: 39th German Conference, GCPR 2017, Basel, Switzerland, September 12–15, 2017, Proceedings, pages: 52-62, Springer International Publishing, Cham, 2017 (inbook)

Abstract
Most object detection systems consist of three stages. First, a set of individual hypotheses for object locations is generated using a proposal generating algorithm. Second, a classifier scores every generated hypothesis independently to obtain a multi-class prediction. Finally, all scored hypotheses are filtered via a non-differentiable and decoupled non-maximum suppression (NMS) post-processing step. In this paper, we propose a filtering network (FNet), a method which replaces NMS with a differentiable neural network that allows joint reasoning and re-scoring of the generated set of hypotheses per image. This formulation enables end-to-end training of the full object detection pipeline. First, we demonstrate that FNet, a feed-forward network architecture, is able to mimic NMS decisions, despite the sequential nature of NMS. We further analyze NMS failures and propose a loss formulation that is better aligned with the mean average precision (mAP) evaluation metric. We evaluate FNet on several standard detection datasets. Results surpass standard NMS on highly occluded settings of a synthetic overlapping MNIST dataset and show competitive behavior on PascalVOC2007 and KITTI detection benchmarks.
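The non-differentiable NMS step that FNet replaces is, in its standard greedy form, roughly the following (a generic textbook sketch, not the paper's code):

```python
import numpy as np

def iou(a, b):
    # intersection-over-union of two boxes given as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the
    highest-scoring box and drop every remaining box that overlaps
    it by more than iou_thresh."""
    order = np.argsort(scores)[::-1].tolist()
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

The hard `pop`/filter decisions are what make this step non-differentiable and decoupled from the classifier, which is the gap the abstract's FNet formulation targets.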

Paper link (url) DOI Project Page [BibTex]



Data-Driven Physics for Human Soft Tissue Animation

Kim, M., Pons-Moll, G., Pujades, S., Bang, S., Kim, J., Black, M. J., Lee, S.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4):54:1-54:12, 2017 (article)

Abstract
Data driven models of human poses and soft-tissue deformations can produce very realistic results, but they only model the visible surface of the human body and cannot create skin deformation due to interactions with the environment. Physical simulations can generalize to external forces, but their parameters are difficult to control. In this paper, we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based external layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft tissue layer consists of a tetrahedral mesh that is driven using the finite element method (FEM). Model parameters, namely the segmentation of the body into layers and the soft tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft tissue deformations. The learned two layer model is a realistic full-body avatar that generalizes to novel motions and external forces. Experiments show that the resulting avatars produce realistic results on held out sequences and react to external forces. Moreover, the model supports the retargeting of physical properties from one avatar when they share the same topology.

video paper link (url) Project Page [BibTex]



Learning Inference Models for Computer Vision

Jampani, V.

MPI for Intelligent Systems and University of Tübingen, 2017 (phdthesis)

Abstract
Computer vision can be understood as the ability to perform 'inference' on image data. Breakthroughs in computer vision technology are often marked by advances in inference techniques, as even the model design is often dictated by the complexity of inference in them. This thesis proposes learning based inference schemes and demonstrates applications in computer vision. We propose techniques for inference in both generative and discriminative computer vision models. Despite their intuitive appeal, the use of generative models in vision is hampered by the difficulty of posterior inference, which is often too complex or too slow to be practical. We propose techniques for improving inference in two widely used techniques: Markov Chain Monte Carlo (MCMC) sampling and message-passing inference. Our inference strategy is to learn separate discriminative models that assist Bayesian inference in a generative model. Experiments on a range of generative vision models show that the proposed techniques accelerate the inference process and/or converge to better solutions. A main complication in the design of discriminative models is the inclusion of prior knowledge in a principled way. For better inference in discriminative models, we propose techniques that modify the original model itself, as inference is simple evaluation of the model. We concentrate on convolutional neural network (CNN) models and propose a generalization of standard spatial convolutions, which are the basic building blocks of CNN architectures, to bilateral convolutions. First, we generalize the existing use of bilateral filters and then propose new neural network architectures with learnable bilateral filters, which we call `Bilateral Neural Networks'. We show how the bilateral filtering modules can be used for modifying existing CNN architectures for better image segmentation and propose a neural network approach for temporal information propagation in videos. 
Experiments demonstrate the potential of the proposed bilateral networks on a wide range of vision tasks and datasets. In summary, we propose learning based techniques for better inference in several computer vision models ranging from inverse graphics to freely parameterized neural networks. In generative vision models, our inference techniques alleviate some of the crucial hurdles in Bayesian posterior inference, paving new ways for the use of model based machine learning in vision. In discriminative CNN models, the proposed filter generalizations aid in the design of new neural network architectures that can handle sparse high-dimensional data as well as provide a way for incorporating prior knowledge into CNNs.

pdf [BibTex]



Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

(Best Paper, Eurographics 2017)

Marcard, T. V., Rosenhahn, B., Black, M., Pons-Moll, G.

Computer Graphics Forum 36(2), Proceedings of the 38th Annual Conference of the European Association for Computer Graphics (Eurographics), pages: 349-360, 2017 (article)

Abstract
We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.

video pdf Project Page [BibTex]



Efficient 2D and 3D Facade Segmentation using Auto-Context

Gadde, R., Jampani, V., Marlet, R., Gehler, P.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017 (article)

Abstract
This paper introduces a fast and efficient segmentation technique for 2D images and 3D point clouds of building facades. Facades of buildings are highly structured and consequently most methods that have been proposed for this problem aim to make use of this strong prior information. Contrary to most prior work, we are describing a system that is almost domain independent and consists of standard segmentation methods. We train a sequence of boosted decision trees using auto-context features. This is learned using stacked generalization. We find that this technique performs better than, or comparably to, all previously published methods, and we present empirical results on all available 2D and 3D facade benchmark datasets. The proposed method is simple to implement, easy to extend, and very efficient at test-time inference.

arXiv Project Page [BibTex]



ClothCap: Seamless 4D Clothing Capture and Retargeting

Pons-Moll, G., Pujades, S., Hu, S., Black, M.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4):73:1-73:15, ACM, New York, NY, USA, 2017, Two first authors contributed equally (article)

Abstract
Designing and simulating realistic clothing is challenging and, while several methods have addressed the capture of clothing from 3D scans, previous methods have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the naked body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. The model allows us to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes. ClothCap provides a step towards virtual try-on with a technology for capturing, modeling, and analyzing clothing in motion.

video project_page paper link (url) DOI Project Page [BibTex]



Towards Accurate Marker-less Human Shape and Pose Estimation over Time

Huang, Y., Bogo, F., Lassner, C., Kanazawa, A., Gehler, P. V., Romero, J., Akhter, I., Black, M. J.

In International Conference on 3D Vision (3DV), pages: 421-430, 2017 (inproceedings)

Abstract
Existing markerless motion capture methods often assume known backgrounds, static cameras, and sequence specific motion priors, limiting their application scenarios. Here we present a fully automatic method that, given multiview videos, estimates 3D human pose and body shape. We take the recently proposed SMPLify method [12] as the base method and extend it in several ways. First, we fit a 3D human body model to 2D features detected in multi-view images. Second, we use a CNN method to segment the person in each image and fit the 3D body model to the contours, further improving accuracy. Third, we utilize a generic and robust DCT temporal prior to handle the left and right side swapping issue sometimes introduced by the 2D pose estimator. Validation on standard benchmarks shows our results are comparable to the state of the art and also provide a realistic 3D shape avatar. We also demonstrate accurate results on HumanEva and on challenging monocular sequences of dancing from YouTube.
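A DCT temporal prior of the kind mentioned in the abstract amounts to keeping only the lowest temporal frequencies of each trajectory, which suppresses frame-to-frame glitches such as left/right swaps. A minimal sketch (my own illustration, not the paper's implementation):

```python
import numpy as np

def dct_basis(T, n_freq):
    # first n_freq orthonormal DCT-II basis vectors over T frames
    k = np.arange(n_freq)[:, None]
    t = np.arange(T)[None, :]
    B = np.cos(np.pi * k * (2 * t + 1) / (2 * T))
    B[0] *= 1.0 / np.sqrt(2.0)
    return B * np.sqrt(2.0 / T)

def dct_smooth(traj, n_freq=4):
    """Project a trajectory (T frames x D coordinates) onto the n_freq
    lowest DCT frequencies: a simple low-pass temporal prior."""
    B = dct_basis(traj.shape[0], n_freq)   # (n_freq, T)
    return B.T @ (B @ traj)
```

Fitting pose trajectories in this low-frequency subspace, rather than frame by frame, is what makes such a prior robust to isolated 2D detection errors.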

Code pdf DOI Project Page [BibTex]


Decentralized Simultaneous Multi-target Exploration using a Connected Network of Multiple Robots

Nestmeyer, T., Robuffo Giordano, P., Bülthoff, H. H., Franchi, A.

In Autonomous Robots, pages: 989-1011, 2017 (incollection)

[BibTex]



Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects

Tzionas, D.

University of Bonn, 2017 (phdthesis)

Abstract
Hand motion capture with an RGB-D sensor gained recently a lot of research attention, however, even most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In case of unknown object shape there are existing 3D reconstruction methods that capitalize on distinctive geometric or texture features. These methods though fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow.

Thesis link (url) Project Page [BibTex]

2010


Visibility Maps for Improving Seam Carving

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In Media Retargeting Workshop, European Conference on Computer Vision (ECCV), September 2010 (inproceedings)

webpage pdf slides supplementary code [BibTex]



A 2D human body model dressed in eigen clothing

Guan, P., Freifeld, O., Black, M. J.

In European Conf. on Computer Vision, (ECCV), pages: 285-298, Springer-Verlag, September 2010 (inproceedings)

Abstract
Detection, tracking, segmentation and pose estimation of people in monocular images are widely studied. Two-dimensional models of the human body are extensively used, however, they are typically fairly crude, representing the body either as a rough outline or in terms of articulated geometric primitives. We describe a new 2D model of the human body contour that combines an underlying naked body with a low-dimensional clothing model. The naked body is represented as a Contour Person that can take on a wide variety of poses and body shapes. Clothing is represented as a deformation from the underlying body contour. This deformation is learned from training examples using principal component analysis to produce eigen clothing. We find that the statistics of clothing deformations are skewed and we model the a priori probability of these deformations using a Beta distribution. The resulting generative model captures realistic human forms in monocular images and is used to infer 2D body shape and pose under clothing. We also use the coefficients of the eigen clothing to recognize different categories of clothing on dressed people. The method is evaluated quantitatively on synthetic and real images and achieves better accuracy than previous methods for estimating body shape under clothing.
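The "eigen clothing" construction described above (clothing as a deformation from the body contour, with a PCA basis learned from examples) can be sketched in a few lines. This is an illustrative toy version; the function names, flattened-contour representation, and SVD-based PCA are my own simplifications, not the paper's code:

```python
import numpy as np

def learn_eigen_clothing(body_contours, clothed_contours, n_components=2):
    """PCA of clothing deformations.

    Contours are (N, 2K) arrays of K stacked 2D contour points per
    example; the deformation is the offset of the clothed contour
    from the underlying body contour."""
    deform = clothed_contours - body_contours
    mean = deform.mean(axis=0)
    centered = deform - mean
    # principal directions via SVD of the centered data matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]          # eigen-clothing directions
    return mean, basis

def dress(body_contour, mean, basis, coeffs):
    # body plus mean deformation plus a linear combination of eigen directions
    return body_contour + mean + coeffs @ basis
```

Inference then searches over body shape/pose parameters and the low-dimensional `coeffs`, and the abstract's clothing-category recognition operates on those same coefficients.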

pdf data poster Project Page [BibTex]



Analyzing and Evaluating Markerless Motion Tracking Using Inertial Sensors

Baak, A., Helten, T., Müller, M., Pons-Moll, G., Rosenhahn, B., Seidel, H.

In European Conference on Computer Vision (ECCV Workshops), September 2010 (inproceedings)

pdf [BibTex]



Trainable, Vision-Based Automated Home Cage Behavioral Phenotyping

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

In Measuring Behavior, August 2010 (inproceedings)

pdf [BibTex]



Decoding complete reach and grasp actions from local primary motor cortex populations

(Featured in Nature’s Research Highlights (Nature, Vol 466, 29 July 2010))

Vargas-Irwin, C. E., Shakhnarovich, G., Yadollahpour, P., Mislow, J., Black, M. J., Donoghue, J. P.

J. of Neuroscience, 30(29):9659-9669, July 2010 (article)

pdf pdf from publisher Movie 1 Movie 2 Project Page [BibTex]



Multisensor-Fusion for 3D Full-Body Human Motion Capture

Pons-Moll, G., Baak, A., Helten, T., Müller, M., Seidel, H., Rosenhahn, B.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010 (inproceedings)

project page pdf [BibTex]



Coded exposure imaging for projective motion deblurring

Tai, Y., Kong, N., Lin, S., Shin, S. Y.

In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2408-2415, June 2010 (inproceedings)

Abstract
We propose a method for deblurring spatially variant object motion. A principal challenge of this problem is estimating the point spread function (PSF) of the spatially variant blur. Based on a projective motion blur model, we present a blur estimation technique that jointly utilizes a coded exposure camera and simple user interactions to recover the PSF. With this spatially variant PSF, objects that exhibit projective motion can be effectively deblurred. We validate this method on several challenging image examples.

Publisher site [BibTex]



Tracking people interacting with objects

Kjellstrom, H., Kragic, D., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages: 747-754, June 2010 (inproceedings)

pdf Video [BibTex]



Contour people: A parameterized model of 2D articulated human shape

Freifeld, O., Weiss, A., Zuffi, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, (CVPR), pages: 639-646, IEEE, June 2010 (inproceedings)

pdf slides video of CVPR talk Project Page [BibTex]



Secrets of optical flow estimation and their principles

Sun, D., Roth, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 2432-2439, IEEE, June 2010 (inproceedings)

pdf Matlab code code copyright notice [BibTex]



Guest editorial: State of the art in image- and video-based human pose and motion estimation

Sigal, L., Black, M. J.

International Journal of Computer Vision, 87(1):1-3, March 2010 (article)

pdf from publisher [BibTex]



HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion

Sigal, L., Balan, A., Black, M. J.

International Journal of Computer Vision, 87(1):4-27, Springer Netherlands, March 2010 (article)

Abstract
While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.

pdf pdf from publisher [BibTex]



Model-Based Real-Time Motion Estimation in Fluorescence Endoscopy (Modellbasierte Echtzeit-Bewegungsschätzung in der Fluoreszenzendoskopie)

Stehle, T., Wulff, J., Behrens, A., Gross, S., Aach, T.

In Bildverarbeitung für die Medizin, 574, pages: 435-439, CEUR Workshop Proceedings, 2010 (inproceedings)

pdf [BibTex]



Robust one-shot 3D scanning using loopy belief propagation

Ulusoy, A., Calakli, F., Taubin, G.

In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages: 15-22, IEEE, 2010 (inproceedings)

Abstract
A structured-light technique can greatly simplify the problem of shape recovery from images. There are currently two main research challenges in the design of such techniques. One is handling complicated scenes involving texture, occlusions, shadows, sharp discontinuities, and in some cases even dynamic change; the other is speeding up the acquisition process by requiring a small number of images and computationally less demanding algorithms. This paper presents a “one-shot” variant of such techniques to tackle the aforementioned challenges. It works by projecting a static grid pattern onto the scene and identifying the correspondence between grid stripes and the camera image. The correspondence problem is formulated using a novel graphical model and solved efficiently using loopy belief propagation. Unlike prior approaches, the proposed approach uses non-deterministic geometric constraints and can thereby handle spurious connections of stripe images. The effectiveness of the proposed approach is verified on a variety of complicated real scenes.

pdf link (url) DOI [BibTex]



Scene Carving: Scene Consistent Image Retargeting

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In European Conference on Computer Vision (ECCV), 2010 (inproceedings)

webpage+code pdf supplementary poster [BibTex]



Epione: An Innovative Pain Management System Using Facial Expression Analysis, Biofeedback and Augmented Reality-Based Distraction

Georgoulis, S., Eleftheriadis, S., Tzionas, D., Vrenas, K., Petrantonakis, P., Hadjileontiadis, L. J.

In Proceedings of the 2010 International Conference on Intelligent Networking and Collaborative Systems, pages: 259-266, INCOS ’10, IEEE Computer Society, Washington, DC, USA, 2010 (inproceedings)

Abstract
An innovative pain management system, namely Epione, is presented here. Epione deals with three main types of pain, i.e., acute pain, chronic pain, and phantom limb pain. In particular, by using facial expression analysis, Epione forms a dynamic pain meter, which then triggers biofeedback and augmented reality-based distraction scenarios, in an effort to maximize the patient's pain relief. This unique combination makes Epione not only a novel pain management approach, but also a means of understanding and integrating the needs of the whole community involved, i.e., patients and physicians, in a joint attempt to ease their suffering, provide efficient monitoring, and contribute to a better quality of life.

Paper Project Page DOI [BibTex]



Phantom Limb Pain Management Using Facial Expression Analysis, Biofeedback and Augmented Reality Interfacing

Tzionas, D., Vrenas, K., Eleftheriadis, S., Georgoulis, S., Petrantonakis, P. C., Hadjileontiadis, L. J.

In Proceedings of the 3rd International Conference on Software Development for Enhancing Accessibility and Fighting Info-Exclusion, pages: 23-30, DSAI ’10, UTAD - Universidade de Trás-os-Montes e Alto Douro, 2010 (inproceedings)

Abstract
Post-amputation sensation often translates to the feeling of severe pain in the missing limb, referred to as phantom limb pain (PLP). A clear and rational treatment regimen is difficult to establish as long as the underlying pathophysiology is not fully known. In this work, an innovative PLP management system is presented as a module of a holistic computer-mediated pain management environment, namely Epione. The proposed Epione-PLP scheme is structured upon advanced facial expression analysis, used to form a dynamic pain meter, which, in turn, triggers biofeedback and augmented reality-based PLP distraction scenarios. The latter incorporate a model of the missing limb for visualization, in an effort to give the amputee a feeling of its existence and control and thus maximize his/her PLP relief. The novel Epione-PLP management approach integrates edge technology within the context of personalized health; it could be used to ease PLP patients' suffering, provide efficient progress monitoring, and contribute to an increased quality of life.

Paper Project Page link (url) [BibTex]



Automated Home-Cage Behavioral Phenotyping of Mice

Jhuang, H., Garrote, E., Mutch, J., Poggio, T., Steele, A., Serre, T.

Nature Communications, 2010 (article)

software, demo pdf [BibTex]



An automated action initiation system reveals behavioral deficits in MyosinVa deficient mice

Pandian, S., Edelman, N., Jhuang, H., Serre, T., Poggio, T., Constantine-Paton, M.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]



Dense Marker-less Three Dimensional Motion Capture

Søren Hauberg, Bente Rona Jensen, Morten Engell-Nørregaard, Kenny Erleben, Kim S. Pedersen

In Virtual Vistas; Eleventh International Symposium on the 3D Analysis of Human Movement, 2010 (inproceedings)

Conference site [BibTex]



ImageFlow: Streaming Image Search

Jampani, V., Ramos, G., Drucker, S.

MSR-TR-2010-148, Microsoft Research, Redmond, 2010 (techreport)

Abstract
Traditional grid and list representations of image search results are the dominant interaction paradigms that users face on a daily basis, yet it is unclear that such paradigms are well-suited for experiences where the user's task is to browse images for leisure, to discover new information, or to seek particular images to represent ideas. We introduce ImageFlow, a novel image search user interface that explores an alternative to the traditional presentation of image search results. ImageFlow presents image results on a canvas where we map semantic features (e.g., relevance, related queries) to the canvas's spatial dimensions (e.g., x, y, z) in a way that allows for several levels of engagement – from passively viewing a stream of images, to seamlessly navigating through the semantic space and actively collecting images for sharing and reuse. We have implemented our system as a fully functioning prototype, and we report on promising, preliminary usage results.

url pdf link (url) [BibTex]



Stick It! Articulated Tracking using Spatial Rigid Object Priors

Søren Hauberg, Kim S. Pedersen

In Computer Vision – ACCV 2010, 6494, pages: 758-769, Lecture Notes in Computer Science, (Editors: Kimmel, Ron and Klette, Reinhard and Sugimoto, Akihiro), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]



Gaussian-like Spatial Priors for Articulated Tracking

Søren Hauberg, Stefan Sommer, Kim S. Pedersen

In Computer Vision – ECCV 2010, 6311, pages: 425-437, Lecture Notes in Computer Science, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]



Reach to grasp actions in rhesus macaques: Dimensionality reduction of hand, wrist, and upper arm motor subspaces using principal component analysis

Vargas-Irwin, C., Franquemont, L., Shakhnarovich, G., Yadollahpour, P., Black, M., Donoghue, J.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]



Layered image motion with explicit occlusions, temporal consistency, and depth ordering

Sun, D., Sudderth, E., Black, M. J.

In Advances in Neural Information Processing Systems 23 (NIPS), pages: 2226-2234, MIT Press, 2010 (inproceedings)

Abstract
Layered models are a powerful way of describing natural scenes containing smooth surfaces that may overlap and occlude each other. For image motion estimation, such models have a long history but have not achieved the wide use or accuracy of non-layered methods. We present a new probabilistic model of optical flow in layers that addresses many of the shortcomings of previous approaches. In particular, we define a probabilistic graphical model that explicitly captures: 1) occlusions and disocclusions; 2) depth ordering of the layers; 3) temporal consistency of the layer segmentation. Additionally the optical flow in each layer is modeled by a combination of a parametric model and a smooth deviation based on an MRF with a robust spatial prior; the resulting model allows roughness in layers. Finally, a key contribution is the formulation of the layers using an image dependent hidden field prior based on recent models for static scene segmentation. The method achieves state-of-the-art results on the Middlebury benchmark and produces meaningful scene segmentations as well as detected occlusion regions.

main paper supplemental material paper and supplemental material in one pdf file Project Page [BibTex]


Manifold Valued Statistics, Exact Principal Geodesic Analysis and the Effect of Linear Approximations

Stefan Sommer, François Lauze, Søren Hauberg, Mads Nielsen

In Computer Vision – ECCV 2010, 6316, pages: 43-56, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site PDF [BibTex]
