

2020


STAR: Sparse Trained Articulated Human Body Regressor

Osman, A. A. A., Bolkart, T., Black, M. J.

In European Conference on Computer Vision (ECCV), August 2020 (inproceedings)

Abstract
The SMPL body model is widely used for the estimation, synthesis, and analysis of 3D human pose and shape. While popular, we show that SMPL has several limitations and introduce STAR, which is quantitatively and qualitatively superior to SMPL. First, SMPL has a huge number of parameters resulting from its use of global blend shapes. These dense pose-corrective offsets relate every vertex on the mesh to all the joints in the kinematic tree, capturing spurious long-range correlations. To address this, we define per-joint pose correctives and learn the subset of mesh vertices that are influenced by each joint movement. This sparse formulation results in more realistic deformations and significantly reduces the number of model parameters to 20% of SMPL. When trained on the same data as SMPL, STAR generalizes better despite having many fewer parameters. Second, SMPL factors pose-dependent deformations from body shape while, in reality, people with different shapes deform differently. Consequently, we learn shape-dependent pose-corrective blend shapes that depend on both body pose and BMI. Third, we show that the shape space of SMPL is not rich enough to capture the variation in the human population. We address this by training STAR with an additional 10,000 scans of male and female subjects, and show that this results in better model generalization. STAR is compact, generalizes better to new bodies and is a drop-in replacement for SMPL. STAR is publicly available for research purposes at http://star.is.tue.mpg.de.
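
To make the sparse-corrective idea concrete, here is a minimal NumPy sketch — not the released STAR code. Every dimension, array, and name below is an illustrative stand-in: each joint carries a learned vertex mask, so its pose-corrective blendshape displaces only the vertices that joint actually influences.

```python
import numpy as np

# Hypothetical dimensions: V mesh vertices, J joints, Q pose features per joint.
V, J, Q = 6890, 23, 9

rng = np.random.default_rng(0)
pose_feat = rng.normal(size=(J, Q))                   # per-joint pose features
blendshapes = rng.normal(size=(J, Q, V, 3)) * 0.001   # per-joint corrective bases
masks = (rng.random((J, V)) < 0.1).astype(float)      # learned sparse vertex masks

def sparse_pose_correctives(pose_feat, blendshapes, masks):
    """Sum per-joint corrective offsets, restricted to each joint's vertex set."""
    offsets = np.zeros((V, 3))
    for j in range(J):
        # Dense correctives for joint j ...
        d = np.einsum('q,qvc->vc', pose_feat[j], blendshapes[j])
        # ... zeroed outside the sparse support of joint j.
        offsets += masks[j][:, None] * d
    return offsets

offsets = sparse_pose_correctives(pose_feat, blendshapes, masks)
print(offsets.shape)  # (6890, 3)
```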

Project Page Code Video paper supplemental [BibTex]


Monocular Expressive Body Regression through Body-Driven Attention

Choutas, V., Pavlakos, G., Bolkart, T., Tzionas, D., Black, M. J.

In Computer Vision – ECCV 2020, Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
To understand how people look, interact, or perform tasks, we need to quickly and accurately capture their 3D body, face, and hands together from an RGB image. Most existing methods focus only on parts of the body. A few recent approaches reconstruct full expressive 3D humans from images using 3D body models that include the face and hands. These methods are optimization-based and thus slow, prone to local optima, and require 2D keypoints as input. We address these limitations by introducing ExPose (EXpressive POse and Shape rEgression), which directly regresses the body, face, and hands, in SMPL-X format, from an RGB image. This is a hard problem due to the high dimensionality of the body and the lack of expressive training data. Additionally, hands and faces are much smaller than the body, occupying very few image pixels. This makes hand and face estimation hard when body images are downscaled for neural networks. We make three main contributions. First, we account for the lack of training data by curating a dataset of SMPL-X fits on in-the-wild images. Second, we observe that body estimation localizes the face and hands reasonably well. We introduce body-driven attention for face and hand regions in the original image to extract higher-resolution crops that are fed to dedicated refinement modules. Third, these modules exploit part-specific knowledge from existing face and hand-only datasets. ExPose estimates expressive 3D humans more accurately than existing optimization methods at a small fraction of the computational cost. Our data, model and code are available for research at https://expose.is.tue.mpg.de.
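
The body-driven attention step can be illustrated with a small NumPy sketch. The joint locations, crop size, and image below are hypothetical placeholders; the actual system localizes hands and the face from its body-stage prediction and feeds the crops to learned refinement networks.

```python
import numpy as np

def square_crop(image, center, size):
    """Extract a square crop around `center` from a full-resolution image,
    clamped to the image bounds (a simplified stand-in for the attention step)."""
    h, w = image.shape[:2]
    half = size // 2
    cx = int(np.clip(center[0], half, w - half))
    cy = int(np.clip(center[1], half, h - half))
    return image[cy - half:cy + half, cx - half:cx + half]

# Hypothetical example: crops located from body-stage 2D joint estimates.
rng = np.random.default_rng(0)
image = rng.random((1080, 1920, 3))              # full-resolution input
joints_2d = {'face': (960.0, 300.0),             # predicted by the body stage
             'left_hand': (700.0, 620.0),
             'right_hand': (1220.0, 615.0)}

crops = {name: square_crop(image, c, size=256) for name, c in joints_2d.items()}
for name, crop in crops.items():
    print(name, crop.shape)  # each crop feeds a dedicated refinement module
```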

code Short video Long video arxiv pdf suppl link (url) Project Page [BibTex]


GRAB: A Dataset of Whole-Body Human Grasping of Objects

Taheri, O., Ghorbani, N., Black, M. J., Tzionas, D.

In Computer Vision – ECCV 2020, Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. Given MoCap markers, we fit the full 3D body shape and pose, including the articulated face and hands, as well as the 3D object pose. This gives detailed 3D meshes over time, from which we compute contact between the body and object. This unique dataset goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task. We illustrate the practical value of GRAB with an example application; we train GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes. The dataset and code are available for research purposes at https://grab.is.tue.mpg.de.
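
Contact labels of the kind described above can be derived from proximity between the body and object meshes. Below is a brute-force NumPy sketch of that idea; the threshold and mesh sizes are illustrative assumptions, not the dataset's actual processing code, which would use accelerated nearest-surface queries.

```python
import numpy as np

def contact_vertices(body_verts, obj_verts, thresh=0.005):
    """Mark body vertices within `thresh` meters of the object surface.
    Brute-force nearest-vertex distances; real pipelines use a BVH/KD-tree."""
    d2 = ((body_verts[:, None, :] - obj_verts[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)) < thresh

rng = np.random.default_rng(0)
body = rng.random((1000, 3))   # hypothetical body mesh vertices (meters)
obj = rng.random((500, 3))     # hypothetical object mesh vertices
mask = contact_vertices(body, obj)
print(mask.sum(), "vertices in contact")
```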

pdf suppl video (long) video (short) link (url) DOI [BibTex]


Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), pages: 6468-6477, IEEE, June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.
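
The phrase "making clothing an additional term on SMPL" can be read as an additive decomposition of the mesh. The sketch below is a linear stand-in with invented dimensions; in CAPE the clothing term comes from the learned Mesh-VAE-GAN decoder conditioned on pose and clothing type, not a fixed linear basis.

```python
import numpy as np

# Hypothetical sizes: V vertices, B shape coefficients, Z clothing latent dims.
V, B, Z = 6890, 10, 64
rng = np.random.default_rng(0)

template = rng.random((V, 3))                      # unclothed template mesh
shape_dirs = rng.normal(size=(B, V, 3)) * 0.01     # shape blendshapes
cloth_decoder = rng.normal(size=(Z, V, 3)) * 0.01  # stand-in for the learned decoder

def clothed_vertices(betas, z_clothing):
    """Body shape plus clothing as an additive displacement on the body mesh."""
    body = template + np.einsum('b,bvc->vc', betas, shape_dirs)
    clothing = np.einsum('z,zvc->vc', z_clothing, cloth_decoder)
    return body + clothing

verts = clothed_vertices(rng.normal(size=B), rng.normal(size=Z))
print(verts.shape)  # (6890, 3)
```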

Project page Code Short video Long video arXiv DOI [BibTex]


GENTEL : GENerating Training data Efficiently for Learning to segment medical images

Thakur, R. P., Rocamora, S. P., Goel, L., Pohmann, R., Machann, J., Black, M. J.

Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFAIP), June 2020 (conference)

Abstract
Accurately segmenting MRI images is crucial for many clinical applications. However, manually segmenting images with accurate pixel precision is a tedious and time-consuming task. In this paper we present a simple, yet effective method to improve the efficiency of the image segmentation process. We propose to transform the image annotation task into a binary choice task. We start by using classical image processing algorithms with different parameter values to generate multiple, different segmentation masks for each input MRI image. Then, instead of segmenting the pixels of the images, the user only needs to decide whether a segmentation is acceptable or not. This method allows us to efficiently obtain high quality segmentations with minor human intervention. With the selected segmentations, we train a state-of-the-art neural network model. For the evaluation, we use a second MRI dataset (1.5T Dataset), acquired with a different protocol and containing annotations. We show that the trained network i) is able to automatically segment cases where none of the classical methods obtain a high quality result; ii) generalizes to the second MRI dataset, which was acquired with a different protocol and was never seen at training time; and iii) enables detection of mis-annotations in this second dataset. Quantitatively, the trained network obtains very good results: Dice score (mean 0.98, median 0.99) and Hausdorff distance in pixels (mean 4.7, median 2.0).
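
A toy NumPy version of the proposed workflow, with global thresholding standing in for the classical segmenters and a synthetic reference mask standing in for the annotator's accept/reject decision; all data and thresholds here are invented for illustration.

```python
import numpy as np

def candidate_masks(image, thresholds=(0.3, 0.4, 0.5, 0.6)):
    """Run a simple classical segmenter (here: global thresholding) with
    several parameter values to propose multiple candidate masks."""
    return [image > t for t in thresholds]

def dice(a, b):
    """Dice overlap between two binary masks (the paper's evaluation metric)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

rng = np.random.default_rng(0)
image = rng.random((128, 128))    # hypothetical MRI slice, intensities in [0, 1]
reference = image > 0.5           # stands in for the annotator's judgment

# Instead of drawing pixels, the annotator only accepts or rejects proposals.
accepted = [m for m in candidate_masks(image) if dice(m, reference) > 0.9]
print(len(accepted), "proposals accepted")  # accepted masks become training data
```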

Project Page PDF [BibTex]


Generating 3D People in Scenes without People

Zhang, Y., Hassan, M., Neumann, H., Black, M. J., Tang, S.

In Computer Vision and Pattern Recognition (CVPR), pages: 6194-6204, June 2020 (inproceedings)

Abstract
We present a fully automatic system that takes a 3D scene and generates plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D scene without people, humans can easily imagine how people could interact with the scene and the objects in it. However, this is a challenging task for a computer as solving it requires that (1) the generated human bodies be semantically plausible within the 3D environment (e.g. people sitting on the sofa or cooking near the stove), and (2) the generated human-scene interaction be physically feasible such that the human body and scene do not interpenetrate while, at the same time, body-scene contact supports physical interactions. To that end, we make use of the surface-based 3D human model SMPL-X. We first train a conditional variational autoencoder to predict semantically plausible 3D human poses conditioned on latent scene representations, then we further refine the generated 3D bodies using scene constraints to enforce feasible physical interaction. We show that our approach is able to synthesize realistic and expressive 3D human bodies that naturally interact with the 3D environment. We perform extensive experiments demonstrating that our generative framework compares favorably with existing methods, both qualitatively and quantitatively. We believe that our scene-conditioned 3D human generation pipeline will be useful for numerous applications, e.g., to generate training data for human pose estimation, in video games and in VR/AR. Our project page for data and code can be seen at: https://vlg.inf.ethz.ch/projects/PSI/.
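
The physical-feasibility refinement can be illustrated with a penetration term over a precomputed signed-distance grid of the scene. Everything below — the grid, voxel size, and body samples — is a hypothetical stand-in for the paper's actual scene representation and losses.

```python
import numpy as np

def penetration_loss(body_verts, scene_sdf, origin, voxel):
    """Penalize body vertices whose scene signed distance is negative
    (i.e., inside scene geometry) -- the kind of term used at refinement time."""
    idx = np.clip(((body_verts - origin) / voxel).astype(int), 0,
                  np.array(scene_sdf.shape) - 1)
    d = scene_sdf[idx[:, 0], idx[:, 1], idx[:, 2]]
    return np.abs(np.minimum(d, 0.0)).sum()

rng = np.random.default_rng(0)
scene_sdf = rng.normal(size=(32, 32, 32))  # hypothetical precomputed scene SDF grid
body = rng.random((100, 3)) * 3.0          # hypothetical sampled body vertices (meters)
print(penetration_loss(body, scene_sdf, origin=np.zeros(3), voxel=0.1))
```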

Code PDF DOI [BibTex]


Learning Physics-guided Face Relighting under Directional Light

Nestmeyer, T., Lalonde, J., Matthews, I., Lehrmann, A. M.

In Conference on Computer Vision and Pattern Recognition, pages: 5123-5132, IEEE/CVF, June 2020 (inproceedings)

Abstract
Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera.
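
The decomposition described above — a diffuse physics-based render plus a predicted residual for non-diffuse effects — reduces to a few lines for a single directional light. This is a hedged sketch with random albedo and normals; in the paper both the intrinsic components and the residual come from the learned model.

```python
import numpy as np

def diffuse_render(albedo, normals, light_dir):
    """Lambertian image formation: albedo * max(0, n . l) per pixel."""
    l = light_dir / np.linalg.norm(light_dir)
    shading = np.clip((normals * l).sum(-1), 0.0, None)
    return albedo * shading[..., None]

rng = np.random.default_rng(0)
albedo = rng.random((64, 64, 3))
normals = rng.normal(size=(64, 64, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

diffuse = diffuse_render(albedo, normals, light_dir=np.array([0.0, 0.0, 1.0]))
residual = np.zeros_like(diffuse)  # in the paper, a network predicts this term
relit = np.clip(diffuse + residual, 0.0, 1.0)
print(relit.shape)
```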

Paper [BibTex]


VIBE: Video Inference for Human Body Pose and Shape Estimation

Kocabas, M., Athanasiou, N., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 5252-5262, IEEE, June 2020 (inproceedings)

Abstract
Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose “Video Inference for Body Pose and Shape Estimation” (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE
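
The adversarial part can be summarized by sequence-level discriminator and regressor objectives. The least-squares form below is a common choice for this kind of motion discriminator; treat it as an illustrative sketch rather than VIBE's exact loss, with random placeholder scores.

```python
import numpy as np

def discriminator_losses(d_real, d_fake):
    """Least-squares GAN objectives at the sequence level: the discriminator
    pushes real AMASS motions toward 1 and regressed motions toward 0,
    while the regressor is rewarded for fooling it."""
    loss_d = ((d_real - 1.0) ** 2).mean() + (d_fake ** 2).mean()
    loss_g = ((d_fake - 1.0) ** 2).mean()
    return loss_d, loss_g

rng = np.random.default_rng(0)
d_real = rng.random(32)  # discriminator scores for real motion sequences
d_fake = rng.random(32)  # scores for sequences from the temporal regressor
print(discriminator_losses(d_real, d_fake))
```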

arXiv code video supplemental video DOI Project Page [BibTex]


From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference)

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
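
The ex-post density estimation step is simple to sketch: after the deterministic autoencoder is trained, fit a density to the latent codes of the training data, then sample from it and decode. Below, a single full-covariance Gaussian in NumPy stands in for the density estimator (the paper also uses a 10-component GMM), and the codes themselves are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical latent codes of the training set from a trained deterministic encoder.
latents = rng.normal(size=(1000, 16)) @ rng.normal(size=(16, 16))

# Ex-post density estimation: fit a simple density to the training codes ...
mu = latents.mean(axis=0)
cov = np.cov(latents, rowvar=False)

# ... then sample new codes from it; pushing them through the decoder
# yields new data points, restoring a generative mechanism.
z_new = rng.multivariate_normal(mu, cov, size=8)
print(z_new.shape)  # (8, 16) -> decoder(z_new) gives the generated samples
```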

arXiv link (url) [BibTex]


Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations

Rueegg, N., Lassner, C., Black, M. J., Schindler, K.

In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), pages: 5561-5569, February 2020 (inproceedings)

Abstract
The goal of many computer vision systems is to transform image pixels into 3D representations. Recent popular models use neural networks to regress directly from pixels to 3D object parameters. Such an approach works well when supervision is available, but in problems like human pose and shape estimation, it is difficult to obtain natural images with 3D ground truth. To go one step further, we propose a new architecture that facilitates unsupervised, or lightly supervised, learning. The idea is to break the problem into a series of transformations between increasingly abstract representations. Each step involves a cycle designed to be learnable without annotated training data, and the chain of cycles delivers the final solution. Specifically, we use 2D body part segments as an intermediate representation that contains enough information to be lifted to 3D, and at the same time is simple enough to be learned in an unsupervised way. We demonstrate the method by learning 3D human pose and shape from un-paired and un-annotated images. We also explore varying amounts of paired data and show that cycling greatly alleviates the need for paired data. While we present results for modeling humans, our formulation is general and can be applied to other vision problems.

pdf [BibTex]

2010


Visibility Maps for Improving Seam Carving

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In Media Retargeting Workshop, European Conference on Computer Vision (ECCV), September 2010 (inproceedings)

webpage pdf slides supplementary code [BibTex]


A 2D human body model dressed in eigen clothing

Guan, P., Freifeld, O., Black, M. J.

In European Conf. on Computer Vision, (ECCV), pages: 285-298, Springer-Verlag, September 2010 (inproceedings)

Abstract
Detection, tracking, segmentation and pose estimation of people in monocular images are widely studied. Two-dimensional models of the human body are extensively used, however, they are typically fairly crude, representing the body either as a rough outline or in terms of articulated geometric primitives. We describe a new 2D model of the human body contour that combines an underlying naked body with a low-dimensional clothing model. The naked body is represented as a Contour Person that can take on a wide variety of poses and body shapes. Clothing is represented as a deformation from the underlying body contour. This deformation is learned from training examples using principal component analysis to produce eigen clothing. We find that the statistics of clothing deformations are skewed and we model the a priori probability of these deformations using a Beta distribution. The resulting generative model captures realistic human forms in monocular images and is used to infer 2D body shape and pose under clothing. We also use the coefficients of the eigen clothing to recognize different categories of clothing on dressed people. The method is evaluated quantitatively on synthetic and real images and achieves better accuracy than previous methods for estimating body shape under clothing.
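
The eigen-clothing construction is ordinary PCA on clothing deformations. A self-contained NumPy sketch, with invented contour sizes and random data in place of real training deformations:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: clothing deformations, i.e. per-point offsets
# from the naked body contour to the clothed contour, flattened per example.
N, P = 200, 2 * 100        # 200 examples, 100 2D contour points
deformations = rng.normal(size=(N, P))

# PCA via SVD of the centered data yields the "eigen clothing" basis.
mean = deformations.mean(axis=0)
U, S, Vt = np.linalg.svd(deformations - mean, full_matrices=False)
basis = Vt[:10]            # top-10 eigen-clothing directions

# A clothed contour is the body contour plus a linear combination of the basis;
# the coefficients can also serve as features for clothing-type recognition.
coeffs = (deformations[0] - mean) @ basis.T
clothed = mean + coeffs @ basis
print(coeffs.shape, clothed.shape)
```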

pdf data poster Project Page [BibTex]


Analyzing and Evaluating Markerless Motion Tracking Using Inertial Sensors

Baak, A., Helten, T., Müller, M., Pons-Moll, G., Rosenhahn, B., Seidel, H.

In European Conference on Computer Vision (ECCV Workshops), September 2010 (inproceedings)

pdf [BibTex]


Trainable, Vision-Based Automated Home Cage Behavioral Phenotyping

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

In Measuring Behavior, August 2010 (inproceedings)

pdf [BibTex]


Multisensor-Fusion for 3D Full-Body Human Motion Capture

Pons-Moll, G., Baak, A., Helten, T., Müller, M., Seidel, H., Rosenhahn, B.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010 (inproceedings)

project page pdf [BibTex]


Contour people: A parameterized model of 2D articulated human shape

Freifeld, O., Weiss, A., Zuffi, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, (CVPR), pages: 639-646, IEEE, June 2010 (inproceedings)

Abstract
We define a new “contour person” model of the human body that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The contour person (CP) model is learned from a 3D SCAPE model of the human body that captures natural shape and pose variations; the projected contours of this model, along with their segmentation into parts forms the training set. The CP model factors deformations of the body into three components: shape variation, viewpoint change and part rotation. This latter model also incorporates a learned non-rigid deformation model. The result is a 2D articulated model that is compact to represent, simple to compute with and more expressive than previous models. We demonstrate the value of such a model in 2D pose estimation and segmentation. Given an initial pose from a standard pictorial-structures method, we refine the pose and shape using an objective function that segments the scene into foreground and background regions. The result is a parametric, human-specific, image segmentation.

pdf slides video of CVPR talk Project Page [BibTex]


Secrets of optical flow estimation and their principles

(2020 Longuet-Higgins Prize)

Sun, D., Roth, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 2432-2439, IEEE, June 2010 (inproceedings)

pdf Matlab code code copyright notice [BibTex]


Coded exposure imaging for projective motion deblurring

Tai, Y., Kong, N., Lin, S., Shin, S. Y.

In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2408-2415, June 2010 (inproceedings)

Abstract
We propose a method for deblurring of spatially variant object motion. A principal challenge of this problem is how to estimate the point spread function (PSF) of the spatially variant blur. Based on the projective motion blur model of prior work, we present a blur estimation technique that jointly utilizes a coded exposure camera and simple user interactions to recover the PSF. With this spatially variant PSF, objects that exhibit projective motion can be effectively deblurred. We validate this method with several challenging image examples.

Publisher site [BibTex]


Tracking people interacting with objects

Kjellstrom, H., Kragic, D., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages: 747-754, June 2010 (inproceedings)

pdf Video [BibTex]


Modellbasierte Echtzeit-Bewegungsschätzung in der Fluoreszenzendoskopie (Model-Based Real-Time Motion Estimation in Fluorescence Endoscopy)

Stehle, T., Wulff, J., Behrens, A., Gross, S., Aach, T.

In Bildverarbeitung für die Medizin, 574, pages: 435-439, CEUR Workshop Proceedings, 2010 (inproceedings)

pdf [BibTex]


Robust one-shot 3D scanning using loopy belief propagation

Ulusoy, A., Calakli, F., Taubin, G.

In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages: 15-22, IEEE, 2010 (inproceedings)

Abstract
A structured-light technique can greatly simplify the problem of shape recovery from images. There are currently two main research challenges in the design of such techniques. One is handling complicated scenes involving texture, occlusions, shadows, sharp discontinuities, and in some cases even dynamic change; and the other is speeding up the acquisition process by requiring a small number of images and computationally less demanding algorithms. This paper presents a “one-shot” variant of such techniques to tackle the aforementioned challenges. It works by projecting a static grid pattern onto the scene and identifying the correspondence between grid stripes and the camera image. The correspondence problem is formulated using a novel graphical model and solved efficiently using loopy belief propagation. Unlike prior approaches, the proposed approach uses non-deterministic geometric constraints, and can thereby handle spurious connections of stripe images. The effectiveness of the proposed approach is verified on a variety of complicated real scenes.

pdf link (url) DOI [BibTex]


Scene Carving: Scene Consistent Image Retargeting

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In European Conference on Computer Vision (ECCV), 2010 (inproceedings)

webpage+code pdf supplementary poster [BibTex]


Epione: An Innovative Pain Management System Using Facial Expression Analysis, Biofeedback and Augmented Reality-Based Distraction

Georgoulis, S., Eleftheriadis, S., Tzionas, D., Vrenas, K., Petrantonakis, P., Hadjileontiadis, L. J.

In Proceedings of the 2010 International Conference on Intelligent Networking and Collaborative Systems, pages: 259-266, INCOS ’10, IEEE Computer Society, Washington, DC, USA, 2010 (inproceedings)

Abstract
An innovative pain management system, namely Epione, is presented here. Epione deals with three main types of pain, i.e., acute pain, chronic pain, and phantom limb pain. In particular, by using facial expression analysis, Epione forms a dynamic pain meter, which then triggers biofeedback and augmented reality-based distraction scenarios, in an effort to maximize the patient's pain relief. This unique combination makes Epione not only a novel pain management approach, but also a means that provides an understanding and integration of the needs of the whole community involved, i.e., patients and physicians, in a joint attempt to facilitate easing of their suffering, provide efficient monitoring and contribute to a better quality of life.

Paper Project Page DOI [BibTex]


Phantom Limb Pain Management Using Facial Expression Analysis, Biofeedback and Augmented Reality Interfacing

Tzionas, D., Vrenas, K., Eleftheriadis, S., Georgoulis, S., Petrantonakis, P. C., Hadjileontiadis, L. J.

In Proceedings of the 3rd International Conference on Software Development for Enhancing Accessibility and Fighting Info-Exclusion, pages: 23-30, DSAI ’10, UTAD - Universidade de Trás-os-Montes e Alto Douro, 2010 (inproceedings)

Abstract
Post-amputation sensation often translates to the feeling of severe pain in the missing limb, referred to as phantom limb pain (PLP). A clear and rational treatment regimen is difficult to establish, as long as the underlying pathophysiology is not fully known. In this work, an innovative PLP management system is presented, as a module of a holistic computer-mediated pain management environment, namely Epione. The proposed Epione-PLP scheme is structured upon advanced facial expression analysis, used to form a dynamic pain meter, which, in turn, is used to trigger biofeedback and augmented reality-based PLP distraction scenarios. The latter incorporate a model of the missing limb for its visualization, in an effort to provide to the amputee the feeling of its existence and control, and, thus, maximize his/her PLP relief. The novel Epione-PLP management approach integrates edge technology within the context of personalized health, and it could be used to facilitate easing of PLP patients' suffering, provide efficient progress monitoring and contribute to an increase in their quality of life.

Paper Project Page link (url) [BibTex]


An automated action initiation system reveals behavioral deficits in MyosinVa deficient mice

Pandian, S., Edelman, N., Jhuang, H., Serre, T., Poggio, T., Constantine-Paton, M.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]


Dense Marker-less Three Dimensional Motion Capture

Hauberg, S., Jensen, B. R., Engell-Norregaard, M., Erleben, K., Pedersen, K. S.

In Virtual Vistas; Eleventh International Symposium on the 3D Analysis of Human Movement, 2010 (inproceedings)

Conference site [BibTex]


Stick It! Articulated Tracking using Spatial Rigid Object Priors

Hauberg, S., Pedersen, K. S.

In Computer Vision – ACCV 2010, 6494, pages: 758-769, Lecture Notes in Computer Science, (Editors: Kimmel, Ron and Klette, Reinhard and Sugimoto, Akihiro), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]


Gaussian-like Spatial Priors for Articulated Tracking

Hauberg, S., Sommer, S., Pedersen, K. S.

In Computer Vision – ECCV 2010, 6311, pages: 425-437, Lecture Notes in Computer Science, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]


Reach to grasp actions in rhesus macaques: Dimensionality reduction of hand, wrist, and upper arm motor subspaces using principal component analysis

Vargas-Irwin, C., Franquemont, L., Shakhnarovich, G., Yadollahpour, P., Black, M., Donoghue, J.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]


Layered image motion with explicit occlusions, temporal consistency, and depth ordering

Sun, D., Sudderth, E., Black, M. J.

In Advances in Neural Information Processing Systems 23 (NIPS), pages: 2226-2234, MIT Press, 2010 (inproceedings)

Abstract
Layered models are a powerful way of describing natural scenes containing smooth surfaces that may overlap and occlude each other. For image motion estimation, such models have a long history but have not achieved the wide use or accuracy of non-layered methods. We present a new probabilistic model of optical flow in layers that addresses many of the shortcomings of previous approaches. In particular, we define a probabilistic graphical model that explicitly captures: 1) occlusions and disocclusions; 2) depth ordering of the layers; 3) temporal consistency of the layer segmentation. Additionally the optical flow in each layer is modeled by a combination of a parametric model and a smooth deviation based on an MRF with a robust spatial prior; the resulting model allows roughness in layers. Finally, a key contribution is the formulation of the layers using an image dependent hidden field prior based on recent models for static scene segmentation. The method achieves state-of-the-art results on the Middlebury benchmark and produces meaningful scene segmentations as well as detected occlusion regions.
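
Depth ordering is what lets a layered model reason about occlusion: each pixel takes the flow of the frontmost layer that claims it, and unclaimed pixels are occluded or unexplained. A minimal NumPy sketch of that compositing logic, with random placeholder masks and flows (the paper infers these probabilistically):

```python
import numpy as np

def composite_flow(flows, supports):
    """Resolve per-pixel flow from depth-ordered layers (front first):
    each pixel takes the flow of the frontmost layer that supports it."""
    h, w = supports[0].shape
    flow = np.zeros((h, w, 2))
    assigned = np.zeros((h, w), dtype=bool)
    for f, s in zip(flows, supports):        # iterate front-to-back
        take = s & ~assigned
        flow[take] = f[take]
        assigned |= take
    return flow, assigned

rng = np.random.default_rng(0)
flows = [rng.normal(size=(48, 64, 2)) for _ in range(3)]   # per-layer flow fields
supports = [rng.random((48, 64)) > 0.5 for _ in range(3)]  # per-layer support masks
flow, assigned = composite_flow(flows, supports)
print(flow.shape, (~assigned).sum(), "pixels unexplained by any layer")
```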

main paper supplemental material paper and supplemental material in one pdf file Project Page [BibTex]


Manifold Valued Statistics, Exact Principal Geodesic Analysis and the Effect of Linear Approximations

Sommer, S., Lauze, F., Hauberg, S., Nielsen, M.

In Computer Vision – ECCV 2010, 6316, pages: 43-56, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site PDF [BibTex]


GPU Accelerated Likelihoods for Stereo-Based Articulated Tracking

Friborg, R. M., Hauberg, S., Erleben, K.

In The CVGPU workshop at European Conference on Computer Vision (ECCV) 2010, 2010 (inproceedings)

PDF [BibTex]


Unsupervised learning of a low-dimensional non-linear representation of motor cortical neuronal ensemble activity using Spatio-Temporal Isomap

Kim, S., Tsoli, A., Jenkins, O., Simeral, J., Donoghue, J., Black, M.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]


Vision-Based Automated Recognition of Mice Home-Cage Behaviors

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

Workshop: Visual Observation and Analysis of Animal and Insect Behavior, in conjunction with International Conference on Pattern Recognition (ICPR), 2010 (conference)

pdf [BibTex]


Hands in action: real-time 3D reconstruction of hands in interaction with objects

Romero, J., Kjellström, H., Kragic, D.

In IEEE International Conference on Robotics and Automation (ICRA), pages: 458-463, 2010 (inproceedings)

Pdf Project Page [BibTex]


Orientation and direction selectivity in the population code of the visual thalamus

Stanley, G., Jin, J., Wang, Y., Desbordes, G., Black, M., Alonso, J.

COSYNE, 2010 (conference)

[BibTex]


3D Knowledge-Based Segmentation Using Pose-Invariant Higher-Order Graphs

Wang, C., Teboul, O., Michel, F., Essafi, S., Paragios, N.

In International Conference, Medical Image Computing and Computer Assisted Intervention (MICCAI), 2010 (inproceedings)

pdf [BibTex]


Computational Mechanisms for the motion processing in visual area MT

Jhuang, H., Serre, T., Poggio, T.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]


Spatio-Temporal Modeling of Grasping Actions

Romero, J., Feix, T., Kjellström, H., Kragic, D.

In IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pages: 2103-2108, 2010 (inproceedings)

Pdf Project Page [BibTex]


Estimating Shadows with the Bright Channel Cue

Panagopoulos, A., Wang, C., Samaras, D., Paragios, N.

In Color and Reflectance in Imaging and Computer Vision Workshop (CRICV) (in conjunction with ECCV 2010), 2010 (inproceedings)

pdf [BibTex]


Dense non-rigid surface registration using high-order graph matching

Zeng, Y., Wang, C., Wang, Y., Gu, X., Samaras, D., Paragios, N.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010 (inproceedings)

pdf [BibTex]

2007


A Database and Evaluation Methodology for Optical Flow

Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.

In Int. Conf. on Computer Vision, ICCV, pages: 1-8, Rio de Janeiro, Brazil, October 2007 (inproceedings)

pdf [BibTex]


Shining a light on human pose: On shadows, shading and the estimation of pose and shape

Balan, A., Black, M. J., Haussecker, H., Sigal, L.

In Int. Conf. on Computer Vision, ICCV, pages: 1-8, Rio de Janeiro, Brazil, October 2007 (inproceedings)

pdf YouTube [BibTex]


Ensemble spiking activity as a source of cortical control signals in individuals with tetraplegia

Simeral, J. D., Kim, S. P., Black, M. J., Donoghue, J. P., Hochberg, L. R.

Biomedical Engineering Society, BMES, September 2007 (conference)

[BibTex]


Detailed human shape and pose from images

Balan, A., Sigal, L., Black, M. J., Davis, J., Haussecker, H.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages: 1-8, Minneapolis, June 2007 (inproceedings)

Abstract
Much of the research on video-based human motion capture assumes the body shape is known a priori and is represented coarsely (e.g. using cylinders or superquadrics to model limbs). These body models stand in sharp contrast to the richly detailed 3D body models used by the graphics community. Here we propose a method for recovering such models directly from images. Specifically, we represent the body using a recently proposed triangulated mesh model called SCAPE which employs a low-dimensional, but detailed, parametric model of shape and pose-dependent deformations that is learned from a database of range scans of human bodies. Previous work showed that the parameters of the SCAPE model could be estimated from marker-based motion capture data. Here we go further to estimate the parameters directly from image data. We define a cost function between image observations and a hypothesized mesh and formulate the problem as optimization over the body shape and pose parameters using stochastic search. Our results show that such rich generative models enable the automatic recovery of detailed human shape and pose from images.
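
The optimization itself can be sketched as generic stochastic search over a shape-and-pose parameter vector. The cost below is a toy quadratic stand-in for the paper's image-based cost between observations and a hypothesized mesh; dimensions and hyperparameters are invented.

```python
import numpy as np

def stochastic_search(cost, x0, iters=500, sigma=0.05, seed=0):
    """Minimal random-perturbation search over parameters, standing in
    for the paper's stochastic optimization of an image cost."""
    rng = np.random.default_rng(seed)
    x, best = x0.copy(), cost(x0)
    for _ in range(iters):
        cand = x + rng.normal(scale=sigma, size=x.shape)  # propose a perturbation
        c = cost(cand)
        if c < best:                                      # keep only improvements
            x, best = cand, c
    return x, best

# Hypothetical cost: squared distance to a target parameter vector (the real
# cost compares image observations with a rendered SCAPE mesh hypothesis).
target = np.array([1.0, -2.0, 0.5])
cost = lambda params: ((params - target) ** 2).sum()
x, best = stochastic_search(cost, x0=np.zeros(3))
print(x.round(2), best)
```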

pdf YouTube [BibTex]


Decoding grasp aperture from motor-cortical population activity

Artemiadis, P., Shakhnarovich, G., Vargas-Irwin, C., Donoghue, J. P., Black, M. J.

In The 3rd International IEEE EMBS Conference on Neural Engineering, pages: 518-521, May 2007 (inproceedings)

pdf [BibTex]


Multi-state decoding of point-and-click control signals from motor cortical activity in a human with tetraplegia

Kim, S., Simeral, J., Hochberg, L., Donoghue, J. P., Friehs, G., Black, M. J.

In The 3rd International IEEE EMBS Conference on Neural Engineering, pages: 486-489, May 2007 (inproceedings)

Abstract
Basic neural-prosthetic control of a computer cursor has been recently demonstrated by Hochberg et al. [1] using the BrainGate system (Cyberkinetics Neurotechnology Systems, Inc.). While these results demonstrate the feasibility of intracortically-driven prostheses for humans with paralysis, a practical cursor-based computer interface requires more precise cursor control and the ability to “click” on areas of interest. Here we present a practical point and click device that decodes both continuous states (e.g. cursor kinematics) and discrete states (e.g. click state) from a single neural population in human motor cortex. We describe a probabilistic multi-state decoder and the necessary training paradigms that enable point and click cursor control by a human with tetraplegia using an implanted microelectrode array. We present results from multiple recording sessions and quantify the point and click performance.
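
A hedged sketch of the multi-state idea: one continuous decoder for cursor kinematics and one discrete classifier for the click state, both reading the same population activity. The linear update, weights, and channel counts are invented placeholders, not the BrainGate decoder.

```python
import numpy as np

def decode_step(z, A, H, x_prev, w_click, b_click):
    """One decoding step: a linear dynamical update for cursor kinematics
    (a simplified stand-in for a Kalman-style filter) plus a log-linear
    classifier for the discrete click state, both driven by the same
    neural population vector z."""
    x = A @ x_prev + H @ z                          # continuous state: kinematics
    p_click = 1.0 / (1.0 + np.exp(-(w_click @ z + b_click)))
    return x, p_click > 0.5                         # discrete state: click / no-click

rng = np.random.default_rng(0)
n_units, n_states = 96, 4                           # hypothetical: 96 channels, [x, y, vx, vy]
A = np.eye(n_states)                                # trivial dynamics for the sketch
H = rng.normal(size=(n_states, n_units)) * 0.01
w, b = rng.normal(size=n_units) * 0.1, -0.5
spikes = rng.poisson(5.0, n_units).astype(float)    # placeholder binned spike counts
x, click = decode_step(spikes, A, H, np.zeros(n_states), w, b)
print(x.shape, bool(click))
```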

pdf [BibTex]


Deterministic Annealing for Multiple-Instance Learning

Gehler, P., Chapelle, O.

In Artificial Intelligence and Statistics (AIStats), 2007 (inproceedings)

pdf [BibTex]