

2010


Decoding complete reach and grasp actions from local primary motor cortex populations

(Featured in Nature’s Research Highlights (Nature, Vol 466, 29 July 2010))

Vargas-Irwin, C. E., Shakhnarovich, G., Yadollahpour, P., Mislow, J., Black, M. J., Donoghue, J. P.

Journal of Neuroscience, 30(29):9659-9669, July 2010 (article)

pdf pdf from publisher Movie 1 Movie 2 Project Page [BibTex]



Guest editorial: State of the art in image- and video-based human pose and motion estimation

Sigal, L., Black, M. J.

International Journal of Computer Vision, 87(1):1-3, March 2010 (article)

pdf from publisher [BibTex]



HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion

Sigal, L., Balan, A., Black, M. J.

International Journal of Computer Vision, 87(1):4-27, Springer Netherlands, March 2010 (article)

Abstract
While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.
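
The evaluation protocol is built around 3D joint-position error. As a rough, hedged illustration (not the authors' released code), the standard mean per-joint Euclidean error over a sequence can be computed as below; the array shapes are assumptions.

import numpy as np

def mean_joint_error(pred, gt):
    # pred, gt: (num_frames, num_joints, 3) estimated and ground-truth
    # 3D joint positions in the same units (e.g., mm).
    assert pred.shape == gt.shape
    per_joint = np.linalg.norm(pred - gt, axis=-1)  # (num_frames, num_joints)
    return per_joint.mean()

# Toy usage with random stand-ins for tracker output and motion capture:
pred = np.random.randn(100, 15, 3)
gt = np.random.randn(100, 15, 3)
print(mean_joint_error(pred, gt))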

pdf pdf from publisher [BibTex]



Automated Home-Cage Behavioral Phenotyping of Mice

Jhuang, H., Garrote, E., Mutch, J., Poggio, T., Steele, A., Serre, T.

Nature Communications, 2010 (article)

software, demo pdf [BibTex]



ImageFlow: Streaming Image Search

Jampani, V., Ramos, G., Drucker, S.

MSR-TR-2010-148, Microsoft Research, Redmond, 2010 (techreport)

Abstract
Traditional grid and list representations of image search results are the dominant interaction paradigms that users face on a daily basis, yet it is unclear that such paradigms are well-suited for experiences where the user's task is to browse images for leisure, to discover new information or to seek particular images to represent ideas. We introduce ImageFlow, a novel image search user interface that explores a different alternative to the traditional presentation of image search results. ImageFlow presents image results on a canvas where we map semantic features (e.g., relevance, related queries) to the canvas' spatial dimensions (e.g., x, y, z) in a way that allows for several levels of engagement – from passively viewing a stream of images, to seamlessly navigating through the semantic space and actively collecting images for sharing and reuse. We have implemented our system as a fully functioning prototype, and we report on promising, preliminary usage results.
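
Purely as a hedged sketch of the mapping described above, the snippet below places results on a canvas by sending a (hypothetical) relevance score to depth and a related-query cluster index to the horizontal axis; none of these field names come from the actual system.

def place_on_canvas(results, num_clusters, canvas_w=1920.0, canvas_h=1080.0):
    # results: list of dicts with hypothetical 'image', 'relevance' (0..1)
    # and 'cluster' (related-query index) fields.
    placed = []
    for i, r in enumerate(results):
        x = (r["cluster"] + 0.5) / num_clusters * canvas_w  # related query -> column
        y = (i % 10) / 10.0 * canvas_h                      # spread items within a column
        z = 1.0 - r["relevance"]                            # more relevant -> closer to viewer
        placed.append({"image": r["image"], "x": x, "y": y, "z": z})
    return placed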

url pdf link (url) [BibTex]



Visual Object-Action Recognition: Inferring Object Affordances from Human Demonstration

Kjellström, H., Romero, J., Kragic, D.

Computer Vision and Image Understanding, pages: 81-90, 2010 (article)

pdf [BibTex]


2006


Implicit Wiener Series, Part II: Regularised estimation

Gehler, P., Franz, M.

(148), Max Planck Institute, 2006 (techreport)

pdf [BibTex]



HumanEva: Synchronized video and motion capture dataset for evaluation of articulated human motion

Sigal, L., Black, M. J.

(CS-06-08), Brown University, Department of Computer Science, 2006 (techreport)

pdf abstract [BibTex]



Bayesian population decoding of motor cortical activity using a Kalman filter

Wu, W., Gao, Y., Bienenstock, E., Donoghue, J. P., Black, M. J.

Neural Computation, 18(1):80-118, 2006 (article)

Abstract
Effective neural motor prostheses require a method for decoding neural activity representing desired movement. In particular, the accurate reconstruction of a continuous motion signal is necessary for the control of devices such as computer cursors, robots, or a patient's own paralyzed limbs. For such applications, we developed a real-time system that uses Bayesian inference techniques to estimate hand motion from the firing rates of multiple neurons. In this study, we used recordings that were previously made in the arm area of primary motor cortex in awake behaving monkeys using a chronically implanted multielectrode microarray. Bayesian inference involves computing the posterior probability of the hand motion conditioned on a sequence of observed firing rates; this is formulated in terms of the product of a likelihood and a prior. The likelihood term models the probability of firing rates given a particular hand motion. We found that a linear gaussian model could be used to approximate this likelihood and could be readily learned from a small amount of training data. The prior term defines a probabilistic model of hand kinematics and was also taken to be a linear gaussian model. Decoding was performed using a Kalman filter, which gives an efficient recursive method for Bayesian inference when the likelihood and prior are linear and gaussian. In off-line experiments, the Kalman filter reconstructions of hand trajectory were more accurate than previously reported results. The resulting decoding algorithm provides a principled probabilistic model of motor-cortical coding, decodes hand motion in real time, provides an estimate of uncertainty, and is straightforward to implement. Additionally the formulation unifies and extends previous models of neural coding while providing insights into the motor-cortical code.
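
A minimal Kalman-filter decoding loop in the spirit of the abstract is sketched below; the state and observation matrices would be fit to training data, and the variable names and shapes here are assumptions rather than the paper's implementation.

import numpy as np

def kalman_decode(firing_rates, A, W, H, Q, x0, P0):
    # State model:       x_t = A x_{t-1} + w,  w ~ N(0, W)  (hand-kinematics prior)
    # Observation model: z_t = H x_t + q,      q ~ N(0, Q)  (firing-rate likelihood)
    # firing_rates: (T, num_neurons) binned rates; x0, P0: initial state and covariance.
    x, P = x0, P0
    estimates = []
    for z in firing_rates:
        x = A @ x                                       # predict from the motion prior
        P = A @ P @ A.T + W
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)    # Kalman gain
        x = x + K @ (z - H @ x)                         # update with the neural observation
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)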

pdf preprint pdf from publisher abstract [BibTex]



Products of “Edge-perts”

Gehler, P., Welling, M.

In Advances in Neural Information Processing Systems 18, pages: 419-426, (Editors: Weiss, Y., Schölkopf, B., and Platt, J.), MIT Press, Cambridge, MA, 2006 (incollection)

pdf [BibTex]


2005


Representing cyclic human motion using functional analysis

Ormoneit, D., Black, M. J., Hastie, T., Kjellström, H.

Image and Vision Computing, 23(14):1264-1276, December 2005 (article)

Abstract
We present a robust automatic method for modeling cyclic 3D human motion such as walking using motion-capture data. The pose of the body is represented by a time-series of joint angles which are automatically segmented into a sequence of motion cycles. The mean and the principal components of these cycles are computed using a new algorithm that enforces smooth transitions between the cycles by operating in the Fourier domain. Key to this method is its ability to automatically deal with noise and missing data. A learned walking model is then exploited for Bayesian tracking of 3D human motion.
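
As a hedged sketch of the Fourier-domain idea (ignoring the paper's automatic cycle segmentation and missing-data handling), each cycle can be summarized by a few Fourier coefficients per joint so that cycles of different lengths become comparable before computing the mean and principal components.

import numpy as np

def cycle_descriptor(joint_angles, num_harmonics=5):
    # joint_angles: (T, num_joints) samples of one motion cycle.
    # Dividing by T makes cycles of different lengths comparable.
    coeffs = np.fft.rfft(joint_angles, axis=0)[:num_harmonics + 1] / len(joint_angles)
    return coeffs.flatten()

def mean_and_principal_components(cycles, num_harmonics=5, num_components=3):
    # cycles: list of (T_i, num_joints) arrays, one per detected cycle.
    X = np.array([cycle_descriptor(c, num_harmonics) for c in cycles])
    mean = X.mean(axis=0)
    _, _, Vh = np.linalg.svd(X - mean, full_matrices=False)  # PCA via SVD
    return mean, Vh[:num_components]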

pdf pdf from publisher DOI [BibTex]


2003


Learning the statistics of people in images and video

Sidenbladh, H., Black, M. J.

International Journal of Computer Vision, 54(1-3):183-209, August 2003 (article)

Abstract
This paper addresses the problems of modeling the appearance of humans and distinguishing human appearance from the appearance of general scenes. We seek a model of appearance and motion that is generic in that it accounts for the ways in which people's appearance varies and, at the same time, is specific enough to be useful for tracking people in natural scenes. Given a 3D model of the person projected into an image we model the likelihood of observing various image cues conditioned on the predicted locations and orientations of the limbs. These cues are taken to be steered filter responses corresponding to edges, ridges, and motion-compensated temporal differences. Motivated by work on the statistics of natural scenes, the statistics of these filter responses for human limbs are learned from training images containing hand-labeled limb regions. Similarly, the statistics of the filter responses in general scenes are learned to define a “background” distribution. The likelihood of observing a scene given a predicted pose of a person is computed, for each limb, using the likelihood ratio between the learned foreground (person) and background distributions. Adopting a Bayesian formulation allows cues to be combined in a principled way. Furthermore, the use of learned distributions obviates the need for hand-tuned image noise models and thresholds. The paper provides a detailed analysis of the statistics of how people appear in scenes and provides a connection between work on natural image statistics and the Bayesian tracking of people.
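
A hedged sketch of the likelihood-ratio scoring described above, using simple histogram densities in place of the paper's learned filter-response statistics; the binning and variable names are assumptions.

import numpy as np

def limb_log_likelihood_ratio(responses, fg_hist, bg_hist, bin_edges, eps=1e-8):
    # responses: filter responses sampled along a hypothesized limb.
    # fg_hist, bg_hist: normalized histograms learned offline from
    # hand-labeled limb regions and from general scenes, respectively.
    idx = np.clip(np.digitize(responses, bin_edges) - 1, 0, len(fg_hist) - 1)
    return np.sum(np.log((fg_hist[idx] + eps) / (bg_hist[idx] + eps)))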

pdf pdf from publisher code DOI [BibTex]



A framework for robust subspace learning

De la Torre, F., Black, M. J.

International Journal of Computer Vision, 54(1-3):117-142, August 2003 (article)

Abstract
Many computer vision, signal processing and statistical problems can be posed as problems of learning low dimensional linear or multi-linear models. These models have been widely used for the representation of shape, appearance, motion, etc., in computer vision applications. Methods for learning linear models can be seen as a special case of subspace fitting. One drawback of previous learning methods is that they are based on least squares estimation techniques and hence fail to account for “outliers” which are common in realistic training sets. We review previous approaches for making linear learning methods robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Subspace Learning (RSL) for linear models within a continuous optimization framework based on robust M-estimation. The framework applies to a variety of linear learning problems in computer vision including eigen-analysis and structure from motion. Several synthetic and natural examples are used to develop and illustrate the theory and applications of robust subspace learning in computer vision.
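
A hedged sketch of robust subspace fitting via iteratively reweighted least squares is given below. It down-weights whole samples with large reconstruction error, whereas the paper's contribution is an intra-sample (per-pixel) outlier process; the weight function and parameters are assumptions.

import numpy as np

def robust_subspace(X, k, num_iters=20, sigma=1.0):
    # X: (num_samples, dim) data matrix; k: subspace dimension.
    w = np.ones(len(X))                                  # per-sample robust weights
    for _ in range(num_iters):
        mean = (w[:, None] * X).sum(axis=0) / w.sum()    # weighted mean
        Xc = X - mean
        _, _, Vt = np.linalg.svd(np.sqrt(w)[:, None] * Xc, full_matrices=False)
        B = Vt[:k]                                       # current orthonormal basis
        err = np.linalg.norm(Xc - Xc @ B.T @ B, axis=1)  # per-sample reconstruction error
        w = sigma**2 / (sigma**2 + err**2)               # Geman-McClure-style weights
    return mean, B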

pdf code pdf from publisher Project Page [BibTex]



Guest editorial: Computational vision at Brown

Black, M. J., Kimia, B.

International Journal of Computer Vision, 54(1-3):5-11, August 2003 (article)

pdf pdf from publisher [BibTex]



Robust parameterized component analysis: Theory and applications to 2D facial appearance models

De la Torre, F., Black, M. J.

Computer Vision and Image Understanding, 91(1-2):53-71, July 2003 (article)

Abstract
Principal component analysis (PCA) has been successfully applied to construct linear models of shape, graylevel, and motion in images. In particular, PCA has been widely used to model the variation in the appearance of people's faces. We extend previous work on facial modeling for tracking faces in video sequences as they undergo significant changes due to facial expressions. Here we consider person-specific facial appearance models (PSFAM), which use modular PCA to model complex intra-person appearance changes. Such models require aligned visual training data; in previous work, this has involved a time consuming and error-prone hand alignment and cropping process. Instead, the main contribution of this paper is to introduce parameterized component analysis to learn a subspace that is invariant to affine (or higher order) geometric transformations. The automatic learning of a PSFAM given a training image sequence is posed as a continuous optimization problem and is solved with a mixture of stochastic and deterministic techniques achieving sub-pixel accuracy. We illustrate the use of the 2D PSFAM model with preliminary experiments relevant to applications including video-conferencing and avatar animation.
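
As a hedged sketch of the modular-PCA ingredient only (the affine-invariant alignment that is the paper's main contribution is not shown), a separate linear basis can be learned per facial region; the region specification below is hypothetical.

import numpy as np

def modular_pca(images, regions, k=10):
    # images: (N, H, W) roughly aligned face images.
    # regions: dict mapping a region name (e.g., 'mouth') to a
    #          (row_slice, col_slice) pair -- hypothetical layout.
    models = {}
    for name, (rows, cols) in regions.items():
        patches = images[:, rows, cols].reshape(len(images), -1)
        mean = patches.mean(axis=0)
        _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
        models[name] = (mean, Vt[:k])                # per-region mean and basis
    return models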

pdf [BibTex]
