

2012


The Ankyrin 3 (ANK3) Bipolar Disorder Gene Regulates Psychiatric-related Behaviors that are Modulated by Lithium and Stress

Leussis, M., Berry-Scott, E., Saito, M., Jhuang, H., Haan, G., Alkan, O., Luce, C., Madison, J., Sklar, P., Serre, T., Root, D., Petryshen, T.

Biological Psychiatry, 2012 (article)

Prepublication Article Abstract [BibTex]



Segmentation of Vessel Geometries from Medical Images Using GPF Deformable Model

Si Yong Yeo, Xianghua Xie, Igor Sazonov, Perumal Nithiarasu

In International Conference on Pattern Recognition Applications and Methods, 2012 (inproceedings)

Abstract
We present a method for the reconstruction of vascular geometries from medical images. Image denoising is performed using vessel enhancing diffusion, which can smooth out image noise and enhance vessel structures. The Canny edge detection technique, which produces object edges with single-pixel width, is used for accurate detection of the lumen boundaries. The image gradients are then used to compute the geometric potential field, which gives a global representation of the geometric configuration. The deformable model uses a regional constraint to suppress calcified regions for accurate segmentation of the vessel geometries. The proposed framework shows high accuracy when applied to the segmentation of the carotid arteries from CT images.

[BibTex]



Scan-Based Flow Modelling in Human Upper Airways

Perumal Nithiarasu, Igor Sazonov, Si Yong Yeo

In Patient-Specific Modeling in Tomorrow’s Medicine, pages: 241-280, (Editors: Amit Gefen), Springer, 2012 (inbook)

[BibTex]



SuperFloxels: A Mid-Level Representation for Video Sequences

Ravichandran, A., Wang, C., Raptis, M., Soatto, S.

In Analysis and Retrieval of Tracked Events and Motion in Imagery Streams Workshop (ARTEMIS) (in conjunction with ECCV 2012), 2012 (inproceedings)

pdf [BibTex]



Implicit Active Contours for N-Dimensional Biomedical Image Segmentation

Si Yong Yeo

In IEEE International Conference on Systems, Man, and Cybernetics, pages: 2855-2860, 2012 (inproceedings)

Abstract
The segmentation of shapes from biomedical images has a wide range of uses, such as image-based modelling and bioimage analysis. In this paper, an active contour model is proposed for the segmentation of N-dimensional biomedical images. The proposed model uses a curvature smoothing flow and an image attraction force derived from the interactions between the geometries of the active contour model and the image objects. The active contour model is formulated using the level set method so as to handle topological changes automatically. The magnitude and orientation of the image attraction force are based on the relative geometric configurations between the active contour model and the image object boundaries. The vector force field is therefore dynamic, and the active contour model can propagate through narrow structures to segment complex shapes efficiently. The proposed model utilizes pixel interactions across the image domain, which gives a coherent representation of the image object shapes. This allows the active contour model to be robust to image noise and weak object edges. The proposed model is compared against widely used active contour models in the segmentation of anatomical shapes from biomedical images. It is shown that the proposed model has several advantages over existing techniques and can be used to segment biomedical images efficiently.

[BibTex]



Interactive Object Detection

Yao, A., Gall, J., Leistner, C., van Gool, L.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 3242-3249, IEEE, Providence, RI, USA, 2012 (inproceedings)

video pdf Project Page [BibTex]



Real Time 3D Head Pose Estimation: Recent Achievements and Future Challenges

Fanelli, G., Gall, J., van Gool, L.

In 5th International Symposium on Communications, Control and Signal Processing (ISCCSP), 2012 (inproceedings)

data and code pdf Project Page [BibTex]



Motion Capture of Hands in Action using Discriminative Salient Points

Ballan, L., Taneja, A., Gall, J., van Gool, L., Pollefeys, M.

In European Conference on Computer Vision (ECCV), 7577, pages: 640-653, LNCS, Springer, 2012 (inproceedings)

data video pdf supplementary Project Page [BibTex]



Sparsity Potentials for Detecting Objects with the Hough Transform

Razavi, N., Alvar, N., Gall, J., van Gool, L.

In British Machine Vision Conference (BMVC), pages: 11.1-11.10, (Editors: Bowden, Richard and Collomosse, John and Mikolajczyk, Krystian), BMVA Press, 2012 (inproceedings)

pdf Project Page [BibTex]



An Introduction to Random Forests for Multi-class Object Detection

Gall, J., Razavi, N., van Gool, L.

In Outdoor and Large-Scale Real-World Scene Analysis, 7474, pages: 243-263, LNCS, (Editors: Dellaert, Frank and Frahm, Jan-Michael and Pollefeys, Marc and Rosenhahn, Bodo and Leal-Taixé, Laura), Springer, 2012 (incollection)

code code for Hough forest publisher's site pdf Project Page [BibTex]



Metric Learning from Poses for Temporal Clustering of Human Motion

López-Méndez, A., Gall, J., Casas, J., van Gool, L.

In British Machine Vision Conference (BMVC), pages: 49.1-49.12, (Editors: Bowden, Richard and Collomosse, John and Mikolajczyk, Krystian), BMVA Press, 2012 (inproceedings)

video pdf Project Page Project Page [BibTex]



Local Context Priors for Object Proposal Generation

Ristin, M., Gall, J., van Gool, L.

In Asian Conference on Computer Vision (ACCV), 7724, pages: 57-70, LNCS, Springer-Verlag, 2012 (inproceedings)

pdf DOI Project Page [BibTex]



Home 3D body scans from noisy image and range data

Weiss, A., Hirshberg, D., Black, M. J.

In Consumer Depth Cameras for Computer Vision: Research Topics and Applications, pages: 99-118, 6, (Editors: Andrea Fossati and Juergen Gall and Helmut Grabner and Xiaofeng Ren and Kurt Konolige), Springer-Verlag, 2012 (incollection)

Project Page [BibTex]



Layered segmentation and optical flow estimation over time

Sun, D., Sudderth, E., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 1768-1775, IEEE, 2012 (inproceedings)

Abstract
Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.

pdf sup mat poster Project Page Project Page [BibTex]



Natural Metrics and Least-Committed Priors for Articulated Tracking

Søren Hauberg, Stefan Sommer, Kim S. Pedersen

Image and Vision Computing, 30(6-7):453-461, Elsevier, 2012 (article)

Publisher's site Code PDF [BibTex]



Consumer Depth Cameras for Computer Vision - Research Topics and Applications

Fossati, A., Gall, J., Grabner, H., Ren, X., Konolige, K.

Advances in Computer Vision and Pattern Recognition, Springer, 2012 (book)

workshop publisher's site [BibTex]



Spatial Measures between Human Poses for Classification and Understanding

Søren Hauberg, Kim S. Pedersen

In Articulated Motion and Deformable Objects, 7378, pages: 26-36, LNCS, (Editors: Perales, Francisco J. and Fisher, Robert B. and Moeslund, Thomas B.), Springer Berlin Heidelberg, 2012 (inproceedings)

Publisher's site Project Page [BibTex]



A Geometric Take on Metric Learning

Hauberg, S., Freifeld, O., Black, M. J.

In Advances in Neural Information Processing Systems (NIPS) 25, pages: 2033-2041, (Editors: P. Bartlett and F.C.N. Pereira and C.J.C. Burges and L. Bottou and K.Q. Weinberger), MIT Press, 2012 (inproceedings)

Abstract
Multi-metric learning techniques learn local metric tensors in different parts of a feature space. With such an approach, even simple classifiers can be competitive with the state-of-the-art because the distance measure locally adapts to the structure of the data. The learned distance measure is, however, non-metric, which has prevented multi-metric learning from generalizing to tasks such as dimensionality reduction and regression in a principled way. We prove that, with appropriate changes, multi-metric learning corresponds to learning the structure of a Riemannian manifold. We then show that this structure gives us a principled way to perform dimensionality reduction and regression according to the learned metrics. Algorithmically, we provide the first practical algorithm for computing geodesics according to the learned metrics, as well as algorithms for computing exponential and logarithmic maps on the Riemannian manifold. Together, these tools let many Euclidean algorithms take advantage of multi-metric learning. We illustrate the approach on regression and dimensionality reduction tasks that involve predicting measurements of the human body from shape data.

PDF Youtube Suppl. material Poster Project Page [BibTex]

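One way to make the "geodesics under learned local metrics" idea concrete is a graph approximation: discretize the space, weight each grid edge by its length under the local metric tensor, and run Dijkstra. This is a crude stand-in for the paper's continuous geodesic algorithm, shown here on a 2-D grid; the `metric` callback is an assumed interface, not the paper's API.

```python
import heapq
import math

def grid_geodesic(metric, start, goal, n):
    """Approximate geodesic distance on an n x n grid via Dijkstra.

    Each 4-neighbour edge from cell (i, j) in direction (di, dj) gets
    length sqrt(d^T M d), where metric(i, j) returns the 2x2 local
    metric tensor [[a, b], [b, c]] at the edge's source cell.
    """
    def edge_len(i, j, di, dj):
        (a, b), (_, c) = metric(i, j)
        return math.sqrt(a * di * di + 2 * b * di * dj + c * dj * dj)

    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return d
        if d > dist.get((i, j), math.inf):
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                nd = d + edge_len(i, j, di, dj)
                if nd < dist.get((ni, nj), math.inf):
                    dist[(ni, nj)] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return math.inf
```

With the identity metric this reduces to ordinary shortest paths; a learned, spatially varying metric simply stretches or shrinks edges locally, which is the intuition behind multi-metric distances.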

2003


Image statistics and anisotropic diffusion

Scharr, H., Black, M. J., Haussecker, H.

In Int. Conf. on Computer Vision, pages: 840-847, October 2003 (inproceedings)

pdf [BibTex]



A switching Kalman filter model for the motor cortical coding of hand motion

Wu, W., Black, M. J., Mumford, D., Gao, Y., Bienenstock, E., Donoghue, J. P.

In Proc. IEEE Engineering in Medicine and Biology Society, pages: 2083-2086, September 2003 (inproceedings)

pdf [BibTex]



Learning the statistics of people in images and video

Sidenbladh, H., Black, M. J.

International Journal of Computer Vision, 54(1-3):183-209, August 2003 (article)

Abstract
This paper addresses the problems of modeling the appearance of humans and distinguishing human appearance from the appearance of general scenes. We seek a model of appearance and motion that is generic in that it accounts for the ways in which people's appearance varies and, at the same time, is specific enough to be useful for tracking people in natural scenes. Given a 3D model of the person projected into an image we model the likelihood of observing various image cues conditioned on the predicted locations and orientations of the limbs. These cues are taken to be steered filter responses corresponding to edges, ridges, and motion-compensated temporal differences. Motivated by work on the statistics of natural scenes, the statistics of these filter responses for human limbs are learned from training images containing hand-labeled limb regions. Similarly, the statistics of the filter responses in general scenes are learned to define a “background” distribution. The likelihood of observing a scene given a predicted pose of a person is computed, for each limb, using the likelihood ratio between the learned foreground (person) and background distributions. Adopting a Bayesian formulation allows cues to be combined in a principled way. Furthermore, the use of learned distributions obviates the need for hand-tuned image noise models and thresholds. The paper provides a detailed analysis of the statistics of how people appear in scenes and provides a connection between work on natural image statistics and the Bayesian tracking of people.

pdf pdf from publisher code DOI [BibTex]

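The per-limb scoring the abstract describes reduces to summing log likelihood ratios of filter responses under the learned foreground and background distributions. A toy sketch, assuming quantized responses and conditionally independent cues; the distributions below are illustrative stand-ins, not the learned ones from the paper.

```python
import math

def log_likelihood_ratio(responses, p_fg, p_bg, eps=1e-6):
    """Sum of per-cue log likelihood ratios log p_fg(r) / p_bg(r).

    p_fg / p_bg map a quantized filter response to its probability under
    the person (foreground) and general-scene (background) models; eps
    guards against responses unseen in training. Positive totals favour
    "this region is a limb", negative totals favour "background".
    """
    return sum(math.log(p_fg.get(r, eps) / p_bg.get(r, eps))
               for r in responses)
```

Working in the log domain is what lets the Bayesian formulation combine edge, ridge, and temporal-difference cues by simple addition.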


A framework for robust subspace learning

De la Torre, F., Black, M. J.

International Journal of Computer Vision, 54(1-3):117-142, August 2003 (article)

Abstract
Many computer vision, signal processing and statistical problems can be posed as problems of learning low dimensional linear or multi-linear models. These models have been widely used for the representation of shape, appearance, motion, etc., in computer vision applications. Methods for learning linear models can be seen as a special case of subspace fitting. One drawback of previous learning methods is that they are based on least squares estimation techniques and hence fail to account for “outliers” which are common in realistic training sets. We review previous approaches for making linear learning methods robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Subspace Learning (RSL) for linear models within a continuous optimization framework based on robust M-estimation. The framework applies to a variety of linear learning problems in computer vision including eigen-analysis and structure from motion. Several synthetic and natural examples are used to develop and illustrate the theory and applications of robust subspace learning in computer vision.

pdf code pdf from publisher Project Page [BibTex]

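The robust M-estimation machinery behind RSL can be illustrated on the simplest possible model. Below is a sketch of iteratively reweighted least squares with a Geman-McClure error function applied to estimating a robust mean; this is only the M-estimation ingredient, not the paper's subspace algorithm, and the scale parameter is illustrative.

```python
def robust_mean(xs, sigma=1.0, iters=20):
    """Robust mean via IRLS with Geman-McClure weights.

    The Geman-McClure error rho(r) = r^2 / (sigma^2 + r^2) yields IRLS
    weights proportional to 1 / (sigma^2 + r^2)^2, so gross outliers get
    weight near zero and barely influence the estimate -- the same idea
    RSL uses, per pixel, when fitting a linear subspace.
    """
    mu = sum(xs) / len(xs)  # start from the least-squares answer
    for _ in range(iters):
        ws = [1.0 / (sigma ** 2 + (x - mu) ** 2) ** 2 for x in xs]
        mu = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    return mu
```

On data with a single gross outlier, the least-squares mean is dragged far off while the IRLS estimate stays with the inliers.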


Guest editorial: Computational vision at Brown

Black, M. J., Kimia, B.

International Journal of Computer Vision, 54(1-3):5-11, August 2003 (article)

pdf pdf from publisher [BibTex]



Robust parameterized component analysis: Theory and applications to 2D facial appearance models

De la Torre, F., Black, M. J.

Computer Vision and Image Understanding, 91(1-2):53-71, July 2003 (article)

Abstract
Principal component analysis (PCA) has been successfully applied to construct linear models of shape, graylevel, and motion in images. In particular, PCA has been widely used to model the variation in the appearance of people's faces. We extend previous work on facial modeling for tracking faces in video sequences as they undergo significant changes due to facial expressions. Here we consider person-specific facial appearance models (PSFAM), which use modular PCA to model complex intra-person appearance changes. Such models require aligned visual training data; in previous work, this has involved a time-consuming and error-prone hand alignment and cropping process. Instead, the main contribution of this paper is to introduce parameterized component analysis to learn a subspace that is invariant to affine (or higher order) geometric transformations. The automatic learning of a PSFAM given a training image sequence is posed as a continuous optimization problem and is solved with a mixture of stochastic and deterministic techniques achieving sub-pixel accuracy. We illustrate the use of the 2D PSFAM model with preliminary experiments relevant to applications including video-conferencing and avatar animation.

pdf [BibTex]



A Gaussian mixture model for the motor cortical coding of hand motion

Wu, W., Mumford, D., Black, M. J., Gao, Y., Bienenstock, E., Donoghue, J. P.

Neural Control of Movement, Santa Barbara, CA, April 2003 (conference)

abstract [BibTex]



Connecting brains with machines: The neural control of 2D cursor movement

Black, M. J., Bienenstock, E., Donoghue, J. P., Serruya, M., Wu, W., Gao, Y.

In 1st International IEEE/EMBS Conference on Neural Engineering, pages: 580-583, Capri, Italy, March 2003 (inproceedings)

pdf [BibTex]



A quantitative comparison of linear and non-linear models of motor cortical activity for the encoding and decoding of arm motions

Gao, Y., Black, M. J., Bienenstock, E., Wu, W., Donoghue, J. P.

In 1st International IEEE/EMBS Conference on Neural Engineering, pages: 189-192, Capri, Italy, March 2003 (inproceedings)

pdf [BibTex]



Accuracy of manual spike sorting: Results for the Utah intracortical array

Wood, F., Fellows, M., Vargas-Irwin, C., Black, M. J., Donoghue, J. P.

Program No. 279.2. 2003, Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2003, Online (conference)

abstract [BibTex]



Specular flow and the perception of surface reflectance

Roth, S., Domini, F., Black, M. J.

Journal of Vision, 3 (9): 413a, 2003 (conference)

abstract poster [BibTex]



Attractive people: Assembling loose-limbed models using non-parametric belief propagation

Sigal, L., Isard, M. I., Sigelman, B. H., Black, M. J.

In Advances in Neural Information Processing Systems 16, NIPS, pages: 1539-1546, (Editors: S. Thrun and L. K. Saul and B. Schölkopf), MIT Press, 2003 (inproceedings)

Abstract
The detection and pose estimation of people in images and video is made challenging by the variability of human appearance, the complexity of natural scenes, and the high dimensionality of articulated body models. To cope with these problems we represent the 3D human body as a graphical model in which the relationships between the body parts are represented by conditional probability distributions. We formulate the pose estimation problem as one of probabilistic inference over a graphical model where the random variables correspond to the individual limb parameters (position and orientation). Because the limbs are described by 6-dimensional vectors encoding pose in 3-space, discretization is impractical and the random variables in our model must be continuous-valued. To approximate belief propagation in such a graph we exploit a recently introduced generalization of the particle filter. This framework facilitates the automatic initialization of the body-model from low level cues and is robust to occlusion of body parts and scene clutter.

pdf (color) pdf (black and white) [BibTex]



Neural decoding of cursor motion using a Kalman filter

(Nominated: Best student paper)

Wu, W., Black, M. J., Gao, Y., Bienenstock, E., Serruya, M., Shaikhouni, A., Donoghue, J. P.

In Advances in Neural Information Processing Systems 15, pages: 133-140, MIT Press, 2003 (inproceedings)

pdf [BibTex]

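The entry above carries no abstract, but the decoder its title names is compact enough to sketch. Below is a minimal 1-D Kalman filter in Python; the paper decodes 2-D cursor kinematics from multi-electrode firing rates, so the scalar state, the observation model, and all parameter values here are our illustrative choices, not the paper's.

```python
def kalman_filter(zs, a=1.0, h=1.0, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Filter a latent scalar state x_t from noisy observations zs.

    Model (all parameters illustrative):
        x_t = a * x_{t-1} + w,  w ~ N(0, q)   (state dynamics)
        z_t = h * x_t + v,      v ~ N(0, r)   (observation)
    """
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict: propagate the state estimate and its variance.
        x_pred = a * x
        p_pred = a * p * a + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p_pred * h / (h * p_pred * h + r)
        x = x_pred + k * (z - h * x_pred)
        p = (1.0 - k * h) * p_pred
        estimates.append(x)
    return estimates
```

Fed a constant observation, the estimate converges to it geometrically at a rate set by the steady-state gain, which is the closed-form, recursive behaviour that makes Kalman decoding attractive for real-time cursor control.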

2000


Probabilistic detection and tracking of motion boundaries

Black, M. J., Fleet, D. J.

Int. J. of Computer Vision, 38(3):231-245, July 2000 (article)

Abstract
We propose a Bayesian framework for representing and recognizing local image motion in terms of two basic models: translational motion and motion boundaries. Motion boundaries are represented using a non-linear generative model that explicitly encodes the orientation of the boundary, the velocities on either side, the motion of the occluding edge over time, and the appearance/disappearance of pixels at the boundary. We represent the posterior probability distribution over the model parameters given the image data using discrete samples. This distribution is propagated over time using a particle filtering algorithm. To efficiently represent such a high-dimensional space we initialize samples using the responses of a low-level motion discontinuity detector. The formulation and computational model provide a general probabilistic framework for motion estimation with multiple, non-linear, models.

pdf pdf from publisher Video [BibTex]



Stochastic tracking of 3D human figures using 2D image motion

(Winner of the 2010 Koenderink Prize for Fundamental Contributions in Computer Vision)

Sidenbladh, H., Black, M. J., Fleet, D.

In European Conference on Computer Vision, ECCV, pages: 702-718, LNCS 1843, Springer Verlag, Dublin, Ireland, June 2000 (inproceedings)

Abstract
A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image gray level differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D articulated motion model. It also extends previous work on human motion tracking by including a perspective camera model, by modeling limb self occlusion, and by recovering 3D motion from a monocular sequence. The explicit posterior probability distribution represents ambiguities due to image matching, model singularities, and perspective projection. The method relies only on a frame-to-frame assumption of brightness constancy and hence is able to track people under changing viewpoints, in grayscale image sequences, and with complex unknown backgrounds.

pdf code [BibTex]

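The sampled-posterior propagation the abstract describes is, at its core, a bootstrap particle filter. A minimal sketch of one predict-reweight-resample cycle on a 1-D toy state (the paper's state is a full articulated pose with joint angles, and its likelihood is image-based; the Gaussian models and names below are ours):

```python
import math
import random

def particle_filter_step(particles, weights, obs, motion_std, obs_std, rng):
    """One cycle of a bootstrap particle filter on a scalar state.

    particles : list of scalar states (a stand-in for pose parameters)
    weights   : normalized weights from the previous step
    obs       : the current measurement
    """
    # Predict: diffuse each particle under a random-walk motion model.
    particles = [p + rng.gauss(0.0, motion_std) for p in particles]
    # Reweight: Gaussian likelihood of the observation given each particle.
    weights = [w * math.exp(-0.5 * ((obs - p) / obs_std) ** 2)
               for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new set proportional to the weights, reset to uniform.
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)
```

Because the posterior is carried as samples, multi-modal ambiguities (image matching, model singularities) survive from frame to frame instead of being collapsed to a single estimate.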


Functional analysis of human motion data

Ormoneit, D., Hastie, T., Black, M. J.

In Proc. 5th World Congress of the Bernoulli Society for Probability and Mathematical Statistics and 63rd Annual Meeting of the Institute of Mathematical Statistics, Guanajuato, Mexico, May 2000 (inproceedings)

[BibTex]



Stochastic modeling and tracking of human motion

Ormoneit, D., Sidenbladh, H., Black, M. J., Hastie, T.

Learning 2000, Snowbird, UT, April 2000 (conference)

abstract [BibTex]



A framework for modeling the appearance of 3D articulated figures

Sidenbladh, H., De la Torre, F., Black, M. J.

In Int. Conf. on Automatic Face and Gesture Recognition, pages: 368-375, Grenoble, France, March 2000 (inproceedings)

pdf [BibTex]



Design and use of linear models for image motion analysis

Fleet, D. J., Black, M. J., Yacoob, Y., Jepson, A. D.

Int. J. of Computer Vision, 36(3):171-193, 2000 (article)

Abstract
Linear parameterized models of optical flow, particularly affine models, have become widespread in image motion analysis. The linear model coefficients are straightforward to estimate, and they provide reliable estimates of the optical flow of smooth surfaces. Here we explore the use of parameterized motion models that represent much more varied and complex motions. Our goals are threefold: to construct linear bases for complex motion phenomena; to estimate the coefficients of these linear models; and to recognize or classify image motions from the estimated coefficients. We consider two broad classes of motions: i) generic “motion features” such as motion discontinuities and moving bars; and ii) non-rigid, object-specific, motions such as the motion of human mouths. For motion features we construct a basis of steerable flow fields that approximate the motion features. For object-specific motions we construct basis flow fields from example motions using principal component analysis. In both cases, the model coefficients can be estimated directly from spatiotemporal image derivatives with a robust, multi-resolution scheme. Finally, we show how these model coefficients can be used to detect and recognize specific motions such as occlusion boundaries and facial expressions.

pdf [BibTex]



Robustly estimating changes in image appearance

Black, M. J., Fleet, D. J., Yacoob, Y.

Computer Vision and Image Understanding, 78(1):8-31, 2000 (article)

Abstract
We propose a generalized model of image “appearance change” in which brightness variation over time is represented as a probabilistic mixture of different causes. We define four generative models of appearance change due to (1) object or camera motion; (2) illumination phenomena; (3) specular reflections; and (4) “iconic changes” which are specific to the objects being viewed. These iconic changes include complex occlusion events and changes in the material properties of the objects. We develop a robust statistical framework for recovering these appearance changes in image sequences. This approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion in the presence of shadows and specular reflections.

pdf pdf from publisher DOI [BibTex]

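Attributing a pixel's brightness change to one of the four causes is, at bottom, a Bayes-rule mixture computation. A sketch assuming each cause's log likelihood for the pixel is already available; the four generative models themselves, and the prior mixing proportions used here, are outside this snippet.

```python
import math

def cause_responsibilities(log_liks, priors):
    """Posterior probability of each appearance-change cause for one pixel.

    log_liks : log p(observation | cause_k) for each of the four causes
               (motion, illumination, specular reflection, iconic change)
    priors   : prior mixing proportions for the causes (must be > 0)
    """
    scores = [math.log(p) + ll for p, ll in zip(priors, log_liks)]
    m = max(scores)  # subtract the max before exponentiating, for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

The responsibilities sum to one, so each pixel's brightness variation is softly divided among the competing causes rather than hard-assigned to one of them.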