

2002


A layered motion representation with occlusion and compact spatial support

Fleet, D. J., Jepson, A., Black, M. J.

In European Conf. on Computer Vision, ECCV 2002, 1, pages: 692-706, LNCS 2353, (Editors: A. Heyden, G. Sparr, M. Nielsen, P. Johansen), Springer-Verlag, 2002 (inproceedings)

Abstract
We describe a 2.5D layered representation for visual motion analysis. The representation provides a global interpretation of image motion in terms of several spatially localized foreground regions along with a background region. Each of these regions comprises a parametric shape model and a parametric motion model. The representation also contains depth ordering so visibility and occlusion are correctly included in the estimation of the model parameters. Finally, because the number of objects, their positions, shapes and sizes, and their relative depths are all unknown, initial models are drawn from a proposal distribution, and then compared using a penalized likelihood criterion. This allows us to automatically initialize new models, and to compare different depth orderings.
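The penalized likelihood comparison the abstract mentions can be sketched in miniature: a candidate interpretation is scored by its data log-likelihood minus a complexity penalty, so an extra layer wins only if it improves the fit by more than it costs. The penalty weight below is an illustrative stand-in, not the criterion defined in the paper.

```python
def penalized_log_likelihood(log_lik, n_params, penalty=2.0):
    """Score a candidate interpretation: data log-likelihood minus a
    complexity penalty (the weight 2.0 is an illustrative stand-in)."""
    return log_lik - penalty * n_params

simple = penalized_log_likelihood(-103.0, n_params=5)     # one layer
complex_ = penalized_log_likelihood(-100.0, n_params=10)  # two layers
# Here the fit gain (3) is smaller than the added penalty (10),
# so the simpler one-layer interpretation is preferred.
```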

pdf [BibTex]



Implicit probabilistic models of human motion for synthesis and tracking

Sidenbladh, H., Black, M. J., Sigal, L.

In European Conf. on Computer Vision, 1, pages: 784-800, 2002 (inproceedings)

Abstract
This paper addresses the problem of probabilistically modeling 3D human motion for synthesis and tracking. Given the high dimensional nature of human motion, learning an explicit probabilistic model from available training data is currently impractical. Instead we exploit methods from texture synthesis that treat images as representing an implicit empirical distribution. These methods replace the problem of representing the probability of a texture pattern with that of searching the training data for similar instances of that pattern. We extend this idea to temporal data representing 3D human motion with a large database of example motions. To make the method useful in practice, we must address the problem of efficient search in a large training set; efficiency is particularly important for tracking. Towards that end, we learn a low dimensional linear model of human motion that is used to structure the example motion database into a binary tree. An approximate probabilistic tree search method exploits the coefficients of this low-dimensional representation and runs in sub-linear time. This probabilistic tree search returns a particular sample human motion with probability approximating the true distribution of human motions in the database. This sampling method is suitable for use with particle filtering techniques and is applied to articulated 3D tracking of humans within a Bayesian framework. Successful tracking results are presented, along with examples of synthesizing human motion using the model.
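The tree-structured search the abstract describes can be sketched as follows: example motions are reduced to low-dimensional coefficients, a binary tree splits the database at coefficient medians, and a stochastic descent returns a sample in sub-linear time. This sketch uses a uniform branch choice as a stand-in for the paper's coefficient-based branch probabilities.

```python
import numpy as np

def build_tree(coeffs, idx=None, depth=0, max_depth=6):
    """Binary tree over low-dimensional motion coefficients:
    each level splits at the median of one coefficient dimension."""
    if idx is None:
        idx = np.arange(len(coeffs))
    if depth == max_depth or len(idx) <= 1:
        return {"leaf": idx}
    d = depth % coeffs.shape[1]          # cycle through dimensions
    med = np.median(coeffs[idx, d])
    left = idx[coeffs[idx, d] <= med]
    right = idx[coeffs[idx, d] > med]
    if len(left) == 0 or len(right) == 0:
        return {"leaf": idx}
    return {"dim": d, "med": med,
            "L": build_tree(coeffs, left, depth + 1, max_depth),
            "R": build_tree(coeffs, right, depth + 1, max_depth)}

def sample(tree, rng):
    """Stochastic descent to a leaf, then pick one stored example;
    repeated calls approximate sampling from the database."""
    while "leaf" not in tree:
        tree = tree["L"] if rng.random() < 0.5 else tree["R"]
    return int(rng.choice(tree["leaf"]))

# Toy database: 256 'motions' embedded as 3 coefficients each
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(256, 3))
tree = build_tree(coeffs)
i = sample(tree, rng)
```

Each query touches only one root-to-leaf path, which is the source of the sub-linear cost the abstract claims.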

pdf [BibTex]



Robust parameterized component analysis: Theory and applications to 2D facial modeling

De la Torre, F., Black, M. J.

In European Conf. on Computer Vision, ECCV 2002, 4, pages: 653-669, LNCS 2353, Springer-Verlag, 2002 (inproceedings)

pdf [BibTex]


1993


Mixture models for optical flow computation

Jepson, A., Black, M.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-93, pages: 760-761, New York, NY, June 1993 (inproceedings)

Abstract
The computation of optical flow relies on merging information available over an image patch to form an estimate of 2-D image velocity at a point. This merging process raises many issues. These include the treatment of outliers in component velocity measurements and the modeling of multiple motions within a patch which arise from occlusion boundaries or transparency. A new approach for dealing with these issues is presented. It is based on the use of a probabilistic mixture model to explicitly represent multiple motions within a patch. A simple extension of the EM-algorithm is used to compute a maximum likelihood estimate for the various motion parameters. Preliminary experiments indicate that this approach is computationally efficient, and that it can provide robust estimates of the optical flow values in the presence of outliers and multiple motions.
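The EM procedure the abstract outlines can be illustrated with a toy version: fitting a mixture of two constant velocities to one-dimensional component-velocity measurements. The parameterization below (fixed noise scale, 1-D velocities) is a simplification for illustration, not the paper's full formulation.

```python
import numpy as np

def em_two_motions(v, iters=50, sigma=0.2):
    """Toy EM for a mixture of two constant image velocities fit to
    1-D velocity measurements v."""
    mu = np.array([v.min(), v.max()])   # spread the initial estimates
    pi = np.array([0.5, 0.5])           # mixing proportions
    for _ in range(iters):
        # E-step: soft ownership of each measurement by each motion
        lik = pi * np.exp(-0.5 * ((v[:, None] - mu) / sigma) ** 2)
        w = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates
        mu = (w * v[:, None]).sum(axis=0) / w.sum(axis=0)
        pi = w.mean(axis=0)
    return mu, pi

# Two underlying motions at -1 and +1 with measurement noise
rng = np.random.default_rng(0)
v = np.concatenate([rng.normal(-1.0, 0.1, 100),
                    rng.normal(+1.0, 0.1, 100)])
mu, pi = em_two_motions(v)
```

The soft ownership weights are what let the model represent two motions in one patch (e.g. at an occlusion boundary) instead of averaging them into a single wrong estimate.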

pdf tech report [BibTex]



A framework for the robust estimation of optical flow

(Helmholtz Prize)

Black, M. J., Anandan, P.

In Fourth International Conf. on Computer Vision, ICCV-93, pages: 231-236, Berlin, Germany, May 1993 (inproceedings)

Abstract
Most approaches for estimating optical flow assume that, within a finite image region, only a single motion is present. This single motion assumption is violated in common situations involving transparency, depth discontinuities, independently moving objects, shadows, and specular reflections. To robustly estimate optical flow, the single motion assumption must be relaxed. This work describes a framework based on robust estimation that addresses violations of the brightness constancy and spatial smoothness assumptions caused by multiple motions. We show how the robust estimation framework can be applied to standard formulations of the optical flow problem thus reducing their sensitivity to violations of their underlying assumptions. The approach has been applied to three standard techniques for recovering optical flow: area-based regression, correlation, and regularization with motion discontinuities. This work focuses on the recovery of multiple parametric motion models within a region as well as the recovery of piecewise-smooth flow fields and provides examples with natural and synthetic image sequences.
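The key move in the robust framework is replacing the quadratic error with a robust ρ-function whose influence redescends, so gross violations of brightness constancy or smoothness stop dominating the estimate. The Lorentzian below is one such function used in this line of work; treat the sketch as illustrative.

```python
import numpy as np

def lorentzian(x, sigma):
    """Lorentzian robust penalty: grows only logarithmically for large
    residuals, unlike the quadratic x**2."""
    return np.log1p(0.5 * (x / sigma) ** 2)

def influence(x, sigma):
    """Derivative of the penalty (influence function): it redescends
    toward zero, so outliers pull only weakly on the estimate."""
    return x / (sigma ** 2 + 0.5 * x ** 2)

inlier = influence(0.5, sigma=1.0)    # small residual: strong pull
outlier = influence(50.0, sigma=1.0)  # gross residual: weak pull
```

In the quadratic case the influence grows linearly without bound, which is exactly why a single occlusion pixel can corrupt a least-squares flow estimate.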

pdf video abstract code [BibTex]



Mixture models for optical flow computation

Jepson, A., Black, M.

In Partitioning Data Sets, DIMACS Workshop, pages: 271-286, (Editors: Ingemar Cox, Pierre Hansen, Bela Julesz), AMS Pub., Providence, RI, April 1993 (incollection)

pdf [BibTex]



Action, representation, and purpose: Re-evaluating the foundations of computational vision

Black, M. J., Aloimonos, Y., Brown, C. M., Horswill, I., Malik, J., Sandini, G., Tarr, M. J.

In International Joint Conference on Artificial Intelligence, IJCAI-93, pages: 1661-1666, Chambery, France, 1993 (inproceedings)

pdf [BibTex]


1990


A model for the detection of motion over time

Black, M. J., Anandan, P.

In Proc. Int. Conf. on Computer Vision, ICCV-90, pages: 33-37, Osaka, Japan, December 1990 (inproceedings)

Abstract
We propose a model for the recovery of visual motion fields from image sequences. Our model exploits three constraints on the motion of a patch in the environment: i) Data Conservation: the intensity structure corresponding to an environmental surface patch changes gradually over time; ii) Spatial Coherence: since surfaces have spatial extent neighboring points have similar motions; iii) Temporal Coherence: the direction and velocity of motion for a surface patch changes gradually. The formulation of the constraints takes into account the possibility of multiple motions at a particular location. We also present a highly parallel computational model for realizing these constraints in which computation occurs locally, knowledge about the motion increases over time, and occlusion and disocclusion boundaries are estimated. An implementation of the model using a stochastic temporal updating scheme is described. Experiments with both synthetic and real imagery are presented.

pdf [BibTex]



Constraints for the early detection of discontinuity from motion

Black, M. J., Anandan, P.

In Proc. National Conf. on Artificial Intelligence, AAAI-90, pages: 1060-1066, Boston, MA, 1990 (inproceedings)

Abstract
Surface discontinuities are detected in a sequence of images by exploiting physical constraints at early stages in the processing of visual motion. To achieve accurate early discontinuity detection we exploit five physical constraints on the presence of discontinuities: i) the shape of the sum of squared differences (SSD) error surface in the presence of surface discontinuities; ii) the change in the shape of the SSD surface due to relative surface motion; iii) the distribution of optic flow in a neighborhood of a discontinuity; iv) the spatial consistency of discontinuities; v) the temporal consistency of discontinuities. The constraints are described, and experimental results on sequences of real and synthetic images are presented. The work has applications in the recovery of environmental structure from motion and in the generation of dense optic flow fields.
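The SSD error surface underlying constraints i) and ii) is simple to compute: match a patch from one frame against displaced patches in the next and record the squared difference at each candidate displacement. A minimal sketch (patch and search sizes are illustrative):

```python
import numpy as np

def ssd_surface(I0, I1, y, x, patch=2, search=3):
    """SSD error surface for the patch around (y, x) in I0, matched
    against I1 over a (2*search+1)^2 grid of displacements. A single
    sharp minimum suggests one motion; multi-modal surfaces can
    indicate multiple motions near a discontinuity."""
    p = I0[y - patch:y + patch + 1, x - patch:x + patch + 1]
    S = np.zeros((2 * search + 1, 2 * search + 1))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            q = I1[y + dy - patch:y + dy + patch + 1,
                   x + dx - patch:x + dx + patch + 1]
            S[dy + search, dx + search] = np.sum((p - q) ** 2)
    return S

# Synthetic pair: the second frame is the first shifted by (1, 2)
rng = np.random.default_rng(0)
I0 = rng.normal(size=(32, 32))
I1 = np.roll(I0, (1, 2), axis=(0, 1))
S = ssd_surface(I0, I1, y=15, x=15)
best = np.unravel_index(np.argmin(S), S.shape)  # offset by `search`
```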

pdf [BibTex]
