

2012


Pottics – The Potts Topic Model for Semantic Image Segmentation

Dann, C., Gehler, P., Roth, S., Nowozin, S.

In Proceedings of the 34th DAGM Symposium, pages: 397-407, Lecture Notes in Computer Science, (Editors: Pinz, Axel and Pock, Thomas and Bischof, Horst and Leberl, Franz), Springer, August 2012 (inproceedings)

code pdf poster [BibTex]


Psoriasis segmentation through chromatic regions and Geometric Active Contours

Bogo, F., Samory, M., Belloni Fortina, A., Piaserico, S., Peserico, E.

In 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC’12), pages: 5388-5391, San Diego, August 2012 (inproceedings)

pdf [BibTex]


PCA-enhanced stochastic optimization methods

Kuznetsova, A., Pons-Moll, G., Rosenhahn, B.

In German Conference on Pattern Recognition (GCPR), August 2012 (inproceedings)

pdf [BibTex]


Quasi-Newton Methods: A New Direction

Hennig, P., Kiefel, M.

In Proceedings of the 29th International Conference on Machine Learning, pages: 25-32, ICML ’12, (Editors: John Langford and Joelle Pineau), Omnipress, New York, NY, USA, July 2012 (inproceedings)

Abstract
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.
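The quadratic-model view in this abstract can be made concrete with the classical BFGS inverse-Hessian update, which the paper reinterprets as approximate Bayesian regression. A minimal sketch (our own illustration, not the authors' code; `bfgs_inverse_update` is a hypothetical helper name):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """BFGS update of the inverse-Hessian estimate H from a step
    s = x_new - x_old and gradient change y = g_new - g_old.
    Enforces the secant condition H_new @ y = s."""
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# On a quadratic with Hessian A, gradient changes obey y = A @ s,
# so successive updates fit the local quadratic model the abstract describes.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
H = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])):
    H = bfgs_inverse_update(H, s, A @ s)
```

After each update the estimate stays symmetric and satisfies the secant condition for the latest step/gradient pair.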

website+code pdf link (url) [BibTex]


Learning Search Based Inference for Object Detection

Gehler, P., Lehmann, A.

In International Conference on Machine Learning (ICML) workshop on Inferning: Interactions between Inference and Learning, Edinburgh, Scotland, UK, July 2012, short version of BMVC11 paper (http://ps.is.tue.mpg.de/publications/31/get_file) (inproceedings)

pdf [BibTex]


Distribution Fields for Tracking

Sevilla-Lara, L., Learned-Miller, E.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, June 2012 (inproceedings)

Abstract
Visual tracking of general objects often relies on the assumption that gradient descent of the alignment function will reach the global optimum. A common technique to smooth the objective function is to blur the image. However, blurring the image destroys image information, which can cause the target to be lost. To address this problem we introduce a method for building an image descriptor using distribution fields (DFs), a representation that allows smoothing the objective function without destroying information about pixel values. We present experimental evidence on the superiority of the width of the basin of attraction around the global optimum of DFs over other descriptors. DFs also allow the representation of uncertainty about the tracked object. This helps in disregarding outliers during tracking (like occlusions or small misalignments) without modeling them explicitly. Finally, this provides a convenient way to aggregate the observations of the object through time and maintain an updated model. We present a simple tracking algorithm that uses DFs and obtains state-of-the-art results on standard benchmarks.
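The descriptor construction sketched in the abstract — explode the image into intensity-bin channels, then smooth — can be illustrated roughly as follows (our own sketch; a box filter stands in for the Gaussian smoothing, and all names are ours):

```python
import numpy as np

def box_blur(a, axis, radius):
    """1-D box filter along the given axis (stand-in for a Gaussian)."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, a)

def distribution_field(img, n_bins=8, radius=1):
    """Explode a grayscale image (values in [0, 1)) into an n_bins-channel
    indicator stack, then smooth spatially so every pixel carries a local
    distribution over intensities instead of a single value."""
    h, w = img.shape
    bins = np.minimum((img * n_bins).astype(int), n_bins - 1)
    df = np.zeros((n_bins, h, w))
    df[bins, np.arange(h)[:, None], np.arange(w)] = 1.0
    df = box_blur(df, axis=1, radius=radius)  # smooth over rows
    df = box_blur(df, axis=2, radius=radius)  # smooth over columns
    return df
```

Smoothing spreads the indicator mass spatially without destroying which intensity values occurred, which is the point the abstract makes against plain image blurring.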

pdf Matlab code [BibTex]


From pictorial structures to deformable structures

Zuffi, S., Freifeld, O., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 3546-3553, IEEE, June 2012 (inproceedings)

Abstract
Pictorial Structures (PS) define a probabilistic model of 2D articulated objects in images. Typical PS models assume an object can be represented by a set of rigid parts connected with pairwise constraints that define the prior probability of part configurations. These models are widely used to represent non-rigid articulated objects such as humans and animals despite the fact that such objects have parts that deform non-rigidly. Here we define a new Deformable Structures (DS) model that is a natural extension of previous PS models and that captures the non-rigid shape deformation of the parts. Each part in a DS model is represented by a low-dimensional shape deformation space and pairwise potentials between parts capture how the shape varies with pose and the shape of neighboring parts. A key advantage of such a model is that it more accurately models object boundaries. This enables image likelihood models that are more discriminative than previous PS likelihoods. This likelihood is learned using training imagery annotated using a DS “puppet.” We focus on a human DS model learned from 2D projections of a realistic 3D human body model and use it to infer human poses in images using a form of non-parametric belief propagation.

pdf sup mat code poster Project Page [BibTex]


Teaching 3D Geometry to Deformable Part Models

Pepik, B., Stark, M., Gehler, P., Schiele, B.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 3362 -3369, IEEE, Providence, RI, USA, June 2012, oral presentation (inproceedings)

pdf DOI Project Page [BibTex]


Branch-and-price global optimization for multi-view multi-object tracking

Leal-Taixé, L., Pons-Moll, G., Rosenhahn, B.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2012 (inproceedings)

project page paper poster [BibTex]


A physically-based approach to reflection separation

Kong, N., Tai, Y., Shin, S. Y.

In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 9-16, June 2012 (inproceedings)

Abstract
We propose a physically-based approach to separate reflection using multiple polarized images with a background scene captured behind glass. The input consists of three polarized images, each captured from the same viewpoint but with a different polarizer angle separated by 45 degrees. The output is the high-quality separation of the reflection and background layers from each of the input images. A main technical challenge for this problem is that the mixing coefficient for the reflection and background layers depends on the angle of incidence and the orientation of the plane of incidence, which are spatially varying over the pixels of an image. Exploiting physical properties of polarization for a double-surfaced glass medium, we propose an algorithm which automatically finds the optimal separation of the reflection and background layers. Through experiments, we demonstrate that our approach can generate results superior to those of previous methods.

Publisher site [BibTex]


High Resolution Surface Reconstruction from Multi-view Aerial Imagery

Calakli, F., Ulusoy, A. O., Restrepo, M. I., Taubin, G., Mundy, J. L.

In 3D Imaging Modeling Processing Visualization Transmission (3DIMPVT), pages: 25-32, IEEE, 2012 (inproceedings)

Abstract
This paper presents a novel framework for surface reconstruction from multi-view aerial imagery of large scale urban scenes, which combines probabilistic volumetric modeling with smooth signed distance surface estimation, to produce very detailed and accurate surfaces. Using a continuous probabilistic volumetric model which allows for explicit representation of ambiguities caused by moving objects, reflective surfaces, areas of constant appearance, and self-occlusions, the algorithm learns the geometry and appearance of a scene from a calibrated image sequence. An online implementation of the Bayesian learning process on GPUs significantly reduces the time required to process a large number of images. The probabilistic volumetric model of occupancy is subsequently used to estimate a smooth approximation of the signed distance function to the surface. This step, which reduces to the solution of a sparse linear system, is very efficient and scalable to large data sets. The proposed algorithm is shown to produce high quality surfaces in challenging aerial scenes where previous methods make large errors in surface localization. The general applicability of the algorithm beyond aerial imagery is confirmed against the Middlebury benchmark.

Video pdf link (url) DOI [BibTex]


Detection and Tracking of Occluded People

(Best Paper Award)

Tang, S., Andriluka, M., Schiele, B.

In British Machine Vision Conference (BMVC), 2012, BMVC Best Paper Award (inproceedings)

PDF [BibTex]


3D Cardiac Segmentation with Pose-Invariant Higher-Order MRFs

Xiang, B., Wang, C., Deux, J., Rahmouni, A., Paragios, N.

In IEEE International Symposium on Biomedical Imaging (ISBI), 2012 (inproceedings)

[BibTex]


Real-time Facial Feature Detection using Conditional Regression Forests

Dantone, M., Gall, J., Fanelli, G., van Gool, L.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2578-2585, IEEE, Providence, RI, USA, 2012 (inproceedings)

code pdf Project Page [BibTex]


Latent Hough Transform for Object Detection

Razavi, N., Gall, J., Kohli, P., van Gool, L.

In European Conference on Computer Vision (ECCV), 7574, pages: 312-325, LNCS, Springer, 2012 (inproceedings)

pdf Project Page [BibTex]


Destination Flow for Crowd Simulation

Pellegrini, S., Gall, J., Sigal, L., van Gool, L.

In Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams, 7585, pages: 162-171, LNCS, Springer, 2012 (inproceedings)

pdf Project Page [BibTex]


From Deformations to Parts: Motion-based Segmentation of 3D Objects

Ghosh, S., Sudderth, E., Loper, M., Black, M.

In Advances in Neural Information Processing Systems 25 (NIPS), pages: 2006-2014, (Editors: P. Bartlett and F.C.N. Pereira and C.J.C. Burges and L. Bottou and K.Q. Weinberger), MIT Press, 2012 (inproceedings)

Abstract
We develop a method for discovering the parts of an articulated object from aligned meshes of the object in various three-dimensional poses. We adapt the distance dependent Chinese restaurant process (ddCRP) to allow nonparametric discovery of a potentially unbounded number of parts, while simultaneously guaranteeing a spatially connected segmentation. To allow analysis of datasets in which object instances have varying 3D shapes, we model part variability across poses via affine transformations. By placing a matrix normal-inverse-Wishart prior on these affine transformations, we develop a ddCRP Gibbs sampler which tractably marginalizes over transformation uncertainty. Analyzing a dataset of humans captured in dozens of poses, we infer parts which provide quantitatively better deformation predictions than conventional clustering methods.

pdf supplemental code poster link (url) Project Page [BibTex]


Segmentation of Vessel Geometries from Medical Images Using GPF Deformable Model

Yeo, S. Y., Xie, X., Sazonov, I., Nithiarasu, P.

In International Conference on Pattern Recognition Applications and Methods, 2012 (inproceedings)

Abstract
We present a method for the reconstruction of vascular geometries from medical images. Image denoising is performed using vessel enhancing diffusion, which can smooth out image noise and enhance vessel structures. The Canny edge detection technique, which produces object edges with single-pixel width, is used for accurate detection of the lumen boundaries. The image gradients are then used to compute the geometric potential field, which gives a global representation of the geometric configuration. The deformable model uses a regional constraint to suppress calcified regions for accurate segmentation of the vessel geometries. The proposed framework shows high accuracy when applied to the segmentation of the carotid arteries from CT images.

[BibTex]


SuperFloxels: A Mid-Level Representation for Video Sequences

Ravichandran, A., Wang, C., Raptis, M., Soatto, S.

In Analysis and Retrieval of Tracked Events and Motion in Imagery Streams Workshop (ARTEMIS) (in conjunction with ECCV 2012), 2012 (inproceedings)

pdf [BibTex]


Implicit Active Contours for N-Dimensional Biomedical Image Segmentation

Yeo, S. Y.

In IEEE International Conference on Systems, Man, and Cybernetics, pages: 2855 - 2860, 2012 (inproceedings)

Abstract
The segmentation of shapes from biomedical images has a wide range of uses such as image based modelling and bioimage analysis. In this paper, an active contour model is proposed for the segmentation of N-dimensional biomedical images. The proposed model uses a curvature smoothing flow and an image attraction force derived from the interactions between the geometries of the active contour model and the image objects. The active contour model is formulated using the level set method so as to handle topological changes automatically. The magnitude and orientation of the image attraction force are based on the relative geometric configurations between the active contour model and the image object boundaries. The vector force field is therefore dynamic, and the active contour model can propagate through narrow structures to segment complex shapes efficiently. The proposed model utilizes pixel interactions across the image domain, which gives a coherent representation of the image object shapes. This allows the active contour model to be robust to image noise and weak object edges. The proposed model is compared against widely used active contour models in the segmentation of anatomical shapes from biomedical images. It is shown that the proposed model has several advantages over existing techniques and can be used for the segmentation of biomedical images efficiently.

[BibTex]


Interactive Object Detection

Yao, A., Gall, J., Leistner, C., van Gool, L.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 3242-3249, IEEE, Providence, RI, USA, 2012 (inproceedings)

video pdf Project Page [BibTex]


Real Time 3D Head Pose Estimation: Recent Achievements and Future Challenges

Fanelli, G., Gall, J., van Gool, L.

In 5th International Symposium on Communications, Control and Signal Processing (ISCCSP), 2012 (inproceedings)

data and code pdf Project Page [BibTex]


Motion Capture of Hands in Action using Discriminative Salient Points

Ballan, L., Taneja, A., Gall, J., van Gool, L., Pollefeys, M.

In European Conference on Computer Vision (ECCV), 7577, pages: 640-653, LNCS, Springer, 2012 (inproceedings)

data video pdf supplementary Project Page [BibTex]


Sparsity Potentials for Detecting Objects with the Hough Transform

Razavi, N., Alvar, N., Gall, J., van Gool, L.

In British Machine Vision Conference (BMVC), pages: 11.1-11.10, (Editors: Bowden, Richard and Collomosse, John and Mikolajczyk, Krystian), BMVA Press, 2012 (inproceedings)

pdf Project Page [BibTex]


Metric Learning from Poses for Temporal Clustering of Human Motion

López-Méndez, A., Gall, J., Casas, J., van Gool, L.

In British Machine Vision Conference (BMVC), pages: 49.1-49.12, (Editors: Bowden, Richard and Collomosse, John and Mikolajczyk, Krystian), BMVA Press, 2012 (inproceedings)

video pdf Project Page [BibTex]


Local Context Priors for Object Proposal Generation

Ristin, M., Gall, J., van Gool, L.

In Asian Conference on Computer Vision (ACCV), 7724, pages: 57-70, LNCS, Springer-Verlag, 2012 (inproceedings)

pdf DOI Project Page [BibTex]


Layered segmentation and optical flow estimation over time

Sun, D., Sudderth, E., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 1768-1775, IEEE, 2012 (inproceedings)

Abstract
Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.

pdf sup mat poster Project Page [BibTex]


Consumer Depth Cameras for Computer Vision - Research Topics and Applications

Fossati, A., Gall, J., Grabner, H., Ren, X., Konolige, K.

Advances in Computer Vision and Pattern Recognition, Springer, 2012 (book)

workshop publisher's site [BibTex]


Spatial Measures between Human Poses for Classification and Understanding

Hauberg, S., Pedersen, K. S.

In Articulated Motion and Deformable Objects, 7378, pages: 26-36, LNCS, (Editors: Perales, Francisco J. and Fisher, Robert B. and Moeslund, Thomas B.), Springer Berlin Heidelberg, 2012 (inproceedings)

Publishers site Project Page [BibTex]


A Geometric Take on Metric Learning

Hauberg, S., Freifeld, O., Black, M. J.

In Advances in Neural Information Processing Systems (NIPS) 25, pages: 2033-2041, (Editors: P. Bartlett and F.C.N. Pereira and C.J.C. Burges and L. Bottou and K.Q. Weinberger), MIT Press, 2012 (inproceedings)

Abstract
Multi-metric learning techniques learn local metric tensors in different parts of a feature space. With such an approach, even simple classifiers can be competitive with the state-of-the-art because the distance measure locally adapts to the structure of the data. The learned distance measure is, however, non-metric, which has prevented multi-metric learning from generalizing to tasks such as dimensionality reduction and regression in a principled way. We prove that, with appropriate changes, multi-metric learning corresponds to learning the structure of a Riemannian manifold. We then show that this structure gives us a principled way to perform dimensionality reduction and regression according to the learned metrics. Algorithmically, we provide the first practical algorithm for computing geodesics according to the learned metrics, as well as algorithms for computing exponential and logarithmic maps on the Riemannian manifold. Together, these tools let many Euclidean algorithms take advantage of multi-metric learning. We illustrate the approach on regression and dimensionality reduction tasks that involve predicting measurements of the human body from shape data.
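The non-metricity mentioned in the abstract is easy to see in a toy sketch: measuring a displacement in the metric tensor learned at the starting point makes "distances" asymmetric (illustrative code; the function name and the toy metric field are ours, not the paper's):

```python
import numpy as np

def multi_metric_dist(x, y, metric_at):
    """Distance from x to y measured in the metric tensor learned at x.
    Because the tensor varies with position, this is generally asymmetric --
    the non-metricity the paper repairs via Riemannian geometry."""
    d = y - x
    M = metric_at(x)
    return float(np.sqrt(d @ M @ d))

# Toy field of local metrics: stretch the first axis more as x[0] grows.
metric_at = lambda x: np.diag([1.0 + x[0] ** 2, 1.0])
```

With this field, the distance from the origin to (2, 0) differs from the distance back, because each is measured in a different local tensor.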

PDF Youtube Suppl. material Poster Project Page [BibTex]

1998


The Digital Office: Overview

Black, M., Berard, F., Jepson, A., Newman, W., Saund, E., Socher, G., Taylor, M.

In AAAI Spring Symposium on Intelligent Environments, pages: 1-6, Stanford, March 1998 (inproceedings)

pdf [BibTex]


A framework for modeling appearance change in image sequences

Black, M. J., Fleet, D. J., Yacoob, Y.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 660-667, Mumbai, India, January 1998 (inproceedings)

Abstract
Image "appearance" may change over time due to a variety of causes such as 1) object or camera motion; 2) generic photometric events including variations in illumination (e.g. shadows) and specular reflections; and 3) "iconic changes" which are specific to the objects being viewed and include complex occlusion events and changes in the material properties of the objects. We propose a general framework for representing and recovering these "appearance changes" in an image sequence as a "mixture" of different causes. The approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion.

pdf video [BibTex]


Parameterized modeling and recognition of activities

Yacoob, Y., Black, M. J.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 120-127, Mumbai, India, January 1998 (inproceedings)

Abstract
A framework for modeling and recognition of temporal activities is proposed. The modeling of sets of exemplar activities is achieved by parameterizing their representation in the form of principal components. Recognition of spatio-temporal variants of modeled activities is achieved by parameterizing the search in the space of admissible transformations that the activities can undergo. Experiments on recognition of articulated and deformable object motion from image motion parameters are presented.
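The exemplar parameterization by principal components can be sketched as plain PCA over flattened activity trajectories (our illustration, not the authors' code; all names are ours):

```python
import numpy as np

def activity_basis(trajectories, k=2):
    """PCA basis for exemplar activities; rows are flattened trajectories.
    An activity is then summarized by the mean plus k PC coefficients."""
    mean = trajectories.mean(axis=0)
    _, _, Vt = np.linalg.svd(trajectories - mean, full_matrices=False)
    return mean, Vt[:k]

def encode(traj, mean, basis):
    # Project a trajectory onto the principal components.
    return basis @ (traj - mean)

def decode(coeffs, mean, basis):
    # Reconstruct a trajectory from its PC coefficients.
    return mean + coeffs @ basis
```

Recognition then amounts to searching over the coefficients (and admissible spatio-temporal transformations) rather than over raw trajectories.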

pdf [BibTex]


Motion feature detection using steerable flow fields

Fleet, D. J., Black, M. J., Jepson, A. D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-98, pages: 274-281, IEEE, Santa Barbara, CA, 1998 (inproceedings)

Abstract
The estimation and detection of occlusion boundaries and moving bars are important and challenging problems in image sequence analysis. Here, we model such motion features as linear combinations of steerable basis flow fields. These models constrain the interpretation of image motion, and are used in the same way as translational or affine motion models. We estimate the subspace coefficients of the motion feature models directly from spatiotemporal image derivatives using a robust regression method. From the subspace coefficients we detect the presence of a motion feature and solve for the orientation of the feature and the relative velocities of the surfaces. Our method does not require the prior computation of optical flow and recovers accurate estimates of orientation and velocity.
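Representing motion features as linear combinations of basis flow fields reduces, in the simplest setting, to solving for subspace coefficients by least squares (the paper estimates them directly from image derivatives with robust regression; this sketch assumes a given flow and all names are ours):

```python
import numpy as np

def subspace_coefficients(flow, basis):
    """Least-squares coefficients a minimizing ||flow - basis @ a||^2.
    flow: (n,) flattened flow field; basis: (n, k) basis flow fields."""
    a, *_ = np.linalg.lstsq(basis, flow, rcond=None)
    return a

# Toy 1-D example: a constant "translation" basis flow and a linear
# "expansion" basis flow as hypothetical stand-ins for a learned basis.
translation = np.ones(6)
expansion = np.arange(6.0)
basis = np.column_stack([translation, expansion])
coeffs = subspace_coefficients(2 * translation - 3 * expansion, basis)
```

When the observed flow lies in the span of the basis, the coefficients recover the mixture exactly.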

pdf [BibTex]


Visual surveillance of human activity

L. Davis, S. F., Harwood, D., Yacoob, Y., Haritaoglu, I., Black, M.

In Asian Conference on Computer Vision, ACCV, 1998 (inproceedings)

pdf [BibTex]


A Probabilistic framework for matching temporal trajectories: Condensation-based recognition of gestures and expressions

Black, M. J., Jepson, A. D.

In European Conf. on Computer Vision, ECCV-98, pages: 909-924, Freiburg, Germany, 1998 (inproceedings)

pdf [BibTex]


Recognizing temporal trajectories using the Condensation algorithm

Black, M. J., Jepson, A. D.

In Int. Conf. on Automatic Face and Gesture Recognition, pages: 16-21, Nara, Japan, 1998 (inproceedings)

pdf [BibTex]

1993


Mixture models for optical flow computation

Jepson, A., Black, M.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-93, pages: 760-761, New York, NY, June 1993 (inproceedings)

Abstract
The computation of optical flow relies on merging information available over an image patch to form an estimate of 2-D image velocity at a point. This merging process raises many issues. These include the treatment of outliers in component velocity measurements and the modeling of multiple motions within a patch which arise from occlusion boundaries or transparency. A new approach for dealing with these issues is presented. It is based on the use of a probabilistic mixture model to explicitly represent multiple motions within a patch. A simple extension of the EM-algorithm is used to compute a maximum likelihood estimate for the various motion parameters. Preliminary experiments indicate that this approach is computationally efficient, and that it can provide robust estimates of the optical flow values in the presence of outliers and multiple motions.
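The EM procedure described above can be sketched for scalar velocity measurements and two motions (a minimal illustration with fixed component variance; the function name is ours):

```python
import numpy as np

def em_two_motions(v, sigma=0.5, iters=50):
    """EM for a two-component Gaussian mixture over scalar component-velocity
    measurements v; returns the two motion estimates and the soft ownership
    (responsibility) of each motion for each measurement."""
    mu = np.array([v.min(), v.max()])           # crude initialization
    for _ in range(iters):
        # E-step: responsibilities under fixed-variance Gaussians.
        logp = -0.5 * ((v[:, None] - mu[None, :]) / sigma) ** 2
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted mean velocity per motion.
        mu = (r * v[:, None]).sum(axis=0) / r.sum(axis=0)
    return mu, r
```

Measurements arising from two motions (as at an occlusion boundary or under transparency) are softly assigned to separate components instead of being averaged into a single wrong velocity.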

pdf tech report [BibTex]


A framework for the robust estimation of optical flow

(Helmholtz Prize)

Black, M. J., Anandan, P.

In Fourth International Conf. on Computer Vision, ICCV-93, pages: 231-236, Berlin, Germany, May 1993 (inproceedings)

Abstract
Most approaches for estimating optical flow assume that, within a finite image region, only a single motion is present. This single motion assumption is violated in common situations involving transparency, depth discontinuities, independently moving objects, shadows, and specular reflections. To robustly estimate optical flow, the single motion assumption must be relaxed. This work describes a framework based on robust estimation that addresses violations of the brightness constancy and spatial smoothness assumptions caused by multiple motions. We show how the robust estimation framework can be applied to standard formulations of the optical flow problem thus reducing their sensitivity to violations of their underlying assumptions. The approach has been applied to three standard techniques for recovering optical flow: area-based regression, correlation, and regularization with motion discontinuities. This work focuses on the recovery of multiple parametric motion models within a region as well as the recovery of piecewise-smooth flow fields and provides examples with natural and synthetic image sequences.
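The robust-estimation idea can be illustrated on a 1-D location problem: replacing the quadratic penalty with the Lorentzian used in this line of work, and solving by iteratively reweighted least squares, lets outlying measurements be down-weighted rather than averaged in (a sketch under our own naming, not the paper's code):

```python
import numpy as np

def lorentzian(x, sigma):
    """Robust penalty rho(x) = log(1 + 0.5 * (x / sigma)**2)."""
    return np.log1p(0.5 * (x / sigma) ** 2)

def irls_location(v, sigma=0.1, iters=100):
    """1-D location estimate under the Lorentzian penalty via iteratively
    reweighted least squares; the weight psi(x)/x = 2/(2*sigma^2 + x^2)
    shrinks for large residuals, so outliers barely move the estimate."""
    mu = np.median(v)
    for _ in range(iters):
        w = 1.0 / (2.0 * sigma ** 2 + (v - mu) ** 2)
        mu = (w * v).sum() / w.sum()
    return mu
```

A single grossly wrong measurement drags the ordinary mean far from the inliers but leaves the robust estimate essentially unchanged.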

pdf video abstract code [BibTex]


Action, representation, and purpose: Re-evaluating the foundations of computational vision

Black, M. J., Aloimonos, Y., Brown, C. M., Horswill, I., Malik, J., Sandini, G., Tarr, M. J.

In International Joint Conference on Artificial Intelligence, IJCAI-93, pages: 1661-1666, Chambery, France, 1993 (inproceedings)

pdf [BibTex]

1991


Dynamic motion estimation and feature extraction over long image sequences

Black, M. J., Anandan, P.

In Proc. IJCAI Workshop on Dynamic Scene Understanding, Sydney, Australia, August 1991 (inproceedings)

[BibTex]


Robust dynamic motion estimation over time

(IEEE Computer Society Outstanding Paper Award)

Black, M. J., Anandan, P.

In Proc. Computer Vision and Pattern Recognition, CVPR-91, pages: 296-302, Maui, Hawaii, June 1991 (inproceedings)

Abstract
This paper presents a novel approach to incrementally estimating visual motion over a sequence of images. We start by formulating constraints on image motion to account for the possibility of multiple motions. This is achieved by exploiting the notions of weak continuity and robust statistics in the formulation of the minimization problem. The resulting objective function is non-convex. Traditional stochastic relaxation techniques for minimizing such functions prove inappropriate for the task. We present a highly parallel incremental stochastic minimization algorithm which has a number of advantages over previous approaches. The incremental nature of the scheme makes it truly dynamic and permits the detection of occlusion and disocclusion boundaries.

pdf video abstract [BibTex]

1990


A model for the detection of motion over time

Black, M. J., Anandan, P.

In Proc. Int. Conf. on Computer Vision, ICCV-90, pages: 33-37, Osaka, Japan, December 1990 (inproceedings)

Abstract
We propose a model for the recovery of visual motion fields from image sequences. Our model exploits three constraints on the motion of a patch in the environment: i) Data Conservation: the intensity structure corresponding to an environmental surface patch changes gradually over time; ii) Spatial Coherence: since surfaces have spatial extent neighboring points have similar motions; iii) Temporal Coherence: the direction and velocity of motion for a surface patch changes gradually. The formulation of the constraints takes into account the possibility of multiple motions at a particular location. We also present a highly parallel computational model for realizing these constraints in which computation occurs locally, knowledge about the motion increases over time, and occlusion and disocclusion boundaries are estimated. An implementation of the model using a stochastic temporal updating scheme is described. Experiments with both synthetic and real imagery are presented.

pdf [BibTex]


Constraints for the early detection of discontinuity from motion

Black, M. J., Anandan, P.

In Proc. National Conf. on Artificial Intelligence, AAAI-90, pages: 1060-1066, Boston, MA, 1990 (inproceedings)

Abstract
Surface discontinuities are detected in a sequence of images by exploiting physical constraints at early stages in the processing of visual motion. To achieve accurate early discontinuity detection we exploit five physical constraints on the presence of discontinuities: i) the shape of the sum of squared differences (SSD) error surface in the presence of surface discontinuities; ii) the change in the shape of the SSD surface due to relative surface motion; iii) distribution of optic flow in a neighborhood of a discontinuity; iv) spatial consistency of discontinuities; v) temporal consistency of discontinuities. The constraints are described, and experimental results on sequences of real and synthetic images are presented. The work has applications in the recovery of environmental structure from motion and in the generation of dense optic flow fields.
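Constraint i) refers to the SSD error surface of a patch over candidate displacements; computing it is straightforward (our sketch, assuming grayscale float images and a patch fully inside the image):

```python
import numpy as np

def ssd_surface(img0, img1, y, x, half=2, search=3):
    """Sum-of-squared-differences error surface for the patch of img0
    centred at (y, x), over integer displacements in [-search, search]^2."""
    p = img0[y - half:y + half + 1, x - half:x + half + 1]
    out = np.zeros((2 * search + 1, 2 * search + 1))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            q = img1[y + dy - half:y + dy + half + 1,
                     x + dx - half:x + dx + half + 1]
            out[dy + search, dx + search] = ((p - q) ** 2).sum()
    return out
```

For a patch on a single smoothly textured surface the surface has one sharp minimum at the true displacement; near a discontinuity or under transparency the shape degrades in the ways the constraints above exploit.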

pdf [BibTex]