

2004


Automatic spike sorting for neural decoding

Wood, F. D., Fellows, M., Donoghue, J. P., Black, M. J.

In Proc. IEEE Engineering in Medicine and Biology Society, pages: 4009-4012, September 2004 (inproceedings)

pdf [BibTex]



Closed-loop neural control of cursor motion using a Kalman filter

Wu, W., Shaikhouni, A., Donoghue, J. P., Black, M. J.

In Proc. IEEE Engineering in Medicine and Biology Society, pages: 4126-4129, September 2004 (inproceedings)

pdf [BibTex]



The dense estimation of motion and appearance in layers

Yalcin, H., Black, M. J., Fablet, R.

In IEEE Workshop on Image and Video Registration, June 2004 (inproceedings)

pdf [BibTex]



3D human limb detection using space carving and multi-view eigen models

Bhatia, S., Sigal, L., Isard, M., Black, M. J.

In IEEE Workshop on Articulated and Nonrigid Motion, June 2004 (inproceedings)

pdf [BibTex]



Tracking loose-limbed people

Sigal, L., Bhatia, S., Roth, S., Black, M. J., Isard, M.

In IEEE Conf. on Computer Vision and Pattern Recognition, 1, pages: 421-428, June 2004 (inproceedings)

pdf [BibTex]



Gibbs likelihoods for Bayesian tracking

Roth, S., Sigal, L., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, 1, pages: 886-893, June 2004 (inproceedings)

pdf [BibTex]



A direct brain-machine interface for 2D cursor control using a Kalman filter

Shaikhouni, A., Wu, W., Morris, D. S., Donoghue, J. P., Black, M. J.

Society for Neuroscience, 2004, Online (conference)

abstract [BibTex]


1997


Robust anisotropic diffusion and sharpening of scalar and vector images

Black, M. J., Sapiro, G., Marimont, D., Heeger, D.

In Int. Conf. on Image Processing, ICIP, 1, pages: 263-266, Santa Barbara, CA, October 1997 (inproceedings)

Abstract
Relations between anisotropic diffusion and robust statistics are described. We show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in the image. We extend the framework to vector-valued images and show applications to robust image sharpening.

pdf publisher site [BibTex]
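
The edge-stopping idea summarized in the abstract above lends itself to a short sketch. The Python snippet below is illustrative only, not the authors' code: it runs a Perona-Malik-style diffusion update in which each neighbour difference is weighted by an edge-stopping function derived from Tukey's biweight, so differences at or beyond the scale sigma stop diffusing entirely. The parameter values (sigma, dt, n_iter) and the wrap-around border handling are assumptions made for brevity.

```python
import numpy as np

def tukey_g(x, sigma):
    """Edge-stopping function g(x) = psi(x)/x from Tukey's biweight:
    differences with |x| >= sigma contribute nothing, so diffusion halts at edges."""
    g = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < sigma
    g[inside] = 0.5 * (1.0 - (x[inside] / sigma) ** 2) ** 2
    return g

def robust_anisotropic_diffusion(img, n_iter=50, sigma=20.0, dt=0.2):
    """Estimate a piecewise-smooth image: smooth within regions, preserve boundaries."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four nearest neighbours (wrap-around borders for brevity)
        north = np.roll(u, -1, axis=0) - u
        south = np.roll(u, 1, axis=0) - u
        east = np.roll(u, -1, axis=1) - u
        west = np.roll(u, 1, axis=1) - u
        # each neighbour difference is weighted by the edge-stopping function
        u = u + dt * (tukey_g(north, sigma) * north + tukey_g(south, sigma) * south +
                      tukey_g(east, sigma) * east + tukey_g(west, sigma) * west)
    return u
```

Neighbour differences flagged as outliers (|x| >= sigma) mark the boundaries between piecewise-smooth regions, which is the edge-detection interpretation mentioned in the abstract.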



Robust anisotropic diffusion: Connections between robust statistics, line processing, and anisotropic diffusion

Black, M. J., Sapiro, G., Marimont, D., Heeger, D.

In Scale-Space Theory in Computer Vision, Scale-Space’97, pages: 323-326, LNCS 1252, Springer Verlag, Utrecht, the Netherlands, July 1997 (inproceedings)

pdf [BibTex]



Learning parameterized models of image motion

Black, M. J., Yacoob, Y., Jepson, A. D., Fleet, D. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-97, pages: 561-567, Puerto Rico, June 1997 (inproceedings)

Abstract
A framework for learning parameterized models of optical flow from image sequences is presented. A class of motions is represented by a set of orthogonal basis flow fields that are computed from a training set using principal component analysis. Many complex image motions can be represented by a linear combination of a small number of these basis flows. The learned motion models may be used for optical flow estimation and for model-based recognition. For optical flow estimation we describe a robust, multi-resolution scheme for directly computing the parameters of the learned flow models from image derivatives. As examples we consider learning motion discontinuities, non-rigid motion of human mouths, and articulated human motion.

pdf [BibTex]
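
As a rough sketch of the learning step described in the abstract (not the paper's implementation), the snippet below computes orthogonal basis flow fields by PCA over a set of example flows and writes a new flow as a linear combination of the first k of them. The array shapes, the choice of k, and the function names are illustrative assumptions; the paper's robust, multi-resolution estimation of the coefficients directly from image derivatives is not shown.

```python
import numpy as np

def learn_basis_flows(training_flows, k=8):
    """PCA over a training set of flow fields.

    training_flows: array of shape (N, H, W, 2), N example flow fields.
    Returns (mean_flow, basis) with basis of shape (k, H, W, 2)."""
    N, H, W, _ = training_flows.shape
    X = training_flows.reshape(N, -1)          # each flow field as one long vector
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean.reshape(H, W, 2), Vt[:k].reshape(k, H, W, 2)

def project_flow(flow, mean_flow, basis):
    """Coefficients of the linear combination that best approximates `flow`."""
    return basis.reshape(basis.shape[0], -1) @ (flow - mean_flow).reshape(-1)

def reconstruct_flow(coeffs, mean_flow, basis):
    """Flow synthesised from the learned model: mean + sum_i a_i * basis_i."""
    return mean_flow + np.tensordot(coeffs, basis, axes=1)
```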



Analysis of gesture and action in technical talks for video indexing

Ju, S. X., Black, M. J., Minneman, S., Kimber, D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-97, pages: 595-601, Puerto Rico, June 1997 (inproceedings)

Abstract
In this paper, we present an automatic system for analyzing and annotating video sequences of technical talks. Our method uses a robust motion estimation technique to detect key frames and segment the video sequence into subsequences containing a single overhead slide. The subsequences are stabilized to remove motion that occurs when the speaker adjusts their slides. Any changes remaining between frames in the stabilized sequences may be due to speaker gestures such as pointing or writing, and we use active contours to automatically track these potential gestures. Given the constrained domain, we define a simple "vocabulary" of actions that can easily be recognized based on the active contour shape and motion. The recognized actions provide a rich annotation of the sequence that can be used to access a condensed version of the talk from a web page.

pdf [BibTex]
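
A minimal sketch of the key-frame segmentation step, under simplifying assumptions: the paper uses robust parametric motion estimation to stabilise the slides, whereas the code below substitutes a brute-force integer-translation search and a median residual. The threshold and search range are invented for illustration, and the active-contour gesture tracking and action recognition stages are not shown.

```python
import numpy as np

def best_shift(prev, cur, max_shift=8):
    """Crude global-translation search (a stand-in for robust parametric motion
    estimation): find the integer shift of `cur` that best matches `prev` and
    report the remaining median residual."""
    best = (0, 0, np.inf)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
            err = np.median(np.abs(shifted.astype(float) - prev.astype(float)))
            if err < best[2]:
                best = (dy, dx, err)
    return best

def segment_into_slides(frames, residual_thresh=15.0):
    """A frame whose change cannot be explained by sliding the previous slide
    around is treated as a key frame (a new slide)."""
    keys = [0]
    for t in range(1, len(frames)):
        _, _, residual = best_shift(frames[t - 1], frames[t])
        if residual > residual_thresh:
            keys.append(t)
    return keys
```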



Modeling appearance change in image sequences

Black, M. J., Yacoob, Y., Fleet, D. J.

In Advances in Visual Form Analysis, pages: 11-20, Proceedings of the Third International Workshop on Visual Form, Capri, Italy, May 1997 (inproceedings)

abstract [BibTex]


1990


A model for the detection of motion over time

Black, M. J., Anandan, P.

In Proc. Int. Conf. on Computer Vision, ICCV-90, pages: 33-37, Osaka, Japan, December 1990 (inproceedings)

Abstract
We propose a model for the recovery of visual motion fields from image sequences. Our model exploits three constraints on the motion of a patch in the environment: i) Data Conservation: the intensity structure corresponding to an environmental surface patch changes gradually over time; ii) Spatial Coherence: since surfaces have spatial extent, neighboring points have similar motions; iii) Temporal Coherence: the direction and velocity of motion for a surface patch change gradually. The formulation of the constraints takes into account the possibility of multiple motions at a particular location. We also present a highly parallel computational model for realizing these constraints in which computation occurs locally, knowledge about the motion increases over time, and occlusion and disocclusion boundaries are estimated. An implementation of the model using a stochastic temporal updating scheme is described. Experiments with both synthetic and real imagery are presented.

pdf [BibTex]
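
One conventional way to make the three constraints concrete is as a single robust objective over the flow field; the formulation below is a sketch in the spirit of the abstract, not the paper's exact notation or weighting:

\[
E(\mathbf{u}) = \sum_{\mathbf{x}} \Big[
\underbrace{\rho_D\big(I(\mathbf{x}+\mathbf{u}(\mathbf{x}),\,t{+}1)-I(\mathbf{x},t)\big)}_{\text{data conservation}}
+ \lambda_S \sum_{\mathbf{y}\in\mathcal{N}(\mathbf{x})} \underbrace{\rho_S\big(\mathbf{u}(\mathbf{x})-\mathbf{u}(\mathbf{y})\big)}_{\text{spatial coherence}}
+ \lambda_T\, \underbrace{\rho_T\big(\mathbf{u}(\mathbf{x},t)-\mathbf{u}(\mathbf{x},t{-}1)\big)}_{\text{temporal coherence}}
\Big],
\]

where \(\mathcal{N}(\mathbf{x})\) is a spatial neighborhood of \(\mathbf{x}\) and \(\rho_D, \rho_S, \rho_T\) are error norms; choosing them to be robust is what lets a single location tolerate outlying measurements, e.g. multiple motions or occlusion.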



Constraints for the early detection of discontinuity from motion

Black, M. J., Anandan, P.

In Proc. National Conf. on Artificial Intelligence, AAAI-90, pages: 1060-1066, Boston, MA, 1990 (inproceedings)

Abstract
Surface discontinuities are detected in a sequence of images by exploiting physical constraints at early stages in the processing of visual motion. To achieve accurate early discontinuity detection, we exploit five physical constraints on the presence of discontinuities: i) the shape of the sum of squared differences (SSD) error surface in the presence of surface discontinuities; ii) the change in the shape of the SSD surface due to relative surface motion; iii) the distribution of optic flow in a neighborhood of a discontinuity; iv) spatial consistency of discontinuities; v) temporal consistency of discontinuities. The constraints are described, and experimental results on sequences of real and synthetic images are presented. The work has applications in the recovery of environmental structure from motion and in the generation of dense optic flow fields.

pdf [BibTex]
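
For concreteness, a minimal sketch (illustrative only, not the paper's implementation) of the SSD error surface whose shape the first two constraints examine; the window size, search range, and border handling are assumptions:

```python
import numpy as np

def ssd_surface(prev, cur, y, x, win=3, search=4):
    """SSD error surface at pixel (y, x): match a (2*win+1)^2 patch from `prev`
    against displaced patches in `cur`.  Away from discontinuities the surface
    has a single sharp minimum; near a surface discontinuity it is typically
    elongated or multi-modal, which is what constraints i) and ii) examine.
    Assumes (y, x) lies at least win + search pixels from the image border."""
    patch = prev[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    surf = np.empty((2 * search + 1, 2 * search + 1))
    for i, dy in enumerate(range(-search, search + 1)):
        for j, dx in enumerate(range(-search, search + 1)):
            cand = cur[y + dy - win:y + dy + win + 1,
                       x + dx - win:x + dx + win + 1].astype(float)
            surf[i, j] = np.sum((patch - cand) ** 2)
    return surf
```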
