

2008


Learning Optical Flow

Sun, D., Roth, S., Lewis, J., Black, M. J.

In European Conf. on Computer Vision, ECCV, pages: 83-97, LNCS 5304, (Editors: Forsyth, D. and Torr, P. and Zisserman, A.), Springer-Verlag, October 2008 (inproceedings)

Abstract
Assumptions of brightness constancy and spatial smoothness underlie most optical flow estimation methods. In contrast to standard heuristic formulations, we learn a statistical model of both brightness constancy error and the spatial properties of optical flow using image sequences with associated ground truth flow fields. The result is a complete probabilistic model of optical flow. Specifically, the ground truth enables us to model how the assumption of brightness constancy is violated in naturalistic sequences, resulting in a probabilistic model of "brightness inconstancy". We also generalize previous high-order constancy assumptions, such as gradient constancy, by modeling the constancy of responses to various linear filters in a high-order random field framework. These filters are free variables that can be learned from training data. Additionally we study the spatial structure of the optical flow and how motion boundaries are related to image intensity boundaries. Spatial smoothness is modeled using a Steerable Random Field, where spatial derivatives of the optical flow are steered by the image brightness structure. These models provide a statistical motivation for previous methods and enable the learning of all parameters from training data. All proposed models are quantitatively compared on the Middlebury flow dataset.
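
For readers skimming, the model sketched in the abstract can be summarized (in notation of my own choosing, not taken from the paper) as a posterior over the flow field that factors into a learned data term and a learned, image-steered spatial prior:

    \[
    p(\mathbf{u} \mid I_1, I_2) \;\propto\;
    \underbrace{p(I_2 \mid \mathbf{u}, I_1)}_{\text{learned brightness (in)constancy of filter responses}}
    \;\times\;
    \underbrace{p(\mathbf{u} \mid I_1)}_{\text{Steerable Random Field spatial prior}}
    \]

MAP estimation under such a factorization reduces to the familiar data-plus-smoothness energies, which is the sense in which the model provides a statistical motivation for earlier heuristic formulations.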

pdf Springerlink version [BibTex]



Probabilistic Roadmap Method and Real Time Gait Changing Technique Implementation for Travel Time Optimization on a Designed Six-legged Robot

Ahmad, A., Dhang, N.

pages: 1-5, October 2008 (inproceedings)

Abstract
This paper presents the design and development of a six-legged robot with a total of 12 degrees of freedom, two in each limb, and the implementation of an 'obstacle and undulated terrain-based' probabilistic roadmap method for motion planning of this hexapod, which is able to negotiate large undulations as obstacles. The novelty of this implementation is that it does not require a complete view of the robot's configuration space at any given time during traversal. It generates a map of the area within visibility range and finds the most suitable point in that field of view to serve as the next node of the algorithm. Undulations that are small enough are automatically 'run over' as part of the terrain and not treated as obstacles. The traversal between nodes is optimized by taking the shortest path and the most suitable gait the hexapod can assume at that instant. This is a further novel contribution: a real-time gait-changing technique that optimizes travel time. Each limb can swing in the robot's X-Y plane, and the lower link of the limb can move in the robot's Z plane through a four-bar mechanism. A GUI-based server, 'Yellow Ladybird' (also the name of the hexapod), provides real-time monitoring and communicates the final destination coordinates to the robot.
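
As a rough illustration of the visibility-limited node selection described in the abstract, the sketch below samples candidate points inside the currently visible disc, discards those blocked by undulations too large to be run over, and picks the candidate closest to the goal. This is a sketch under my own assumptions (sensing radius, clearance, run-over height, and grid sampling are all hypothetical), not the authors' implementation.

    import math

    def next_node(current, goal, obstacles, sense_radius=2.0,
                  max_runover_height=0.05, step=0.25, clearance=0.3):
        """Pick the next roadmap node from the locally visible region.

        current, goal: (x, y) tuples; obstacles: list of (x, y, height).
        Undulations lower than max_runover_height are treated as terrain,
        not as obstacles (all numeric values here are assumed, not from the paper).
        """
        blocking = [(ox, oy) for ox, oy, h in obstacles
                    if h > max_runover_height
                    and math.dist((ox, oy), current) <= sense_radius]
        best, best_cost = None, float("inf")
        n = int(2 * sense_radius / step)
        for i in range(n + 1):
            for j in range(n + 1):
                cand = (current[0] - sense_radius + i * step,
                        current[1] - sense_radius + j * step)
                if math.dist(cand, current) > sense_radius:
                    continue            # outside the visible disc
                if any(math.dist(cand, b) < clearance for b in blocking):
                    continue            # too close to a large undulation
                cost = math.dist(cand, goal)   # prefer progress toward the destination
                if cost < best_cost:
                    best, best_cost = cand, cost
        return best                      # None if nothing in view is reachable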

link (url) [BibTex]


The naked truth: Estimating body shape under clothing

Balan, A., Black, M. J.

In European Conf. on Computer Vision, ECCV, pages: 15-29, LNCS 5304, (Editors: Forsyth, D. and Torr, P. and Zisserman, A.), Springer-Verlag, Marseille, France, October 2008 (inproceedings)

Abstract
We propose a method to estimate the detailed 3D shape of a person from images of that person wearing clothing. The approach exploits a model of human body shapes that is learned from a database of over 2000 range scans. We show that the parameters of this shape model can be recovered independently of body pose. We further propose a generalization of the visual hull to account for the fact that observed silhouettes of clothed people do not provide a tight bound on the true 3D shape. With clothed subjects, different poses provide different constraints on the possible underlying 3D body shape. We consequently combine constraints across pose to more accurately estimate 3D body shape in the presence of occluding clothing. Finally we use the recovered 3D shape to estimate the gender of subjects and then employ gender-specific body models to refine our shape estimates. Results on a novel database of thousands of images of clothed and "naked" subjects, as well as sequences from the HumanEva dataset, suggest the method may be accurate enough for biometric shape analysis in video.

pdf pdf with higher quality images Springerlink version YouTube video on applications data slides [BibTex]



Dynamic time warping for binocular hand tracking and reconstruction

Romero, J., Kragic, D., Kyrki, V., Argyros, A.

In IEEE International Conference on Robotics and Automation, ICRA, pages: 2289-2294, May 2008 (inproceedings)

pdf [BibTex]



Simultaneous Visual Recognition of Manipulation Actions and Manipulated Objects

Kjellström, H., Romero, J., Martinez, D., Kragic, D.

In European Conference on Computer Vision, ECCV, pages: 336-349, 2008 (inproceedings)

pdf [BibTex]



Tuning analysis of motor cortical neurons in a person with paralysis during performance of visually instructed cursor control tasks

Kim, S., Simeral, J. D., Hochberg, L. R., Truccolo, W., Donoghue, J., Friehs, G. M., Black, M. J.

2008 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2008, Online (conference)

[BibTex]



Infinite Kernel Learning

Gehler, P., Nowozin, S.

In Proceedings of NIPS 2008 Workshop on "Kernel Learning: Automatic Selection of Optimal Kernels", 2008 (inproceedings)

project page pdf [BibTex]



Visual Recognition of Grasps for Human-to-Robot Mapping

Kjellström, H., Romero, J., Kragic, D.

In IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pages: 3192-3199, 2008 (inproceedings)

pdf [BibTex]



More than two years of intracortically-based cursor control via a neural interface system

Hochberg, L. R., Simeral, J. D., Kim, S., Stein, J., Friehs, G. M., Black, M. J., Donoghue, J. P.

2008 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2008, Online (conference)

[BibTex]



Decoding of reach and grasp from MI population spiking activity using a low-dimensional model of hand and arm posture

Yadollahpour, P., Shakhnarovich, G., Vargas-Irwin, C., Donoghue, J. P., Black, M. J.

2008 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2008, Online (conference)

[BibTex]



Neural activity in the motor cortex of humans with tetraplegia

Donoghue, J., Simeral, J., Black, M., Kim, S., Truccolo, W., Hochberg, L.

AREADNE Research in Encoding And Decoding of Neural Ensembles, June, Santorini, Greece, 2008 (conference)

[BibTex]



Nonrigid Structure from Motion in Trajectory Space

Akhter, I., Sheikh, Y., Khan, S., Kanade, T.

In Neural Information Processing Systems, pages: 41-48, 2008 (inproceedings)

Abstract
Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes, which have to be estimated anew for each video sequence. In contrast, we propose that the evolving 3D structure be described by a linear combination of basis trajectories. The principal advantage of this approach is that we do not need to estimate any basis vectors during computation. We show that generic bases over trajectories, such as the Discrete Cosine Transform (DCT) basis, can be used to compactly describe most real motions. This results in a significant reduction in unknowns, and corresponding stability in estimation. We report empirical performance, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions including piece-wise rigid motion, partially nonrigid motion (such as a facial expression), and highly nonrigid motion (such as a person dancing).
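
A minimal sketch of the central object, a truncated DCT trajectory basis, is given below; the frame count, number of basis trajectories, and variable names are my own choices for illustration, not taken from the paper.

    import numpy as np

    def dct_trajectory_basis(F, K):
        """Return an F x K matrix whose columns are the first K DCT-II basis trajectories."""
        t = np.arange(F)
        basis = np.zeros((F, K))
        for k in range(K):
            basis[:, k] = np.cos(np.pi * k * (2 * t + 1) / (2 * F))
        basis[:, 0] *= 1.0 / np.sqrt(F)       # orthonormalize: DC column
        basis[:, 1:] *= np.sqrt(2.0 / F)      # orthonormalize: remaining columns
        return basis

    # Each coordinate of a point's trajectory over F frames is modeled as
    # Theta @ coeffs, so only K coefficients need to be estimated per coordinate.
    Theta = dct_trajectory_basis(F=100, K=10)
    trajectory = np.sin(np.linspace(0.0, 3.0, 100))   # toy 1-D trajectory
    coeffs = Theta.T @ trajectory                     # project onto the basis
    reconstruction = Theta @ coeffs                   # compact approximation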

pdf project page [BibTex]



Combined discriminative and generative articulated pose and non-rigid shape estimation

Sigal, L., Balan, A., Black, M. J.

In Advances in Neural Information Processing Systems 20, NIPS-2007, pages: 1337–1344, MIT Press, 2008 (inproceedings)

pdf [BibTex]



Reconstructing reach and grasp actions using neural population activity from Primary Motor Cortex

Vargas-Irwin, C. E., Yadollahpour, P., Shakhnarovich, G., Black, M. J., Donoghue, J. P.

2008 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2008, Online (conference)

[BibTex]


1997


Robust anisotropic diffusion and sharpening of scalar and vector images

Black, M. J., Sapiro, G., Marimont, D., Heeger, D.

In Int. Conf. on Image Processing, ICIP, pages: 263-266, Vol. 1, Santa Barbara, CA, October 1997 (inproceedings)

Abstract
Relations between anisotropic diffusion and robust statistics are described. We show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator, that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in the image. We extend the framework to vector-valued images and show applications to robust image sharpening.
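
As a hedged sketch of the connection described above (parameter values and boundary handling are my own, not the paper's), one iteration of anisotropic diffusion with a Tukey-biweight edge-stopping function could look like:

    import numpy as np

    def tukey_g(x, sigma):
        """Edge-stopping function g(x) = psi(x)/x for Tukey's biweight; zero beyond sigma."""
        g = (1.0 - (x / sigma) ** 2) ** 2
        g[np.abs(x) > sigma] = 0.0
        return g

    def diffuse_step(I, sigma=0.1, lam=0.2):
        """One robust anisotropic diffusion update on a 2-D image (wrap-around borders)."""
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        return I + lam * (tukey_g(dN, sigma) * dN + tukey_g(dS, sigma) * dS +
                          tukey_g(dE, sigma) * dE + tukey_g(dW, sigma) * dW)

Because g vanishes for differences larger than sigma, diffusion stops completely across strong edges, which is the sharper boundary preservation and automatic stopping behaviour mentioned in the abstract.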

pdf publisher site [BibTex]



Robust anisotropic diffusion: Connections between robust statistics, line processing, and anisotropic diffusion

Black, M. J., Sapiro, G., Marimont, D., Heeger, D.

In Scale-Space Theory in Computer Vision, Scale-Space’97, pages: 323-326, LNCS 1252, Springer Verlag, Utrecht, the Netherlands, July 1997 (inproceedings)

pdf [BibTex]



Learning parameterized models of image motion

Black, M. J., Yacoob, Y., Jepson, A. D., Fleet, D. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-97, pages: 561-567, Puerto Rico, June 1997 (inproceedings)

Abstract
A framework for learning parameterized models of optical flow from image sequences is presented. A class of motions is represented by a set of orthogonal basis flow fields that are computed from a training set using principal component analysis. Many complex image motions can be represented by a linear combination of a small number of these basis flows. The learned motion models may be used for optical flow estimation and for model-based recognition. For optical flow estimation we describe a robust, multi-resolution scheme for directly computing the parameters of the learned flow models from image derivatives. As examples we consider learning motion discontinuities, non-rigid motion of human mouths, and articulated human motion.
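
The basis-flow construction can be sketched as a PCA over vectorized training flow fields; the array shapes, the number of basis flows, and the SVD route are assumptions for illustration, not details from the paper.

    import numpy as np

    def learn_flow_basis(training_flows, num_basis=8):
        """PCA basis of optical flow fields.

        training_flows: array of shape (N, H, W, 2) with N ground-truth flow fields.
        Returns the mean flow and num_basis orthonormal basis flow fields.
        """
        N, H, W, _ = training_flows.shape
        X = training_flows.reshape(N, -1)         # one flow field per row
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:num_basis]                    # principal directions
        return mean.reshape(H, W, 2), basis.reshape(num_basis, H, W, 2)

    # A new flow field is then modeled as mean + sum_k a_k * basis_k, and flow
    # estimation reduces to solving for the few coefficients a_k from image derivatives.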

pdf [BibTex]



Analysis of gesture and action in technical talks for video indexing

Ju, S. X., Black, M. J., Minneman, S., Kimber, D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-97, pages: 595-601, Puerto Rico, June 1997 (inproceedings)

Abstract
In this paper, we present an automatic system for analyzing and annotating video sequences of technical talks. Our method uses a robust motion estimation technique to detect key frames and segment the video sequence into subsequences containing a single overhead slide. The subsequences are stabilized to remove motion that occurs when the speaker adjusts their slides. Any changes remaining between frames in the stabilized sequences may be due to speaker gestures such as pointing or writing and we use active contours to automatically track these potential gestures. Given the constrained domain we define a simple "vocabulary" of actions which can easily be recognized based on the active contour shape and motion. The recognized actions provide a rich annotation of the sequence that can be used to access a condensed version of the talk from a web page.

pdf [BibTex]



Modeling appearance change in image sequences

Black, M. J., Yacoob, Y., Fleet, D. J.

In Advances in Visual Form Analysis, pages: 11-20, Proceedings of the Third International Workshop on Visual Form, Capri, Italy, May 1997 (inproceedings)

abstract [BibTex]


1996


Cardboard people: A parameterized model of articulated motion

Ju, S. X., Black, M. J., Yacoob, Y.

In 2nd Int. Conf. on Automatic Face- and Gesture-Recognition, pages: 38-44, Killington, Vermont, October 1996 (inproceedings)

Abstract
We extend the work of Black and Yacoob on the tracking and recognition of human facial expressions using parameterized models of optical flow to deal with the articulated motion of human limbs. We define a "cardboard person model" in which a person's limbs are represented by a set of connected planar patches. The parameterized image motion of these patches is constrained to enforce articulated motion and is solved for directly using a robust estimation technique. The recovered motion parameters provide a rich and concise description of the activity that can be used for recognition. We propose a method for performing view-based recognition of human activities from the optical flow parameters that extends previous methods to cope with the cyclical nature of human motion. We illustrate the method with examples of tracking human legs over long image sequences.
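
For context, the image motion of each planar limb patch in this line of work is commonly written with an eight-parameter model of the following form (notation mine; the paper's exact parameterization may differ):

    \[
    \begin{aligned}
    u(x, y) &= a_0 + a_1 x + a_2 y + a_6 x^2 + a_7 x y,\\
    v(x, y) &= a_3 + a_4 x + a_5 y + a_6 x y + a_7 y^2,
    \end{aligned}
    \]

with articulated motion enforced, as described above, by constraining the motions of connected patches, for example so that they agree at shared joints.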

pdf [BibTex]



Skin and Bones: Multi-layer, locally affine, optical flow and regularization with transparency

(Nominated: Best paper)

Ju, S., Black, M. J., Jepson, A. D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR’96, pages: 307-314, San Francisco, CA, June 1996 (inproceedings)

pdf [BibTex]



EigenTracking: Robust matching and tracking of articulated objects using a view-based representation

Black, M. J., Jepson, A.

In Proc. Fourth European Conf. on Computer Vision, ECCV’96, pages: 329-342, LNCS 1064, Springer Verlag, Cambridge, England, April 1996 (inproceedings)

pdf video [BibTex]


1994


Estimating multiple independent motions in segmented images using parametric models with local deformations

Black, M. J., Jepson, A.

In Workshop on Non-rigid and Articulate Motion, pages: 220-227, Austin, Texas, November 1994 (inproceedings)

pdf abstract [BibTex]



Time to contact from active tracking of motion boundaries

Ju, X., Black, M. J.

In Intelligent Robots and Computer Vision XIII: 3D Vision, Product Inspection, and Active Vision, pages: 26-37, Proc. SPIE 2354, Boston, Massachusetts, November 1994 (inproceedings)

pdf abstract [BibTex]



The outlier process: Unifying line processes and robust statistics

Black, M., Rangarajan, A.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR’94, pages: 15-22, Seattle, WA, June 1994 (inproceedings)

pdf abstract [BibTex]



Recursive non-linear estimation of discontinuous flow fields

Black, M.

In Proc. Third European Conf. on Computer Vision, ECCV’94, pages: 138-145, LNCS 800, Springer Verlag, Sweden, May 1994 (inproceedings)

pdf abstract [BibTex]


1991


Dynamic motion estimation and feature extraction over long image sequences

Black, M. J., Anandan, P.

In Proc. IJCAI Workshop on Dynamic Scene Understanding, Sydney, Australia, August 1991 (inproceedings)

[BibTex]



Robust dynamic motion estimation over time

(IEEE Computer Society Outstanding Paper Award)

Black, M. J., Anandan, P.

In Proc. Computer Vision and Pattern Recognition, CVPR-91, pages: 296-302, Maui, Hawaii, June 1991 (inproceedings)

Abstract
This paper presents a novel approach to incrementally estimating visual motion over a sequence of images. We start by formulating constraints on image motion to account for the possibility of multiple motions. This is achieved by exploiting the notions of weak continuity and robust statistics in the formulation of the minimization problem. The resulting objective function is non-convex. Traditional stochastic relaxation techniques for minimizing such functions prove inappropriate for the task. We present a highly parallel incremental stochastic minimization algorithm which has a number of advantages over previous approaches. The incremental nature of the scheme makes it truly dynamic and permits the detection of occlusion and disocclusion boundaries.

pdf video abstract [BibTex]


1990


A model for the detection of motion over time

Black, M. J., Anandan, P.

In Proc. Int. Conf. on Computer Vision, ICCV-90, pages: 33-37, Osaka, Japan, December 1990 (inproceedings)

Abstract
We propose a model for the recovery of visual motion fields from image sequences. Our model exploits three constraints on the motion of a patch in the environment: i) Data Conservation: the intensity structure corresponding to an environmental surface patch changes gradually over time; ii) Spatial Coherence: since surfaces have spatial extent neighboring points have similar motions; iii) Temporal Coherence: the direction and velocity of motion for a surface patch changes gradually. The formulation of the constraints takes into account the possibility of multiple motions at a particular location. We also present a highly parallel computational model for realizing these constraints in which computation occurs locally, knowledge about the motion increases over time, and occlusion and disocclusion boundaries are estimated. An implementation of the model using a stochastic temporal updating scheme is described. Experiments with both synthetic and real imagery are presented.
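
As a rough sketch (my notation, not the paper's), the three constraints can be read as terms of a single robust objective over the flow at time t:

    \[
    E(\mathbf{u}_t) = \sum_{\mathbf{x}} \Big[
    \rho_D\!\big(I(\mathbf{x} + \mathbf{u}_t(\mathbf{x}),\, t+1) - I(\mathbf{x}, t)\big)
    + \lambda_S \sum_{\mathbf{x}' \in \mathcal{N}(\mathbf{x})} \rho_S\!\big(\mathbf{u}_t(\mathbf{x}) - \mathbf{u}_t(\mathbf{x}')\big)
    + \lambda_T\, \rho_T\!\big(\mathbf{u}_t(\mathbf{x}) - \mathbf{u}_{t-1}(\mathbf{x})\big)
    \Big],
    \]

where the robust error functions \(\rho\) account for the possibility of multiple motions by limiting the influence of constraint violations, and minimization proceeds incrementally over time with the stochastic updating scheme mentioned in the abstract.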

pdf [BibTex]



Constraints for the early detection of discontinuity from motion

Black, M. J., Anandan, P.

In Proc. National Conf. on Artificial Intelligence, AAAI-90, pages: 1060-1066, Boston, MA, 1990 (inproceedings)

Abstract
Surface discontinuities are detected in a sequence of images by exploiting physical constraints at early stages in the processing of visual motion. To achieve accurate early discontinuity detection we exploit five physical constraints on the presence of discontinuities: i) the shape of the sum of squared differences (SSD) error surface in the presence of surface discontinuities; ii) the change in the shape of the SSD surface due to relative surface motion; iii) the distribution of optic flow in a neighborhood of a discontinuity; iv) the spatial consistency of discontinuities; v) the temporal consistency of discontinuities. The constraints are described, and experimental results on sequences of real and synthetic images are presented. The work has applications in the recovery of environmental structure from motion and in the generation of dense optic flow fields.

pdf [BibTex]
