

2008


Learning Optical Flow

Sun, D., Roth, S., Lewis, J., Black, M. J.

In European Conf. on Computer Vision, ECCV, 5304, pages: 83-97, LNCS, (Editors: D. Forsyth and P. Torr and A. Zisserman), Springer-Verlag, October 2008 (inproceedings)

Abstract
Assumptions of brightness constancy and spatial smoothness underlie most optical flow estimation methods. In contrast to standard heuristic formulations, we learn a statistical model of both brightness constancy error and the spatial properties of optical flow using image sequences with associated ground truth flow fields. The result is a complete probabilistic model of optical flow. Specifically, the ground truth enables us to model how the assumption of brightness constancy is violated in naturalistic sequences, resulting in a probabilistic model of "brightness inconstancy". We also generalize previous high-order constancy assumptions, such as gradient constancy, by modeling the constancy of responses to various linear filters in a high-order random field framework. These filters are free variables that can be learned from training data. Additionally we study the spatial structure of the optical flow and how motion boundaries are related to image intensity boundaries. Spatial smoothness is modeled using a Steerable Random Field, where spatial derivatives of the optical flow are steered by the image brightness structure. These models provide a statistical motivation for previous methods and enable the learning of all parameters from training data. All proposed models are quantitatively compared on the Middlebury flow dataset.
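
For context, a generic form of the energy that such heuristic flow methods minimize is sketched below (the notation is mine, not the paper's); the paper's contribution is to replace the hand-chosen data penalty rho_D and smoothness penalty rho_S with models learned from ground-truth flow, including filter-response constancy terms and a Steerable Random Field spatial prior.

```latex
E(\mathbf{u},\mathbf{v}) \;=\; \sum_{i,j}\rho_D\!\big(I_1(i,j)-I_2(i+u_{ij},\,j+v_{ij})\big)
\;+\;\lambda\sum_{i,j}\big[\rho_S(\nabla u_{ij})+\rho_S(\nabla v_{ij})\big]
```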

pdf Springerlink version [BibTex]



Probabilistic Roadmap Method and Real Time Gait Changing Technique Implementation for Travel Time Optimization on a Designed Six-legged Robot

Ahmad, A., Dhang, N.

In pages: 1-5, October 2008 (inproceedings)

Abstract
This paper presents the design and development of a six-legged robot with a total of 12 degrees of freedom, two in each limb, and an implementation of an 'obstacle and undulated terrain-based' probabilistic roadmap method for motion planning of this hexaped, which is able to negotiate large undulations as obstacles. The novelty of this implementation is that it does not require a complete view of the robot's configuration space at any given time during the traversal. It generates a map of the area within visibility range and selects the most suitable point in that field of view as the next node of the algorithm. A particular category of undulations that are small enough are automatically 'run over' as part of the terrain and not treated as obstacles. The traversal between nodes is optimized by taking the shortest path and the most suitable gait that the hexaped can assume at that instant. This is again a novel approach: a real-time gait-changing technique to optimize travel time. Each limb can swing in the robot's X-Y plane, and the lower link of the limb can move in the robot's Z plane via a four-bar mechanism. A GUI-based server, 'Yellow Ladybird' (which is also the name of the hexaped), was built for real-time monitoring and for communicating the final destination coordinates to the robot.
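
The roadmap step can be illustrated with the textbook probabilistic roadmap construction below. This is a generic 2D sketch with circular obstacles, not the paper's terrain-based variant (which plans from local visibility only and never needs the full configuration space); the obstacle model, workspace bounds, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment_is_free(p, q, obstacles, step=0.05):
    """Check the straight segment p -> q against circular obstacles (cx, cy, r)."""
    for t in np.arange(0.0, 1.0 + step, step):
        x = p + t * (q - p)
        if any(np.hypot(x[0] - cx, x[1] - cy) <= r for cx, cy, r in obstacles):
            return False
    return True

def build_prm(start, goal, obstacles, n_samples=200, k=8, bounds=(0.0, 10.0)):
    """Sample collision-free 2D configurations and connect each to its k nearest
    collision-free neighbours; returns the node list and an adjacency dict."""
    nodes = [np.asarray(start, float), np.asarray(goal, float)]
    while len(nodes) < n_samples + 2:
        p = rng.uniform(*bounds, size=2)
        if all(np.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        dists = np.array([np.linalg.norm(p - q) for q in nodes])
        for j in np.argsort(dists)[1:k + 1]:
            if segment_is_free(p, nodes[j], obstacles):
                edges[i].append((int(j), float(dists[j])))
                edges[int(j)].append((i, float(dists[j])))
    return nodes, edges

# A shortest-path search (e.g. Dijkstra or A*) over `edges`, from node 0 (start)
# to node 1 (goal), then yields the route the robot follows.
```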

link (url) [BibTex]


The naked truth: Estimating body shape under clothing

Balan, A., Black, M. J.

In European Conf. on Computer Vision, ECCV, 5304, pages: 15-29, LNCS, (Editors: D. Forsyth and P. Torr and A. Zisserman), Springer-Verlag, Marseilles, France, October 2008 (inproceedings)

Abstract
We propose a method to estimate the detailed 3D shape of a person from images of that person wearing clothing. The approach exploits a model of human body shapes that is learned from a database of over 2000 range scans. We show that the parameters of this shape model can be recovered independently of body pose. We further propose a generalization of the visual hull to account for the fact that observed silhouettes of clothed people do not provide a tight bound on the true 3D shape. With clothed subjects, different poses provide different constraints on the possible underlying 3D body shape. We consequently combine constraints across pose to more accurately estimate 3D body shape in the presence of occluding clothing. Finally we use the recovered 3D shape to estimate the gender of subjects and then employ gender-specific body models to refine our shape estimates. Results on a novel database of thousands of images of clothed and "naked" subjects, as well as sequences from the HumanEva dataset, suggest the method may be accurate enough for biometric shape analysis in video.

pdf pdf with higher quality images Springerlink version YouTube video on applications data slides [BibTex]



Dynamic time warping for binocular hand tracking and reconstruction

Romero, J., Kragic, D., Kyrki, V., Argyros, A.

In IEEE International Conference on Robotics and Automation, ICRA, pages: 2289-2294, May 2008 (inproceedings)

Pdf [BibTex]



Simultaneous Visual Recognition of Manipulation Actions and Manipulated Objects

Kjellström, H., Romero, J., Martinez, D., Kragic, D.

In European Conference on Computer Vision, ECCV, pages: 336-349, 2008 (inproceedings)

Pdf [BibTex]



Tuning analysis of motor cortical neurons in a person with paralysis during performance of visually instructed cursor control tasks

Kim, S., Simeral, J. D., Hochberg, L. R., Truccolo, W., Donoghue, J., Friehs, G. M., Black, M. J.

2008 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2008, Online (conference)

[BibTex]



Infinite Kernel Learning

Gehler, P., Nowozin, S.

In Proceedings of NIPS 2008 Workshop on "Kernel Learning: Automatic Selection of Optimal Kernels", 2008 (inproceedings)

project page pdf [BibTex]



Visual Recognition of Grasps for Human-to-Robot Mapping

Kjellström, H., Romero, J., Kragic, D.

In IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pages: 3192-3199, 2008 (inproceedings)

Pdf [BibTex]



More than two years of intracortically-based cursor control via a neural interface system

Hochberg, L. R., Simeral, J. D., Kim, S., Stein, J., Friehs, G. M., Black, M. J., Donoghue, J. P.

2008 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2008, Online (conference)

[BibTex]



Decoding of reach and grasp from MI population spiking activity using a low-dimensional model of hand and arm posture

Yadollahpour, P., Shakhnarovich, G., Vargas-Irwin, C., Donoghue, J. P., Black, M. J.

2008 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2008, Online (conference)

[BibTex]



Neural activity in the motor cortex of humans with tetraplegia

Donoghue, J., Simeral, J., Black, M., Kim, S., Truccolo, W., Hochberg, L.

AREADNE Research in Encoding And Decoding of Neural Ensembles, June, Santorini, Greece, 2008 (conference)

[BibTex]



Nonrigid Structure from Motion in Trajectory Space

Akhter, I., Sheikh, Y., Khan, S., Kanade, T.

In Neural Information Processing Systems, 1(2):41-48, 2008 (inproceedings)

Abstract
Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes, which have to be estimated anew for each video sequence. In contrast, we propose that the evolving 3D structure be described by a linear combination of basis trajectories. The principal advantage of this approach is that we do not need to estimate any basis vectors during computation. We show that generic bases over trajectories, such as the Discrete Cosine Transform (DCT) basis, can be used to compactly describe most real motions. This results in a significant reduction in unknowns, and corresponding stability in estimation. We report empirical performance, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions including piece-wise rigid motion, partially nonrigid motion (such as a facial expression), and highly nonrigid motion (such as a person dancing).
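
A minimal numerical sketch of the trajectory-space representation, under the simplifying assumption that a 3D trajectory is observed directly (the paper recovers the coefficients from 2D projections and camera motion): each point's trajectory over F frames is written as a linear combination of K fixed DCT basis trajectories, so only K coefficients per coordinate are unknown. The frame count, basis size, and synthetic trajectory below are illustrative.

```python
import numpy as np

def dct_trajectory_basis(num_frames, num_basis):
    """Columns are the first `num_basis` DCT-II basis vectors over `num_frames` frames."""
    t = np.arange(num_frames)
    return np.cos(np.pi * np.outer(t + 0.5, np.arange(num_basis)) / num_frames)

# Illustrative fit: approximate one point's 3D trajectory (F x 3) with K coefficients.
F, K = 120, 10
theta = dct_trajectory_basis(F, K)                           # F x K, fixed and generic
trajectory = np.cumsum(0.01 * np.random.default_rng(0).standard_normal((F, 3)), axis=0)
coeffs, *_ = np.linalg.lstsq(theta, trajectory, rcond=None)  # K x 3 unknowns per point
reconstruction = theta @ coeffs                              # compact, smooth approximation
```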

pdf project page [BibTex]



Combined discriminative and generative articulated pose and non-rigid shape estimation

Sigal, L., Balan, A., Black, M. J.

In Advances in Neural Information Processing Systems 20, NIPS-2007, pages: 1337–1344, MIT Press, 2008 (inproceedings)

pdf [BibTex]



Reconstructing reach and grasp actions using neural population activity from Primary Motor Cortex

Vargas-Irwin, C. E., Yadollahpour, P., Shakhnarovich, G., Black, M. J., Donoghue, J. P.

2008 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2008, Online (conference)

[BibTex]


2002


Inferring hand motion from multi-cell recordings in motor cortex using a Kalman filter

Wu, W., Black, M. J., Gao, Y., Bienenstock, E., Serruya, M., Donoghue, J. P.

In SAB’02-Workshop on Motor Control in Humans and Robots: On the Interplay of Real Brains and Artificial Devices, pages: 66-73, Edinburgh, Scotland (UK), August 2002 (inproceedings)

pdf [BibTex]



Bayesian Inference of Visual Motion Boundaries

Fleet, D. J., Black, M. J., Nestares, O.

In Exploring Artificial Intelligence in the New Millennium, pages: 139-174, (Editors: Lakemeyer, G. and Nebel, B.), Morgan Kaufmann Pub., July 2002 (incollection)

Abstract
This chapter addresses an open problem in visual motion analysis, the estimation of image motion in the vicinity of occlusion boundaries. With a Bayesian formulation, local image motion is explained in terms of multiple, competing, nonlinear models, including models for smooth (translational) motion and for motion boundaries. The generative model for motion boundaries explicitly encodes the orientation of the boundary, the velocities on either side, the motion of the occluding edge over time, and the appearance/disappearance of pixels at the boundary. We formulate the posterior probability distribution over the models and model parameters, conditioned on the image sequence. Approximate inference is achieved with a combination of tools: A Bayesian filter provides for online computation; factored sampling allows us to represent multimodal non-Gaussian distributions and to propagate beliefs with nonlinear dynamics from one time to the next; and mixture models are used to simplify the computation of joint prediction distributions in the Bayesian filter. To efficiently represent such a high-dimensional space, we also initialize samples using the responses of a low-level motion-discontinuity detector. The basic formulation and computational model provide a general probabilistic framework for motion estimation with multiple, nonlinear models.

pdf [BibTex]



Inferring hand motion from multi-cell recordings in motor cortex using a Kalman filter

Wu, W., Black, M., Gao, Y., Bienenstock, E., Serruya, M., Donoghue, J.

Program No. 357.5. 2002 Abstract Viewer/Itinerary Planner, Society for Neuroscience, Washington, DC, 2002, Online (conference)

abstract [BibTex]



Automatic detection and tracking of human motion with a view-based representation

Fablet, R., Black, M. J.

In European Conf. on Computer Vision, ECCV 2002, 1, pages: 476-491, LNCS 2353, (Editors: A. Heyden and G. Sparr and M. Nielsen and P. Johansen), Springer-Verlag, 2002 (inproceedings)

Abstract
This paper proposes a solution for the automatic detection and tracking of human motion in image sequences. Due to the complexity of the human body and its motion, automatic detection of 3D human motion remains an open, and important, problem. Existing approaches for automatic detection and tracking focus on 2D cues and typically exploit object appearance (color distribution, shape) or knowledge of a static background. In contrast, we exploit 2D optical flow information which provides rich descriptive cues, while being independent of object and background appearance. To represent the optical flow patterns of people from arbitrary viewpoints, we develop a novel representation of human motion using low-dimensional spatio-temporal models that are learned using motion capture data of human subjects. In addition to human motion (the foreground) we probabilistically model the motion of generic scenes (the background); these statistical models are defined as Gibbsian fields specified from the first-order derivatives of motion observations. Detection and tracking are posed in a principled Bayesian framework which involves the computation of a posterior probability distribution over the model parameters (i.e., the location and the type of the human motion) given a sequence of optical flow observations. Particle filtering is used to represent and predict this non-Gaussian posterior distribution over time. The model parameters of samples from this distribution are related to the pose parameters of a 3D articulated model (e.g. the approximate joint angles and movement direction). Thus the approach proves suitable for initializing more complex probabilistic models of human motion. As shown by experiments on real image sequences, our method is able to detect and track people under different viewpoints with complex backgrounds.
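
The tracking step rests on a standard sampling-importance-resampling (particle filter) update; below is a generic sketch of that update, assuming placeholder `dynamics` and `likelihood` functions rather than the paper's learned spatio-temporal flow models and Gibbsian background model.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, dynamics, likelihood, observation):
    """One predict / re-weight / resample step of a generic SIR particle filter.

    particles  : (N, d) array of samples over the model parameters
    dynamics   : function propagating particles one frame forward (with noise)
    likelihood : function returning p(observation | particle) up to a constant
    """
    # Predict: propagate each sample through the (possibly nonlinear) dynamics.
    particles = dynamics(particles)
    # Update: re-weight samples by how well they explain the new observation.
    weights = weights * likelihood(particles, observation)
    weights /= weights.sum()
    # Resample to avoid degeneracy (all mass concentrating on a few particles).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```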

pdf [BibTex]



A layered motion representation with occlusion and compact spatial support

Fleet, D. J., Jepson, A., Black, M. J.

In European Conf. on Computer Vision, ECCV 2002, 1, pages: 692-706, LNCS 2353, (Editors: A. Heyden and G. Sparr and M. Nielsen and P. Johansen), Springer-Verlag, 2002 (inproceedings)

Abstract
We describe a 2.5D layered representation for visual motion analysis. The representation provides a global interpretation of image motion in terms of several spatially localized foreground regions along with a background region. Each of these regions comprises a parametric shape model and a parametric motion model. The representation also contains depth ordering so visibility and occlusion are rightly included in the estimation of the model parameters. Finally, because the number of objects, their positions, shapes and sizes, and their relative depths are all unknown, initial models are drawn from a proposal distribution, and then compared using a penalized likelihood criterion. This allows us to automatically initialize new models, and to compare different depth orderings.

pdf [BibTex]



Implicit probabilistic models of human motion for synthesis and tracking

Sidenbladh, H., Black, M. J., Sigal, L.

In European Conf. on Computer Vision, 1, pages: 784-800, 2002 (inproceedings)

Abstract
This paper addresses the problem of probabilistically modeling 3D human motion for synthesis and tracking. Given the high dimensional nature of human motion, learning an explicit probabilistic model from available training data is currently impractical. Instead we exploit methods from texture synthesis that treat images as representing an implicit empirical distribution. These methods replace the problem of representing the probability of a texture pattern with that of searching the training data for similar instances of that pattern. We extend this idea to temporal data representing 3D human motion with a large database of example motions. To make the method useful in practice, we must address the problem of efficient search in a large training set; efficiency is particularly important for tracking. Towards that end, we learn a low dimensional linear model of human motion that is used to structure the example motion database into a binary tree. An approximate probabilistic tree search method exploits the coefficients of this low-dimensional representation and runs in sub-linear time. This probabilistic tree search returns a particular sample human motion with probability approximating the true distribution of human motions in the database. This sampling method is suitable for use with particle filtering techniques and is applied to articulated 3D tracking of humans within a Bayesian framework. Successful tracking results are presented, along with examples of synthesizing human motion using the model.
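
A minimal sketch of the database side of this idea, assuming a pre-collected set of motion snippets: snippets are projected onto a learned low-dimensional linear basis and indexed for fast neighbour lookup. A kd-tree is used here only as a simpler stand-in for the paper's probabilistic binary-tree search, and the random data, dimensions, and function name are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical database of N example motion snippets, each flattened to D values
# (e.g., a short window of joint angles); random data stands in for real motions.
rng = np.random.default_rng(0)
database = rng.standard_normal((5000, 90))

# Learn a low-dimensional linear (PCA) model of the snippets.
mean = database.mean(axis=0)
_, _, Vt = np.linalg.svd(database - mean, full_matrices=False)
k = 10                                     # retained components
coeffs = (database - mean) @ Vt[:k].T      # N x k coefficients

# Index the coefficients for fast neighbour lookup.  The paper organizes the
# database as a probabilistic binary tree with sub-linear search; a kd-tree is
# used here only as a simpler stand-in.
tree = cKDTree(coeffs)

def sample_similar_motion(query_snippet, n_candidates=20):
    """Return one database snippet near the query, chosen among its neighbours."""
    q = (query_snippet - mean) @ Vt[:k].T
    _, idx = tree.query(q, k=n_candidates)
    return database[rng.choice(idx)]

# Example: perturb a database snippet and retrieve a plausible match.
match = sample_similar_motion(database[42] + 0.1 * rng.standard_normal(90))
```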

pdf [BibTex]



Robust parameterized component analysis: Theory and applications to 2D facial modeling

De la Torre, F., Black, M. J.

In European Conf. on Computer Vision, ECCV 2002, 4, pages: 653-669, LNCS 2353, Springer-Verlag, 2002 (inproceedings)

pdf [BibTex]



Probabilistic inference of hand motion from neural activity in motor cortex

Gao, Y., Black, M. J., Bienenstock, E., Shoham, S., Donoghue, J.

In Advances in Neural Information Processing Systems 14, pages: 221-228, MIT Press, 2002 (inproceedings)

pdf [BibTex]
