2020


Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.

arXiv project page [BibTex]



GENTEL : GENerating Training data Efficiently for Learning to segment medical images

Thakur, R. P., Rocamora, S. P., Goel, L., Pohmann, R., Machann, J., Black, M. J.

Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFAIP), June 2020 (conference)

Abstract
Accurately segmenting MRI images is crucial for many clinical applications. However, manually segmenting images with accurate pixel precision is a tedious and time-consuming task. In this paper we present a simple, yet effective method to improve the efficiency of the image segmentation process. We propose to transform the image annotation task into a binary choice task. We start by using classical image processing algorithms with different parameter values to generate multiple, different segmentation masks for each input MRI image. Then, instead of segmenting the pixels of the images, the user only needs to decide whether a segmentation is acceptable or not. This method allows us to efficiently obtain high quality segmentations with minor human intervention. With the selected segmentations, we train a state-of-the-art neural network model. For the evaluation, we use a second MRI dataset (1.5T Dataset), acquired with a different protocol and containing annotations. We show that the trained network i) is able to automatically segment cases where none of the classical methods obtain a high quality result; ii) generalizes to the second MRI dataset, which was acquired with a different protocol and was never seen at training time; and iii) enables detection of mis-annotations in this second dataset. Quantitatively, the trained network obtains very good results: Dice score (mean 0.98, median 0.99) and Hausdorff distance in pixels (mean 4.7, median 2.0).
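The Dice score reported above is a standard overlap measure between two binary masks. As an illustrative aside (the `dice` helper and the toy masks below are stand-ins, not the paper's code), it can be computed in a few lines of NumPy:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 4x4 masks: a candidate segmentation vs. a reference annotation.
ref = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
cand = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
print(round(dice(ref, cand), 3))  # → 0.909
```

In the binary-choice workflow the abstract describes, a measure like this would only be used for evaluation against held-out annotations; the annotator merely accepts or rejects candidate masks.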

[BibTex]


Generating 3D People in Scenes without People

Zhang, Y., Hassan, M., Neumann, H., Black, M. J., Tang, S.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
We present a fully automatic system that takes a 3D scene and generates plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D scene without people, humans can easily imagine how people could interact with the scene and the objects in it. However, this is a challenging task for a computer, as solving it requires (1) the generated human bodies to be semantically plausible within the 3D environment (e.g. people sitting on the sofa or cooking near the stove), and (2) the generated human-scene interaction to be physically feasible, such that the human body and scene do not interpenetrate while, at the same time, body-scene contact supports physical interactions. To that end, we make use of the surface-based 3D human model SMPL-X. We first train a conditional variational autoencoder to predict semantically plausible 3D human poses conditioned on latent scene representations, then we further refine the generated 3D bodies using scene constraints to enforce feasible physical interaction. We show that our approach is able to synthesize realistic and expressive 3D human bodies that naturally interact with the 3D environment. We perform extensive experiments demonstrating that our generative framework compares favorably with existing methods, both qualitatively and quantitatively. We believe that our scene-conditioned 3D human generation pipeline will be useful for numerous applications, e.g. to generate training data for human pose estimation, in video games, and in VR/AR. Data and code are available on our project page: https://vlg.inf.ethz.ch/projects/PSI/

Code PDF [BibTex]


Learning Physics-guided Face Relighting under Directional Light

Nestmeyer, T., Lalonde, J., Matthews, I., Lehrmann, A. M.

In Conference on Computer Vision and Pattern Recognition, IEEE/CVF, June 2020 (inproceedings)

Abstract
Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera.

Paper [BibTex]


VIBE: Video Inference for Human Body Pose and Shape Estimation

Kocabas, M., Athanasiou, N., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose “Video Inference for Body Pose and Shape Estimation” (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE

arXiv code [BibTex]


From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference)

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
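The ex-post density estimation step described above can be sketched with plain NumPy: fit a simple density to the latent codes produced by a trained deterministic encoder, then sample new codes and push them through the decoder. Everything in this sketch (the stand-in codes `Z` and the placeholder `decode`) is hypothetical; the paper fits richer densities such as Gaussian mixtures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes z_i from a trained deterministic encoder
# on the training set (random stand-ins here, shape [N, d]).
Z = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [0.7, 0.5]])

# Ex-post density estimation: fit a density to the codes. A single
# full-covariance Gaussian keeps this minimal.
mu = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False)

# Sample fresh codes and decode them to generate data.
# `decode` is a placeholder for the trained decoder network.
z_new = rng.multivariate_normal(mu, cov, size=5)
decode = lambda z: np.tanh(z)
samples = decode(z_new)
print(samples.shape)  # (5, 2)
```

The same fit-then-sample step can be bolted onto an already-trained VAE's latent codes, which is how the abstract suggests it also improves existing models' sample quality.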

arXiv [BibTex]


Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations

Rueegg, N., Lassner, C., Black, M. J., Schindler, K.

In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), February 2020 (inproceedings)

Abstract
The goal of many computer vision systems is to transform image pixels into 3D representations. Recent popular models use neural networks to regress directly from pixels to 3D object parameters. Such an approach works well when supervision is available, but in problems like human pose and shape estimation, it is difficult to obtain natural images with 3D ground truth. To go one step further, we propose a new architecture that facilitates unsupervised, or lightly supervised, learning. The idea is to break the problem into a series of transformations between increasingly abstract representations. Each step involves a cycle designed to be learnable without annotated training data, and the chain of cycles delivers the final solution. Specifically, we use 2D body part segments as an intermediate representation that contains enough information to be lifted to 3D, and at the same time is simple enough to be learned in an unsupervised way. We demonstrate the method by learning 3D human pose and shape from un-paired and un-annotated images. We also explore varying amounts of paired data and show that cycling greatly alleviates the need for paired data. While we present results for modeling humans, our formulation is general and can be applied to other vision problems.

pdf [BibTex]

2007


A Database and Evaluation Methodology for Optical Flow

Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.

In Int. Conf. on Computer Vision, ICCV, pages: 1-8, Rio de Janeiro, Brazil, October 2007 (inproceedings)

pdf [BibTex]


Shining a light on human pose: On shadows, shading and the estimation of pose and shape

Balan, A., Black, M. J., Haussecker, H., Sigal, L.

In Int. Conf. on Computer Vision, ICCV, pages: 1-8, Rio de Janeiro, Brazil, October 2007 (inproceedings)

pdf YouTube [BibTex]


Ensemble spiking activity as a source of cortical control signals in individuals with tetraplegia

Simeral, J. D., Kim, S. P., Black, M. J., Donoghue, J. P., Hochberg, L. R.

Biomedical Engineering Society, BMES, September 2007 (conference)

[BibTex]


Detailed human shape and pose from images

Balan, A., Sigal, L., Black, M. J., Davis, J., Haussecker, H.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages: 1-8, Minneapolis, June 2007 (inproceedings)

Abstract
Much of the research on video-based human motion capture assumes the body shape is known a priori and is represented coarsely (e.g. using cylinders or superquadrics to model limbs). These body models stand in sharp contrast to the richly detailed 3D body models used by the graphics community. Here we propose a method for recovering such models directly from images. Specifically, we represent the body using a recently proposed triangulated mesh model called SCAPE which employs a low-dimensional, but detailed, parametric model of shape and pose-dependent deformations that is learned from a database of range scans of human bodies. Previous work showed that the parameters of the SCAPE model could be estimated from marker-based motion capture data. Here we go further to estimate the parameters directly from image data. We define a cost function between image observations and a hypothesized mesh and formulate the problem as optimization over the body shape and pose parameters using stochastic search. Our results show that such rich generative models enable the automatic recovery of detailed human shape and pose from images.

pdf YouTube [BibTex]


Decoding grasp aperture from motor-cortical population activity

Artemiadis, P., Shakhnarovich, G., Vargas-Irwin, C., Donoghue, J. P., Black, M. J.

In The 3rd International IEEE EMBS Conference on Neural Engineering, pages: 518-521, May 2007 (inproceedings)

pdf [BibTex]


Multi-state decoding of point-and-click control signals from motor cortical activity in a human with tetraplegia

Kim, S., Simeral, J., Hochberg, L., Donoghue, J. P., Friehs, G., Black, M. J.

In The 3rd International IEEE EMBS Conference on Neural Engineering, pages: 486-489, May 2007 (inproceedings)

Abstract
Basic neural-prosthetic control of a computer cursor has been recently demonstrated by Hochberg et al. [1] using the BrainGate system (Cyberkinetics Neurotechnology Systems, Inc.). While these results demonstrate the feasibility of intracortically-driven prostheses for humans with paralysis, a practical cursor-based computer interface requires more precise cursor control and the ability to “click” on areas of interest. Here we present a practical point and click device that decodes both continuous states (e.g. cursor kinematics) and discrete states (e.g. click state) from a single neural population in human motor cortex. We describe a probabilistic multi-state decoder and the necessary training paradigms that enable point and click cursor control by a human with tetraplegia using an implanted microelectrode array. We present results from multiple recording sessions and quantify the point and click performance.

pdf [BibTex]


Deterministic Annealing for Multiple-Instance Learning

Gehler, P., Chapelle, O.

In Artificial Intelligence and Statistics (AIStats), 2007 (inproceedings)

pdf [BibTex]


Point-and-click cursor control by a person with tetraplegia using an intracortical neural interface system

Kim, S., Simeral, J. D., Hochberg, L. R., Friehs, G., Donoghue, J. P., Black, M. J.

Program No. 517.2. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]


Learning Appearances with Low-Rank SVM

Wolf, L., Jhuang, H., Hazan, T.

In Conference on Computer Vision and Pattern Recognition (CVPR), 2007 (inproceedings)

pdf [BibTex]


Neural correlates of grip aperture in primary motor cortex

Vargas-Irwin, C., Shakhnarovich, G., Artemiadis, P., Donoghue, J. P., Black, M. J.

Program No. 517.10. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]


Directional tuning in motor cortex of a person with ALS

Simeral, J. D., Donoghue, J. P., Black, M. J., Friehs, G. M., Brown, R. H., Krivickas, L. S., Hochberg, L. R.

Program No. 517.4. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]


Steerable random fields

(Best Paper Award, INI-Graphics Net, 2008)

Roth, S., Black, M. J.

In Int. Conf. on Computer Vision, ICCV, pages: 1-8, Rio de Janeiro, Brazil, 2007 (inproceedings)

pdf [BibTex]


Toward standardized assessment of pointing devices for brain-computer interfaces

Donoghue, J., Simeral, J., Kim, S., Friehs, G. M., Hochberg, L. R., Black, M.

Program No. 517.16. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]


A Biologically Inspired System for Action Recognition

Jhuang, H., Serre, T., Wolf, L., Poggio, T.

In International Conference on Computer Vision (ICCV), 2007 (inproceedings)

code pdf [BibTex]


AREADNE Research in Encoding And Decoding of Neural Ensembles

Shakhnarovich, G., Hochberg, L. R., Donoghue, J. P., Stein, J., Brown, R. H., Krivickas, L. S., Friehs, G. M., Black, M. J.

Program No. 517.8. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]

2006


Finding directional movement representations in motor cortical neural populations using nonlinear manifold learning

Kim, S., Simeral, J., Jenkins, O., Donoghue, J., Black, M.

World Congress on Medical Physics and Biomedical Engineering 2006, Seoul, Korea, August 2006 (conference)

[BibTex]


A non-parametric Bayesian approach to spike sorting

Wood, F., Goldwater, S., Black, M. J.

In International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages: 1165-1169, New York, NY, August 2006 (inproceedings)

pdf [BibTex]


Predicting 3D people from 2D pictures

(Best Paper)

Sigal, L., Black, M. J.

In Proc. IV Conf. on Articulated Motion and Deformable Objects (AMDO), LNCS 4069, pages: 185-195, July 2006 (inproceedings)

Abstract
We propose a hierarchical process for inferring the 3D pose of a person from monocular images. First we infer a learned view-based 2D body model from a single image using non-parametric belief propagation. This approach integrates information from bottom-up body-part proposal processes and deals with self-occlusion to compute distributions over limb poses. Then, we exploit a learned Mixture of Experts model to infer a distribution of 3D poses conditioned on 2D poses. This approach is more general than recent work on inferring 3D pose directly from silhouettes since the 2D body model provides a richer representation that includes the 2D joint angles and the poses of limbs that may be unobserved in the silhouette. We demonstrate the method in a laboratory setting where we evaluate the accuracy of the 3D poses against ground truth data. We also estimate 3D body pose in a monocular image sequence. The resulting 3D estimates are sufficiently accurate to serve as proposals for the Bayesian inference of 3D human motion over time.

pdf pdf from publisher Video [BibTex]


Specular flow and the recovery of surface structure

Roth, S., Black, M.

In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, 2, pages: 1869-1876, New York, NY, June 2006 (inproceedings)

Abstract
In scenes containing specular objects, the image motion observed by a moving camera may be an intermixed combination of optical flow resulting from diffuse reflectance (diffuse flow) and specular reflection (specular flow). Here, with few assumptions, we formalize the notion of specular flow, show how it relates to the 3D structure of the world, and develop an algorithm for estimating scene structure from 2D image motion. Unlike previous work on isolated specular highlights we use two image frames and estimate the semi-dense flow arising from the specular reflections of textured scenes. We parametrically model the image motion of a quadratic surface patch viewed from a moving camera. The flow is modeled as a probabilistic mixture of diffuse and specular components and the 3D shape is recovered using an Expectation-Maximization algorithm. Rather than treating specular reflections as noise to be removed or ignored, we show that the specular flow provides additional constraints on scene geometry that improve estimation of 3D structure when compared with reconstruction from diffuse flow alone. We demonstrate this for a set of synthetic and real sequences of mixed specular-diffuse objects.
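The probabilistic mixture of diffuse and specular components above is resolved with Expectation-Maximization. As a toy analogue (a two-component 1D Gaussian mixture with fixed unit variances, not the paper's parametric flow model), EM looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in observations drawn from two causes (think: diffuse vs.
# specular flow components), true means 0.0 and 6.0.
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(6.0, 1.0, 300)])

# EM for a two-component Gaussian mixture, unit variances fixed.
mu = np.array([-1.0, 1.0])   # initial component means
pi = np.array([0.5, 0.5])    # initial mixing weights
for _ in range(50):
    # E-step: responsibility of each component for each observation.
    lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step: re-estimate means and mixing weights.
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    pi = r.mean(axis=0)

print(np.round(np.sort(mu), 1))
```

In the paper's setting the components are parametric image-motion models rather than scalar Gaussians, but the alternation between soft assignment and re-estimation is the same.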

pdf [BibTex]


An adaptive appearance model approach for model-based articulated object tracking

Balan, A., Black, M. J.

In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, 1, pages: 758-765, New York, NY, June 2006 (inproceedings)

Abstract
The detection and tracking of three-dimensional human body models has progressed rapidly but successful approaches typically rely on accurate foreground silhouettes obtained using background segmentation. There are many practical applications where such information is imprecise. Here we develop a new image likelihood function based on the visual appearance of the subject being tracked. We propose a robust, adaptive, appearance model based on the Wandering-Stable-Lost framework extended to the case of articulated body parts. The method models appearance using a mixture model that includes an adaptive template, frame-to-frame matching and an outlier process. We employ an annealed particle filtering algorithm for inference and take advantage of the 3D body model to predict self occlusion and improve pose estimation accuracy. Quantitative tracking results are presented for a walking sequence with a 180 degree turn, captured with four synchronized and calibrated cameras and containing significant appearance changes and self-occlusion in each view.

pdf [BibTex]


Measure locally, reason globally: Occlusion-sensitive articulated pose estimation

Sigal, L., Black, M. J.

In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, 2, pages: 2041-2048, New York, NY, June 2006 (inproceedings)

pdf [BibTex]


Statistical analysis of the non-stationarity of neural population codes

Kim, S., Wood, F., Fellows, M., Donoghue, J. P., Black, M. J.

In BioRob 2006, The first IEEE / RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics, pages: 295-299, Pisa, Italy, February 2006 (inproceedings)

pdf [BibTex]


How to choose the covariance for Gaussian process regression independently of the basis

Franz, M., Gehler, P.

In Proceedings of the Workshop Gaussian Processes in Practice, 2006 (inproceedings)

pdf [BibTex]


The rate adapting Poisson model for information retrieval and object recognition

Gehler, P. V., Holub, A. D., Welling, M.

In Proceedings of the 23rd international conference on Machine learning, pages: 337-344, ICML ’06, ACM, New York, NY, USA, 2006 (inproceedings)

project page pdf DOI [BibTex]


Tracking complex objects using graphical object models

Sigal, L., Zhu, Y., Comaniciu, D., Black, M. J.

In International Workshop on Complex Motion, LNCS 3417, pages: 223-234, Springer-Verlag, 2006 (inproceedings)

pdf pdf from publisher [BibTex]


Hierarchical Approach for Articulated 3D Pose-Estimation and Tracking (extended abstract)

Sigal, L., Black, M. J.

In Learning, Representation and Context for Human Sensing in Video Workshop (in conjunction with CVPR), 2006 (inproceedings)

pdf poster [BibTex]


Nonlinear physically-based models for decoding motor-cortical population activity

Shakhnarovich, G., Kim, S., Black, M. J.

In Advances in Neural Information Processing Systems 19, NIPS-2006, pages: 1257-1264, MIT Press, 2006 (inproceedings)

pdf [BibTex]


A comparison of decoding models for imagined motion from human motor cortex

Kim, S., Simeral, J., Donoghue, J. P., Hochberg, L. R., Friehs, G., Mukand, J. A., Chen, D., Black, M. J.

Program No. 256.11. 2006 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Atlanta, GA, 2006, Online (conference)

[BibTex]


Denoising archival films using a learned Bayesian model

Moldovan, T. M., Roth, S., Black, M. J.

In Int. Conf. on Image Processing, ICIP, pages: 2641-2644, Atlanta, 2006 (inproceedings)

pdf [BibTex]


Efficient belief propagation with learned higher-order Markov random fields

Lan, X., Roth, S., Huttenlocher, D., Black, M. J.

In European Conference on Computer Vision, ECCV, II, pages: 269-282, Graz, Austria, 2006 (inproceedings)

pdf pdf from publisher [BibTex]


Modeling neural control of physically realistic movement

Shakhnarovich, G., Kim, S., Donoghue, J. P., Hochberg, L. R., Friehs, G., Mukand, J. A., Chen, D., Black, M. J.

Program No. 256.12. 2006 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Atlanta, GA, 2006, Online (conference)

[BibTex]

1998


The Digital Office: Overview

Black, M., Berard, F., Jepson, A., Newman, W., Saund, E., Socher, G., Taylor, M.

In AAAI Spring Symposium on Intelligent Environments, pages: 1-6, Stanford, March 1998 (inproceedings)

pdf [BibTex]


A framework for modeling appearance change in image sequences

Black, M. J., Fleet, D. J., Yacoob, Y.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 660-667, Mumbai, India, January 1998 (inproceedings)

Abstract
Image "appearance" may change over time due to a variety of causes such as 1) object or camera motion; 2) generic photometric events including variations in illumination (e.g. shadows) and specular reflections; and 3) "iconic changes" which are specific to the objects being viewed and include complex occlusion events and changes in the material properties of the objects. We propose a general framework for representing and recovering these "appearance changes" in an image sequence as a "mixture" of different causes. The approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion.

pdf video [BibTex]


Parameterized modeling and recognition of activities

Yacoob, Y., Black, M. J.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 120-127, Mumbai, India, January 1998 (inproceedings)

Abstract
A framework for modeling and recognition of temporal activities is proposed. The modeling of sets of exemplar activities is achieved by parameterizing their representation in the form of principal components. Recognition of spatio-temporal variants of modeled activities is achieved by parameterizing the search in the space of admissible transformations that the activities can undergo. Experiments on recognition of articulated and deformable object motion from image motion parameters are presented.
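The exemplar-modeling step above, representing a set of activities by their principal components, can be sketched with a plain SVD in NumPy. The trajectories below are synthetic stand-ins (20 flattened motion-parameter sequences built from 2 underlying modes), not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in exemplars: 20 activity trajectories, each a flattened
# sequence of 50 motion parameters, generated from 2 latent modes
# plus a little noise.
basis = rng.normal(size=(2, 50))
coeffs = rng.normal(size=(20, 2))
X = coeffs @ basis + 0.01 * rng.normal(size=(20, 50))

# Model the exemplar set by its principal components (via SVD of
# the mean-centered data matrix).
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

# Fraction of variance captured by the first two components.
explained = (S ** 2) / (S ** 2).sum()
print(np.round(explained[:2].sum(), 3))
```

A new activity instance would then be recognized by projecting it onto the rows of `Vt` and searching over admissible spatio-temporal transformations, per the abstract.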

pdf [BibTex]


Motion feature detection using steerable flow fields

Fleet, D. J., Black, M. J., Jepson, A. D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-98, pages: 274-281, IEEE, Santa Barbara, CA, 1998 (inproceedings)

Abstract
The estimation and detection of occlusion boundaries and moving bars are important and challenging problems in image sequence analysis. Here, we model such motion features as linear combinations of steerable basis flow fields. These models constrain the interpretation of image motion, and are used in the same way as translational or affine motion models. We estimate the subspace coefficients of the motion feature models directly from spatiotemporal image derivatives using a robust regression method. From the subspace coefficients we detect the presence of a motion feature and solve for the orientation of the feature and the relative velocities of the surfaces. Our method does not require the prior computation of optical flow and recovers accurate estimates of orientation and velocity.

pdf [BibTex]


Visual surveillance of human activity

Davis, L. S., Fejes, S., Harwood, D., Yacoob, Y., Haritaoglu, I., Black, M.

In Asian Conference on Computer Vision, ACCV, 1998 (inproceedings)

pdf [BibTex]


A Probabilistic framework for matching temporal trajectories: Condensation-based recognition of gestures and expressions

Black, M. J., Jepson, A. D.

In European Conf. on Computer Vision, ECCV-98, pages: 909-924, Freiburg, Germany, 1998 (inproceedings)

pdf [BibTex]


Recognizing temporal trajectories using the Condensation algorithm

Black, M. J., Jepson, A. D.

In Int. Conf. on Automatic Face and Gesture Recognition, pages: 16-21, Nara, Japan, 1998 (inproceedings)

pdf [BibTex]

1996


Cardboard people: A parameterized model of articulated motion

Ju, S. X., Black, M. J., Yacoob, Y.

In 2nd Int. Conf. on Automatic Face- and Gesture-Recognition, pages: 38-44, Killington, Vermont, October 1996 (inproceedings)

Abstract
We extend the work of Black and Yacoob on the tracking and recognition of human facial expressions using parameterized models of optical flow to deal with the articulated motion of human limbs. We define a "cardboard person model" in which a person's limbs are represented by a set of connected planar patches. The parameterized image motion of these patches is constrained to enforce articulated motion and is solved for directly using a robust estimation technique. The recovered motion parameters provide a rich and concise description of the activity that can be used for recognition. We propose a method for performing view-based recognition of human activities from the optical flow parameters that extends previous methods to cope with the cyclical nature of human motion. We illustrate the method with examples of tracking human legs over long image sequences.
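The robust direct estimation the abstract mentions can be illustrated with iteratively reweighted least squares under a Geman-McClure error. This toy recovers two parameters of a hypothetical linear model from data with gross outliers; the setup is illustrative only, not the paper's patch-motion formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: recover parameters (a, b) of y = a*x + b from data
# where every 10th sample is a gross outlier.
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 1.0 + 0.01 * rng.normal(size=100)
y[::10] += 5.0  # gross outliers

A = np.stack([x, np.ones_like(x)], axis=1)
theta = np.linalg.lstsq(A, y, rcond=None)[0]  # initial (biased) LS fit
sigma = 0.1
for _ in range(20):
    r = y - A @ theta
    # Geman-McClure influence weights: outliers get ~zero weight.
    w = 1.0 / (sigma ** 2 + r ** 2) ** 2
    Aw = A * w[:, None]
    # Solve the weighted normal equations (A^T W A) theta = A^T W y.
    theta = np.linalg.solve(A.T @ Aw, Aw.T @ y)

print(np.round(theta, 2))
```

The downweighting plays the role of the robust error norm in the paper: parameters are estimated directly from all pixels, while measurements inconsistent with the current motion hypothesis contribute almost nothing.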

pdf [BibTex]


Skin and Bones: Multi-layer, locally affine, optical flow and regularization with transparency

(Nominated: Best paper)

Ju, S., Black, M. J., Jepson, A. D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR’96, pages: 307-314, San Francisco, CA, June 1996 (inproceedings)

pdf [BibTex]


EigenTracking: Robust matching and tracking of articulated objects using a view-based representation

Black, M. J., Jepson, A.

In Proc. Fourth European Conf. on Computer Vision, ECCV’96, pages: 329-342, LNCS 1064, Springer Verlag, Cambridge, England, April 1996 (inproceedings)

pdf video [BibTex]