2011


Computational flow studies in a subject-specific human upper airway using a one-equation turbulence model. Influence of the nasal cavity

Prihambodo Saksono, Perumal Nithiarasu, Igor Sazonov, Si Yong Yeo

International Journal for Numerical Methods in Biomedical Engineering, 87(1-5):96–114, 2011 (article)

Abstract
This paper focuses on the impact of including the nasal cavity on airflow through a human upper respiratory tract. A computational study is carried out on a realistic geometry, reconstructed from CT scans of a subject. The geometry includes the nasal cavity, pharynx, larynx, trachea and two generations of airway bifurcations below the trachea. The unstructured mesh generation procedure is discussed at some length due to the complex nature of the nasal cavity structure and the poor scan resolution normally available from hospitals. The fluid dynamic studies have been carried out on the geometry with and without the inclusion of the nasal cavity. The characteristic-based split scheme, along with the one-equation Spalart–Allmaras turbulence model, is used in its explicit form to obtain flow solutions at steady state. Results reveal that the exclusion of the nasal cavity significantly influences the resulting solution. In particular, the location of recirculating flow in the trachea is dramatically different when the truncated geometry is used. In addition, we address the differences in the solution due to imposed, equally distributed and proportionally distributed flow rates at the inlets (both nares). The results show that the differences in flow pattern between the two inlet conditions are not confined to the nasal cavity and nasopharyngeal region, but propagate down to the trachea.

[BibTex]


Discrete Minimum Distortion Correspondence Problems for Non-rigid Shape Matching

Wang, C., Bronstein, M. M., Bronstein, A. M., Paragios, N.

In International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), 2011 (inproceedings)

pdf [BibTex]


Viewpoint Invariant 3D Landmark Model Inference from Monocular 2D Images Using Higher-Order Priors

Wang, C., Zeng, Y., Simon, L., Kakadiaris, I., Samaras, D., Paragios, N.

In IEEE International Conference on Computer Vision (ICCV), 2011 (inproceedings)

pdf [BibTex]


Correspondence estimation from non-rigid motion information

Wulff, J., Lotz, T., Stehle, T., Aach, T., Chase, J. G.

In Proc. SPIE, (Editors: B. M. Dawant, D. R. Haynor), SPIE, 2011 (inproceedings)

Abstract
The DIET (Digital Image Elasto Tomography) system is a novel approach to screen for breast cancer using only optical imaging information of the surface of a vibrating breast. 3D tracking of skin surface motion without the requirement of external markers is desirable. A novel approach to establish point correspondences using pure skin images is presented here. Instead of the intensity, motion is used as the primary feature, which can be extracted using optical flow algorithms. Taking sequences of multiple frames into account, this motion information alone is accurate and unambiguous enough to allow for a 3D reconstruction of the breast surface. Two approaches, direct and probabilistic, for this correspondence estimation are presented here, suitable for different levels of calibration information accuracy. Reconstructions show that the results obtained using these methods are comparable in accuracy to marker-based methods while considerably increasing resolution. The presented method has high potential in optical tissue deformation and motion sensing.
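
The abstract's core idea, matching points by their motion over a frame sequence rather than by intensity, can be illustrated with a minimal Python sketch. This is hypothetical code, not the authors' implementation: OpenCV's Farneback dense flow stands in for whatever flow algorithm was actually used, and the brute-force window search stands in for the direct and probabilistic estimators described in the paper.

import cv2
import numpy as np

def motion_signature(frames):
    """Stack per-pixel flow vectors over a short sequence into a trajectory feature."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    steps = []
    for f in frames[1:]:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        steps.append(flow)                        # (H, W, 2) motion at this frame step
        prev = gray
    return np.concatenate(steps, axis=2)          # (H, W, 2*(T-1)) motion trajectory

def match_point(sig_a, sig_b, y, x, search=20):
    """Find the pixel in view B whose motion signature best matches (y, x) in view A."""
    h, w = sig_b.shape[:2]
    ys, xs = np.mgrid[max(0, y - search):min(h, y + search),
                      max(0, x - search):min(w, x + search)]
    costs = np.linalg.norm(sig_b[ys, xs] - sig_a[y, x], axis=-1)
    iy, ix = np.unravel_index(np.argmin(costs), costs.shape)
    return ys[iy, ix], xs[iy, ix]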

pdf link (url) DOI [BibTex]


Predicting Articulated Human Motion from Spatial Processes

Soren Hauberg, Kim S. Pedersen

International Journal of Computer Vision, 94, pages: 317-334, Springer Netherlands, 2011 (article)

Publishers site Code Paper site PDF [BibTex]


An Empirical Study on the Performance of Spectral Manifold Learning Techniques

Peter Mysling, Soren Hauberg, Kim S. Pedersen

In Artificial Neural Networks and Machine Learning – ICANN 2011, 6791, pages: 347-354, Lecture Notes in Computer Science, (Editors: Honkela, Timo and Duch, Włodzisław and Girolami, Mark and Kaski, Samuel), Springer Berlin Heidelberg, 2011 (inproceedings)

Publishers site PDF [BibTex]


Separation of visual object features and grasp strategy in primate ventral premotor cortex

Vargas-Irwin, C., Franquemont, L., Black, M., Donoghue, J.

Neural Control of Movement, 21st Annual Conference, 2011 (conference)

[BibTex]

2005


Representing cyclic human motion using functional analysis

Ormoneit, D., Black, M. J., Hastie, T., Kjellström, H.

Image and Vision Computing, 23(14):1264-1276, December 2005 (article)

Abstract
We present a robust automatic method for modeling cyclic 3D human motion such as walking using motion-capture data. The pose of the body is represented by a time-series of joint angles which are automatically segmented into a sequence of motion cycles. The mean and the principal components of these cycles are computed using a new algorithm that enforces smooth transitions between the cycles by operating in the Fourier domain. Key to this method is its ability to automatically deal with noise and missing data. A learned walking model is then exploited for Bayesian tracking of 3D human motion.
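
As a rough illustration of the modeling step (not the paper's algorithm, which also handles noise, missing data and automatic cycle segmentation), one can represent each already-segmented joint-angle cycle by a fixed number of Fourier coefficients and then compute the mean and principal components of those coefficient vectors:

import numpy as np

def fourier_descriptor(cycle, n_harmonics=10):
    """cycle: (T, J) joint angles over one motion cycle -> fixed-length coefficient vector."""
    coeffs = np.fft.rfft(cycle, axis=0)[:n_harmonics + 1]
    return np.concatenate([coeffs.real.ravel(), coeffs.imag.ravel()])

def cyclic_motion_model(cycles, n_harmonics=10, n_components=5):
    """Mean and principal components of a set of cycles, compared in the Fourier domain."""
    X = np.stack([fourier_descriptor(c, n_harmonics) for c in cycles])
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

# Toy example: 20 noisy walking-like cycles, 50 samples each, 8 joint angles.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
cycles = [np.sin(t[:, None] + np.arange(8)) + 0.05 * rng.standard_normal((50, 8))
          for _ in range(20)]
mean, components = cyclic_motion_model(cycles)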

pdf pdf from publisher DOI [BibTex]


A quantitative evaluation of video-based 3D person tracking

Balan, A. O., Sigal, L., Black, M. J.

In The Second Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, VS-PETS, pages: 349-356, October 2005 (inproceedings)

pdf [BibTex]


Inferring attentional state and kinematics from motor cortical firing rates

Wood, F., Prabhat, Donoghue, J. P., Black, M. J.

In Proc. IEEE Engineering in Medicine and Biology Society, pages: 1544-1547, September 2005 (inproceedings)

pdf [BibTex]


Motor cortical decoding using an autoregressive moving average model

Fisher, J., Black, M. J.

In Proc. IEEE Engineering in Medicine and Biology Society, pages: 1469-1472, September 2005 (inproceedings)

pdf [BibTex]


Fields of Experts: A framework for learning image priors

Roth, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, 2, pages: 860-867, June 2005 (inproceedings)

pdf [BibTex]


A Flow-Based Approach to Vehicle Detection and Background Mosaicking in Airborne Video

Yalcin, H., Collins, R., Black, M. J., Hebert, M.

IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Video Proceedings, pages: 1202, 2005 (conference)

YouTube pdf [BibTex]


On the spatial statistics of optical flow

(Marr Prize, Honorable Mention)

Roth, S., Black, M. J.

In International Conf. on Computer Vision, pages: 42-49, 2005 (inproceedings)

pdf [BibTex]


Modeling neural population spiking activity with Gibbs distributions

Wood, F., Roth, S., Black, M. J.

In Advances in Neural Information Processing Systems 18, pages: 1537-1544, 2005 (inproceedings)

pdf [BibTex]


Energy-based models of motor cortical population activity

Wood, F., Black, M.

Program No. 689.20. 2005 Abstract Viewer/Itinerary Planner, Society for Neuroscience, Washington, DC, 2005 (conference)

abstract [BibTex]

1999


Edges as outliers: Anisotropic smoothing using local image statistics

Black, M. J., Sapiro, G.

In Scale-Space Theories in Computer Vision, Second Int. Conf., Scale-Space ’99, pages: 259-270, LNCS 1682, Springer, Corfu, Greece, September 1999 (inproceedings)

Abstract
Edges are viewed as statistical outliers with respect to local image gradient magnitudes. Within local image regions we compute a robust statistical measure of the gradient variation and use this in an anisotropic diffusion framework to determine a spatially varying "edge-stopping" parameter σ. We show how to determine this parameter for two edge-stopping functions described in the literature (Perona-Malik and the Tukey biweight). Smoothing of the image is related to the local texture: in regions of low texture, small gradient values may be treated as edges, whereas in regions of high texture, large gradient magnitudes are necessary before an edge is preserved. Intuitively these results have similarities with human perceptual phenomena such as masking and "pop-out". Results are shown on a variety of standard images.
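
A minimal sketch of the idea, under loose assumptions rather than the paper's exact estimator: compute gradient magnitudes, take a robust scale estimate (here the median absolute deviation) inside each local window, and use the result as a spatially varying edge-stopping parameter σ that can drive any edge-stopping function g(x, σ) in an anisotropic diffusion scheme:

import numpy as np
from scipy.ndimage import generic_filter, sobel

def local_edge_stopping_sigma(image, window=15):
    """Per-pixel sigma from a robust scale estimate of gradient magnitudes in a local window."""
    gx, gy = sobel(image, axis=1), sobel(image, axis=0)
    grad_mag = np.hypot(gx, gy)
    def robust_scale(values):
        med = np.median(values)
        return 1.4826 * np.median(np.abs(values - med))   # MAD-based scale estimate
    return generic_filter(grad_mag, robust_scale, size=window)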

pdf [BibTex]


Probabilistic detection and tracking of motion discontinuities

(Marr Prize, Honorable Mention)

Black, M. J., Fleet, D. J.

In Int. Conf. on Computer Vision, ICCV-99, pages: 551-558, Corfu, Greece, September 1999 (inproceedings)

pdf [BibTex]


Artscience Sciencart

Black, M. J., Levy, D., PamelaZ

In Art and Innovation: The Xerox PARC Artist-in-Residence Program, pages: 244-300, (Editors: Harris, C.), MIT Press, 1999 (incollection)

Abstract
One of the effects of the PARC Artist In Residence (PAIR) program has been to expose the strong connections between scientists and artists. Both do what they do because they need to do it. They are often called upon to justify their work in order to be allowed to continue to do it. They need to justify it to funders, to sponsoring institutions, corporations, the government, the public. They publish papers, teach workshops, and write grants touting the educational or health benefits of what they do. All of these things are to some extent valid, but the fact of the matter is: artists and scientists do their work because they are driven to do it. They need to explore and create.

This chapter attempts to give a flavor of one multi-way "PAIRing" between performance artist PamelaZ and two PARC researchers, Michael Black and David Levy. The three of us paired up because we found each other interesting. We chose each other. While most artists in the program are paired with a single researcher, Pamela jokingly calls herself a bigamist for choosing two PAIR "husbands" with different backgrounds and interests.

There are no "rules" to the PAIR program; no one told us what to do with our time. Despite this we all had a sense that we needed to produce something tangible during Pamela's year-long residency. In fact, Pamela kept extending her residency because she did not feel as though we had actually made anything concrete. The interesting thing was that all along we were having great conversations, some of which Pamela recorded. What we did not see at the time was that it was these conversations between artists and scientists that are at the heart of the PAIR program and that these conversations were changing the way we thought about our own work and the relationships between science and art.

To give these conversations their due, and to allow the reader into our PAIR interactions, we include two of our many conversations in this chapter.

[BibTex]


Parameterized modeling and recognition of activities

Yacoob, Y., Black, M. J.

Computer Vision and Image Understanding, 73(2):232-247, 1999 (article)

Abstract
In this paper we consider a class of human activities—atomic activities—which can be represented as a set of measurements over a finite temporal window (e.g., the motion of human body parts during a walking cycle) and which has a relatively small space of variations in performance. A new approach for modeling and recognition of atomic activities that employs principal component analysis and analytical global transformations is proposed. The modeling of sets of exemplar instances of activities that are similar in duration and involve similar body part motions is achieved by parameterizing their representation using principal component analysis. The recognition of variants of modeled activities is achieved by searching the space of admissible parameterized transformations that these activities can undergo. This formulation iteratively refines the recognition of the class to which the observed activity belongs and the transformation parameters that relate it to the model in its class. We provide several experiments on recognition of articulated and deformable human motions from image motion parameters.
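
A simplified, hypothetical sketch of the two ingredients, PCA modeling of exemplar measurement windows and a search over admissible global transformations (here reduced to a single temporal scaling), might look as follows; it is illustrative only and omits the iterative refinement described in the abstract:

import numpy as np

def fit_activity_basis(exemplars, n_components=4):
    """exemplars: (N, T, D) measurement windows -> mean and principal components."""
    X = exemplars.reshape(len(exemplars), -1)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def time_scale(window, scale):
    """Resample a (T, D) window to mimic a globally faster or slower execution."""
    T, D = window.shape
    src = np.clip(np.arange(T) * scale, 0, T - 1)
    return np.stack([np.interp(src, np.arange(T), window[:, d]) for d in range(D)], axis=1)

def recognition_error(observation, mean, basis, scales=np.linspace(0.7, 1.4, 15)):
    """Smallest subspace reconstruction error over admissible temporal scalings."""
    best = np.inf
    for s in scales:
        x = time_scale(observation, s).ravel() - mean
        residual = x - basis.T @ (basis @ x)      # project onto the activity subspace
        best = min(best, np.linalg.norm(residual))
    return best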

pdf pdf from publisher DOI [BibTex]


Explaining optical flow events with parameterized spatio-temporal models

Black, M. J.

In IEEE Proc. Computer Vision and Pattern Recognition, CVPR’99, pages: 326-332, IEEE, Fort Collins, CO, 1999 (inproceedings)

pdf video [BibTex]

1998


Summarization of video-taped presentations: Automatic analysis of motion and gesture

Ju, S. X., Black, M. J., Minneman, S., Kimber, D.

IEEE Trans. on Circuits and Systems for Video Technology, 8(5):686-696, September 1998 (article)

Abstract
This paper presents an automatic system for analyzing and annotating video sequences of technical talks. Our method uses a robust motion estimation technique to detect key frames and segment the video sequence into subsequences containing a single overhead slide. The subsequences are stabilized to remove motion that occurs when the speaker adjusts their slides. Any changes remaining between frames in the stabilized sequences may be due to speaker gestures such as pointing or writing, and we use active contours to automatically track these potential gestures. Given the constrained domain, we define a simple set of actions that can be recognized based on the active contour shape and motion. The recognized actions provide an annotation of the sequence that can be used to access a condensed version of the talk from a Web page.

pdf pdf from publisher DOI [BibTex]


Robust anisotropic diffusion

Black, M. J., Sapiro, G., Marimont, D., Heeger, D.

IEEE Transactions on Image Processing, 7(3):421-432, March 1998 (article)

Abstract
Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The edge-stopping function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new edge-stopping function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the line processes allows us to develop new anisotropic diffusion equations that result in a qualitative improvement in the continuity of edges.
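
For illustration, a rough Python sketch of anisotropic diffusion with the Tukey biweight edge-stopping function mentioned above; the scale σ, step size and iteration count are placeholder choices, not values from the paper:

import numpy as np

def tukey_g(x, sigma):
    """Tukey biweight edge-stopping function: zero conductance beyond sigma."""
    g = np.zeros_like(x)
    m = np.abs(x) <= sigma
    g[m] = 0.5 * (1.0 - (x[m] / sigma) ** 2) ** 2
    return g

def robust_diffuse(image, sigma=0.1, step=0.2, iters=50):
    """Perona-Malik-style diffusion with the Tukey edge-stopping function (boundaries wrap)."""
    u = image.astype(float).copy()
    for _ in range(iters):
        dn = np.roll(u, -1, axis=0) - u           # differences to the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (tukey_g(dn, sigma) * dn + tukey_g(ds, sigma) * ds +
                     tukey_g(de, sigma) * de + tukey_g(dw, sigma) * dw)
    return u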

pdf pdf from publisher [BibTex]


The Digital Office: Overview

Black, M., Berard, F., Jepson, A., Newman, W., Saund, E., Socher, G., Taylor, M.

In AAAI Spring Symposium on Intelligent Environments, pages: 1-6, Stanford, March 1998 (inproceedings)

pdf [BibTex]


A framework for modeling appearance change in image sequences

Black, M. J., Fleet, D. J., Yacoob, Y.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 660-667, Mumbai, India, January 1998 (inproceedings)

Abstract
Image "appearance" may change over time due to a variety of causes such as 1) object or camera motion; 2) generic photometric events including variations in illumination (e.g. shadows) and specular reflections; and 3) "iconic changes" which are specific to the objects being viewed and include complex occlusion events and changes in the material properties of the objects. We propose a general framework for representing and recovering these "appearance changes" in an image sequence as a "mixture" of different causes. The approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion.

pdf video [BibTex]


Parameterized modeling and recognition of activities

Yacoob, Y., Black, M. J.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 120-127, Mumbai, India, January 1998 (inproceedings)

Abstract
A framework for modeling and recognition of temporal activities is proposed. The modeling of sets of exemplar activities is achieved by parameterizing their representation in the form of principal components. Recognition of spatio-temporal variants of modeled activities is achieved by parameterizing the search in the space of admissible transformations that the activities can undergo. Experiments on recognition of articulated and deformable object motion from image motion parameters are presented.

pdf [BibTex]


Motion feature detection using steerable flow fields

Fleet, D. J., Black, M. J., Jepson, A. D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-98, pages: 274-281, IEEE, Santa Barbara, CA, 1998 (inproceedings)

Abstract
The estimation and detection of occlusion boundaries and moving bars are important and challenging problems in image sequence analysis. Here, we model such motion features as linear combinations of steerable basis flow fields. These models constrain the interpretation of image motion, and are used in the same way as translational or affine motion models. We estimate the subspace coefficients of the motion feature models directly from spatiotemporal image derivatives using a robust regression method. From the subspace coefficients we detect the presence of a motion feature and solve for the orientation of the feature and the relative velocities of the surfaces. Our method does not require the prior computation of optical flow and recovers accurate estimates of orientation and velocity.
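
An illustrative sketch of the estimation step, under assumptions: the flow in a patch is modeled as a linear combination of basis flow fields, and the coefficients are solved directly from spatiotemporal derivatives via the brightness-constancy constraint, with a few iteratively reweighted least-squares steps standing in for the robust regression the abstract refers to:

import numpy as np

def subspace_flow_coefficients(Ix, Iy, It, basis, n_irls=5, scale=1.0):
    """
    Ix, Iy, It : (H, W) spatiotemporal derivatives of an image patch
    basis      : (K, H, W, 2) steerable basis flow fields
    returns    : (K,) coefficients a such that flow ~ sum_k a[k] * basis[k]
    """
    K = basis.shape[0]
    # Brightness constancy per pixel: Ix*u + Iy*v + It = 0, with (u, v) in the subspace.
    A = np.stack([(Ix * basis[k, ..., 0] + Iy * basis[k, ..., 1]).ravel()
                  for k in range(K)], axis=1)      # (H*W, K) design matrix
    b = -It.ravel()
    a = np.linalg.lstsq(A, b, rcond=None)[0]       # least-squares initialization
    for _ in range(n_irls):                        # iteratively reweighted least squares
        r = A @ a - b
        w = 1.0 / (1.0 + (r / scale) ** 2)         # down-weight outlier pixels
        Aw = A * w[:, None]
        a = np.linalg.lstsq(Aw.T @ A, Aw.T @ b, rcond=None)[0]
    return a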

pdf [BibTex]


PLAYBOT: A visually-guided robot for physically disabled children

Tsotsos, J. K., Verghese, G., Dickinson, S., Jenkin, M., Jepson, A., Milios, E., Nuflo, F., Stevenson, S., Black, M., Metaxas, D., Culhane, S., Ye, Y., Mann, R.

Image & Vision Computing, Special Issue on Vision for the Disabled, 16(4):275-292, 1998 (article)

Abstract
This paper overviews the PLAYBOT project, a long-term, large-scale research program whose goal is to provide a directable robot which may enable physically disabled children to access and manipulate toys. This domain is the first test domain, but there is nothing inherent in the design of PLAYBOT that prohibits its extension to other tasks. The research is guided by several important goals: vision is the primary sensor; vision is task directed; the robot must be able to visually search its environment; object and event recognition are basic capabilities; environments must be natural and dynamic; users and environments are assumed to be unpredictable; task direction and reactivity must be smoothly integrated; and safety is of high importance. The emphasis of the research has been on vision for the robot, as this is the most challenging research aspect and the major bottleneck to the development of intelligent robots. Since the control framework is behavior-based, the visual capabilities of PLAYBOT are described in terms of visual behaviors. Many of the components of PLAYBOT are briefly described and several examples of implemented sub-systems are shown. The paper concludes with a description of the current overall system implementation, and a complete example of PLAYBOT performing a simple task.

pdf pdf from publisher DOI [BibTex]


Visual surveillance of human activity

Davis, L. S., Fejes, S., Harwood, D., Yacoob, Y., Haritaoglu, I., Black, M.

In Asian Conference on Computer Vision, ACCV, 1998 (inproceedings)

pdf [BibTex]


A Probabilistic framework for matching temporal trajectories: Condensation-based recognition of gestures and expressions

Black, M. J., Jepson, A. D.

In European Conf. on Computer Vision, ECCV-98, pages: 909-924, Freiburg, Germany, 1998 (inproceedings)

pdf [BibTex]


EigenTracking: Robust matching and tracking of articulated objects using a view-based representation

Black, M. J., Jepson, A.

International Journal of Computer Vision, 26(1):63-84, 1998 (article)

Abstract
This paper describes an approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems and we reformulate the reconstruction problem as one of robust estimation. Second we define a “subspace constancy assumption” that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image we define a multi-scale eigenspace representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular we use this “EigenTracking” technique to track and recognize the gestures of a moving hand.
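
One ingredient of the approach, replacing the least-squares eigenspace reconstruction with a robust one so that outlier pixels (e.g. occlusions) are down-weighted, can be sketched as follows; this is hypothetical code and omits the affine alignment between eigenspace and image that EigenTracking also solves for:

import numpy as np

def robust_subspace_reconstruction(patch, mean, basis, sigma=0.1, iters=10):
    """
    patch : (P,) observed image patch, flattened
    mean  : (P,) eigenspace mean image
    basis : (K, P) orthonormal eigen-images
    Returns the robust reconstruction and the final per-pixel weights.
    """
    x = patch - mean
    c = basis @ x                                  # least-squares coefficients to start
    for _ in range(iters):
        r = x - basis.T @ c                        # per-pixel residuals
        w = sigma ** 2 / (sigma ** 2 + r ** 2)     # Lorentzian-style robust weights
        Bw = basis * w                             # weight each pixel's equation
        c = np.linalg.solve(Bw @ basis.T, Bw @ x)  # weighted least squares
    return mean + basis.T @ c, w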

pdf pdf from publisher video [BibTex]


Recognizing temporal trajectories using the Condensation algorithm

Black, M. J., Jepson, A. D.

In Int. Conf. on Automatic Face and Gesture Recognition, pages: 16-21, Nara, Japan, 1998 (inproceedings)

pdf [BibTex]


Looking at people in action - An overview

Yacoob, Y., Davis, L. S., Black, M., Gavrila, D., Horprasert, T., Morimoto, C.

In Computer Vision for Human–Machine Interaction, (Editors: R. Cipolla and A. Pentland), Cambridge University Press, 1998 (incollection)

publisher site google books [BibTex]
