2013


A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them

Sun, D., Roth, S., Black, M. J.

Technical Report CS-10-03, Brown University, Department of Computer Science, January 2013 (techreport)

pdf [BibTex]

Estimating Human Pose with Flowing Puppets

Zuffi, S., Romero, J., Schmid, C., Black, M. J.

In IEEE International Conference on Computer Vision (ICCV), pages: 3312-3319, 2013 (inproceedings)

Abstract
We address the problem of upper-body human pose estimation in uncontrolled monocular video sequences, without manual initialization. Most current methods focus on isolated video frames and often fail to correctly localize arms and hands. Inferring pose over a video sequence is advantageous because poses of people in adjacent frames exhibit properties of smooth variation due to the nature of human and camera motion. To exploit this, previous methods have used prior knowledge about distinctive actions or generic temporal priors combined with static image likelihoods to track people in motion. Here we take a different approach based on a simple observation: Information about how a person moves from frame to frame is present in the optical flow field. We develop an approach for tracking articulated motions that "links" articulated shape models of people in adjacent frames through the dense optical flow. Key to this approach is a 2D shape model of the body that we use to compute how the body moves over time. The resulting "flowing puppets" provide a way of integrating image evidence across frames to improve pose inference. We apply our method to a challenging dataset of TV video sequences and show state-of-the-art performance.
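
The central mechanism, propagating a 2D body model from one frame to the next with the dense optical flow, can be illustrated with a minimal sketch; the puppet is reduced here to a set of 2D contour points and the flow is assumed to be given as a per-pixel displacement field (both are simplifications of the paper's model):

```python
import numpy as np

def predict_puppet(points, flow):
    """Warp 2D puppet contour points into the next frame using a dense flow field.

    points : (N, 2) array of (x, y) coordinates in frame t
    flow   : (H, W, 2) dense optical flow from frame t to t+1 (dx, dy per pixel)

    Returns the predicted point locations in frame t+1. Sketch of the
    "flowing puppet" idea: flow sampled at the model's silhouette carries the
    pose hypothesis from one frame to the next.
    """
    h, w, _ = flow.shape
    x = np.clip(points[:, 0], 0, w - 1)
    y = np.clip(points[:, 1], 0, h - 1)

    # Bilinear interpolation of the flow field at sub-pixel point locations.
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0

    f = (flow[y0, x0] * ((1 - fx) * (1 - fy))[:, None]
         + flow[y0, x1] * (fx * (1 - fy))[:, None]
         + flow[y1, x0] * ((1 - fx) * fy)[:, None]
         + flow[y1, x1] * (fx * fy)[:, None])

    return points + f
```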

pdf code data DOI Project Page Project Page Project Page [BibTex]

A Comparison of Directional Distances for Hand Pose Estimation

Tzionas, D., Gall, J.

In German Conference on Pattern Recognition (GCPR), 8142, pages: 131-141, Lecture Notes in Computer Science, (Editors: Weickert, Joachim and Hein, Matthias and Schiele, Bernt), Springer, 2013 (inproceedings)

Abstract
Benchmarking methods for 3D hand tracking is still an open problem due to the difficulty of acquiring ground truth data. We introduce a new dataset and benchmarking protocol that is insensitive to the accumulative error of other protocols. To this end, we create testing frame pairs of increasing difficulty and measure the pose estimation error separately for each of them. This approach gives new insights and allows us to accurately study the performance of each feature or method without employing a full tracking pipeline. Following this protocol, we evaluate various directional distances in the context of silhouette-based 3D hand tracking, expressed as special cases of a generalized Chamfer distance form. An appropriate parameter setup is proposed for each of them, and a comparative study reveals the best performing method in this context.
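
As an illustration of the kind of distance being compared, below is a minimal sketch of a directional Chamfer distance between two edge point sets; the specific cost (Euclidean distance plus a weighted orientation difference) and the parameter `lam` are illustrative assumptions, one generic special case of the generalized form rather than the paper's exact setup:

```python
import numpy as np

def directional_chamfer(model_pts, model_ori, obs_pts, obs_ori, lam=0.5):
    """Directional Chamfer distance between two edge point sets.

    model_pts, obs_pts : (N, 2) / (M, 2) edge point coordinates
    model_ori, obs_ori : (N,) / (M,) edge orientations in radians
    lam                : weight of the orientation term against the
                         positional term (hypothetical parameter)

    Each model point is matched to its cheapest observed point under a cost
    mixing Euclidean distance and orientation difference; the mean cost over
    model points is returned. Brute-force O(N*M) sketch of one special case.
    """
    # Pairwise positional distances, shape (N, M).
    d_pos = np.linalg.norm(model_pts[:, None, :] - obs_pts[None, :, :], axis=2)
    # Orientation differences folded into [0, pi/2] (edges have no direction).
    d_ori = np.abs(model_ori[:, None] - obs_ori[None, :]) % np.pi
    d_ori = np.minimum(d_ori, np.pi - d_ori)
    cost = d_pos + lam * d_ori
    return cost.min(axis=1).mean()
```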

pdf Supplementary Project Page link (url) DOI Project Page [BibTex]

Simultaneous Cast Shadows, Illumination and Geometry Inference Using Hypergraphs

Panagopoulos, A., Wang, C., Samaras, D., Paragios, N.

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(2):437-449, 2013 (article)

pdf [BibTex]

Right Ventricle Segmentation by Temporal Information Constrained Gradient Vector Flow

Yang, X., Yeo, S. Y., Su, Y., Lim, C., Wan, M., Zhong, L., Tan, R. S.

In IEEE International Conference on Systems, Man, and Cybernetics, 2013 (inproceedings)

Abstract
Evaluation of right ventricular (RV) structure and function is of importance in the management of most cardiac disorders, but segmentation of the RV has always been considered challenging due to the low contrast between the myocardium and its surroundings and the high shape variability of the RV. In this paper, we present a 2D + T active contour model for segmentation and tracking of the RV endocardium on cardiac magnetic resonance (MR) images. To take into account the temporal information between adjacent frames, we propose to integrate time-dependent constraints into the energy functional of the classical gradient vector flow (GVF). As a result, prior motion knowledge of the RV is introduced into the deformation process through the time-dependent constraints in the proposed GVF-T model. A weighting parameter is introduced to adjust the weight of the temporal information against the image data itself. The additional external edge forces derived from the temporal constraints can aid the RV segmentation and lead to better segmentation performance. The effectiveness of the proposed approach is supported by experimental results on synthetic and cardiac MR images.
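
For reference, the classical GVF energy the abstract refers to, together with a schematic time-dependent term of the kind the GVF-T model adds (the exact form of the temporal constraint is not given here; the quadratic coupling and its weight are assumptions, with \lambda playing the role of the weighting parameter mentioned in the abstract):

```latex
% Classical GVF energy over the flow field v = (u, v), with edge map f:
E_{\mathrm{GVF}}(\mathbf{v}) =
  \iint \mu \left( u_x^2 + u_y^2 + v_x^2 + v_y^2 \right)
  + |\nabla f|^2 \, |\mathbf{v} - \nabla f|^2 \; dx \, dy

% Schematic GVF-T variant: a time-dependent term couples the field at frame t
% to the field of the previous frame, weighted by lambda (assumed form):
E_{\mathrm{GVF\text{-}T}}(\mathbf{v}_t) =
  E_{\mathrm{GVF}}(\mathbf{v}_t)
  + \lambda \iint |\mathbf{v}_t - \mathbf{v}_{t-1}|^2 \; dx \, dy
```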

[BibTex]

Dynamic Probabilistic Volumetric Models

Ulusoy, A. O., Biris, O., Mundy, J. L.

In ICCV, pages: 505-512, 2013 (inproceedings)

Abstract
This paper presents a probabilistic volumetric framework for image-based modeling of general dynamic 3-d scenes. The framework is targeted towards high-quality modeling of complex scenes evolving over thousands of frames. Extensive storage and computational resources are required in processing large scale space-time (4-d) data. Existing methods typically store separate 3-d models at each time step and do not address such limitations. A novel 4-d representation is proposed that adaptively subdivides in space and time to explain the appearance of 3-d dynamic surfaces. This representation is shown to achieve compression of 4-d data and provide efficient spatio-temporal processing. The advantages of the proposed framework are demonstrated on standard datasets using free-viewpoint video and 3-d tracking applications.
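
A toy sketch of the adaptive space-time subdivision idea: a cell covering a spatial box and a frame interval is split in time when its appearance changes over the interval, and in space when it is spatially too coarse. The cell layout, the `variation` callback, and the split-in-time-first rule are illustrative assumptions, not the paper's actual representation or criterion:

```python
from dataclasses import dataclass, field

@dataclass
class STCell:
    """Space-time cell: an axis-aligned cubic box over a frame interval."""
    box: tuple                       # (xmin, ymin, zmin, size)
    t0: int
    t1: int
    children: list = field(default_factory=list)

def refine(cell, variation, spatial_tol, temporal_tol, min_size=1, min_frames=1):
    """Recursively subdivide a cell in space or in time.

    variation(box, t0, t1) measures how badly a single appearance/occupancy
    summary fits the image evidence inside the cell (a stand-in for the
    model's actual criterion).
    """
    x, y, z, s = cell.box
    v = variation(cell.box, cell.t0, cell.t1)
    # Split in time first if appearance changes over the interval.
    if v > temporal_tol and cell.t1 - cell.t0 > min_frames:
        mid = (cell.t0 + cell.t1) // 2
        cell.children = [STCell(cell.box, cell.t0, mid),
                         STCell(cell.box, mid, cell.t1)]
    # Otherwise split in space if the cell is still spatially coarse.
    elif v > spatial_tol and s > min_size:
        h = s / 2
        cell.children = [STCell((x + dx * h, y + dy * h, z + dz * h, h),
                                cell.t0, cell.t1)
                         for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
    for child in cell.children:
        refine(child, variation, spatial_tol, temporal_tol, min_size, min_frames)
```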

video pdf DOI [BibTex]

A Generic Deformation Model for Dense Non-Rigid Surface Registration: a Higher-Order MRF-based Approach

Zeng, Y., Wang, C., Gu, X., Samaras, D., Paragios, N.

In IEEE International Conference on Computer Vision (ICCV), pages: 3360-3367, 2013 (inproceedings)

pdf [BibTex]

Modeling Shapes with Higher-Order Graphs: Theory and Applications

Wang, C., Zeng, Y., Samaras, D., Paragios, N.

In Shape Perception in Human and Computer Vision: An Interdisciplinary Perspective, (Editors: Zygmunt Pizlo and Sven Dickinson), Springer, 2013 (incollection)

Publisher's site [BibTex]

Model Reconstruction of Patient-Specific Cardiac Mesh from Segmented Contour Lines

Lim, C. W., Su, Y., Yeo, S. Y., Ng, G. M., Nguyen, V. T., Zhong, L., Tan, R. S., Poh, K. K., Chai, P.

In Asia Pacific Congress on Computational Mechanics, 2013 (inproceedings)

Abstract
We propose an automatic algorithm for the reconstruction of a set of patient-specific dynamic cardiac mesh models with 1-to-1 mesh correspondence over the whole cardiac cycle. This work focuses both on the reconstruction of the initial 3D model of the heart and on the consistent mapping of the vertex positions throughout all the 3D meshes. This process is technically more challenging for MRI than for CT due to the wide interval spacing between MRI images, making overlapping blood vessels much harder to discern. We propose a tree-based connectivity data structure to perform a filtering process that eliminates weak connections between contours on adjacent slices. The reconstructed 3D model from the first time step is used as a base template model and deformed to fit the segmented contours in the next time step. Our algorithm has been tested on an actual acquisition of cardiac MRI images over one cardiac cycle.
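
A simplified sketch of the slice-to-slice connectivity filtering step: contours on adjacent MRI slices are linked only when they overlap sufficiently in the image plane, so that weak connections (e.g. from overlapping blood vessels) can be discarded before surface reconstruction. The overlap ratio and the threshold `min_overlap` are illustrative assumptions rather than the paper's exact definitions:

```python
def contour_links(slices, min_overlap=0.3):
    """Link contours on adjacent slices and drop weak connections.

    slices : list over image slices; each slice is a list of contours,
             where a contour is a set of (row, col) pixels inside it.
    Returns a list of ((slice_i, contour_a), (slice_i+1, contour_b)) edges
    whose overlap ratio exceeds min_overlap (hypothetical threshold).
    """
    edges = []
    for i in range(len(slices) - 1):
        for a, ca in enumerate(slices[i]):
            for b, cb in enumerate(slices[i + 1]):
                if not ca or not cb:
                    continue
                # Overlap relative to the smaller contour of the pair.
                ratio = len(ca & cb) / min(len(ca), len(cb))
                if ratio >= min_overlap:
                    edges.append(((i, a), (i + 1, b)))
    return edges
```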

[BibTex]

Random Forests for Real Time 3D Face Analysis

Fanelli, G., Dantone, M., Gall, J., Fossati, A., van Gool, L.

International Journal of Computer Vision, 101(3):437-458, Springer, 2013 (article)

Abstract
We present a random forest-based framework for real time head pose estimation from depth images and extend it to localize a set of facial features in 3D. Our algorithm takes a voting approach, where each patch extracted from the depth image can directly cast a vote for the head pose or each of the facial features. Our system proves capable of handling large rotations, partial occlusions, and the noisy depth data acquired using commercial sensors. Moreover, the algorithm works on each frame independently and achieves real time performance without resorting to parallel computations on a GPU. We present extensive experiments on publicly available, challenging datasets and present a new annotated head pose database recorded using a Microsoft Kinect.
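
The voting scheme can be sketched as follows: every depth patch is passed down every tree, confident leaves cast a pose vote, and the votes are aggregated into a single estimate. The tree interface, the variance-based confidence test, and the plain averaging below are simplified assumptions (the actual system clusters votes rather than averaging them):

```python
import numpy as np

def estimate_head_pose(patches, forest, max_trace=400.0):
    """Aggregate per-patch votes from a regression forest into one pose estimate.

    patches   : iterable of depth-patch feature vectors
    forest    : list of trees; each tree is assumed to expose
                predict(patch) -> (vote, trace), where vote is a 6-vector
                (3D head position + yaw/pitch/roll) and trace is the leaf's
                variance (low variance = confident leaf). Hypothetical API.
    max_trace : hypothetical confidence threshold on the leaf variance.
    """
    votes = []
    for patch in patches:
        for tree in forest:
            vote, trace = tree.predict(patch)
            if trace < max_trace:          # keep only confident leaves
                votes.append(vote)
    if not votes:
        return None
    return np.mean(votes, axis=0)          # simple average of surviving votes
```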

data and code publisher's site pdf DOI Project Page [BibTex]

Markerless Motion Capture of Multiple Characters Using Multi-view Image Segmentation

Liu, Y., Gall, J., Stoll, C., Dai, Q., Seidel, H., Theobalt, C.

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(11):2720-2735, 2013 (article)

Abstract
Capturing the skeleton motion and detailed time-varying surface geometry of multiple, closely interacting people is a very challenging task, even in a multicamera setup, due to frequent occlusions and ambiguities in feature-to-person assignments. To address this task, we propose a framework that exploits multiview image segmentation. To this end, a probabilistic shape and appearance model is employed to segment the input images and to assign each pixel uniquely to one person. Given the articulated template models of each person and the labeled pixels, a combined optimization scheme, which splits the skeleton pose optimization problem into a local one and a lower-dimensional global one, is applied to each individual in turn, followed by surface estimation to capture detailed nonrigid deformations. We show on various sequences that our approach can capture the 3D motion of humans accurately even if they move rapidly, if they wear wide apparel, and if they are engaged in challenging multiperson motions, including dancing, wrestling, and hugging.
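
The unique pixel-to-person assignment that decouples the per-person optimizations can be sketched as a simple argmax over per-person likelihood maps; the constant background likelihood and the map format below are illustrative assumptions, not the paper's probabilistic shape and appearance model:

```python
import numpy as np

def assign_pixels(prob_maps, bg_prob=0.2):
    """Assign each pixel uniquely to one person (or background).

    prob_maps : (P, H, W) array; prob_maps[p] is person p's shape/appearance
                likelihood at each pixel (a stand-in for the actual model).
    bg_prob   : constant background likelihood (hypothetical).

    Returns an (H, W) label image with values 0..P-1 for persons and -1 for
    background; the uniqueness of the assignment is what allows the pose of
    each person to be optimized separately afterwards.
    """
    best_p = prob_maps.argmax(axis=0)      # most likely person per pixel
    best_prob = prob_maps.max(axis=0)
    return np.where(best_prob > bg_prob, best_p, -1)
```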

data and video pdf DOI Project Page [BibTex]

Viewpoint and pose in body-form adaptation

Sekunova, A., Black, M., Parkinson, L., Barton, J. J. S.

Perception, 42(2):176-186, 2013 (article)

Abstract
Faces and bodies are complex structures, perception of which can play important roles in person identification and inference of emotional state. Face representations have been explored using behavioural adaptation: in particular, studies have shown that face aftereffects show relatively broad tuning for viewpoint, consistent with origin in a high-level structural descriptor far removed from the retinal image. Our goals were to determine first, if body aftereffects also showed a degree of viewpoint invariance, and second if they also showed pose invariance, given that changes in pose create even more dramatic changes in the 2-D retinal image. We used a 3-D model of the human body to generate headless body images, whose parameters could be varied to generate different body forms, viewpoints, and poses. In the first experiment, subjects adapted to varying viewpoints of either slim or heavy bodies in a neutral stance, followed by test stimuli that were all front-facing. In the second experiment, we used the same front-facing bodies in neutral stance as test stimuli, but compared adaptation from bodies in the same neutral stance to adaptation with the same bodies in different poses. We found that body aftereffects were obtained over substantial viewpoint changes, with no significant decline in aftereffect magnitude with increasing viewpoint difference between adapting and test images. Aftereffects also showed transfer across one change in pose but not across another. We conclude that body representations may have more viewpoint invariance than faces, and demonstrate at least some transfer across pose, consistent with a high-level structural description.

Keywords: aftereffect, shape, face, representation

pdf from publisher abstract pdf link (url) Project Page [BibTex]

Reconstructing patient-specific cardiac models from contours via Delaunay triangulation and graph-cuts

Wan, M., Lim, C., Zhang, J., Su, Y., Yeo, S. Y., Wang, D., Tan, R. S., Zhong, L.

In International Conference of the IEEE Engineering in Medicine and Biology Society, pages: 2976-2979, 2013 (inproceedings)

[BibTex]

Nonlinearly Constrained MRFs: Exploring the Intrinsic Dimensions of Higher-Order Cliques

Zeng, Y., Wang, C., Soatto, S., Yau, S.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013 (inproceedings)

pdf [BibTex]

Class-Specific Hough Forests for Object Detection

Gall, J., Lempitsky, V.

In Decision Forests for Computer Vision and Medical Image Analysis, pages: 143-157, 11, (Editors: Criminisi, A. and Shotton, J.), Springer, 2013 (incollection)

code Project Page [BibTex]

Regional comparison of left ventricle systolic wall stress reveals intraregional uniformity in healthy subjects

Teo, S. K., Yeo, S. Y., Tan, M. L., Lim, C. W., Zhong, L., Tan, R. S., Su, Y.

In Computing in Cardiology Conference, pages: 575-578, 2013 (inproceedings)

Abstract
This study aimed to assess the feasibility of using the regional uniformity of left ventricle (LV) wall stress (WS) to diagnose patients with myocardial infarction (MI). We present a novel method using a similarity map that measures the degree of uniformity in nominal systolic WS across pairs of segments within the same patient. The values of the nominal WS are computed at each vertex point from a 1-to-1 corresponding mesh pair of the LV at the end-diastole (ED) and end-systole (ES) phases. The 3D geometries of the LV at ED and ES are reconstructed from border-delineated MRI images, and the 1-to-1 mesh pair is generated using a strain-energy minimization approach. The LV is then partitioned into 16 segments based on a published clinical standard, and the nominal WS histogram distribution for each segment is computed. A similarity index is then computed for each pair of histogram distributions to generate a 16-by-16 similarity map. In an initial study involving 12 MI patients and 9 controls, we observed greater intraregional uniformity in the controls than in the patients. Our results suggest that the regional uniformity of the nominal systolic WS, in the form of a similarity map, can potentially be used as a discriminant between MI patients and normal controls.
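
A minimal sketch of how such a 16-by-16 similarity map could be assembled from per-segment wall-stress values is shown below; histogram intersection is used as the similarity index purely for illustration, since the abstract does not specify which index is computed, and the bin count and value range are assumptions:

```python
import numpy as np

def ws_similarity_map(segment_ws, bins=32, ws_range=(0.0, 1.0)):
    """Build a 16-by-16 similarity map from per-segment wall-stress values.

    segment_ws : list of 16 arrays of nominal systolic WS values, one array
                 per LV segment (values at the segment's mesh vertices).
    bins, ws_range : histogram parameters (hypothetical choices).
    """
    hists = []
    for ws in segment_ws:
        h, _ = np.histogram(ws, bins=bins, range=ws_range)
        hists.append(h / max(h.sum(), 1))       # normalized distribution
    n = len(hists)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Histogram intersection as the (assumed) similarity index.
            sim[i, j] = np.minimum(hists[i], hists[j]).sum()
    return sim
```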

[BibTex]

Non-parametric hand pose estimation with object context

Romero, J., Kjellström, H., Ek, C. H., Kragic, D.

Image and Vision Computing, 31(8):555-564, 2013 (article)

Abstract
In the spirit of recent work on contextual recognition and estimation, we present a method for estimating the pose of human hands, employing information about the shape of the object in the hand. Despite the fact that most applications of human hand tracking involve grasping and manipulation of objects, the majority of methods in the literature assume a free hand, isolated from the surrounding environment. Occlusion of the hand by grasped objects does in fact often pose a severe challenge to the estimation of hand pose. In the presented method, object occlusion is not only compensated for, it contributes to the pose estimation in a contextual fashion; this without an explicit model of object shape. Our hand tracking method is non-parametric, performing a nearest neighbor search in a large database (.. entries) of hand poses with and without grasped objects. The system, which operates in real time, is robust to self-occlusions, object occlusions and segmentation errors, and provides full hand pose reconstruction from monocular video. Temporal consistency in hand pose is taken into account, without explicitly tracking the hand in the high-dimensional pose space. Experiments show the non-parametric method to outperform other state-of-the-art regression methods, while operating at a significantly lower computational cost than comparable model-based hand tracking methods.
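
A minimal sketch of the non-parametric lookup: database descriptors are indexed once, each query retrieves its nearest neighbors, and temporal consistency is imposed softly by down-weighting candidates far from the previous estimate. The descriptor, the kd-tree index, the Gaussian weights, and the pose averaging are all simplifying assumptions about the approach:

```python
import numpy as np
from scipy.spatial import cKDTree

class HandPoseLookup:
    """Non-parametric hand pose estimation by nearest-neighbour lookup.

    features : (N, D) image descriptors for the database entries
               (e.g. silhouette descriptors of hand + grasped object)
    poses    : (N, P) hand pose parameters matching those entries
    """

    def __init__(self, features, poses):
        self.tree = cKDTree(features)      # index built once, queried per frame
        self.poses = poses

    def estimate(self, feature, prev_pose=None, k=10, sigma=1.0):
        # Retrieve the k nearest database entries in descriptor space.
        dists, idx = self.tree.query(feature, k=k)
        candidates = self.poses[idx]
        weights = np.exp(-np.asarray(dists) ** 2 / (2 * sigma ** 2))
        # Soft temporal consistency: favour candidates near the previous pose.
        if prev_pose is not None:
            pose_d = np.linalg.norm(candidates - prev_pose, axis=1)
            weights *= np.exp(-pose_d ** 2 / (2 * sigma ** 2))
        weights /= weights.sum()
        return (weights[:, None] * candidates).sum(axis=0)
```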

Publisher site pdf link (url) [BibTex]

Image Gradient Based Level Set Methods in 2D and 3D

Xie, X., Yeo, S. Y., Mirmehdi, M., Sazonov, I., Nithiarasu, P.

In Deformation Models: Tracking, Animation and Applications, pages: 101-120, (Editors: Manuel González Hidalgo and Arnau Mir Torres and Javier Varona Gómez), Springer, 2013 (inbook)

Abstract
This chapter presents an image gradient based approach to perform 2D and 3D deformable model segmentation using level sets. The 2D method uses an external force field that is based on magnetostatics and hypothesized magnetic interactions between the active contour and object boundaries. The major contribution of the method is that the interaction of its forces can greatly improve the active contour in capturing complex geometries and dealing with difficult initializations, weak edges and broken boundaries. This method is then generalized to 3D by reformulating its external force based on geometrical interactions between the relative geometries of the deformable model and the object boundary characterized by image gradient. The evolution of the deformable model is solved using the level set method so that topological changes are handled automatically. The relative geometrical configurations between the deformable model and the object boundaries contribute to a dynamic vector force field that changes accordingly as the deformable model evolves. The geometrically induced dynamic interaction force has been shown to greatly improve the deformable model performance in acquiring complex geometries and highly concave boundaries, and it gives the deformable model a high degree of invariance to initialization configurations. The voxel interactions across the whole image domain provide a global view of the object boundary representation, giving the external force a long attraction range. The bidirectionality of the external force field allows the new deformable model to deal with arbitrary cross-boundary initializations, and facilitates the handling of weak edges and broken boundaries.
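
A generic explicit level set update under an external vector force field, of the kind the chapter builds its segmentation on, is sketched below; the force field is left abstract (in the chapter it is derived from image gradients and geometric interactions), and re-initialization and narrow-band bookkeeping are omitted:

```python
import numpy as np

def level_set_step(phi, force_x, force_y, dt=0.2, mu=0.1):
    """One explicit update of a level set function phi under an external force.

    phi              : (H, W) level set / signed distance function
    force_x, force_y : (H, W) external vector force field (here an arbitrary
                       input; in the chapter it is the gradient-based
                       interaction force)
    The update advects phi along the force field (d(phi)/dt = -F . grad(phi))
    and adds a small Laplacian smoothing term as a stand-in for curvature
    regularization. Toy sketch only.
    """
    gy, gx = np.gradient(phi)
    advect = force_x * gx + force_y * gy
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    return phi - dt * advect + dt * mu * lap
```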

[BibTex]
