

2014


Model Transport: Towards Scalable Transfer Learning on Manifolds

Freifeld, O., Hauberg, S., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 1378-1385, Columbus, Ohio, USA, June 2014 (inproceedings)

Abstract
We consider the intersection of two research fields: transfer learning and statistics on manifolds. In particular, we consider, for manifold-valued data, transfer learning of tangent-space models such as Gaussian distributions, PCA, regression, or classifiers. Though one would hope to simply use ordinary R^n transfer-learning ideas, the manifold structure prevents it. We overcome this by basing our method on inner-product-preserving parallel transport, a well-known tool widely used in other problems of statistics on manifolds in computer vision. At first, this straightforward idea seems to suffer from an obvious shortcoming: transporting large datasets is prohibitively expensive, hindering scalability. Fortunately, with our approach, we never transport data. Rather, we show how the statistical models themselves can be transported, and prove that for the tangent-space models above, the transport “commutes” with learning. Consequently, our compact framework, applicable to a large class of manifolds, is not restricted by the size of either the training or test sets. We demonstrate the approach by transferring PCA and logistic-regression models of real-world data involving 3D shapes and image descriptors.

pdf SupMat Video poster DOI Project Page [BibTex]



Robot Arm Pose Estimation through Pixel-Wise Part Classification

Bohg, J., Romero, J., Herzog, A., Schaal, S.

In IEEE International Conference on Robotics and Automation (ICRA) 2014, pages: 3143-3150, June 2014 (inproceedings)

Abstract
We propose to frame the problem of marker-less robot arm pose estimation as a pixel-wise part classification problem. As input, we use a depth image in which each pixel is classified to be either from a particular robot part or the background. The classifier is a random decision forest trained on a large number of synthetically generated and labeled depth images. From all the training samples ending up at a leaf node, a set of offsets is learned that votes for relative joint positions. Pooling these votes over all foreground pixels and subsequent clustering gives us an estimate of the true joint positions. Due to the intrinsic parallelism of pixel-wise classification, this approach can run in super real-time and is more efficient than previous ICP-like methods. We quantitatively evaluate the accuracy of this approach on synthetic data. We also demonstrate that the method produces accurate joint estimates on real data despite being purely trained on synthetic data.

video code pdf DOI Project Page [BibTex]



Efficient Non-linear Markov Models for Human Motion

Lehrmann, A. M., Gehler, P. V., Nowozin, S.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 1314-1321, IEEE, June 2014 (inproceedings)

Abstract
Dynamic Bayesian networks such as Hidden Markov Models (HMMs) are successfully used as probabilistic models for human motion. The use of hidden variables makes them expressive models, but inference is only approximate and requires procedures such as particle filters or Markov chain Monte Carlo methods. In this work we propose to instead use simple Markov models that only model observed quantities. We retain a highly expressive dynamic model by using interactions that are nonlinear and non-parametric. A presentation of our approach in terms of latent variables shows logarithmic growth, in the number of latent states, for the computation of exact log-likelihoods. We validate our model on human motion capture data and demonstrate state-of-the-art performance on action recognition and motion completion tasks.

Project page pdf DOI Project Page [BibTex]



Grassmann Averages for Scalable Robust PCA

Hauberg, S., Feragen, A., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 3810-3817, Columbus, Ohio, USA, June 2014 (inproceedings)

Abstract
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase – "big data" implies "big outliers". While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), which expresses dimensionality reduction as an average of the subspaces spanned by the data. Because averages can be efficiently computed, we immediately gain scalability. GA is inherently more robust than PCA, but we show that they coincide for Gaussian data. We exploit that averages can be made robust to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. Robustness can be with respect to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements, making it scalable to "big noisy data." We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie.

pdf code supplementary material tutorial video results video talk poster DOI Project Page [BibTex]



Posebits for Monocular Human Pose Estimation

Pons-Moll, G., Fleet, D. J., Rosenhahn, B.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 2345-2352, Columbus, Ohio, USA, June 2014 (inproceedings)

Abstract
We advocate the inference of qualitative information about 3D human pose, called posebits, from images. Posebits represent boolean geometric relationships between body parts (e.g., left-leg in front of right-leg, or hands close to each other). The advantages of posebits as a mid-level representation are: 1) for many tasks of interest, such qualitative pose information may be sufficient (e.g., semantic image retrieval); 2) it is relatively easy to annotate large image corpora with posebits, as it simply requires answers to yes/no questions; and 3) they help resolve challenging pose ambiguities and therefore facilitate the difficult task of image-based 3D pose estimation. We introduce posebits, a posebit database, a method for selecting useful posebits for pose estimation and a structural SVM model for posebit inference. Experiments show the use of posebits for semantic image retrieval and for improving 3D pose estimation.

pdf Project Page Project Page [BibTex]



Simultaneous Underwater Visibility Assessment, Enhancement and Improved Stereo

Roser, M., Dunbabin, M., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 3840-3847, Hong Kong, China, June 2014 (conference)

Abstract
Vision-based underwater navigation and obstacle avoidance demands robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for simultaneous underwater image quality assessment, visibility enhancement and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color-corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allows real-time implementation onboard an Autonomous Underwater Vehicle for improved navigation and obstacle avoidance performance.

pdf DOI [BibTex]



Preserving Modes and Messages via Diverse Particle Selection

Pacheco, J., Zuffi, S., Black, M. J., Sudderth, E.

In Proceedings of the 31st International Conference on Machine Learning (ICML-14), 32(1):1152-1160, JMLR Workshop and Conference Proceedings, Beijing, China, June 2014 (inproceedings)

Abstract
In applications of graphical models arising in domains such as computer vision and signal processing, we often seek the most likely configurations of high-dimensional, continuous variables. We develop a particle-based max-product algorithm which maintains a diverse set of posterior mode hypotheses, and is robust to initialization. At each iteration, the set of hypotheses at each node is augmented via stochastic proposals, and then reduced via an efficient selection algorithm. The integer program underlying our optimization-based particle selection minimizes errors in subsequent max-product message updates. This objective automatically encourages diversity in the maintained hypotheses, without requiring tuning of application-specific distances among hypotheses. By avoiding the stochastic resampling steps underlying particle sum-product algorithms, we also avoid common degeneracies where particles collapse onto a single hypothesis. Our approach significantly outperforms previous particle-based algorithms in experiments focusing on the estimation of human pose from single images.

pdf SupMat link (url) Project Page Project Page [BibTex]



Calibrating and Centering Quasi-Central Catadioptric Cameras

Schoenbein, M., Strauss, T., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 4443-4450, Hong Kong, China, June 2014 (conference)

Abstract
Non-central catadioptric models are able to cope with irregular camera setups and inaccuracies in the manufacturing process but are computationally demanding and thus not suitable for robotic applications. On the other hand, calibrating a quasi-central (almost central) system with a central model introduces errors due to a wrong relationship between the viewing ray orientations and the pixels on the image sensor. In this paper, we propose a central approximation to quasi-central catadioptric camera systems that is both accurate and efficient. We observe that the distance to points in 3D is typically large compared to deviations from the single viewpoint. Thus, we first calibrate the system using a state-of-the-art non-central camera model. Next, we show that by remapping the observations we are able to match the orientation of the viewing rays of a much simpler single viewpoint model with the true ray orientations. While our approximation is general and applicable to all quasi-central camera systems, we focus on one of the most common cases in practice: hypercatadioptric cameras. We compare our model to a variety of baselines in synthetic and real localization and motion estimation experiments. We show that by using the proposed model we are able to achieve near non-central accuracy while obtaining speed-ups of more than three orders of magnitude compared to state-of-the-art non-central models.

pdf DOI [BibTex]



3D Traffic Scene Understanding from Movable Platforms

Geiger, A., Lauer, M., Wojek, C., Stiller, C., Urtasun, R.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(5):1012-1025, IEEE, Los Alamitos, CA, May 2014 (article)

Abstract
In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow and occupancy grids. For each of these cues we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.

pdf link (url) [BibTex]



Modeling the Human Body in 3D: Data Registration and Human Shape Representation

Tsoli, A.

Brown University, Department of Computer Science, May 2014 (phdthesis)

pdf [BibTex]



Model Transport: Towards Scalable Transfer Learning on Manifolds - Supplemental Material

Freifeld, O., Hauberg, S., Black, M. J.

(9), April 2014 (techreport)

Abstract
This technical report is complementary to "Model Transport: Towards Scalable Transfer Learning on Manifolds" and contains proofs, an explanation of the attached video (visualization of bases from the body shape experiments), and high-resolution images of select results of individual reconstructions from the shape experiments. It is identical to the supplemental material submitted to the Conference on Computer Vision and Pattern Recognition (CVPR 2014) in November 2013.

PDF [BibTex]


Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics

Hennig, P., Hauberg, S.

In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 33, pages: 347-355, JMLR: Workshop and Conference Proceedings, (Editors: S Kaski and J Corander), Microtome Publishing, Brookline, MA, April 2014 (inproceedings)

Abstract
We study a probabilistic numerical method for the solution of both boundary and initial value problems that returns a joint Gaussian process posterior over the solution. Such methods have concrete value in the statistics on Riemannian manifolds, where non-analytic ordinary differential equations are involved in virtually all computations. The probabilistic formulation permits marginalising the uncertainty of the numerical solution such that statistics are less sensitive to inaccuracies. This leads to new Riemannian algorithms for mean value computations and principal geodesic analysis. Marginalisation also means results can be less precise than point estimates, enabling a noticeable speed-up over the state of the art. Our approach is an argument for a wider point that uncertainty caused by numerical calculations should be tracked throughout the pipeline of machine learning algorithms.

pdf Youtube Supplements Project page link (url) [BibTex]



Multi-View Priors for Learning Detectors from Sparse Viewpoint Data

Pepik, B., Stark, M., Gehler, P., Schiele, B.

International Conference on Learning Representations, April 2014 (conference)

Abstract
While the majority of today's object class models provide only 2D bounding boxes, far richer output hypotheses are desirable including viewpoint, fine-grained category, and 3D geometry estimate. However, models trained to provide richer output require larger amounts of training data, preferably well covering the relevant aspects such as viewpoint and fine-grained categories. In this paper, we address this issue from the perspective of transfer learning, and design an object class model that explicitly leverages correlations between visual features. Specifically, our model represents prior distributions over permissible multi-view detectors in a parametric way -- the priors are learned once from training data of a source object class, and can later be used to facilitate the learning of a detector for a target class. As we show in our experiments, this transfer is not only beneficial for detectors based on basic-level category representations, but also enables the robust learning of detectors that represent classes at finer levels of granularity, where training data is typically even scarcer and more unbalanced. As a result, we report largely improved performance in simultaneous 2D object localization and viewpoint estimation on a recent dataset of challenging street scenes.

reviews pdf Project Page [BibTex]



RoCKIn@Work in a Nutshell

Ahmad, A., Amigoni, F., Awaad, I., Berghofer, J., Bischoff, R., Bonarini, A., Dwiputra, R., Fontana, G., Hegger, F., Hochgeschwender, N., Iocchi, L., Kraetzschmar, G., Lima, P., Matteucci, M., Nardi, D., Schiaffonati, V., Schneider, S.

(FP7-ICT-601012 Revision 1.2), RoCKIn - Robot Competitions Kick Innovation in Cognitive Systems and Robotics, March 2014 (techreport)

Abstract
The main purpose of RoCKIn@Work is to foster innovation in industrial service robotics. Innovative robot applications for industry call for the capability to work interactively with humans and reduced initial programming requirements. This will open new opportunities to automate challenging manufacturing processes, even for small to medium-sized lots and highly customer-specific production requirements. Thereby, the RoCKIn competitions pave the way for technology transfer and contribute to the continued commercial competitiveness of European industry.

[BibTex]



RoCKIn@Home in a Nutshell

Ahmad, A., Amigoni, F., Awaad, I., Berghofer, J., Bischoff, R., Bonarini, A., Dwiputra, R., Fontana, G., Hegger, F., Hochgeschwender, N., Iocchi, L., Kraetzschmar, G., Lima, P., Matteucci, M., Nardi, D., Schneider, S.

(FP7-ICT-601012 Revision 0.8), RoCKIn - Robot Competitions Kick Innovation in Cognitive Systems and Robotics, March 2014 (techreport)

Abstract
RoCKIn@Home is a competition that aims at bringing together the benefits of scientific benchmarking with the attraction of scientific competitions in the realm of domestic service robotics. The objectives are to bolster research in service robotics for home applications and to raise public awareness of the current and future capabilities of such robot systems to meet societal challenges like healthy ageing and longer independent living.

[BibTex]



NRSfM using Local Rigidity

Rehan, A., Zaheer, A., Akhter, I., Saeed, A., Mahmood, B., Usmani, M., Khan, S.

In Proceedings Winter Conference on Applications of Computer Vision, pages: 69-74, IEEE, Steamboat Springs, CO, USA, March 2014 (inproceedings)

Abstract
Factorization methods for computation of nonrigid structure have limited practicality, and work well only when there is large enough camera motion between frames, with long sequences and limited or no occlusions. We show that typical nonrigid structure can often be approximated well as locally rigid sub-structures in time and space. Specifically, we assume that: 1) the structure can be approximated as rigid in a short local time window; and 2) some point pairs stay relatively rigid in space, maintaining a fixed distance between them during the sequence. We first use the triangulation constraints in rigid SFM over a sliding time window to get an initial estimate of the nonrigid 3D structure. We then automatically identify relatively rigid point pairs in this structure, and use their length-constancy simultaneously with triangulation constraints to refine the structure estimate. Unlike factorization methods, the structure is estimated independent of the camera motion computation, adding to the simplicity and stability of the approach. Further, local factorization inherently handles significant natural occlusions gracefully, performing much better than the state of the art. We show more stable and accurate results as compared to the state of the art on even short sequences starting from 15 frames only, containing camera rotations as small as 2 degrees and up to 50% missing data.

link (url) [BibTex]



Adaptive Offset Correction for Intracortical Brain Computer Interfaces

Homer, M. L., Perge, J. A., Black, M. J., Harrison, M. T., Cash, S. S., Hochberg, L. R.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, 22(2):239-248, March 2014 (article)

Abstract
Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.

pdf DOI Project Page [BibTex]



Model-based Anthropometry: Predicting Measurements from 3D Human Scans in Multiple Poses

Tsoli, A., Loper, M., Black, M. J.

In Proceedings Winter Conference on Applications of Computer Vision, pages: 83-90, IEEE, March 2014 (inproceedings)

Abstract
Extracting anthropometric or tailoring measurements from 3D human body scans is important for applications such as virtual try-on, custom clothing, and online sizing. Existing commercial solutions identify anatomical landmarks on high-resolution 3D scans and then compute distances or circumferences on the scan. Landmark detection is sensitive to acquisition noise (e.g. holes) and these methods require subjects to adopt a specific pose. In contrast, we propose a solution we call model-based anthropometry. We fit a deformable 3D body model to scan data in one or more poses; this model-based fitting is robust to scan noise. This brings the scan into registration with a database of registered body scans. Then, we extract features from the registered model (rather than from the scan); these include limb lengths, circumferences, and statistical features of global shape. Finally, we learn a mapping from these features to measurements using regularized linear regression. We perform an extensive evaluation using the CAESAR dataset and demonstrate that the accuracy of our method outperforms state-of-the-art methods.

pdf DOI Project Page Project Page [BibTex]



A physically-based approach to reflection separation: from physical modeling to constrained optimization

Kong, N., Tai, Y., Shin, J. S.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(2):209-221, IEEE Computer Society, February 2014 (article)

Abstract
We propose a physically-based approach to separate reflection using multiple polarized images with a background scene captured behind glass. The input consists of three polarized images, each captured from the same view point but with a different polarizer angle separated by 45 degrees. The output is the high-quality separation of the reflection and background layers from each of the input images. A main technical challenge for this problem is that the mixing coefficient for the reflection and background layers depends on the angle of incidence and the orientation of the plane of incidence, which are spatially varying over the pixels of an image. Exploiting physical properties of polarization for a double-surfaced glass medium, we propose a multiscale scheme which automatically finds the optimal separation of the reflection and background layers. Through experiments, we demonstrate that our approach can generate superior results to those of previous methods.

Publisher site [BibTex]



Simpler, faster, more accurate melanocytic lesion segmentation through MEDS

Peruch, F., Bogo, F., Bonazza, M., Cappelleri, V., Peserico, E.

IEEE Transactions on Biomedical Engineering, 61(2):557-565, February 2014 (article)

DOI [BibTex]



Learning People Detectors for Tracking in Crowded Scenes

Tang, S., Andriluka, M., Milan, A., Schindler, K., Roth, S., Schiele, B.

Scene Understanding Workshop (SUNw, CVPR workshop), 2014 (unpublished)

[BibTex]



Evaluation of feature-based 3-d registration of probabilistic volumetric scenes

Restrepo, M. I., Ulusoy, A. O., Mundy, J. L.

In ISPRS Journal of Photogrammetry and Remote Sensing, 98(0):1-18, 2014 (inproceedings)

Abstract
Automatic estimation of the world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using the Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.

Publisher site link (url) DOI [BibTex]



A freely-moving monkey treadmill model

Foster, J., Nuyujukian, P., Freifeld, O., Gao, H., Walker, R., Ryu, S., Meng, T., Murmann, B., Black, M., Shenoy, K.

J. of Neural Engineering, 11(4):046020, 2014 (article)

Abstract
Objective: Motor neuroscience and brain-machine interface (BMI) design is based on examining how the brain controls voluntary movement, typically by recording neural activity and behavior from animal models. Recording technologies used with these animal models have traditionally limited the range of behaviors that can be studied, and thus the generality of science and engineering research. We aim to design a freely-moving animal model using neural and behavioral recording technologies that do not constrain movement. Approach: We have established a freely-moving rhesus monkey model employing technology that transmits neural activity from an intracortical array using a head-mounted device and records behavior through computer vision using markerless motion capture. We demonstrate the flexibility and utility of this new monkey model, including the first recordings from motor cortex while rhesus monkeys walk quadrupedally on a treadmill. Main results: Using this monkey model, we show that multi-unit threshold-crossing neural activity encodes the phase of walking and that the average firing rate of the threshold crossings covaries with the speed of individual steps. On a population level, we find that neural state-space trajectories of walking at different speeds have similar rotational dynamics in some dimensions that evolve at the step rate of walking, yet robustly separate by speed in other state-space dimensions. Significance: Freely-moving animal models may allow neuroscientists to examine a wider range of behaviors and can provide a flexible experimental paradigm for examining the neural mechanisms that underlie movement generation across behaviors and environments. For BMIs, freely-moving animal models have the potential to aid prosthetic design by examining how neural encoding changes with posture, environment, and other real-world context changes. Understanding this new realm of behavior in more naturalistic settings is essential for overall progress of basic motor neuroscience and for the successful translation of BMIs to people with paralysis.

pdf Supplementary DOI Project Page [BibTex]



Human Pose Estimation from Video and Inertial Sensors

Pons-Moll, G.

Ph.D. Thesis, -, 2014 (book)

Abstract
The analysis and understanding of human movement is central to many applications such as sports science, medical diagnosis and movie production. The ability to automatically monitor human activity in security-sensitive areas such as airports, lobbies or borders is of great practical importance. Furthermore, automatic pose estimation from images leverages the processing and understanding of massive digital libraries available on the Internet. We build upon a model-based approach where the human shape is modelled with a surface mesh and the motion is parametrized by a kinematic chain. We then seek the pose of the model that best explains the available observations coming from different sensors. In a first scenario, we consider a calibrated multi-view setup in an indoor studio. To obtain very accurate results, we propose a novel tracker that combines information coming from video and a small set of Inertial Measurement Units (IMUs). We do so by locally optimizing a joint energy consisting of a term that measures the likelihood of the video data and a term for the IMU data. This is the first work to successfully combine video and IMU information for full-body pose estimation. When compared to commercial marker-based systems the proposed solution is more cost efficient and less intrusive for the user. In a second scenario, we relax the assumption of an indoor studio and we tackle outdoor scenes with background clutter, illumination changes, large recording volumes and difficult motions of people interacting with objects. Again, we combine information from video and IMUs. Here we employ a particle-based optimization approach that allows us to be more robust to tracking failures. To satisfy the orientation constraints imposed by the IMUs, we derive an analytic Inverse Kinematics (IK) procedure to sample from the manifold of valid poses. The generated hypotheses come from a lower-dimensional manifold and therefore the computational cost can be reduced. Experiments on challenging sequences suggest the proposed tracker can be applied to motion capture in outdoor scenarios. Furthermore, the proposed IK sampling procedure can be used to integrate any kind of constraints derived from the environment. Finally, we consider the most challenging possible scenario: pose estimation from monocular images. Here, we argue that estimating the pose to the degree of accuracy as in an engineered environment is too ambitious with the current technology. Therefore, we propose to extract meaningful semantic information about the pose directly from image features in a discriminative fashion. In particular, we introduce posebits, which are semantic pose descriptors about the geometric relationships between parts in the body. The experiments show that the intermediate step of inferring posebits from images can improve pose estimation from monocular imagery. Furthermore, posebits can be very useful as input features for many computer vision algorithms.

pdf [BibTex]


Left Ventricle Segmentation by Dynamic Shape Constrained Random Walk

X. Yang, Y. Su, M. Wan, S. Y. Yeo, C. Lim, S. T. Wong, L. Zhong, R. S. Tan

In Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014 (inproceedings)

Abstract
Accurate and robust extraction of the left ventricle (LV) cavity is a key step for quantitative analysis of cardiac function. In this study, we propose an improved LV cavity segmentation method that incorporates a dynamic shape constraint into the weighting function of the random walks algorithm. The method involves an iterative process that updates an intermediate result toward the desired solution. The shape constraint restricts the solution space of the segmentation result, increasing the robustness of the algorithm to misleading information that emanates from noise, weak boundaries, and clutter. Our experiments on real cardiac magnetic resonance images demonstrate that the proposed method obtains better segmentation performance than the standard method.

[BibTex]

Detection and Tracking of Occluded People

Tang, S., Andriluka, M., Schiele, B.

International Journal of Computer Vision, 110, pages: 58-69, 2014 (article)

PDF [BibTex]

Segmentation of Biomedical Images Using Active Contour Model with Robust Image Feature and Shape Prior

S. Y. Yeo, X. Xie, I. Sazonov, P. Nithiarasu

International Journal for Numerical Methods in Biomedical Engineering, 30(2):232-248, 2014 (article)

Abstract
In this article, a new level set model is proposed for the segmentation of biomedical images. The image energy of the proposed model is derived from a robust image gradient feature, which gives the active contour a global representation of the geometric configuration, making it more robust to image noise, weak edges, and initial configurations. Statistical shape information is incorporated using a nonparametric shape density distribution, which allows the shape model to handle relatively large shape variations. The segmentation of various shapes from both synthetic and real images demonstrates the robustness and efficiency of the proposed method.

[BibTex]

Simulated Annealing

Gall, J.

In Encyclopedia of Computer Vision, pages: 737-741, (Editors: Ikeuchi, K.), Springer Verlag, 2014, to appear (inbook)

[BibTex]

A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles behind Them

Sun, D., Roth, S., Black, M. J.

International Journal of Computer Vision (IJCV), 106(2):115-137, 2014 (article)

Abstract
The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that "classical" flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. One key implementation detail is the median filtering of intermediate flow fields during optimization. While this improves the robustness of classical methods it actually leads to higher energy solutions, meaning that these methods are not optimizing the original objective function. To understand the principles behind this phenomenon, we derive a new objective function that formalizes the median filtering heuristic. This objective function includes a non-local smoothness term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that can better preserve motion details. To take advantage of the trend towards video in wide-screen format, we further introduce an asymmetric pyramid downsampling scheme that enables the estimation of longer range horizontal motions. The methods are evaluated on the Middlebury, MPI Sintel, and KITTI datasets using the same parameter settings.

pdf full text code [BibTex]


Automatic 4D Reconstruction of Patient-Specific Cardiac Mesh with 1-to-1 Vertex Correspondence from Segmented Contour Lines

C. W. Lim, Y. Su, S. Y. Yeo, G. M. Ng, V. T. Nguyen, L. Zhong, R. S. Tan, K. K. Poh, P. Chai

PLOS ONE, 9(4), 2014 (article)

Abstract
We propose an automatic algorithm for the reconstruction of patient-specific cardiac mesh models with 1-to-1 vertex correspondence. In this framework, a series of 3D meshes depicting the endocardial surface of the heart at each time step is constructed, based on a set of border delineated magnetic resonance imaging (MRI) data of the whole cardiac cycle. The key contribution in this work involves a novel reconstruction technique to generate a 4D (i.e., spatial–temporal) model of the heart with 1-to-1 vertex mapping throughout the time frames. The reconstructed 3D model from the first time step is used as a base template model and then deformed to fit the segmented contours from the subsequent time steps. A method to determine a tree-based connectivity relationship is proposed to ensure robust mapping during mesh deformation. The novel feature is the ability to handle intra- and inter-frame 2D topology changes of the contours, which manifests as a series of merging and splitting of contours when the images are viewed either in a spatial or temporal sequence. Our algorithm has been tested on five acquisitions of cardiac MRI and can successfully reconstruct the full 4D heart model in around 30 minutes per subject. The generated 4D heart model conforms very well with the input segmented contours and the mesh element shape is of reasonably good quality. The work is important in the support of downstream computational simulation activities.

[BibTex]


2007


A Database and Evaluation Methodology for Optical Flow

Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.

In Int. Conf. on Computer Vision, ICCV, pages: 1-8, Rio de Janeiro, Brazil, October 2007 (inproceedings)

pdf [BibTex]

Shining a light on human pose: On shadows, shading and the estimation of pose and shape

Balan, A., Black, M. J., Haussecker, H., Sigal, L.

In Int. Conf. on Computer Vision, ICCV, pages: 1-8, Rio de Janeiro, Brazil, October 2007 (inproceedings)

pdf YouTube [BibTex]

Ensemble spiking activity as a source of cortical control signals in individuals with tetraplegia

Simeral, J. D., Kim, S. P., Black, M. J., Donoghue, J. P., Hochberg, L. R.

Biomedical Engineering Society, BMES, September 2007 (conference)

[BibTex]

Detailed human shape and pose from images

Balan, A., Sigal, L., Black, M. J., Davis, J., Haussecker, H.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages: 1-8, Minneapolis, June 2007 (inproceedings)

pdf YouTube [BibTex]

Learning static Gestalt laws through dynamic experience

Ostrovsky, Y., Wulff, J., Sinha, P.

Journal of Vision, 7(9):315-315, ARVO, June 2007 (article)

Abstract
The Gestalt laws (Wertheimer 1923) are widely regarded as the rules that help us parse the world into objects. However, it is unclear as to how these laws are acquired by an infant's visual system. Classically, these “laws” have been presumed to be innate (Kellman and Spelke 1983). But, more recent work in infant development, showing the protracted time-course over which these grouping principles emerge (e.g., Johnson and Aslin 1995; Craton 1996), suggests that visual experience might play a role in their genesis. Specifically, our studies of patients with late-onset vision (Project Prakash; VSS 2006) and evidence from infant development both point to an early role of common motion cues for object grouping. Here we explore the possibility that the privileged status of motion in the developmental timeline is not happenstance, but rather serves to bootstrap the learning of static Gestalt cues. Our approach involves computational analyses of real-world motion sequences to investigate whether primitive optic flow information is correlated with static figural cues that could eventually come to serve as proxies for grouping in the form of Gestalt principles. We calculated local optic flow maps and then examined how similarity of motion across image patches co-varied with similarity of certain figural properties in static frames. Results indicate that patches with similar motion are much more likely to have similar luminance, color, and orientation as compared to patches with dissimilar motion vectors. This regularity suggests that, in principle, common motion extracted from dynamic visual experience can provide enough information to bootstrap region grouping based on luminance and color and contour continuation mechanisms in static scenes. These observations, coupled with the cited experimental studies, lend credence to the hypothesis that static Gestalt laws might be learned through a bootstrapping process based on early dynamic experience.

link (url) DOI [BibTex]

Decoding grasp aperture from motor-cortical population activity

Artemiadis, P., Shakhnarovich, G., Vargas-Irwin, C., Donoghue, J. P., Black, M. J.

In The 3rd International IEEE EMBS Conference on Neural Engineering, pages: 518-521, May 2007 (inproceedings)

pdf [BibTex]

Multi-state decoding of point-and-click control signals from motor cortical activity in a human with tetraplegia

Kim, S., Simeral, J., Hochberg, L., Donoghue, J. P., Friehs, G., Black, M. J.

In The 3rd International IEEE EMBS Conference on Neural Engineering, pages: 486-489, May 2007 (inproceedings)

Abstract
Basic neural-prosthetic control of a computer cursor has been recently demonstrated by Hochberg et al. [1] using the BrainGate system (Cyberkinetics Neurotechnology Systems, Inc.). While these results demonstrate the feasibility of intracortically-driven prostheses for humans with paralysis, a practical cursor-based computer interface requires more precise cursor control and the ability to "click" on areas of interest. Here we present a practical point-and-click device that decodes both continuous states (e.g. cursor kinematics) and discrete states (e.g. click state) from a single neural population in human motor cortex. We describe a probabilistic multi-state decoder and the necessary training paradigms that enable point-and-click cursor control by a human with tetraplegia using an implanted microelectrode array. We present results from multiple recording sessions and quantify the point-and-click performance.

pdf [BibTex]

Neuromotor prosthesis development

Donoghue, J., Hochberg, L., Nurmikko, A., Black, M., Simeral, J., Friehs, G.

Medicine & Health Rhode Island, 90(1):12-15, January 2007 (article)

Abstract
Article describes a neuromotor prosthesis (NMP), in development at Brown University, that records human brain signals, decodes them, and transforms them into movement commands. An NMP is described as a system consisting of a neural interface, a decoding system, and a user interface, also called an effector; a closed-loop system would be completed by a feedback signal from the effector to the brain. The interface is based on neural spiking, a source of information-rich, rapid, complex control signals from the nervous system. The NMP described, named BrainGate, consists of a match-head sized platform with 100 thread-thin electrodes implanted just into the surface of the motor cortex where commands to move the hand emanate. Neural signals are decoded by a rack of computers that displays the resultant output as the motion of a cursor on a computer monitor. While computer cursor motion represents a form of virtual device control, this same command signal could be routed to a device to command motion of paralyzed muscles or the actions of prosthetic limbs. The researchers’ overall goal is the development of a fully implantable, wireless multi-neuron sensor for broad research, neural prosthetic, and human neurodiagnostic applications.

pdf [BibTex]

On the spatial statistics of optical flow

Roth, S., Black, M. J.

International Journal of Computer Vision, 74(1):33-50, 2007 (article)

Abstract
We present an analysis of the spatial and temporal statistics of "natural" optical flow fields and a novel flow algorithm that exploits their spatial statistics. Training flow fields are constructed using range images of natural scenes and 3D camera motions recovered from hand-held and car-mounted video sequences. A detailed analysis of optical flow statistics in natural scenes is presented and machine learning methods are developed to learn a Markov random field model of optical flow. The prior probability of a flow field is formulated as a Field-of-Experts model that captures the spatial statistics in overlapping patches and is trained using contrastive divergence. This new optical flow prior is compared with previous robust priors and is incorporated into a recent, accurate algorithm for dense optical flow computation. Experiments with natural and synthetic sequences illustrate how the learned optical flow prior quantitatively improves flow accuracy and how it captures the rich spatial structure found in natural scene motion.

pdf preprint pdf from publisher [BibTex]

Deterministic Annealing for Multiple-Instance Learning

Gehler, P., Chapelle, O.

In Artificial Intelligence and Statistics (AIStats), 2007 (inproceedings)

pdf [BibTex]

Point-and-click cursor control by a person with tetraplegia using an intracortical neural interface system

Kim, S., Simeral, J. D., Hochberg, L. R., Friehs, G., Donoghue, J. P., Black, M. J.

Program No. 517.2. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]

Assistive technology and robotic control using MI ensemble-based neural interface systems in humans with tetraplegia

Donoghue, J. P., Nurmikko, A., Black, M. J., Hochberg, L.

Journal of Physiology, Special Issue on Brain Computer Interfaces, 579, pages: 603-611, 2007 (article)

Abstract
This review describes the rationale, early stage development, and initial human application of neural interface systems (NISs) for humans with paralysis. NISs are emerging medical devices designed to allow persons with paralysis to operate assistive technologies or to reanimate muscles based upon a command signal that is obtained directly from the brain. Such systems require the development of sensors to detect brain signals, decoders to transform neural activity signals into a useful command, and an interface for the user. We review initial pilot trial results of an NIS that is based on an intracortical microelectrode sensor that derives control signals from the motor cortex. We review recent findings showing, first, that neurons engaged by movement intentions persist in motor cortex years after injury or disease to the motor system, and second, that signals derived from motor cortex can be used by persons with paralysis to operate a range of devices. We suggest that, with further development, this form of NIS holds promise as a useful new neurotechnology for those with limited motor function or communication. We also discuss the additional potential for neural sensors to be used in the diagnosis and management of various neurological conditions and as a new way to learn about human brain function.

pdf preprint pdf from publisher DOI [BibTex]

Probabilistically modeling and decoding neural population activity in motor cortex

Black, M. J., Donoghue, J. P.

In Toward Brain-Computer Interfacing, pages: 147-159, (Editors: Dornhege, G. and del R. Millan, J. and Hinterberger, T. and McFarland, D. and Muller, K.-R.), MIT Press, London, 2007 (incollection)

pdf [BibTex]

Learning Appearances with Low-Rank SVM

Wolf, L., Jhuang, H., Hazan, T.

In Conference on Computer Vision and Pattern Recognition (CVPR), 2007 (inproceedings)

pdf [BibTex]

Neural correlates of grip aperture in primary motor cortex

Vargas-Irwin, C., Shakhnarovich, G., Artemiadis, P., Donoghue, J. P., Black, M. J.

Program No. 517.10. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]

Directional tuning in motor cortex of a person with ALS

Simeral, J. D., Donoghue, J. P., Black, M. J., Friehs, G. M., Brown, R. H., Krivickas, L. S., Hochberg, L. R.

Program No. 517.4. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]

Denoising archival films using a learned Bayesian model

Moldovan, T. M., Roth, S., Black, M. J.

(CS-07-03), Brown University, Department of Computer Science, 2007 (techreport)

pdf [BibTex]

Steerable random fields

(Best Paper Award, INI-Graphics Net, 2008)

Roth, S., Black, M. J.

In Int. Conf. on Computer Vision, ICCV, pages: 1-8, Rio de Janeiro, Brazil, 2007 (inproceedings)

pdf [BibTex]

Toward standardized assessment of pointing devices for brain-computer interfaces

Donoghue, J., Simeral, J., Kim, S., Friehs, G. M., Hochberg, L., Black, M.

Program No. 517.16. 2007 Abstract Viewer and Itinerary Planner, Society for Neuroscience, San Diego, CA, 2007, Online (conference)

[BibTex]

A Biologically Inspired System for Action Recognition

Jhuang, H., Serre, T., Wolf, L., Poggio, T.

In International Conference on Computer Vision (ICCV), 2007 (inproceedings)

code pdf [BibTex]
