The emergence of multi-view capture systems has yielded a tremendous amount of video sequences. The task of capturing spatio-temporal models from real-world imagery (4D modeling) should arguably benefit from this wealth of visual information. To achieve highly realistic representations, both geometry and appearance need to be modeled with high precision. Yet, despite great progress in geometric modeling, the appearance side has not been fully explored, and visual quality can still be improved.
I will explain how we can optimally exploit the redundant visual information in the captured video sequences to provide a temporally coherent, super-resolved, view-independent appearance representation. I will further discuss how to exploit the interdependency of geometry and appearance as separate modalities to enhance visual perception, and finally how to decompose appearance representations into intrinsic components (shading and albedo) and super-resolve them jointly to allow for more realistic renderings.
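The intrinsic decomposition mentioned above is commonly built on a multiplicative image-formation model, where observed intensity is the pointwise product of albedo (material reflectance) and shading (illumination). The sketch below illustrates only that basic model, not the speaker's actual joint super-resolution method; the function names and the toy data are illustrative assumptions.

```python
import numpy as np

def compose(albedo, shading):
    # Multiplicative image formation: observed intensity = albedo * shading.
    return albedo * shading

def recover_albedo(image, shading):
    # Given a shading estimate, albedo follows by pointwise division.
    # (Real intrinsic decomposition must estimate both factors jointly
    # under priors; this only illustrates the underlying model.)
    return image / np.clip(shading, 1e-6, None)

# Toy example: a flat gray albedo lit by a left-to-right shading gradient.
albedo = np.full((4, 4), 0.5)
shading = np.tile(np.linspace(0.2, 1.0, 4), (4, 1))
image = compose(albedo, shading)

# With the true shading, the albedo is recovered exactly.
recovered = recover_albedo(image, shading)
print(np.allclose(recovered, albedo))  # True
```

The ambiguity is visible even in this toy: any scalar can be moved between the two factors without changing the image, which is why priors (e.g. piecewise-constant albedo, smooth shading) are needed in practice.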
Biography: Dr. Vagia Tsiminaki is a postdoctoral researcher at the Computer Vision and Geometry Group (CVG) of the Institute for Visual Computing of the Department of Computer Science at ETH Zurich headed by Prof. Marc Pollefeys.
Before joining the CVG group, she was part of the Morpheo research team at the Inria Grenoble Rhone-Alpes research center, where she obtained her PhD under the supervision of Edmond Boyer and Jean-Sebastien Franco. Her research interests lie in computer vision, image processing, optimization, and machine learning; she is mainly interested in understanding and modeling the 3D world and its evolution over time using visual cues.