Since the release of the Kinect, RGB-D cameras have appeared in several consumer devices, including smartphones. In this talk, I will present two challenging uses of this technology. With multiple RGB-D cameras, it is possible to reconstruct a 3D scene and visualize it from any point of view. In the first part of the talk, I will show how such a scene can be streamed and rendered as a point cloud in a compelling way, and how its appearance can be improved using external cinema cameras. In the second part of the talk, I will present my work on using an RGB-D camera to enable real walking in virtual reality by making the user aware of surrounding obstacles. I will present a pipeline that builds an occupancy map from a point cloud on the fly, on a mobile phone used as a virtual reality headset. This occupancy map can then be used to prevent the user from walking into physical obstacles while moving through the virtual scene.
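To give a flavour of the core step described above, here is a minimal sketch of turning a point cloud into a 2D occupancy map. This is an illustration only, not the pipeline from the talk: the function name, the y-up coordinate convention, and all parameters (cell size, floor/ceiling cut-offs, map extent) are assumptions chosen for the example.

```python
import numpy as np

def occupancy_map(points, cell=0.05, floor=0.1, ceil=2.0, extent=5.0):
    """Project a 3D point cloud (N x 3, metres, y-up) onto a 2D
    occupancy grid over the ground plane.

    Illustrative sketch: points below `floor` (ground clutter) or
    above `ceil` (ceiling) are ignored; every remaining point marks
    its (x, z) cell as occupied.
    """
    n = int(round(2 * extent / cell))          # grid resolution
    grid = np.zeros((n, n), dtype=bool)
    # keep only points at obstacle height
    mask = (points[:, 1] > floor) & (points[:, 1] < ceil)
    xz = points[mask][:, [0, 2]]
    # world coordinates -> grid indices, clipped to the map extent
    idx = np.floor((xz + extent) / cell).astype(int)
    idx = idx[(idx >= 0).all(axis=1) & (idx < n).all(axis=1)]
    grid[idx[:, 0], idx[:, 1]] = True
    return grid
```

A navigation layer could then query this grid along the user's walking direction and warn before an occupied cell is reached.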
Biography: Marilyn is currently a research engineer working on 3D vision and its applications to virtual reality. After two years studying signal processing at Grenoble INP Phelma, she obtained her Master of Science at the Karlsruhe Institute of Technology, where she specialized in computer vision and computer graphics. She completed her Master's thesis at BBC R&D on the streaming of scenes captured by multiple RGB-D cameras. Before that, she spent three months as an intern at Inria Grenoble in the Morpheo team, working on the joint calibration of a motion-capture and multi-camera system. She is interested in 3D reconstruction, motion reconstruction and semantic segmentation.