Multi-person articulated pose tracking is an important yet challenging problem in human behavior understanding. In this talk, following the top-down paradigm, I will introduce an accurate and efficient pose tracker built on pose flows. This approach achieves real-time pose tracking without loss of accuracy. In addition, to better understand human activities in visual content, clothing texture and geometric details also play indispensable roles. However, extrapolating them from a single image is much more difficult than for rigid objects, due to large variations in pose, shape, and clothing. I will present a two-stage pipeline that reconstructs the human body and synthesizes novel views of a person from a single-view image.
Biography: My name is Yuliang Xiu. I am currently a final-year M.Phil. student in the Machine Vision and Intelligence Group at Shanghai Jiao Tong University, advised by Prof. Cewu Lu. Previously, I received my B.Eng. at the Research Center of HCI and VR at Shandong University, advised by Prof. Lu Wang. I spent 5 wonderful months as a research intern in Prof. Hao Li's Vision and Graphics Lab at the Institute for Creative Technologies during the summer of 2018. My research interests are 2D human pose estimation and 3D human body reconstruction.