University of Tübingen, December 2019 (phdthesis)
The motion of the world inherently depends on the world's spatial structure and geometry. Classical optical flow methods therefore model this geometry to solve for the motion. Recent deep learning methods take a completely different approach: they predict optical flow by learning from labelled data. Although deep networks have shown state-of-the-art performance on classification problems in computer vision, they have not been as effective at solving optical flow. The key reason is that deep learning methods do not explicitly model the structure of the world in the network; instead, they expect the network to learn about that structure from data. We hypothesize that it is difficult for a network to learn about motion without any constraint on the structure of the world. Therefore, we explore several approaches to explicitly model the geometry and spatial structure of the world in deep neural networks.
The spatial structure in images can be captured by representing the images at multiple scales. To represent multiple scales of images in deep neural networks, we introduce the Spatial Pyramid Network (SPyNet). Such a network can leverage global information to estimate large motions and local information to estimate small motions. We show that SPyNet significantly improves over previous optical flow networks while also being the smallest and fastest neural network for motion estimation, achieving a 97% reduction in model parameters over previous methods while being more accurate.
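To make the coarse-to-fine idea concrete, here is a minimal sketch in PyTorch of flow estimation over an image pyramid; the helper warp and the per-level module list residual_nets are illustrative assumptions, not taken from the published SPyNet code. At each level, the flow from the coarser level is upsampled and doubled, used to warp the second image toward the first, and a small network predicts only a residual flow update.

    import torch
    import torch.nn.functional as F

    def warp(img, flow):
        """Bilinearly sample img at pixel positions displaced by flow."""
        b, _, h, w = img.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack((xs, ys), dim=0).float().to(img.device)  # (2, H, W) pixel coordinates
        coords = grid.unsqueeze(0) + flow                           # displaced coordinates
        gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                     # normalize to [-1, 1]
        gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
        return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

    def pyramid_flow(img1, img2, residual_nets, levels=5):
        """Coarse-to-fine flow: each pyramid level predicts only a residual update."""
        pyr1, pyr2 = [img1], [img2]
        for _ in range(levels - 1):                                 # build image pyramids, coarsest last
            pyr1.append(F.avg_pool2d(pyr1[-1], 2))
            pyr2.append(F.avg_pool2d(pyr2[-1], 2))
        flow = torch.zeros(img1.shape[0], 2, *pyr1[-1].shape[-2:], device=img1.device)
        for lvl in reversed(range(levels)):
            i1, i2 = pyr1[lvl], pyr2[lvl]
            flow = 2.0 * F.interpolate(flow, size=i1.shape[-2:],    # upsample and rescale coarser flow
                                       mode="bilinear", align_corners=True)
            i2_warped = warp(i2, flow)                              # remove motion already explained
            flow = flow + residual_nets[lvl](torch.cat([i1, i2_warped, flow], dim=1))
        return flow

Because each level only has to account for the small residual motion left after warping, the per-level networks can stay very small, which is what makes the overall model compact.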
The spatial structure of the world extends to people and their motion. Humans have a very well-defined structure, and this information is useful in estimating optical flow for humans. To leverage this information, we create a synthetic dataset for human optical flow using a statistical human body model and motion capture sequences. We use this dataset to train deep networks and see significant improvement in the ability of the networks to estimate human optical flow.
The structure and geometry of the world affect the motion within it. Therefore, learning about the structure of the scene together with the motion can benefit both problems. To facilitate this, we introduce Competitive Collaboration, in which several neural networks are constrained by geometry and jointly learn about the structure and motion in the scene without any labels. We show that jointly learning single-view depth prediction, camera motion, optical flow and motion segmentation using Competitive Collaboration achieves state-of-the-art results among unsupervised approaches.
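Below is a minimal sketch of the geometric constraint that couples the networks, assuming a pinhole camera with known intrinsics K; the function and variable names are illustrative assumptions, not taken from the published Competitive Collaboration code. Predicted depth and camera pose induce a rigid flow that should photometrically reconstruct the reference frame wherever the scene is static, a generic flow network explains the remaining, independently moving pixels, and a soft segmentation mask arbitrates between the two reconstructions.

    import torch
    import torch.nn.functional as F

    def rigid_flow(depth, pose, K, K_inv):
        """Flow induced by camera motion: back-project pixels with depth,
        transform by the relative pose [R|t], re-project, take the displacement."""
        b, _, h, w = depth.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=torch.float32, device=depth.device),
            torch.arange(w, dtype=torch.float32, device=depth.device), indexing="ij")
        pix = torch.stack((xs, ys, torch.ones_like(xs)), dim=0).view(3, -1)  # homogeneous pixels
        rays = K_inv @ pix                                                   # viewing rays
        R, t = pose[:, :3, :3], pose[:, :3, 3:]
        pts = depth.view(b, 1, -1) * rays.unsqueeze(0)                       # 3D points
        proj = K.unsqueeze(0) @ (R @ pts + t)                                # re-projection
        uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
        return (uv - pix[:2].unsqueeze(0)).view(b, 2, h, w)

    def warp(img, flow):
        """Bilinearly sample img at pixel positions displaced by flow."""
        b, _, h, w = img.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=torch.float32, device=img.device),
            torch.arange(w, dtype=torch.float32, device=img.device), indexing="ij")
        coords = torch.stack((xs, ys), dim=0) + flow
        gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
        gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
        return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

    def competitive_collaboration_loss(ref, src, depth, pose, flow, mask, K, K_inv):
        """Static pixels are explained by depth + camera motion, moving pixels by
        the flow network; the soft mask (1 = static) arbitrates between them."""
        err_rigid = (ref - warp(src, rigid_flow(depth, pose, K, K_inv))).abs().mean(1, keepdim=True)
        err_flow = (ref - warp(src, flow)).abs().mean(1, keepdim=True)
        return (mask * err_rigid + (1.0 - mask) * err_flow).mean()

Minimizing such a photometric loss requires no labels: depth, pose, flow, and the mask are all trained from the reconstruction error alone, which is the sense in which the networks compete for pixels while collaborating to explain the images.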
Our findings provide support for our hypothesis that explicit constraints on structure and geometry of the world lead to better methods for motion estimation.
NeuroImage, 202(15):116085, November 2019 (article) , Zhao, M., , , Mohler, B. J., Bartels, A., Bülthoff, I.
IEEE Robotics and Automation Letters, 4(4):4491-4498, IEEE, October 2019 (article) , , , Karlapalem, K., Bülthoff, H. H., ,
Egger, B., Smith, W. A. P., Tewari, A., Wuhrer, S., Zollhoefer, M., Beeler, T., Bernard, F., arXiv preprint arXiv:1909.01815, September 2019 (article) , Kortylewski, A., Romdhani, S., Theobalt, C., Blanz, V., Vetter, T.
Hesse, N., Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019 (article) , , Arens, M., Hofmann, U., Schroeder, S.
Kenny, S., ACM Trans. Appl. Percept., 16(1):2:1-2:18, February 2019 (article) , Honda, C., ,
IEEE Transactions on Visualization and Computer Graphics, 25, pages: 1887-1897, IEEE, 2019 (article) , , , , , Hesse, N., Bülthoff, H. H.,
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 37, pages: 185:1-185:15, ACM, November 2018, Two first authors contributed equally (article) , Kaufmann, M., Aksan, E., , Hilliges, O.,
IEEE Robotics and Automation Letters, 3(4):3193-3200, IEEE, October 2018, Also accepted and presented at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (article) , , , , Bülthoff, H. H., ,
Hu, Y., Parde, C. J., Psychological Science, 29(12):1969-1983, October 2018 (article) , , O’Toole, A. J.
Frontiers in ICT, 5, pages: 1-14, September 2018 (article) , Piryankova, I., Stefanucci, J. K., , de la Rosa, S., , , ,
Borno, M. A., Computer Graphics Forum, 37, pages: 6:1-12, July 2018 (article) , , Delp, S. L., Fiume, E.,
University of Tübingen, April 2018 (phdthesis)
Psychological Medicine, 48(4):642-653, March 2018 (article) , , , , , , Zipfel, S., Karnath, H., Giel, K. E.
PLoS ONE, 13(2), February 2018 (article) , Geuss, M. N., , Giel, K. E., , , ,
arXiv preprint arXiv:1803.05790, 2018 (article) , Sun, H., , Neumann, H.
Keuper, M., IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018 (article) , Andres, B., Brox, T., Schiele, B.
(Featured in Nature’s Research Highlights (Nature, Vol 466, 29 July 2010))
Vargas-Irwin, C. E., Shakhnarovich, G., Yadollahpour, P., Mislow, J., J. of Neuroscience, 30(29):9659-9669, July 2010 (article) , Donoghue, J. P.
Sigal, L., International Journal of Computer Vision, 87(1):1-3, March 2010 (article)
Sigal, L., Balan, A., International Journal of Computer Vision, 87(1):4-27, Springer Netherlands, March 2010 (article)
Nature Communications, 2010 (article) , Garrote, E., Mutch, J., Poggio, T., Steele, A., Serre, T.
Kjellström, H., Computer Vision and Image Understanding, pages: 81-90, 2010 (article) , Kragic, D.
Wu, W., Gao, Y., Bienenstock, E., Donoghue, J. P.,Neural Computation, 18(1):80-118, 2006 (article)
Tarr, M. J., CVGIP: Image Understanding, 60(1):65-73, July 1994 (article)
Tarr, M. J., CVGIP: Image Understanding, 60(1):113-118, July 1994 (article)
Yale University, Department of Computer Science, New Haven, CT, 1992, Research Report YALEU-DCS-RR-923 (phdthesis)