Semantic Video CNNs through Representation Warping

In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the standard CamVid and Cityscapes benchmark datasets and show reliable improvements over different baseline networks. Our code and models are available at http://segmentation.is.tue.mpg.de
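
The core idea described in the abstract, warping a previous frame's intermediate feature maps into the current frame using optical flow, can be illustrated with a short sketch. The snippet below is not the authors' NetWarp implementation; it is a minimal example assuming PyTorch, bilinear sampling via `F.grid_sample`, a pixel-displacement flow convention, and a simple fixed-weight fusion in place of the learned combination used in the paper.

```python
# Illustrative sketch only (not the authors' NetWarp code): warp a feature map
# from the previous frame into the current frame using optical flow, then fuse
# it with the current frame's features.
import torch
import torch.nn.functional as F


def warp_features(prev_feat, flow):
    """Warp prev_feat (N, C, H, W) using flow (N, 2, H, W).

    Assumption: flow[:, 0] / flow[:, 1] are horizontal / vertical pixel
    displacements mapping current-frame locations back to the previous frame.
    """
    n, _, h, w = prev_feat.shape
    # Base sampling grid of pixel coordinates for the current frame.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=prev_feat.dtype, device=prev_feat.device),
        torch.arange(w, dtype=prev_feat.dtype, device=prev_feat.device),
        indexing="ij",
    )
    # Displace the grid by the flow and normalize to [-1, 1] for grid_sample.
    x = (xs.unsqueeze(0) + flow[:, 0]) / max(w - 1, 1) * 2 - 1
    y = (ys.unsqueeze(0) + flow[:, 1]) / max(h - 1, 1) * 2 - 1
    grid = torch.stack((x, y), dim=-1)  # (N, H, W, 2), x first as grid_sample expects
    return F.grid_sample(prev_feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


# Example usage: fuse warped previous-frame features with current features.
# A fixed 0.5/0.5 weighting stands in for the learnable combination.
prev_feat = torch.randn(1, 64, 32, 64)
curr_feat = torch.randn(1, 64, 32, 64)
flow = torch.randn(1, 2, 32, 64)
fused = 0.5 * curr_feat + 0.5 * warp_features(prev_feat, flow)
```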

Author(s): Gadde, Raghudeep and Jampani, Varun and Gehler, Peter V.
Book Title: IEEE International Conference on Computer Vision (ICCV)
Year: 2017

Department(s): Perceiving Systems
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

State: Accepted
Attachments: pdf, Supplementary

BibTex

@inproceedings{gadde2017semantic,
  title = {Semantic Video CNNs through Representation Warping},
  author = {Gadde, Raghudeep and Jampani, Varun and Gehler, Peter V.},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  year = {2017}
}