Abstract
Self-supervised monocular depth estimation networks are trained to predict scene depth using nearby frames as a supervision signal during training. However, for many applications, sequence information in the form of video frames is also available at test time. The vast majority of monocular networks do not make use of this extra signal, thus ignoring valuable information that could be used to improve the predicted depth. Those that do either use computationally expensive test-time refinement techniques or off-the-shelf recurrent networks, which only indirectly make use of the geometric information that is inherently available.
We propose ManyDepth, an adaptive approach to dense depth estimation that can make use of sequence information at test time, when it is available. Taking inspiration from multi-view stereo, we propose a deep, end-to-end, cost-volume-based approach that is trained using self-supervision only. We present a novel consistency loss that encourages the network to ignore the cost volume when it is deemed unreliable, e.g. in the case of moving objects, and an augmentation scheme to cope with static cameras. Our detailed experiments on both KITTI and Cityscapes show that we outperform all published self-supervised baselines, including those that use single or multiple frames at test time.
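ManyDepth builds its cost volume from learned encoder features using poses from a pose network; as a rough illustration of the plane-sweep idea it borrows from multi-view stereo, the sketch below warps a single-channel source image into the reference view at a set of hypothesized depths and records a per-pixel L1 matching cost. The function name, the toy camera intrinsics, and the known relative pose are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_cost_volume(ref, src, K, T, depths):
    """Plane-sweep cost volume (toy sketch, not the paper's code):
    for each hypothesized depth, back-project reference pixels,
    reproject them into the source view with relative pose T, and
    record the per-pixel L1 photometric cost."""
    H, W = ref.shape
    K_inv = np.linalg.inv(K)
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    volume = np.full((len(depths), H, W), np.inf)
    for i, d in enumerate(depths):
        cam = (K_inv @ pix) * d                      # back-project to depth d
        cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
        proj = K @ (T @ cam_h)[:3]                   # reproject into source view
        u = np.round(proj[0] / proj[2]).astype(int)  # nearest-neighbour sampling
        v = np.round(proj[1] / proj[2]).astype(int)
        valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        cost = np.full(H * W, np.inf)
        cost[valid] = np.abs(src[v[valid], u[valid]] - ref.reshape(-1)[valid])
        volume[i] = cost.reshape(H, W)
    return volume

# Toy setup: horizontal-gradient reference image, camera translated 2 units
# along x, so a point at depth d shifts by f * 2 / d pixels in the source.
H, W, f = 8, 16, 10.0
K = np.array([[f, 0, W / 2], [0, f, H / 2], [0, 0, 1.0]])
T = np.eye(4)
T[0, 3] = 2.0
ref = np.tile(np.arange(W, dtype=float), (H, 1))
src = np.full((H, W), -100.0)
src[:, 5:] = ref[:, :-5]          # true depth 4 -> 5-pixel shift
vol = build_cost_volume(ref, src, K, T, [2.0, 4.0, 8.0])
print(np.argmin(vol[:, 4, 3]))    # -> 1, i.e. the depth-4 hypothesis wins
```

In ManyDepth itself the per-depth costs are computed from feature differences rather than raw pixels, and the depth bins are set adaptively during training; the paper's consistency loss then teaches the network where this volume is unreliable, e.g. on moving objects.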
Original language | English
---|---
Title of host publication | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publisher | Institute of Electrical and Electronics Engineers
Pages | 1164-1174
Number of pages | 11
ISBN (Electronic) | 978-1-6654-4509-2
ISBN (Print) | 978-1-6654-4510-8
DOIs |
Publication status | Published - 13 Nov 2021
Event | IEEE Conference on Computer Vision and Pattern Recognition 2021, Virtual, 19 Jun 2021 → 25 Jun 2021, http://cvpr2021.thecvf.com/
Publication series

Name |
---|---
ISSN (Print) | 1063-6919
ISSN (Electronic) | 2575-7075
Conference

Conference | IEEE Conference on Computer Vision and Pattern Recognition 2021
---|---
Abbreviated title | CVPR 2021
Period | 19/06/21 → 25/06/21
Internet address | http://cvpr2021.thecvf.com/