
Details

Author(s) / Contributors
Title
Unsupervised Learning of Depth and Ego-Motion from Video
Is part of
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, p.6612-6619
Place / Publisher
IEEE
Year of publication
2017
Source
Alma/SFX Local Collection
Descriptions / Notes
  • We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.
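  • The view-synthesis supervision described above can be illustrated as follows: each target pixel is back-projected with the predicted depth, transformed by the predicted relative camera pose, and re-projected into a nearby source view; the photometric difference between the target frame and the warped source frame is what trains both networks. The short Python sketch below is only an illustration of that projective-warping idea, not the authors' implementation. It assumes grayscale images, a known intrinsics matrix K, and nearest-neighbour sampling in place of the paper's differentiable bilinear sampling; all function names are invented for the sketch.

import numpy as np

def warp_source_to_target(src_img, depth_t, K, T_t_to_s):
    """Synthesize the target view from a source image (illustrative sketch).

    src_img   : (H, W) grayscale source frame I_s
    depth_t   : (H, W) predicted depth of the target frame D_t
    K         : (3, 3) camera intrinsics
    T_t_to_s  : (4, 4) predicted relative pose (target -> source)
    returns   : (H, W) warped image and a validity mask
    """
    H, W = depth_t.shape
    K_inv = np.linalg.inv(K)

    # Homogeneous pixel grid of the target frame, shape (3, H*W).
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    pix_t = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])

    # Back-project into target camera space: D_t(p) * K^{-1} p.
    cam_t = depth_t.ravel() * (K_inv @ pix_t)

    # Rigid transform into the source camera, then project with K.
    cam_t_h = np.vstack([cam_t, np.ones((1, H * W))])
    cam_s = (T_t_to_s @ cam_t_h)[:3]
    pix_s = K @ cam_s
    z = np.clip(pix_s[2], 1e-6, None)  # guard against division by zero
    u = pix_s[0] / z
    v = pix_s[1] / z

    # Nearest-neighbour sampling (the paper uses differentiable bilinear
    # sampling); points projecting outside the source image are masked out.
    ui = np.round(u).astype(int)
    vi = np.round(v).astype(int)
    valid = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H) & (pix_s[2] > 0)
    warped = np.zeros(H * W)
    warped[valid] = src_img[vi[valid], ui[valid]]
    return warped.reshape(H, W), valid.reshape(H, W)

def view_synthesis_loss(tgt_img, src_imgs, depth_t, K, poses):
    """Sum of mean L1 photometric errors against each warped source view."""
    loss = 0.0
    for src_img, T in zip(src_imgs, poses):
        warped, mask = warp_source_to_target(src_img, depth_t, K, T)
        if mask.any():
            loss += np.abs(tgt_img - warped)[mask].mean()
    return loss

  • As the abstract notes, the depth and pose networks are coupled only through this warping-based loss during training; at test time each network can be applied independently.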
Language
English
Identifiers
ISSN: 1063-6919
DOI: 10.1109/CVPR.2017.700
Title ID: cdi_ieee_primary_8100183
