
Details

Author(s) / Contributors
Title
Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image
Is part of
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, p.16989-16999
Place / Publisher
IEEE
Year of publication
2022
Full-text link
Source
IEEE Xplore Digital Library
Descriptions/Notes
  • This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map. Existing methods often treat this problem as cross-view image retrieval, and use learned deep features to match the ground-level query image to a partition (e.g., a small patch) of the satellite map. With these methods, the localization accuracy is limited by the partitioning density of the satellite map (often on the order of tens of meters). Departing from the conventional wisdom of image retrieval, this paper presents a novel solution that can achieve highly accurate localization. The key idea is to formulate the task as pose estimation and solve it by neural-network-based optimization. Specifically, we design a two-branch CNN to extract robust features from the ground and satellite images, respectively. To bridge the vast cross-view domain gap, we resort to a Geometry Projection module that projects features from the satellite map to the ground view, based on a relative camera pose. Aiming to minimize the differences between the projected features and the observed features, we employ a differentiable Levenberg-Marquardt (LM) module to search for the optimal camera pose iteratively. The entire pipeline is differentiable and runs end-to-end. Extensive experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method. Notably, starting from a coarse estimate of the camera location within a wide 40 m × 40 m region, our method quickly reduces the lateral location error to within 5 m with 80% likelihood on a new KITTI cross-view dataset.
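The abstract's core idea (a differentiable LM step that refines a camera pose by minimizing feature differences) can be sketched as follows. This is a minimal illustration, not the authors' released code: the project_features() function, the 3-DoF pose parameterization (x, y, yaw), the feature shapes, and the damping value are all assumptions standing in for the paper's Geometry Projection module and two-branch CNN outputs.

```python
# Hedged sketch of one differentiable Levenberg-Marquardt (LM) pose update.
# Assumed: sat_feat (C, H, W) and grd_feat (C, H, W) are feature maps from
# some feature extractor; pose = (dx, dy, yaw) is a 3-DoF camera pose.
import torch
import torch.nn.functional as F

def project_features(sat_feat: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
    # Placeholder projection: a differentiable 2D shift/rotation of the
    # satellite feature map, so the residual depends smoothly on the pose.
    # (The paper uses a geometry-based satellite-to-ground projection instead.)
    c, h, w = sat_feat.shape
    dx, dy, yaw = pose
    cos, sin = torch.cos(yaw), torch.sin(yaw)
    theta = torch.stack([
        torch.stack([cos, -sin, dx]),
        torch.stack([sin,  cos, dy]),
    ]).unsqueeze(0)                                   # (1, 2, 3) affine matrix
    grid = F.affine_grid(theta, (1, c, h, w), align_corners=False)
    return F.grid_sample(sat_feat.unsqueeze(0), grid,
                         align_corners=False).squeeze(0)

def lm_step(pose, sat_feat, grd_feat, damping=1e-2):
    # Residual between projected satellite features and observed ground features.
    def residual(p):
        return (project_features(sat_feat, p) - grd_feat).reshape(-1)

    r = residual(pose)
    # Jacobian of the residual w.r.t. the 3-DoF pose: shape (N, 3).
    J = torch.autograd.functional.jacobian(residual, pose)
    # Damped normal equations: (J^T J + lambda I) * delta = -J^T r
    delta = torch.linalg.solve(J.T @ J + damping * torch.eye(3), -J.T @ r)
    return pose + delta

# Toy usage with random features standing in for CNN outputs.
sat = torch.randn(8, 32, 32)
true_pose = torch.tensor([0.10, -0.05, 0.02])
grd = project_features(sat, true_pose)                # synthetic "observation"
pose = torch.zeros(3)                                 # coarse initial estimate
for _ in range(10):
    pose = lm_step(pose, sat, grd)
print(pose)                                           # should move toward true_pose
```

Because every step (projection, residual, LM solve) is built from differentiable operations, such a loop can be unrolled and trained end-to-end, which is the property the abstract highlights; the specific update shown here is only a generic LM iteration under the stated assumptions.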
