Deep Cross-Modal Hashing
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, p.3270-3278
2017

Details

Author(s) / Contributors
Title
Deep Cross-Modal Hashing
Is part of
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, p.3270-3278
Place / Publisher
IEEE
Year of publication
2017
Source
Alma/SFX Local Collection
Descriptions / Notes
  • Due to its low storage cost and fast query speed, cross-modal hashing (CMH) has been widely used for similarity search in multimedia retrieval applications. However, most existing CMH methods are based on hand-crafted features, which might not be optimally compatible with the hash-code learning procedure. As a result, existing CMH methods with hand-crafted features may not achieve satisfactory performance. In this paper, we propose a novel CMH method, called deep cross-modal hashing (DCMH), by integrating feature learning and hash-code learning into the same framework. DCMH is an end-to-end learning framework with deep neural networks, one for each modality, to perform feature learning from scratch. Experiments on three real datasets with image-text modalities show that DCMH can outperform other baselines and achieve state-of-the-art performance in cross-modal retrieval applications.
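The abstract describes learning one network per modality that maps features into binary hash codes, so that retrieval across modalities reduces to cheap Hamming-distance comparison. A minimal sketch of that retrieval scheme, with fixed random linear projections standing in for the paper's trained deep networks; all dimensions and names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
CODE_LEN = 16  # hash-code length in bits (illustrative choice)

# Stand-ins for the two learned modality networks: fixed random linear
# projections, NOT the trained deep networks proposed in the paper.
W_img = rng.standard_normal((512, CODE_LEN))  # image-feature projection
W_txt = rng.standard_normal((300, CODE_LEN))  # text-feature projection

def hash_image(x):
    """Map an image feature vector to a binary code in {-1, +1}^CODE_LEN."""
    return np.sign(x @ W_img)

def hash_text(t):
    """Map a text feature vector to a binary code in {-1, +1}^CODE_LEN."""
    return np.sign(t @ W_txt)

def hamming(a, b):
    """Hamming distance between two {-1, +1} codes."""
    return int(np.sum(a != b))

# Cross-modal query: given a text, retrieve the nearest image by code.
images = rng.standard_normal((100, 512))   # synthetic image features
query_text = rng.standard_normal(300)      # synthetic text features
img_codes = np.sign(images @ W_img)
q = hash_text(query_text)
best = int(np.argmin([hamming(q, c) for c in img_codes]))
```

In DCMH the two projections would be deep networks trained jointly so that semantically matching image-text pairs receive nearby codes; the Hamming-distance retrieval step shown here is what makes the query fast at search time.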
Language
English
Identifiers
ISSN: 1063-6919
DOI: 10.1109/CVPR.2017.348
Title ID: cdi_ieee_primary_8099831
