
Details

Author(s) / Contributors
Title
Hierarchical Visual-Textual Knowledge Distillation for Life-Long Correlation Learning
Is part of
  • International journal of computer vision, 2021-04, Vol.129 (4), p.921-941
Place / Publisher
New York: Springer US
Year of publication
2021
Source
Alma/SFX Local Collection
Descriptions / Notes
  • Correlation learning among different types of multimedia data, such as visual and textual content, faces major challenges from two perspectives: cross modal and cross domain. Cross modal refers to the heterogeneous properties of different types of multimedia data, where data from different modalities have inconsistent distributions and representations; this leads to the first challenge, cross-modal similarity measurement. Cross domain refers to the multisource nature of multimedia data drawn from various domains, in which data from new domains arrive continually; this leads to the second challenge, model storage and retraining. Correlation learning therefore requires a cross-modal continual learning approach in which only the data from the new domains are used for training, while the previously learned correlation capabilities are preserved. To address these issues, we introduce the idea of life-long learning into visual-textual cross-modal correlation modeling and propose a visual-textual life-long knowledge distillation (VLKD) approach. In this study, we construct a hierarchical recurrent network that leverages knowledge at both the semantic and attention levels through adaptive network expansion to support cross-modal retrieval in life-long scenarios across various domains. Extensive experiments on multiple cross-modal datasets with different domains verify the effectiveness of the proposed VLKD approach for life-long cross-modal retrieval.
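  • The following is a minimal, illustrative PyTorch-style sketch of the general idea behind the abstract: training on new-domain data with a standard cross-modal ranking loss while a frozen copy of the previously trained encoders acts as a "teacher" whose outputs are distilled into the new model to preserve old-domain correlation knowledge. It is not the paper's VLKD implementation (which distills hierarchically at semantic and attention levels with adaptive network expansion); all names here (Encoder, retrieval_loss, distill_loss, train_step) are hypothetical placeholders.

```python
# Illustrative sketch only; not the authors' VLKD code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a modality-specific feature vector into a shared embedding space."""
    def __init__(self, in_dim, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, emb_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # L2-normalize so dot products are cosine similarities

def retrieval_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional triplet-style ranking loss over matched image/text pairs in a batch."""
    sim = img_emb @ txt_emb.t()                       # pairwise similarity matrix
    pos = sim.diag().unsqueeze(1)                     # similarity of matched pairs
    cost_i2t = (margin + sim - pos).clamp(min=0)      # image-to-text margin violations
    cost_t2i = (margin + sim - pos.t()).clamp(min=0)  # text-to-image margin violations
    mask = torch.eye(sim.size(0), dtype=torch.bool)   # ignore the matched (diagonal) pairs
    return cost_i2t.masked_fill(mask, 0).mean() + cost_t2i.masked_fill(mask, 0).mean()

def distill_loss(new_emb, old_emb):
    """Keep the new model's embeddings close to the frozen teacher's (knowledge preservation)."""
    return F.mse_loss(new_emb, old_emb.detach())

def train_step(img_feat, txt_feat, img_enc, txt_enc, old_img_enc, old_txt_enc, lam=1.0):
    """One training step on new-domain data only: ranking loss + distillation from the frozen teacher."""
    img_emb, txt_emb = img_enc(img_feat), txt_enc(txt_feat)
    with torch.no_grad():  # teacher trained on previous domains stays frozen
        old_img, old_txt = old_img_enc(img_feat), old_txt_enc(txt_feat)
    return retrieval_loss(img_emb, txt_emb) \
        + lam * (distill_loss(img_emb, old_img) + distill_loss(txt_emb, old_txt))
```

  • The key design point the sketch illustrates is that only new-domain samples are fed to the model, and the distillation term substitutes for access to old-domain data, which is what makes the approach continual rather than a full retraining.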
Language
English
Identifiers
ISSN: 0920-5691
eISSN: 1573-1405
DOI: 10.1007/s11263-020-01392-1
Title ID: cdi_crossref_primary_10_1007_s11263_020_01392_1
