
Details

Author(s) / Contributors
Title
Show and Tell in the Loop: Cross-Modal Circular Correlation Learning
Is part of
  • IEEE transactions on multimedia, 2019-06, Vol.21 (6), p.1538-1550
Place / Publisher
Piscataway: IEEE
Year of publication
2019
Link to full text
Source
IEEE Electronic Library (IEL)
Descriptions / Notes
  • Multimedia data of various modalities, such as image and text, are huge in quantity but have inconsistent distributions and representations. Much work has been done to break the boundary between image and text in order to measure their correlation. However, existing methods focus either on the transformation into a common subspace or on the unidirectional generation from one modality to the other, which cannot fully exploit their interactions. Notably, bidirectional generation between image and text provides complementary hints that mutually boost cross-modal correlation learning, while correlation learning in turn feeds back comprehensive clues that promote the cross-modal generation process. We are therefore motivated to treat information transmission between image and text as a circular process, which aims to fully understand their latent correlation and further to realize cross-modal generation, producing both realistic images and text descriptions in a unified framework. In this paper, we propose the cross-modal circular correlation learning approach to perform cross-modal correlation learning and generation simultaneously through an efficient circular training procedure. First, we propose the cross-modal circular learning model to perform image-to-text captioning and text-to-image synthesis circularly and to learn a common representation as a round-trip bridge, which realizes efficient interactions that fully exploit latent cross-modal correlations. Second, a unified bidirectional framework is proposed to conduct cross-modal mutual generation; it is trained in an efficient circular process to enhance the generative ability of the common representation, which feeds back circularly to further promote cross-modal correlation learning.
In summary, we simultaneously perform cross-modal retrieval, image-to-text captioning, and text-to-image synthesis in a unified framework with the circular learning process, which has high scalability and generality for universal cognition on cross-modal data. We conduct extensive experiments that not only evaluate correlation performance via cross-modal retrieval but also show the generation effectiveness of both image captioning and synthesis on the MS-COCO dataset.
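The circular process described in the abstract can be illustrated with a deliberately minimal sketch: encode each modality into a common space, generate the other modality from it, then re-encode the generated sample and close the loop back to the source modality. The paper uses deep generative networks trained end to end; here the encoders and decoders are hypothetical linear maps (all matrix names and dimensionalities are illustrative assumptions, not the authors' architecture), so only the round-trip structure of the losses is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IMG, D_TXT, D_COMMON = 8, 6, 4  # toy feature dimensionalities (assumed)

# Hypothetical linear "encoders"/"decoders"; the paper's model replaces
# these with deep captioning and synthesis networks.
W_img_enc = rng.normal(size=(D_COMMON, D_IMG)) * 0.1   # image -> common
W_txt_enc = rng.normal(size=(D_COMMON, D_TXT)) * 0.1   # text  -> common
W_txt_dec = rng.normal(size=(D_TXT, D_COMMON)) * 0.1   # common -> text
W_img_dec = rng.normal(size=(D_IMG, D_COMMON)) * 0.1   # common -> image

def circular_losses(img, txt):
    """One circular pass: image -> text -> image and text -> image -> text.

    Returns (correlation loss aligning the common space,
             cycle-reconstruction loss closing both loops)."""
    z_img = W_img_enc @ img        # image embedded in the common space
    z_txt = W_txt_enc @ txt        # text embedded in the common space
    txt_hat = W_txt_dec @ z_img    # cross-modal generation: image -> text
    img_hat = W_img_dec @ z_txt    # cross-modal generation: text -> image
    # Close the loop: re-encode the generated sample and map it back
    # to the source modality (the "round-trip bridge").
    img_cycle = W_img_dec @ (W_txt_enc @ txt_hat)
    txt_cycle = W_txt_dec @ (W_img_enc @ img_hat)
    corr_loss = np.sum((z_img - z_txt) ** 2)            # align common space
    cycle_loss = (np.sum((img - img_cycle) ** 2)
                  + np.sum((txt - txt_cycle) ** 2))     # round-trip fidelity
    return corr_loss, cycle_loss

img = rng.normal(size=D_IMG)
txt = rng.normal(size=D_TXT)
corr, cyc = circular_losses(img, txt)
print(f"correlation loss: {corr:.3f}, cycle loss: {cyc:.3f}")
```

In the full method both loss terms would be minimized jointly by gradient descent, so that correlation learning and bidirectional generation reinforce each other; the sketch only computes the two objectives for one sample pair.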
Language
English
Identifiers
ISSN: 1520-9210
eISSN: 1941-0077
DOI: 10.1109/TMM.2018.2877885
Title ID: cdi_crossref_primary_10_1109_TMM_2018_2877885
