
Details

Author(s) / Contributors
  • Karpathy, Andrej
  • Fei-Fei, Li
Title
Deep Visual-Semantic Alignments for Generating Image Descriptions
Is Part Of
  • IEEE transactions on pattern analysis and machine intelligence, 2017-04, Vol.39 (4), p.664-676
Place / Publisher
United States: IEEE
Year of Publication
2017
Link to Full Text
Source
IEEE Xplore
Descriptions/Notes
  • We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks (RNN) over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state-of-the-art results in retrieval experiments on the Flickr8K, Flickr30K, and MSCOCO datasets. We then show that the generated descriptions outperform retrieval baselines both on full images and on a new dataset of region-level annotations. Finally, we conduct a large-scale analysis of our RNN language model on the Visual Genome dataset of 4.1 million captions and highlight the differences between image- and region-level caption statistics.
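The abstract sketches the alignment architecture: CNN features for image regions and bidirectional-RNN features for sentence words are projected into a common multimodal embedding space, and a structured ranking objective pulls matching image-sentence pairs together. As a rough illustration only (not the authors' implementation), the following PyTorch sketch scores an image-sentence pair by letting each word pick its best-matching region; all names, dimensions, and the simplified max-margin loss are assumptions made for this example.

import torch
import torch.nn as nn

class AlignmentSketch(nn.Module):
    """Toy region-word alignment model (illustrative, not the paper's code)."""
    def __init__(self, region_dim=4096, vocab_size=10000, embed_dim=512):
        super().__init__()
        # Project precomputed CNN region features into the joint space.
        self.region_proj = nn.Linear(region_dim, embed_dim)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional RNN over word embeddings; the two directions are
        # summed below so word vectors stay embed_dim-sized.
        self.birnn = nn.RNN(embed_dim, embed_dim,
                            bidirectional=True, batch_first=True)

    def score(self, regions, words):
        # regions: (R, region_dim) CNN features; words: (T,) token ids.
        v = self.region_proj(regions)                           # (R, D)
        h, _ = self.birnn(self.word_embed(words).unsqueeze(0))  # (1, T, 2D)
        d = v.size(1)
        s = h[0, :, :d] + h[0, :, d:]                           # (T, D)
        sims = v @ s.t()                                        # (R, T)
        # Each word aligns with its best region; sum over the sentence.
        return sims.max(dim=0).values.sum()

def ranking_loss(model, region_sets, sentences, margin=1.0):
    """Max-margin objective: true pairs should outscore mismatched ones."""
    B = len(region_sets)
    scores = torch.stack([
        torch.stack([model.score(r, w) for w in sentences])
        for r in region_sets])                                  # (B, B)
    pos = scores.diagonal()
    loss_rows = (margin + scores - pos.unsqueeze(1)).clamp(min=0)
    loss_cols = (margin + scores - pos.unsqueeze(0)).clamp(min=0)
    # Each diagonal entry contributes exactly `margin`; subtract them out.
    return loss_rows.sum() + loss_cols.sum() - 2 * margin * B

# Usage with random stand-in data: two images with 5 and 7 regions,
# paired with sentences of 9 and 6 tokens.
model = AlignmentSketch()
region_sets = [torch.randn(5, 4096), torch.randn(7, 4096)]
sentences = [torch.randint(0, 10000, (9,)), torch.randint(0, 10000, (6,))]
ranking_loss(model, region_sets, sentences).backward()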
