Details

Author(s) / Contributors
Title
VisualWord2Vec (Vis-W2V): Learning Visually Grounded Word Embeddings Using Abstract Scenes
Is Part Of
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, p.4985-4994
Place / Publisher
IEEE
Year of Publication
2016
Source
IEEE Electronic Library Online
Descriptions/Notes
  • We propose a model to learn visually grounded word embeddings (vis-w2v) to capture visual notions of semantic relatedness. While word embeddings trained using text have been extremely successful, they cannot uncover notions of semantic relatedness implicit in our visual world. For instance, although "eats" and "stares at" seem unrelated in text, they share semantics visually. When people are eating something, they also tend to stare at the food. Grounding diverse relations like "eats" and "stares at" into vision remains challenging, despite recent progress in vision. We note that the visual grounding of words depends on semantics, and not the literal pixels. We thus use abstract scenes created from clipart to provide the visual grounding. We find that the embeddings we learn capture fine-grained, visually grounded notions of semantic relatedness. We show improvements over text-only word embeddings (word2vec) on three tasks: common-sense assertion classification, visual paraphrasing and text-based image retrieval. Our code and datasets are available online.
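    The abstract above describes refining text-only word2vec embeddings so that they also reflect visual co-occurrence. The following is a minimal, illustrative sketch of that general idea, not the authors' released code: word embeddings (assumed to be initialized from word2vec) are fine-tuned so that the words of a phrase predict a discrete "visual" class for the associated abstract scene. The vocabulary size, embedding dimension, number of visual clusters, and toy data are assumptions made up for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    VOCAB_SIZE = 1000          # assumed vocabulary size
    EMBED_DIM = 200            # assumed embedding dimensionality
    NUM_VISUAL_CLUSTERS = 25   # assumed number of clusters over scene features

    class VisW2VSketch(nn.Module):
        def __init__(self, pretrained_embeddings=None):
            super().__init__()
            # Word embeddings, ideally initialized from text-only word2vec vectors.
            self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
            if pretrained_embeddings is not None:
                self.embed.weight.data.copy_(pretrained_embeddings)
            # Linear layer mapping the averaged word representation to visual clusters.
            self.to_cluster = nn.Linear(EMBED_DIM, NUM_VISUAL_CLUSTERS)

        def forward(self, word_ids):
            # CBOW-style: average the embeddings of the words in a phrase,
            # then predict which visual cluster the associated scene belongs to.
            avg = self.embed(word_ids).mean(dim=1)
            return self.to_cluster(avg)

    # Toy usage: a batch of two 4-word phrases with (assumed) visual cluster labels.
    model = VisW2VSketch()
    phrases = torch.randint(0, VOCAB_SIZE, (2, 4))
    clusters = torch.tensor([3, 17])
    loss = F.cross_entropy(model(phrases), clusters)
    loss.backward()  # gradients also reach the word embeddings, "grounding" them visually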
Language
English; Japanese
Identifiers
eISSN: 1063-6919
DOI: 10.1109/CVPR.2016.539
Title ID: cdi_ieee_primary_7780908
