Open Access
Multimodal Distributional Semantics
The Journal of artificial intelligence research, 2014-01, Vol.49, p.1-47
2014

Details

Author(s) / Contributors
Title
Multimodal Distributional Semantics
Is part of
  • The Journal of artificial intelligence research, 2014-01, Vol.49, p.1-47
Place / Publisher
San Francisco: AI Access Foundation
Year of publication
2014
Link to full text
Source
ACM Digital Library
Descriptions/Notes
  • Distributional semantic models derive computational representations of word meaning from the patterns of co-occurrence of words in text. Such models have been a success story of computational linguistics, being able to provide reliable estimates of semantic relatedness for the many semantic tasks requiring them. However, distributional models extract meaning information exclusively from text, which is an extremely impoverished basis compared to the rich perceptual sources that ground human semantic knowledge. We address the lack of perceptual grounding of distributional models by exploiting computer vision techniques that automatically identify discrete “visual words” in images, so that the distributional representation of a word can be extended to also encompass its co-occurrence with the visual words of images it is associated with. We propose a flexible architecture to integrate text- and image-based distributional information, and we show in a set of empirical tests that our integrated model is superior to the purely text-based approach, and it provides somewhat complementary semantic information with respect to the latter.
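The abstract describes an architecture that extends text-based co-occurrence vectors with counts of discrete "visual words" extracted from images. The following is a minimal sketch of one such fusion step, assuming weighted concatenation of L2-normalized text and visual channel vectors and cosine similarity as the relatedness measure; the toy counts, the `alpha` weight, and all function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def l2_normalize(v):
    """Scale a vector to unit length (no-op for the zero vector)."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def fuse(text_vec, visual_vec, alpha=0.5):
    """Weighted concatenation of normalized text- and image-based vectors.

    alpha sets the relative weight of the textual channel; the value and
    the fusion scheme are assumptions for illustration only.
    """
    t = alpha * l2_normalize(np.asarray(text_vec, dtype=float))
    v = (1.0 - alpha) * l2_normalize(np.asarray(visual_vec, dtype=float))
    return np.concatenate([t, v])

def cosine(a, b):
    """Cosine similarity, the usual relatedness measure for such models."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy data: text-based co-occurrence counts and "visual word" counts
# for two target words (values are made up for the example).
text_space = {
    "moon": np.array([10.0, 2.0, 0.0]),
    "sun":  np.array([8.0, 1.0, 1.0]),
}
visual_space = {
    "moon": np.array([3.0, 0.0, 7.0, 1.0]),
    "sun":  np.array([2.0, 1.0, 6.0, 0.0]),
}

moon = fuse(text_space["moon"], visual_space["moon"])
sun = fuse(text_space["sun"], visual_space["sun"])
print(f"multimodal relatedness(moon, sun) = {cosine(moon, sun):.3f}")
```

In this scheme each word's multimodal representation simply appends the (weighted) image-derived vector to the text-derived one, so standard similarity measures can be applied unchanged to the combined space.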
Language
English
Identifiers
ISSN: 1076-9757
eISSN: 1943-5037
DOI: 10.1613/jair.4135
Title ID: cdi_proquest_journals_2554099077
