Details

Author(s) / Contributors
Title
Not All Synonyms Are Created Equal: Incorporating Similarity of Synonyms to Enhance Word Embeddings
Is part of
  • 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1-8
Place / Publisher
IEEE
Year of publication
2020
Source
IEEE Electronic Library Online
Descriptions / Notes
  • Traditional word embedding approaches learn semantic information from the contexts of words in large unlabeled corpora. This ignores the fact that synonymous words often occur in different contexts within a corpus, so synonymy is not well captured in the resulting vectors. Furthermore, existing synonymy-based models incorporate synonyms directly when training word embeddings, but still neglect how similar a word is to each of its synonyms. In this paper, we explore a novel approach that employs the similarity between words and their synonyms to train and enhance word embeddings. To this end, we build two Synonymy Similarity Models (SSMs), named SSM-W and SSM-M, which adopt different strategies for incorporating the similarity between words and their synonyms during training. We evaluate our models on both Chinese and English. The results demonstrate that our models outperform the baselines on seven word similarity datasets. On the analogical reasoning and text classification tasks, our models also surpass all baselines, including a synonymy-based model.
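  • Note: the record gives no implementation details for SSM-W or SSM-M, so the following Python sketch is only a generic illustration of the idea stated in the abstract: weighting the pull between a word's vector and its synonyms' vectors by their current similarity. The toy vocabulary, the synonym lexicon, and the update rule are hypothetical assumptions, not taken from the paper.

```python
import numpy as np

# Toy vocabulary and synonym lexicon (hypothetical, not from the paper).
vocab = ["happy", "glad", "joyful", "sad", "car"]
idx = {w: i for i, w in enumerate(vocab)}
synonyms = {"happy": ["glad", "joyful"], "glad": ["happy"], "joyful": ["happy"]}

rng = np.random.default_rng(0)
dim = 8
E = rng.normal(scale=0.1, size=(len(vocab), dim))  # embedding matrix, one row per word


def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))


def synonym_similarity_step(E, lr=0.05):
    """One update: pull each word toward its synonyms, weighted by the
    current word-synonym cosine similarity, so that synonym pairs the
    model already considers close exert a stronger pull."""
    for w, syns in synonyms.items():
        wi = idx[w]
        for s in syns:
            si = idx[s]
            weight = 0.5 * (1.0 + cosine(E[wi], E[si]))  # map cosine from [-1, 1] to [0, 1]
            E[wi] += lr * weight * (E[si] - E[wi])       # similarity-weighted pull toward the synonym
    return E


# In a full system this step would interleave with a context-based
# objective (e.g. skip-gram); here we run it alone to show the effect.
for _ in range(100):
    E = synonym_similarity_step(E)

print("happy~glad:", round(cosine(E[idx["happy"]], E[idx["glad"]]), 3))
print("happy~sad :", round(cosine(E[idx["happy"]], E[idx["sad"]]), 3))
```

    A real implementation would combine such a term with the usual context-prediction loss; SSM-W and SSM-M presumably differ in how that combination is done, which this record does not specify.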
Language
English
Identifiers
eISSN: 2161-4407
DOI: 10.1109/IJCNN48605.2020.9207311
Title ID: cdi_ieee_primary_9207311
