Dirichlet-Smoothed Word Embeddings for Low-Resource Settings

Abstract

Classical count-based word embeddings built from positive pointwise mutual information (PPMI) weighted co-occurrence matrices have been widely superseded by machine-learning-based methods such as word2vec and GloVe. However, these methods typically require very large amounts of text data, which are often unavailable, for example for specific domains or low-resource languages. This paper revisits PPMI by adding Dirichlet smoothing to correct its bias towards rare words. We evaluate on standard word similarity datasets and compare against word2vec and the recent state of the art for low-resource settings: Positive and Unlabeled (PU) Learning for word embeddings. The proposed method outperforms PU-Learning in low-resource settings and obtains competitive results for Maltese and Luxembourgish.
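
The abstract only names the technique, so the following Python sketch illustrates one way Dirichlet (additive) smoothing can be folded into a PPMI computation: a constant pseudo-count alpha is added to every cell of the raw co-occurrence matrix before probabilities are estimated, which damps the PPMI bias towards rare words, and dense vectors are then obtained via truncated SVD. The function names, the toy matrix, and the value of alpha are illustrative assumptions; the paper's exact smoothing scheme and hyperparameters may differ.

import numpy as np

def dirichlet_smoothed_ppmi(counts, alpha=0.1):
    # counts[i, j]: co-occurrence count of word i with context word j.
    # alpha: Dirichlet pseudo-count added to every cell; alpha = 0
    # recovers plain PPMI, larger alpha damps the rare-word bias.
    smoothed = counts + alpha
    total = smoothed.sum()
    p_wc = smoothed / total                              # joint P(w, c)
    p_w = smoothed.sum(axis=1, keepdims=True) / total    # marginal P(w)
    p_c = smoothed.sum(axis=0, keepdims=True) / total    # marginal P(c)
    pmi = np.log(p_wc / (p_w * p_c))                     # PMI(w, c)
    return np.maximum(pmi, 0.0)                          # keep positive part

def embed(ppmi, dim=2):
    # Dense word vectors via truncated SVD of the PPMI matrix.
    u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    return u[:, :dim] * s[:dim]

# Toy example: 3 words x 4 contexts (hypothetical counts).
C = np.array([[10., 0., 2., 1.],
              [ 0., 5., 1., 0.],
              [ 3., 1., 0., 8.]])
vectors = embed(dirichlet_smoothed_ppmi(C, alpha=0.1), dim=2)

With alpha greater than zero every cell is strictly positive, so the logarithm is always defined and zero-count pairs no longer produce the extreme PMI values that plain PPMI assigns to rare words.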

Authors
  • Jungmaier, Jakob
  • Kassner, Nora
  • Roth, Benjamin
Shortfacts
Category: Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title: Proceedings of the Twelfth Language Resources and Evaluation Conference
Divisions: Data Mining and Machine Learning
Subjects: Artificial Intelligence
Event Location: Virtual
Event Type: Conference
Event Dates: 11 to 16 May 2020
Publisher: European Language Resources Association
Page Range: pp. 3560-3565
Date: May 2020
Official URL: https://aclanthology.org/2020.lrec-1.437/