Conference paper · 2022

Imputing out-of-vocabulary embeddings with LOVE makes language models robust with little cost

Abstract

State-of-the-art NLP systems represent inputs with word embeddings, but these are brittle when faced with Out-of-Vocabulary (OOV) words. To address this issue, we follow the principle of mimick-like models to generate vectors for unseen words by learning the behavior of pre-trained embeddings using only the surface form of words. We present a simple contrastive learning framework, LOVE, which extends the word representation of an existing pre-trained language model (such as BERT) and makes it robust to OOV words with few additional parameters. Extensive evaluations demonstrate that our lightweight model achieves similar or even better performance than prior competitors, both on original datasets and on corrupted variants. Moreover, it can be used in a plug-and-play fashion with FastText and BERT, where it significantly improves their robustness.
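As a rough illustration of the mimick-style contrastive setup the abstract describes, the sketch below trains a small character-level encoder to reproduce pretrained word vectors from spelling alone, so unseen or misspelled words can be given an imputed embedding. All names here (CharEncoder, contrastive_loss, the toy batch and random target vectors) are hypothetical illustrations under assumed design choices, not the authors' actual LOVE implementation.

```python
# Minimal sketch, assuming a GRU character encoder and an InfoNCE-style loss;
# in practice the targets would come from real FastText/BERT embedding lookups.
import torch
import torch.nn as nn
import torch.nn.functional as F

CHARS = "abcdefghijklmnopqrstuvwxyz"
PAD = 0
char2id = {c: i + 1 for i, c in enumerate(CHARS)}

def encode_word(word, max_len=20):
    """Turn a word's surface form into a fixed-length tensor of character ids."""
    ids = [char2id.get(c, PAD) for c in word.lower()[:max_len]]
    ids += [PAD] * (max_len - len(ids))
    return torch.tensor(ids)

class CharEncoder(nn.Module):
    """Maps a character sequence to a dense word vector (here 300-d)."""
    def __init__(self, dim=300, n_chars=len(CHARS) + 1):
        super().__init__()
        self.emb = nn.Embedding(n_chars, 64, padding_idx=PAD)
        self.rnn = nn.GRU(64, dim, batch_first=True)

    def forward(self, char_ids):                    # (B, L)
        h, _ = self.rnn(self.emb(char_ids))         # (B, L, dim)
        return h[:, -1]                             # last state as word vector

def contrastive_loss(pred, target, tau=0.07):
    """InfoNCE: pull each predicted vector toward its pretrained target,
    push it away from the other targets in the batch."""
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = pred @ target.t() / tau                # (B, B) similarity matrix
    labels = torch.arange(len(pred))
    return F.cross_entropy(logits, labels)

# One training step on a toy batch (random placeholder targets):
words = ["language", "model", "robust"]
batch = torch.stack([encode_word(w) for w in words])
pretrained_vec = torch.randn(len(words), 300)       # stand-in for real embeddings
model = CharEncoder()
loss = contrastive_loss(model(batch), pretrained_vec)
loss.backward()

# At inference, any unseen word gets a vector from its spelling alone:
oov_vec = model(encode_word("lanugage").unsqueeze(0))   # e.g. a typo of "language"
```

Because the encoder only ever sees surface forms, the same trained module can be bolted onto a static vocabulary (FastText) or a subword model (BERT) to supply vectors for words the base model handles poorly, which is the plug-and-play use the abstract refers to.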
Main file: Imputing OOV Embeddings with LOVE.pdf (787.54 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03613101, version 2 (19-03-2022)

Identifiers

  • HAL Id: hal-03613101, version 2

Cite

Lihu Chen, Gaël Varoquaux, Fabian Suchanek. Imputing out-of-vocabulary embeddings with LOVE makes language models robust with little cost. ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, May 2022, Dublin, Ireland. ⟨hal-03613101⟩
