RNN Language Model Estimation for Out-of-Vocabulary Words - Archive ouverte HAL
Journal article in Lecture Notes in Artificial Intelligence, 2020

RNN Language Model Estimation for Out-of-Vocabulary Words

Abstract

An important issue for speech recognition systems is Out-of-Vocabulary (OOV) words. These words, often proper nouns or new words, are essential for documents to be transcribed correctly. Thus, they must be integrated into the language model (LM) and the lexicon of the speech recognition system. This article proposes new approaches to OOV proper noun probability estimation using a Recurrent Neural Network Language Model (RNNLM). The proposed approaches are based on the notion of the closest in-vocabulary (IV) words (the list of brothers) to a given OOV proper noun. The probabilities of these words, obtained from the RNNLM, are used to estimate the probabilities of OOV proper nouns. Three methods for retrieving the relevant list of brothers are studied. The main advantages of the proposed approaches are that the RNNLM is not retrained and its architecture is kept intact. Experiments on real text data from the website of the Euronews channel show relative perplexity reductions of about 14% compared to the baseline RNNLM.
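The core idea of estimating an OOV proper noun's probability from its list of brothers can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formula: the function name, the uniform weighting, and the toy probabilities are all assumptions; the only elements taken from the abstract are that the OOV estimate is derived from RNNLM probabilities of closest IV words, without retraining or modifying the RNNLM.

```python
# Hypothetical sketch of OOV probability estimation from a "list of
# brothers" (closest in-vocabulary words). The combination rule below
# (a weighted average, uniform by default) is an illustrative assumption,
# not the paper's exact method.

def estimate_oov_probability(rnnlm_probs, brothers, weights=None):
    """Estimate P(OOV | history) from RNNLM probabilities of brother words.

    rnnlm_probs: dict mapping IV word -> P(word | history) from the RNNLM
    brothers:    list of IV words judged closest to the OOV proper noun
    weights:     optional similarity weights; uniform if None
    """
    if weights is None:
        weights = [1.0 / len(brothers)] * len(brothers)
    # Combine the brothers' RNNLM probabilities; unseen words count as 0.
    return sum(w * rnnlm_probs.get(b, 0.0) for b, w in zip(brothers, weights))

# Toy usage with made-up next-word probabilities for three brother words.
probs = {"Paris": 0.02, "London": 0.015, "Berlin": 0.01}
p_oov = estimate_oov_probability(probs, ["Paris", "London", "Berlin"])
```

Because the estimate only reads probabilities off the existing output layer, the RNNLM itself is left untouched, which matches the abstract's claim that no retraining or architecture change is needed.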
Main file

LTC-2017-LNAI-IllinaFohr.pdf (1.14 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03054936, version 1 (11-12-2020)

Identifiers

Cite

Irina Illina, Dominique Fohr. RNN Language Model Estimation for Out-of-Vocabulary Words. Lecture Notes in Artificial Intelligence, 2020, 12598, ⟨10.1007/978-3-030-66527-2_15⟩. ⟨hal-03054936⟩