Structured output layer neural network language models for speech recognition
Journal article in IEEE/ACM Transactions on Audio, Speech and Language Processing, 2013

Abstract

This paper extends a novel neural network language model (NNLM) that relies on word clustering to structure the output vocabulary: the Structured OUtput Layer (SOUL) NNLM. This model can handle arbitrarily sized vocabularies, dispensing with the shortlists commonly used in NNLMs. Several softmax layers replace the standard output layer in this model. The output structure depends on a word clustering derived from the continuous word representation learned by the NNLM. Mandarin and Arabic data are used to evaluate the SOUL NNLM accuracy via speech-to-text experiments. Well-tuned speech-to-text systems (with error rates around 10%) serve as the baselines. The SOUL model achieves consistent improvements over a classical shortlist NNLM, both in perplexity and in recognition accuracy, for these two languages, which differ markedly in internal structure and recognition vocabulary size. An enhanced training scheme is also proposed that allows more data to be used at each training iteration of the neural network.
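The core idea of the abstract — replacing the single large output softmax with a class softmax followed by a per-class word softmax, so that P(w | h) = P(class(w) | h) · P(w | class(w), h) — can be illustrated with a minimal sketch. All sizes and parameter names below are toy assumptions for illustration; the paper's actual SOUL model uses a deeper clustering hierarchy combined with a shortlist, and its clusters come from the NNLM's own continuous word representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes (not from the paper): a vocabulary of 12 words
# split into 3 clusters of 4 words each, hidden dimension 8.
V, C, H = 12, 3, 8
words_per_cluster = V // C
cluster_of = np.repeat(np.arange(C), words_per_cluster)  # word id -> cluster id

# Parameters: one softmax over clusters, plus one small softmax per cluster
# over the words it contains, replacing a single V-way output layer.
W_cluster = rng.normal(size=(C, H))
W_word = rng.normal(size=(C, words_per_cluster, H))

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def soul_word_probs(h):
    """P(w | h) = P(cluster(w) | h) * P(w | cluster(w), h)."""
    p_cluster = softmax(W_cluster @ h)            # shape (C,)
    probs = np.empty(V)
    for c in range(C):
        p_in_cluster = softmax(W_word[c] @ h)     # shape (words_per_cluster,)
        probs[cluster_of == c] = p_cluster[c] * p_in_cluster
    return probs

h = rng.normal(size=H)       # stand-in for the NNLM's hidden state
p = soul_word_probs(h)
print(round(p.sum(), 6))     # prints 1.0: a valid distribution over the full vocabulary
```

Because each factor is itself a softmax, the product sums to one over the whole vocabulary, and evaluating one word's probability only touches the cluster softmax and that word's small per-cluster softmax rather than all V outputs — which is why such a layer scales to large vocabularies.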
No file deposited

Dates and versions

hal-01908377 , version 1 (30-10-2018)

Identifiers

  • HAL Id : hal-01908377 , version 1

Cite

Hai Son Le, Ilya Oparin, Alexandre Allauzen, Jean-Luc Gauvain, François Yvon. Structured output layer neural network language models for speech recognition. IEEE/ACM Transactions on Audio, Speech and Language Processing, 2013, 21, pp.197-206. ⟨hal-01908377⟩