How to Evaluate ASR Output for Named Entity Recognition?
Conference paper, 2015

Abstract

The standard metric for evaluating automatic speech recognition (ASR) systems is the word error rate (WER). WER has proven very useful for stand-alone ASR systems. Nowadays, these systems are often embedded in complex natural language processing pipelines to perform tasks such as speech translation, man-machine dialogue, or information retrieval from speech. This exacerbates the need for the speech processing community to design a new evaluation metric that estimates the quality of automatic transcriptions within their larger applicative context. We introduce ATENE, a new measure for evaluating ASR in the context of named entity recognition, which uses a probabilistic model to estimate the risk that ASR errors induce downstream errors in named entity detection. Our evaluation, on the ETAPE data, shows that ATENE achieves a higher correlation than WER between performance in named entity recognition and in automatic speech transcription.
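The page does not reproduce the paper's ATENE formula, but the baseline it is compared against, WER, is standard: the number of substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. The following minimal Python sketch computes WER via word-level edit distance; the example sentences are invented for illustration and are not from the paper.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

if __name__ == "__main__":
    # One substitution over a four-word reference: WER = 0.25.
    print(wer("the president visited paris", "the president visit paris"))
```

As the abstract argues, WER weights all word errors equally; an error inside a named entity ("paris") and one outside it ("visited") contribute identically, which is precisely the limitation ATENE's risk model addresses.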

Domains

Linguistics
No file deposited

Dates and versions

hal-01251370, version 1 (06-01-2016)

Identifiers

  • HAL Id: hal-01251370, version 1

Cite

Mohamed Ben Jannet, Olivier Galibert, Martine Adda-Decker, Sophie Rosset. How to Evaluate ASR Output for Named Entity Recognition?. 16th Annual Conference of the International Speech Communication Association (Interspeech'15), Sep 2015, Dresden, Germany. ⟨hal-01251370⟩