
Qualitative Evaluation of Language Model Rescoring in Automatic Speech Recognition

Abstract: Evaluating automatic speech recognition (ASR) systems is a classical but difficult and still open problem, which often boils down to focusing only on the word error rate (WER). However, this metric suffers from many limitations and does not allow an in-depth analysis of automatic transcription errors. In this paper, we propose to study and understand the impact of rescoring using language models in ASR systems by means of several metrics often used in other natural language processing (NLP) tasks in addition to the WER. In particular, we introduce two measures related to morpho-syntactic and semantic aspects of transcribed words: 1) the POSER (Part-of-speech Error Rate), which highlights grammatical aspects, and 2) the EmbER (Embedding Error Rate), a measure that modifies the WER by weighting errors according to the semantic distance of the wrongly transcribed words. These metrics illustrate the linguistic contributions of the language models that are applied during a rescoring step performed a posteriori on transcription hypotheses.
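The idea behind EmbER can be sketched as follows: align the hypothesis with the reference as in standard WER, then charge substitutions a reduced penalty when the wrong word is semantically close to the reference word. The sketch below is illustrative only, not the paper's exact definition: the toy embeddings, the cosine-based penalty `1 - cos`, and all function names are assumptions for the example.

```python
import math

# Toy word embeddings, for illustration only; a real system would use
# pretrained vectors (e.g. word2vec or fastText).
EMB = {
    "cat":  [0.90, 0.10, 0.00],
    "cats": [0.85, 0.15, 0.05],
    "dog":  [0.10, 0.90, 0.00],
    "sat":  [0.00, 0.20, 0.90],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def align(ref, hyp):
    """Levenshtein alignment over words; returns (op, ref_word, hyp_word) tuples."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match / substitution
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1):
            ops.append(("match" if ref[i - 1] == hyp[j - 1] else "sub", ref[i - 1], hyp[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("del", ref[i - 1], None))
            i -= 1
        else:
            ops.append(("ins", None, hyp[j - 1]))
            j -= 1
    return list(reversed(ops))

def wer(ref, hyp):
    """Standard word error rate: every edit counts 1."""
    errors = sum(op != "match" for op, _, _ in align(ref, hyp))
    return errors / len(ref)

def ember_like(ref, hyp):
    """EmbER-style score: a substitution costs 1 - cosine similarity
    between the reference and hypothesis words, so a semantically close
    error (e.g. 'cat' -> 'cats') is penalised less than an unrelated one."""
    total = 0.0
    for op, r, h in align(ref, hyp):
        if op == "sub" and r in EMB and h in EMB:
            total += 1.0 - cosine(EMB[r], EMB[h])
        elif op != "match":
            total += 1.0
    return total / len(ref)

if __name__ == "__main__":
    ref = "the cat sat".split()
    hyp = "the cats sat".split()
    print(f"WER={wer(ref, hyp):.3f}  EmbER-like={ember_like(ref, hyp):.4f}")
```

On this toy pair, both metrics see one substitution out of three reference words, but the EmbER-style score is much smaller than the WER because "cats" is nearly synonymous with "cat" in the toy embedding space, which is the qualitative behaviour the metric is designed to expose.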
Document type :
Conference papers

https://hal.archives-ouvertes.fr/hal-03712735
Contributor : Richard Dufour
Submitted on : Monday, July 4, 2022 - 10:57:12 AM
Last modification on : Friday, August 5, 2022 - 2:54:52 PM

File

Thibault_Roux___InterSpeech_20...
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03712735, version 1

Citation

Thibault Bañeras Roux, Mickael Rouvier, Jane Wottawa, Richard Dufour. Qualitative Evaluation of Language Model Rescoring in Automatic Speech Recognition. Interspeech, Sep 2022, Incheon, South Korea. ⟨hal-03712735⟩
