
Beyond ASR 1-best: Using word confusion networks in spoken language understanding

Abstract: We are interested in the problem of robust understanding from noisy spontaneous speech input. With the advances in automated speech recognition (ASR), there has been increasing interest in spoken language understanding (SLU). A challenge in large-vocabulary spoken language understanding is robustness to ASR errors. State-of-the-art spoken language understanding relies on the best ASR hypothesis (ASR 1-best). In this paper, we propose methods for a tighter integration of ASR and SLU using word confusion networks (WCNs). WCNs obtained from ASR word graphs (lattices) provide a compact representation of multiple aligned ASR hypotheses along with word confidence scores, without compromising recognition accuracy. We present our work on exploiting WCNs instead of simply using ASR 1-best hypotheses. In this work, we focus on the tasks of named entity detection and extraction and call classification in a spoken dialog system, although the idea is more general and applicable to other spoken language processing tasks. For named entity detection, we have improved the F-measure by 6–10% absolute using both word lattices and WCNs. The processing of WCNs was 25 times faster than that of lattices, which is very important for real-life applications. For call classification, we have shown between 5% and 10% relative reduction in error rate using WCNs compared to ASR 1-best output.
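To make the WCN representation concrete, here is a minimal sketch in Python. It is purely illustrative and not from the paper: the words, alignments, and posterior scores are invented, and a real WCN would be produced from an ASR lattice. A WCN is a linear sequence of slots, where each slot holds competing word hypotheses with confidence (posterior) scores; collapsing each slot to its top-scoring word recovers the ASR 1-best.

```python
# Toy word confusion network: a list of slots, each slot a list of
# (word, posterior) alternatives. Scores within a slot sum to ~1.
# All values here are hypothetical, for illustration only.
wcn = [
    [("i", 0.90), ("hi", 0.10)],
    [("want", 0.60), ("won't", 0.40)],
    [("to", 0.95), ("two", 0.05)],
    [("fly", 0.70), ("fry", 0.30)],
]

def one_best(wcn):
    """Collapse the WCN to the ASR 1-best by taking the top word per slot."""
    return [max(slot, key=lambda hyp: hyp[1])[0] for slot in wcn]

print(one_best(wcn))  # ['i', 'want', 'to', 'fly']
```

Downstream SLU models can instead consume all alternatives in each slot, weighted by their posteriors, rather than committing early to the single best path.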
Document type: Journal articles
Contributor: Bibliothèque Universitaire Déposants Hal-Avignon
Submitted on: Thursday, May 12, 2016 - 2:50:08 PM
Last modification on: Tuesday, January 14, 2020 - 10:38:06 AM





Dilek Hakkani-Tür, Frédéric Béchet, Giuseppe Riccardi, Gokhan Tur. Beyond ASR 1-best: Using word confusion networks in spoken language understanding. Computer Speech and Language, Elsevier, 2005. ⟨10.1016/j.csl.2005.07.005⟩. ⟨hal-01314993⟩
