Visual articulatory feedback for phonetic correction in second language learning
Conference paper, 2010

Abstract

Orofacial clones can display speech articulation in an augmented mode, i.e. show all major speech articulators, including those usually hidden such as the tongue or the velum. In addition, several studies suggest that the visual articulatory feedback provided by electropalatography or ultrasound echography is useful in speech therapy. This paper describes the latest developments in acoustic-to-articulatory inversion, based on statistical models, used to drive orofacial clones from the speech sound alone. It argues that this technology could provide richer feedback than was previously available, and that it would be useful in the domain of Computer-Aided Pronunciation Training.
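The core idea of acoustic-to-articulatory inversion is learning a statistical mapping from acoustic features to articulator positions. The paper's actual models are more elaborate; the following is only a minimal sketch of the principle, using synthetic stand-in data and a plain least-squares linear mapping (all dimensions and variable names are assumptions for illustration, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (assumptions): 200 frames of 12-dim acoustic
# features and 6-dim articulatory targets (e.g., sensor coordinates
# on tongue, lips, and velum).
n_frames, n_ac, n_art = 200, 12, 6
true_map = rng.normal(size=(n_ac, n_art))
X = rng.normal(size=(n_frames, n_ac))                          # acoustic features
Y = X @ true_map + 0.01 * rng.normal(size=(n_frames, n_art))   # articulatory traces

# Fit a linear acoustic-to-articulatory mapping by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Invert" a new acoustic frame to estimated articulator positions,
# which could then drive the display of an orofacial clone.
frame = rng.normal(size=n_ac)
predicted_articulators = frame @ W
print(predicted_articulators.shape)  # (6,)
```

In practice such inversion systems replace the linear map with richer statistical models (e.g., mixture- or HMM-based mappings) trained on parallel acoustic and articulatory recordings, but the train-then-invert structure is the same.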
No file deposited

Dates and versions

hal-00508272, version 1 (02-08-2010)

Identifiers

  • HAL Id: hal-00508272, version 1

Cite

Pierre Badin, Atef Ben Youssef, Gérard Bailly, Frédéric Elisei, Thomas Hueber. Visual articulatory feedback for phonetic correction in second language learning. Interspeech 2010 - 11th Annual Conference of the International Speech Communication Association, Sep 2010, Makuhari, Japan. pp.n.c. ⟨hal-00508272⟩