Adapting visual data to a linear articulatory model - Archive ouverte HAL
Conference paper, 2006


Abstract

The goal of this work is to investigate audiovisual-to-articulatory inversion. It is well established that acoustic-to-articulatory inversion is an underdetermined problem. On the other hand, there is strong evidence that human speakers/listeners exploit the multimodality of speech, and more particularly articulatory cues: the view of the visible articulators, i.e. the jaw and lips, improves speech intelligibility. It is thus interesting to add constraints provided by direct visual observation of the speaker's face. Visual data were obtained by stereovision, enabling the 3D recovery of jaw and lip movements. These data were processed to fit the parameters of Maeda's articulatory model. Inversion experiments were then conducted.
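The adaptation step described above can be illustrated with a small hedged sketch. A linear articulatory model (in the spirit of Maeda's model; the matrix `A`, offset `x0`, and dimensions below are illustrative assumptions, not the paper's actual model) expresses articulator coordinates as a linear function of control parameters, `x = A @ p + x0`. Given 3D measurements of the visible articulators (jaw and lip landmarks), the corresponding parameters can be recovered by least squares, while the hidden parameters remain unconstrained and must be resolved by the acoustic inversion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 4 jaw/lip landmarks in 3D, 3 visible parameters.
n_coords, n_params = 12, 3
A = rng.standard_normal((n_coords, n_params))   # assumed linear model basis
x0 = rng.standard_normal(n_coords)              # assumed neutral configuration

# Simulated face measurements from known "ground-truth" visible parameters.
p_true = np.array([0.5, -1.0, 0.2])
x_obs = A @ p_true + x0

# Least-squares inversion of the visible part of the model:
# find p minimizing || A p - (x_obs - x0) ||^2.
p_hat, *_ = np.linalg.lstsq(A, x_obs - x0, rcond=None)
print(np.allclose(p_hat, p_true))
```

Because `A` has more rows than columns, the visible parameters are overdetermined and the fit is unique; the residual of the fit also gives a measure of how well the linear model matches the measured face data.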


Main file: audiovisualinv.pdf (198.35 KB)

Dates and versions

inria-00112223 , version 1 (07-11-2006)

Identifiers

  • HAL Id : inria-00112223 , version 1

Cite

Yves Laprie, Blaise Potard. Adapting visual data to a linear articulatory model. 7th International Seminar on Speech Production - ISSP 2006, Dec 2006, Sao Paulo, Brazil. ⟨inria-00112223⟩
