Speech-driven eyebrow motion synthesis with contextual Markovian models

Abstract: Modeling nonverbal communicative behaviors during speech is important for building a virtual agent able to sustain a natural and lively conversation with humans. We investigate statistical frameworks for learning the correlation between speech prosody and eyebrow motion features. Such methods may be used to automatically synthesize accurate eyebrow movements from synchronized speech.
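
The paper's contextual Markovian models are not reproduced here; as a rough illustration of the general idea behind speech-driven motion synthesis with a Markovian model, the hypothetical Python sketch below decodes a hidden-state sequence from prosody features with a plain Gaussian-emission Viterbi pass and lets each state emit its eyebrow-motion mean. State count, feature dimensions, and all parameter values are invented for the example and are not taken from the paper.

```python
"""Minimal sketch (not the authors' method): hidden states jointly model
prosody and eyebrow-motion features; at synthesis time the most likely
state sequence is decoded from prosody alone and each state emits the
mean of its motion dimensions. All numbers below are illustrative."""
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 5     # hypothetical number of hidden states
D_PROSODY = 2    # e.g. F0 and energy per frame (assumed features)
D_MOTION = 3     # e.g. eyebrow raise / frown / asymmetry (assumed features)

# Stand-in model parameters (in practice these would be learned from data).
start_logprob = np.log(np.full(N_STATES, 1.0 / N_STATES))
trans_logprob = np.log(rng.dirichlet(np.ones(N_STATES) * 5.0, size=N_STATES))
means = rng.normal(size=(N_STATES, D_PROSODY + D_MOTION))        # joint means
variances = rng.uniform(0.5, 1.5, size=(N_STATES, D_PROSODY + D_MOTION))

def diag_gauss_logpdf(x, mean, var):
    """Log-density of a diagonal Gaussian, summed over feature dimensions."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def viterbi(obs_loglik, start_logprob, trans_logprob):
    """Most likely state sequence given per-frame log-likelihoods of shape (T, N)."""
    T, N = obs_loglik.shape
    delta = np.empty((T, N))
    backptr = np.zeros((T, N), dtype=int)
    delta[0] = start_logprob + obs_loglik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + trans_logprob          # (from, to)
        backptr[t] = np.argmax(scores, axis=0)
        delta[t] = scores[backptr[t], np.arange(N)] + obs_loglik[t]
    states = np.empty(T, dtype=int)
    states[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):
        states[t] = backptr[t + 1, states[t + 1]]
    return states

def synthesize_motion(prosody):
    """Map a (T, D_PROSODY) prosody sequence to a (T, D_MOTION) eyebrow trajectory."""
    # Score each frame against the prosody marginal of every joint state.
    loglik = np.stack([
        diag_gauss_logpdf(prosody, means[s, :D_PROSODY], variances[s, :D_PROSODY])
        for s in range(N_STATES)
    ], axis=1)                                                   # (T, N_STATES)
    states = viterbi(loglik, start_logprob, trans_logprob)
    # Each decoded state emits the mean of its motion dimensions.
    return means[states, D_PROSODY:]

if __name__ == "__main__":
    prosody_track = rng.normal(size=(100, D_PROSODY))            # dummy prosody
    motion_track = synthesize_motion(prosody_track)
    print(motion_track.shape)                                    # (100, 3)
```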
Document type: Conference paper
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013), May 2013, Vancouver, Canada. IEEE, pp. 3756-3760. DOI: 10.1109/icassp.2013.6638360

https://hal.archives-ouvertes.fr/hal-01215185
Contributor: Lip6 Publications
Submitted on: Tuesday, October 13, 2015 - 4:37:54 PM
Last modified on: Thursday, November 22, 2018 - 3:04:53 PM

Citation

Yu Ding, Mathieu Radenen, Thierry Artières, Catherine Pelachaud. Speech-driven eyebrow motion synthesis with contextual Markovian models. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013), May 2013, Vancouver, Canada. IEEE, pp. 3756-3760. DOI: 10.1109/icassp.2013.6638360. HAL: hal-01215185.
