An Articulatory-Based Singing Voice Synthesis Using Tongue and Lips Imaging

Abstract: Ultrasound imaging of the tongue and video of lip movements can be used to investigate specific articulation in speech or singing. In this study, tongue and lip image sequences recorded during singing performance are used to predict vocal tract properties via Line Spectral Frequencies (LSF). We focused our work on the traditional Corsican singing "Cantu in paghjella". A multimodal Deep Autoencoder (DAE) extracts salient descriptors directly from the tongue and lip images. LSF values are then predicted from the most relevant of these features using a multilayer perceptron. A vocal tract model is derived from the predicted LSF, while a glottal flow model is computed from a synchronized electroglottographic recording. Articulatory-based singing voice synthesis is developed using both models. The quality of both the LSF prediction and the resulting singing voice synthesis outperforms the state-of-the-art method.
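To illustrate the final synthesis step described above, LSFs can be converted back to an all-pole LPC vocal tract filter A(z): the LSFs are the interleaved unit-circle root angles of the symmetric polynomial P(z) = A(z) + z^-(p+1)A(z^-1) and the antisymmetric polynomial Q(z) = A(z) - z^-(p+1)A(z^-1). The sketch below is a minimal, standard LSF/LPC conversion for an even-order filter, not the authors' exact implementation:

```python
import numpy as np

def poly2lsf(a):
    """Convert LPC coefficients [1, a1, ..., ap] (even order p) to Line
    Spectral Frequencies: the sorted, interleaved root angles of P(z), Q(z)."""
    a = np.asarray(a, dtype=float)
    # P(z) = A(z) + z^-(p+1) A(z^-1),  Q(z) = A(z) - z^-(p+1) A(z^-1)
    P = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    Q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    angles = []
    for poly in (P, Q):
        # keep one angle per conjugate pair; drop trivial roots at z = +/-1
        angles.extend(w for w in np.angle(np.roots(poly))
                      if 1e-8 < w < np.pi - 1e-8)
    return np.sort(angles)

def lsf2poly(lsf):
    """Rebuild the LPC polynomial A(z) from sorted LSFs (even order).
    Odd-ranked LSFs are roots of P(z), even-ranked ones of Q(z)."""
    P, Q = np.array([1.0]), np.array([1.0])
    for w in lsf[0::2]:                      # unit-circle roots of P(z)
        P = np.convolve(P, [1.0, -2.0 * np.cos(w), 1.0])
    for w in lsf[1::2]:                      # unit-circle roots of Q(z)
        Q = np.convolve(Q, [1.0, -2.0 * np.cos(w), 1.0])
    P = np.convolve(P, [1.0, 1.0])           # trivial root at z = -1
    Q = np.convolve(Q, [1.0, -1.0])          # trivial root at z = +1
    return (0.5 * (P + Q))[:-1]              # A(z) = (P(z) + Q(z)) / 2

# Round trip on a stable all-pole filter (double pole at z = 0.5)
a = np.array([1.0, -1.0, 0.25])
lsf = poly2lsf(a)
print(np.allclose(lsf2poly(lsf), a))  # True
```

In a synthesis pipeline of the kind the abstract describes, a glottal source signal would then be filtered through 1/A(z) (e.g. with `scipy.signal.lfilter([1.0], a, source)`) to produce each voiced frame; the frame length, LSF order, and interpolation scheme here are left unspecified, as the abstract does not give them.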

Contributor: Aurore Jaumard-Hakoun
Submitted on: Wednesday, May 31, 2017 - 10:42:27 AM
Last modification on: Thursday, April 4, 2019 - 1:24:48 AM
Document(s) archived on: Wednesday, September 6, 2017 - 2:32:30 PM





Aurore Jaumard-Hakoun, Kele Xu, Clémence Leboullenger, Pierre Roussel-Ragot, Bruce Denby. An Articulatory-Based Singing Voice Synthesis Using Tongue and Lips Imaging. ISCA Interspeech 2016, Sep 2016, San Francisco, United States. pp.1467 - 1471, ⟨10.21437/Interspeech.2016-385⟩. ⟨hal-01529630⟩


