Articulatory Speech Synthesis from Static Context-Aware Articulatory Targets

Abstract: The aim of this work is to develop an algorithm for controlling the articulators (the jaw, the tongue, the lips, the velum, the larynx and the epiglottis) to produce given speech sounds, syllables and phrases. This control has to take coarticulation into account and be flexible enough to allow varying speech production strategies. The data for the algorithm are 97 static MRI images capturing the articulation of French vowels and blocked consonant-vowel syllables. The results of the synthesis are evaluated visually, acoustically and perceptually, and the problems encountered are broken down by their origin: the dataset, its modeling, the algorithm for managing the vocal tract shapes, their translation into area functions, and the acoustic simulation.
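
To illustrate the kind of computation involved in the last two stages mentioned above (area functions and acoustic simulation), the sketch below evaluates the volume-velocity transfer function of a vocal tract described by an area function, using a simple lossless concatenated-tube (chain-matrix) model. This is a generic textbook formulation, not the simulation used in the paper; the function names, section length and constants are illustrative assumptions.

```python
import numpy as np

C = 35000.0    # speed of sound in cm/s (assumed value)
RHO = 1.14e-3  # air density in g/cm^3 (assumed value)

def transfer_function(areas, section_len, freqs):
    """Volume-velocity transfer function U_lips/U_glottis of a lossless
    concatenated-tube vocal tract, computed with chain (ABCD) matrices.
    areas: cross-sectional areas in cm^2, ordered from glottis to lips.
    section_len: length of each tube section in cm.
    freqs: frequencies in Hz (avoid f = 0)."""
    omega = 2 * np.pi * np.asarray(freqs, dtype=float)
    k = omega / C                       # wave number at each frequency
    H = np.empty(len(omega), dtype=complex)
    for i, kk in enumerate(k):
        # Start from the identity matrix and chain the sections glottis-to-lips.
        M = np.eye(2, dtype=complex)
        for A in areas:
            Z = RHO * C / A             # characteristic impedance of the section
            kl = kk * section_len
            Msec = np.array([[np.cos(kl), 1j * Z * np.sin(kl)],
                             [1j * np.sin(kl) / Z, np.cos(kl)]])
            M = M @ Msec
        # Ideal open termination at the lips (pressure node): H = U_lips/U_glottis = 1/D.
        H[i] = 1.0 / M[1, 1]
    return H

# Sanity check: a uniform 17.5 cm tube should resonate near 500, 1500, 2500 Hz.
freqs = np.arange(50, 5000, 5)
areas = np.full(35, 4.0)                # 35 sections of 0.5 cm, area 4 cm^2
mag = np.abs(transfer_function(areas, 0.5, freqs))
peaks = freqs[1:-1][(mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])]
print(peaks[:3])                        # roughly [500, 1500, 2500]
```

In a full articulatory synthesizer the area function would change over time as the articulators move, and the acoustic simulation would also include a glottal source and losses; the fixed, lossless tube above only shows how an area function maps to resonances.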

https://hal.archives-ouvertes.fr/hal-01643487

Identifiers

  • HAL Id: hal-01643487, version 1

Citation

Anastasiia Tsukanova, Benjamin Elie, Yves Laprie. Articulatory Speech Synthesis from Static Context-Aware Articulatory Targets. ISSP 2017 - 11th International Seminar on Speech Production, Oct 2017, Tianjin, China. ⟨hal-01643487⟩
