Conference paper, Year: 2021

Learning robust speech representation with an articulatory-regularized variational autoencoder

Abstract

It is increasingly considered that human speech perception and production both rely on articulatory representations. In this paper, we investigate whether this type of representation could improve the performance of a deep generative model (here, a variational autoencoder) trained to encode and decode acoustic speech features. First, we develop an articulatory model able to associate articulatory parameters describing the jaw, tongue, lips, and velum configurations with vocal tract shapes and spectral features. Then, we incorporate these articulatory parameters into a variational autoencoder applied to spectral features, using a regularization technique that constrains part of the latent space to represent articulatory trajectories. We show that this articulatory constraint improves model training by decreasing the time to convergence and the reconstruction loss at convergence, and yields better performance in a speech denoising task.
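To make the regularization idea concrete, the sketch below shows one possible way to constrain part of a VAE latent space toward articulatory parameters: an MSE penalty ties the first latent dimensions to measured articulatory trajectories, added to the usual reconstruction and KL terms. This is a minimal, hypothetical PyTorch illustration; the layer sizes, the choice of penalizing the posterior mean, and the weights `alpha` and `beta` are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class ArticulatoryRegularizedVAE(nn.Module):
    """Sketch of a VAE on spectral features whose first `art_dim` latent
    dimensions are encouraged to follow articulatory trajectories."""

    def __init__(self, spec_dim=40, latent_dim=16, art_dim=8, hidden=128):
        super().__init__()
        self.art_dim = art_dim
        self.encoder = nn.Sequential(nn.Linear(spec_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(), nn.Linear(hidden, spec_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar


def loss_fn(model, x_spec, art_params, beta=1.0, alpha=1.0):
    """Reconstruction + KL + a hypothetical articulatory regularizer (MSE
    between the first latent dimensions and the articulatory parameters)."""
    x_hat, mu, logvar = model(x_spec)
    recon = ((x_hat - x_spec) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    art_reg = ((mu[:, : model.art_dim] - art_params) ** 2).sum(dim=1).mean()
    return recon + beta * kl + alpha * art_reg
```

In this sketch the articulatory term acts only on the posterior mean; other variants could penalize sampled latent codes instead, or decode the articulatory parameters through a dedicated branch.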
Main file
georges-interspeech2021-final.pdf (529.94 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03373252, version 1 (13-10-2021)

Identifiers

Cite

Marc-Antoine Georges, Laurent Girin, Jean-Luc Schwartz, Thomas Hueber. Learning robust speech representation with an articulatory-regularized variational autoencoder. Interspeech 2021 - 22nd Annual Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic. pp.3345-3349, ⟨10.21437/Interspeech.2021-1604⟩. ⟨hal-03373252⟩