Journal article in Speech Communication, 2004

Developing an audio-visual speech source separation algorithm

David Sodoyer
Laurent Girin
Christian Jutten
Jean-Luc Schwartz

Abstract

Looking at the speaker's face helps a listener hear a speech signal better and extract it from competing sources before identification. This observation suggests that new speech enhancement or extraction techniques could be developed to exploit the audio-visual coherence of speech stimuli. In this paper, we present a novel algorithm that plugs audio-visual coherence, estimated by statistical tools, into classical blind source separation algorithms, and we describe its assessment. We show, in the case of additive mixtures, that this algorithm performs better than classical blind tools both when there are as many sensors as sources and when there are fewer sensors than sources. Audio-visual coherence enables a focus on the speech source to be extracted. It may also be used at the output of a classical source separation algorithm, to select the "best" sensor with reference to a target source.
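The paper itself develops the statistical audio-visual model and the full separation algorithm; as a rough illustration of the last idea in the abstract (using audio-visual coherence to select the "best" output of a classical blind separation stage), here is a minimal NumPy sketch. Everything in it is an assumption for illustration: the lip_opening video feature, the amplitude-modulated "speech" source, the toy kurtosis-based separation stage, and the envelope-correlation coherence score all stand in for the statistical tools actually developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two sources: a "speech-like" amplitude-modulated tone whose
# envelope follows a hypothetical lip-opening trajectory, and a
# competing non-Gaussian noise source.
fs, dur = 8000, 2.0
t = np.arange(int(fs * dur)) / fs
lip_opening = 0.5 * (1.0 + np.sin(2.0 * np.pi * 3.0 * t))  # assumed video feature (3 Hz syllabic rhythm)
speech = lip_opening * np.sin(2.0 * np.pi * 200.0 * t)
noise = rng.laplace(size=t.size)

S = np.vstack([speech, noise])
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])          # additive (instantaneous) mixing
X = A @ S                           # two sensors, two sources

# Classical blind separation stage: whitening followed by a
# kurtosis-based search for the demixing rotation (a toy stand-in
# for the BSS algorithms discussed in the paper).
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc   # whitened mixtures

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def contrast(theta):
    Y = rotate(theta) @ Z
    return np.sum((np.mean(Y ** 4, axis=1) - 3.0) ** 2)  # sum of squared excess kurtoses

thetas = np.linspace(0.0, np.pi / 2.0, 500)
Y = rotate(thetas[np.argmax([contrast(th) for th in thetas])]) @ Z

# Audio-visual selection: keep the separated output whose short-term
# energy envelope is most coherent with the lip-opening trajectory.
def envelope(y, win=400):
    e = np.convolve(y ** 2, np.ones(win) / win, mode="same")
    return (e - e.mean()) / e.std()

lips = (lip_opening - lip_opening.mean()) / lip_opening.std()
scores = [abs(np.corrcoef(envelope(y), lips)[0, 1]) for y in Y]
target = Y[int(np.argmax(scores))]
print("AV-coherence scores per output:", np.round(scores, 3))
```

On this synthetic additive mixture, the separated output whose energy envelope best correlates with the lip trajectory is the speech source, which mirrors the selection role the abstract assigns to audio-visual coherence; the paper's actual criterion is a statistical audio-visual coherence model rather than this simple correlation.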
Main file: Sodoyer_Speech_Comm_2004.pdf (600.64 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00186591, version 1 (09-11-2007)

Identifiers

  • HAL Id: hal-00186591, version 1

Cite

David Sodoyer, Laurent Girin, Christian Jutten, Jean-Luc Schwartz. Developing an audio-visual speech source separation algorithm. Speech Communication, 2004, 44, pp. 113-125. ⟨hal-00186591⟩
