Conference paper, 2007

Audiovisual speech source separation: a regularization method based on visual voice activity detection

Abstract

Audio-visual speech source separation consists of combining visual speech processing techniques (e.g., lip parameter tracking) with source separation methods to improve and/or simplify the extraction of a speech signal from a mixture of acoustic signals. In this paper, we present a new approach to this problem: visual information is used as a voice activity detector (VAD). Results show that, in the difficult case of realistic convolutive mixtures, the classic problem of the permutation of the output frequency channels can be solved using visual information with simpler processing than when using audio information alone.
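To make the permutation-alignment idea in the abstract concrete, here is a minimal sketch, not the authors' algorithm: it assumes separated STFT outputs in an array sources_tf of shape (n_sources, n_freqs, n_frames) and a frame-synchronous visual VAD signal visual_vad, both hypothetical names, and in each frequency bin reorders the channels so that the one whose energy envelope correlates best with the visual VAD comes first, which removes the per-bin permutation ambiguity.

```python
import numpy as np

def align_permutations(sources_tf, visual_vad):
    """Illustrative sketch: resolve the per-frequency permutation ambiguity of a
    frequency-domain separation output by correlating each separated channel's
    energy envelope with a visual voice activity detection (VAD) signal.

    sources_tf : complex array, shape (n_sources, n_freqs, n_frames)
                 STFT of the separated outputs in each frequency bin (assumed input).
    visual_vad : float array, shape (n_frames,)
                 1 when the lip parameters indicate speech activity, 0 otherwise (assumed input).
    """
    n_sources, n_freqs, n_frames = sources_tf.shape
    aligned = np.empty_like(sources_tf)
    vad = visual_vad - visual_vad.mean()          # zero-mean VAD profile
    for f in range(n_freqs):
        # Energy envelope of each separated channel in this frequency bin.
        env = np.abs(sources_tf[:, f, :]) ** 2
        env = env - env.mean(axis=1, keepdims=True)
        # Score each channel by its correlation with the visual VAD profile.
        scores = env @ vad
        order = np.argsort(-scores)               # most speech-like channel first
        aligned[:, f, :] = sources_tf[order, f, :]
    return aligned
```

Using the visual VAD as the reference makes the per-bin decision a simple correlation test, which is the sense in which the processing is simpler than audio-only permutation alignment.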
No file deposited

Dates and versions

hal-00195014, version 1 (08-12-2007)

Identifiers

  • HAL Id: hal-00195014, version 1

Cite

Bertrand Rivet, Laurent Girin, Christine Serviere, Dinh-Tuan Pham, Christian Jutten. Audiovisual speech source separation: a regularization method based on visual voice activity detection. AVSP 2007 - 6th International Conference on Auditory-Visual Speech Processing, Aug 2007, Hilvarenbeek, Netherlands. pp.223-227. ⟨hal-00195014⟩