Electrophysiological evidence for audio-visuo-lingual speech integration

Coriandre Vilain 1, 2 Avril Treille 1 Marc Sato 3
1 GIPSA-PCMD, GIPSA-DPC - Département Parole et Cognition
2 GIPSA-Services
GIPSA-lab - Grenoble Images Parole Signal Automatique
Abstract: Audio-visual speech perception is a special case of multisensory processing that interfaces with the linguistic system. One important issue is whether cross-modal interactions depend only on the well-known auditory and visuo-facial modalities or whether they might also be triggered by other sensory sources less common in speech communication. The present EEG study investigated cross-modal interactions not only between auditory, visuo-facial and audio-visuo-facial syllables but also between auditory, visuo-lingual and audio-visuo-lingual syllables. Eighteen adults participated in the study, none of them experienced with visuo-lingual stimuli. The stimuli were acquired by means of a camera and an ultrasound system, synchronized with the acoustic signal. At the behavioral level, visuo-lingual syllables were recognized far above chance, although less accurately than visuo-labial syllables. At the brain level, audiovisual interactions were estimated by comparing the EEG responses to the multisensory stimuli (AV) with the sum of the responses to the stimuli presented in isolation (A+V). For both visuo-labial and visuo-lingual syllables, a reduced latency and a lower amplitude of P2 auditory evoked potentials were observed for AV compared to A+V. In addition to this sub-additive effect, a reduced amplitude of N1 and a higher amplitude of P2 were observed for lingual compared to labial movements. Although participants were not experienced with visuo-lingual stimuli, our results demonstrate that they were able to recognize them and provide the first evidence of audio-visuo-lingual speech interactions. These results further emphasize the multimodal nature of speech perception and likely reflect the impact of listeners' knowledge of speech production.
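The additive-model comparison described in the abstract (bimodal AV response vs. the sum of unimodal A and V responses) can be sketched with simulated data. Everything below is an illustrative assumption, not the study's actual pipeline: the ERP arrays are synthetic Gaussians, and the helper names and the 0.15-0.25 s "P2-like" window are hypothetical.

```python
import numpy as np

def additive_model_difference(erp_av, erp_a, erp_v):
    """Return AV - (A + V); negative values around a component peak
    indicate a sub-additive audiovisual interaction."""
    return erp_av - (erp_a + erp_v)

def peak_in_window(erp, times, t_min, t_max):
    """Amplitude and latency of the maximum within [t_min, t_max] s."""
    mask = (times >= t_min) & (times <= t_max)
    idx = np.argmax(erp[mask])
    return erp[mask][idx], times[mask][idx]

# Simulated 1 kHz epoch, 0-0.4 s, with a P2-like positive peak at 0.2 s.
times = np.arange(0.0, 0.4, 0.001)
p2 = np.exp(-((times - 0.2) ** 2) / (2 * 0.02 ** 2))
erp_a, erp_v = 2.0 * p2, 1.0 * p2
erp_av = 2.5 * p2  # smaller than A+V (3.0 * p2): sub-additive by construction

diff = additive_model_difference(erp_av, erp_a, erp_v)
# Peak sub-additivity in a hypothetical P2 window (0.15-0.25 s).
amp, lat = peak_in_window(-diff, times, 0.15, 0.25)
```

With these synthetic responses, `diff` is negative throughout the epoch and peaks at the simulated P2 latency, mirroring the kind of sub-additive P2 effect the abstract reports.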
Document type:
Poster
IMRF 2015 - 16th international multisensory research forum, Jun 2015, Pisa, Italy. 〈http://www.pisavisionlab.org/imrf2015/〉

https://hal.archives-ouvertes.fr/hal-01297678
Contributor: Avril Treille
Submitted on: Tuesday, April 5, 2016 - 09:51:30
Last modified on: Monday, May 28, 2018 - 10:51:43
Archived on: Wednesday, July 6, 2016 - 11:52:02

File

IMRF-EEG-tongue-Final.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01297678, version 1

Citation

Coriandre Vilain, Avril Treille, Marc Sato. Electrophysiological evidence for audio-visuo-lingual speech integration. IMRF 2015 - 16th international multisensory research forum, Jun 2015, Pisa, Italy. 〈http://www.pisavisionlab.org/imrf2015/〉. 〈hal-01297678〉

Metrics

Record views: 330

File downloads: 75