Motherese by Eye and Ear: Infants Perceive Visual Prosody in Point-Line Displays of Talking Heads
Journal article in PLoS ONE, 2014


Abstract

Infant-directed (ID) speech provides exaggerated auditory and visual prosodic cues. Here we investigated whether infants are sensitive to the match between the auditory and visual correlates of ID speech prosody. We presented 8-month-old infants with two silent line-joined point-light displays of faces speaking different ID sentences, together with a single vocal-only sentence matched to one of the displays. Infants looked longer at the matched than the mismatched visual signal when full-spectrum speech was presented, and also when the vocal signals contained speech low-pass filtered at 400 Hz. When the visual display was separated into rigid (head-only) and non-rigid (face-only) motion, infants looked longer at the visual match in the rigid condition, and at the visual mismatch in the non-rigid condition. Overall, the results suggest that 8-month-olds can extract information about the prosodic structure of speech from voice and head kinematics and are sensitive to their match, but are less sensitive to the match between lip and voice information in connected speech.

Dates and versions

hal-01478469 , version 1 (28-02-2017)

Identifiers

Cite

Christine Kitamura, Bahia Guellaï, Jeesun Kim. Motherese by Eye and Ear: Infants Perceive Visual Prosody in Point-Line Displays of Talking Heads. PLoS ONE, 2014, 9 (10), pp.e111467. ⟨10.1371/journal.pone.0111467⟩. ⟨hal-01478469⟩