
Homophonic speech sequences in French: The role of acoustic and contextual cues for disambiguation

Abstract: Despite the lack of clear word boundaries in spoken language, the human ability to recognize speech seems effortless. Listeners divide continuous speech into linguistically and psychologically significant units to access meaning. Speech segmentation has been shown to be affected both by listeners’ sensitivity to acoustic cues and sub-phonemic properties (Davis et al., 2002; Mattys, 2004) and by contextual information and lexical competition (Norris, 1994; Dahan & Brent, 1999). Fine acoustic details can influence word boundary perception (Friederici & Wessels, 1993; Davis et al., 2002), particularly when contextual information is insufficient (Mattys et al., 2005). Yet, it remains unclear how low-level signals can affect higher-order information, i.e., how the speech recognition system reacts when exposed to mismatched acoustic and sentential information. We aimed to investigate the cost of mismatching the fine acoustic cues of the speech signal during sentence processing. More precisely, we wanted to explore whether and how such fine acoustic details affect semantic processing. In the present experiment, we recorded French sentences containing homophonic sequences: article+noun combinations that can be segmented differently, such as “l’affiche” (“the poster”) and “la fiche” (“the sheet”), both pronounced /lɑfiʃ/. We recorded the Event-Related brain Potentials (ERPs) to three different conditions: baseline, congruent, and incongruent, each of which comprised 46 sentences. To avoid acoustic differences across conditions, congruent and incongruent sentences were created by cross-splicing the article+noun sequences between sentences. For example, two meaningful sentences were selected for the pair “la fiche”–“l’affiche”: (1) “La secrétaire médicale a perdu la fiche du patient” (“The medical secretary lost the patient's chart”) and (2) “Le Théâtre National a choisi l'affiche du spectacle” (“The National Theatre has chosen the poster for the show”).
We extracted “l’affiche” from (2) and placed it in (1), generating the incongruent (3) “La secrétaire médicale a perdu l’affiche du patient”. A similar manipulation was done for the congruent condition: we swapped the two “la fiche” tokens from (1) and (4) “Le comptable remplit la fiche de ses employés” (“The accountant fills out his employees' chart”). While the sentences were not highly predictable, the congruent homophone choice was consistently more predictable than the incongruent one. EEG data were acquired using Curry 8.0 XS software (Neuroscan SynAmps 2/RT; 64-electrode Quik-Cap Neo Net, adjusted to the International 10/20 standard system). We used the EEGLAB toolbox (Delorme & Makeig, 2004) for offline analyses. Epochs were extracted from -200 to 1100 ms after stimulus onset, with a baseline period of -200 to 0 ms. Sources for each independent component were evaluated with the ICLabel plugin (Pion-Tonachini et al., 2017). To assess semantic processing differences, we focused our preliminary analyses on the N400 component and found a statistically significant effect of topographical distribution in our regions of interest (ROIs) and an interaction between condition and ROI. Topographic analyses revealed significant mean-amplitude differences between frontal and parietal brain regions, suggesting the presence of an N400-like component over central-to-parietal sites. This would suggest that listeners take fine acoustic cues into account when challenged with mismatching acoustic and sentential information.
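The epoching step described above (-200 to 1100 ms windows around stimulus onset, baseline-corrected over -200 to 0 ms) can be sketched in plain NumPy. This is a minimal single-channel illustration, not the authors' EEGLAB pipeline; the function name and the synthetic signal are assumptions for demonstration only.

```python
import numpy as np

def extract_epoch(signal, onset_idx, fs, tmin=-0.2, tmax=1.1, baseline=(-0.2, 0.0)):
    """Cut an epoch around a stimulus onset and baseline-correct it.

    signal    : 1-D array, one continuous EEG channel
    onset_idx : sample index of the stimulus onset
    fs        : sampling rate in Hz
    tmin/tmax : epoch window in seconds relative to onset
    baseline  : interval (in s, relative to onset) whose mean is subtracted
    """
    start = onset_idx + int(round(tmin * fs))
    stop = onset_idx + int(round(tmax * fs))
    epoch = signal[start:stop].astype(float)

    # Baseline period expressed as sample indices within the epoch
    b0 = int(round((baseline[0] - tmin) * fs))
    b1 = int(round((baseline[1] - tmin) * fs))
    epoch -= epoch[b0:b1].mean()
    return epoch

# Tiny demonstration: a flat 5 µV "baseline" followed by a negative deflection
fs = 1000
signal = np.concatenate([np.full(1000, 5.0), np.full(2000, -2.0)])
epoch = extract_epoch(signal, onset_idx=1000, fs=fs)
print(len(epoch))                    # 1300 samples = 1.3 s at 1000 Hz
print(round(epoch[:200].mean(), 6))  # baseline mean is 0 after correction
```

In practice this is what `pop_epoch`/`pop_rmbase` in EEGLAB (or `mne.Epochs` with `baseline=(-0.2, 0)` in MNE-Python) do per channel and per trial; the sketch only makes the window arithmetic explicit.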
Contributor: Fanny Meunier
Submitted on: Sunday, December 6, 2020 - 6:20:14 PM
Last modification on: Tuesday, October 19, 2021 - 10:57:20 PM


  • HAL Id: hal-03042439, version 1



Maria del Mar Cordero Rull, Damien Vistoli, Stéphane Pota, Elsa Spinelli, Fanny Meunier. Homophonic speech sequences in French: The role of acoustic and contextual cues for disambiguation. Conference of the Society for the Neurobiology of Language (SNL 2020), Oct 2020, Philadelphia, United States. ⟨hal-03042439⟩


