Journal article. Frontiers in Human Neuroscience. Year: 2013

Using auditory classification images for the identification of fine acoustic cues used in speech perception.

Abstract

An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of the physical stimulus modulate the behavior of our perceptual system. More specifically, in the context of speech comprehension, it remains a major open challenge to determine which information is used to categorize a speech stimulus as one phoneme or another, since the auditory primitives relevant for the categorical perception of speech are still unknown. Here we propose to adapt to auditory experiments a method relying on a Generalized Linear Model (GLM) with smoothness priors, already used in the visual domain to estimate so-called classification images. This statistical model offers a rigorous framework for dealing with non-Gaussian noise, as is often the case in the auditory modality, and limits the amount of noise in the estimated template by enforcing smoother solutions. By applying this technique to a two-alternative forced choice experiment between the stimuli "aba" and "ada" in noise with an adaptive SNR, we confirm that the second formant transition is key for classifying the phoneme as /b/ or /d/ in noise, and that its estimation by the auditory system is a relative measurement across spectral bands and in relation to the perceived height of the second formant in the preceding syllable. Through this example, we show how the GLM-with-smoothness-priors approach can be applied to the identification of fine functional acoustic cues in speech perception. Finally, we discuss some assumptions of the model in the specific case of speech perception.
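For intuition only, the sketch below illustrates the general idea behind a classification-image estimate of this kind: a penalized logistic regression (a GLM with a Bernoulli likelihood) in which the weight vector plays the role of the classification image, and the smoothness prior is approximated by a quadratic penalty on its second differences. This is not the authors' implementation; the data shapes, penalty form, the `lam` smoothing parameter, and the simulated observer are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def second_diff_operator(n):
    """1-D second-difference matrix of shape (n-2, n), used to penalize curvature."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def fit_classification_image(X, y, lam=1.0):
    """
    Penalized logistic GLM: binary responses y (0/1) regressed on the
    trial-wise noise fields X (n_trials, n_bins). The smoothness prior is
    a quadratic penalty on second differences of the weight vector.
    """
    n_trials, n_bins = X.shape
    D = second_diff_operator(n_bins)
    P = D.T @ D  # penalty matrix

    def neg_penalized_loglik(params):
        b0, beta = params[0], params[1:]
        eta = b0 + X @ beta
        # Bernoulli log-likelihood under the logistic link:
        # sum_i [ y_i * eta_i - log(1 + exp(eta_i)) ]
        loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
        penalty = lam * beta @ P @ beta
        return -loglik + penalty

    x0 = np.zeros(n_bins + 1)
    res = minimize(neg_penalized_loglik, x0, method="L-BFGS-B")
    return res.x[0], res.x[1:]  # intercept, estimated template

# Synthetic demo: a smooth bump-shaped template drives a simulated observer.
rng = np.random.default_rng(0)
n_trials, n_bins = 2000, 60
true_template = np.exp(-0.5 * ((np.arange(n_bins) - 30) / 5.0) ** 2)
X = rng.normal(size=(n_trials, n_bins))            # trial-wise noise fields
p = 1.0 / (1.0 + np.exp(-(X @ true_template)))     # choice probabilities
y = rng.binomial(1, p)                             # simulated responses

_, beta_hat = fit_classification_image(X, y, lam=5.0)
print("correlation with true template:",
      np.corrcoef(beta_hat, true_template)[0, 1])
```

In a real experiment the template would be defined over a two-dimensional time-frequency grid rather than a single vector, and the strength of the smoothness penalty would be chosen by cross-validation or a similar criterion rather than fixed by hand.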
Main file
Varnet_L._Knoblauch_K._Meunier_F._Hoen_M._2013_._Frontiers_in_Human_Neuroscience_7_865.pdf (1.08 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-00931465, version 1 (15-01-2014)

Identifiers

Cite

Léo Varnet, Kenneth Knoblauch, Fanny Meunier, Michel Hoen. Using auditory classification images for the identification of fine acoustic cues used in speech perception. Frontiers in Human Neuroscience, 2013, 7, pp.865. ⟨10.3389/fnhum.2013.00865⟩. ⟨hal-00931465⟩
169 Views
102 Downloads

