A reanalysis of McGurk data suggests that audiovisual fusion in speech perception is subject-dependent
Abstract
Audiovisual perception of conflicting stimuli displays a high level of intersubject variability, generally larger than that of purely auditory or visual data. However, it is not clear whether this actually reflects differences in integration per se, or merely the consequence of slight differences in unisensory perception. It is argued that the debate has been blurred by methodological problems in the analysis of experimental data, particularly when using the Fuzzy-Logical Model of Perception (FLMP; Massaro, 1987), which has been shown to overfit McGurk stimuli (Schwartz, 2006). A large corpus of McGurk data is reanalyzed with a methodology based on (1) comparison of the FLMP with a variant, the WFLMP, in which the auditory and visual inputs receive subject-dependent weights in the fusion process, (2) use of a Bayesian Model Selection criterion instead of a Root Mean Square Error fit for model assessment, and (3) systematic exploration of the number of useful parameters in the models under comparison, in an attempt to discard parameters with little explanatory power. It is shown that the WFLMP performs significantly better than the FLMP, suggesting that audiovisual fusion is indeed subject-dependent, with some subjects being more "auditory" and others more "visual". Intersubject variability has important consequences for the theoretical understanding of the fusion process and for the rehabilitation of hearing-impaired people.
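For reference, here is a minimal sketch of the two models under comparison, using one natural parameterization assumed for illustration rather than taken from the paper. Writing $a_i$ and $v_i$ for the auditory and visual evidence in favor of response alternative $i$, the FLMP fuses the two modalities multiplicatively, while the WFLMP introduces a subject-dependent weight $\lambda \in [0,1]$ in the exponents:

$$P_{\mathrm{FLMP}}(i \mid A, V) = \frac{a_i\, v_i}{\sum_j a_j\, v_j}, \qquad P_{\mathrm{WFLMP}}(i \mid A, V) = \frac{a_i^{2\lambda}\, v_i^{2(1-\lambda)}}{\sum_j a_j^{2\lambda}\, v_j^{2(1-\lambda)}}.$$

Under this parameterization, $\lambda$ near 1 characterizes a more "auditory" subject, $\lambda$ near 0 a more "visual" one, and $\lambda = 1/2$ recovers the standard FLMP.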