Journal article in Journal of Deaf Studies and Deaf Education, 2019

Quantifying facial expression intensity and signal use in deaf signers

Abstract

We live in a world surrounded by rich, dynamic, multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We therefore compared early and profoundly deaf signers (n = 46) with hearing non-signers (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full-signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modelling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and […]
Main file: 2021-Helyon-Rodger (1).pdf (2.16 MB)
Origin: Publisher files authorized on an open archive

Dates and versions

hal-02567187 , version 1 (07-05-2020)
hal-02567187 , version 2 (25-05-2021)

Cite

Helen Rodger, Junpeng Lao, Chloé Stoll, Anne-Raphaëlle Richoz, Olivier Pascalis, et al.. Quantifying facial expression intensity and signal use in deaf signers. Journal of Deaf Studies and Deaf Education, 2019, 24 (4), pp.346-355. ⟨10.1093/deafed/enz023⟩. ⟨hal-02567187v2⟩
44 views
199 downloads
