Multimodal Expressions of Stress during a Public Speaking Task: Collection, Annotation and Global Analyses

Abstract: Databases of spontaneous multimodal expressions of affective states occurring during a task are scarce. This paper presents a protocol for eliciting stress in a public speaking task. The behaviors of 19 participants were recorded via a multimodal setup including speech, video of facial expressions and body movements, postural balance measured with a force plate, and physiological signals. Questionnaires were used to assess emotional states, personality profiles, and relevant coping behaviors, in order to study how participants cope with stressful situations. Several subjective and objective performance measures were also evaluated. Results show a significant impact of the overall task and conditions on the participants' emotional activation. Possible future uses of this new multimodal emotional corpus are described.
Identifiers

https://hal.archives-ouvertes.fr/hal-01443842

Citation

Tom Giraud, Marie Soury, Jiewen Hua, Agnes Delaborde, Marie Tahon, et al.. Multimodal Expressions of Stress during a Public Speaking Task: Collection, Annotation and Global Analyses. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII 2013), Sep 2013, Genève, Switzerland. ⟨10.1109/ACII.2013.75⟩. ⟨hal-01443842⟩