Multi-stage Classification of Emotional Speech Motivated by a Dimensional Emotion Model

Zhongzhe Xiao 1, Emmanuel Dellandréa 1, Weibei Dou, Liming Chen 1
1 imagine - Extraction de Caractéristiques et Identification
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: This paper deals with speech emotion analysis within the context of increasing awareness of the wide application potential of affective computing. Unlike most works in the literature, which mainly rely on classical frequency- and energy-based features along with a single global classifier for emotion recognition, we propose new harmonic and Zipf-based features for better speech emotion characterization in the valence dimension, together with a multi-stage classification scheme driven by a dimensional emotion model for better discrimination between emotional classes. Evaluated on the Berlin dataset with 68 features and six emotion states, our approach proves effective, achieving a 68.60% classification rate and reaching 71.52% when gender classification is applied first. On the DES dataset with five emotion states, our approach achieves an 81% recognition rate, while the best performance reported in the literature on the same dataset is, to our knowledge, 76.15%.
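The multi-stage scheme is detailed in the paper itself; as a rough illustration only, the sketch below shows one common way such a dimensional-model-driven hierarchy can be organized: a first stage separates emotions by arousal, and a second stage discriminates the emotions within each arousal group. The arousal grouping, the SVM classifiers, and all names in the code are assumptions made for this sketch, not the authors' actual configuration or features.

# Illustrative sketch only: two-stage classification driven by a dimensional
# (arousal/valence) view of emotions. The grouping below and the use of SVMs
# are assumptions for illustration, not the paper's exact setup.
import numpy as np
from sklearn.svm import SVC

HIGH_AROUSAL = {"anger", "happiness", "fear"}    # assumed arousal grouping
LOW_AROUSAL = {"sadness", "boredom", "neutral"}  # assumed arousal grouping

class TwoStageEmotionClassifier:
    def __init__(self):
        self.stage1 = SVC()       # stage 1: high vs. low arousal
        self.stage2_high = SVC()  # stage 2: emotions within the high-arousal group
        self.stage2_low = SVC()   # stage 2: emotions within the low-arousal group

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        arousal = np.array([label in HIGH_AROUSAL for label in y])
        self.stage1.fit(X, arousal)
        self.stage2_high.fit(X[arousal], y[arousal])
        self.stage2_low.fit(X[~arousal], y[~arousal])
        return self

    def predict(self, X):
        X = np.asarray(X)
        arousal = self.stage1.predict(X).astype(bool)
        out = np.empty(len(X), dtype=object)
        if arousal.any():
            out[arousal] = self.stage2_high.predict(X[arousal])
        if (~arousal).any():
            out[~arousal] = self.stage2_low.predict(X[~arousal])
        return out

In use, such a classifier would be fitted on precomputed acoustic feature vectors and their emotion labels, e.g. clf = TwoStageEmotionClassifier().fit(train_features, train_labels); the point of the hierarchy is that each stage only has to solve an easier sub-problem along one emotional dimension.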
Document type:
Journal article
Multimedia Tools and Applications, 2010, 46 (1), pp. 119-145. 〈10.1007/s11042-009-0319-3〉

https://hal.archives-ouvertes.fr/hal-01381431
Contributor: Équipe Gestionnaire Des Publications Si Liris
Submitted on: Friday, October 14, 2016 - 14:45:21
Last modified on: Saturday, October 15, 2016 - 01:05:30

Citation

Zhongzhe Xiao, Emmanuel Dellandréa, Weibei Dou, Liming Chen. Multi-stage Classification of Emotional Speech Motivated by a Dimensional Emotion Model. Multimedia Tools and Applications, 2010, 46 (1), pp. 119-145. 〈10.1007/s11042-009-0319-3〉. 〈hal-01381431〉
