L. Devillers, L. Vidrascu, and L. Lamel, Challenges in real-life emotion annotation and machine learning based detection, Neural Networks, special issue: Emotion and Brain, pp. 407-422, 2005.
DOI : 10.1016/j.neunet.2005.03.007

E. Douglas-Cowie, R. Cowie, I. Sneddon, C. Cox, O. Lowry, M. McRorie, J.-C. Martin, L. Devillers, S. Abrilian et al., The HUMAINE database: Addressing the collection and annotation of naturalistic and induced emotional data, 2007.

M. Castellengo, Perception et acoustique de la qualité vocale dans le chant lyrique, ICVPB, 2004.

M. Kotti and C. Kotropoulos, Gender classification in two Emotional Speech databases, 19th International Conference on Pattern Recognition (ICPR), 2008.
DOI : 10.1109/ICPR.2008.4761624

URL : http://figment.cse.usf.edu/~sfefilat/data/papers/TuBCT9.22.pdf

A. Martinet, Éléments de linguistique générale, 1980.

G. Peeters, A large set of audio features for sound description (similarity and classification) in the CUIDADO project, Technical Report, IRCAM, 2004.

N. Rollet, A. Delaborde, and L. Devillers, Protocol CINEMO: The use of fiction for collecting emotional data in naturalistic controlled oriented context, 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII), 2009.
DOI : 10.1109/ACII.2009.5349545

K. R. Scherer, T. Johnstone, and G. Klasmeyer, Vocal expression of emotion, Handbook of Affective Sciences, chapter 23, pp. 433-456, 2003.

S. Steidl, B. Schuller, A. Batliner, and D. Seppi, The hinterland of emotions: Facing the open-microphone challenge, 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII), 2009.
DOI : 10.1109/ACII.2009.5349499