J. Bachorowski, M. Smoski, and M. Owren, The acoustic features of human laughter, The Journal of the Acoustical Society of America, vol.110, pp.1581-1597, 2001.

J. Bachorowski and M. J. Owren, Not all laughs are alike: Voiced but not unvoiced laughter readily elicits positive affect, Psychological Science, vol.12, pp.252-257, 2001.
DOI : 10.1111/1467-9280.00346

T. Baltrušaitis, M. Mahmoud, and P. Robinson, Cross-dataset learning and person-specific normalisation for automatic action unit detection, Proc. 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol.6, pp.1-6, 2015.

A. Batliner, S. Steidl, F. Eyben, and B. Schuller, On laughter and speech-laugh, based on observations of child-robot interaction, The phonetics of laughing, 2011.

L. S. Berk, S. A. Tan, B. J. Napier, and W. C. Eby, Eustress of mirthful laughter modifies natural-killer cell activity, Proc, vol.37, p.115, 1989.

P. Boersma and D. Weenink, Praat: doing phonetics by computer, 2005.

D. Bone, C. Lee, and S. S. Narayanan, Robust Unsupervised Arousal Rating: A Rule-Based Framework with Knowledge-Inspired Vocal Features, IEEE Transactions on Affective Computing, vol.5, pp.201-213, 2014.

H. Brugman and A. Russel, Annotating multimedia/multi-modal resources with ELAN, Proc. International Conference on Language Resources and Evaluation (LREC), ELRA, pp.2065-2068, 2004.

W. L. Chafe, The importance of not being earnest: The feeling behind laughter and humor, vol.3, 2007.

A. J. Chapman, Social aspects of humorous laughter, Humour and laughter: Theory, research and applications, pp.155-185, 1976.

S. Cosentino, S. Sessa, and A. Takanishi, Quantitative laughter detection, measurement, and classification: A critical survey, IEEE Reviews in Biomedical Engineering, vol.9, pp.148-162, 2016.

C. Darwin, P. Ekman, and P. Prodger, The expression of the emotions in man and animals, 1998.

L. Devillers and L. Vidrascu, Positive and negative emotional states behind the laughs in spontaneous spoken dialogs, Proc. Interdisciplinary workshop on the phonetics of laughter, p.37, 2007.

P. Ekman, What we have learned by measuring facial behavior, What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS), pp.469-485, 1997.
DOI : 10.1093/acprof:oso/9780195179644.003.0030

P. Ekman, R. J. Davidson, and W. V. Friesen, The Duchenne smile: Emotional expression and brain physiology: II, Journal of personality and social psychology, vol.58, p.342, 1990.

P. Ekman, W. V. Friesen, and J. C. Hager, Facial action coding system, 2002.

F. Eyben, K. Scherer, B. Schuller, J. Sundberg, E. André et al., The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing, IEEE Transactions on Affective Computing, 2015.
DOI : 10.1109/taffc.2015.2457417

URL : https://doi.org/10.1109/taffc.2015.2457417

F. Eyben, F. Weninger, F. Gross, and B. Schuller, Recent Developments in openSMILE, the Munich Open-Source Multimedia Feature Extractor, Proc. ACM Multimedia (MM), pp.835-838, 2013.
DOI : 10.1145/2502081.2502224

URL : http://mediatum.ub.tum.de/doc/1189646/document.pdf

R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin, LIBLINEAR: A library for large linear classification, Journal of machine learning research, vol.9, pp.1871-1874, 2008.

P. Glenn, Laughter in interaction, vol.18, 2003.

J. Hall and W. H. Watson, The effects of a normative intervention on group decision-making performance, Human relations, vol.23, pp.299-317, 1970.

A. Hanjalic and L. Xu, User-oriented affective video content analysis, IEEE Workshop on Content-Based Access of Image and Video Libraries, pp.50-57, 2001.
DOI : 10.1109/ivl.2001.990856

W. Hudenko, W. Stone, and J. Bachorowski, Laughter differs in children with autism: An acoustic analysis of laughs produced by children with and without the disorder, Journal of autism and developmental disorders, vol.39, pp.1392-1400, 2009.

K. Laskowski, Contrasting emotion-bearing laughter types in multiparticipant vocal activity detection for meetings, Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, pp.4765-4768, 2009.
DOI : 10.1109/icassp.2009.4960696

URL : http://www.cs.cmu.edu/~kornel/pubs/0004765.pdf

K. Laskowski and S. Burger, Analysis of the occurrence of laughter in meetings, Proc. INTERSPEECH, pp.1258-1261, 2007.

K. Laskowski and T. Schultz, Detection of laughter-in-interaction in multichannel close-talk microphone recordings of meetings, Machine Learning for Multimodal Interaction, pp.149-160, 2008.

F. Lingenfelser, J. Wagner, E. André, G. McKeown, W. Curran et al., An event driven fusion approach for enjoyment recognition in real-time, Proc. of the 22nd ACM international conference on Multimedia, pp.377-386, 2014.
DOI : 10.1145/2647868.2654924

URL : https://pure.qub.ac.uk/portal/files/13894772/Lingenfelser_et_al.pdf

R. A. Martin, Humor, laughter, and physical health: Methodological issues and research findings, Psychological bulletin, vol.127, p.504, 2001.
DOI : 10.1037/0033-2909.127.4.504

G. McKeown, W. Curran, J. Wagner, F. Lingenfelser, and E. André, The Belfast storytelling database: A spontaneous social interaction database with laughter focused annotation, Proc. International Conference on Affective Computing and Intelligent Interaction (ACII), pp.166-172, 2015.

G. McKeown, M. Valstar, R. Cowie, M. Pantic, and M. Schröder, The SEMAINE database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent, IEEE Transactions on Affective Computing, vol.3, pp.5-17, 2012.

C. C. Neuhoff and C. Schaefer, Effects of laughing, smiling, and howling on mood, Psychological Reports, vol.91, pp.1079-1080, 2002.
DOI : 10.2466/pr0.2002.91.3f.1079

R. Niewiadomski, M. Mancini, T. Baur, G. Varni, H. Griffin et al., MMLI: Multimodal multiperson corpus of laughter in interaction, Proc. International Workshop on Human Behavior Understanding, pp.184-195, 2013.
DOI : 10.1007/978-3-319-02714-2_16

E. Nwokah, H. Hsu, P. Davies, and A. Fogel, The integration of laughter and speech in vocal communication: A dynamic systems perspective, Journal of Speech, Language, and Hearing Research, vol.42, pp.880-894, 1999.

F. Orozco, F. García, L. Arcos, and J. Gonzàlez, Spatio-temporal reasoning for reliable facial expression interpretation, Proc. International Conference on Computer Vision Systems (ICVS), 2007.

S. Petridis, M. Leveque, and M. Pantic, Audiovisual detection of laughter in human-machine interaction, Proc. 5th International Conference on Affective Computing and Intelligent Interaction (ACII), pp.129-134, 2013.
DOI : 10.1109/acii.2013.28

URL : http://ibug.doc.ic.ac.uk/media/uploads/documents/petridislevequepantic_acii2013.pdf

S. Petridis, B. Martínez, and M. Pantic, The MAHNOB laughter database, Image and Vision Computing, vol.31, pp.186-202, 2013.
DOI : 10.1016/j.imavis.2012.08.014

URL : http://ibug.doc.ic.ac.uk/media/uploads/documents/mahnob_laughter_db.pdf

S. Petridis and M. Pantic, Audiovisual laughter detection based on temporal features, Proc. of the 10th international conference on Multimodal interfaces, pp.37-44, 2008.

S. Petridis and M. Pantic, Audiovisual discrimination between speech and laughter: Why and when visual information might help, IEEE Transactions on Multimedia, vol.13, pp.216-234, 2011.

R. R. Provine, Laughter: A scientific investigation, 2001.

F. Ringeval, S. Amiriparian, F. Eyben, K. Scherer, and B. Schuller, Emotion Recognition in the Wild: Incorporating Voice and Lip Activity in Multimodal Decision-Level Fusion, Proc. of EmotiW, ICMI, pp.473-480, 2014.

F. Ringeval, F. Eyben, E. Kroupi, A. Yuce, J. Thiran et al., Prediction of Asynchronous Dimensional Emotion Ratings from Audiovisual and Physiological Data, Pattern Recognition Letters, vol.66, pp.22-30, 2015.

F. Ringeval, B. Schuller, M. Valstar, S. Jaiswal, E. Marchi et al., AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data, Proc. of the 5th International Workshop on Audio/Visual Emotion Challenge, pp.3-8, 2015.

F. Ringeval, A. Sonderegger, J. Sauer, and D. Lalanne, Introducing the RECOLA Multimodal Corpus of Remote Collaborative and Affective Interactions, Proc. 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 2013.

W. Ruch and P. Ekman, The expressive pattern of laughter. Emotion, qualia, and consciousness, pp.426-443, 2001.

S. Scherer, F. Schwenker, N. Campbell, and G. Palm, Multimodal laughter detection in natural discourses, Human Centered Robot Systems, pp.111-120, 2009.

M. Schröder, Experimental study of affect bursts, Speech communication, vol.40, pp.99-116, 2003.

B. Schuller, The computational paralinguistics challenge, IEEE Signal Processing Magazine, vol.29, pp.97-101, 2012.
URL : https://hal.archives-ouvertes.fr/hal-01993250

I. Sneddon, M. McRorie, G. McKeown, and J. Hanratty, The Belfast induced natural emotion database, IEEE Transactions on Affective Computing, vol.3, pp.32-41, 2012.

R. Stibbard, Automated extraction of ToBI annotation data from the Reading/Leeds emotional speech corpus, Proc. ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion, 2000.

M. T. Suarez, J. Cu, and M. Sta. Maria, Building a Multimodal Laughter Database for Emotion Recognition, Proc. LREC, pp.2347-2350, 2012.

D. P. Szameitat, K. Alter, A. J. Szameitat, C. J. Darwin, D. Wildgruber et al., Differentiation of emotions in laughter at the behavioral level, Emotion, vol.9, p.397, 2009.

J. Trouvain, Phonetic Aspects of Speech-Laughs, Proc. of the Conference on Orality & Gestuality (ORAGE), pp.634-639, 2001.

J. Trouvain, Segmenting Phonetic Units in Laughter, Proc. of the 15th International Congress of Phonetic Sciences, pp.2793-2796, 2003.

K. Truong and J. Trouvain, Laughter annotations in conversational speech corpora: possibilities and limitations for phonetic analysis, Proceedings of the 4th International Workshop on Corpora for Research on Emotion Sentiment and Social Signals, pp.20-24, 2012.

J. Urbain, , 2014.

J. Urbain, E. Bevacqua, T. Dutoit, A. Moinet, R. Niewiadomski et al., The AVLaughterCycle Database, Proc. LREC, 2010.
DOI : 10.1007/s12193-010-0053-1

M. Valstar, J. Gratch, B. Schuller, F. Ringeval, D. Lalanne et al., AVEC 2016: Depression, Mood, and Emotion Recognition Workshop and Challenge, Proc. of the 6th International Workshop on Audio/Visual Emotion Challenge, pp.3-10, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01494127

M. Valstar, B. Schuller, K. Smith, T. Almaev, F. Eyben et al., AVEC 2014: The three dimensional affect and depression challenge, Proc. of ACM MM, 2014.