D. Acharya, Z. Huang, D. P. Paudel, and L. Van Gool, Covariance pooling for facial expression recognition, CVPR Workshops, 2018.

J. C. Batista, V. Albiero, O. R. Bellon, and L. Silva, AUMPNet: Simultaneous action units detection and intensity estimation on multi-pose facial images using a single convolutional neural network, Automatic Face and Gesture Recognition, 2017.

V. Blanz, C. Basso, T. Poggio, and T. Vetter, Reanimating faces in images and video, Computer Graphics Forum, vol. 22, pp. 641-650, 2003.

E. Cambria, Affective computing and sentiment analysis, IEEE Intelligent Systems, vol. 31, no. 2, pp. 102-107, 2016.

Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim et al., StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation, CVPR, 2018.

A. Dhall, A. Kaur, R. Goecke, and T. Gedeon, EmotiW 2018: Audio-video, student engagement and group-level affect prediction, ICMI, pp. 653-656, 2018.

H. Ding, K. Sricharan, and R. Chellappa, ExprGAN: Facial expression editing with controllable expression intensity, AAAI, 2018.

S. Du, Y. Tao, and A. M. Martinez, Compound facial expressions of emotion, Proceedings of the National Academy of Sciences, 2014.

P. Ekman and W. V. Friesen, Constants across cultures in the face and emotion, Journal of Personality and Social Psychology, vol. 17, no. 2, p. 124, 1971.

R. Ekman, What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS), Oxford University Press, 1997.

C. F. Benitez-Quiroz, R. Srinivasan, and A. M. Martinez, EmotioNet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild, CVPR, 2016.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley et al., Generative adversarial nets, NIPS, pp. 2672-2680, 2014.

K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, CVPR, 2016.

J. Huang, Y. Li, J. Tao, and Z. Lian, Speech emotion recognition from variable-length inputs with triplet loss function, Interspeech, pp. 3673-3677, 2018.

C. Kervadec, V. Vielzeuf, S. Pateux, A. Lechervy, and F. Jurie, CAKE: Compact and accurate k-dimensional representation of emotion, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01849908

H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies et al., Deep video portraits, SIGGRAPH, 2018.

D. P. Kingma and M. Welling, Auto-encoding variational Bayes, ICLR, 2014.

B. Knyazev, R. Shvetsov, N. Efremova, and A. Kuharenko, Convolutional neural networks pretrained on large face recognition datasets for emotion classification from video, Automatic Face and Gesture Recognition, 2018.

S. Li, W. Deng, and J. Du, Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild, CVPR, 2017.

P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar et al., The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression, CVPR Workshops, 2010.

M. J. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, and J. Budynek, The Japanese Female Facial Expression (JAFFE) database, Automatic Face and Gesture Recognition, 1998.

A. Mehrabian, Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament, Current Psychology, vol. 14, no. 4, pp. 261-292, 1996.

M. Mirza and S. Osindero, Conditional generative adversarial nets, 2014.

A. Mollahosseini, B. Hasani, and M. H. Mahoor, AffectNet: A database for facial expression, valence, and arousal computing in the wild, Transactions on Affective Computing, 2017.

H. Ng, V. D. Nguyen, V. Vonikakis, and S. Winkler, Deep learning for emotion recognition on small datasets using transfer learning, ICMI, 2015.

S. Poria, E. Cambria, N. Howard, G. Huang, and A. Hussain, Fusing audio, visual and textual clues for sentiment analysis from multimodal content, Neurocomputing, vol. 174, 2016.

A. Pumarola, A. Agudo, A. M. Martinez, A. Sanfeliu, and F. Moreno-Noguer, GANimation: Anatomically-aware facial animation from a single image, ECCV, pp. 818-833, 2018.

F. Qiao, N. Yao, Z. Jiao, Z. Li, H. Chen et al., Emotional facial expression transfer from a single image via generative adversarial nets, Computer Animation and Virtual Worlds, vol. 29, no. 3-4, e1819, 2018.

F. Ringeval, B. Schuller, M. Valstar, J. Gratch, R. Cowie et al., AVEC 2017: Real-life depression, and affect recognition workshop and challenge, 2017.
URL : https://hal.archives-ouvertes.fr/hal-02080874

S. Rosenthal, N. Farra, and P. Nakov, SemEval-2017 Task 4: Sentiment analysis in Twitter, Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pp. 502-518, 2017.

J. A. Russell, A circumplex model of affect, Journal of Personality and Social Psychology, vol. 39, no. 6, p. 1161, 1980.

B. Schuller, S. Steidl, A. Batliner, A. Vinciarelli, K. Scherer et al., The Interspeech 2013 computational paralinguistics challenge: Social signals, conflict, emotion, autism, 2013.

B. W. Schuller, S. Steidl, A. Batliner, P. B. Marschik, H. Baumeister et al., The Interspeech 2018 computational paralinguistics challenge: Atypical & self-assessed affect, crying & heart beats, 2018.

C. Soladié, N. Stoiber, and R. Séguier, Invariant representation of facial expressions for blended expression recognition on unknown subjects, CVIU, vol. 117, no. 11, 2013.

L. Song, Z. Lu, R. He, Z. Sun, and T. Tan, Geometry guided adversarial facial expression synthesis, 2018.

J. M. Susskind, G. E. Hinton, J. R. Movellan, and A. K. Anderson, Generating facial expressions with deep belief nets, Transactions on Affective Computing, 2008.

S. Tulyakov, M. Liu, X. Yang, and J. Kautz, MoCoGAN: Decomposing motion and content for video generation, CVPR, 2018.

M. F. Valstar, E. Sánchez-Lozano, J. F. Cohn, L. A. Jeni, J. M. Girard et al., FERA 2017: Addressing head pose in the third facial expression recognition and analysis challenge, Automatic Face and Gesture Recognition, pp. 839-847, 2017.

M. Van Vugt and A. E. Grabo, The many faces of leadership: An evolutionary-psychology approach, Current Directions in Psychological Science, vol. 24, no. 6, pp. 484-489, 2015.

V. Vielzeuf, C. Kervadec, S. Pateux, A. Lechervy, and F. Jurie, An Occam's razor view on learning audiovisual emotion recognition with small training sets, ICMI, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01854019

V. Vielzeuf, S. Pateux, and F. Jurie, Temporal multimodal fusion for video emotion classification in the wild, ICMI, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01590608

R. Weber, V. Barrielle, C. Soladié, and R. Séguier, Unsupervised adaptation of a person-specific manifold of facial expressions, Transactions on Affective Computing, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01831384

F. Yang, J. Wang, E. Shechtman, L. Bourdev, and D. Metaxas, Expression flow for 3D-aware face component transfer, ACM Transactions on Graphics (TOG), vol. 30, no. 4, p. 60, 2011.

G. Zhao, X. Huang, M. Taini, S. Z. Li, and M. Pietikäinen, Facial expression recognition from near-infrared videos, Image and Vision Computing, vol. 29, no. 9, pp. 607-619, 2011.