LIRIS-ACCEDE: A Video Database for Affective Content Analysis

Abstract: Research in affective computing requires ground-truth data for training and benchmarking computational models for machine-based emotion understanding. In this paper, we propose a large video database, namely LIRIS-ACCEDE, for affective content analysis and related applications, including video indexing, summarization, and browsing. In contrast to existing datasets, which offer very few video resources and limited accessibility due to copyright constraints, LIRIS-ACCEDE consists of 9,800 good-quality video excerpts with large content diversity. All excerpts are shared under Creative Commons licenses and can thus be freely distributed without copyright issues. Affective annotations were collected via crowdsourcing using a pairwise video comparison protocol, ensuring that annotations are fully consistent, as evidenced by high inter-annotator agreement despite the large diversity of raters' cultural backgrounds. In addition, to enable fair comparison and to benchmark the progress of future affective computational models, we further provide four experimental protocols and a baseline for emotion prediction using a large set of both visual and audio features. The dataset (video clips, annotations, features, and protocols) is publicly available at: http://liris-accede.ec-lyon.fr/.
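As a rough illustration of how a pairwise comparison protocol like the one described above can yield a global affective ordering, the sketch below aggregates (winner, loser) judgments into a ranking by win count (a simple Copeland-style score). This is only a hypothetical aggregation for illustration; the clip identifiers are invented, and the paper's actual rank-aggregation method may differ.

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Derive a global ordering from pairwise judgments.

    comparisons: iterable of (winner, loser) pairs, where `winner`
    was judged to induce, e.g., higher arousal than `loser`.
    Returns clip ids sorted from most to fewest pairwise wins.
    """
    wins = defaultdict(int)
    clips = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        clips.update((winner, loser))
    return sorted(clips, key=lambda c: wins[c], reverse=True)

# Toy example with three hypothetical clip ids
pairs = [("clip_A", "clip_B"), ("clip_A", "clip_C"), ("clip_B", "clip_C")]
print(rank_from_pairwise(pairs))  # clip_A wins twice, clip_B once, clip_C never
```

One appeal of pairwise comparison over absolute rating scales is that annotators only make relative judgments, which tend to be more stable across raters with different cultural backgrounds, consistent with the high inter-annotator agreement reported above.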


https://hal.archives-ouvertes.fr/hal-01375518
Contributor: Emmanuel Dellandrea
Submitted on: Wednesday, March 29, 2017 - 16:36:02
Last modified on: Tuesday, April 4, 2017 - 01:10:53

File

Liris-7059.pdf
Files produced by the author(s)

Citation

Yoann Baveye, Emmanuel Dellandréa, Christel Chamaret, Liming Chen. LIRIS-ACCEDE: A Video Database for Affective Content Analysis. IEEE Transactions on Affective Computing, 2015, 6 (1), pp.43-55. <10.1109/TAFFC.2015.2396531>. <hal-01375518>
