A Protocol for Cross-Validating Large Crowdsourced Data: The Case of the LIRIS-ACCEDE Affective Video Dataset

Abstract: Recently, we released a large affective video dataset, LIRIS-ACCEDE, which was annotated through crowdsourcing along both the induced valence and arousal axes using pairwise comparisons. In this paper, we design an annotation protocol that enables the scoring of induced affective feelings, in order to cross-validate the annotations of the LIRIS-ACCEDE dataset and identify any potential bias. In a controlled setup, we collected ratings from 28 users on a subset of video clips carefully selected from the dataset by computing inter-observer reliabilities on the crowdsourced data. In contrast to the crowdsourced rankings gathered in unconstrained environments, users were asked to rate each video using the Self-Assessment Manikin tool. The significant correlation between crowdsourced rankings and controlled ratings validates the reliability of the dataset for future use in affective video analysis and paves the way for the automatic generation of ratings over the whole dataset.
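The core validation step described above — checking agreement between crowdsourced rank annotations and ratings collected in a controlled setup — amounts to a rank correlation. The sketch below illustrates that idea with Spearman's rho computed from first principles; the clip data, variable names, and values are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch (not the authors' code): cross-validate crowdsourced
# valence ranks against controlled SAM ratings via Spearman's rho.

def rank(values):
    """Return 1-based ranks of `values` (assumes no ties, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman_rho(xs, ys):
    """Spearman correlation for tie-free data: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical crowdsourced valence ranks for a small subset of clips
crowd_ranks = [1, 2, 3, 4, 5, 6, 7, 8]
# Hypothetical mean SAM valence ratings (1-9 scale) for the same clips
sam_ratings = [2.1, 2.8, 3.9, 3.5, 5.2, 6.1, 6.8, 7.9]

rho = spearman_rho(crowd_ranks, sam_ratings)
print(f"Spearman rho = {rho:.3f}")
```

A rho close to 1 indicates that the controlled ratings preserve the crowdsourced ordering, which is the kind of agreement the protocol is designed to test.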
Document type: Conference paper
CrowdMM '14 Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia, Nov 2014, Orlando, United States. pp.3-8, 2014, <10.1145/2660114.2660115>

https://hal.archives-ouvertes.fr/hal-01313188
Contributor: Équipe Gestionnaire Des Publications Si Liris
Submitted on: Monday, May 9, 2016 - 16:11:56
Last modified on: Tuesday, May 10, 2016 - 01:05:55

Citation

Yoann Baveye, Christel Chamaret, Emmanuel Dellandréa, Liming Chen. A Protocol for Cross-Validating Large Crowdsourced Data: The Case of the LIRIS-ACCEDE Affective Video Dataset. CrowdMM '14 Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia, Nov 2014, Orlando, United States. pp.3-8, 2014, <10.1145/2660114.2660115>. <hal-01313188>
