A Protocol for Cross-Validating Large Crowdsourced Data: The Case of the LIRIS-ACCEDE Affective Video Dataset

Abstract: Recently, we released a large affective video dataset, LIRIS-ACCEDE, which was annotated through crowdsourcing along both the induced valence and arousal axes using pairwise comparisons. In this paper, we design an annotation protocol that enables the scoring of induced affective feelings, in order to cross-validate the annotations of the LIRIS-ACCEDE dataset and identify any potential bias. We collected, in a controlled setup, ratings from 28 users on a subset of video clips carefully selected from the dataset by computing inter-observer reliabilities on the crowdsourced data. In contrast to the crowdsourced rankings gathered in unconstrained environments, users were asked to rate each video using the Self-Assessment Manikin tool. The significant correlation between crowdsourced rankings and controlled ratings validates the reliability of the dataset for future use in affective video analysis and paves the way for the automatic generation of ratings over the whole dataset.
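The cross-validation step described above reduces to a rank-correlation check: the crowdsourced pairwise comparisons yield a ranking of clips, and the controlled SAM ratings yield scores that can also be ranked, so agreement can be measured with a Spearman correlation. The following is a minimal, self-contained sketch of that check; the clip ranks and SAM ratings shown are purely hypothetical, not values from the dataset.

```python
# Hedged sketch: comparing a crowdsourced ranking against controlled SAM
# ratings via Spearman rank correlation. All data here is illustrative.

def ranks(values):
    """Assign 1-based average ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a block of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: crowdsourced valence ranks for six clips versus
# the mean SAM valence rating each clip received in the controlled setup.
crowd_ranks = [1, 2, 3, 4, 5, 6]
sam_ratings = [1.8, 2.9, 2.5, 5.1, 6.0, 7.2]
rho = spearman(crowd_ranks, sam_ratings)  # high rho -> rankings agree
```

A significance test on rho (as the paper's validation implies) would then indicate whether the crowdsourced rankings and controlled ratings agree beyond chance.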
Document type: Conference papers

https://hal.archives-ouvertes.fr/hal-01313188
Contributor: Équipe Gestionnaire Des Publications Si Liris
Submitted on: Monday, May 9, 2016 - 4:11:56 PM
Last modification on: Wednesday, November 20, 2019 - 2:31:07 AM

Citation

Yoann Baveye, Christel Chamaret, Emmanuel Dellandréa, Liming Chen. A Protocol for Cross-Validating Large Crowdsourced Data: The Case of the LIRIS-ACCEDE Affective Video Dataset. CrowdMM '14 Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia, Nov 2014, Orlando, United States. pp.3-8, ⟨10.1145/2660114.2660115⟩. ⟨hal-01313188⟩
