Increasing Argument Annotation Reproducibility by Using Inter-annotator Agreement to Improve Guidelines

Abstract: We present a methodology to improve argument annotation guidelines by exploiting inter-annotator agreement measures. After a first stage of the annotation effort, we detected problematic issues through an analysis of inter-annotator agreement. Ill-defined concepts were addressed by redefining high-level annotation goals; for concepts that are well-delimited but complex, the annotation protocol was extended and detailed. Moreover, as can be expected, we show that the distinctions on which human annotators agree least are also those where automatic analyzers perform worst. Thus, the reproducibility of Argument Mining results can be addressed by improving inter-annotator agreement in the training material. Following this methodology, we are enhancing a corpus annotated with argumentation, available at https://github.com/PLN-FaMAF/ArgumentMiningECHR together with guidelines and analyses of agreement. These analyses can be used to weight the performance figures of automated systems, with lower penalties for distinctions on which human annotators themselves agree less.
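The abstract refers to inter-annotator agreement measures without naming one; a common chance-corrected choice for categorical annotations is Cohen's kappa. The minimal sketch below (hypothetical labels and toy data, not the authors' code or corpus) illustrates how computing kappa per label, one-vs-rest, can single out the distinctions annotators find hardest, which is the kind of diagnostic the methodology uses to flag ill-defined concepts in the guidelines.

```python
# Illustrative sketch only: locating low-agreement distinctions with
# Cohen's kappa. Labels and data are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

# Two annotators labeling the same spans with argument-component tags.
annotator_a = ["premise", "claim", "premise", "claim", "premise", "none"]
annotator_b = ["premise", "claim", "claim",   "claim", "none",    "none"]

# Overall chance-corrected agreement across all labels.
print("overall kappa:", cohen_kappa_score(annotator_a, annotator_b))

# Per-label agreement: binarize to a one-vs-rest view so that labels
# with low kappa stand out as candidates for guideline revision.
for label in sorted(set(annotator_a) | set(annotator_b)):
    a = [x == label for x in annotator_a]
    b = [x == label for x in annotator_b]
    print(f"kappa({label}):", cohen_kappa_score(a, b))
```

The same per-label figures could then serve the weighting step the abstract mentions: a system error on a label where humans barely agree would be penalized less than one on a label with near-perfect human agreement.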
Document type: Conference papers

https://hal.archives-ouvertes.fr/hal-01876506
Contributor: Serena Villata
Submitted on: Tuesday, September 18, 2018 - 2:54:24 PM
Last modification on: Friday, January 4, 2019 - 4:23:34 PM

Identifiers

  • HAL Id: hal-01876506, version 1

Citation

Milagro Teruel, Cristian Cardellino, Fernando Cardellino, Laura Alonso Alemany, Serena Villata. Increasing Argument Annotation Reproducibility by Using Inter-annotator Agreement to Improve Guidelines. LREC 2018 - 11th International Conference on Language Resources and Evaluation, May 2018, Miyazaki, Japan. pp.1-4. ⟨hal-01876506⟩
