Journal articles

Redonner du sens à l’accord interannotateurs : vers une interprétation des mesures d’accord en termes de reproductibilité de l’annotation

Abstract: Inter-coder agreement measures are used to assess the reliability of annotated corpora in NLP. However, the interpretation of these agreement measures in terms of reliability level currently relies on purely subjective opinions that are not supported by any experimental validation. In this paper, we present several experiments on real and simulated data that aim to provide a clear interpretation of agreement measures in terms of the level of reproducibility of the reference annotation by any other set of coders.
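For readers unfamiliar with the family of measures the abstract refers to, the sketch below shows how one standard coefficient, Cohen's kappa, corrects raw inter-coder agreement for chance. It is only an illustrative assumption on my part: the function, label set, and sample annotations are not taken from the article.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two coders annotating the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: proportion of items given the same label by both coders.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each coder's own label distribution.
    dist_a, dist_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(dist_a[c] * dist_b[c] for c in dist_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders labelling ten items with entity categories.
coder_1 = ["PER", "LOC", "PER", "ORG", "PER", "LOC", "ORG", "PER", "LOC", "PER"]
coder_2 = ["PER", "LOC", "ORG", "ORG", "PER", "LOC", "ORG", "PER", "PER", "PER"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # 0.68 on this toy sample
```

The open question the paper addresses is how to interpret such a value: what level of kappa guarantees that another set of coders would reproduce the reference annotation.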

Cited literature: 26 references

https://hal.archives-ouvertes.fr/hal-02375240
Contributor: Jean-Yves Antoine
Submitted on: Monday, November 25, 2019 - 2:46:44 PM
Last modification on: Tuesday, October 12, 2021 - 5:20:41 PM
Long-term archiving on: Wednesday, February 26, 2020 - 1:00:45 PM

File

TAL_60_2_accords_inter_annotat...
Explicit agreement for this submission

Identifiers

  • HAL Id: hal-02375240, version 1

Citation

Dany Bregeon, Jean-Yves Antoine, Jeanne Villaneau, Anaïs Lefeuvre-Halftermeyer. Redonner du sens à l’accord interannotateurs : vers une interprétation des mesures d’accord en termes de reproductibilité de l’annotation. Revue TAL, ATALA (Association pour le Traitement Automatique des Langues), 2019, 60 (2), pp.23. ⟨hal-02375240⟩


Metrics

Record views: 277
File downloads: 308