Conference papers

Binary Classifier Evaluation Without Ground Truth

Abstract: In this paper we study statistically sound ways of comparing classifiers in the absence of fully reliable reference data. Building on previously published partial frameworks, we explore a more comprehensive approach to comparing and ranking classifiers that is robust to incomplete, erroneous, or missing reference evaluation data. We show that a generalized McNemar's test gives reliable confidence measures for the ranking of two classifiers, under the assumption that a better-than-random reference classifier exists. We extend its use to cases where its traditional formulation is notoriously unstable, and we provide a computational context that allows it to be applied to large amounts of data. Our classifier evaluation model is generic and applies to any set of binary classifiers. We have more specifically tested and validated it on synthetic and real data from document image binarization.
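The comparison tool named in the abstract is McNemar's test applied to the discordant decisions of two classifiers relative to a reference. The following Python sketch shows the classical form of the test that the paper builds on; it is not the authors' generalized formulation, and the function name, the n < 25 switchover threshold, and the use of an exact binomial variant for small discordant counts are illustrative assumptions.

# Minimal sketch of McNemar's test for comparing two binary classifiers
# against a (possibly imperfect) reference classifier. This is NOT the
# paper's generalized formulation; it illustrates the classical test it
# extends. Names and the n < 25 threshold are illustrative choices.

from scipy.stats import binom, chi2

def mcnemar_test(labels_a, labels_b, reference):
    """P-value for the null hypothesis that classifiers A and B
    disagree with the reference equally often.

    labels_a, labels_b : predicted binary labels from classifiers A and B
    reference          : labels from a better-than-random reference classifier
    """
    # b: samples where A agrees with the reference but B does not
    # c: samples where B agrees with the reference but A does not
    b = sum(a == r != p for a, p, r in zip(labels_a, labels_b, reference))
    c = sum(p == r != a for a, p, r in zip(labels_a, labels_b, reference))

    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: A and B are indistinguishable
    if n < 25:
        # Exact binomial variant, more stable for small discordant counts --
        # the regime where the chi-square approximation is notoriously unstable.
        return min(1.0, 2.0 * binom.cdf(min(b, c), n, 0.5))
    # Chi-square approximation with continuity correction.
    statistic = (abs(b - c) - 1) ** 2 / n
    return chi2.sf(statistic, df=1)

Note that only the discordant counts b and c enter the statistic: samples on which both classifiers agree carry no information about their relative quality, which is what makes the test usable even when the reference itself is imperfect.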

Cited literature: 21 references

https://hal.archives-ouvertes.fr/hal-01680358
Contributor: Bart Lamiroy
Submitted on: Wednesday, January 10, 2018 - 3:59:26 PM
Last modification on: Tuesday, April 24, 2018 - 12:35:40 PM
Document(s) archived on: Wednesday, May 23, 2018 - 5:04:30 PM

File

icapr2017.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01680358, version 1

Citation

Maksym Fedorchuk, Bart Lamiroy. Binary Classifier Evaluation Without Ground Truth. Ninth International Conference on Advances in Pattern Recognition (ICAPR-2017), Dec 2017, Bangalore, India. ⟨hal-01680358⟩
