Confidence sets with expected sizes for Multiclass Classification - HAL Open Archive
Journal article in Journal of Machine Learning Research, 2017

Confidence sets with expected sizes for Multiclass Classification

Abstract

Multiclass classification problems such as image annotation can involve a large number of classes. In this context, confusion between classes can occur, and single-label classification may be misleading. In the present paper we provide a general device that, given an unlabeled dataset and a score function defined as the minimizer of some empirical convex risk, outputs a set of class labels instead of a single one. Interestingly, this procedure does not require the unlabeled dataset to cover all of the classes. Moreover, the method is calibrated to control the expected size of the output set while minimizing the classification risk. We show the statistical optimality of the procedure and establish rates of convergence under the Tsybakov margin condition; these rates turn out to be linear in the number of labels. We apply our methodology to the convex aggregation of confidence sets based on the V-fold cross-validation principle, also known as the superlearning principle. We illustrate the numerical performance of the procedure on real data and demonstrate in particular that, with a moderate expected size relative to the number of labels, the procedure yields a significant improvement in classification risk.
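To make the calibration idea concrete, the following is a minimal sketch, not the authors' exact estimator: class scores are thresholded, and the threshold is tuned on an unlabeled pool so that the average size of the output set matches a prescribed target. The helper names `calibrate_threshold` and `confidence_sets`, the grid-search calibration, and the Dirichlet draws standing in for estimated class scores are all assumptions made for illustration.

```python
import numpy as np

def calibrate_threshold(scores_unlabeled, target_size):
    """Pick a threshold t on class scores so that the average number of classes
    with score >= t over the unlabeled pool is close to target_size.
    (Hypothetical helper; rows of scores_unlabeled are per-class scores.)"""
    candidates = np.unique(scores_unlabeled)          # candidate thresholds
    best_t, best_gap = candidates[0], np.inf
    for t in candidates:
        avg_size = np.mean((scores_unlabeled >= t).sum(axis=1))
        gap = abs(avg_size - target_size)
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t

def confidence_sets(scores, threshold):
    """Return, for each row of scores, the set of class indices above the threshold."""
    return [np.flatnonzero(row >= threshold) for row in scores]

# Toy usage with random scores standing in for estimated class probabilities.
rng = np.random.default_rng(0)
K = 10                                                # number of classes
scores_unlab = rng.dirichlet(np.ones(K), size=500)    # unlabeled calibration pool
scores_test = rng.dirichlet(np.ones(K), size=5)       # new points to classify

t = calibrate_threshold(scores_unlab, target_size=3.0)  # expected size of about 3 labels
for gamma in confidence_sets(scores_test, t):
    print(gamma)
```

The design choice here mirrors the abstract: the calibration step uses only unlabeled data, and the size constraint is enforced on average over that pool rather than per instance.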
Main file
DH_revision_v5.pdf (279.24 KB) Download the file
Origin: Files produced by the author(s)

Dates and versions

hal-01357850 , version 1 (30-08-2016)
hal-01357850 , version 2 (28-11-2017)

Identifiers

Cite

Christophe Denis, Mohamed Hebiri. Confidence sets with expected sizes for Multiclass Classification. Journal of Machine Learning Research, 2017. ⟨hal-01357850v2⟩
140 Views
81 Downloads

