Learning A Priori Constrained Weighted Majority Votes

Abstract: Weighted majority votes allow one to combine the outputs of several classifiers, or voters. MinCq is a recent algorithm that optimizes the weight of each voter by minimizing a theoretical bound on the risk of the vote, with elegant PAC-Bayesian generalization guarantees. However, while it has demonstrated good performance when combining weak classifiers, MinCq cannot exploit the useful a priori knowledge one may have when using a mixture of weak and strong voters. In this paper, we propose P-MinCq, an extension of MinCq that incorporates such knowledge in the form of a constraint over the distribution of the weights, along with general proofs of convergence that hold in the sample compression setting for data-dependent voters. The approach is applied to a vote of k-NN classifiers with a specific modeling of the voters' performance. P-MinCq significantly outperforms the classic k-NN classifier, a symmetric NN classifier, and MinCq using the same voters. We show that it is also competitive with LMNN, a popular metric learning algorithm, and that combining both approaches further reduces the error.
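As a concrete illustration of the weighted majority vote over k-NN voters mentioned in the abstract, the following is a minimal Python sketch (not the authors' code). It combines k-NN classifiers for several values of k with fixed, uniform placeholder weights; in P-MinCq these weights would instead be learned by minimizing the risk bound under the a priori constraint. The dataset, the set of k values and the uniform weights `q` are assumptions made purely for illustration.

```python
# Minimal sketch of a Q-weighted majority vote over k-NN voters.
# The weights q are placeholders; P-MinCq would optimize them under
# an a priori constraint on their distribution (not shown here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)
y = 2 * y - 1                      # labels in {-1, +1}
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ks = [1, 3, 5, 7, 9]               # one voter per neighborhood size (illustrative choice)
voters = [KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr) for k in ks]

# Placeholder weights: uniform distribution over the voters.
q = np.ones(len(voters)) / len(voters)

# Q-weighted majority vote: sign of the weighted sum of voter outputs.
H = np.array([v.predict(X_te) for v in voters])   # shape (n_voters, n_test)
vote = np.sign(q @ H)
print("majority-vote test error:", np.mean(vote != y_te))
```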
Document type: Journal article

Cited literature: 24 references

https://hal.archives-ouvertes.fr/hal-01009578
Contributor: Emilie Morvant
Submitted on: Wednesday, June 18, 2014 - 11:01:23 AM
Last modification on: Tuesday, September 10, 2019 - 11:32:07 AM
Long-term archiving on: Thursday, September 18, 2014 - 10:46:50 AM

File

pmincq.pdf
Publisher files allowed on an open archive

Citation

Aurélien Bellet, Amaury Habrard, Emilie Morvant, Marc Sebban. Learning A Priori Constrained Weighted Majority Votes. Machine Learning, Springer Verlag, 2014, 97 (1-2), pp. 129-154. ⟨10.1007/s10994-014-5462-z⟩. ⟨hal-01009578⟩
