Learning Interpretable Models using Soft Integrity Constraints - Archive ouverte HAL
Preprint / Working Paper, 2019

Learning Interpretable Models using Soft Integrity Constraints

Abstract

Integer models are of particular interest for applications where predictive models are required not only to be accurate but also interpretable to human experts. We introduce a novel penalty term called Facets whose primary goal is to favour integer weights. Our theoretical results illustrate the behaviour of the proposed penalty term: for small enough weights, the Facets penalty matches the L1 norm, and as the weights grow, it approaches the L2 regularizer. We provide the proximal operator associated with the proposed penalty term, so that the regularized empirical risk minimizer can be computed efficiently. We also introduce the Strongly Convex Facets and discuss its theoretical properties. Our numerical results show that, while achieving state-of-the-art accuracy, optimisation of a loss function penalized by the proposed Facets penalty term leads to a model with a significant number of integer weights.
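The paper's exact Facets penalty and its proximal operator are defined in the full text, not in this abstract. As a minimal sketch of the general idea — a proximal step that pulls weights toward integers, acting like L1 shrinkage on the fractional part — one could write (this is an illustrative stand-in, not the authors' actual operator):

```python
import numpy as np

def prox_toward_integers(v, lam):
    """Soft-thresholding toward the nearest integer (illustrative only).

    Each weight is pulled toward its nearest integer by at most `lam`,
    snapping exactly onto it when the fractional part is within `lam`.
    This mimics how an integer-favouring penalty produces exactly-integer
    weights under proximal gradient descent; the paper's Facets operator
    may differ.
    """
    v = np.asarray(v, dtype=float)
    nearest = np.round(v)
    frac = v - nearest  # fractional part, in [-0.5, 0.5]
    # L1-style shrinkage applied to the fractional part only
    shrunk = np.sign(frac) * np.maximum(np.abs(frac) - lam, 0.0)
    return nearest + shrunk

# Example: weights close to an integer snap onto it; others move closer.
print(prox_toward_integers(np.array([2.05, -1.4, 0.3]), lam=0.1))
```

In a proximal gradient loop, this step would follow each gradient step on the data-fitting loss, so that weights whose fractional part the loss does not strongly defend are driven to exact integers.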
Main file

main_hal.pdf (1.21 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02360875, version 1 (13-11-2019)

Identifiers

  • HAL Id: hal-02360875, version 1

Cite

Khaled Belahcene, Nataliya Sokolovska, Yann Chevaleyre, Jean-Daniel Zucker. Learning Interpretable Models using Soft Integrity Constraints. 2019. ⟨hal-02360875⟩