Learning Interpretable Models using Soft Integrity Constraints - HAL open archive
Conference paper, 2020

Learning Interpretable Models using Soft Integrity Constraints

Abstract

Integer models are of particular interest for applications where predictive models are expected to be not only accurate but also interpretable to human experts. We introduce a novel penalty term called Facets whose primary goal is to favour integer weights. Our theoretical results illustrate the behaviour of the proposed penalty term: for small enough weights, Facets matches the L1 norm, and as the weights grow, it approaches the L2 regulariser. We provide the proximal operator associated with the proposed penalty term, so that the regularised empirical risk minimiser can be computed efficiently. We also introduce the Strongly Convex Facets and discuss its theoretical properties. Our numerical results show that, while achieving state-of-the-art accuracy, optimisation of a loss function penalised by the proposed Facets penalty term leads to a model with a significant number of integer weights.
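The paper's actual Facets penalty is not given on this page. As a purely illustrative toy sketch of how a proximal operator can favour integer weights, the snippet below uses a hypothetical distance-to-nearest-integer penalty (not the paper's Facets term): each weight is soft-thresholded toward its nearest integer, so small weights are shrunk toward zero exactly as under the L1 norm, while weights near other integers snap to those integers.

```python
import numpy as np

def prox_nearest_integer(w, t):
    """Proximal step for a toy penalty t * dist(w, Z).

    Hypothetical illustration only: soft-thresholds each weight toward
    its nearest integer. For weights in (-0.5, 0.5) this coincides with
    the L1 soft-thresholding operator.
    """
    k = np.round(w)                # nearest integer for each weight
    r = w - k                      # residual, lies in [-0.5, 0.5]
    # shrink the residual by t, clamping at the integer itself
    return k + np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

w = np.array([0.08, 0.93, 2.47, -1.12])
print(prox_nearest_integer(w, 0.1))   # weights near integers snap to them
```

In a proximal gradient loop, this step would replace the usual soft-thresholding step of L1-regularised learning, pushing many weights to exact integer values while leaving the rest only slightly shrunk.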
Main file: Learning Interpretable Models using Soft Integrity Constraints.pdf (547.04 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02944833, version 1 (21-09-2020)

Identifiers

  • HAL Id: hal-02944833, version 1

Cite

Khaled Belahcene, Nataliya Sokolovska, Yann Chevaleyre, Jean-Daniel Zucker. Learning Interpretable Models using Soft Integrity Constraints. 12th Asian Conference on Machine Learning (ACML 2020), Nov 2020, Bangkok, Thailand. pp.529-544. ⟨hal-02944833⟩
82 views
31 downloads
