Learning Interpretable Models using Soft Integrity Constraints
Abstract
Integer models are of particular interest for applications where predictive models are required not only to be accurate but also interpretable by human experts. We introduce a novel penalty term, called Facets, whose primary goal is to favour integer weights. Our theoretical results characterise the behaviour of the proposed penalty: for small enough weights, Facets matches the L1 norm, and as the weights grow, it approaches the L2 regulariser. We provide the proximal operator associated with the proposed penalty term, so that the regularised empirical risk minimiser can be computed efficiently. We also introduce the Strongly Convex Facets and discuss its theoretical properties. Our numerical results show that, while achieving state-of-the-art accuracy, minimising a loss function penalised by the proposed Facets term leads to a model with a significant number of integer weights.
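The abstract describes computing the regularised empirical risk minimiser via a proximal operator that favours integer weights. The paper's exact Facets proximal operator is not given here, so the sketch below uses a hypothetical stand-in (soft-thresholding each weight toward its nearest integer) inside a standard proximal gradient loop on a toy least-squares problem; every name and parameter value is illustrative, not from the paper.

```python
import numpy as np

def prox_nearest_integer(w, t):
    # Hypothetical stand-in for the Facets proximal operator (the paper
    # provides the exact one): soft-threshold each weight toward its
    # nearest integer, which biases solutions toward integer values.
    k = np.round(w)
    d = w - k
    return k + np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

# Toy least-squares problem whose true weights happen to be integers.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
w_true = np.array([1.0, -2.0, 0.0, 3.0, 1.0])
y = X @ w_true

n = X.shape[0]
step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
lam = 0.1                              # penalty strength (illustrative)

# Proximal gradient descent: gradient step on the loss, then prox step
# on the integrality-favouring penalty.
w = np.zeros(5)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n       # gradient of the mean squared loss
    w = prox_nearest_integer(w - step * grad, step * lam)
```

On this noise-free toy problem the iterates snap exactly onto the integer weight vector, illustrating how a prox-friendly penalty can yield integer weights without solving a combinatorial problem.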