
Learning Interpretable Models using Soft Integrity Constraints

Abstract: Integer models are of particular interest for applications where predictive models are expected not only to be accurate but also interpretable to human experts. We introduce a novel penalty term, called Facets, whose primary goal is to favour integer weights. Our theoretical results characterise the behaviour of the proposed penalty: for small enough weights, Facets matches the L1 norm, and as the weights grow, it approaches the L2 regulariser. We provide the proximal operator associated with the proposed penalty, so that the regularised empirical risk minimiser can be computed efficiently. We also introduce the Strongly Convex Facets and discuss its theoretical properties. Our numerical results show that, while achieving state-of-the-art accuracy, optimisation of a loss function penalised by the proposed Facets term leads to a model with a significant number of integer weights.
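The abstract does not reproduce the Facets formula or its proximal operator, so the sketch below uses a hypothetical soft-integrality penalty, λ·|w − round(w)|, purely as a stand-in to illustrate the general recipe the paper relies on: once a penalty admits a closed-form proximal operator, the regularised empirical risk minimiser can be computed by proximal gradient descent. All names and the penalty itself are illustrative assumptions, not the paper's method.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: shrink x toward 0 by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_integer_l1(v, t):
    """Proximal operator of t * |w - round(w)|: shrink each coordinate
    toward its nearest integer (hypothetical stand-in, NOT the paper's
    Facets prox)."""
    k = np.round(v)
    return k + soft_threshold(v - k, t)

def proximal_gradient(X, y, lam=0.1, iters=500):
    """Proximal gradient descent for 0.5 * ||Xw - y||^2 + lam * penalty.

    The step size 1 / sigma_max(X)^2 is the inverse Lipschitz constant
    of the least-squares gradient, which guarantees descent.
    """
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)          # gradient of the smooth loss
        w = prox_integer_l1(w - step * grad, step * lam)
    return w

# Usage: recover integer weights [1, 2] from noiseless observations.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([1.0, 2.0])
w = proximal_gradient(X, y, lam=0.05)
```

Because the prox has a closed form, each iteration costs no more than a plain gradient step; the shrinkage toward the integer lattice is what drives many coordinates to land exactly on integers, mirroring the behaviour the abstract reports for Facets.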

Cited literature: 21 references
Contributor: Khaled Belahcene
Submitted on: Monday, September 21, 2020 - 5:09:17 PM
Last modification on: Thursday, March 31, 2022 - 2:46:02 PM
Long-term archiving on: Thursday, December 3, 2020 - 3:21:11 PM


Files produced by the author(s)


  • HAL Id: hal-02944833, version 1


Khaled Belahcene, Nataliya Sokolovska, Yann Chevaleyre, Jean-Daniel Zucker. Learning Interpretable Models using Soft Integrity Constraints. 12th Asian Conference on Machine Learning (ACML 2020), Nov 2020, Bangkok, Thailand. pp.529-544. ⟨hal-02944833⟩


