
Variance-based importance measures for machine learning model interpretability

Abstract: Machine learning algorithms are enjoying unprecedented uptake in industry, in particular in support of decision-making for critical systems. However, their lack of "interpretability" remains a challenge to overcome before these tools can be made fully intelligible and auditable. This paper reviews and synthesizes a panel of interpretability metrics (called "importance measures") that quantify the impact of each predictor on the statistical model's output variance. It is shown that the choice of a relevant metric must be guided by the constraints imposed by the data and the model under consideration (linear vs. nonlinear phenomenon of interest, input dimension, input dependence), as well as by the type of study the user wants to perform (detecting influential variables, ranking them, etc.). Finally, these metrics are estimated and analyzed on a public dataset to illustrate some of their theoretical and empirical properties.
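To make the notion of a variance-based importance measure concrete, the sketch below estimates first-order Sobol indices with a standard pick-freeze Monte Carlo estimator. This is a generic illustration of the family of metrics the abstract describes, not code from the paper; the toy model, sample size, and estimator choice are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy additive model Y = X1 + 2*X2 (an assumption for illustration,
    # not from the paper). Analytic first-order Sobol indices for
    # independent U(0,1) inputs: S1 = 1/5, S2 = 4/5.
    return x[:, 0] + 2.0 * x[:, 1]

def first_order_sobol(model, d, n, rng):
    """Pick-freeze estimator of first-order Sobol indices,
    assuming independent U(0,1) inputs."""
    a = rng.random((n, d))
    b = rng.random((n, d))
    ya = model(a)
    var_y = ya.var()
    s = np.empty(d)
    for i in range(d):
        c = b.copy()
        c[:, i] = a[:, i]  # freeze input i, resample all the others
        yc = model(c)
        # S_i = Cov(Y, Y^(i)) / Var(Y)
        cov = np.mean(ya * yc) - np.mean(ya) * np.mean(yc)
        s[i] = cov / var_y
    return s

s = first_order_sobol(model, d=2, n=100_000, rng=rng)
```

For this additive model with independent inputs the estimated indices sum to roughly one; interaction effects or input dependence, as discussed in the paper, would break that property.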
Document type :
Preprints, Working Papers, ...
Contributor: Bertrand Iooss
Submitted on: Monday, August 1, 2022 - 11:40:15 AM
Last modification on: Wednesday, August 3, 2022 - 4:08:18 AM

HAL Id: hal-03741384, version 1


Bertrand Iooss, Vincent Chabridon, Vincent Thouvenot. Variance-based importance measures for machine learning model interpretability. 2022. ⟨hal-03741384⟩

