The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations - Archive ouverte HAL
Conference paper, Year: 2019


Abstract

Post-hoc interpretability approaches have proven to be powerful tools for generating explanations of the predictions made by a trained black-box model. However, they create the risk of producing explanations that reflect artifacts learned by the model rather than actual knowledge from the data. This paper focuses on the case of counterfactual explanations and asks whether the generated instances can be justified, i.e. continuously connected to some ground-truth data. We evaluate the risk of generating unjustified counterfactual examples by investigating the local neighborhoods of the instances whose predictions are to be explained, and show that this risk is quite high for several datasets. Furthermore, we show that most state-of-the-art approaches do not differentiate justified from unjustified counterfactual examples, leading to less useful explanations.
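The notion of a "justified" counterfactual, one continuously connected to ground-truth data, can be illustrated with a simple epsilon-graph connectivity check: the counterfactual should reach at least one training instance of its predicted class through a chain of nearby same-class instances. The sketch below is an approximation of that idea under assumptions of our own (the `eps` threshold, the `predict` interface, Euclidean distance), not the exact procedure used in the paper:

```python
import numpy as np
from collections import deque

def is_justified(x_cf, X_train, predict, eps=0.5):
    """Heuristic justification check: return True if the counterfactual x_cf
    is connected to at least one training instance of its predicted class
    through a chain of points at pairwise distance <= eps.
    Illustrative approximation only."""
    cls = predict(x_cf.reshape(1, -1))[0]
    same = X_train[predict(X_train) == cls]   # training instances of the same class
    if len(same) == 0:
        return False
    pts = np.vstack([x_cf, same])             # node 0 is the counterfactual
    # epsilon-neighborhood graph: edge whenever Euclidean distance <= eps
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = dist <= eps
    # breadth-first search from the counterfactual node
    seen = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in np.flatnonzero(adj[i]):
            if j not in seen:
                seen.add(j)
                queue.append(int(j))
    # justified if any ground-truth instance (nodes 1..n) is reachable
    return len(seen) > 1
```

With a toy classifier such as `predict = lambda X: (X[:, 0] > 0).astype(int)`, a counterfactual placed near a cluster of same-class training points is flagged as justified, while one isolated in a region the training data never covered is not. Real data would call for a distance metric and `eps` adapted to the feature scales.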
Main file: 2019-IJCAI-Interpretability-LaugelEtAl.pdf (1.25 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02275308, version 1 (30-08-2019)


Cite

Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki. The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations. Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), Aug 2019, Macau SAR, China. pp.2801-2807, ⟨10.24963/ijcai.2019/388⟩. ⟨hal-02275308⟩
173 Views
132 Downloads

