Conference papers

Making ML models fairer through explanations: the case of LimeOut

Guilherme Alves 1 Vaishnavi Bhargava 1 Miguel Couceiro 1 Amedeo Napoli 1 
1 ORPAILLEUR - Knowledge representation, reasoning
Inria Nancy - Grand Est, LORIA - NLPKD - Department of Natural Language Processing & Knowledge Discovery
Abstract : Algorithmic decisions are now being made on a daily basis, based on Machine Learning (ML) processes that may be complex and biased. This raises several concerns given the critical impact that biased decisions may have on individuals and on society as a whole. Not only do unfair outcomes affect human rights, they also undermine public trust in ML and AI. In this paper we address fairness issues of ML models based on decision outcomes, and we show how the simple idea of "feature dropout" followed by an "ensemble approach" can improve model fairness. To illustrate, we revisit the case of "LimeOut", which was proposed to tackle "process fairness", i.e., a model's reliance on sensitive or discriminatory features. Given a classifier, a dataset and a set of sensitive features, LimeOut first assesses whether the classifier is fair by checking its reliance on sensitive features using "LIME explanations". If deemed unfair, LimeOut then applies feature dropout to obtain a pool of classifiers. These are combined into an ensemble classifier that was empirically shown to be less dependent on sensitive features without compromising accuracy. We present experiments on multiple datasets and several state-of-the-art classifiers, which show that LimeOut's classifiers improve (or at least maintain) not only process fairness but also other fairness metrics such as individual and group fairness, equal opportunity, and demographic parity.
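The feature-dropout-plus-ensemble idea from the abstract can be sketched as follows. This is a hypothetical illustration (not the authors' code): for each sensitive feature we train a copy of the model with that feature zeroed out, then average the pool's predicted probabilities. The data, the `drop` helper, and the `ensemble_proba` function are all illustrative names; scikit-learn and NumPy are assumed.

```python
# Hedged sketch of "feature dropout" followed by an "ensemble approach".
# Toy data and helper names are illustrative, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # columns 0 and 3 play the role of sensitive features
y = (X[:, 1] + X[:, 2] > 0).astype(int)  # labels depend only on non-sensitive columns

def drop(X, idx):
    """Zero out feature `idx` so the model cannot rely on it."""
    Xd = X.copy()
    Xd[:, idx] = 0.0
    return Xd

# One pool member per sensitive feature, each trained without that feature.
sensitive = [0, 3]
pool = [(idx, LogisticRegression().fit(drop(X, idx), y)) for idx in sensitive]

def ensemble_proba(X):
    """Average the pool members' predicted probabilities (the ensemble classifier)."""
    return np.mean([clf.predict_proba(drop(X, idx)) for idx, clf in pool], axis=0)

preds = ensemble_proba(X).argmax(axis=1)
print("ensemble accuracy:", (preds == y).mean())
```

In the paper the pool is built only after LIME explanations show the original classifier relies on sensitive features, and it also includes a model with all sensitive features dropped at once; this sketch shows only the dropout-and-average mechanics.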

Cited literature [14 references]
Contributor : Miguel Couceiro
Submitted on : Tuesday, October 27, 2020 - 9:30:51 AM
Last modification on : Thursday, August 4, 2022 - 5:18:48 PM




  • HAL Id : hal-02864059, version 5


Guilherme Alves, Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli. Making ML models fairer through explanations: the case of LimeOut. 9th International Conference on Analysis of Images, Social Networks, and Texts 2020 (AIST 2020), Oct 2020, Moscow, Russia. ⟨hal-02864059v5⟩


