Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning - Archive ouverte HAL
Journal article. Computers, 2021

Alfonso Ortega
  • Role: Author
  • PersonId: 1140641
Julian Fierrez
  • Role: Author
  • PersonId: 1140642
Aythami Morales
Zilong Wang
  • Role: Author
  • PersonId: 1124209
Marina de La Cruz
  • Role: Author
César Luis Alonso
  • Role: Author

Abstract

Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods become crucial. Inductive logic programming (ILP) is a subfield of symbolic AI that aims to automatically learn declarative theories about the processing of data. Learning from interpretation transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applied to other domains. To check its ability to cope with other domains regardless of the machine learning paradigm used, we also performed a preliminary test of the expressiveness of LFIT, feeding it a real dataset of adult incomes taken from the US census, in which we treat income level as a function of the remaining attributes to verify whether LFIT can provide a logical theory that supports and explains to what extent higher incomes are biased by gender and ethnicity.
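To make the core LFIT idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): given observed transitions between propositional states, it enumerates, for each variable, the minimal conjunctive bodies over the current state that are consistent with that variable being true in the next state. The function name `lfit_naive` and the transition encoding (sets of true variables) are illustrative assumptions.

```python
from itertools import combinations

def lfit_naive(transitions, variables):
    """Naive LFIT-style rule learning.

    transitions: list of (state, next_state) pairs, each a set of the
    variables that are true in that state. Returns, per head variable,
    the minimal bodies (tuples of (var, truth_value) literals) that never
    fire on a transition where the head is false next, and fire on at
    least one transition where it is true next.
    """
    literals = [(v, b) for v in variables for b in (True, False)]

    def holds(body, state):
        # A body holds in a state when every literal matches the state.
        return all((v in state) == b for v, b in body)

    rules = {}
    for head in variables:
        pos = [s for s, nxt in transitions if head in nxt]      # head true next
        neg = [s for s, nxt in transitions if head not in nxt]  # head false next
        found = []
        for size in range(len(variables) + 1):
            for body in combinations(literals, size):
                if any(holds(body, s) for s in neg):
                    continue  # unsound: fires on a negative transition
                if not any(holds(body, s) for s in pos):
                    continue  # useless: never fires on a positive one
                if any(set(f) <= set(body) for f in found):
                    continue  # not minimal: subsumed by a shorter body
                found.append(body)
        rules[head] = found
    return rules

# Transitions generated by the dynamics "p(t+1) :- q(t)":
transitions = [
    (set(), set()),
    ({"q"}, {"p"}),
    ({"p"}, set()),
    ({"p", "q"}, {"p"}),
]
rules = lfit_naive(transitions, ["p", "q"])
# rules["p"] == [(("q", True),)], i.e. the rule p(t+1) :- q(t) is recovered
```

Real LFIT algorithms are far more refined, but the sketch shows the declarative flavor of the output: each learned body is a human-readable logical condition, which is what makes the approach attractive for explaining biases.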
Main file
computers-10-00154.pdf (871.64 KB)
Origin: Publication funded by an institution
License: CC BY - Attribution

Dates and versions

hal-03542777, version 1 (02-02-2024)

License

Attribution (CC BY)

Identifiers

Cite

Alfonso Ortega, Julian Fierrez, Aythami Morales, Zilong Wang, Marina de La Cruz, et al. Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning. Computers, 2021, 10 (11), pp.154. ⟨10.3390/computers10110154⟩. ⟨hal-03542777⟩
33 Views
13 Downloads
