Uncovering Semantic Bias in Neural Network Models Using a Knowledge Graph - Archive ouverte HAL
Conference paper, 2020

Uncovering Semantic Bias in Neural Network Models Using a Knowledge Graph

Andriy Nikolov
Mathieu D’aquin

Abstract

While neural network models have shown impressive performance in many NLP tasks, their lack of interpretability is often seen as a disadvantage. Individual relevance scores assigned by post-hoc explanation methods are not sufficient to reveal deeper systematic preferences and potential biases of the model that apply consistently across examples. In this paper, we apply rule mining using knowledge graphs in combination with neural network explanation methods to uncover such systematic preferences of trained neural models and capture them in the form of conjunctive rules. We test our approach in the context of text classification tasks and show that such rules are able to explain a substantial part of the model behaviour, as well as indicate potential causes of misclassifications when the model is applied outside of its initial training context.

CCS CONCEPTS
• Computing methodologies → Neural networks; Rule learning; Natural language processing;
• Information systems → Graph-based database models.
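The core idea of the abstract — combining per-token relevance scores from a post-hoc explainer with knowledge-graph type annotations, then mining conjunctive rules that hold consistently across examples — can be illustrated with a minimal sketch. This is not the authors' implementation; all data, type names, and thresholds below are hypothetical, and the mining itself is a simple exhaustive search over small type conjunctions.

```python
# Minimal illustrative sketch (hypothetical data, not the paper's code):
# tokens are annotated with KG types and a post-hoc relevance score;
# we mine conjunctions of types whose tokens are consistently relevant.
from itertools import combinations

# Each example maps a token to (set of KG types, relevance score).
examples = [
    {"paris": ({"City", "Capital"}, 0.9), "stock": ({"FinanceTerm"}, 0.1)},
    {"berlin": ({"City", "Capital"}, 0.8), "rain": ({"WeatherTerm"}, 0.2)},
    {"tokyo": ({"City"}, 0.7), "game": ({"SportsTerm"}, 0.3)},
]

THRESHOLD = 0.5  # relevance above this counts the token as "important"

def mine_rules(examples, min_support=2, max_len=2):
    """Return conjunctions of KG types that always coincide with
    high relevance and occur at least min_support times."""
    stats = {}  # conjunction -> (high-relevance count, total count)
    for ex in examples:
        for types, score in ex.values():
            for k in range(1, max_len + 1):
                for conj in combinations(sorted(types), k):
                    pos, tot = stats.get(conj, (0, 0))
                    stats[conj] = (pos + (score > THRESHOLD), tot + 1)
    # Keep only conjunctions that are systematic (enough support)
    # and perfectly consistent (every occurrence was highly relevant).
    return [c for c, (pos, tot) in stats.items()
            if tot >= min_support and pos == tot]

print(mine_rules(examples))
# → [('Capital',), ('City',), ('Capital', 'City')]
```

In this toy run, tokens typed as `City` or `Capital` always receive high relevance, so the miner surfaces those types (and their conjunction) as a systematic preference of the model, which is the kind of pattern the paper captures as a conjunctive rule.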
Main file
CIKM__Explainable_AI_Insight.pdf (819.71 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03659110 , version 1 (04-05-2022)

Identifiers

Cite

Andriy Nikolov, Mathieu D’aquin. Uncovering Semantic Bias in Neural Network Models Using a Knowledge Graph. CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Oct 2020, Online/Galway, Ireland. pp.1175-1184, ⟨10.1145/3340531.3412009⟩. ⟨hal-03659110⟩
14 views
88 downloads
