
Why Should I Trust This Item? Explaining the Recommendations of any Model

Abstract : Explainable AI has received a lot of attention over the past decade, with many methods proposed to explain black-box classifiers such as neural networks. Despite the ubiquity of recommender systems in the digital world, only a few researchers have attempted to explain how they work, even though they raise ethical issues, among others. Indeed, recommender systems direct user choices to a large extent, and their impact is important because they give access to only a small part of the range of items (e.g., products and/or services), like the submerged part of an iceberg; consequently, they limit access to other resources. The potentially negative effects of these systems have been pointed out through phenomena such as echo chambers and winner-take-all effects, because the internal logic of these systems tends to enclose the consumer in a "déjà vu" loop. It is therefore crucial to provide explanations of such recommender systems and to identify the user data that led the system to make a specific recommendation. This makes it possible to evaluate recommender systems not only with regard to their efficiency (i.e., their ability to recommend an item that was actually chosen by the user), but also with respect to the diversity, relevance, and timeliness of the active data used to make the recommendation. In this paper, we propose a deep analysis of 7 state-of-the-art models learnt on 6 datasets, based on the identification of the items or sequences of items actively used by the models. The proposed method, which is based on subgroup discovery with different pattern languages (i.e., itemsets and sequences), provides interpretable explanations of the recommendations, useful both for comparing different models and for explaining the reasons behind a recommendation to the user.
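The subgroup-discovery idea mentioned in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical toy example, not the paper's actual datasets or code: it enumerates small itemset patterns over user histories and ranks them by how strongly their presence correlates with the model recommending a given target item, using weighted relative accuracy (WRAcc) as the subgroup quality measure.

```python
from itertools import combinations

# Hypothetical toy data: each user's interaction history (an itemset),
# and whether the model recommended a given target item to that user.
histories = [
    {"a", "b", "c"},
    {"a", "b"},
    {"b", "c"},
    {"a", "c", "d"},
    {"d"},
]
recommended = [True, True, False, True, False]

def wracc(n_covered, n_pos_covered, n, n_pos):
    """Weighted relative accuracy: coverage times the lift in the
    positive rate inside the subgroup over the global positive rate."""
    if n_covered == 0:
        return 0.0
    return (n_covered / n) * (n_pos_covered / n_covered - n_pos / n)

n = len(histories)
n_pos = sum(recommended)
items = sorted(set().union(*histories))

# Enumerate candidate itemset patterns (here up to size 2) and score each.
scores = {}
for size in (1, 2):
    for pattern in combinations(items, size):
        covered = [rec for hist, rec in zip(histories, recommended)
                   if set(pattern) <= hist]
        scores[pattern] = wracc(len(covered), sum(covered), n, n_pos)

# The highest-scoring pattern is the itemset whose presence in a user's
# history best explains the model's recommendation on this toy data.
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The paper's actual method also handles sequential patterns (order of interactions), which would replace the subset test `set(pattern) <= hist` with a subsequence test over ordered histories.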
Document type :
Conference papers

Cited literature [32 references]
Contributor : Corentin Lonjarret
Submitted on : Wednesday, October 14, 2020 - 5:04:16 PM
Last modification on : Tuesday, June 1, 2021 - 2:08:09 PM



Corentin Lonjarret, Céline Robardet, Marc Plantevit, Roch Auburtin, Martin Atzmueller. Why Should I Trust This Item? Explaining the Recommendations of any Model. IEEE International Conference on Data Science and Advanced Analytics (DSAA), Oct 2020, Sydney, Australia. ⟨10.1109/DSAA49011.2020.00067⟩. ⟨hal-02965196⟩