R. K. E. Bellamy et al., AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias, arXiv:1810.01943, 2018.

V. Bhargava, M. Couceiro, and A. Napoli, LimeOut: An ensemble approach to improve process fairness, ECML PKDD Int. Workshop on XKDD, 2020.
URL : https://hal.archives-ouvertes.fr/hal-02979233

R. Binns, On the apparent conflict between individual and group fairness, FAT*'20, pp. 514-524, 2020.

A. Chouldechova, Fair prediction with disparate impact: A study of bias in recidivism prediction instruments, Big Data, vol. 5, no. 2, pp. 153-163, 2017.

C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, Fairness through awareness, Innovations in Theoretical Computer Science (ITCS'12), pp. 214-226, 2012.

B. Dimanov, U. Bhatt, M. Jamnik, and A. Weller, You shouldn't trust me: Learning models which conceal unfairness from multiple explanation methods, ECAI'20, pp. 2473-2480, 2020.

N. Grgić-Hlača, M. B. Zafar, K. P. Gummadi, and A. Weller, Beyond distributive fairness in algorithmic decision making: Feature selection for procedurally fair learning, AAAI'18, pp. 51-60, 2018.

N. Grgić-Hlača, M. B. Zafar, K. P. Gummadi, and A. Weller, The case for process fairness in learning: Feature selection for fair decision making, NIPS Symposium on Machine Learning and the Law, 2016.

M. Hardt, E. Price, and N. Srebro, Equality of opportunity in supervised learning, NIPS'16, 2016.

I. van der Linden, H. Haned, and E. Kanoulas, Global aggregations of local explanations for black box models, 2019.

S. M. Lundberg and S.-I. Lee, A unified approach to interpreting model predictions, NIPS'17, pp. 4765-4774, 2017.

F. Pedregosa et al., Scikit-learn: Machine learning in Python, JMLR, vol. 12, pp. 2825-2830, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00650905

M. T. Ribeiro, S. Singh, and C. Guestrin, Anchors: High-precision model-agnostic explanations, AAAI'18, pp. 1527-1535, 2018.

M. T. Ribeiro, S. Singh, and C. Guestrin, "Why should I trust you?": Explaining the predictions of any classifier, SIGKDD'16, pp. 1135-1144, 2016.

T. Speicher et al., A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices, SIGKDD'18, pp. 2239-2248, 2018.