Preprints, Working Papers, ...

L'IA du Quotidien peut elle être Éthique ? : Loyauté des Algorithmes d'Apprentissage Automatique

Abstract: Combining big data and machine learning algorithms, the power of automatic decision tools inspires as much hope as fear. Recently enacted European legislation (GDPR) and French laws attempt to regulate the use of these tools. Leaving aside the well-identified problems of data confidentiality and impediments to competition, we focus on the risks of discrimination, the problems of transparency, and the quality of algorithmic decisions. The detailed perspective of the legal texts, confronted with the complexity and opacity of learning algorithms, reveals the need for major technological disruptions to detect or reduce the risk of discrimination, and to address the right to an explanation of automatic decisions. Since the trust of developers and, above all, of users (citizens, litigants, customers) is essential, algorithms exploiting personal data must be deployed within a strict ethical framework. In conclusion, to meet this need, we list several forms of control to be developed: institutional control, an ethical charter, and external audits tied to the issuance of a label.

Cited literature: 24 references
Contributor: Philippe Besse
Submitted on: Tuesday, December 11, 2018 - 11:44:36 AM
Last modification on: Friday, October 23, 2020 - 4:40:46 PM
Long-term archiving on: Tuesday, March 12, 2019 - 1:46:05 PM


Files produced by the author(s)


  • HAL Id: hal-01886699, version 2


Philippe Besse, Céline Castets-Renard, Aurélien Garivier, Jean-Michel Loubes. L'IA du Quotidien peut elle être Éthique ? : Loyauté des Algorithmes d'Apprentissage Automatique. 2018. ⟨hal-01886699v2⟩


