Can Everyday AI Be Ethical? Loyalty of Machine-Learning Algorithms

Abstract: Combining big data and machine-learning algorithms, the power of automatic decision tools induces as much hope as fear. Recently enacted European legislation (GDPR) and French laws attempt to regulate the use of these tools. Leaving aside the well-identified problems of data confidentiality and impediments to competition, we focus on the risks of discrimination, the problems of transparency, and the quality of algorithmic decisions. The detailed perspective of the legal texts, confronted with the complexity and opacity of learning algorithms, reveals the need for significant technological advances in detecting or reducing the risk of discrimination and in addressing the right to an explanation of an automatic decision. Since the trust of developers and, above all, of users (citizens, litigants, customers) is essential, algorithms exploiting personal data must be deployed within a strict ethical framework. In conclusion, to meet this need, we list several forms of control to be developed: institutional control, an ethical charter, and external audit attached to the issuance of a label.
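The abstract mentions the detection of discrimination risk in automatic decisions. One widely used statistical criterion for such detection is the disparate impact ratio (the US "four-fifths rule" flags a decision rule when the ratio falls below 0.8). The sketch below is only illustrative and is not taken from the paper; the function name and toy data are hypothetical.

```python
# Hedged sketch: disparate impact ratio, a common statistical criterion
# for detecting discrimination risk in automatic decisions.
# A rule is often flagged when the ratio falls below 0.8.

def disparate_impact(decisions, group):
    """Ratio of favourable-decision rates: protected group vs. the rest.

    decisions: list of 0/1 outcomes (1 = favourable decision)
    group:     list of 0/1 flags (1 = member of the protected group)
    """
    fav_protected = [d for d, g in zip(decisions, group) if g == 1]
    fav_others = [d for d, g in zip(decisions, group) if g == 0]
    rate_protected = sum(fav_protected) / len(fav_protected)
    rate_others = sum(fav_others) / len(fav_others)
    return rate_protected / rate_others

# Toy data: the protected group receives favourable decisions at
# half the rate of the rest (0.4 vs. 0.8), giving a ratio of 0.5.
decisions = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
group     = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(disparate_impact(decisions, group))  # 0.5, below the 0.8 threshold
```

Such a metric only captures one statistical notion of group fairness; the paper's broader point is that detecting and reducing discrimination in opaque learning algorithms also requires legal and institutional controls.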

https://hal.archives-ouvertes.fr/hal-01886699
Contributor: Philippe Besse
Submitted on: Tuesday, December 11, 2018 - 11:44:36 AM
Last modification on: Monday, April 29, 2019 - 4:24:07 PM
Long-term archiving on: Tuesday, March 12, 2019 - 1:46:05 PM

File

IAethiqueStatSociete-V4.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01886699, version 2

Citation

Philippe Besse, Céline Castets-Renard, Aurélien Garivier, Jean-Michel Loubes. L'IA du Quotidien peut elle être Éthique ? : Loyauté des Algorithmes d'Apprentissage Automatique. 2018. ⟨hal-01886699v2⟩
