How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies - HAL Open Archive
Preprint, Working Paper. Year: 2021

How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies

Abstract

The spread of AI-embedded systems involved in human decision making makes studying human trust in these systems critical. However, empirically investigating trust is challenging. One reason is the lack of standard protocols for designing trust experiments. In this paper, we present a survey of existing methods to empirically investigate trust in AI-assisted decision making and analyse the corpus along the constitutive elements of an experimental protocol. We find that the definition of trust is not commonly integrated into experimental protocols, which can lead to findings that are overclaimed or hard to interpret and compare across studies. Drawing from empirical practices in social and cognitive studies on human-human trust, we provide practical guidelines to improve the methodology of studying Human-AI trust in decision-making contexts. In addition, we bring forward research opportunities of two types: one focusing on further investigation of trust methodologies, the other on factors that impact Human-AI trust.

CCS Concepts: • Human-centered computing → HCI theory, concepts and models.
Main file
nottrackedMinorRevCSCW_21_trustAI__Copy_.pdf (1.09 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03280969 , version 1 (07-07-2021)
hal-03280969 , version 2 (06-10-2021)

Identifiers

  • HAL Id : hal-03280969 , version 1

Cite

Oleksandra Vereschak, Gilles Bailly, Baptiste Caramiaux. How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies. 2021. ⟨hal-03280969v1⟩
506 Views
3932 Downloads
