Book sections

Interpretability of Machine Learning Methods Applied to Neuroimaging

Elina Thibeau-Sutre 1 Sasha Collin 1 Ninon Burgos 1 Olivier Colliot 1
1 ARAMIS - Algorithms, models and methods for images and signals of the human brain
SU - Sorbonne Université, Inria de Paris, ICM - Institut du Cerveau et de la Moelle Épinière = Brain and Spine Institute
Abstract: Deep learning methods have become very popular for the processing of natural images and were then successfully adapted to the neuroimaging field. As these methods are non-transparent, interpretability methods are needed to validate them and ensure their reliability. Indeed, it has been shown that deep learning models may obtain high performance even when relying on irrelevant features, by exploiting biases in the training set. Such undesirable situations can potentially be detected using interpretability methods. Recently, many methods have been proposed to interpret neural networks, but this domain is not yet mature. Machine learning users face two major issues when aiming to interpret their models: which method to choose, and how to assess its reliability? Here, we aim to answer these questions by presenting the most common interpretability methods and the metrics developed to assess their reliability, as well as their applications and benchmarks in the neuroimaging context. Note that this is not an exhaustive survey: we focused on the studies that we found most representative and relevant.
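To illustrate the kind of interpretability method the chapter surveys, the sketch below implements a simple occlusion-based attribution on a toy linear model. This is a hypothetical example, not taken from the chapter: the model, weights, and function names are invented for illustration. The idea is generic, though: each input feature is zeroed in turn, and the drop in the model's output is taken as that feature's importance, which can reveal whether a model relies on relevant features or on dataset biases.

```python
# Minimal occlusion-based attribution sketch (toy example, not from the chapter).
# A real application would occlude patches of a neuroimaging volume and pass
# them through a trained neural network; here we use a fixed linear scorer.

def model(x):
    # Hypothetical "trained" classifier: a fixed linear score.
    weights = [0.8, 0.1, -0.05, 0.6]
    return sum(w * v for w, v in zip(weights, x))

def occlusion_importance(x):
    # Importance of feature i = output drop when feature i is zeroed out.
    baseline = model(x)
    importances = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = 0.0  # occlude feature i
        importances.append(baseline - model(occluded))
    return importances

print(occlusion_importance([1.0, 1.0, 1.0, 1.0]))
```

On this toy input, features 0 and 3 dominate the prediction, mirroring how an occlusion map highlights the image regions a network actually uses.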

https://hal.archives-ouvertes.fr/hal-03615163
Contributor: Olivier Colliot
Submitted on: Monday, March 21, 2022 - 12:37:59 PM
Last modification on: Thursday, May 19, 2022 - 9:39:39 AM

File

MLBD_Chapter_22_2022_03_21.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-03615163, version 1

Citation

Elina Thibeau-Sutre, Sasha Collin, Ninon Burgos, Olivier Colliot. Interpretability of Machine Learning Methods Applied to Neuroimaging. In: Olivier Colliot (Ed.), Machine Learning for Brain Disorders, Springer, in press. ⟨hal-03615163⟩
