Assessing and tuning brain decoders: cross-validation, caveats, and guidelines

Abstract: Decoding, i.e., prediction from brain images or signals, calls for empirical evaluation of its predictive power. Such evaluation is achieved via cross-validation, a method also used to tune decoders' hyper-parameters. This paper reviews cross-validation procedures for decoding in neuroimaging, including a didactic overview of the relevant theoretical considerations. Practical aspects are highlighted with an extensive empirical study of common decoders in within- and across-subject predictions, on multiple datasets (anatomical and functional MRI and MEG) and simulations. Theory and experiments show that the popular "leave-one-out" strategy leads to unstable and biased estimates, and that a repeated random splits method should be preferred. Experiments also highlight the large error bars of cross-validation in neuroimaging settings: typical confidence intervals of about 10%. Nested cross-validation can tune decoders' parameters while avoiding circularity bias. However, we find that it can be more favorable to use sane defaults, in particular for non-sparse decoders.
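The two recommendations above (repeated random splits rather than leave-one-out, and nested cross-validation for hyper-parameter tuning) can be illustrated with a minimal scikit-learn sketch; the synthetic data, the logistic-regression decoder, and the parameter grid are illustrative choices, not the paper's actual experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, GridSearchCV, cross_val_score

# Toy stand-in for a decoding problem: few samples, many features.
X, y = make_classification(n_samples=100, n_features=50, random_state=0)

# Repeated random splits: many 80/20 train/test splits, preferred over
# leave-one-out, which yields unstable and biased estimates.
outer_cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)

# Nested cross-validation: the inner loop tunes the regularization C on
# training data only, so the outer accuracy estimate avoids circularity bias.
decoder = GridSearchCV(LogisticRegression(max_iter=1000),
                       param_grid={'C': [0.01, 0.1, 1, 10]}, cv=5)

scores = cross_val_score(decoder, X, y, cv=outer_cv)
print(scores.mean(), scores.std())  # spread across splits shows the error bars
```

The spread of `scores` across the outer splits gives a sense of the large cross-validation error bars the paper reports in neuroimaging-sized samples.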
Contributor: Gaël Varoquaux
Submitted on: Monday, October 31, 2016 - 23:06:46
Last modified on: Thursday, February 7, 2019 - 16:11:41
Distributed under a Creative Commons Attribution 4.0 International License



Gaël Varoquaux, Pradeep Reddy Raamana, Denis Engemann, Andrés Hoyos-Idrobo, Yannick Schwartz, et al. Assessing and tuning brain decoders: cross-validation, caveats, and guidelines. NeuroImage, Elsevier, 2016. doi:10.1016/j.neuroimage.2016.10.038. hal-01332785v2


