Conference paper

Accounting for variance in machine learning benchmarks

Abstract: Strong empirical evidence that one machine-learning algorithm A outperforms another B ideally calls for multiple trials that optimize the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process and show that variance due to data sampling, parameter initialization, and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator brings it closer to the ideal estimator, at a 51× reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
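The benchmarking scheme the abstract describes, randomizing several sources of variation jointly across trials instead of fixing them, can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: train_and_evaluate is a hypothetical stand-in for a full training run, and the score distributions are simulated.

```python
import random
import statistics

def train_and_evaluate(algorithm, data_seed, init_seed, hparam_seed):
    """Hypothetical stand-in for a full training run.

    In a real benchmark this would resample the train/test split
    (data_seed), re-initialize the weights (init_seed), and re-run
    hyperparameter optimization (hparam_seed), then return a test score.
    Here we just simulate a score with noise from all variance sources.
    """
    rng = random.Random(hash((algorithm, data_seed, init_seed, hparam_seed)))
    base = 0.75 if algorithm == "A" else 0.73  # assumed "true" performances
    return base + rng.gauss(0.0, 0.02)         # combined benchmark variance

def benchmark(algorithm, n_trials=20):
    # Randomize every source of variation in each trial, rather than
    # fixing the data split / initialization / hyperparameters and
    # varying only one of them.
    return [
        train_and_evaluate(
            algorithm,
            data_seed=trial,           # data sampling
            init_seed=1000 + trial,    # parameter initialization
            hparam_seed=2000 + trial,  # hyperparameter choice
        )
        for trial in range(n_trials)
    ]

scores_a = benchmark("A")
scores_b = benchmark("B")
diffs = [a - b for a, b in zip(scores_a, scores_b)]
mean_diff = statistics.mean(diffs)
stderr = statistics.stdev(diffs) / len(diffs) ** 0.5
print(f"estimated improvement of A over B: {mean_diff:.3f} +/- {stderr:.3f}")
```

Each trial re-draws all seeds at once; this is the cheap "randomize everything" estimator that the abstract contrasts with the ideal (and prohibitively expensive) estimator that averages each source of variation exhaustively.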

https://hal.archives-ouvertes.fr/hal-03177159
Contributor: Gaël Varoquaux
Submitted on: Monday, March 22, 2021 - 10:56:54 PM
Last modification on: Tuesday, October 19, 2021 - 11:04:30 AM
Long-term archiving on: Wednesday, June 23, 2021 - 7:32:05 PM

File

main.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-03177159, version 1

Citation

Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, et al. Accounting for variance in machine learning benchmarks. MLSys 2021 - 4th Conference on Machine Learning and Systems, Apr 2021, San Francisco (virtual), United States. ⟨hal-03177159⟩
