Conference papers

One Versus all for deep Neural Network Incertitude (OVNNI) quantification

Abstract: Deep neural networks (DNNs) are powerful learning models, yet their results are not always reliable: modern DNNs are usually uncalibrated, and their epistemic uncertainty cannot be easily characterized. In this work, we propose a new technique to quantify the epistemic uncertainty of data easily. The method consists of mixing the predictions of an ensemble of DNNs trained to classify One class vs. All the other classes (OVA) with predictions from a standard DNN trained to perform All vs. All (AVA) classification. On the one hand, the adjustment provided by the AVA DNN to the scores of the base classifiers allows for a finer-grained inter-class separation. On the other hand, the two types of classifiers mutually reinforce their detection of out-of-distribution (OOD) samples, entirely circumventing the need to use such samples during training. Our method achieves state-of-the-art performance in quantifying OOD data across multiple datasets and architectures while requiring little hyper-parameter tuning.
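The combination described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the combined per-class score is the element-wise product of the AVA softmax probability and the corresponding OVA classifier's "one" score, with the epistemic uncertainty taken as one minus the best combined score (the array names and toy values are invented for the example).

```python
import numpy as np

def ovnni_scores(ava_probs, ova_probs):
    """Combine AVA softmax probabilities with per-class OVA scores.

    ava_probs: (n_samples, n_classes) softmax output of the AVA network.
    ova_probs: (n_samples, n_classes) sigmoid scores, one column per OVA network.
    Returns the combined class scores and an epistemic-uncertainty estimate.
    """
    combined = ava_probs * ova_probs          # OVA scores rescale the AVA output
    uncertainty = 1.0 - combined.max(axis=1)  # a low best score flags a likely OOD sample
    return combined, uncertainty

# Toy example: first row looks in-distribution (both networks confident),
# second row looks OOD (all OVA classifiers reject the sample).
ava = np.array([[0.90, 0.05, 0.05],
                [0.40, 0.35, 0.25]])
ova = np.array([[0.95, 0.10, 0.10],
                [0.20, 0.15, 0.10]])
scores, unc = ovnni_scores(ava, ova)
```

On the toy values above, the second sample receives a much higher uncertainty than the first, because no OVA classifier claims it even though the AVA softmax still assigns it a moderately confident class.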
https://hal.archives-ouvertes.fr/hal-03097063
Contributor: Gianni Franchi
Submitted on: Tuesday, January 19, 2021 - 11:12:16 AM
Last modification on: Tuesday, October 19, 2021 - 11:15:16 AM
Long-term archiving on: Tuesday, April 20, 2021 - 6:08:39 PM

File

ovnni_v3.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-03097063, version 1
  • arXiv: 2006.00954

Citation

Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Séverine Dubuisson, Isabelle Bloch. One Versus all for deep Neural Network Incertitude (OVNNI) quantification. NeurIPS workshop on Bayesian Deep Learning, Dec 2020, Vancouver, Canada. ⟨hal-03097063⟩

Metrics

Record views: 126
File downloads: 81