Interpreting Neural Networks as Majority Votes through the PAC-Bayesian Theory

Paul Viallard 1, Rémi Emonet 1, Pascal Germain 2,3, Amaury Habrard 1, Emilie Morvant 1
2 MODAL - MOdel for Data Analysis and Learning (Inria Lille - Nord Europe, LPP - Laboratoire Paul Painlevé - UMR 8524, METRICS - Evaluation des technologies de santé et des pratiques médicales - ULR 2694, Polytech Lille - École polytechnique universitaire de Lille, Université de Lille, Sciences et Technologies)
3 Département d'informatique et de génie logiciel [Québec]
Abstract: We propose a PAC-Bayesian theoretical study of the two-phase learning procedure for a neural network introduced by Kawaguchi et al. (2017). In this procedure, a network is expressed as a weighted combination of all the paths of the network (from the input layer to the output layer), which we reformulate as a PAC-Bayesian majority vote. Starting from this observation, their learning procedure consists in (1) learning a "prior" network to fix some parameters, then (2) learning a "posterior" network by allowing only a modification of the weights over the paths of the prior network. This allows us to derive a PAC-Bayesian generalization bound that involves the empirical individual risks of the paths (known as the Gibbs risk) and the empirical diversity between pairs of paths. Note that, similarly to classical PAC-Bayesian bounds, our result involves a KL-divergence term between the "prior" network and the "posterior" network. We show that this term is computable by dynamic programming without assuming any distribution on the network weights.
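The path decomposition underlying the abstract can be illustrated on a toy case. The sketch below is an assumed example, not the authors' code: it takes a small two-layer *linear* network (activations omitted so the identity is exact), checks that its output equals the sum over all input-to-output paths of the product of the weights along each path, and shows that this exponential-looking sum collapses into layer-wise matrix products — the same flavor of dynamic programming the abstract invokes for computing the KL-divergence term over paths.

```python
import itertools
import numpy as np

# Toy two-layer linear network (shapes are illustrative assumptions):
# input (3 units) -> hidden (4 units) -> output (2 units).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))
x = rng.normal(size=3)

# Path view: enumerate every path (i, j, k); each path contributes the
# product of the weights it traverses, scaled by the input coordinate.
path_sum = np.zeros(2)
for i, j, k in itertools.product(range(3), range(4), range(2)):
    path_sum[k] += x[i] * W1[i, j] * W2[j, k]

# Dynamic-programming view: the same sum over paths factorizes layer by
# layer into ordinary matrix products, avoiding explicit enumeration.
forward = x @ W1 @ W2

assert np.allclose(path_sum, forward)
```

With nonlinear activations the exact identity no longer holds as written; the point here is only that sums of products over all paths can be computed layer-wise rather than path-by-path, which is why a per-path quantity over exponentially many paths can remain tractable.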
Document type :
Conference papers
Contributor: Emilie Morvant
Submitted on: Monday, October 28, 2019 - 2:35:40 PM
Last modification on: Tuesday, February 16, 2021 - 3:10:09 PM


  • HAL Id: hal-02335762, version 1


Paul Viallard, Rémi Emonet, Pascal Germain, Amaury Habrard, Emilie Morvant. Interpreting Neural Networks as Majority Votes through the PAC-Bayesian Theory. Workshop on Machine Learning with guarantees @ NeurIPS 2019, Dec 2019, Vancouver, Canada. ⟨hal-02335762⟩
