Conference papers

Bayesian neural network priors at the level of units

Mariia Vladimirova 1 Julyan Arbel 1 Pablo Mesejo 2
1 MISTIS - Modelling and Inference of Complex and Structured Stochastic Systems
Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
Abstract: We investigate deep Bayesian neural networks with Gaussian priors on the weights and ReLU-like nonlinearities, shedding light on novel sparsity-inducing mechanisms at the level of the units of the network. Bayesian neural networks with Gaussian priors are well known to induce the weight decay penalty on the weights. In contrast, our result indicates a more elaborate regularization scheme at the level of the units, ranging from convex penalties for the first two layers (L2 regularization for the first and Lasso for the second) to non-convex penalties for deeper layers. Thus, although weight decay does not allow the weights to be set exactly to zero, sparse solutions tend to be selected for the units from the second layer onward. This result provides new theoretical insight into deep Bayesian neural networks, underpinning their natural shrinkage properties and practical potential.
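The abstract's claim, that unit distributions become heavier-tailed (hence more sparsity-inducing) with depth under Gaussian weight priors, can be probed with a quick Monte Carlo sketch. This is not code from the paper; the network width, sample size, and seed are arbitrary choices for illustration, and excess kurtosis is used here merely as a rough proxy for tail heaviness:

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_units(depth, width=50, n_samples=4000):
    """Draw samples of a single hidden unit at the given depth of a
    ReLU network whose weights have i.i.d. N(0, 1/width) priors."""
    x = np.ones(width) / np.sqrt(width)  # fixed unit-norm input
    samples = np.empty(n_samples)
    for i in range(n_samples):
        h = x
        for _ in range(depth):
            W = rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, width))
            h = np.maximum(W @ h, 0.0)  # ReLU nonlinearity
        samples[i] = h[0]
    return samples


def excess_kurtosis(s):
    """Empirical excess kurtosis: heavier tails give larger values."""
    return np.mean((s - s.mean()) ** 4) / np.var(s) ** 2 - 3.0


# Tail heaviness typically grows with depth, consistent with the
# stronger shrinkage described for units in deeper layers.
for depth in (1, 2, 3):
    s = sample_units(depth)
    print(f"depth {depth}: excess kurtosis ~ {excess_kurtosis(s):.2f}")
```

The printed excess kurtosis tends to increase with depth: a depth-1 unit is a rectified Gaussian, while deeper units compound products of Gaussians through the ReLU, stretching the upper tail.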

Cited literature: 20 references

https://hal.archives-ouvertes.fr/hal-01950659
Contributor: Julyan Arbel
Submitted on: Tuesday, December 11, 2018 - 3:54:23 AM
Last modification on: Thursday, March 26, 2020 - 8:49:33 PM
Archived on: Tuesday, March 12, 2019 - 12:52:14 PM

File

AABI2018.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01950659, version 1

Citation

Mariia Vladimirova, Julyan Arbel, Pablo Mesejo. Bayesian neural network priors at the level of units. AABI 2018 - 1st Symposium on Advances in Approximate Bayesian Inference, Dec 2018, Montréal, Canada. pp.1-6. ⟨hal-01950659⟩
