Bayesian neural network priors at the level of units - Archive ouverte HAL
Conference paper · Year: 2018

Bayesian neural network priors at the level of units

Abstract

We investigate deep Bayesian neural networks with Gaussian priors on the weights and ReLU-like nonlinearities, shedding light on novel sparsity-inducing mechanisms at the level of the units of the network. Bayesian neural networks with Gaussian priors are well known to induce a weight decay penalty on the weights. In contrast, our result indicates a more elaborate regularization scheme at the level of the units, ranging from convex penalties for the first two layers (L2 regularization for the first and Lasso for the second) to non-convex penalties for deeper layers. Thus, although weight decay does not allow the weights to be set exactly to zero, sparse solutions tend to be selected for the units from the second layer onward. This result provides new theoretical insight into deep Bayesian neural networks, underpinning their natural shrinkage properties and practical potential.
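The heavier-than-Gaussian behavior of units in deeper layers can be checked empirically. The sketch below (not from the paper; the function name `unit_kurtosis` and all parameter values are illustrative choices) samples iid Gaussian weights, propagates a fixed input through ReLU layers, and estimates the excess kurtosis of a single unit's pre-activation at increasing depth. A Gaussian has excess kurtosis zero, so growing positive values indicate the heavier tails associated with the unit-level shrinkage described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_kurtosis(depth, width=30, n_draws=4000):
    """Estimate the excess kurtosis of one hidden unit's pre-activation
    after `depth` ReLU layers, with iid N(0, 1/width) Gaussian weights."""
    x = np.ones(width)                # fixed input vector
    samples = np.empty(n_draws)
    for i in range(n_draws):
        h = x
        for _ in range(depth):        # propagate through `depth` hidden layers
            W = rng.normal(size=(width, width)) / np.sqrt(width)
            h = np.maximum(W @ h, 0.0)            # ReLU nonlinearity
        w = rng.normal(size=width) / np.sqrt(width)
        samples[i] = w @ h            # pre-activation of one unit at depth+1
    centred = samples - samples.mean()
    return np.mean(centred**4) / np.var(samples)**2 - 3.0

# Excess kurtosis tends to grow with depth: units at depth 0 are Gaussian,
# while deeper units have heavier tails, consistent with the abstract's claim
# that shrinkage strengthens from the second layer onward.
for d in (0, 1, 2):
    print(f"depth {d}: excess kurtosis {unit_kurtosis(d):.2f}")
```

The depth-0 unit is an exact Gaussian (a sum of independent Gaussians), so its estimate hovers near zero; deeper units are Gaussian scale mixtures, which produce strictly positive excess kurtosis.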
Main file
AABI2018.pdf (334.42 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01950659 , version 1 (11-12-2018)

Identifiers

  • HAL Id : hal-01950659 , version 1

Cite

Mariia Vladimirova, Julyan Arbel, Pablo Mesejo. Bayesian neural network priors at the level of units. AABI 2018 - 1st Symposium on Advances in Approximate Bayesian Inference, Dec 2018, Montréal, Canada. pp.1-6. ⟨hal-01950659⟩
100 Views
157 Downloads
