A Scalable Bayesian Sampling Method Based on Stochastic Gradient Descent Isotropization
Journal article in Entropy, 2021


Maurizio Filippone
  • Role: Author
  • PersonId: 1021042
Pietro Michiardi
  • Role: Author
  • PersonId: 1084771

Abstract

Stochastic gradient (SG)-based algorithms for Markov chain Monte Carlo sampling (SGMCMC) tackle large-scale Bayesian modeling problems by operating on mini-batches and by injecting noise into the SG steps. The sampling properties of these algorithms are determined by user choices, such as the covariance of the injected noise and the learning rate, and by problem-specific factors, such as assumptions on the loss landscape and on the covariance of the SG noise. However, current SGMCMC algorithms applied to popular complex models such as Deep Nets cannot simultaneously satisfy the assumptions on loss landscapes and on the behavior of the covariance of the SG noise, while operating with the practical requirement of non-vanishing learning rates. In this work we propose a novel practical method, which makes the SG noise isotropic using a fixed learning rate that we determine analytically. Extensive experimental validations indicate that our proposal is competitive with the state of the art on SGMCMC.
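To make the algorithm family concrete, the following is a minimal sketch of a stochastic-gradient MCMC sampler in the SGLD style: each step combines a mini-batch gradient estimate with injected Gaussian noise whose scale is tied to the learning rate. This is a generic illustration of the class of methods the abstract describes, not the paper's isotropization method; the toy model, step size, and batch size are arbitrary choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: infer the mean of a 1-D Gaussian with known unit variance
# under a flat prior, from data drawn around 2.0 (illustrative setup).
data = rng.normal(loc=2.0, scale=1.0, size=1000)
N = len(data)

def stoch_grad_log_post(theta, batch):
    """Unbiased mini-batch estimate of the log-posterior gradient
    (mini-batch sum rescaled by N / |batch|)."""
    return (N / len(batch)) * np.sum(batch - theta)

theta, eta = 0.0, 1e-4   # initial value and a fixed learning rate
samples = []
for t in range(5000):
    batch = rng.choice(data, size=32, replace=False)
    grad = stoch_grad_log_post(theta, batch)
    # SGLD update: gradient step plus injected noise of variance 2 * eta.
    theta += eta * grad + np.sqrt(2.0 * eta) * rng.normal()
    if t >= 1000:        # discard burn-in before collecting samples
        samples.append(theta)

posterior_mean = float(np.mean(samples))  # close to the data mean (~2.0)
```

Note that in this vanilla sketch the injected noise is already isotropic by construction; the difficulty the paper addresses arises in high dimensions, where the covariance of the mini-batch gradient noise itself is anisotropic and interacts with the learning rate.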

Dates and versions

hal-03592411, version 1 (01-03-2022)

License

Attribution

Cite

Giulio Franzese, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi. A Scalable Bayesian Sampling Method Based on Stochastic Gradient Descent Isotropization. Entropy, 2021, 23 (11), pp.1426. ⟨10.3390/e23111426⟩. ⟨hal-03592411⟩