Model Selection for Bayesian Autoencoders - Archive ouverte HAL
Preprint / Working Paper. Year: 2021

Model Selection for Bayesian Autoencoders

Ba-Hien Tran
Pietro Michiardi
Edwin V. Bonilla
Maurizio Filippone

Abstract

We develop a novel method for carrying out model selection for Bayesian autoencoders (BAEs) by means of prior hyper-parameter optimization. Inspired by the common practice of type-II maximum likelihood optimization and its equivalence to Kullback-Leibler divergence minimization, we propose to optimize the distributional sliced-Wasserstein distance (DSWD) between the output of the autoencoder and the empirical data distribution. The advantages of this formulation are that we can estimate the DSWD based on samples and handle high-dimensional problems. We carry out posterior estimation of the BAE parameters via stochastic gradient Hamiltonian Monte Carlo and turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space. Consequently, we obtain a powerful alternative to variational autoencoders, which are the preferred choice in modern applications of autoencoders for representation learning with uncertainty. We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results, outperforming multiple competitive baselines.
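For readers unfamiliar with the objective, a brief sketch in standard notation may help (this follows the usual definitions of the sliced and distributional sliced-Wasserstein distances; the paper's exact constraint set for the direction distribution is not reproduced here):

    \mathrm{SW}_p(\mu, \nu) = \left( \int_{\mathbb{S}^{d-1}} W_p^p(\theta_\#\mu,\, \theta_\#\nu)\, d\theta \right)^{1/p},
    \qquad
    \mathrm{DSW}_p(\mu, \nu) = \sup_{\sigma \in \mathcal{M}_C} \left( \mathbb{E}_{\theta \sim \sigma}\!\left[ W_p^p(\theta_\#\mu,\, \theta_\#\nu) \right] \right)^{1/p},

where \theta_\#\mu denotes the pushforward of \mu under the projection x \mapsto \theta^\top x, and \mathcal{M}_C is a set of distributions over directions on the unit sphere, constrained so that the directions stay sufficiently spread out. Each one-dimensional W_p is cheap to compute from sorted samples, which is what makes the estimator practical in high dimensions.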
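To illustrate why the distance can be estimated from samples alone, here is a minimal NumPy sketch of the plain sliced-Wasserstein estimator with uniformly drawn directions. The distributional variant used in the paper additionally optimizes the direction distribution; the function name, defaults, and shapes below are ours, not the paper's:

    import numpy as np

    def sliced_wasserstein(x, y, n_proj=256, p=2, seed=None):
        """Monte Carlo estimate of the sliced p-Wasserstein distance
        between two empirical distributions given as (n, d) sample
        matrices. Assumes x and y hold the same number of samples."""
        assert x.shape == y.shape
        rng = np.random.default_rng(seed)
        # Uniform random directions on the unit sphere S^{d-1}.
        theta = rng.standard_normal((x.shape[1], n_proj))
        theta /= np.linalg.norm(theta, axis=0, keepdims=True)
        # In 1-D, W_p between equal-size samples reduces to the L_p
        # distance between the sorted projections.
        xp = np.sort(x @ theta, axis=0)
        yp = np.sort(y @ theta, axis=0)
        return (np.abs(xp - yp) ** p).mean() ** (1.0 / p)

    # Example: distance between a batch of decoder outputs and a data
    # batch, both flattened to vectors (shapes are illustrative).
    swd = sliced_wasserstein(np.random.randn(512, 784),
                             np.random.randn(512, 784))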
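Posterior estimation relies on stochastic gradient Hamiltonian Monte Carlo. Below is a minimal sketch of one SGHMC update in the standard form of Chen et al. (2014), with preconditioning and gradient-noise estimation omitted; the parameter values are illustrative defaults, not the paper's settings:

    import numpy as np

    def sghmc_step(theta, v, grad_log_post, lr=1e-4, alpha=0.01, rng=None):
        """One SGHMC update. `grad_log_post` returns a stochastic
        estimate of the gradient of the log posterior at theta;
        `alpha` is the friction that compensates for gradient noise."""
        rng = rng or np.random.default_rng()
        noise = np.sqrt(2.0 * alpha * lr) * rng.standard_normal(theta.shape)
        v = (1.0 - alpha) * v + lr * grad_log_post(theta) + noise
        return theta + v, v

    # Example: draw correlated samples from a standard normal target,
    # whose log-posterior gradient is simply -theta.
    theta, v = np.zeros(3), np.zeros(3)
    for _ in range(1000):
        theta, v = sghmc_step(theta, v, lambda t: -t)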
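Finally, turning the BAE into a generative model amounts to fitting a density model to the latent codes of the training data and sampling from it before decoding. A hedged sketch using scikit-learn's truncated Dirichlet-process Gaussian mixture as a stand-in for the paper's Dirichlet mixture (the latent dimensionality, truncation level, and variable names are illustrative):

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Stand-in latent codes; in practice these are the encodings of
    # the training data under the sampled BAE encoder parameters.
    codes = np.random.default_rng(0).standard_normal((1000, 16))

    # Flexible mixture density over the latent space.
    dpmm = BayesianGaussianMixture(
        n_components=20,
        weight_concentration_prior_type="dirichlet_process",
    ).fit(codes)

    # Generation: sample latent codes from the mixture, then pass
    # them through the decoder (the decoder call is omitted here).
    z_new, _ = dpmm.sample(64)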

Dates and versions

hal-03344782, version 1 (15-09-2021)

Identifiers

Cite

Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Pietro Michiardi, Edwin V. Bonilla, et al. Model Selection for Bayesian Autoencoders. 2021. ⟨hal-03344782⟩