Top-Down Regularization of Deep Belief Networks - Archive ouverte HAL
Conference paper, Year: 2013

Top-Down Regularization of Deep Belief Networks

Abstract

Designing a principled and effective algorithm for learning deep architectures is a challenging problem. The current approach involves two training phases: fully unsupervised learning followed by strongly discriminative optimization. We suggest a deep learning strategy that bridges the gap between the two phases, resulting in a three-phase learning procedure. We propose to implement the scheme by regularizing deep belief networks with top-down information. The network is constructed from building blocks of restricted Boltzmann machines learned by combining bottom-up and top-down sampled signals. A global optimization procedure then merges samples from a forward bottom-up pass and a top-down pass. Experiments on the MNIST dataset show improvements over existing algorithms for deep belief networks, and object recognition experiments on the Caltech-101 dataset also yield competitive results.
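To make the idea of combining bottom-up and top-down signals concrete, the following is a minimal, hypothetical sketch of one layer-wise update in that spirit: the hidden representation of a restricted Boltzmann machine is driven partly by the data (bottom-up) and partly by a higher-level signal such as a label (top-down). All names and values here (the blend coefficient alpha, the toy layer sizes, the single-step reconstruction, the learning rate) are illustrative assumptions, not the authors' exact algorithm.

    # Hypothetical sketch: top-down regularized RBM update (not the paper's exact method).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy dimensions: visible units, hidden units, top-down "label" units (assumed values).
    n_vis, n_hid, n_top = 784, 256, 10
    W = 0.01 * rng.standard_normal((n_vis, n_hid))   # bottom-up weights
    U = 0.01 * rng.standard_normal((n_top, n_hid))   # top-down weights carrying label information
    b_vis = np.zeros(n_vis)
    b_hid = np.zeros(n_hid)

    def train_step(v, top, lr=0.01, alpha=0.5):
        """One contrastive-divergence-like update in which the hidden activation
        is a convex blend of a bottom-up drive and a top-down drive.
        `alpha` is a hypothetical regularization strength."""
        # Bottom-up hidden probabilities from the data.
        h_bu = sigmoid(v @ W + b_hid)
        # Top-down hidden probabilities from the higher-level signal (e.g. a one-hot label).
        h_td = sigmoid(top @ U + b_hid)
        # Blended hidden state used as the "positive phase" target.
        h_pos = alpha * h_bu + (1.0 - alpha) * h_td
        # One Gibbs-style reconstruction step for the negative phase.
        v_neg = sigmoid(h_pos @ W.T + b_vis)
        h_neg = sigmoid(v_neg @ W + b_hid)
        # Weight update: difference of positive and negative correlations.
        grad_W = v[:, None] * h_pos[None, :] - v_neg[:, None] * h_neg[None, :]
        return W + lr * grad_W

    # Usage on a random toy example (binary input vector + one-hot top-down signal).
    v = (rng.random(n_vis) > 0.5).astype(float)
    top = np.eye(n_top)[3]
    W = train_step(v, top)

In this reading, alpha = 1 would recover purely unsupervised learning of the layer, while smaller values pull the hidden representation toward the top-down signal, which is the sense in which the top-down information acts as a regularizer.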
Main file: 13_NIPS.pdf (282.71 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00947569, version 1 (19-02-2014)

Identifiers

  • HAL Id: hal-00947569, version 1

Cite

Hanlin Goh, Nicolas Thome, Matthieu Cord, Joo-Hwee Lim. Top-Down Regularization of Deep Belief Networks. Advances in Neural Information Processing Systems 26, Dec 2013, Lake Tahoe, United States. pp.1878-1886. ⟨hal-00947569⟩
268 Views
345 Downloads
