Top-Down Regularization of Deep Belief Networks

Abstract: Designing a principled and effective algorithm for learning deep architectures is a challenging problem. The current approach involves two training phases: a fully unsupervised learning phase followed by a strongly discriminative optimization. We suggest a deep learning strategy that bridges the gap between the two phases, resulting in a three-phase learning procedure. We propose to implement the scheme using a method to regularize deep belief networks with top-down information. The network is constructed from building blocks of restricted Boltzmann machines learned by combining bottom-up and top-down sampled signals. A global optimization procedure that merges samples from a forward bottom-up pass and a top-down pass is used. Experiments on the MNIST dataset show improvements over existing algorithms for deep belief networks. Object recognition experiments on the Caltech-101 dataset also yield competitive results.
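The core idea described in the abstract — learning each restricted Boltzmann machine from a blend of bottom-up and top-down signals — can be sketched as follows. This is an illustrative simplification, not the authors' exact algorithm: the `alpha` blending parameter, the plain CD-1 update, and the way the top-down signal enters are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli-Bernoulli restricted Boltzmann machine."""
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, top_down=None, alpha=0.5, lr=0.05):
        """One CD-1 update on a batch. If a top-down signal is given
        (hypothetical: e.g. activations sampled from the layer above),
        it is blended with the bottom-up hidden probabilities, biasing
        the learned features toward the top-down target."""
        h0 = self.hidden_probs(v0)
        if top_down is not None:
            h0 = (1 - alpha) * h0 + alpha * top_down  # blend the two signals
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)   # reconstruction
        h1 = self.hidden_probs(v1)
        n = len(v0)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (h0 - h1).mean(axis=0)

# Toy demonstration: two binary patterns, purely bottom-up training.
patterns = np.array([[1., 1, 1, 1, 0, 0, 0, 0],
                     [0., 0, 0, 0, 1, 1, 1, 1]])
data = patterns[rng.integers(0, 2, size=200)]
rbm = RBM(n_visible=8, n_hidden=4)

def recon_err(v):
    return float(np.mean((v - rbm.visible_probs(rbm.hidden_probs(v))) ** 2))

err_before = recon_err(data)
for _ in range(300):
    rbm.cd1_step(data)
err_after = recon_err(data)
```

In a stacked setting, each layer's `top_down` input would come from the layer above, which is how the three-phase procedure could couple the unsupervised and discriminative phases; here only the unsupervised path is exercised.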
Document type: Conference papers

Cited literature: 28 references

https://hal.archives-ouvertes.fr/hal-00947569
Contributor: Hanlin Goh <>
Submitted on: Wednesday, February 19, 2014 - 2:15:00 PM
Last modification on: Thursday, March 21, 2019 - 1:00:19 PM
Document(s) archived on: Monday, May 19, 2014 - 10:55:37 AM

File: 13_NIPS.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-00947569, version 1

Citation

Hanlin Goh, Nicolas Thome, Matthieu Cord, Joo-Hwee Lim. Top-Down Regularization of Deep Belief Networks. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, K.Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 26, Dec 2013, Lake Tahoe, United States. pp. 1878-1886. 〈hal-00947569〉
