Biasing Restricted Boltzmann Machines to Manipulate Latent Selectivity and Sparsity

Abstract: This paper proposes a modification to the restricted Boltzmann machine (RBM) learning algorithm to incorporate inductive biases. These latent activation biases encode ideal solutions for the latent activity and may be designed either by modeling neural phenomena or from inductive principles of the task. In this paper, we design activation biases for sparseness and selectivity based on the activation distributions of biological neurons. With this model, one can manipulate the selectivity of individual hidden units and the sparsity of population codes. The biased RBM yields a bank of Gabor-like filters when trained on natural images, while modeling handwritten digits results in filters with stroke-like features. We quantitatively verify that the latent representations assume the properties of the activation biases. We further demonstrate that RBMs biased with selectivity and sparsity can significantly outperform standard RBMs for discriminative tasks.
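The biasing idea summarized in the abstract can be sketched in code. The following is a minimal, illustrative NumPy implementation of one contrastive-divergence (CD-1) step in which the positive-phase hidden activations are blended toward a sparse target matrix; the top-k target construction, the `bias_strength` mixing parameter, and all function names are assumptions for illustration, not the paper's exact activation biases.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step_biased(v0, W, b, c, lr=0.01, bias_strength=0.5, target_sparsity=0.05):
    """One CD-1 update with hidden activations nudged toward a sparse target.

    Blending p(h|v) with a per-example top-k target matrix is an illustrative
    stand-in for the paper's designed activation biases (hypothetical details).
    """
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)

    # Hypothetical sparse target: keep only the k most active units per example.
    k = max(1, int(target_sparsity * W.shape[1]))
    target = np.zeros_like(ph0)
    top = np.argsort(ph0, axis=1)[:, -k:]
    np.put_along_axis(target, top, 1.0, axis=1)

    # Bias the positive-phase statistics toward the target activations.
    ph0_b = (1.0 - bias_strength) * ph0 + bias_strength * target
    h0 = (rng.random(ph0.shape) < ph0_b).astype(float)

    # Negative phase: one Gibbs step of reconstruction.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)

    # Parameter updates from the difference of positive and negative statistics.
    W += lr * (v0.T @ ph0_b - pv1.T @ ph1) / v0.shape[0]
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0_b - ph1).mean(axis=0)
    return W, b, c
```

The mixing coefficient plays the role of the strength with which the latent representation is pulled toward the designed bias; setting it to zero recovers standard CD-1 training.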

https://hal.archives-ouvertes.fr/hal-00716050
Contributor: Hanlin Goh
Submitted on: Tuesday, July 10, 2012 - 10:11:30 AM
Last modification on: Thursday, March 21, 2019 - 2:22:36 PM
Long-term archiving on: Thursday, December 15, 2016 - 9:26:42 PM

File

SparseBias.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-00716050, version 1

Citation

Hanlin Goh, Nicolas Thome, Matthieu Cord. Biasing Restricted Boltzmann Machines to Manipulate Latent Selectivity and Sparsity. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning, Dec 2010, Vancouver, Canada. ⟨hal-00716050⟩
