Training recurrent networks online without backtracking

Yann Ollivier (1, 2), Guillaume Charpiat (2)
(2) TAO - Machine Learning and Optimisation
CNRS - Centre National de la Recherche Scientifique : UMR8623, Inria Saclay - Ile de France, UP11 - Université Paris-Sud - Paris 11, LRI - Laboratoire de Recherche en Informatique
Abstract: We introduce the "NoBackTrack" algorithm to train the parameters of dynamical systems such as recurrent neural networks. The algorithm works in an online, memoryless setting, thus requiring no backpropagation through time, and is scalable, avoiding the large computational and memory cost of maintaining the full gradient of the current state with respect to the parameters. The algorithm essentially maintains, at each time, a single search direction in parameter space. The evolution of this search direction is partly stochastic and is constructed in such a way as to provide, at every time, an unbiased random estimate of the gradient of the loss function with respect to the parameters. Because the gradient estimate is unbiased, the parameter is, on average over time, updated as it should be. The resulting gradient estimate can then be fed to a lightweight Kalman-like filter to yield an improved algorithm. For recurrent neural networks, the resulting algorithms scale linearly with the number of parameters. Preliminary tests on a simple task show that the stochastic approximation of the gradient introduced by the algorithm does not seem to inject too much noise into the trajectory, compared to maintaining the full gradient, and confirm the good performance and scalability of the Kalman-like version of NoBackTrack.
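The unbiased, partly stochastic estimate described in the abstract rests on collapsing a sum of rank-one matrices into a single random rank-one term whose expectation equals the original sum. The sketch below is a minimal illustration of that reduction in NumPy, not the authors' reference implementation; the vector names and the norm-balancing scaling are illustrative assumptions.

```python
import numpy as np

def rank_one_reduce(vs, ws, rng):
    """Collapse sum_i v_i w_i^T into a single rank-one term, unbiasedly.

    Independent random signs eps_i satisfy E[eps_i eps_j] = delta_ij, so
    E[v_tilde w_tilde^T] = sum_i v_i w_i^T. The per-term scaling rho_i is
    chosen to balance the norms of the two factors (a variance-reduction
    heuristic; any fixed positive rho_i preserves unbiasedness).
    """
    signs = rng.choice([-1.0, 1.0], size=len(vs))
    v_tilde = np.zeros_like(vs[0])
    w_tilde = np.zeros_like(ws[0])
    for eps, v, w in zip(signs, vs, ws):
        rho = np.sqrt(np.linalg.norm(w) / (np.linalg.norm(v) + 1e-12))
        v_tilde += eps * rho * v
        w_tilde += eps * w / rho
    return v_tilde, w_tilde

# Empirical check of unbiasedness on random vectors: the average of many
# reduced estimates should approach the true sum of outer products.
rng = np.random.default_rng(0)
vs = [rng.standard_normal(4) for _ in range(3)]
ws = [rng.standard_normal(5) for _ in range(3)]
true_sum = sum(np.outer(v, w) for v, w in zip(vs, ws))

n_samples = 50_000
acc = np.zeros((4, 5))
for _ in range(n_samples):
    v_t, w_t = rank_one_reduce(vs, ws, rng)
    acc += np.outer(v_t, w_t)
acc /= n_samples

rel_err = np.linalg.norm(acc - true_sum) / np.linalg.norm(true_sum)
```

Storing only the pair (v_tilde, w_tilde) instead of the full matrix is what makes the memory and computational cost scale linearly with the number of parameters, at the price of extra variance in any single step.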
Document type:
Preprint, working paper
v2 on arXiv, 2015
Contributor: Yann Ollivier
Submitted on: Monday, February 5, 2018 - 22:38:22
Last modified on: Thursday, April 5, 2018 - 12:30:12

Full-text link


  • HAL Id : hal-01228954, version 1
  • ARXIV : 1507.07680


Yann Ollivier, Guillaume Charpiat. Training recurrent networks online without backtracking. v2 on arXiv, 2015. 〈hal-01228954〉


