Unbiased Online Recurrent Optimization

Yann Ollivier 1,2, Corentin Tallec 1,2
1 LRI - Laboratoire de Recherche en Informatique, UP11 - Université Paris-Sud - Paris 11, Inria Saclay - Île-de-France, CNRS - Centre National de la Recherche Scientifique : UMR8623
2 TAU - TAckling the Underspecified
Abstract: The novel Unbiased Online Recurrent Optimization (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. UORO is computationally as costly as Truncated Backpropagation Through Time (truncated BPTT), a widespread algorithm for online learning of recurrent networks. UORO is a modification of NoBackTrack that bypasses the need for model sparsity and makes implementation easy in current deep learning frameworks, even for complex models. Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is the core hypothesis in stochastic gradient descent theory, without which convergence to a local optimum is not guaranteed. By contrast, truncated BPTT does not provide this property, which can lead to divergence. On synthetic tasks where truncated BPTT is shown to diverge, UORO converges. For instance, when a parameter has a positive short-term but negative long-term influence, truncated BPTT diverges unless the truncation span is significantly longer than the intrinsic temporal range of the interactions, while UORO performs well thanks to the unbiasedness of its gradients.
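The rank-one trick behind UORO's unbiased estimates can be illustrated on a toy scalar recurrence. The sketch below is illustrative only: the task, the learning rate, the clipping, and the names `a`, `s_tilde`, `theta_tilde` are our own choices, not taken from the paper's code; only the structure of the update (random signs, variance-reducing scalings, rank-one factors whose product is an unbiased estimate of the state's sensitivity to the parameter) follows the algorithm described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (illustrative): learn the coefficient `a` of the scalar
# recurrence s_{t+1} = a * s_t + x_t online, with targets generated
# by the same recurrence using a_true.
a, a_true = 0.5, 0.8
s, s_true = 0.0, 0.0

# Rank-one factors maintained by UORO; over the random signs nu,
# E[s_tilde * theta_tilde] equals the true sensitivity ds/da.
s_tilde, theta_tilde = 0.0, 0.0
lr, eps = 0.01, 1e-7

for t in range(5000):
    x = rng.normal()
    s_true = a_true * s_true + x

    s_new = a * s + x                      # forward step
    dloss_ds = 2.0 * (s_new - s_true)      # d/ds_new of (s_new - s_true)^2

    # Jacobians of the transition s_new = a*s + x:
    #   d s_new / d s = a,   d s_new / d a = s.
    nu = rng.choice([-1.0, 1.0])           # random sign (a vector in general)
    js = a * s_tilde                       # state Jacobian applied to s_tilde
    jtheta = nu * s                        # nu times the parameter Jacobian

    # Variance-reducing scalings (the rho factors of UORO).
    rho0 = np.sqrt(abs(theta_tilde) / (abs(js) + eps)) + eps
    rho1 = np.sqrt(abs(jtheta)) + eps      # |nu| = 1 here

    s_tilde = rho0 * js + rho1 * nu
    theta_tilde = theta_tilde / rho0 + jtheta / rho1

    # Stochastic gradient, unbiased over nu; clipped for robustness
    # (clipping is our addition, not part of the algorithm).
    grad = np.clip(dloss_ds * s_tilde * theta_tilde, -10.0, 10.0)
    a -= lr * grad
    s = s_new

print(f"learned a = {a:.3f} (target {a_true})")
```

Note that storage and per-step cost are independent of the time horizon: unlike truncated BPTT, nothing from past steps is retained beyond the two rank-one factors.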
Document type: Preprint, working paper

Contributor: Yann Ollivier
Submitted on: Monday, December 18, 2017 - 14:07:35
Last modified on: Tuesday, January 8, 2019 - 08:36:01



  • HAL Id: hal-01666483, version 1
  • arXiv: 1702.05043


Yann Ollivier, Corentin Tallec. Unbiased Online Recurrent Optimization. 2017. 〈hal-01666483〉


