Conference paper · Year: 2018

Unbiased online recurrent optimization

Abstract

The novel Unbiased Online Recurrent Optimization (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. UORO is computationally as costly as Truncated Backpropagation Through Time (truncated BPTT), a widespread algorithm for online learning of recurrent networks (Jaeger, 2002). UORO is a modification of NoBackTrack (Ollivier et al., 2015) that bypasses the need for model sparsity and makes implementation easy in current deep learning frameworks, even for complex models. Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is the core hypothesis in stochastic gradient descent theory, without which convergence to a local optimum is not guaranteed. Truncated BPTT, by contrast, does not provide this property, which can lead to divergence. On synthetic tasks where truncated BPTT is shown to diverge, UORO converges. For instance, when a parameter has a positive short-term but negative long-term influence, truncated BPTT diverges unless the truncation span is significantly longer than the intrinsic temporal range of the interactions, whereas UORO performs well thanks to the unbiasedness of its gradients.
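
The abstract does not spell out the mechanism, so below is a minimal sketch of the underlying idea on a toy tanh RNN: UORO maintains a rank-one, randomly projected estimate of the Jacobian of the recurrent state with respect to the parameters, updated forward in time, from which an unbiased gradient of the instantaneous loss is formed at every step. The toy task, the NumPy setting, and all names (W, V, s_tilde, w_tilde, rho0, rho1) are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # Illustrative toy model (assumption, not the paper's experiments):
    #   state s_{t+1} = tanh(W @ [s_t; x_t]),  loss L_t = 0.5 * ||V @ s_t - y_t||^2
    # W is trained online with the UORO-style rank-one estimator; the readout V with plain SGD.
    rng = np.random.default_rng(0)
    n_s, n_x = 8, 2
    W = rng.normal(0.0, 0.3, (n_s, n_s + n_x))  # recurrent + input weights
    V = rng.normal(0.0, 0.3, (1, n_s))          # readout weights
    lr = 1e-2

    s = np.zeros(n_s)
    s_tilde = np.zeros(n_s)                     # rank-one factor over states
    w_tilde = np.zeros_like(W)                  # rank-one factor over parameters
    # In expectation over the random signs drawn below, s_tilde[i] * w_tilde
    # approximates d s_t[i] / d W without ever storing the full Jacobian.

    for t in range(1000):
        x = rng.normal(size=n_x)
        y = np.array([np.sin(0.1 * t)])         # toy target

        # Unbiased estimate of dL_t/dW through the recurrent state:
        # (dL_t/ds_t . s_tilde) * w_tilde
        err = V @ s - y
        dL_ds = V.T @ err
        grad_W = (dL_ds @ s_tilde) * w_tilde

        # Forward transition with the current parameters
        z = np.concatenate([s, x])
        s_next = np.tanh(W @ z)
        D = 1.0 - s_next ** 2                   # tanh'(W @ z)

        # UORO rank-one update of (s_tilde, w_tilde) using random signs nu
        Js = (D[:, None] * W[:, :n_s]) @ s_tilde        # dF/ds . s_tilde
        nu = rng.choice([-1.0, 1.0], size=n_s)
        nuJw = np.outer(nu * D, z)                      # nu^T dF/dW
        rho0 = np.sqrt(np.linalg.norm(w_tilde) / (np.linalg.norm(Js) + 1e-8)) + 1e-8
        rho1 = np.sqrt(np.linalg.norm(nuJw) / (np.linalg.norm(nu) + 1e-8)) + 1e-8
        s_tilde = rho0 * Js + rho1 * nu
        w_tilde = w_tilde / rho0 + nuJw / rho1

        # Online parameter updates and state advance
        W -= lr * grad_W
        V -= lr * np.outer(err, s)
        s = s_next

In expectation over the random signs nu the cross terms cancel, so s_tilde[i] * w_tilde tracks the true Jacobian d s_t[i] / d W; this is what makes the per-step gradient estimate unbiased. The normalizers rho0 and rho1 only rescale the two rank-one contributions to reduce the variance of that estimate.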
Main file: uoro.pdf (418.74 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01972587, version 1 (07-01-2019)

Identifiers

  • HAL Id: hal-01972587, version 1

Cite

Corentin Tallec, Yann Ollivier. Unbiased online recurrent optimization. International Conference on Learning Representations (ICLR), Apr 2018, Vancouver, Canada. ⟨hal-01972587⟩
