Journal articles

Riemannian metrics for neural networks I: Feedforward networks

Yann Ollivier 1, 2
2 TAO - Machine Learning and Optimisation
CNRS - Centre National de la Recherche Scientifique : UMR8623, Inria Saclay - Ile de France, UP11 - Université Paris-Sud - Paris 11, LRI - Laboratoire de Recherche en Informatique
Abstract : We describe four algorithms for neural network training, each adapted to different scalability constraints. These algorithms are mathematically principled and invariant under a number of transformations of the data and of the network representation, so that performance does not depend on such choices. They are derived in the setting of differential geometry and are based either on the natural gradient using the Fisher information matrix, or on Hessian methods, scaled down in a specific way to allow for scalability while retaining some of their key mathematical properties.
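To illustrate the natural-gradient idea mentioned in the abstract, here is a minimal sketch of one natural-gradient step using an empirical Fisher matrix on a toy logistic-regression model. This is only a generic illustration, not the paper's scaled-down metrics; the model, data, damping value, and learning rate are all assumptions for the example.

```python
import numpy as np

# Toy setup (assumed for illustration): logistic regression on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                            # inputs
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)   # binary labels
w = np.zeros(3)                                          # parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Per-sample gradients of the log loss with respect to w.
p = sigmoid(X @ w)
per_sample_grads = (p - y)[:, None] * X

g = per_sample_grads.mean(axis=0)                        # mean gradient
F = per_sample_grads.T @ per_sample_grads / len(X)       # empirical Fisher
damping = 1e-3                                           # keeps F invertible

# Natural-gradient step: solve (F + damping*I) step = g, i.e. step ~ F^{-1} g.
step = np.linalg.solve(F + damping * np.eye(3), g)
w_new = w - 0.1 * step                                   # update (lr assumed)
```

Because F plus the damping term is positive definite, the resulting step is a descent direction; the whole point of the metric view is that this update is invariant under smooth reparametrizations of w, unlike the plain gradient.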
Contributor : Yann Ollivier
Submitted on : Wednesday, September 4, 2013 - 1:33:38 PM
Last modification on : Tuesday, April 21, 2020 - 1:09:50 AM
Yann Ollivier. Riemannian metrics for neural networks I: Feedforward networks. Information and Inference, Oxford University Press (OUP), 2015, 4 (2), pp.108-153. ⟨10.1093/imaiai/iav006⟩. ⟨hal-00857982⟩