Super-efficiency of automatic differentiation for functions defined as a minimum - Archive ouverte HAL
Conference paper, Year: 2020

Super-efficiency of automatic differentiation for functions defined as a minimum

Pierre Ablin
Gabriel Peyré
Thomas Moreau

Abstract

In min-min or max-min optimization, one has to compute the gradient of a function defined as a minimum. In most cases, the minimum has no closed form, and an approximation is obtained via an iterative algorithm. There are two usual ways of estimating the gradient of the function: using either an analytic formula obtained by assuming exactness of the approximation, or automatic differentiation through the algorithm. In this paper, we study the asymptotic error made by these estimators as a function of the optimization error. We find that the error of the automatic estimator is close to the square of the error of the analytic estimator, reflecting a super-efficiency phenomenon. The convergence of the automatic estimator greatly depends on the convergence of the Jacobian of the algorithm. We analyze it for gradient descent and stochastic gradient descent and derive convergence rates for the estimators in these cases. Our analysis is backed by numerical experiments on toy problems and on Wasserstein barycenter computation. Finally, we discuss the computational complexity of these estimators and give practical guidelines to choose between them.
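To make the two estimators concrete, here is a minimal sketch (not the authors' code), assuming JAX, on a toy quadratic inner problem h(x) = min_z g(z, x). The names g, inner_gd, analytic_grad and automatic_grad are illustrative; the "analytic" estimator plugs the approximate minimizer into the envelope-theorem formula, while the "automatic" estimator differentiates through the gradient-descent iterations. On this toy problem the automatic error decays roughly like the square of the analytic one, which is the super-efficiency phenomenon described above.

import jax
import jax.numpy as jnp

def g(z, x):
    # Toy inner objective: h(x) = min_z g(z, x) = ||x||^2 / 4, minimized at z* = x / 2.
    return 0.5 * jnp.sum((z - x) ** 2) + 0.5 * jnp.sum(z ** 2)

def inner_gd(x, n_iter, step=0.3):
    # Approximate minimizer z_t obtained by gradient descent on z.
    z = jnp.zeros_like(x)
    for _ in range(n_iter):
        z = z - step * jax.grad(g, argnums=0)(z, x)
    return z

def analytic_grad(x, n_iter):
    # "Analytic" estimator: plug the approximate minimizer z_t into the
    # envelope-theorem formula grad_x g(z, x) at z = z_t, treating z_t as exact.
    z_t = inner_gd(x, n_iter)
    return jax.grad(g, argnums=1)(z_t, x)

def automatic_grad(x, n_iter):
    # "Automatic" estimator: reverse-mode autodiff through the whole
    # gradient-descent loop, so the dependence of z_t on x is differentiated.
    def h_approx(x):
        return g(inner_gd(x, n_iter), x)
    return jax.grad(h_approx)(x)

x0 = jnp.array([1.0, -2.0, 0.5])
exact = x0 / 2.0  # for this quadratic, grad h(x) = x / 2
for n in (1, 3, 10, 30):
    err_analytic = jnp.linalg.norm(analytic_grad(x0, n) - exact)
    err_automatic = jnp.linalg.norm(automatic_grad(x0, n) - exact)
    print(n, float(err_analytic), float(err_automatic))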

Domains

Other [stat.ML]

Dates and versions

hal-02941922, version 1 (17-09-2020)

Identifiers

Cite

Pierre Ablin, Gabriel Peyré, Thomas Moreau. Super-efficiency of automatic differentiation for functions defined as a minimum. International Conference on Machine Learning, Aug 2020, Vienna, Austria. ⟨hal-02941922⟩