A continuous-time approach to online optimization

Abstract: We consider a family of learning strategies for online optimization problems that evolve in continuous time and we show that they lead to no regret. From a more traditional, discrete-time viewpoint, this continuous-time approach allows us to derive the no-regret properties of a large class of discrete-time algorithms including as special cases the exponential weight algorithm, online mirror descent, smooth fictitious play and vanishingly smooth fictitious play. In so doing, we obtain a unified view of many classical regret bounds, and we show that they can be decomposed into a term stemming from continuous-time considerations and a term which measures the disparity between discrete and continuous time. As a result, we obtain a general class of infinite-horizon learning strategies that guarantee an $O(n^{-1/2})$ regret bound without having to resort to a doubling trick.
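For concreteness, the following is a minimal sketch (not taken from the paper) of the exponential weight algorithm, one of the discrete-time special cases mentioned in the abstract. The loss values, problem size, and learning-rate choice below are illustrative assumptions; the constant rate $\eta = \sqrt{\log K / T}$ is the standard tuning that yields the $O(n^{-1/2})$ average-regret behaviour referred to above.

```python
import numpy as np

def exponential_weights(loss_vectors, eta):
    """Run the exponential weight algorithm on a sequence of loss vectors.

    loss_vectors: array of shape (n_rounds, n_actions); each row gives the
                  loss of every action at that round (assumed bounded in [0, 1]).
    eta:          learning rate (kept constant here for simplicity).
    Returns the sequence of mixed strategies played.
    """
    loss_vectors = np.asarray(loss_vectors, dtype=float)
    n_rounds, n_actions = loss_vectors.shape
    cumulative_loss = np.zeros(n_actions)
    strategies = []
    for t in range(n_rounds):
        # Weights are exponential in the negative cumulative loss.
        weights = np.exp(-eta * cumulative_loss)
        strategies.append(weights / weights.sum())
        cumulative_loss += loss_vectors[t]
    return np.array(strategies)

# Toy usage (hypothetical data): 3 actions, 100 rounds of random losses.
rng = np.random.default_rng(0)
losses = rng.random((100, 3))
play = exponential_weights(losses, eta=np.sqrt(np.log(3) / 100))
print(play[-1])  # the final mixed strategy favours low-cumulative-loss actions
```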
Document type:
Journal article
Journal of Dynamics and Games, AIMS, 2017, 4 (2), pp.125-148

https://hal.archives-ouvertes.fr/hal-01382299
Contributor: Panayotis Mertikopoulos
Submitted on: Sunday, October 16, 2016 - 15:29:19
Last modified on: Wednesday, April 11, 2018 - 01:57:13

Identifiers

  • HAL Id : hal-01382299, version 1

Citation

Joon Kwon, Panayotis Mertikopoulos. A continuous-time approach to online optimization. Journal of Dynamics and Games, AIMS, 2017, 4 (2), pp.125-148. 〈hal-01382299〉
