Journal article in Journal of Dynamics and Games, 2017

A continuous-time approach to online optimization

Abstract

We consider a family of mirror descent strategies for online optimization in continuous time and show that they lead to no regret. From a more traditional, discrete-time viewpoint, this continuous-time approach allows us to derive the no-regret properties of a large class of discrete-time algorithms, including as special cases the exponential weights algorithm, online mirror descent, smooth fictitious play and vanishingly smooth fictitious play. In so doing, we obtain a unified view of many classical regret bounds, and we show that they can be decomposed into a term stemming from continuous-time considerations and a term that measures the disparity between discrete and continuous time. This generalizes the continuous-time-based analysis of the exponential weights algorithm from [29]. As a result, we obtain a general class of infinite-horizon learning strategies that guarantee a regret bound without having to resort to a doubling trick.
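The exponential weights algorithm is one of the discrete-time special cases mentioned above. As a minimal, illustrative sketch (not the paper's construction), the following Python snippet runs exponential weights with a constant learning rate `eta` over a stream of payoff vectors and tracks the cumulative regret, i.e. the gap between the best fixed action in hindsight and the learner's realized expected payoff; the parameter choice and payoff stream are hypothetical, and the infinite-horizon guarantees referred to in the abstract would instead rely on a suitable time-varying parameter schedule.

```python
import numpy as np

def exponential_weights(payoffs, eta=0.05):
    """Illustrative exponential weights on the probability simplex.

    payoffs : (T, d) array; row t is the payoff vector revealed after
              the learner commits to the mixed action x_t.
    eta     : constant learning rate (hypothetical choice for this sketch).
    Returns the cumulative regret after each round.
    """
    T, d = payoffs.shape
    scores = np.zeros(d)            # cumulative payoff of each pure action
    learner_payoff = 0.0
    regrets = np.empty(T)
    for t in range(T):
        weights = np.exp(eta * scores)
        x = weights / weights.sum()  # logit (mirror) map: scores -> simplex
        learner_payoff += x @ payoffs[t]
        scores += payoffs[t]
        regrets[t] = scores.max() - learner_payoff
    return regrets

# Toy run on random payoffs in [0, 1]
rng = np.random.default_rng(0)
reg = exponential_weights(rng.random((1000, 5)))
print(reg[-1], reg[-1] / 1000)      # cumulative and time-averaged regret
```

On this toy stream the time-averaged regret stays small; with a suitably decreasing learning-rate schedule one recovers the kind of anytime guarantee alluded to in the abstract, which is the discrete-time counterpart of the no-regret property established in continuous time.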

Dates and versions

hal-02619471, version 1 (25-05-2020)

Identifiers

Cite

Joon Kwon, Panayotis Mertikopoulos. A continuous-time approach to online optimization. Journal of Dynamics and Games, 2017, 4 (2), pp.125-148. ⟨10.3934/jdg.2017008⟩. ⟨hal-02619471⟩

Collections

CNRS INRAE ANR