A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning

Eloïse Berthier 1, 2 Ziad Kobeissi 1, 2, 3 Francis Bach 1, 2 
1 SIERRA - Statistical Machine Learning and Parsimony
DI-ENS - Département d'informatique - ENS Paris, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
Abstract: Temporal-difference learning is a popular algorithm for policy evaluation. In this paper, we study the convergence of the regularized non-parametric TD(0) algorithm, in both the independent and Markovian observation settings. In particular, when TD is performed in a universal reproducing kernel Hilbert space (RKHS), we prove convergence of the averaged iterates to the optimal value function, even when it does not belong to the RKHS. We provide explicit convergence rates that depend on a source condition relating the regularity of the optimal value function to the RKHS. We illustrate this convergence numerically on a simple continuous-state Markov reward process.
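The regularized non-parametric TD(0) iteration described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm or experimental setup verbatim: the Gaussian kernel, constant step size, regularization level, reward function, and the one-dimensional toy Markov reward process are all illustrative assumptions.

```python
import numpy as np

def kernel_td0(T=500, gamma=0.9, rho=0.1, lam=1e-3, bw=0.3, seed=0):
    """Sketch of regularized kernel TD(0) with iterate averaging.

    The value estimate lives in the RKHS of a Gaussian kernel,
        V_t(.) = sum_i alpha_i K(s_i, .),
    and is updated with a ridge-regularized TD(0) step:
        V_{t+1} = (1 - rho*lam) V_t + rho * delta_t * K(s_t, .),
        delta_t = r_t + gamma * V_t(s_{t+1}) - V_t(s_t).
    All numerical choices here are illustrative, not the paper's setup.
    """
    rng = np.random.default_rng(seed)
    centers = np.zeros(T)        # states visited so far (kernel centers)
    alphas = np.zeros(T)         # coefficients of V_t in the RKHS
    grid = np.linspace(-1.0, 1.0, 21)
    V_avg = np.zeros_like(grid)  # running average of the iterates on a grid
    s = 0.0
    for t in range(T):
        # toy continuous-state Markov reward process (illustrative choice)
        s_next = float(np.clip(0.9 * s + 0.1 * rng.standard_normal(), -1.0, 1.0))
        r = np.cos(np.pi * s)    # illustrative reward
        # evaluate the current estimate V_t at s and s_next
        k = lambda x: np.exp(-((centers[:t] - x) ** 2) / (2 * bw ** 2))
        delta = r + gamma * alphas[:t] @ k(s_next) - alphas[:t] @ k(s)
        # ridge shrinkage of the existing coefficients, then the TD step
        alphas[:t] *= 1.0 - rho * lam
        centers[t], alphas[t] = s, rho * delta
        # running (Polyak) average of the iterates, evaluated on the grid
        K = np.exp(-((centers[:t + 1, None] - grid[None, :]) ** 2) / (2 * bw ** 2))
        V_avg += (alphas[:t + 1] @ K - V_avg) / (t + 1)
        s = s_next
    return grid, V_avg

grid, V_avg = kernel_td0(T=200)
```

The averaged estimate `V_avg` corresponds to the averaged iterates whose convergence the paper analyzes; note that each transition adds a new kernel center, so the per-step cost of this naive implementation grows with t.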
Document type: Preprints, Working Papers, ...
Contributor: Eloïse Berthier
Submitted on: Monday, May 23, 2022 - 8:55:59 AM
Last modification on: Wednesday, June 8, 2022 - 12:50:06 PM
  • HAL Id: hal-03672958, version 1
  • arXiv: 2205.11831



Eloïse Berthier, Ziad Kobeissi, Francis Bach. A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning. 2022. ⟨hal-03672958⟩