D. Aldous, Interacting particle systems as stochastic social dynamics, Bernoulli, vol.19, issue.4, pp.1122-1149, 2013.

D. Aldous and D. Lanoue, A lecture on the averaging process, Probability Surveys, vol.9, pp.90-102, 2012.

K. Avrachenkov, L. Cottatellucci, and M. Hamidouche, Eigenvalues and spectral dimension of random geometric graphs in thermodynamic regime, International Conference on Complex Networks and Their Applications, pp.965-975, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02397383

F. Bach and E. Moulines, Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n), Advances in Neural Information Processing Systems, pp.773-781, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00831977

B. Bauer, L. Devroye, M. Kohler, A. Krzyżak, and H. Walk, Nonparametric estimation of a function from noiseless observations at random points, Journal of Multivariate Analysis, vol.160, pp.93-104, 2017.

R. Berthier, F. Bach, and P. Gaillard, Accelerated gossip in networks of given dimension using Jacobi polynomial iterations, SIAM Journal on Mathematics of Data Science, vol.2, issue.1, pp.24-47, 2020.
URL : https://hal.archives-ouvertes.fr/hal-01797016

L. Bottou and O. Bousquet, The tradeoffs of large scale learning, Advances in Neural Information Processing Systems, vol.20, pp.161-168, 2008.

L. Bottou and Y. Le Cun, On-line learning for very large data sets, Applied Stochastic Models in Business and Industry, vol.21, issue.2, pp.137-151, 2005.

A. Caponnetto and E. De Vito, Optimal rates for the regularized least-squares algorithm, Foundations of Computational Mathematics, vol.7, issue.3, pp.331-368, 2007.

V. Cevher and B. C. Vũ, On the linear convergence of the stochastic gradient method with constant step-size, Optimization Letters, vol.13, issue.5, pp.1177-1187, 2019.

F. R. K. Chung, Spectral Graph Theory, Number 92 in CBMS Regional Conference Series in Mathematics, American Mathematical Society, 1997.

A. Dieuleveut and F. Bach, Nonparametric stochastic approximation with large step-sizes, The Annals of Statistics, vol.44, issue.4, pp.1363-1399, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01053831

A. Dieuleveut, N. Flammarion, and F. Bach, Harder, better, faster, stronger convergence rates for least-squares regression, The Journal of Machine Learning Research, vol.18, issue.1, pp.3520-3570, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01275431

S. Fischer and I. Steinwart, Sobolev norm learning rates for regularized least-squares algorithm, arXiv preprint, 2017.

L. Györfi, M. Kohler, A. Krzyżak, and H. Walk, A Distribution-Free Theory of Nonparametric Regression, Springer, 2006.

T. Hofmann, B. Schölkopf, and A. J. Smola, Kernel methods in machine learning, The Annals of Statistics, vol.36, issue.3, pp.1171-1220, 2008.

K. Jun, A. Cutkosky, and F. Orabona, Kernel truncated randomized ridge regression: Optimal rates and low noise acceleration, Advances in Neural Information Processing Systems, pp.15332-15341, 2019.

M. Kohler and A. Krzyżak, Optimal global rates of convergence for interpolation problems with random design, Statistics & Probability Letters, vol.83, issue.8, pp.1871-1879, 2013.

J. Lin and V. Cevher, Optimal convergence for distributed learning with stochastic gradient methods and spectral-regularization algorithms, arXiv preprint, 2018.

S. Ma, R. Bassily, and M. Belkin, The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning, Proceedings of the 35th International Conference on Machine Learning, pp.3325-3334, 2018.

P. Mathieu and E. Remy, Isoperimetry and heat kernel decay on percolation clusters, The Annals of Probability, vol.32, pp.100-128, 2004.

N. Mücke, G. Neu, and L. Rosasco, Beating SGD saturation with tail-averaging and minibatching, Advances in Neural Information Processing Systems, pp.12568-12577, 2019.

A. Nedic, A. Ozdaglar, and P. A. Parrilo, Constrained consensus and optimization in multi-agent networks, IEEE Transactions on Automatic Control, vol.55, issue.4, pp.922-938, 2010.

L. Pillaud-Vivien, A. Rudi, and F. Bach, Statistical optimality of stochastic gradient descent on hard learning problems through multiple passes, Advances in Neural Information Processing Systems, pp.8114-8124, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01799116

L. Rosasco and S. Villa, Learning with incremental iterative regularization, Advances in Neural Information Processing Systems, pp.1630-1638, 2015.

M. Schmidt and N. Le Roux, Fast convergence of stochastic gradient descent under a strong growth condition, arXiv preprint, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00855113

D. Shah, Gossip algorithms, Foundations and Trends® in Networking, vol.3, issue.1, pp.1-125, 2009.

P. Tarrès and Y. Yao, Online learning as stochastic approximation of regularization paths: Optimality and almost-sure convergence, IEEE Transactions on Information Theory, vol.60, issue.9, pp.5716-5735, 2014.

A. B. Tsybakov, Introduction to Nonparametric Estimation, Springer, 2008.

S. Vaswani, F. Bach, and M. Schmidt, Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron, Proceedings of Machine Learning Research, pp.1195-1204, 2019.

G. Wahba, Spline Models for Observational Data, Society for Industrial and Applied Mathematics, 1990.

H. Wendland, Scattered Data Approximation, Cambridge Monographs on Applied and Computational Mathematics, 2004.

Y. Ying and M. Pontil, Online gradient descent learning algorithms, Foundations of Computational Mathematics, vol.8, issue.5, pp.561-596, 2008.