M. Fazel, Matrix Rank Minimization with Applications, PhD thesis, Stanford University, 2002.

S. Lee and S. Wright, Manifold identification in dual averaging for regularized stochastic online learning, Journal of Machine Learning Research, vol.13, pp.1705-1744, 2012.

C. Poon, J. Liang, and C. Schönlieb, Local convergence properties of SAGA/Prox-SVRG and acceleration, Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.

L. Rosasco, S. Villa, and B. C. Vũ, A stochastic inertial forward-backward splitting algorithm for multivariate monotone inclusions, Optimization, vol.65, issue.6, pp.1293-1314, 2016.

G. Stewart, On the perturbation of pseudoinverses, projections and linear least squares problems, SIAM Review, vol.19, issue.4, pp.634-662, 1977.

R. Tibshirani, Regression shrinkage and selection via the Lasso, Journal of the Royal Statistical Society: Series B (Methodological), vol.58, issue.1, pp.267-288, 1996.

R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight, Sparsity and smoothness via the fused lasso, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.67, issue.1, pp.91-108, 2005.

S. Vaiter, G. Peyré, and J. Fadili, Model consistency of partly smooth regularizers, IEEE Transactions on Information Theory, 2014.
URL: https://hal.archives-ouvertes.fr/hal-00987293

A. W. van der Vaart, Asymptotic Statistics, vol.3, Cambridge University Press, 1998.

L. Xiao, Dual averaging methods for regularized stochastic learning and online optimization, Journal of Machine Learning Research, vol.11, pp.2543-2596, 2010.

L. Xiao and T. Zhang, A proximal stochastic gradient method with progressive variance reduction, SIAM Journal on Optimization, vol.24, issue.4, pp.2057-2075, 2014.

M. Yuan and Y. Lin, Model selection and estimation in regression with grouped variables, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.68, issue.1, pp.49-67, 2006.

P. Zhao and B. Yu, On model selection consistency of Lasso, The Journal of Machine Learning Research, vol.7, pp.2541-2563, 2006.