HAL: hal-00004908, version 2
 arXiv: math.ST/0505333
 Available versions: v1 (2005-05-16) v2 (2006-03-07)
 Recursive Aggregation of Estimators by Mirror Descent Algorithm with Averaging
 (2005-05-12)
We consider a recursive algorithm to construct an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex risk functional under the $\ell_1$-constraint. It is defined by a stochastic version of the mirror descent algorithm (i.e., of the method which performs gradient descent in the dual space) with additional averaging. The main result of the paper is an upper bound on the expected accuracy of the proposed estimator. This bound is of order $\sqrt{(\log M)/t}$, with an explicit and small constant factor, where $M$ is the dimension of the problem and $t$ stands for the sample size. A similar bound is proved for a more general setting that covers, in particular, the regression model with squared loss.
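The abstract's algorithm can be illustrated with a minimal sketch. The version below is an assumption-laden toy, not the authors' exact method: it uses entropic mirror descent (the softmax mirror map, so the primal iterates stay on the probability simplex inside the $\ell_1$-ball), a constant step size rather than the paper's calibrated schedule, and the logistic surrogate loss as a stand-in for a generic convex risk. The function name and array layout are hypothetical.

```python
import numpy as np

def mirror_descent_aggregate(H, y, step=0.5):
    """Stochastic entropic mirror descent with averaging (illustrative sketch).

    H: (t, M) array; H[i, j] is the prediction of base rule j on example i.
    y: (t,) array of labels in {-1, +1}.
    Returns the averaged weight vector, which lies on the simplex.
    """
    t, M = H.shape
    z = np.zeros(M)      # dual variable: accumulated (scaled) gradients
    avg = np.zeros(M)    # running sum of primal iterates, for averaging
    for i in range(t):
        # Mirror step back to the primal: softmax of the dual variable
        # (numerically stabilized by subtracting the max).
        u = -step * z
        w = np.exp(u - u.max())
        w /= w.sum()
        # Stochastic gradient of the logistic loss log(1 + exp(-margin))
        # with respect to the aggregation weights w.
        margin = y[i] * (H[i] @ w)
        g = -y[i] * H[i] / (1.0 + np.exp(margin))
        z += g           # gradient descent performed in the dual space
        avg += w
    return avg / t       # averaging of the primal iterates
```

On synthetic data where one base rule is exactly correct, the averaged weights concentrate on that rule while remaining a valid element of the simplex, mirroring the aggregation behavior described in the abstract.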
1: Laboratoire de Modélisation et Calcul (LMC - IMAG), CNRS : UMR5523 – Université Joseph Fourier - Grenoble I – Institut National Polytechnique de Grenoble (INPG)
2: Institute of Control Sciences (ICS)
3: Laboratoire de Probabilités et Modèles Aléatoires (LPMA), CNRS : UMR7599 – Université Pierre et Marie Curie (UPMC) - Paris VI – Université Paris VII - Paris Diderot
 Subject : Mathematics/Statistics
Keyword(s): aggregation – stochastic approximation – classification – SVM – convex risk minimization – rates of convergence – online learning
Files attached to this document:
PS: mda-ai.ps (220.7 KB)
PDF: mda-ai.pdf (269.7 KB)
http://hal.archives-ouvertes.fr/hal-00004908
oai:hal.archives-ouvertes.fr:hal-00004908
From: Nicolas Vayatis
Submitted on: Tuesday, 7 March 2006 19:12:43
Updated on: Tuesday, 7 March 2006 19:17:40