Regularity dependence of the rate of convergence of the learning curve for Gaussian process regression

Abstract: This paper deals with the rate of convergence of the learning curve in a Gaussian process regression framework. The learning curve describes the average generalization error of the Gaussian process used for the regression. More specifically, it is defined in this paper as the integral of the mean squared error over the input parameter space with respect to the probability measure of the input parameters. The main result is a theorem giving the mean squared error as a function of the number of observations, for a large class of kernels and in any dimension, when the number of observations is large. From this result, we deduce the asymptotic behavior of the generalization error. The presented proof generalizes previous ones that were limited to more specific kernels or to small dimensions (one or two). The result can be used to build an optimal strategy for resource allocation. This strategy is applied successfully to a nuclear safety problem.
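The learning curve described in the abstract can be estimated empirically by Monte Carlo integration: for each training-set size, fit a Gaussian process and average the squared prediction error over test points drawn from the input measure. The sketch below is only an illustration of that definition, not the paper's method; the test function, the uniform input measure, and the use of scikit-learn with a Matérn kernel are all assumptions chosen for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical 1-D target function standing in for the true regression function.
    return np.sin(2 * np.pi * x[:, 0])

def generalization_error(n_train, n_test=2000):
    """Monte Carlo estimate of the integrated mean squared error
    (the learning-curve value at n_train): the squared prediction error
    is averaged over test points drawn from the input measure
    (uniform on [0, 1] in this sketch)."""
    x_train = rng.uniform(0.0, 1.0, size=(n_train, 1))
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-8)
    gp.fit(x_train, f(x_train))
    x_test = rng.uniform(0.0, 1.0, size=(n_test, 1))
    return float(np.mean((gp.predict(x_test) - f(x_test)) ** 2))

# Empirical learning curve: the error decays as the number of observations
# grows, at a rate that depends on the regularity of the kernel.
errors = [generalization_error(n) for n in (10, 20, 40, 80)]
```

The Matérn smoothness parameter `nu` controls the kernel's regularity, which is precisely the quantity the paper relates to the decay rate of the curve.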
Document type:
Preprint, working paper
2012

Cited literature [24 references]

https://hal.archives-ouvertes.fr/hal-00737342
Contributor: Loic Le Gratiet <>
Submitted on: Thursday, January 10, 2013 - 16:00:54
Last modified on: Monday, May 29, 2017 - 14:22:01
Document(s) archived on: Thursday, April 11, 2013 - 04:06:49

Files

Convergence_BLUP_arxiv.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-00737342, version 3
  • ARXIV : 1210.2879

Citation

Loic Le Gratiet, Josselin Garnier. Regularity dependence of the rate of convergence of the learning curve for Gaussian process regression. 2012. 〈hal-00737342v3〉

Metrics

Record views: 296
File downloads: 115