Efficient online learning with kernels for adversarial large scale problems
Abstract
We are interested in a framework of online learning with kernels for low-dimensional, large-scale, and potentially adversarial datasets. Focusing on the Gaussian kernel, we study the computational and theoretical performance of online variants of kernel ridge regression. The resulting algorithm is based on approximating the Gaussian kernel through a Taylor expansion. For $d$-dimensional inputs, it achieves a (close to) optimal regret of order $O((\log n)^{d+1})$ with per-round time and space complexity $O((\log n)^{2d})$. This makes the algorithm a suitable choice as soon as $n \gg e^d$, which is likely in scenarios with a small-dimensional but large-scale dataset.
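As a rough illustration of the Taylor-expansion idea (a minimal 1-D sketch, not the paper's actual algorithm), the Gaussian kernel factorizes as $e^{-(x-y)^2/2\sigma^2} = e^{-x^2/2\sigma^2}\, e^{-y^2/2\sigma^2}\, e^{xy/\sigma^2}$, and expanding $e^{xy/\sigma^2}$ in a Taylor series yields an explicit finite-dimensional feature map after truncation. The function name, the choice $\sigma = 1$, and truncation degree 8 below are illustrative assumptions:

```python
import math
import numpy as np

def taylor_features(x, degree, sigma=1.0):
    """Truncated Taylor feature map for the 1-D Gaussian kernel.

    phi_k(x) = exp(-x^2 / (2 sigma^2)) * x^k / (sigma^k * sqrt(k!)),
    for k = 0..degree, so that
    <phi(x), phi(y)> ~= exp(-(x - y)^2 / (2 sigma^2)).
    """
    x = np.asarray(x, dtype=float)
    ks = np.arange(degree + 1)
    coeffs = 1.0 / (sigma**ks * np.sqrt([math.factorial(k) for k in ks]))
    # Broadcasting: (n, 1) ** (degree+1,) -> (n, degree+1) feature matrix.
    return np.exp(-x[:, None] ** 2 / (2 * sigma**2)) * x[:, None] ** ks * coeffs

# Compare the exact kernel value with its truncated approximation.
x, y = np.array([0.3]), np.array([-0.5])
exact = math.exp(-(0.3 - (-0.5)) ** 2 / 2)
approx = (taylor_features(x, 8) @ taylor_features(y, 8).T).item()
```

In this finite feature space, kernel ridge regression reduces to ordinary ridge regression, which can be updated online; in $d$ dimensions the analogous map has one coordinate per multi-index, which is what drives the $O((\log n)^{2d})$ complexity once the truncation degree is taken of order $\log n$.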
Main file
Efficient_online_learning_with_Kernels_for_adversarial_large_scale_problems__preprint.pdf (768.25 KB)
Origin: Files produced by the author(s)