Minimizing Calibrated Loss using Stochastic Low-Rank Newton Descent for large scale image classification

Abstract: A standard approach to large-scale image classification involves high-dimensional features and the Stochastic Gradient Descent (SGD) algorithm for the minimization of the classical hinge loss in the primal space. Although the complexity of Stochastic Gradient Descent is linear in the number of samples, this method suffers from slow convergence. In order to cope with this issue, we propose a Stochastic Low-Rank Newton Descent (SLND) for the minimization of any calibrated loss in the primal space. SLND approximates the inverse Hessian by its best low-rank approximation according to the squared Frobenius norm. We provide core optimizations for fast convergence. Theoretically speaking, we show explicit convergence rates of the algorithm for these calibrated losses, which in addition provide working sets of parameters for the experiments. Experiments are reported on the SUN, Caltech256 and ImageNet databases, with simple, uniform and efficient ways to tune the remaining SLND parameters. On each of these databases, SLND matches the accuracy of SGD while converging an order of magnitude faster.
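The abstract names the core mechanism, replacing the inverse Hessian with its best low-rank approximation in squared Frobenius norm, without spelling out the update rule. The sketch below is a minimal illustration of that idea, not the report's implementation: the function slnd_step, the full eigendecomposition, the eigenvalue floor eps, and the fixed step size eta are all assumptions made for the example. By the Eckart-Young theorem, the best rank-k Frobenius approximation of H^{-1} keeps the k largest eigenvalues of H^{-1}, which are the k smallest eigenvalues of H.

    import numpy as np

    def slnd_step(w, grad, H, k, eta, eps=1e-8):
        """One illustrative stochastic low-rank Newton step (hypothetical sketch)."""
        # Eigendecomposition of the symmetric Hessian estimate;
        # np.linalg.eigh returns eigenvalues in ascending order.
        eigvals, eigvecs = np.linalg.eigh(H)
        # The k smallest eigenvalues of H give the k largest of H^{-1},
        # i.e. the best rank-k Frobenius approximation of the inverse.
        lam = np.maximum(eigvals[:k], eps)  # floor is an illustrative regularization
        U = eigvecs[:, :k]
        # Preconditioned gradient: U diag(1/lam) U^T grad.
        step = U @ ((U.T @ grad) / lam)
        # Newton-style descent update with step size eta.
        return w - eta * step

In practice, H and grad would be estimated from mini-batches of the calibrated loss; the abstract indicates that the convergence analysis supplies working sets of parameters (such as the rank k and the step size) used in the experiments.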

https://hal.archives-ouvertes.fr/hal-00825414
Contributor: Michel Barlaud
Submitted on: Thursday, May 23, 2013 - 16:16:51
Last modified on: Saturday, September 17, 2016 - 01:37:08
Document(s) archived on: Tuesday, April 4, 2017 - 10:45:56

File

TechnicalReport.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-00825414, version 1

Collections

I3S | BNRMI | UNICE | UGA | LARA

Citation

Wafa Bel Haj Ali, Michel Barlaud, Richard Nock. Minimizing Calibrated Loss using Stochastic Low-Rank Newton Descent for large scale image classification. 2013. <hal-00825414>
