Conference paper, 2017

On Structured Prediction Theory with Calibrated Convex Surrogate Losses

Abstract

We provide novel theoretical insights into structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent, and we prove tight bounds on the so-called "calibration function" relating the excess surrogate risk to the actual risk. In contrast to prior related work, we carefully track the effect of the exponential number of classes on both the learning guarantees and the optimization complexity. As an interesting consequence, we formalize the intuition that some task losses make learning harder than others, and that the classical 0-1 loss is ill-suited for structured prediction.
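For orientation, the "calibration function" mentioned above is a standard object in this literature; the following display is a minimal sketch in assumed notation, not the paper's exact definition. Writing $R_L(f)$ for the risk of a predictor $f$ under the task loss $L$, $R_\Phi(f)$ for its risk under the surrogate $\Phi$, and $R_L^{*}$, $R_\Phi^{*}$ for the corresponding minimal risks, a calibration function $H$ maps a target excess actual risk $\varepsilon$ to the excess surrogate risk that suffices to guarantee it:

$$H_{\Phi,L}(\varepsilon) \;=\; \inf_{f}\, \bigl\{\, R_\Phi(f) - R_\Phi^{*} \;:\; R_L(f) - R_L^{*} \ge \varepsilon \,\bigr\},$$

so that, by construction, $R_\Phi(f) - R_\Phi^{*} < H_{\Phi,L}(\varepsilon)$ implies $R_L(f) - R_L^{*} < \varepsilon$. A fast-growing $H$ means a small surrogate optimization error already controls the task loss, whereas an $H$ that shrinks with the (exponentially large) number of structured outputs makes learning statistically hard; this is the sense in which the abstract says that some task losses are harder than others and that the 0-1 loss is ill-suited for structured prediction.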

Dates and versions

hal-01611691, version 1 (06-10-2017)


Cite

Anton Osokin, Francis Bach, Simon Lacoste-Julien. On Structured Prediction Theory with Calibrated Convex Surrogate Losses. The Thirty-first Annual Conference on Neural Information Processing Systems (NIPS), Dec 2017, Long Beach, United States. ⟨hal-01611691⟩