On the (Non-)existence of Convex, Calibrated Surrogate Losses for Ranking

Abstract: We study surrogate losses for learning to rank, in a framework where rankings are induced by scores and the task is to learn the scoring function. We focus on the calibration of surrogate losses with respect to a ranking evaluation metric: calibration guarantees that near-optimal values of the surrogate risk imply near-optimal values of the risk defined by the evaluation metric. We prove that if a surrogate loss is a convex function of the scores, then it is not calibrated with respect to two evaluation metrics widely used for search engine evaluation, namely Average Precision and Expected Reciprocal Rank. We also show that such convex surrogate losses cannot be calibrated with respect to the Pairwise Disagreement, an evaluation metric used when learning from pairwise preferences. Our results cast light on the intrinsic difficulty of some ranking problems, as well as on the limitations of learning-to-rank algorithms based on the minimization of a convex surrogate risk.
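To make the score-based setting concrete, here is a minimal illustrative sketch (not taken from the paper) of two of the evaluation metrics it discusses, Average Precision and Pairwise Disagreement, computed from the ranking induced by a score vector; Expected Reciprocal Rank is omitted for brevity, and the tie-handling convention is an assumption.

```python
import numpy as np

def average_precision(relevance, scores):
    """Average Precision (AP) of the ranking induced by sorting items
    by decreasing score: the mean of precision@k over the positions k
    that hold a relevant item (binary relevance assumed)."""
    scores = np.asarray(scores)
    rel = np.asarray(relevance)[np.argsort(-scores)]  # relevance in rank order
    precision_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return precision_at_k[rel == 1].mean()

def pairwise_disagreement(preferences, scores):
    """Pairwise Disagreement: the fraction of observed preferences
    (i preferred over j) that the score-induced ranking violates
    (ties counted as disagreements here; conventions vary)."""
    violated = sum(scores[i] <= scores[j] for i, j in preferences)
    return violated / len(preferences)
```

For example, with relevance `[1, 0, 1, 0]` and scores `[0.9, 0.8, 0.7, 0.1]`, the relevant items sit at ranks 1 and 3, so AP is (1 + 2/3) / 2 ≈ 0.83.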

Contributor: Clément Calauzènes
Submitted on: Saturday, September 14, 2013 - 7:00:05 AM
Last modification on: Thursday, March 21, 2019 - 2:42:17 PM
Long-term archiving on: Tuesday, April 4, 2017 - 9:47:47 PM




  • HAL Id: hal-00834050, version 1


Clément Calauzènes, Nicolas Usunier, Patrick Gallinari. On the (Non-)existence of Convex, Calibrated Surrogate Losses for Ranking. Advances in Neural Information Processing Systems 25 (NIPS 2012), Dec 2012, Lake Tahoe, NV, United States. pp.197-205. ⟨hal-00834050⟩


