Efficient top rank optimization with gradient boosting for supervised anomaly detection

Abstract: In this paper we address the anomaly detection problem in a supervised setting where positive examples may be very sparse. We tackle this task with a learning-to-rank strategy, optimizing a differentiable smoothed surrogate of the Average Precision (AP). Despite its non-convexity, we show how to use this surrogate efficiently in a stochastic gradient boosting framework. We show that optimizing AP yields much better top-ranked alerts than state-of-the-art measures, and we demonstrate on anomaly detection tasks that the advantage of our method is even more pronounced in highly unbalanced scenarios.
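The abstract describes optimizing a smoothed, differentiable surrogate of AP whose gradients drive the boosting rounds. Since only the PDF is linked here, the following is a minimal sketch, not the authors' implementation: it assumes the common sigmoid-of-score-differences smoothing with a temperature `tau` (a hypothetical hyperparameter name) and uses JAX autodiff to obtain the per-example gradients that a boosting round would fit with its next weak learner.

```python
# Minimal sketch of a sigmoid-smoothed Average Precision surrogate.
# Assumptions (not from the paper): sigmoid smoothing of the pairwise
# "ranked above" indicator, temperature hyperparameter `tau`.
import jax
import jax.numpy as jnp

def smoothed_ap(scores, labels, tau=1.0):
    """Differentiable AP surrogate: the 0/1 indicator 1[s_j > s_i]
    is replaced by sigmoid((s_j - s_i) / tau)."""
    pos = labels == 1
    # soft_above[i, j] ~ 1 when example j is ranked above example i
    diff = (scores[None, :] - scores[:, None]) / tau
    soft_above = jax.nn.sigmoid(diff)
    # Smoothed rank of each example: 1 + sum over others of the soft
    # indicator. The self term sigma(0) = 0.5 is folded into the +0.5.
    rank = soft_above.sum(axis=1) + 0.5
    # Smoothed count of positives ranked at or above each positive.
    rank_pos = (soft_above * pos[None, :]).sum(axis=1) + 0.5
    # AP ~ mean over positives of precision at their (smoothed) rank.
    return jnp.where(pos, rank_pos / rank, 0.0).sum() / pos.sum()

# Per-example gradients of the surrogate w.r.t. the current scores:
# in a gradient boosting round these would be the (ascent-direction)
# pseudo-residuals fitted by the next weak learner.
ap_grad = jax.grad(smoothed_ap)

scores = jnp.array([2.1, -0.3, 0.7, 1.5])
labels = jnp.array([1, 0, 0, 1])
print(smoothed_ap(scores, labels))  # surrogate value in [0, 1]
print(ap_grad(scores, labels))      # per-example gradients
```

Because AP is a score to maximize, a boosting implementation would either fit weak learners to this gradient directly or minimize the loss 1 - AP; the smoothed surrogate remains non-convex, which is why the paper pairs it with a stochastic gradient boosting scheme.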
Document type: Conference papers

Cited literature [26 references]

https://hal.archives-ouvertes.fr/hal-01611346
Contributor: Jordan Frery
Submitted on: Tuesday, October 17, 2017 - 4:35:13 PM
Last modification on: Tuesday, October 16, 2018 - 4:59:49 PM
Long-term archiving on: Thursday, January 18, 2018 - 3:43:05 PM

File

ECML2017_CR.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01611346, version 2

Citation

Jordan Frery, Amaury Habrard, Marc Sebban, Olivier Caelen, Liyun He-Guelton. Efficient top rank optimization with gradient boosting for supervised anomaly detection. European Conference on Machine Learning & Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD'17), Sep 2017, Skopje, Macedonia. ⟨hal-01611346v2⟩

Metrics

Record views: 83
File downloads: 197