Multi-task learning with one-class SVM - Archive ouverte HAL
Journal article in Neurocomputing, 2014

Multi-task learning with one-class SVM

Abstract

Multi-task learning has been developed as an effective way to improve generalization performance by training multiple related tasks simultaneously. Determining the relatedness between tasks is usually the key to formulating a multi-task learning method. In this paper, we assume that when tasks are related to each other, their models are close to one another, that is, their models or model parameters lie close to a certain mean function. Following this task-relatedness assumption, two multi-task learning formulations based on one-class Support Vector Machines (one-class SVM) are presented. With the help of a new kernel design, both multi-task learning methods can be solved by the optimization program of a single one-class SVM. Experiments conducted on both a low-dimensional nonlinear toy dataset and high-dimensional textured images show that our approaches lead to very encouraging results.
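The abstract states that a suitable kernel design lets both multi-task formulations be solved as a single one-class SVM. The paper's exact kernel is not reproduced here; the sketch below uses a common mean-regularized multi-task kernel of the form K((x, s), (x', t)) = (mu + [s = t]) * k(x, x'), which matches the "models close to a mean function" assumption, with an RBF base kernel. All data, parameter values, and the choice of mu and gamma are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Two related one-class tasks: clusters around nearby means (toy data,
# not from the paper).
X1 = rng.normal(loc=0.0, scale=0.5, size=(40, 2))
X2 = rng.normal(loc=0.5, scale=0.5, size=(40, 2))
X = np.vstack([X1, X2])
task = np.array([0] * 40 + [1] * 40)  # task index of each sample

def multitask_kernel(Xa, ta, Xb, tb, mu=1.0, gamma=1.0):
    """K((x, s), (x', t)) = (mu + [s == t]) * k_rbf(x, x').

    mu controls task coupling: a large mu pulls all tasks toward one
    shared model, while mu -> 0 recovers independent per-task models.
    (This is an assumed kernel form, not necessarily the paper's.)
    """
    base = rbf_kernel(Xa, Xb, gamma=gamma)
    same_task = (ta[:, None] == tb[None, :]).astype(float)
    return (mu + same_task) * base

# Precompute the Gram matrix over all tasks and train a SINGLE
# one-class SVM, as the abstract describes.
K = multitask_kernel(X, task, X, task)
ocsvm = OneClassSVM(kernel="precomputed", nu=0.1).fit(K)

# Score new points as if they belonged to task 0: the Gram matrix of
# test points against the training set is all the model needs.
X_new = np.array([[0.0, 0.0], [5.0, 5.0]])
t_new = np.zeros(2, dtype=int)
K_new = multitask_kernel(X_new, t_new, X, task)
pred = ocsvm.predict(K_new)  # +1 for inliers, -1 for outliers
```

Because the task structure lives entirely inside the Gram matrix, any off-the-shelf one-class SVM solver can be reused unchanged, which is the practical appeal of the kernel-design route.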
No file deposited

Dates and versions

hal-00915458 , version 1 (07-12-2013)

Identifiers

Cite

Xiyan He, Gilles Mourot, Didier Maquin, José Ragot, Pierre Beauseroy, et al.. Multi-task learning with one-class SVM. Neurocomputing, 2014, 133, pp.416-426. ⟨10.1016/j.neucom.2013.12.022⟩. ⟨hal-00915458⟩