\ell_p-\ell_q penalty for sparse linear and sparse multiple kernel multi-task learning

Abstract: Recently, there has been a lot of interest in the multi-task learning (MTL) problem under the constraint that tasks share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on $\ell_p-\ell_q$ (with $0 \leq p \leq 1$ and $1 \leq q \leq 2$) mixed-norms as sparsity-inducing penalties. Our motivation for addressing this larger class of penalties is to adapt the penalty to the problem at hand, leading to better performance and a better sparsity pattern. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the $\ell_1-\ell_q$ penalty, which helps us in proposing an alternating optimization algorithm. Although very simple, this algorithm provably converges to the global minimum of the $\ell_1-\ell_q$ penalized problem. For the linear case, we extend existing work on accelerated proximal gradient methods to this penalty. Our contribution in this context is to provide an efficient scheme for computing the $\ell_1-\ell_q$ proximal operator. Then, for the more general case where $0 < p < 1$, we solve the resulting non-convex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted $\ell_1-\ell_q$ sparse MTL problem. Empirical evidence from a toy dataset and real-world datasets dealing with BCI single-trial EEG classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms.
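As context for the linear-case algorithm described above: in the $q = 2$ special case, the $\ell_1-\ell_q$ proximal operator reduces to the classical row-wise block soft-thresholding of the (features × tasks) coefficient matrix. The short Python sketch below illustrates only this special case; the function name and matrix layout are illustrative assumptions, and the paper's efficient scheme for general $1 \leq q \leq 2$ is not reproduced here.

import numpy as np

def prox_l1_l2(W, lam):
    # Proximal operator of lam * sum_j ||W[j, :]||_2 (the q = 2 mixed norm):
    # each row of W (one feature, all tasks) is shrunk jointly toward zero.
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # max(0, 1 - lam / ||row||) scales the row; the epsilon guards all-zero rows.
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(row_norms, 1e-12))
    return scale * W

Within an accelerated proximal gradient scheme, such an operator would be applied after each gradient step, e.g. W = prox_l1_l2(W - step * grad, step * lam), with grad the gradient of the data-fitting term (these names are hypothetical).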
Document type:
Journal article
IEEE Trans. on Neural Networks, 2011, 22 (8), pp. 1307-1320

https://hal.archives-ouvertes.fr/hal-00509608
Contributor: Alain Rakotomamonjy
Submitted on: Friday, 13 August 2010 - 15:23:06
Last modified on: Tuesday, 3 October 2017 - 14:52:07
Archived on: Sunday, 14 November 2010 - 02:39:54

File

mtlfeatsel.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-00509608, version 1

Citation

Alain Rakotomamonjy, Remi Flamary, Gilles Gasso, Stéphane Canu. \ell_p-\ell_q penalty for sparse linear and sparse multiple kernel multi-task learning. IEEE Trans. on Neural Networks, 2011, 22 (8), pp. 1307-1320. 〈hal-00509608〉

Metrics

Record views: 342
Document downloads: 394