Journal article in Neurocomputing, 2011

TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization

Abstract

This paper proposes an improvement to the optimally pruned extreme learning machine (OP-ELM) in the form of an L2 regularization penalty applied within the OP-ELM. The OP-ELM is a wrapper methodology around the extreme learning machine (ELM), meant to reduce the ELM's sensitivity to irrelevant variables and to obtain more parsimonious models through neuron pruning. The proposed modification uses a cascade of two regularization penalties: first an L1 penalty to rank the neurons of the hidden layer, followed by an L2 penalty on the regression weights (the regression between the hidden layer and the output layer) for numerical stability and efficient pruning of the neurons. The new methodology is tested against state-of-the-art methods such as support vector machines and Gaussian processes, as well as the original ELM and OP-ELM, on 11 different data sets; it systematically outperforms the OP-ELM (on average 27% lower mean square error) and provides more reliable results, in terms of the standard deviation of the results, while always remaining less than one order of magnitude slower than the OP-ELM.
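The regularization cascade can be sketched in a few lines of NumPy and scikit-learn. The snippet below is an illustrative reconstruction, not the authors' code: the toy data, hidden-layer size, number of retained neurons k, and Tikhonov parameter lam are placeholder assumptions, and scikit-learn's lars_path stands in for the LARS step.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)

# Toy regression problem standing in for the paper's benchmark data sets.
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Step 1 -- standard ELM hidden layer: random weights, never retrained.
n_hidden = 50
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)  # (n_samples, n_hidden) hidden-layer outputs

# Step 2 -- L1 penalty: LARS ranks the hidden neurons by the order
# in which they enter the regularization path.
_, active, _ = lars_path(H, y, method="lar")
ranking = list(active)  # neuron indices, most useful first

# Step 3 -- L2 penalty (Tikhonov): ridge solution for the output weights,
# computed on the k best-ranked neurons. k and lam are arbitrary here.
k, lam = 20, 1e-2
Hk = H[:, ranking[:k]]
beta = np.linalg.solve(Hk.T @ Hk + lam * np.eye(k), Hk.T @ y)

y_hat = Hk @ beta
print("training MSE:", float(np.mean((y - y_hat) ** 2)))
```

In the paper itself, the pruning level and the regularization strength are not fixed by hand as above but selected by minimizing a leave-one-out error, which is the mechanism that makes the pruning both efficient and numerically stable.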

Dates and versions

hal-00648069, version 1 (05-12-2011)

Identifiers

Cite

Yoan Miche, Mark van Heeswijk, Patrick Bas, Amaury Lendasse, Olli Simula. TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization. Neurocomputing, 2011, 74 (16), pp.2413-2421. ⟨10.1016/j.neucom.2010.12.042⟩. ⟨hal-00648069⟩