Impact of hidden weights choice on accuracy of MLP with randomly fixed hidden neurons for regression problems
Abstract
Neural networks are well-known tools that can learn models from data with good accuracy. However, they suffer from a high computational cost, which may be prohibitive. One alternative is to fix the weights and biases connecting the input layer to the hidden layer. This approach has recently been called the extreme learning machine (ELM), and it learns a model quickly. The multilayer perceptron and the ELM have identical structures; the main difference is that in the ELM only the parameters linking the hidden layer to the output layer are learned. The weights and biases connecting the input layer to the hidden layer are chosen randomly and do not evolve during learning. The impact of the choice of these random parameters on model accuracy has not been studied in the literature. This paper draws on the extensive literature concerning the initialization of feedforward neural networks. Several feedforward neural network initialization algorithms are recalled and used to determine the ELM parameters connecting the input layer to the hidden layer. These algorithms are tested and compared on several regression benchmark problems.
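To make the setup concrete, the following is a minimal sketch of ELM regression as described above: the input-to-hidden weights and biases are drawn at random and frozen, and only the hidden-to-output weights are learned, in closed form by least squares. The tanh activation and the uniform distribution on [-1, 1] are assumptions for illustration; the paper's point is precisely that this choice of random parameters affects accuracy.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, rng=None):
    """Train a single-hidden-layer ELM for regression.

    Input-to-hidden weights and biases are random and fixed;
    only the hidden-to-output weights are learned (least squares).
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Random, frozen input-layer parameters (uniform in [-1, 1] is an assumption).
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # learned output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage: approximate a smooth 1-D function.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sinc(X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=100, rng=0)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because only `beta` is optimized, training reduces to a single linear least-squares solve, which is why the ELM is fast compared with gradient-based MLP training.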
Origin: Files produced by the author(s)