Practical complexity control in multilayer perceptrons

Patrick Gallinari, Tautvydas Cibas
APA - Apprentissage et Acquisition des connaissances, LIP6 - Laboratoire d'Informatique de Paris 6
Abstract: Model selection, i.e. discovering the model that best approximates an input–output relationship, is a key problem of supervised learning. For flexible or non-parametric models, it is often performed by controlling model complexity. This paper is intended as an introduction to these methods in the context of neural networks; it illustrates and analyses the effect and behaviour of simple, practical complexity control techniques on an artificial problem. The paper focuses on multilayer perceptrons, which are among the most popular non-linear regression and classification models. It first gives a brief review of the model selection and complexity control techniques that have been proposed in the neural network community or adapted from statistics. Simple complexity control methods that have proved well suited to practical applications are then introduced, and an experimental analysis illustrating why and how these methods work is described. The dependency of overfitting on network complexity is analysed, and the error evolution and the effects of these techniques are characterized within the perspective of the bias–variance trade-off. Different tools for analysing the effects of complexity control on the behaviour of multilayer perceptrons are then introduced to provide complementary insights into the observed behaviour.
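Weight decay is one of the simple, practical complexity control techniques of the kind the abstract refers to: a penalty on the squared weight norm is added to the training loss, shrinking the weights and limiting the effective flexibility of the network. The sketch below is not the authors' code; it is a minimal numpy illustration (all hyperparameters, the toy data set, and the network size are assumptions) of a one-hidden-layer multilayer perceptron trained by gradient descent, with and without the decay term.

```python
import numpy as np

# Toy 1-D regression problem: a noisy sine wave (illustrative data, not from the paper).
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 60).reshape(-1, 1)
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

def init(n_hidden=20, seed=1):
    """Small random initial weights for a 1 -> n_hidden -> 1 MLP."""
    r = np.random.default_rng(seed)
    return {
        "W1": r.standard_normal((1, n_hidden)) * 0.5,
        "b1": np.zeros(n_hidden),
        "W2": r.standard_normal((n_hidden, 1)) * 0.5,
        "b2": np.zeros(1),
    }

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])        # hidden activations
    return h, h @ p["W2"] + p["b2"]           # linear output

def train(p, X, y, lam, lr=0.05, epochs=2000):
    """Full-batch gradient descent on MSE + lam * ||W||^2 (weight decay)."""
    n = len(X)
    for _ in range(epochs):
        h, out = forward(p, X)
        err = out - y                          # dLoss/dout for MSE
        gW2 = h.T @ err / n + lam * p["W2"]    # decay adds lam * W to the gradient
        gb2 = err.mean(axis=0)
        dh = (err @ p["W2"].T) * (1 - h ** 2)  # backprop through tanh
        gW1 = X.T @ dh / n + lam * p["W1"]
        gb1 = dh.mean(axis=0)
        for k, g in zip(("W1", "b1", "W2", "b2"), (gW1, gb1, gW2, gb2)):
            p[k] -= lr * g
    return p

def mse(p):
    return float(np.mean((forward(p, X)[1] - y) ** 2))

def weight_norm(p):
    return float(np.sum(p["W1"] ** 2) + np.sum(p["W2"] ** 2))

p_plain = train(init(), X, y, lam=0.0)    # unregularized network
p_decay = train(init(), X, y, lam=1e-3)   # same initial weights, with decay
```

Starting both runs from identical weights isolates the effect of the penalty: the decayed network ends with a smaller weight norm, i.e. a smoother, lower-complexity fit, which is exactly the lever the bias–variance trade-off discussion in the paper turns on.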
Document type: Journal articles
Submitted on: Friday, August 14, 2015 - 3:57:31 PM
Last modification on: Friday, May 24, 2019 - 5:23:22 PM




Patrick Gallinari, Tautvydas Cibas. Practical complexity control in multilayer perceptrons. Signal Processing, Elsevier, 1999, 74 (1), pp.29-46. ⟨10.1016/S0165-1684(98)00200-X⟩. ⟨hal-01184484⟩


