Nearest Neighbors Strategy, ${\mathbb {P}}_1$ Lagrange Interpolation, and Error Estimates for a Learning Neural Network - Institut Camille Jordan
Journal Article in SN Computer Science, 2021


Abstract

Approximating a function with a learning neural network (LNN) has long been studied by many authors and is known as "the universal approximation property". The smaller the required error tolerance, the more neurons the hidden layer must contain to reach it. Another challenge for learning neural networks is understanding how to reduce the computational expense and how to select the training examples. Our contribution here is to consider an LNN as a discretization of the time-dependent transport of the identity function, Id, towards the function T to be approximated by the LNN. Using classical interpolation properties of ${\mathbb {P}}_1$ Lagrange functions, we are able to give space-time error estimates for the simple LNN we introduce.
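The abstract invokes the classical interpolation estimate for ${\mathbb {P}}_1$ (piecewise-linear) Lagrange functions: for a smooth function, the interpolation error is $O(h^2)$ in the mesh size $h$. A minimal numerical sketch of that classical property (not of the paper's LNN construction itself; the function `sin` and the mesh sizes are chosen purely for illustration):

```python
import numpy as np

def p1_interpolation_error(f, a, b, n):
    """Max error of the P1 Lagrange (piecewise-linear) interpolant
    of f on [a, b] built on a uniform mesh with n subintervals."""
    nodes = np.linspace(a, b, n + 1)
    x = np.linspace(a, b, 10 * n + 1)       # fine evaluation grid
    fh = np.interp(x, nodes, f(nodes))      # piecewise-linear interpolant
    return np.max(np.abs(f(x) - fh))

e1 = p1_interpolation_error(np.sin, 0.0, np.pi, 16)
e2 = p1_interpolation_error(np.sin, 0.0, np.pi, 32)
# Classical estimate: error <= (h^2 / 8) * max|f''|, so halving h
# should divide the error by roughly 4.
```

Running this, the ratio `e1 / e2` comes out close to 4, consistent with the second-order estimate that underlies the paper's space-time error analysis.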
No file deposited

Dates and versions

hal-04444606 , version 1 (07-02-2024)

Identifiers

Cite

Khadidja Benmansour, Jerome Pousin. Nearest Neighbors Strategy, ${\mathbb {P}}_1$ Lagrange Interpolation, and Error Estimates for a Learning Neural Network. SN Computer Science, 2021, 2 (1), pp.38. ⟨10.1007/s42979-020-00409-3⟩. ⟨hal-04444606⟩