Abstract: We train an artificial neural network with one hidden layer on realizations of the first few eigenvalues of a partial differential operator that is parameterized by a vector of independent random variables. The eigenvalues exhibit "crossings" in the high-dimensional parameter space. The training set is constructed by sampling the parameter either at random nodes or at the Smolyak collocation nodes. The performance of the neural network is evaluated empirically on a large random test set. We find that training on random or quasi-random nodes is preferable to training on the Smolyak nodes. The neural network outperforms the Smolyak interpolation in terms of error bias and variance on nonsimple eigenvalues but not on the simple ones.
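The setup described above can be sketched in miniature. The snippet below is a hypothetical, illustrative stand-in, not the paper's actual operator or code: it replaces the parametric PDE operator with a symmetric 2x2 matrix whose eigenvalues cross as the parameter varies, samples the parameter at random nodes (the sampling strategy the abstract finds preferable), trains a one-hidden-layer network by plain gradient descent on the mean-squared error, and evaluates it on an independent random test set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the parametric operator: a symmetric 2x2
# matrix A(y) whose two eigenvalues cross as the parameter y varies.
# (Illustrative only; the paper uses a parametric PDE operator.)
def eigenvalues(y):
    A = np.array([[y[0], 0.1],
                  [0.1, -y[0] + y[1]]])
    return np.linalg.eigvalsh(A)  # ascending order

# Training set: parameters sampled at random nodes.
Y = rng.uniform(-1.0, 1.0, size=(500, 2))
T = np.array([eigenvalues(y) for y in Y])  # targets: both eigenvalues

# One-hidden-layer network with tanh activation.
n_hidden = 32
W1 = rng.normal(0.0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 2)); b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

# Full-batch gradient descent on the mean-squared error.
lr = 0.05
losses = []
for epoch in range(2000):
    H, P = forward(Y)
    E = P - T
    losses.append(np.mean(E**2))
    # Backpropagation through the single hidden layer.
    gW2 = H.T @ E / len(Y); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H**2)   # tanh derivative
    gW1 = Y.T @ dH / len(Y); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Empirical evaluation on a large independent random test set.
Y_test = rng.uniform(-1.0, 1.0, size=(200, 2))
T_test = np.array([eigenvalues(y) for y in Y_test])
_, P_test = forward(Y_test)
test_mse = np.mean((P_test - T_test)**2)
print(f"train MSE: {losses[-1]:.4f}, test MSE: {test_mse:.4f}")
```

Because the targets are sorted eigenvalues, the map y -> (lambda_1(y), lambda_2(y)) is continuous but not smooth at the crossings, which is the feature that makes such targets hard for polynomial (Smolyak) interpolation and is where the abstract reports the network doing better.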
https://hal.archives-ouvertes.fr/hal-01313404
Contributor: Roman Andreev
Submitted on: Monday, December 12, 2016 - 17:40:19
Last modified on: Friday, January 4, 2019 - 17:32:31
Long-term archiving on: Tuesday, March 28, 2017 - 00:38:57