M. A. Alam and K. Fukumizu, Hyperparameter selection in kernel principal component analysis, J. Comput. Sci, vol.10, issue.7, pp.1139-1150, 2014.
DOI : 10.3844/jcssp.2014.1139.1150

URL : http://thescipub.com/pdf/10.3844/jcssp.2014.1139.1150

S. Arlot and A. Celisse, A survey of cross-validation procedures for model selection, Stat. Surveys, vol.4, pp.40-79, 2010.
URL : https://hal.archives-ouvertes.fr/hal-00407906

F. Bachoc, Cross validation and maximum likelihood estimations of hyper-parameters of Gaussian processes with model misspecifications, Comput. Stat. Data Anal, vol.66, pp.55-69, 2013.

M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear programming: theory and algorithms, 2013.

D. P. Bertsekas, Nonlinear programming, 1999.

M. Berveiller, B. Sudret, and M. Lemaire, Stochastic finite elements: a non intrusive approach by regression, Eur. J. Comput. Mech, vol.15, issue.1-3, pp.81-92, 2006.
DOI : 10.3166/remn.15.81-92

URL : https://hal.archives-ouvertes.fr/hal-01665506/file/MBBSMLS.pdf

G. Blatman and B. Sudret, An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis, Prob. Eng. Mech, vol.25, pp.183-197, 2010.

G. Blatman and B. Sudret, Adaptive sparse polynomial chaos expansion based on Least Angle Regression, J. Comput. Phys, vol.230, pp.2345-2367, 2011.
DOI : 10.1016/j.jcp.2010.12.021

R. Calandra, J. Peters, C. E. Rasmussen, and M. P. Deisenroth, Manifold Gaussian processes for regression, 2016 International Joint Conference on Neural Networks (IJCNN), pp.3338-3345, 2016.
DOI : 10.1109/ijcnn.2016.7727626

URL : http://arxiv.org/pdf/1402.5876

F. Camastra, Data dimensionality estimation methods: a survey, Pattern Recognition, vol.36, issue.12, pp.2945-2954, 2003.
DOI : 10.1016/s0031-3203(03)00176-6

URL : http://people.sabanciuniv.edu/berrin/cs512/reading/camastra-dimensionality.pdf

M. Chevreuil, R. Lebrun, A. Nouy, and P. Rai, A least-squares method for sparse low rank approximation of multivariate functions, SIAM/ASA J. Uncer. Quant, vol.3, issue.1, pp.897-921, 2015.
URL : https://hal.archives-ouvertes.fr/hal-00861913

P. G. Constantine, E. Dow, and Q. Wang, Active subspace methods in theory and practice: applications to kriging surfaces, SIAM Journal on Scientific Computing, vol.36, issue.4, pp.1500-1524, 2014.

A. Damianou and N. Lawrence, Deep Gaussian processes, Artificial Intelligence and Statistics, pp.207-215, 2013.

J. Djolonga, A. Krause, and V. Cevher, High-dimensional Gaussian process bandits, Advances in Neural Information Processing Systems, pp.1025-1033, 2013.

O. Dubrule, Cross validation of Kriging in a unique neighborhood, J. Int. Assoc. Math. Geology, vol.15, issue.6, pp.687-699, 1983.

N. Durrande, D. Ginsbourger, and O. Roustant, Additive covariance kernels for high-dimensional Gaussian process modeling, Annales de la Faculté des Sciences de Toulouse, vol.21, p.481, 2012.
DOI : 10.5802/afst.1342

URL : https://hal.archives-ouvertes.fr/hal-00644934

M. Fornasier, K. Schnass, and J. Vybiral, Learning functions of few arbitrary linear parameters in high dimensions, Foundations of Computational Mathematics, vol.12, issue.2, pp.229-262, 2012.

K. Fukunaga, Introduction to statistical pattern recognition, 2013.

W. Gautschi, Orthogonal Polynomials: Computation and Approximation. Numerical Mathematics and Scientific Computation, 2004.

R. Ghanem and P. Spanos, Stochastic finite elements-A spectral approach, 1991.
DOI : 10.1007/978-1-4612-3094-6

D. E. Goldberg, Genetic algorithms in search, optimization and machine learning, 1989.

N. Hansen, S. D. Müller, and P. Koumoutsakos, Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES), Evolutionary Computation, vol.11, issue.1, pp.1-18, 2003.

T. Hastie, R. Tibshirani, and J. Friedman, The elements of statistical learning: Data mining, inference and prediction, 2001.

G. E. Hinton and S. T. Roweis, Stochastic neighbor embedding, Advances in neural information processing systems, pp.857-864, 2003.

G. E. Hinton and R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science, vol.313, issue.5786, pp.504-507, 2006.

W. Huang, D. Zhao, F. Sun, H. Liu, and E. Y. Chang, Scalable Gaussian process regression using deep neural networks. In IJCAI, pp.3576-3582, 2015.

A. Hyvärinen and E. Oja, One-unit learning rules for independent component analysis, Advances in neural information processing systems, pp.480-486, 1997.

B. Iooss and P. Lemaître, A review on global sensitivity analysis methods. In Uncertainty management in simulation-optimization of complex systems, pp.101-122, 2015.
DOI : 10.1007/978-1-4899-7547-8_5

URL : https://hal.archives-ouvertes.fr/hal-00975701

J. Jakeman, M. Eldred, and K. Sargsyan, Enhancing ℓ1-minimization estimates of polynomial chaos expansions using basis selection, J. Comput. Phys, vol.289, pp.18-34, 2015.

P. Kersaudy, B. Sudret, N. Varsier, O. Picon, and J. Wiart, A new surrogate modeling technique combining Kriging and polynomial chaos expansions-Application to uncertainty analysis in computational dosimetry, J. Comput. Phys, vol.286, pp.103-117, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01143146

K. Konakli and B. Sudret, Global sensitivity analysis using low-rank tensor approximations, Reliab. Eng. Sys. Safety, vol.156, pp.64-83, 2016.
DOI : 10.1016/j.ress.2016.07.012

URL : https://hal.archives-ouvertes.fr/hal-01428988

K. Konakli and B. Sudret, Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions, J. Comput. Phys, vol.321, pp.1144-1169, 2016.
DOI : 10.1016/j.jcp.2016.06.005

URL : https://hal.archives-ouvertes.fr/hal-01432141

J. T. Kwok and I. W. Tsang, The pre-image problem in kernel methods, Proc. 20th Int. Conf. Machine Learning (ICML-03), pp.408-415, 2003.

C. Lataniotis, S. Marelli, and B. Sudret, The Gaussian process modelling module in UQLab, Soft Comput. Civil Eng, vol.2, issue.3, pp.91-116, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01901966

N. Lawrence, Probabilistic non-linear principal component analysis with Gaussian process latent variable models, J. Machine Learning Research, vol.6, pp.1783-1816, 2005.

C. Li and A. Der Kiureghian, Optimal discretization of random fields, J. Eng. Mech, vol.119, issue.6, pp.1136-1154, 1993.

S. Marelli and B. Sudret, UQLab: A framework for uncertainty quantification in Matlab, Vulnerability, Uncertainty, and Risk (Proc. 2nd Int. Conf. on Vulnerability, Risk Analysis and Management (ICVRAM2014)), pp.2554-2563, 2014.

S. Marelli and B. Sudret, UQLab user manual-polynomial chaos expansions, Chair of Risk, Safety & Uncertainty Quantification, 2018.

M. D. McKay, R. J. Beckman, and W. J. Conover, A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics, vol.21, issue.2, pp.239-245, 1979.

M. Moustapha, B. Sudret, J. Bourinet, and B. Guillaume, Comparative study of Kriging and support vector regression for structural engineering applications, ASCE-ASME J. Risk Uncertainty Eng. Syst., Part A: Civ. Eng, vol.4, paper #04018005, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01893274

K. Pearson, On lines and planes of closest fit to systems of points in space, Phil. Mag, vol.6, issue.2, pp.559-572, 1901.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion et al., Scikit-learn: Machine learning in Python, J. Machine Learning Research, vol.12, pp.2825-2830, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00650905

C. Rasmussen and C. Williams, Gaussian processes for machine learning, Adaptive Computation and Machine Learning, 2006.

S. T. Roweis and L. K. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science, vol.290, issue.5500, pp.2323-2326, 2000.

J. Sacks, W. Welch, T. Mitchell, and H. Wynn, Design and analysis of computer experiments, Stat. Sci, vol.4, pp.409-435, 1989.

A. Saltelli, K. Chan, and E. M. Scott (Eds.), Sensitivity analysis, 2000.

A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni et al., Global Sensitivity Analysis-The Primer, 2008.

T. J. Santner, B. J. Williams, and W. I. Notz, The Design and Analysis of Computer Experiments, 2003.

B. Schölkopf, A. Smola, and K. Müller, Nonlinear component analysis as a kernel eigenvalue problem, Neural Comput, vol.10, issue.5, pp.1299-1319, 1998.

&. Sobol and I. , Sensitivity estimates for nonlinear mathematical models, Math. Modeling & Comp. Exp, vol.1, pp.407-414, 1993.

J. B. Tenenbaum, V. de Silva, and J. C. Langford, A global geometric framework for nonlinear dimensionality reduction, Science, vol.290, issue.5500, pp.2319-2323, 2000.

R. J. Tibshirani and R. Tibshirani, A bias correction for the minimum error rate in cross-validation, Ann. Applied Statistics, pp.822-829, 2009.

E. Torre, S. Marelli, P. Embrechts, and B. Sudret, Data-driven polynomial chaos expansion for machine learning regression, 2018.

V. Vapnik, The Nature of Statistical Learning Theory, 1995.

M. Verleysen and D. François, The curse of dimensionality in data mining and time series prediction, Computational Intelligence and Bioinspired Systems, vol.3512, pp.758-770, 2005.

P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol, Extracting and composing robust features with denoising autoencoders, Proc. 25th Int. Conf. Machine learning, pp.1096-1103, 2008.

N. Wahlström, T. B. Schön, and M. P. Deisenroth, Learning deep dynamical models from image pixels, IFAC-PapersOnLine, vol.48, issue.28, pp.1059-1064, 2015.

K. Q. Weinberger, F. Sha, and L. K. Saul, Learning a kernel matrix for nonlinear dimensionality reduction, 21st Int. Conf. on Machine Learning, p.106, 2004.

J. Weston, B. Schölkopf, and G. H. Bakir, Learning to find pre-images, Advances in neural information processing systems, pp.449-456, 2004.

A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing, Deep kernel learning, Artificial Intelligence and Statistics, pp.370-378, 2016.

D. Xiu, Numerical methods for stochastic computations-A spectral method approach, 2010.

D. Xiu and G. E. Karniadakis, The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput, vol.24, issue.2, pp.619-644, 2002.

Z. Yang, K. Tang, and X. Yao, Differential evolution for high-dimensional function optimization, Proc. IEEE Congress on Evolutionary Computation (CEC 2007), pp.3523-3530, 2007.