The Reference Point Method, a "hyperreduction" technique: Application to PGD-based nonlinear model reduction

A new approximation technique, called the Reference Point Method (RPM), is proposed in order to reduce the computational complexity of the algebraic operations involved in constructing reduced-order models for time-dependent and/or parametrized nonlinear partial differential equations. Even though model reduction techniques enable one to decrease the dimension of the initial problem, in the sense that far fewer degrees of freedom are needed to represent the solution, the complexity of evaluating the nonlinear terms and assembling the low-dimensional operator associated with the reduced-order model still scales with the size of the original high-dimensional model. This point can be critical, especially when the reduced-order basis changes throughout the solution strategy, as is the case for model reduction techniques based on the Proper Generalized Decomposition (PGD). Based on the concepts of spatial and parameter/time reference points and of influence patches, the RPM defines a compressed version of the data from which an approximate low-rank separated representation by patch of the operators can be constructed by explicit formulas, at low cost and without resorting to SVD-based techniques. An application of the RPM to PGD-based model reduction for a nonlinear parametrized elliptic PDE, previously studied by other authors with the reduced-basis method and EIM, is proposed. It is shown that the computational complexity of constructing the reduced-order model can in practice be reduced by one order of magnitude compared with the classical PGD approach.


Introduction
Numerical simulation has been playing an increasingly important role in science and engineering due to the need to describe realistic scenarios and to derive tools that facilitate the virtual design of new structures while reducing the use of real prototypes. Most of the time, engineers are interested in specific quantities called outputs (e.g. forces, critical stresses, pressure drops, etc.) in some particular zones, rather than in the description of the model at each point of the considered domain. These outputs are functions of the system parameters, called inputs (e.g. geometry parameters, material properties, boundary conditions and loadings, etc.), that define a particular configuration of the model. The evaluation of these outputs requires the solution of the underlying PDE, whose computational cost can be very high in some engineering fields, especially when the simulations concern nonlinear analyses of complex high-fidelity models.
Moreover, engineering design and optimization may require thousands of these evaluations, sometimes in real time. A challenging goal is to provide engineers with a kind of Virtual Chart constructed during an offline stage that can be CPU intensive. Once constructed, it can be used online to solve real-time or many-query simulations. Despite the continuing progress in computer speeds and hardware capabilities, the construction of such charts would be hardly tractable with classical numerical approaches.
Model reduction techniques aim at circumventing this obstacle by seeking the solution of a given problem in a reduced-order basis (ROB), whose dimension is much smaller than the size of the original high-dimensional model. These techniques take advantage of the redundancy of information that usually exists when describing the solution. In applied mathematics, the first proposed model reduction technique was the Proper Orthogonal Decomposition (see [1,2,3,4,5] or the review [6] for more examples). Techniques based on POD involve a learning phase which consists in solving the problem at some particular time instants and/or parameter values arbitrarily chosen, either from the full-order model in space or, sometimes, from a simplified model. These solutions, called snapshots, can be CPU-expensive to compute. A ROB is subsequently formed by considering only the most relevant POD modes of these snapshots. The reduced-order model (ROM) is then classically generated by Galerkin projection onto the ROB and solved for the entire time and/or parameter domain, but other approaches such as Petrov-Galerkin or minimization techniques can also be used. The strength of these techniques lies in the fact that the number of the most relevant POD modes is often much lower than the size of the full-order model in space; however, it is case-sensitive [7]. For instance, in dynamics computations, changes in boundary conditions can drastically affect the POD accuracy and a high number of snapshots may be required [7,8]. Since the 80's, the Reduced-Basis (RB) approach has been developed; it relies on a greedy algorithm to select the most relevant calculations to be performed on the parametric space in order to enrich the ROB [9,10,11,12,13,14,15,16,17]. In this technique, snapshots are selected by a greedy algorithm such that the new (n + 1)th snapshot minimizes the residual, associated with a chosen norm, of the solution obtained by solving the original problem projected onto the n-order reduced basis. In that way, this strategy ensures the quality of the reduced-order basis for the construction of the ROM and palliates the case-sensitivity of POD.
Another appealing family of model reduction techniques, which has received growing interest during the last decade, is based on the Proper Generalized Decomposition (PGD). This technique does not need to solve the full-order model since it does not require snapshots to build up the ROB. Basically, PGD consists in seeking the solution of a problem in a relevant ROB which is generated automatically and on-the-fly by a greedy or power iterations algorithm. The interested reader is referred to [18] for a review of PGD-based techniques and to the book [19] for a handbook on separated representations and model reduction techniques. A valuable survey on the use of separated representations for solving parametric models can also be found in [20]. In the field of computational mechanics, PGD was used in [21] under the name of radial loading approximation as one of the fundamental ingredients of the LATIN (Large Time Increment) method [22], a non-incremental solution strategy for nonlinear evolution problems. By an alternating-direction scheme, the strategy generates approximations of the solution on the entire time-space domain by successive corrections and, consequently, is well suited to time-space separated representation. In this context, PGD made it possible to efficiently circumvent the storage issue of iterates defined on the whole time-space domain while drastically reducing the computational time. One of the significant improvements of the strategy concerns the introduction of a Preliminary step, which consists in first updating the time functions at each LATIN iteration by using the ROB generated at the previous iteration in order to compute a new iterate [23,24,25,26,27,28]. Called Update step in [29,30] with Newton-like solution strategies, this step has proved efficient at reducing the number of PGD functions generated while keeping the most relevant ones to produce the best approximate separated representation of the solution. However, this step can be time consuming in practice and may represent between fifty and seventy percent of the total CPU time [31].
More generally, model reduction techniques are particularly efficient when the ROM needs to be constructed only once, or when this step can be performed offline, prior to the online resolution of this model, which can then be very fast. This is the case for parametrized time-invariant systems [32] and for linear stationary and quasi-stationary systems whose operators are affine functions of the input parameters [16]. On the contrary, when the projection is applied to linear dynamics, to stationary systems with non-affine parameter dependence, or to general nonlinear problems, the resulting ROM is costly to assemble, decreasing the efficiency of reduced-order modeling. Indeed, for any model reduction technique based on the projection of the problem onto a given reduced-order basis, the computational complexity of evaluating the nonlinear terms (Jacobian or residual of the solution strategy) and assembling the ROM's low-dimensional operator scales with the size of the original high-dimensional model. This point can be critical, especially when the reduced-order basis changes throughout the solution strategy, as is the case for model reduction techniques based on the Proper Generalized Decomposition (PGD). This constitutes the bottleneck of model reduction strategies applied to nonlinear problems.
Several techniques have been introduced in the literature in order to tackle this issue, especially for model reduction techniques based on a learning stage (POD-Galerkin or the reduced-basis method for instance). Among them, the Empirical Interpolation Method (EIM) is probably one of the most popular. It has been developed in particular for linear elliptic problems with non-affine parameter dependence [33] as well as for nonlinear elliptic and parabolic problems [15]. Several variants can be found in the literature; to name a few, one can cite the Best Points Interpolation Method (BPIM) [34] or the Discrete EIM (DEIM) [35]. Another family of techniques that tackles nonlinear reduced-order modeling builds on the Gappy-POD approach, such as the A Priori Hyper-Reduction (APHR) [36], the Missing Point Estimation (MPE) [37], and the Gauss-Newton with Approximated Tensors (GNAT) [38]. More recently, [39] introduced the Energy-Conserving Sampling and Weighting (ECSW), which not only focuses on the accuracy of the approximated function, but also preserves the energetic aspect of the solution.
Concerning PGD-based model reduction, techniques to overcome the aforementioned bottleneck seem to be far less numerous. As previously said, the possibility to enrich the reduced-order basis throughout the computation is definitely an advantage for tackling nonlinear problems efficiently. However, the fact that the ROB changes quite often makes techniques such as EIM a priori less suitable in the context of PGD. Indeed, for a fixed number of best/magic interpolation points, the error in the coefficient functions used to interpolate the nonlinear terms ultimately dominates when the size of the ROB increases [15]. Adding more magic points may be necessary as long as the ROB evolves. Such an adaptive strategy can obviously be derived, but it may not be very efficient in this context, at least in the LATIN framework.
In the present article, a more pragmatic approach is proposed. Based on the concept of spatial, time and parameter reference points associated with influence patches, the Reference Point Method (RPM) defines a compressed version of the data from which an approximate low-rank separated representation by patch of the operators can be constructed by explicit formulas at low cost. In this work, the RPM is applied to PGD-based nonlinear model reduction with the LATIN solution strategy. More precisely, it is used at each LATIN iteration to solve the Preliminary step, which consists in an update of the reduced model from the current ROB. As previously said, this step is time consuming and may represent up to seventy percent of the total CPU time. In practice, the RPM decreases the cost of this stage by one order of magnitude, so that it represents less than ten percent of the total computational time, which is very promising. It is important to note that, contrary to an interpolation technique, the RPM is rather an approximation technique for the integrals involved in the ROM construction, similarly to quadrature techniques in classical finite element methods. The key point is that, even though the integral computations are not accurate enough, the PGD model reduction technique can nonetheless compensate by adding new correction functions to the ROB at the next LATIN iterations, which ensures the convergence of the whole process.
The article is structured as follows. A reference problem previously investigated by other authors with reduced-basis techniques and EIM [15] or POD with DEIM [35] is first presented in Section 2. The studied nonlinear parametrized elliptic PDE is solved by the LATIN-PGD method and the aforementioned bottleneck of nonlinear model reduction is illustrated. The Reference Point Method is presented in Section 3 and its basic features (reference points, patches, generalized components) are introduced. It is shown that the space of generalized components provides a framework with interesting properties with respect to elementary algebraic operations [40], which simplify the evaluation of integrands. The way to reconstruct, by explicit formulas, an approximate low-rank separated representation by patch of the operators is also detailed. Keeping a separated representation of the quantities and operators without resorting to SVD-based techniques is indeed a great advantage regarding integral computation by separation of variables. A comparison between the RPM applied to PGD-based nonlinear model reduction and EIM with the reduced-basis technique is proposed in Section 4 in order to assess and illustrate the performance of the RPM.

PGD-based model reduction of a nonlinear parametrized elliptic PDE
The reference problem that motivates the study is first presented in Section 2.1. It was previously investigated through EIM with the reduced-basis technique in [15] and DEIM with POD in [35]. In these works, a Newton-based nonlinear solution strategy was used. Here, an alternating-direction scheme, the LATIN method, is used as detailed in Appendix A. The LATIN iterative scheme generates approximations of the solution on the entire space-parameter domain by successive corrections and, consequently, is well suited to space-parameter separated representation. A PGD-based model reduction is introduced within the LATIN method as detailed in Section 2.2. The issue of the computational complexity of nonlinear model reduction is discussed in Section 2.3.

Reference problem formulation
Let us consider as a reference problem the nonlinear parametrized elliptic PDE first studied in [15,35], defined on a spatial domain Ω = (0, 1)² ⊂ R² and a parameter domain D = [0.01, 10]² ⊂ R², and stated as follows:

Problem 1 (Reference problem). Given µ = (µ₁, µ₂) ∈ D, find u(x, µ) such that:

−∇²u + g(u; µ) = f(x) in Ω × D,

with g(u; µ) = µ₁ (e^{µ₂u} − 1)/µ₂ and f(x) = 100 sin(2πx₁) sin(2πx₂), and homogeneous Dirichlet boundary condition on ∂Ω × D, such that u = 0 on ∂Ω for all µ ∈ D. This problem can be interpreted as a stationary diffusion problem with a parametrized nonlinear interior heat source term.
A space weak formulation of Problem 1 can be proposed. In this case, the solution u is identified with a function defined on D with values in the Hilbert space V = H¹₀(Ω), and Problem 1 can be stated as follows:

Problem 2 (Space weak form). Given µ ∈ D, find u : D → V such that:

a(u(µ), v) + ∫_Ω g(u(µ); µ) v dx = l(v) for all v ∈ V,

where a(•, •) is a bilinear form on V and l(•) is a linear form on V defined by:

a(u, v) = ∫_Ω ∇u · ∇v dx, l(v) = ∫_Ω f v dx.

The term g(u; µ) is a nonaffine nonlinear function of the parameter µ, the spatial coordinate x, and the field variable u(x, µ). The bilinear form a(u, v) and the linear form l(v) are V-continuous bounded functionals and are parameter-independent. It is shown in [15] that Problem 2 is well-posed and admits a unique solution u ∈ V.
A space-parameter weak formulation of Problem 1 can also be proposed. The following function space is introduced:

S = { u : D → V ; ∫_D ‖u(µ)‖²_V dµ < +∞ },

where ‖•‖_V is a norm on V. A space-parameter weak formulation of Problem 2 can be defined by the following problem:

Problem 3 (Space-parameter weak form). Find u ∈ S such that:

A(u, v) + G(u; v) = L(v) for all v ∈ S,

where A and L are bilinear and linear forms defined by:

A(u, v) = ∫_D a(u(µ), v(µ)) dµ, L(v) = ∫_D l(v(µ)) dµ.

The nonlinear form G is defined by:

G(u; v) = ∫_D ∫_Ω g(u; µ) v dx dµ.

Problem 2 can be easily solved by introducing a finite element approximation space V_h ⊂ V and by using a Newton-based nonlinear solution strategy. In the following, the reference solution refers to the solution obtained with a finite element mesh of 50 × 50 bilinear quadrilateral elements, leading to a finite element approximation space of dimension N = 2601. The approximate solution u_h of u for two sets of parameters, µ = (0.01, 0.01) and µ = (10, 10), is given in Figure 1. It can be seen that the parameter µ₁ controls the strength of the sink term whereas µ₂ changes the strength of the nonlinearity, the exponential function µ₁e^{µ₂u} increasing the peak magnitude of the negative part of u more than that of the positive part.

The LATIN-PGD method
In order to solve Problem 3, the LATIN method is used as a nonlinear solution strategy (see Appendix A for further details). By an alternating-direction scheme, the strategy generates approximations of the solution on the entire space-parameter domain by successive corrections. In this case, it shares similar features with alternating-direction nonlinear solution strategies and augmented Lagrangian methods [41].
The LATIN iterative scheme generates approximations of the solution on the entire space-parameter domain by successive corrections and, consequently, is well suited to space-parameter separated representation. By denoting P = L²(D; R²) := L²(D) and by identifying the space S = L²(D, V) with the tensor product space V ⊗ P, the correction δu^(n+1) of the global stage (see Problem 6 in Appendix A) is sought in V ⊗ P. The following approximation of order k of the iterate u^(n+1) is proposed:

u^(n+1) ≈ u^(0) + Σ_{i=1}^{k} λ_i(µ) Φ_i(x), (1)

where u^(0) ∈ A_d is an initial approximation of the solution that verifies the boundary conditions and which can possibly be given in separated representation provided that it is admissible. Each PGD pair (λ_i, Φ_i) ∈ P × V is unknown and determined throughout the computation by a greedy algorithm.
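In discrete form, such an order-k separated representation amounts to storing a space matrix and a parameter matrix instead of the full space-parameter field. A minimal NumPy sketch with illustrative sizes (N = 2601 matches the reference mesh; p and k are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, k = 2601, 400, 8              # space dofs, parameter samples, PGD order

Phi = rng.standard_normal((N, k))   # space functions Phi_i (columns)
lam = rng.standard_normal((p, k))   # parameter functions lambda_i (one row per mu sample)

# Full field on the whole space-parameter grid: U[a, b] = sum_i lam[b, i] * Phi[a, i];
# a PGD solver never needs to form it explicitly
U = Phi @ lam.T                     # N x p

# Storage comparison: separated vs full
sep_storage = Phi.size + lam.size   # (N + p) * k values
full_storage = U.size               # N * p values
```

The separated format stores (N + p)k values instead of Np, which is the redundancy that PGD exploits.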
The construction of a new space function Φ_i is by far the most expensive step of this process. Thus, at a given iteration n + 1 of the nonlinear solver, it is advantageous to first reuse the reduced-order basis W_k = {Φ_i}_{1≤i≤k} generated up to iteration n by updating the parameter functions {λ_i}_{1≤i≤k} [27]. One proceeds with the global stage at the iteration n + 1 of the nonlinear solver as follows:

1. Preliminary step: reuse of the current reduced-order basis [23,24,25,26,27,30]. This step consists in building an approximation of the solution, denoted by s̆^(n+1) = (ŭ^(n+1), w̆^(n+1)), thanks to the ROB generated at the previous iteration n of the nonlinear iterative scheme. This is similar to what is done during the online stage of classical reduced-order basis techniques. The main difference comes from the fact that here the ROB evolves throughout the iterations, which makes pre-computation of operators inefficient. Due to the bilinearity and linearity of the forms involved in (A.7) associated with the global stage (see Problem 5 in Appendix A), one seeks an approximation of order k of the correction δu^(n+1) such that:

δu_k^(n+1) = Σ_{i=1}^{k} δλ_i(µ) Φ_i(x). (2)

Assuming that an approximation of order k of u^(n) is available from the previous iteration n of the nonlinear iterative scheme, an approximation of ŭ^(n+1) can be deduced as follows:

ŭ^(n+1) = u^(n) + δu_k^(n+1) ≈ u^(0) + Σ_{i=1}^{k} (λ_i(µ) + δλ_i(µ)) Φ_i(x). (3)

Here, the only unknowns are the functions {δλ_i}_{1≤i≤k} depending on the parameters. Given a ROB of space functions W_k, the update can be stated as:

Problem 4 (Preliminary/update step). Find parameter corrections {δλ_i}_{1≤i≤k} solution of the projection of the global stage equations onto the ROB W_k (equation (4)).

It is worth noting that the terms a(Φ_i, Φ_j) are parameter-independent and can be computed once for all parameter values provided that the ROB does not change. However, the nonlinear terms Φ_i H⁻(u^(n); µ) Φ_j and the residual R(u^(n), Φ_j λ*; µ) depend on g(u^(n); µ) and have to be evaluated for each new parameter value and iterate u^(n) anyway. As shown in Section 2.3, this results in a computational cost that scales with the size of the original high-dimensional model.

2. Preliminary step performance indicator. The preliminary step produces a first approximation ŭ^(n+1) ≈ u of Problem 6 (see Appendix A) at the iteration n + 1 by seeking the solution in the span of the already existing ROB, generated at the previous iteration n (see (2)). In order to appreciate the improvement of the solution between two consecutive LATIN iterations thanks to the preliminary step, a performance indicator η₀ [42], based on the LATIN error indicator δ_L (see (A.9)), is proposed (5); it involves in particular the distance e₂ = ‖ŭ^(n+1) − û^(n+1/2)‖ between the preliminary-step prediction and the local-stage solution. Due to the particular choice made for the search direction H⁺ (see (A.2)), this indicator takes a simplified computable form (6). If the value of η₀ is higher than a given threshold, the correction δu_k^(n+1) is significant and the global stage at the iteration n + 1 is considered to be solved. One can then proceed to a new LATIN iteration, that is to say to a new local stage. Otherwise, one proceeds to the generation of a new PGD pair as described hereafter.

3. Generation of a new PGD pair. The prediction ŭ^(n+1) ≈ u^(n) + δu_k^(n+1) previously computed during the preliminary step is considered to be known, but it is not good enough: the performance indicator (5) is lower than the given threshold. A new PGD pair is then sought to enrich the previous approximation by solving Problem 6 (see Appendix A) with a progressive Galerkin PGD procedure. The new PGD pair is generated to approximate the correction δu^(n+1) = u^(n+1) − u^(n) of the LATIN iteration n + 1 (see (2)). Let us assume that a decomposition δu_k^(n+1) (denoted by δu_k in the following to alleviate the notations) of order k is known. A new approximation of u^(n+1) can then be defined as:

u^(n+1) ≈ u^(n) + δu_k + λ(µ) Φ(x). (7)

The progressive definition of the new couple (Φ, λ) ∈ V ⊗ P is the one that verifies the Galerkin orthogonality condition (8), which can be rewritten as (9). The two following mappings are introduced:

Definition 1 (Space function mapping). S_{k+1} : P → V is the application that maps a parameter function λ ∈ P into the space function Φ = S_{k+1}(λ) ∈ V defined as the solution of the space problem obtained from (9) for fixed λ.

Definition 2 (Parameter function mapping). P_{k+1} : V → P is the application that maps a space function Φ ∈ V into the parameter function λ = P_{k+1}(Φ) ∈ P defined as the solution of the parameter problem obtained from (9) for fixed Φ.

Definition 3 (Progressive Galerkin PGD). A couple (Φ, λ) ∈ V ⊗ P verifies (8) if and only if Φ = S_{k+1}(λ) and λ = P_{k+1}(Φ), i.e. λ (resp. Φ) is a fixed point of the mapping P_{k+1} ∘ S_{k+1} (resp. S_{k+1} ∘ P_{k+1}).

Algorithm 1 Progressive PGD to generate the (k + 1)th new PGD pair (power iterations algorithm)
1: Initialize λ
2: for i = 1 to i_max do
3: Compute Φ = S_{k+1}(λ) ⊲ Solve problem of Definition 1
4: Normalize Φ
5: Compute λ = P_{k+1}(Φ) ⊲ Solve problem of Definition 2
6: Check stagnation of λ ⊲ Use for instance the L²(D) norm
7: end for
8: Φ_{k+1} ← Φ and λ_{k+1} ← λ

A power iterations algorithm that iteratively generates the parameter function λ and the space function Φ is used, as shown in Algorithm 1. The interested reader can refer to [29,22,43] for further details. Once this new pair of parameter and space functions is computed, the (k + 1)th space function is orthogonalized and added to the reduced basis to form W_{k+1}. In practice, only one new PGD pair is added per LATIN iteration to approximate the correction δu^(n+1), but nothing prevents one from generating more.
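The fixed-point iterations of Algorithm 1 can be illustrated on a discrete analogue, where the two mappings reduce to alternating least-squares steps on a residual matrix. This is a sketch with illustrative data, not the paper's actual Galerkin mappings S_{k+1} and P_{k+1}:

```python
import numpy as np

def rank_one_pair(R, n_iter=100, tol=1e-12):
    """Discrete analogue of Algorithm 1: alternating fixed-point (power) iterations
    producing a rank-one pair (Phi, lam) such that Phi lam^T best approximates R."""
    lam = np.ones(R.shape[1])                 # initialization of the parameter function
    for _ in range(n_iter):
        Phi = R @ lam / (lam @ lam)           # 'space mapping': best Phi for fixed lam
        lam_new = R.T @ Phi / (Phi @ Phi)     # 'parameter mapping': best lam for fixed Phi
        if np.linalg.norm(lam_new - lam) < tol * np.linalg.norm(lam_new):
            lam = lam_new
            break                             # stagnation check on lam (L2 norm)
        lam = lam_new
    return Phi, lam

rng = np.random.default_rng(1)
# residual with a clearly dominant rank-one contribution plus noise
R = np.outer(rng.standard_normal(200), rng.standard_normal(40)) \
    + 0.01 * rng.standard_normal((200, 40))
Phi, lam = rank_one_pair(R)
```

At convergence the pair spans the dominant singular direction of R, which is why the scheme behaves like power iteration.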
The final algorithm of the LATIN-PGD nonlinear solution strategy is given in Algorithm 2 of Appendix B.

Remark 1. In the case of high-dimensional parametric models, different update strategies can be proposed [30]. More precisely, for a large number r of parameters (µ₁, ..., µ_r), a separated-variables representation of the solution of the form u(x, µ₁, ..., µ_r) ≈ Σ_{i=1}^{k} Φ_i(x) λ¹_i(µ₁) ... λ^r_i(µ_r) can be proposed similarly to (1) (see for instance [20]). Among others, a possible strategy consists in successive updates of the k functions {λ^l_i}_{1≤i≤k} along each selected dimension l according to a particular sequence [30]. In this case, the update of the k functions {λ^l_i}_{1≤i≤k} for a given dimension l, with λ^l_i ∈ P_l, can be stated as Problem 4, whose dimension is k × dim(P_l). Obviously, the updating step along a particular dimension l is affordable provided that dim(P_l) is reasonably small.
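A discrete sketch of one such update along the dimension l = 1, with all factors along the other dimensions frozen; shapes, sampled grids and the synthetic target are illustrative assumptions. The update reduces to small least-squares problems of size k, one per sampled value of µ₁, solved here all at once:

```python
import numpy as np

rng = np.random.default_rng(4)
N, p1, p2, k = 50, 20, 30, 3          # space dofs, mu_1 and mu_2 samples, PGD order

Phi  = rng.standard_normal((N, k))    # frozen space functions
lam2 = rng.standard_normal((p2, k))   # frozen functions of mu_2

# synthetic target generated by known "true" mu_1 functions, to check the update
lam1_true = rng.standard_normal((p1, k))
T = np.einsum('ni,ai,bi->nab', Phi, lam1_true, lam2)   # T[n, a, b]

# update of the k functions lambda^1_i: for each value a of mu_1, a k-dimensional
# least-squares problem with design matrix built from the frozen factors
M = np.einsum('ni,bi->nbi', Phi, lam2).reshape(N * p2, k)
B = T.transpose(1, 0, 2).reshape(p1, N * p2).T         # right-hand sides, one per a
lam1_new = np.linalg.lstsq(M, B, rcond=None)[0].T      # p1 x k
```

Since the target is exactly representable with the frozen factors, the least-squares update recovers the generating functions, mirroring the fact that the dimension-wise update is exact when the solution lies in the current separated format.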

Bottleneck of nonlinear model reduction -A brief complexity analysis
In order to illustrate the bottleneck of nonlinear model reduction and to explain why the construction of the reduced-order model involved in the preliminary step is CPU intensive, let us come back to Problem 4 solved at the preliminary step. By introducing a finite element approximation space V_h ⊂ V spanned by N shape functions {φ_j}_{1≤j≤N}, a space function Φ_i(x) can be approximated as Φ_i(x) ≈ Σ_{j=1}^{N} Φ_i^j φ_j(x) = Φ_i^j φ_j in indicial notation, where repeated indices are summed over their range. Thus (4) can be discretized in matrix form (13)-(14), where A and G are N × N matrices whose entries stem from the discretization of the bilinear form a and of the nonlinear term respectively, and the elements of the N-length vector R are given by the discretization of the residual. If a finite element approximation space P_h ⊂ P of dimension p is now introduced, a parameter function λ_i(µ) can be approximated in the same way. Denoting by W_k the discretized k-order reduced basis, the sum over i of the first terms in (14) gives a Kronecker-product expression M ⊗ (W_k^T A W_k), where ⊗ denotes the tensor product of matrices (Kronecker product) and M is a p × p matrix whose entries stem from the parameter discretization. Projection of the discretized operator A onto the discretized reduced-order basis W_k involves two matrix-matrix products, resulting in a computational complexity that scales with the size N of the full-order model. Since the discretized operator A is parameter-independent, separation of variables is possible to compute the integrals over the space-parameter domain (Kronecker product). Consequently, provided that the ROB does not evolve, the k × k matrix W_k^T A W_k can be pre-computed and factorized, if needed, offline. This is the reason why model reduction techniques are particularly efficient and appealing for parametrized time-invariant systems [4] or linear stationary and quasi-stationary systems whose operators are affine functions of the input parameters [16].
On the contrary, the parameter-dependency of the discretized operator G in the second term of (14) prevents the separation of integrals: separating the operator as a tensor product is not straightforward.
Moreover, it has to be evaluated for each new iterate u^(n)(x, µ), at a computational complexity of O(pNk²) (resp. O(pN²k)) if the discretized operator is sparse (resp. full) [15,35]. One can also show that the complexity of evaluating the nonlinear residual (right-hand side of (14)) is O(pNk). In short, the expected decrease in computational cost brought by the reduced-basis approximation is consequently quite modest, whatever the dimension reduction k ≪ N. This bottleneck is clearly not specific to PGD-based model reduction but concerns reduced-basis approximation in nonlinear model reduction in general. It arises from the facts that: (i) the Galerkin projection onto the ROB has to be performed as soon as the reduced-order basis evolves, and (ii) the nonlinear terms (Jacobian and residual) have to be evaluated at each iteration of the solution strategy for each new iterate, which prevents pre-computation of operators. Much work has already been devoted to this issue, as discussed in the introduction.
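The asymmetry between the affine and nonlinear terms can be made concrete in a few lines of NumPy: the reduced operator W_kᵀAW_k is computed once, while the reduced tangent of the nonlinear term must be reassembled, at a cost proportional to N, for every new iterate and parameter value. Treating H⁻ as a diagonal matrix is an assumption consistent with the pointwise nonlinearity g(u; µ); the operator A below is a random symmetric stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 2601, 10                                    # full-order size, ROB size

A = rng.standard_normal((N, N)); A = A + A.T       # parameter-independent operator (stand-in)
W = np.linalg.qr(rng.standard_normal((N, k)))[0]   # orthonormal discretized ROB

A_red = W.T @ A @ W   # k x k reduced operator: computed ONCE, reused for every mu

def reduced_tangent(u, mu1, mu2):
    """Reduced nonlinear tangent for the diagonal term diag(mu1 * exp(mu2 * u)):
    must be rebuilt for every new iterate u and parameter mu -- cost O(N k^2)."""
    d = mu1 * np.exp(mu2 * u)           # dg/du at the current iterate
    return (W * d[:, None]).T @ W       # W^T diag(d) W without forming the N x N matrix

u_iterate = 0.1 * rng.standard_normal(N)
G_red = reduced_tangent(u_iterate, 0.01, 0.01)
```

The trick `(W * d[:, None]).T @ W` avoids materializing the N × N diagonal matrix, but the O(Nk²) dependence on N remains, which is precisely the bottleneck discussed above.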

The Reference Point Method
The goal of the Reference Point Method described in this section is to decrease the computational cost of the integrals arising from the Galerkin projection. To that end, two main features are introduced: (i) an approximation framework based on reference points to simplify the evaluation of integrands; (ii) a low-cost technique to reconstruct, by explicit formulas, an approximate low-rank separated representation by patch of the operators without resorting to SVD-based techniques, which is a great advantage regarding integral computation by separation of variables. The motivations of the proposed technique are first presented in Section 3.1. The basics of the Reference Point Method are given in Section 3.2 before being applied to Problem 1 in Section 3.3.1. A brief computational complexity analysis is proposed in Section 3.3.2 to estimate the expected gain in computational cost.

Motivations
Among the techniques introduced to tackle the aforementioned bottleneck, the Empirical Interpolation Method (EIM) [33] is probably one of the most popular. It proposes an inexpensive interpolation procedure for nonlinear terms based on the offline construction of a suitable additional reduced-basis approximation space. More precisely, EIM provides an approximation of the nonlinear function g of the form:

g(u^(n); µ) ≈ g_M(u^(n); µ) = Σ_{m=1}^{M} φ_m^M(µ) q_m(x),

where M (with M ≪ N, p) interpolation "magic" points {x_m^M}_{1≤m≤M} of the space domain and M "magic" points {µ_m^M}_{1≤m≤M} of the parameter domain are computed by a greedy selection process in order to build the M interpolation functions {q_m(x)}_{1≤m≤M} during an offline stage. For each iterate u^(n), the coefficient functions {φ_m^M(µ)}_{1≤m≤M} have to be updated online from the reduced-order model of u^(n) evaluated at the interpolation magic points {x_m^M}_{1≤m≤M}. Note that, according to [15], the approximation g_M is interpolant at the magic spatial points, i.e. g_M(u^(n); x_m^M, µ) = g(u^(n); x_m^M, µ) for 1 ≤ m ≤ M. Let us consider Problem 4 involved at the preliminary step, which is nothing more than an update of the reduced model from the current ROB. Thanks to EIM, the tangent operator can be approximated with the partial derivatives φ̇_m^M of φ_m^M with respect to the argument µ. The approximation of the residual R with EIM is done accordingly but is not detailed hereafter; the interested reader can refer to [15] for further details. Provided that the M functions {q_m(x)}_{1≤m≤M} and the reduced-order basis W_k = {Φ_i}_{1≤i≤k} are known, the terms a(Φ_i, Φ_j) and ∫_Ω Φ_i q_m(x) Φ_j dx are parameter-independent and can thus be pre-computed. In this case, it is shown in [15] that the computational complexity at each iteration for the Galerkin projection and for the determination of the coefficient functions involves only M and k. Since M, k ≪ N, the online complexity is consequently independent of N, the dimension of the underlying initial finite element approximation space.
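For concreteness, here is a sketch of the greedy point selection and online interpolation in the discrete variant (DEIM [35]) rather than the continuous EIM greedy of [33]; the snapshot parameters, the grid and the frozen field u are illustrative assumptions:

```python
import numpy as np

def deim_points(Q):
    """Greedy selection of interpolation ('magic') indices for a basis Q (N x M),
    following the discrete variant (DEIM) of [35]."""
    M = Q.shape[1]
    idx = [int(np.argmax(np.abs(Q[:, 0])))]
    for m in range(1, M):
        c = np.linalg.solve(Q[idx, :m], Q[idx, m])   # interpolate m-th basis vector
        r = Q[:, m] - Q[:, :m] @ c                   # ...and take its residual
        idx.append(int(np.argmax(np.abs(r))))        # next point: largest residual entry
    return np.array(idx)

def deim_interpolate(Q, idx, g_at_points):
    """Recover an approximation of the full vector g from its values at the magic points."""
    return Q @ np.linalg.solve(Q[idx, :], g_at_points)

# POD basis from snapshots of a nonlinearity of the form g(u; mu2) = (exp(mu2*u) - 1)/mu2
x = np.linspace(0.0, 1.0, 500)
u = np.sin(np.pi * x)
snaps = np.column_stack([(np.exp(m2 * u) - 1.0) / m2
                         for m2 in np.linspace(0.5, 3.0, 8)])
Usvd, s, _ = np.linalg.svd(snaps, full_matrices=False)
Q = Usvd[:, s > 1e-8 * s[0]]                         # truncated POD basis

idx = deim_points(Q)
g_new = (np.exp(1.7 * u) - 1.0) / 1.7                # unseen parameter value mu2 = 1.7
g_deim = deim_interpolate(Q, idx, g_new[idx])        # full field from M point values only
```

The full nonlinear function is thus evaluated at only M indices, which is where the independence from N comes from; the accuracy, however, is tied to the fixed basis Q, illustrating why an evolving ROB is problematic for this class of methods.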
EIM could be used to solve the preliminary step; however, the fact that the ROB changes quite often throughout the iterations of the LATIN nonlinear solver makes this technique less suitable. Indeed, for a fixed number M of best/magic interpolation points, the error in the coefficient functions {φ_m^M(µ)}_{1≤m≤M} used to interpolate the nonlinear terms ultimately dominates when the size k of the ROB increases [15].
Adding more magic points may thus be necessary as long as the ROB evolves. Such an adaptive strategy could be derived, but it may not be very efficient in this context, at least in the LATIN framework. Moreover, the determination of the magic parameter points {µ_m^M}_{1≤m≤M} may be costly, particularly for applications where the nonlinear function g is time-dependent, as for parabolic parametrized problems [15], or depends implicitly on the field variable u, as is the case for the nonlinear problem considered here.
The RPM proposes a more pragmatic approach. It is not based on an approximation of the nonlinear function g (i.e. of the tangent operator H⁻). It is rather an approximation technique for the integrals involved in the Galerkin projection, similarly to quadrature techniques in classical finite element methods. More precisely, it aims to compute, at low cost, each contribution to the ROM, i.e. each term/entry α_ij defined by the integral over the space-parameter domain of the nonlinear terms (equation (20)), where repeated indices are not summed over their range. It is worth noting that the RPM is used at each iteration of the LATIN method only to solve the preliminary step. Even though the integral computations are not accurate enough, the PGD model reduction technique can improve the approximation by adding new correction functions to the ROB throughout the LATIN iterations, which ensures the convergence of the whole process.

Basics of the RPM
The RPM aims at constructing an approximate low-rank separated representation by patch of the integrand in (20). In this section, two main features are introduced: (i) an approximation framework with the introduction of reference points (see Section 3.2.1) to simplify the evaluation of integrands (see Section 3.2.2); (ii) a low-cost technique to reconstruct, by explicit formulas, an approximate low-rank separated representation by patch of integrands (see Section 3.2.3).
The integrand of (20) involves the product of several operands: the nonlinear function H⁻(u^(n); x, µ) pre- and post-multiplied by the terms λ*(µ)Φ_i(x) and Φ_j(x)δλ_i(µ). For the sake of clarity, instead of the integrand of (20), let us consider the product of two scalar functions of two variables, which is much more convenient to illustrate the approximation procedure:

F(µ, x) = f(µ, x) f′(µ, x),

where f(µ, x) and f′(µ, x) are two scalar functions of the two variables µ ∈ D = [0, 1] and x ∈ Ω = [0, 1]. An example of functions f and f′ and of their product F is given in Figure 2.
Let us assume that f and f′ are known in separated form (even though the separation property is not always satisfied for any quantity, such as H−), with k terms for f and k′ terms for f′, so that the product F = f f′ admits an M-term separated representation with M = k k′. This M-term representation is likely non-optimal, and M may increase swiftly if terms are added to the representations of f and f′ (k and k′ increase). Performing a singular value decomposition of F may then be necessary to achieve a separated representation of F with a reasonable number of terms. The RPM follows a different path to avoid this artificial increase of the number of PGD modes. As described in the following, the RPM enables one to build an approximate low-rank separated representation by patch of any quantity without resorting to SVD-based techniques.
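This rank inflation is easy to observe numerically. The sketch below (illustrative only, not taken from the paper; the grid sizes and numbers of terms are arbitrary) builds two random separated representations on a discretized domain and checks that their pointwise product generically has rank k·k′, which is exactly what an SVD-based recompression would have to process:

```python
import numpy as np

rng = np.random.default_rng(0)
p, N, k, kp = 200, 300, 4, 5           # grid sizes and numbers of terms
lam, Lam = rng.standard_normal((k, p)), rng.standard_normal((k, N))
lamp, Lamp = rng.standard_normal((kp, p)), rng.standard_normal((kp, N))

f = lam.T @ Lam        # p x N grid values of f  (separated, rank <= k)
fp = lamp.T @ Lamp     # grid values of f'       (separated, rank <= k')
F = f * fp             # pointwise product       (rank <= k * k')

print(np.linalg.matrix_rank(f), np.linalg.matrix_rank(F))
```

With generic (random) factors, the product saturates the bound k·k′, so the number of separated terms grows multiplicatively rather than additively.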

Compressed format and generalized components
The RPM approximation framework is based on the concept of reference points and enables one to define a reduced representation of the data [40]. The parameter domain is split into m_µ sub-intervals D_i of size ∆µ_i. The center µ_i of sub-interval D_i is called a parameter reference point. For the space domain, m_x points x_j are introduced and the domain Ω is divided into m_x sub-domains Ω_j of size ∆ω_j. The points x_j are called spatial reference points. These reference points are arbitrary and can be chosen independently of the discretization spaces for unknown quantities. An influence zone is defined around each reference point (µ_i, x_j) of the space-parameter domain. This part of the domain, D_i × Ω_j, is called the reference patch (i, j). Thus, the entire domain D × Ω is divided into m_µ × m_x patches.
A function f defined on the domain D × Ω is represented by its generalized components, f̄ := {(ā_ij, b̄_ij)}, defined as follows: for 1 ≤ i ≤ m_µ and 1 ≤ j ≤ m_x,

ā_ij(µ) = f(µ, x_j) for µ ∈ D_i,   b̄_ij(x) = f(µ_i, x) for x ∈ Ω_j.   (23)

The generalized components {ā_ij(µ)}_{i=1,...,m_µ} related to spatial reference point x_j give the description of function f at spatial point x_j over the entire parameter domain D. Similarly, the generalized components {b̄_ij(x)}_{j=1,...,m_x} related to parameter reference point µ_i give the description of function f at parameter value µ_i over the entire spatial domain Ω. Note that, by construction, for 1 ≤ i ≤ m_µ and 1 ≤ j ≤ m_x,

ā_ij(µ_i) = b̄_ij(x_j) = f(µ_i, x_j).   (24)

If function f is expressed in separated form, each pair of functions is described by its generalized components. Hence, for f(µ, x) = Σ_k λ_k(µ)Λ_k(x), it yields, for 1 ≤ i ≤ m_µ and 1 ≤ j ≤ m_x,

ā_ij(µ) = Σ_k λ_k(µ)Λ_k(x_j),   b̄_ij(x) = Σ_k λ_k(µ_i)Λ_k(x).

In a sense, generalized components provide a kind of compressed format of the quantities. Figure 4 depicts the surfaces defined by functions f and f′ and their generalized components when m_µ = 10 and m_x = 10. In this case, for a two-dimensional function, a generalized component associated with a spatial or parameter reference point is a quantity defined on a line. For higher dimensions, it corresponds more generally to hyperplanes, hypersurfaces or submanifolds of the domain. Some patches of functions f and f′ are depicted in Figure 6. In two dimensions, these patches can easily be represented. In 3D, patches become volumes. For a scalar function of three variables (f(µ, x) with x = (X, Y)), a patch (i, j) becomes a parallelepiped, the generalized space components b̄_ij(x) are snapshots of the two-dimensional spatial solution for a given value of parameter µ_i, and the generalized parameter components ā_ij(µ) are lines (Figure 5).
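As a concrete illustration, the compressed format above can be sketched as follows (a hypothetical implementation on uniform grids; the function names and the midpoint placement of the reference points are our own choices, not the paper's). The function is stored only through its traces along the reference lines µ = µ_i and x = x_j:

```python
import numpy as np

def reference_points(m):
    # centers of m equal sub-intervals of [0, 1] (midpoint placement assumed)
    return (np.arange(m) + 0.5) / m

def generalized_components(f, mu, x, m_mu, m_x):
    """Compressed format (23): a_bar[i, j] = f(mu, x_j) restricted to D_i,
    b_bar[i, j] = f(mu_i, x) restricted to Omega_j."""
    mu_ref, x_ref = reference_points(m_mu), reference_points(m_x)
    i_of = np.minimum((mu * m_mu).astype(int), m_mu - 1)  # sub-interval of each mu
    j_of = np.minimum((x * m_x).astype(int), m_x - 1)     # sub-domain of each x
    a_bar = {(i, j): f(mu[i_of == i], x_ref[j])
             for i in range(m_mu) for j in range(m_x)}
    b_bar = {(i, j): f(mu_ref[i], x[j_of == j])
             for i in range(m_mu) for j in range(m_x)}
    return a_bar, b_bar

f = lambda m, s: np.exp(-np.abs((s - 0.5) * (m - 1.0))) + np.sin(s * m)
mu, x = np.linspace(0, 1, 201), np.linspace(0, 1, 301)
a_bar, b_bar = generalized_components(f, mu, x, 4, 5)

stored = sum(v.size for v in a_bar.values()) + sum(v.size for v in b_bar.values())
print(stored, mu.size * x.size)  # m_x*p + m_mu*N values vs the full p*N grid
```

Only m_x·p + m_µ·N values are kept instead of the p·N values of the full grid, which is the sense in which the generalized components are a compressed format.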
The algorithm that partitions the domain into patches, for an arbitrary number of reference points along each coordinate, can easily take into account the geometry (holes, corners...) of the spatial domain by moving spatial reference points located where the functions are not defined and by slightly adapting the patches to the geometry of the space domain (Figure 7).
Remark 2. For a multivariable function f(µ_1, ..., µ_r, x), m_{µ_l} reference points can be introduced along each parameter dimension l. The domain D_1 × ... × D_r × Ω is then divided into patches accordingly. The definition of a reference patch (i_{µ_1}, ..., i_{µ_r}, j), with 1 ≤ i_{µ_l} ≤ m_{µ_l} and 1 ≤ j ≤ m_x, occupying the part D_{i_{µ_1}} × ... × D_{i_{µ_r}} × Ω_j of the domain, and the definition of the generalized components, similarly to (23), are straightforward.

Algebra in the compressed framework
It is straightforward to show that the RPM framework has interesting properties regarding elementary operations on quantities represented in their compressed format (see Table 1). For instance, Figure 8 illustrates the evaluation of the product F of two functions f and f′ (see (21)) from their compressed formats. In order to evaluate the product over the whole domain, a reconstruction of the product from the generalized components F̄ of F has to be proposed, as described in the following section.
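These algebraic rules follow directly from the fact that generalized components are traces of the functions along the reference lines: for a product, the components of F = f f′ are simply the pointwise products of the components of f and f′. A minimal check, with illustrative functions of our own choosing:

```python
import numpy as np

mu = np.linspace(0, 1, 201)      # parameter grid on D = [0, 1]
x = np.linspace(0, 1, 201)       # spatial grid on Omega = [0, 1]
x_j, mu_i = 0.35, 0.65           # one spatial / one parameter reference point

f = lambda m, s: np.cos(3 * m * s) + m
g = lambda m, s: np.exp(-m * s)

# components of f, g along the reference lines through (mu_i, x_j)
a_f, a_g = f(mu, x_j), g(mu, x_j)        # parameter components
b_f, b_g = f(mu_i, x), g(mu_i, x)        # spatial components
F = lambda m, s: f(m, s) * g(m, s)

assert np.allclose(F(mu, x_j), a_f * a_g)   # A_bar = a_bar * a_bar'
assert np.allclose(F(mu_i, x), b_f * b_g)   # B_bar = b_bar * b_bar'
```

The compressed format of the product is therefore obtained exactly, at the cost of one pointwise multiplication per component, without ever forming F on the full domain; only the reconstruction step of the next section introduces an approximation.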

Reconstruction of a low-rank separated representation by patch
In this section, the reconstruction procedure of a rank-one separated representation by patch of F is presented. Denoted by F̃, this reconstruction is obtained from the compressed format F̄ by generating one product of functions per space-parameter patch: for 1 ≤ i ≤ m_µ and 1 ≤ j ≤ m_x,

F̃(µ, x) = a_ij(µ) b_ij(x) for (µ, x) ∈ D_i × Ω_j,   (26)

where the products of functions a_ij(µ)b_ij(x) defined on each patch D_i × Ω_j are determined from the generalized components {(Ā_ij(µ), B̄_ij(x))} of F through the minimization of the functional (27), in which ‖•‖_{D_i} and ‖•‖_{Ω_j} are the classical L²(D_i) and L²(Ω_j) norms, and λ_ik is an influence coefficient that gives more importance to the neighboring patches of patch D_i × Ω_k along the space coordinate. This functional measures the distance between the patch reconstruction and the generalized components. Minimization of functional (27) (see Appendix C) leads to the explicit formulas (28) for a_ij(µ) and b_ij(x), for 1 ≤ i ≤ m_µ and 1 ≤ j ≤ m_x. It can be noticed that the space dimension and the parameter dimension are not treated equally. Indeed, the space domain is favored by keeping all the information arising from the spatial generalized components: as illustrated in Figure 9, one can see that, for 1 ≤ i ≤ m_µ and 1 ≤ j ≤ m_x, b_ij(x) = B̄_ij(x). This choice is suitable for structural mechanics, where the spatial gradients of quantities are usually stronger than their variations in the parameter (or time) variable. The influence coefficients λ_ik act on the parameter function a_ij(µ) by weighting the values Ā_ik(µ_i) (= B̄_ik(x_k) due to (24)) at the centers of the patches along the space coordinate.
If one chooses λ_ik = 1 for 1 ≤ k ≤ m_x (uniform value), one obtains the formulas (29), for 1 ≤ i ≤ m_µ and 1 ≤ j ≤ m_x. In this case, the parameter functions a_ij(µ) are the same for any sub-domain Ω_j for a given i. The reconstruction F̃ is then continuous not only for µ = µ_i but for any µ ∈ D_i, with 1 ≤ i ≤ m_µ (see Figure 9(c)). This may however lead to strong discontinuities between sub-intervals D_i if the variations of the product with respect to the parameter variable are significant. The other limit choice is to consider influence coefficients such that λ_ik = 1 only for k = j. This leads to minimizations by patch that are independent of each other and to:

a_ij(µ) = Ā_ij(µ) / Ā_ij(µ_i),   b_ij(x) = B̄_ij(x).   (30)

The parameter function a_ij(µ) is normalized by the value Ā_ij(µ_i) = B̄_ij(x_j) (due to (24)) at the center of patch D_i × Ω_j and is consequently independent from one patch to another. Note that in this case the denominator of a_ij can vanish and lead to numerical difficulties. This may also happen for (28) and (29), but is far less likely. In this situation, adding an appropriate constant value to the generalized components before reconstruction, and subtracting the same value from the reconstruction afterwards, easily palliates this issue. The reconstruction F̃ is discontinuous between sub-cells Ω_j for all µ ∈ D_i except for µ = µ_i, with 1 ≤ i ≤ m_µ (see Figure 9(b)).
An optimization of the weight coefficients λ_ik, with compact support along the spatial coordinate, between the two extreme situations leading to (29) (uniform weights) and (30) (independent minimizations by patch), has to be done according to the variations of the function F approximated by the RPM (see Figure 9(d)). For all the numerical examples presented hereafter, the value of λ_ik given by (31) was chosen for (28). This optimized value of λ_ik has been obtained by numerical experiments and is suitable for most of the functions that have been tested in the present work. Figure 10 shows different approximations F̃_1 of the function F_1(x, µ) = e^{−|(x−0.5)(µ−1)|} + sin(xµ) for increasing numbers of reference points and the optimized value of λ_ik given by (31).
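For readers who want to experiment, the following sketch implements the limit-case reconstruction (30) (independent minimizations by patch, i.e. without the influence coefficients (31)) for the function F_1 above. This is our own illustrative implementation, not the authors' code; uniform grids and midpoint reference points are assumed. The reconstruction is exact along the lines µ = µ_i, and the relative error decreases as reference points are added:

```python
import numpy as np

def F1(mu, x):
    return np.exp(-np.abs((x - 0.5) * (mu - 1.0))) + np.sin(x * mu)

def rpm_reconstruct(F, mu, x, m_mu, m_x):
    """Patch reconstruction (30): on patch (i, j),
    F_tilde(mu, x) = A_bar_ij(mu) * B_bar_ij(x) / F(mu_i, x_j)."""
    mu_ref = (np.arange(m_mu) + 0.5) / m_mu     # parameter reference points
    x_ref = (np.arange(m_x) + 0.5) / m_x        # spatial reference points
    i = np.minimum((mu * m_mu).astype(int), m_mu - 1)   # patch index per mu
    j = np.minimum((x * m_x).astype(int), m_x - 1)      # patch index per x
    A = F(mu[:, None], x_ref[None, :])          # traces along x = x_j
    B = F(mu_ref[:, None], x[None, :])          # traces along mu = mu_i
    centers = F(mu_ref[:, None], x_ref[None, :])  # values F(mu_i, x_j)
    return A[:, j] * B[i, :] / centers[np.ix_(i, j)]

mu = np.linspace(0, 1, 400)
x = np.linspace(0, 1, 400)
exact = F1(mu[:, None], x[None, :])
for m in (2, 5, 10):
    approx = rpm_reconstruct(F1, mu, x, m, m)
    e = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
    print(m, e)
```

Note that F_1 stays well away from zero on [0, 1]², so the division by the patch-center value in (30) is safe here; in general the constant-shift trick mentioned above would be needed.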
The reconstruction of the product F (see Figure 2) from its generalized components depicted in Figure 4 is shown in Figure 11 for an arbitrary regular grid of m_µ = 10 parameter reference points and m_x = 10 spatial reference points. This leads to an error of e = 5% with respect to the exact function F, where the error e is defined as the relative L² norm of the difference over the whole domain, e = ‖F − F̃‖_{D×Ω} / ‖F‖_{D×Ω}.

Remark 3. The reconstruction F̃ of a separated representation by patch for a multivariable function F(µ_1, ..., µ_r, x) needs further investigation. With the notations of Remark 2 and by introducing the L²(D_{i_{µ_l}}) norm ‖•‖_{D_{i_{µ_l}}} for each sub-interval D_{i_{µ_l}} of size ∆µ_{i_l} associated with dimension l and variable µ_l, a functional similar to (27) can easily be defined. A definition that favors the space domain (i.e. continuity in the space variable x) and weak continuity from one patch to another with respect to the other variables, thanks to the introduction of influence/weight coefficients λ_{i_{µ_l}k}, can similarly be considered. In this multivariable case, the choice of these coefficients is not obvious. An easy and straightforward choice is the independent minimization by patch, obtained with λ_{i_{µ_l}k} = δ_{i_{µ_l}k}, leading to a simple explicit reconstruction similar to (30).

Integral computation by RPM
Once the low-rank separated representation by patch F̃ defined by (26) has been reconstructed to approximate the function F by the explicit formulas (28), the integral of F over the space-parameter domain can be approximated by separation of variables:

∫_D ∫_Ω F(µ, x) dx dµ ≈ Σ_{i=1}^{m_µ} Σ_{j=1}^{m_x} ( ∫_{D_i} a_ij(µ) dµ ) ( ∫_{Ω_j} b_ij(x) dx ).

Similarly, the RPM can now be used to approximate each contribution α_ij to the ROM, defined by (20) and involved in the Galerkin projection of the preliminary step (see Problem 4).
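Under the same illustrative assumptions as before (formulas (30), midpoint reference points, midpoint quadrature; our own sketch, not the paper's code), the separated integral can be written as follows. Note that the function is only ever evaluated along the reference lines, never on the full two-dimensional grid:

```python
import numpy as np

def F1(mu, x):
    return np.exp(-np.abs((x - 0.5) * (mu - 1.0))) + np.sin(x * mu)

def rpm_integral(F, m_mu, m_x, n1d=200):
    """Sum over patches of (int a_ij)(int b_ij) with the (30) normalization."""
    mu_ref = (np.arange(m_mu) + 0.5) / m_mu
    x_ref = (np.arange(m_x) + 0.5) / m_x
    total = 0.0
    for i in range(m_mu):
        # midpoint quadrature nodes inside D_i
        mu_q = (i + (np.arange(n1d) + 0.5) / n1d) / m_mu
        for j in range(m_x):
            # midpoint quadrature nodes inside Omega_j
            x_q = (j + (np.arange(n1d) + 0.5) / n1d) / m_x
            Ia = F(mu_q, x_ref[j]).mean() / m_mu      # int_{D_i} A_bar_ij dmu
            Ib = F(mu_ref[i], x_q).mean() / m_x       # int_{Omega_j} B_bar_ij dx
            total += Ia * Ib / F(mu_ref[i], x_ref[j])  # normalization of (30)
    return total

# reference value: 2D midpoint quadrature of F1 itself
g = (np.arange(400) + 0.5) / 400
ref = F1(g[:, None], g[None, :]).mean()
print(rpm_integral(F1, 10, 10), ref)
```

This is the mechanism exploited below for the contributions α_ij: the double integral collapses into products of one-dimensional integrals per patch.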

Application to the reference problem
By introducing the approximation space P_h ⊂ P defined by (15) into the contribution α_ij given by (20), one gets (36), where the repeated index i is not summed over its range.
Let us split the two-dimensional parameter domain D into m_µ sub-domains D_i. The center µ_i of each sub-domain D_i is taken as a parameter reference point. Similarly, the space domain Ω is divided into m_x sub-domains Ω_j whose centers x_j define m_x spatial reference points. The spatial and parameter reference points x_j and µ_i can be chosen arbitrarily, on a regular grid for instance. In this case, a given reference patch (i, j), which occupies the part D_i × Ω_j of the four-dimensional space-parameter domain D × Ω, is difficult to visualize. The generalized components of the integrand ω in (36) (where the subscripts rsij have been omitted to simplify the notations) are denoted by ω̄ := {(ω̄^µ_ab, ω̄^x_ab)} with, for 1 ≤ a ≤ m_µ and 1 ≤ b ≤ m_x,

ω̄^µ_ab(µ) = ω(µ, x_b) for µ ∈ D_a,   ω̄^x_ab(x) = ω(µ_a, x) for x ∈ Ω_b.

The approximation ω̃ of the integrand ω is defined by the following rank-one separated representation by patch: for 1 ≤ a ≤ m_µ and 1 ≤ b ≤ m_x,

ω̃(µ, x) = ω̃^µ_ab(µ) ω̃^x_ab(x) for (µ, x) ∈ D_a × Ω_b,

where the functions ω̃^µ_ab(µ) and ω̃^x_ab(x) are given by the explicit formulas of Section 3.2.3. To alleviate the notations in the following, let us assume that these functions are obtained by the explicit formulas (30) (RPM with independent minimizations by patch) instead of the general formulas (28). The integral of ω over patch (a, b) can then be computed by separation of variables, where the approximation space V_h ⊂ V defined by (12) has been introduced to approximate the integral over Ω_b. The nodal vector associated with the discretization of the space function is obtained by the assembly of the spatial finite element contributions located in sub-domain Ω_b. Let us also introduce W_{k,b}, the restriction of the discretized k-order reduced basis to sub-domain Ω_b. Figure 12 depicts the restriction of a discretized 3-order reduced basis to a sub-domain Ω_b for a 3 × 3 regular grid of spatial reference points (m_x = 9). Thanks to (42), the contribution α_ij given by (36) can be approximated accordingly. Similarly to (16), one deduces that the contribution to the preliminary step of the sum over i of the second terms in the left-hand side of (4) (or (14)) reads as in (47), where the symbol A denotes the assembly of the m_µ m_x tensor products of matrices by patch (a, b). It is worth noting that the matrices K_b(µ_a) (resp. M_a(x_b)) are computed only at the m_µ parameter reference points µ_a (resp. the m_x spatial reference points x_b). They have to be updated each time the LATIN search direction H− is updated. If H− is not updated and if, in addition, the discretized reduced basis W_k remains unchanged (k is unchanged), the projection onto the reduced basis W_k and the tensor products on each patch can be pre-computed.
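The patch-wise structure of this assembly can be sketched schematically as follows. This is our own simplified stand-in for (47), with random matrices in place of the actual operators and the parameter dimension collapsed into a scalar weight; only the restriction/projection pattern and its cost are illustrated:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, m_mu, m_x = 600, 5, 4, 3
W = rng.standard_normal((N, k))                    # discretized ROB W_k
blocks = np.array_split(np.arange(N), m_x)         # unknowns of each Omega_b

A = np.zeros((k, k))                               # reduced operator contribution
for a in range(m_mu):
    for b, dofs in enumerate(blocks):
        Kb = rng.standard_normal((dofs.size, dofs.size))  # stand-in for K_b(mu_a)
        w_ab = 1.0 / m_mu                          # stand-in parameter weight
        Wb = W[dofs, :]                            # restriction W_{k,b}
        A += w_ab * (Wb.T @ Kb @ Wb)               # two (N_b x k) products per patch
print(A.shape)
```

Each patch only touches its N_b ≈ N/m_x spatial unknowns, which is where the complexity reduction of the next section comes from.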

Computational complexity analysis
As noted in Section 2.3, the RPM could also be used to approximate the nonlinear residue (the right-hand side of (4)), whose complexity is O(pNk) for each new iterate. In this section, only the gain in complexity associated with the evaluation of the second terms in the left-hand side of (4) (or (14)) is discussed. Let us recall that the parameter dependency of the search direction H− prevents the separation of the integrals, which leads to a computational complexity of O(pNk²) (resp. O(pN²k)) if the discretized operator is sparse (resp. full), for each new iterate u^(n)(x, µ). As shown in Section 3.3.1, the RPM provides an approximation of this term as an assembly of tensor products of matrices by patch (see (47)).
Denoting by N_b (resp. p_a) the number of spatial (resp. parameter) unknowns contained within sub-domain Ω_b (resp. sub-interval D_a) in (47), the projection onto W_{k,b}, the restriction of the discretized k-order reduced basis to sub-domain Ω_b, involves two matrix-matrix products, resulting in a computational complexity that scales with N_b k². If the number of arithmetic operations for evaluating one entry of matrix K_b(µ_a) (resp. M_a(x_b)) is β (resp. γ), the cost of the RPM reconstruction being negligible, and assuming that N_b ≈ N/m_x and p_a ≈ p/m_µ, the total complexity for evaluating the contributions of all the patches to the pk × pk matrix in (47) is roughly O(m_µ(β + k²)N + m_x(γ + k²)p). The total gain in the case of sparse operators is consequently about pN / (m_µN + m_x p). In practice for Problem 3, the number of spatial reference points m_x is negligible compared with N and one has N/m_x ≫ p/m_µ, so the gain is approximately p/m_µ. For instance, if one chooses for Problem 3 N = 2500, p = 225 and m_x = m_µ = 10, the gain is about one order of magnitude: O(p/m_µ) ∼ 10.
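This estimate can be sanity-checked with a crude cost model (our own back-of-the-envelope sketch; the ROB size k is illustrative, sparse operators are assumed, and the evaluation constants β, γ are taken to be of order one and dropped):

```python
# Crude cost model for the sparse case: classical assembly costs O(p N k^2),
# while the RPM pays, per patch, O(N_b k^2) on the spatial side and
# O(p_a k^2) on the parameter side, with N_b ~ N/m_x and p_a ~ p/m_mu.
N, p, m_x, m_mu, k = 2500, 225, 10, 10, 20   # k is an illustrative ROB size
classical = p * N * k**2
rpm = m_mu * m_x * ((N // m_x) * k**2 + (p // m_mu) * k**2)
gain = classical / rpm
print(gain, p / m_mu)  # gain is close to p/m_mu since N/m_x >> p/m_mu
```

With the values quoted in the text, the modeled gain is about 20, consistent with the "one order of magnitude" estimate p/m_µ = 22.5.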

Numerical example
In this section, the numerical simulation of Problem 1 is investigated. The RPM is used to approximate the preliminary step of the LATIN-PGD computational strategy for Problem 3. In the following, this strategy is denoted LATIN-PGD-RPM. The computational gain with respect to the number of reference points is investigated in Section 4.2 and compared to the expected gain (see (50)). Finally, a comparison between the LATIN-PGD-RPM and the EIM combined with reduced-basis techniques and a Newton-based solution strategy [15], called RB-EIM, is proposed in Section 4.3.

On the choice of the reference points
Problem 1 is defined on the space-parameter domain D × Ω. For all the simulations of Section 4, the finite element approximation space V_h is fixed and obtained with a finite element mesh of 50 × 50 bilinear quadrilateral elements, leading to an approximation space of dimension N = 2601. The reference solution u_ref refers to the solution obtained by solving Problem 2 with a Newton-based nonlinear solution strategy (classical direct method without model reduction) for each parameter value (see Figure 1). A number of p = 225 values of parameter µ is considered, chosen from a regular grid as depicted in Figure 15(a). The relative error ǫ defined by (51) is introduced, where u_m refers to the solution obtained with the chosen model reduction strategy: LATIN-PGD, LATIN-PGD-RPM, RB or RB-EIM. The evolutions of ǫ with respect to the number of PGD pairs for increasing numbers of spatial and parameter reference points are plotted in Figure 14. Note that for the LATIN-PGD or the LATIN-PGD-RPM, the number of generated PGD pairs is comparable to the number of LATIN iterations, since at most one new PGD pair is generated per LATIN iteration. It can be seen in Figures 14(a) and 14(b) that, by adding more reference points, the error curve of the LATIN-PGD-RPM converges to the one obtained with the LATIN-PGD. The CPU time also increases accordingly and tends to the LATIN-PGD computational cost (see Figures 14(c) and 14(d)). Let us recall that the RPM is used only for the preliminary step. Thus, if the RPM approximation is not good enough, the generation of new PGD pairs as described in Algorithm 2 palliates this, makes the error decrease monotonically, and ensures that the solution converges anyway. Obviously, this results in the generation of more PGD pairs compared with the LATIN-PGD curve for a given level of error.
It is worth noting that an error level of 10⁻³ can be reached with only one spatial reference point and a few parameter reference points (see the dash-dot line with cross markers in Figure 14(a)). For this example, adding more parameter reference points appears more effective than adding more spatial reference points to converge to the optimal LATIN-PGD curve (see the dash-dot lines with cross markers in Figures 14(a) and 14(c)). Indeed, as is often the case in structural mechanics, the spatial gradients of the quantities are stronger than their variations in the parameter variable for this nonlinear parametrized elliptic problem (see Figure 1). Since the parameter reference points {µ_a}, 1 ≤ a ≤ m_µ, lead to the computation of m_µ integrals K_b(µ_a) (see (42) and (47)) over the space domain, adding more parameter reference points is preferable to account for spatial gradients.

Computational gain analysis
As seen in the previous section, the more reference points are added, the more the CPU time increases (see Figures 14(c) and 14(d)). A compromise between the number of reference points, i.e. the CPU time, and the number of generated PGD pairs, i.e. the number of LATIN iterations, has to be found in order to reach a given level of error. For an error level of ǫ = 10⁻², the CPU time gain compared with the direct method increases from 6 for the LATIN-PGD to 18 for the LATIN-PGD-RPM (see Table 2). For a very small number of reference points (m_µ = 1 × 1 and m_x = 1 × 1) and a few more PGD pairs, the total CPU time gain is thus multiplied by a factor of 3 thanks to the RPM applied to the preliminary step of the LATIN-PGD. Table 3 provides the gain in CPU time on the preliminary step thanks to the RPM. For a significant number of parameter reference points (m_µ = 3 × 3), the actual gain is very close to the theoretical gain in computational complexity (50), that is to say approximately p/m_µ, where p is the number of parameter values.

Comparison of RPM versus EIM: approximation versus interpolation techniques
In this section, a comparison of the performances of the RPM, described in Section 3, and of the EIM is proposed. It has to be noticed first that the two methods are here combined with different model reduction techniques and nonlinear solution strategies (see Table 4). As explained in Section 3.1, using the EIM to solve the preliminary/update step of the PGD could be done, but it may be less suitable when the reduced basis evolves. The interested reader can nevertheless refer to [44, 45] for one of the first applications of DEIM to PGD.
In the following, the more classical combination of the EIM with reduced-basis techniques and an offline-online procedure is considered and called RB-EIM. In this case, combined with a Newton-based incremental solution strategy, the reduced-basis approximation associated with the space weak form of Problem 2 for each parameter value µ ∈ D is considered, instead of the space-parameter weak form of Problem 3 used in the LATIN nonincremental solution strategy.
Moreover, the RPM and the EIM rely on different approaches. As seen in Section 3.1, the RPM is an approximation technique for the integrals involved in the Galerkin projection onto an evolving reduced basis, whereas the EIM is an interpolation procedure for the nonlinear terms (Jacobian and residue).

Summary of the RB-EIM procedure
The main steps of the offline-online procedure of the RB-EIM, as described in [15], are summarized here. During the offline stage, the steps listed below are performed. The cost of the greedy construction of the set of magic parameter points may be prohibitive in the parabolic case, or when the nonlinear function g is time-dependent or implicitly dependent on the field variable u, which is the case here.
• Greedy selection process of a reduced-order basis W_k = {Φ_i}, 1 ≤ i ≤ k, for the solution u over the coarse sample set D_c. Note that the snapshots of the solution, u(µ_m^M), computed for the elements of the nested sample set S_M^c ⊂ D_c, are reused in (52).
• Determination of the associated approximation space (collateral reduced basis) W_M^c, where the M functions {q_m(x)}, 1 ≤ m ≤ M, and the interpolation magic points {x_m^M}, 1 ≤ m ≤ M, are determined progressively by the recursive solution of M small linear systems.
As explained at the end of Section 4.1, only one spatial reference point is used in the following for the LATIN-PGD-RPM; one thus sets m_x = 1 × 1, and only the number m_µ of parameter reference points varies. For the RB-EIM, the influence on the convergence of the number M of magic points, for a given size k of the reduced basis W_k generated offline, is investigated. For the LATIN-PGD-RPM, the reduced basis W_k is built progressively and its size k increases throughout the LATIN iterations. As previously said, the number k of generated PGD pairs is comparable to the number of LATIN iterations, since at most one new PGD pair is generated per LATIN iteration.
The error ǫ given by (51) for various numbers of reference and magic points for Problem 1 is given in Figure 16. The error of the classical reduced-basis approach without EIM (dash-dot line with diamond markers, denoted RB) is also given for comparison. Both the LATIN-PGD-RPM and the RB-EIM lead to the same level of error. However, some differences can be pointed out. For a given number M of magic points, the RB-EIM convergence curve first decreases until a plateau is reached once k slightly exceeds the value of M (see Figure 16(b)). Increasing M is then necessary to make the error decrease: for a fixed number M of magic interpolation points, the error in the coefficient functions {ϕ_m^M(µ)}, 1 ≤ m ≤ M, used to interpolate the nonlinear terms (see (17)) ultimately dominates as the dimension k of the reduced basis W_k increases. Both M and k must therefore be increased to lower the error. In the case of the LATIN-PGD-RPM (see Figure 16(a)), the reduced basis W_k being enriched throughout the iterations, the error decreases monotonically, as expected, as new PGD pairs are generated. Increasing the number m_µ of parameter reference points improves the overall convergence rate. In any case, the LATIN-PGD-RPM solution converges.
As pointed out in Section 4.3.1, despite the fact that the computational cost of the online stage depends only on k, M and the number of Newton iterations, and no longer on N, the cost of the offline/learning stage can be very high for nonlinear problems in order to generate a pertinent reduced basis W_k as well as a suitable coarse sample set S_M^c of magic parameter points. A sufficiently fine coarse grid D_c may be necessary. Moreover, the error level is hardly improved for M higher than about 11. Finally, even though comparing a reduced-basis approach with a learning stage and a PGD-based model reduction may be debatable, Table 5 nevertheless provides CPU time gains with respect to the direct simulation to reach an error level of ǫ = 10⁻² for the LATIN-PGD-RPM and the RB-EIM with comparable numbers of reference/magic points: m_x = 1 × 1, m_µ = 2 × 2 and M = 5. As discussed previously, note that the CPU time for the RB-EIM includes the time spent during the offline/learning stage, the online stage cost being negligible. It can be seen that the CPU time gains of the two approaches are quite similar in the case of Problem 1. The size k of the reduced basis W_k generated by the LATIN-PGD-RPM is also comparable to the size of the reduced basis built by the RB-EIM during the offline stage.

Table 5: CPU time gain with respect to the direct simulation to reach an error level of ǫ = 10⁻² for the LATIN-PGD-RPM and the RB-EIM with a comparable number of reference/magic points

Conclusion
In order to tackle this bottleneck of nonlinear model reduction techniques, a new approximation technique, called the Reference Point Method, has been proposed to reduce the computational complexity of the algebraic operations involved in constructing reduced-order models for time-dependent and/or parametrized nonlinear partial differential equations. It has been shown that this bottleneck is not specific to PGD-based model reduction but applies more generally to reduced-basis approximations in nonlinear model reduction. The problem arises because, at each iteration of the solution strategy: (i) the Galerkin projection onto the ROB has to be performed, in particular each time the reduced-order basis evolves, which is the case for the PGD; (ii) the nonlinear terms (Jacobian and residue) have to be evaluated for each new iterate, which prevents pre-computation of the operators. As a result, the complexity of these operations scales with the size of the original high-dimensional model.
Contrary to the EIM, which is based on an interpolation procedure for the nonlinear terms (Jacobian and residue), the RPM is based on an approximation technique for the integrals involved in the Galerkin projection onto an evolving reduced basis. To this end, two main features have been introduced: (i) an approximation framework with the introduction of reference points to simplify the evaluation of integrands; (ii) a low-cost technique to reconstruct, by explicit formulas, an approximate low-rank separated representation by patch of the operators without resorting to SVD-based techniques, which is a great advantage for integral computation by separation of variables.
An application of the RPM with the LATIN-PGD nonlinear solution strategy to a nonlinear parametrized elliptic PDE, previously studied by other authors with the reduced-basis method and the EIM, has been proposed. More precisely, the RPM has been used to approximate, at each iteration of the nonlinear solution strategy, the so-called preliminary step, which is nothing more than an update of the reduced model based on the reduced basis obtained at the previous iterations.
It has been shown that the convergence of the whole strategy is ensured whatever the number of reference points (even if a very moderate increase of the number of generated PGD modes is required to compensate for a possible lack of accuracy of the RPM approximation). Increasing the number of reference points makes the convergence curve tend to the one obtained with the classical LATIN-PGD approach.
Moreover, the computational complexity, as well as the CPU time, required to construct the reduced-order model in the preliminary step can be divided in practice by one order of magnitude. The implementation of the RPM for more complex problems is currently in progress, and one can expect higher gains.
The second equality is due to the choice made for the search direction E− (see (A.2)). A constant operator could also be chosen; in this case, the algorithm bears similarities to a quasi-Newton scheme. As shown in [22], the choice of the search directions H+ and H− does not affect the solution at convergence, but it can change the convergence rate. The global stage problem reads as follows: Problem 5 (Global stage). Given ŝ^(n+1/2) = (û^(n+1/2), ŵ^(n+1/2)), find s^(n+1) = (u^(n+1), w^(n+1)) ∈ A_d that satisfies the search direction (A.3) and the global admissibility condition. The only unknown in this latter equation is u^(n+1); the equation is global over the space-parameter domain but linear.

Figure 1: Solution of Problem 2 for different parameter values and a finite element mesh of 50 × 50 bilinear quadrilateral elements (N = 2601)

Figure 2: Example of two scalar functions depending on two variables and their product

Figure 6: Some of the space-parameter patches D i × Ω j of functions f and f ′ for mµ = 10 and mx = 10

Figure 7: Correction of a spatial reference point located in a hole for a two-dimensional spatial domain and adaptation of the patches

Figure 10: Approximations F̃_1 of F_1 for increasing numbers of reference points and the optimized value of λ_ik given by (31)

Figure 12: Restriction of a discretized 3-order reduced-basis to a sub-domain Ω b for a 3 × 3 regular grid of spatial reference points (mx = 9)

Figure 13: Example of selection of reference points on a regular grid for space-parameter domain of Problem 1

Figure 14: Evolution of error ǫ (see (51)) and CPU time w.r.t. the number of PGD pairs for increasing numbers of spatial and parameter reference points to solve Problem 1


Table 4: Context of application of the RPM and the EIM: LATIN-PGD-RPM and RB-EIM

• Construction of a nested coarse sample set S_M^c = {µ_m^M ∈ D_c}, 1 ≤ m ≤ M, of magic parameter points through a greedy selection process on a coarse grid D_c ⊂ D, based on the L²(Ω) norm. This requires the solution of M nonlinear problems for each parameter value of D_c, with a computational cost that scales with the size of D_c and with N.

Once the order-k reduced basis W_k, the collateral reduced basis W_M^c and the interpolation magic points {x_m^M}, 1 ≤ m ≤ M, have been determined once and for all, one can proceed to the online stage on the fine parameter sample set D_f ⊂ D. For each parameter value µ ∈ D_f and each iterate u^(n), one has the following steps:
• Update of the coefficient functions {ϕ_m^M(µ)}, 1 ≤ m ≤ M, of the reduced-order model of u^(n) evaluated at the interpolation "magic" points {x_m^M}, 1 ≤ m ≤ M;
• Update of the tangent operator at cost O(Mk²) in (19) and of the residue (i.e. Galerkin projection onto the order-k reduced basis W_k);
• Solution of the reduced-basis approximation on W_k associated with the space weak form of Problem 2 for parameter value µ ∈ D_f at cost O(k³) (factorization of the full k × k matrix).
The online computational complexity depends only on k, M and the number of Newton iterations: the N-independence of the online stage is recovered. As noticed previously, the costly part of the offline stage is mainly due to the construction of the coarse sample set S_M^c of magic parameter points {µ_m^M ∈ D_c}, 1 ≤ m ≤ M.

Application of the LATIN-PGD-RPM and the RB-EIM to the reference problem
As in [15], a 15 × 15 regular grid of parameter values is used for the fine sample set D_f ⊂ D (Figure 15(a)). This leads to a number p = 225 of parameter values for µ. For the offline stage of the RB-EIM, a 12 × 12 regular grid of parameter values is chosen for the coarse sample set D_c ⊂ D (Figure 15(b)).

Figure 15: Fine and coarse grids, D_f and D_c, for the parameter domain discretization considered for the RB-EIM [15]

Figure 16: Error ǫ given by (51) with respect to the size k of the reduced basis W k for the LATIN-PGD-RPM and the RB-EIM and various numbers of reference/magic points for Problem 1

More precisely, for the 12 × 12 coarse grid D_c of Figure 15(b), the offline stage of the RB-EIM accounts for 64% of the computational cost of the classic direct simulation performed on the 15 × 15 fine grid D_f of Figure 15(a). The error ǫ given by (51) for the RB-EIM with various regular coarse grids D_c used during the offline stage is plotted in Figure 17. It can be seen in Figure 17(a) that a too coarse grid leads to a high level of error whatever the number M of magic points is. The behavior for a 5 × 5 grid can be seen in Figure 17(b).

Figure 17: Error ǫ given by (51) for the RB-EIM with various regular coarse grids D_c used during the offline stage

Figure A.19 depicts the evolution of the LATIN convergence indicator δ_L (see (A.9) in Appendix A.3) as a function of the number of iterations for Problem 1. It can be seen that using a tangent operator (A.4) instead of a constant one (A.5) drastically improves the convergence rate. Using the tangent operator (A.4) or, at least, updating the search direction H− is consequently recommended.

Figure A.19: Convergence of LATIN error indicator δ L (A.9) for Problem 1 and two choices of search direction H − : tangent operator (A.4) and constant operator (A.5)

Table 2: Total CPU time gain with respect to the direct method for a given level of error (ǫ = 10⁻²)

Table 3: CPU time gain on the preliminary step with respect to the LATIN-PGD for a given level of error (ǫ = 10⁻²)