Non-incremental boundary element discretization of parabolic models based on the use of the proper generalized decompositions

There are many ways to solve space–time linear parabolic partial differential equations by using the boundary element method (BEM). In general, standard techniques make use of an incremental strategy. In this paper we propose a novel, efficient non-incremental solution strategy for this kind of model. The proposed technique combines the BEM with a proper generalized decomposition (PGD), which allows a space–time separated representation of the unknown field within a non-incremental integration scheme.


Introduction
The boundary element method (BEM) allows efficient solution of partial differential equations whose kernel functions are known. The heat equation is one of these candidates when the thermal parameters are assumed constant (linear model). When the model involves large physical domains and long simulation time intervals, the amount of information that must be stored increases significantly. This drawback can be circumvented by using advanced strategies, as for example the multipole technique [1]. Another possibility is to employ the LTBEM approach. The basic idea behind this method is to first apply the Laplace transform, converting a time-dependent parabolic differential system into a time-independent elliptic boundary-value problem in the Laplace domain. However, once the differential system is solved in the transformed space, the reconstruction of the solution in the time domain requires the application of the inverse Laplace transform, and the solution accuracy depends on the choice of that inverse transform. Sutradhar [2] and Amado [3] provide recent results related to this method.
We propose in this paper a radically different alternative approach that leads to a separated solution of the space and time problems within a non-incremental integration strategy. The technique is based on a space–time separated representation of the unknown field that, introduced into the residual weighting formulation, allows the definition of a separated solution of the resulting weak form. The spatial step can then be treated by invoking the standard BEM to solve the resulting steady state problem defined in the physical space. The time problem, which reduces to a first order ordinary differential equation, is then solved using any standard appropriate integration technique (e.g. backward finite differences).
In the case of the linear transient heat equation, considered here for the sake of simplicity, the PGD (proper generalized decomposition) leads to the solution of a series of steady state diffusion–reaction problems (accurately solved by using the BEM) and a series of problems that consist of a simple time-dependent ODE. Separated representations were already applied for solving transient models in the context of finite element discretizations [4][5][6][7][8][9][10], but they have never been used in the BEM framework, and certainly in this context the main advantage is the possibility of defining non-incremental strategies as well as the possibility of avoiding the use of space–time kernels. In principle, this technique seems especially well suited to solving transient problems involving extremely small time steps.
We start by summarizing the main ideas of the proper generalized decomposition for the solution of linear parabolic problems. In the next section we will focus on the application of such a technique in the context of the BEM. Finally, some numerical examples will be presented and discussed.
We also assume that this field is known in a discrete manner, that is, at some points x_i (the nodes of a mesh or a grid) and at certain times t_p, where i ∈ [1, …, N_n] and p ∈ [1, …, P]. Now, we introduce the notation u_i^p ≡ u(x_i, t_p) and construct the matrix Q that contains the snapshots, whose column p contains the nodal values of the field at time t_p. The proper orthogonal decomposition (POD) of this discrete field consists in solving the eigenvalue problem related to the snapshot correlation matrix Q·Qᵀ, whose eigenpairs are denoted (λ_i, φ_i). When the field evolves smoothly, the magnitude of the eigenvalues decreases very fast, a fact that reveals that the evolution of the field can be approximated from a reduced number of modes (eigenvectors). Thus, if we define a cutoff value ε (ε = 10⁻⁸ × λ₁ in practice, λ₁ being the highest eigenvalue), only a reduced number of modes are retained. Let R (R ≪ N_n) be the number of modes retained, i.e. λ_i ≥ 10⁻⁸ × λ₁ for i = 1, …, R and λ_i < 10⁻⁸ × λ₁ ∀ i > R (the eigenvalues are assumed ordered). Thus, one could write:

u(x, t) ≈ Σ_{i=1…R} φ_i(x) · T_i(t),    (3)

where for the sake of clarity the space modes φ_i(x) will, from now on, be denoted as X_i(x). Eq. (3) represents a natural separated representation (also known as a finite sums decomposition). These modes could now be used to solve other ''similar'' problems, that is, models involving slight changes in the boundary conditions, the model parameters, etc. [11][12][13]. In [13] the use of a reduced basis was successfully combined with a BEM discretization. Another possibility is to compute the reduced basis from the standard transient solution within a short time interval (with respect to the whole time interval in which the model is defined) and then solve the remaining part of the time interval by employing the reduced basis. Obviously, both strategies introduce an error whose evaluation, control and reduction are challenging issues.
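The truncation rule above can be sketched in a few lines. The following snippet is only an illustration with a synthetic two-term field (not the paper's data): it builds the correlation matrix of the snapshots, solves its eigenvalue problem and retains the modes satisfying λ_i ≥ 10⁻⁸ × λ₁:

```python
import numpy as np

def pod_modes(Q, cutoff=1e-8):
    """Spatial POD modes of the snapshot matrix Q (N_n nodes x P snapshots)."""
    C = Q @ Q.T                              # snapshot correlation matrix
    lam, phi = np.linalg.eigh(C)             # eigenvalues in ascending order
    lam, phi = lam[::-1], phi[:, ::-1]       # reorder: lambda_1 >= lambda_2 >= ...
    R = int(np.sum(lam >= cutoff * lam[0]))  # modes with lambda_i >= 1e-8 * lambda_1
    return lam[:R], phi[:, :R]

# synthetic smooth field made of two separated terms (an assumed test case)
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 1.0, 64)
Q = (np.outer(np.sin(np.pi * x), np.exp(-t))
     + 0.1 * np.outer(np.sin(2.0 * np.pi * x), np.exp(-4.0 * t)))

lam, X = pod_modes(Q)
print(lam.size)  # -> 2: the rank-2 field is captured by exactly 2 retained modes
```

The retained columns of `X` play the role of the space modes X_i(x) used in the sequel.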
One possibility for constructing an adaptive reduced approximation basis, which should be the best reduced approximation basis for the treated problem, consists in alternating a reduction step (based on the application of the proper orthogonal decomposition) and an enrichment stage that improves the quality of the reduced approximation basis in order to capture all the solution features. We recently proposed an enrichment technique based on the use of Krylov subspaces generated by the equation residual. This technique, known as ''a priori'' model reduction, was originally proposed in [14], widely described in [15] and successfully applied for solving complex fluid flows within the kinetic theory framework [16,7], for speeding up boundary element discretizations [13] and for thermomechanical simulations [17]. However, some difficulties were noticed in the application of this strategy: (i) the enrichment based on the use of Krylov subspaces is far from optimal in a variety of models (e.g. the wave equation) and (ii) the algorithm remains incremental in nature.
From the previous analysis we can conclude that: (i) the transient solution of numerous models can be expressed using a small number of products, each one involving a function of time and a function of space, and (ii) the functions involved in these functional products can be determined simultaneously by applying an appropriate algorithm.
In what follows we describe a possible strategy for computing these separated functional couples. From our numerical experiments we noticed that this technique converged in all the models analyzed so far, and that the final decomposition is not far from the one that results from applying the proper orthogonal decomposition to the model solution; that is, the number of functional couples was quite similar, the POD decomposition being the optimal one. However, at present we cannot prove this empirical observation for general models.

On the proper generalized decomposition: a survey
Some models encountered in science and engineering are defined in multidimensional spaces (such as the ones involved in quantum mechanics or in kinetic theory descriptions of materials, including complex fluids) and suffer from the so-called curse of dimensionality when usual mesh-based discretization techniques are applied. Other models involve transient fields that, even when defined in three-dimensional physical spaces, must be solved over large time intervals using very small time steps.
In the first kind of model the difficulty is quite natural and its solution needs new strategies. One possibility lies in the use of sparse grids [18]. However, as argued in [19], the use of sparse grids is restricted to models of moderate multidimensionality (up to about 20 dimensions). Another technique able to circumvent, or at least alleviate, the curse of dimensionality consists of using a separated representation of the unknown field (see [20,21] for some mathematical results on this topic). Basically, the separated representation of a generic function u(x₁, …, x_D) (also known as a finite sums decomposition) writes:

u(x₁, …, x_D) ≈ Σ_{i=1…N} F_i¹(x₁) × ⋯ × F_i^D(x_D).

This kind of representation is not new; it was widely employed in recent decades in the framework of quantum chemistry. In particular, the Hartree–Fock approach (which involves a single product of functions) and post-Hartree–Fock approaches (such as the MCSCF, which involves a finite number of sums) make use of a separated representation of the wavefunction [22].
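The storage argument behind this representation is easy to quantify. The sketch below, with assumed illustrative sizes n, D and N, compares the number of values needed on a full tensor-product grid with the count for the separated form:

```python
# storage count: full tensor-product grid vs separated (finite sums) form
n = 100                  # nodes per dimension (assumed)
D = 10                   # number of dimensions (assumed)
N = 5                    # terms in the finite sums decomposition (assumed)

full = n ** D            # one value per grid node: 100^10 = 10^20 values
separated = N * D * n    # N terms x D one-dimensional functions x n nodes each

print(full, separated)   # 100000000000000000000 5000
```

The exponential count of the full grid is replaced by a count that grows only linearly with the dimension D, which is the essence of the alleviation of the curse of dimensionality.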
We recently proposed a technique able to construct, in a way completely transparent to the user, the separated representation of the unknown field involved in a partial differential equation. This technique, originally described and applied to multi-bead-spring FENE models of polymeric systems in [4], was extended to transient models of such complex fluids in [5]. Other more complex models (involving different couplings and non-linearities) based on the reptation theory of polymeric liquids were analyzed in [6], and in [7] separated representations were applied in the stochastic framework. An introductory overview of the application of separated representations in the multi-scale modeling of materials can be found in [8].
The present work focuses on the second kind of model described at the beginning of this section. For the sake of simplicity, in what follows we consider models defined in spaces of moderate dimension (dD, d = 1, 2, 3, physical or conformation spaces) but whose solutions evolve over large time intervals. In this context, if one uses standard incremental time discretizations, in the general case (models involving time-dependent parameters, non-linear models, etc.) one must solve at least one linear system at each time step. When the time step becomes too small as a consequence of stability requirements, and the simulation time interval is large enough, the simulation becomes inefficient. To illustrate this scenario, one can consider the simple reaction–diffusion model that describes the degradation of plastic materials, where the characteristic time of the chemical reaction involved in the material degradation is of the order of microseconds, while the one related to the diffusion of the chemical substances (which also represents the material degradation characteristic time itself) is of the order of years. In this case standard incremental techniques must be replaced by more efficient ones.
Pierre Ladevèze proposed several years ago a powerful technique for addressing this kind of challenging model, which he called the LATIN method [23]. The LATIN method integrates many ingredients leading to a robust, powerful, efficient and accurate discretization technique especially well adapted to treating transient multi-scale non-linear models. The two most outstanding ingredients are (i) the decoupling between a linear global problem and a non-linear local one, both defined in the whole space–time domain, and (ii) a space–time separated representation of the model variables in order to accelerate the solution of the linear global problem. This separated representation was called by Ladevèze in the 1980s the ''radial approximation'', and to our knowledge it was the first time that separated representations were applied in computational mechanics. The interested reader can refer to [24] and the references therein for a recent overview and the state of the art of the LATIN approach in multi-scale modeling.

Illustrating the discretization based on separated representations
In this section, we illustrate the discretization of partial differential equations using a separated representation of the unknown field.
Let us consider a generic partial differential equation with homogeneous initial and boundary conditions. We illustrate the strategy for constructing the functional products on an academic transient problem, the transient linear heat equation

∂u/∂t = a Δu + f,

with homogeneous initial and boundary conditions, where a is the diffusion coefficient and f the source term, and whose solution is sought in the separated form

u(x, t) ≈ Σ_{i=1…N} X_i(x) · T_i(t).    (5)

The weak formulation yields: find u(x, t) such that

∫_I ∫_Ω u* · (∂u/∂t − a Δu − f) dx dt = 0    (7)

for all the functions u* in an appropriate functional space.
We now compute the functions involved in the sum (5). We suppose that the set of functional couples {(X_i, T_i)}, i = 1, …, n, with 0 ≤ n < N, is already known (they have been computed previously) and that at the present iteration we search the enrichment couple (R(t), S(x)) by applying an alternating direction fixed point algorithm; after convergence this couple will constitute the next functional couple (X_{n+1}, T_{n+1}). Hence, at the present iteration n, we write:

u^{n+1}(x, t) ≈ Σ_{i=1…n} X_i(x) · T_i(t) + S(x) · R(t).    (8)

Table 1. Error ē_n = 10² × e_n as a function of the number of functional couples considered to define the approximation of the unknown function u(x, t) and of the discretization employed (n_t = 256).
The weighting function u* is then assumed as

u* = S*(x) · R(t) + S(x) · R*(t).    (9)

Introducing (8) and (9) into (7) results in Eq. (10), to which we apply an alternating direction fixed point algorithm for computing the couple of functions (R, S).

Computing the function S(x): First, we suppose that R is known, implying that R* vanishes in (9). Thus, Eq. (10) writes as a weak problem, Eq. (11), involving the coefficients

a_t = ∫_I R(t) · (dR/dt)(t) dt,  b_t = ∫_I R²(t) dt,
a_t^i = ∫_I R(t) · (dT_i/dt)(t) dt,  b_t^i = ∫_I R(t) · T_i(t) dt,
g_t(x) = ∫_I R(t) · f(x, t) dt.    (12)

The weak formulation (11) is satisfied for all S*; therefore, we can come back to the associated strong formulation

a_t S − a b_t ΔS = g_t − Σ_{i=1…n} (a_t^i X_i − a b_t^i ΔX_i),    (13)

which can be solved by using any appropriate discretization technique for computing the space function S(x).

Computing the function R(t):
From the function S(x) just computed, we search R(t). In this case S* vanishes in (9) and Eq. (10) reduces to Eq. (14), in which all the spatial functions can be integrated over Ω. Thus, by introducing notations analogous to the previous ones, now involving the integrals over Ω of the products of S with itself, with the functions X_i and with the source term (Eq. (15)), Eq. (14) reads as Eq. (17): a first order ordinary differential equation that can be solved easily (even for extremely small time steps) from its initial condition.
These two steps must be repeated until convergence, that is, until verifying that both functions reach a fixed point. If we denote by R^(q)(t) and R^(q−1)(t) the computed functions R(t) at the present and previous iterations, respectively, and similarly S^(q)(x) and S^(q−1)(x) for the space functions, the stopping criterion used in this work writes:

e = ‖R^(q)(t) · S^(q)(x) − R^(q−1)(t) · S^(q−1)(x)‖ < 10⁻⁸,    (18)

where 10⁻⁸ represents the square root of the machine precision. We denote by Q_{n+1} the number of iterations needed to solve this non-linear problem determining the enrichment couple of functions X_{n+1}(x) and T_{n+1}(t). After reaching convergence we write X_{n+1}(x) = S(x) and T_{n+1}(t) = R(t). The enrichment procedure must continue until the convergence of the global enrichment procedure is reached at iteration N, when the separated representation of the unknown field writes as in Eq. (5). In our numerical solutions the global stopping criterion was the relative error E = ‖u − u_ref‖/‖u_ref‖ < ε for models whose exact solution u_ref was known, and the norm of the equation residual for models whose exact solution was not known, with ε = 10⁻⁶ in all the simulations reported in this work.
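The whole enrichment loop can be prototyped compactly. The sketch below is a minimal illustration, not the paper's implementation: it solves the 1D heat equation ∂u/∂t = a ∂²u/∂x² + f with homogeneous conditions, replaces the BEM spatial solver by a finite-difference Laplacian, and uses rectangle-rule quadrature and backward Euler time stepping. The names a_t, b_t, g_t mirror the coefficients defined above; grid sizes, mode counts and tolerances are illustrative assumptions:

```python
import numpy as np

a = 1.0                                  # diffusion coefficient
nx, nt = 49, 256                         # interior space nodes, time steps (assumed)
x = np.linspace(0.0, 1.0, nx + 2)[1:-1]  # interior nodes of (0, 1)
t = np.linspace(0.0, 0.3, nt)
dx, dt = x[1] - x[0], t[1] - t[0]

# 1D Laplacian with homogeneous Dirichlet boundary conditions
D2 = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
      + np.diag(np.ones(nx - 1), -1)) / dx**2

f = np.ones((nx, nt))                    # unit source term f(x, t) = 1

def itime(y):                            # rectangle-rule integral over I (last axis)
    return dt * y.sum(axis=-1)

def ispace(y):                           # rectangle-rule integral over Omega (first axis)
    return dx * y.sum(axis=0)

def tdot(T):                             # backward-difference time derivative
    d = np.empty_like(T)
    d[1:] = np.diff(T) / dt
    d[0] = 0.0
    return d

Xs, Ts = [], []                          # functional couples X_i(x), T_i(t)
for n in range(6):                       # enrichment loop
    R, S = np.ones(nt), np.ones(nx)
    for q in range(50):                  # alternating direction fixed point
        S_old = S.copy()
        # space problem: (a_t I - a b_t D2) S = g_t - sum_i (a_t^i X_i - a b_t^i D2 X_i)
        a_t, b_t = itime(R * tdot(R)), itime(R * R)
        rhs = itime(f * R)
        for X, T in zip(Xs, Ts):
            rhs -= itime(R * tdot(T)) * X - a * itime(R * T) * (D2 @ X)
        S = np.linalg.solve(a_t * np.eye(nx) - a * b_t * D2, rhs)
        if np.linalg.norm(S) < 1e-12:    # residual exhausted, stop enriching this couple
            break
        # time problem: m R'(t) = k R(t) + g(t), backward Euler from R(0) = 0
        m = ispace(S * S)
        k = a * ispace(S * (D2 @ S))     # negative for the Dirichlet Laplacian
        g = ispace(f * S[:, None])
        for X, T in zip(Xs, Ts):
            g -= ispace(S * X) * tdot(T) - a * ispace(S * (D2 @ X)) * T
        R = np.zeros(nt)
        for p in range(1, nt):
            R[p] = (m * R[p - 1] + dt * g[p]) / (m - dt * k)
        if np.linalg.norm(S - S_old) <= 1e-8 * np.linalg.norm(S):
            break                        # fixed point reached
    Xs.append(S)
    Ts.append(R)

u_pgd = sum(np.outer(X, T) for X, T in zip(Xs, Ts))  # u(x, t) ~ sum_i X_i(x) T_i(t)
```

Reconstructing `u_pgd` and comparing it with a standard incremental backward Euler solution on the same grid is a convenient correctness check for such a prototype.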
Discussion: The strategy just proposed needs the solution of about N × Q̄ space and time problems (with Q̄ = (Q₁ + ⋯ + Q_N)/N and N the number of functional couples needed to approximate, up to the desired precision, the searched solution). Thus, one must solve N × Q̄ dD problems, d = 1, 2, 3, whose complexity depends on the spatial mesh considered, and also N × Q̄ 1D problems (defined in the time interval I) that only need the solution of an ordinary differential equation from its initial condition. Obviously, even for extremely small time steps, the solution of these transient 1D problems does not introduce any difficulty.
If instead of applying the proper generalized decomposition just discussed one performs a standard incremental solution, P dD models, d = 1, 2, 3, must be solved (P being the number of time steps, i.e. P = T_max/Δt). The time step in incremental strategies has a direct impact on the convergence and stability of the numerical scheme.
In all the analyzed cases N and Q̄ are of the order of tens, which implies the solution of about a hundred dD problems defined in Ω, instead of the thousands (or even millions) needed when solving those models using standard incremental solvers. A first comparison between both kinds of approaches (the one based on the separated representation and the one based on standard incremental strategies) was presented in [25].

PGD-based boundary element discretizations
The BEM is applied for solving Eq. (13). For this purpose that equation is rewritten as a steady diffusion–reaction equation, Eq. (22), which defines a steady state elliptic problem with constant coefficients. It is important to note that a_t, b_t, a_t^i and b_t^i are evaluated at each iteration of the fixed point algorithm from the solution computed at the previous iteration.
The computation of integrals involved in (15) requires the knowledge of the field S in the domain. This field is reconstructed by the BEM and approximated by using moving least squares [26].
As depicted in Fig. 1, the PGD algorithm involves N × Q̄ solutions of Eqs. (22) and (17), the computation of their associated integrals (12) and (15), and the reconstruction of S in Ω. It is important to note that the matrices involved in the discretization are computed only once. Thus, the N × Q̄ solutions only involve matrix–vector products.
Thus, the solution complexity when using the PGD is close to the one related to the standard BEM solution of steady state models. However, the storage cost is slightly higher. Moreover, the cost associated with the computation of the function R from Eq. (17) is simply negligible because it consists in the solution of a simple ordinary differential equation. Even when the number of time steps (n_t) is extremely large, its impact on the computing time is negligible because it only affects the solution of the ODE defining the function R. The number of time steps must nevertheless be large enough to ensure an accurate evaluation of the integrals (12).

Fig. 6. Functional couples {X̂_i(x), T̂_i(t)} for n_Γ = 8 and n_t = 256.

Heat equation with source term
We consider the heat equation in a simple rectangular domain Ω = (0, 1) × (0, 1) with homogeneous boundary conditions, and a time interval I = (0, 0.3] long enough to reach the steady state. The source term is set to a unit value, f(x, t) = 1, and both the initial and boundary conditions vanish.
The exact solution writes [27]

u_ref(x, t) = ∫₀ᵗ ∫_Ω G(x, y, ξ, η, t − τ) dξ dη dτ,

with

G(x, y, ξ, η, t) = 4 Σ_{m=1…∞} Σ_{n=1…∞} sin(mπx) sin(mπξ) sin(nπy) sin(nπη) e^{−(m²+n²)π²at}.

The domain boundary Γ consists of n_Γ × n_Γ segments Γ_i. The time interval I is discretized by using n_t nodes, uniformly distributed. The moving least squares (MLS) approximation is built using a kernel function whose support radius is 3.5 × 1/n_Γ. We have verified that the choice between cubic spline and exponential kernel functions has no incidence on the computed results. Quadratic consistency of the approximation is enforced by using a fully quadratic basis for defining the MLS approximation. The MLS approximation of the time functions is defined by using a support radius of 2.5 × 1/n_I and linear consistency.
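The MLS reconstruction used here can be sketched in one dimension. The snippet below is an illustrative stand-in, not the paper's implementation: it uses a Wendland weight instead of the cubic spline or exponential kernels mentioned above, 1D data, and an assumed node spacing, but it shows the weighted local polynomial fit with a support radius of 3.5 times the node spacing and quadratic consistency:

```python
import numpy as np

def mls_eval(xs, us, x_eval, radius, degree=2):
    """Evaluate a moving least squares fit of scattered data (xs, us) at x_eval."""
    out = np.empty_like(x_eval)
    for j, xe in enumerate(x_eval):
        r = np.abs(xs - xe) / radius
        # Wendland C2 weight (assumed stand-in kernel), compactly supported on r < 1
        w = np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)
        P = np.vander(xs - xe, degree + 1, increasing=True)  # local shifted monomials
        A = P.T @ (w[:, None] * P)                           # weighted normal equations
        out[j] = np.linalg.solve(A, P.T @ (w * us))[0]       # value of the local fit at xe
    return out

xs = np.linspace(0.0, 1.0, 21)                   # nodal data, spacing h = 0.05
us = np.sin(np.pi * xs)
xe = np.linspace(0.1, 0.9, 9)                    # evaluation points inside the domain
approx = mls_eval(xs, us, xe, radius=3.5 * 0.05) # support radius 3.5 h, as in the text
```

With a fully quadratic basis the fit reproduces quadratics exactly, so the error on a smooth field like sin(πx) is small even with few nodes per support.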
The enforcement of the homogeneous boundary conditions is quite simple: it suffices, for each functional couple, that S vanishes on the domain boundary and that R vanishes at t = 0, conditions that are applied during the solution of Eqs. (22) and (17).
First we analyze the convergence rate as a function of the space discretization (i.e. n_Γ). For all the space meshes the time discretization (i.e. n_t) is adapted in order to reach the maximum precision. Table 1 and Fig. 2 show the evolution of the L² error in time and space as a function of the level of approximation, that is, as a function of the number of functional couples X_i(x) · T_i(t) involved in the approximation of u(x, t), for different meshes. This error, e_n, is defined as the space–time L² norm of the difference between the computed and the reference solutions. We can notice that for a given number of functional couples the error e_n decreases when n_Γ increases, reaching an asymptotic value. To reduce the value of the error we must increase n_Γ as well as the number of functional couples X_i(x) · T_i(t) involved in the functional approximation. In the case of the example addressed here we must consider 9 functional couples to reach a quadratic convergence rate for 4 ≤ n_Γ ≤ 64. Fig. 3 depicts the functions {X₁(x), T₁(t)}, {X₂(x), T₂(t)} and {X₃(x), T₃(t)} for n_Γ = 8 and n_t = 256.
Heat equation with non-homogeneous initial and boundary conditions

The enforcement of non-homogeneous initial and boundary conditions in the context of the PGD deserves some additional comments. The model is now given by Eq. (24) together with the conditions

u(x, t) = û(x, t), x ∈ Γ_û,    (25)
∂u/∂n (x, t) = q̂(x, t), x ∈ Γ_q,    (26)
u(x, t = 0) = u⁰(x).    (27)

We define û verifying the boundary conditions (25) and (26) as well as the initial condition (27). Introducing the change of variable v = u − û, the solution of (24)-(27) is equivalent to finding the solution v of Eq. (28), subject to homogeneous initial and boundary conditions.
To solve Eq. (28) efficiently by using the PGD method we must provide a separated representation of the function û:

û(x, t) ≈ Σ_i X̂_i(x) · T̂_i(t).    (29)

A procedure for generating such a decomposition was given in [28] in the case of the Laplace equation. Here, we propose an alternative well adapted to the transient heat equation. Inspired by the POD, we define the matrix C_û containing the snapshots of the function û, each column of which contains the nodal values of û at time t_i, the first column of C_û being u⁰.
We introduce the notation û_i = û(x, t_i). The functions û_i, verifying the boundary conditions expressed by (25) and (26), can be computed inside the domain and on Γ_q by solving the steady problem (30). Now, a singular value decomposition (SVD) of C_û allows computing the separated representation of û involving X̂_i and T̂_i. The number of columns, that is, of snapshot times t_i, to be considered in the construction of C_û must be large enough to describe û accurately, but much lower than the number of time steps considered in the time discretization of the partial differential equation. The cost associated with the computation of the functions û_i is quite reduced because the BEM discretization of Eq. (30) must in any case be performed when one proceeds with the PGD computation of the function S.

In the example considered here the source term writes f(x, t) = ¼(x² + y²) − (t − 0.1), and we enforce on the domain boundary the exact solution of the problem, i.e. u_ref(x, t) = ¼(x² + y²)(t − 0.1). Fig. 6 depicts the two most significant functional couples coming from the SVD decomposition of C_û that allow approximating û.
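The SVD step can be sketched directly. Below, a synthetic rank-one snapshot matrix (a 1D slice of the separable field ¼x²(t − 0.1), an assumed stand-in for C_û) is decomposed and truncated with the same 10⁻⁸ cutoff used for the POD:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 51)
t = np.linspace(0.0, 0.3, 32)                 # snapshot times (assumed)

# assumed stand-in for C_uhat: nodal snapshots of 1/4 * x^2 * (t - 0.1), column-wise
C = np.outer(0.25 * x**2, t - 0.1)

U, s, Vt = np.linalg.svd(C, full_matrices=False)
R = int(np.sum(s >= 1e-8 * s[0]))             # retained functional couples
Xh, Th = U[:, :R] * s[:R], Vt[:R, :]          # u_hat ~ sum_i Xh_i(x) * Th_i(t)

recon = Xh @ Th                               # separated reconstruction of the snapshots
print(R)                                      # -> 1: a rank-one field needs a single couple
```

The columns of `Xh` and rows of `Th` play the roles of X̂_i(x) and T̂_i(t); for the smooth lifting fields considered in the text, a couple of terms suffices.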
The field v obtained by applying the PGD method is accurately approximated by a single functional couple (as expected from the expression of the exact solution); Fig. 7 depicts the two functions composing this couple. Fig. 8 shows the evolution of the error e_n versus the space discretization for different levels of approximation n. A quadratic convergence is noticed.

Heat equation with incompatible boundary and initial condition
We consider the heat equation in a simple rectangular domain Ω = (0, 1) × (0, 1) and a time interval I = (0, 0.3], with a null source term, f(x, t) = 0. The model is solved by assuming a homogeneous boundary condition and an initial condition taking a unit value everywhere except on the domain boundary, on which it vanishes.
The number of functional couples to be considered in the separated representation of the problem solution is now higher because of the discontinuity involved in the initial condition.

We also considered a non-linear source term f depending on the solution u. The strategy adopted for accounting for the non-linearity of f lies in expressing f in a separated form by using, as previously, the POD. Because the function f depends on u, we should perform such a decomposition at each iteration of the PGD enrichment stage, or even within the fixed point algorithm. In the latter case we should perform the decomposition of the source term N × Q̄ times, instead of the N decompositions involved by the former alternative. In any case, here we are only interested in evaluating the possibility of solving non-linear transient models, the aspects related to computational efficiency being a work in progress.

Conclusion
The strategy proposed in this paper allows the definition of a non-incremental boundary element method for solving linear parabolic partial differential equations. Thus, instead of applying the standard incremental BEM with its mandatory space–time kernel, we only need to know the steady kernel associated with the steady-state diffusion–reaction problem that defines the space functions. The time functions, defined over the whole time domain, are computed by solving an ODE.
Obviously, by using an appropriate linearization the proper generalized decomposition based BEM could be extended for solving non-linear models.
In principle, even if the computer code is not optimized from a computing-time point of view, the use of the proposed technique should alleviate the storage needs in comparison with standard incremental strategies. Moreover, significant CPU time savings are expected, as was noticed and reported in some of our former works in the finite element framework [25].