Advanced Multiscale Modelling of Composite Welding Processes

This work focuses on the solution of the thermal models usually encountered in composite plate welding processes, which involve a certain number of numerical difficulties related to: (i) the very fine mesh required due to the small domain thickness with respect to the other characteristic dimensions, as well as to the presence of a heat source moving on the domain surface; (ii) the long simulation times induced by the small thermal conductivity of polymers, the important thermal solicitations, as well as the movement of the heat source; and (iii) the necessity of defining a homogenized thermal conductivity, which can vary significantly from one point to another in the domain. In this paper we explore the applicability of a novel high resolution strategy able to solve quickly and accurately thermal models involving billions of degrees of freedom. This technique opens new perspectives in computational mechanics, and more specifically in material homogenization.


INTRODUCTION
Innovative industrial processes dedicated to carbon fibre reinforced composites (CFRP) aim to achieve in a single stage the preform making, through Automated Tape Placement, and its consolidation. In this approach, fibre tapes are placed by a robot while being welded to the previous layers, in the case of thermoplastics, or polymerized, in the case of thermosets, by a local heating (with the help of a laser, a hot gas torch or an electron beam).
These new processes are cost-effective only if they are quick enough, which implies very fast heating and cooling and therefore important thermal gradients. These gradients, associated with the differences between the dilatation coefficients of fibres and matrix, lead to substantial residual strains and stresses that are even enhanced, in the case of thermoplastics, by the variations of specific volume due to crystallization phenomena.
These are important drawbacks. Some of them can be partly remedied, as for example the macroscopic part distortion, by controlling the tape placement; some others, such as micro-cracks, cannot.
The thermal models involved in the numerical modelling of such processes call for the solution of a certain number of numerical difficulties related to: (i) the very fine mesh required due to the small domain thickness with respect to the other characteristic dimensions, as well as to the presence of a heat source moving on the domain surface; (ii) the long simulation times induced by the small thermal conductivity of polymers, the important thermal solicitations, as well as the movement of the heat source; and (iii) the necessity of defining a homogenized thermal conductivity, which can vary significantly from one point to another in the domain.
The consequence of the first difficulty is the necessity of using very fine meshes, and therefore a large number of degrees of freedom, with a significant impact on the efficiency of the simulation algorithms. Moreover, when the transient problem must be solved incrementally, the extremely large number of time increments induces a second difficulty. To alleviate this drawback we proposed in [3] a parallel time discretization combined with an adaptive reduced modelling.
The other difficulty concerns the material heterogeneity at the microscopic scale. Even if, in a first approximation, we neglect the dependence of the thermal parameters (specific heat, thermal conductivity, ...) on the temperature (linear modelling), the macroscopic thermal conductivity depends on the considered point in the macroscopic part, because it depends strongly on the microstructure details (fibre volume fraction, fibre orientations, ...). Due to the small size of the fibres, very fine models are required in order to analyze the microscopic patterns, which implies the use of high resolution homogenization techniques. As just discussed, the main drawback comes from the extremely large number of degrees of freedom involved in such a microscopic analysis, where the level of detail can be of the order of the size of a fibre. In this paper we focus on a novel numerical technique that could be an appealing choice for treating this kind of numerical model.

Thermal Model In The Whole Domain
A generic point x ∈ Ω will be denoted by (x, y, z). Without loss of generality, we consider ∂Ω_D = {x : z = L_z} and ∂Ω_N the complementary part of the domain boundary. The temperature T(x, t) is prescribed on ∂Ω_D to a given temperature T_g(x, t). On ∂Ω_N we assume a null heat flux across the boundary, that is (K(T, x) ∇T) · n = 0, where K(T, x) denotes the homogenized thermal conductivity tensor and n the unit outward vector defined on the domain boundary. Moreover, an initial condition must be imposed at the initial time: T(x, t = 0) = T_0(x). The usual variational formulation of the conduction heat equation is then expressed by:

∫_Ω ρc T* (∂T/∂t) dΩ + ∫_Ω ∇T* · (K(T, x) ∇T) dΩ = ∫_Ω T* f(x, T, t) dΩ,  ∀ T* ∈ H¹_0(Ω)   (1)

where H¹ and H¹_0 are the usual Sobolev functional spaces, and the specific heat ρc and the homogenized thermal conductivity tensor depend on space (in non-homogeneous media) and on the temperature. The source term f(x, T, t) accounts for possible coupling or phase change phenomena.

HIGH RESOLUTION SOLVERS BASED ON SEPARATED REPRESENTATIONS
We first consider a representative volume element Ω_rve, which for composite material applications must contain numerous fibres. Standard homogenization techniques require the solution, in such a representative volume, of three steady state thermal models with three different prescribed boundary conditions. These problems can be written as:

∇ · (k(x) ∇T^i) = 0 in Ω_rve,  T^i(x ∈ ∂Ω_rve) = x_i,  i = 1, 2, 3   (2)

where the superscript i refers to each one of the three problems to be solved.
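In 1D the cell problem can be checked against a closed-form result: for a layered medium the homogenized conductivity is the harmonic mean of the phase conductivities. A minimal finite-volume sketch (the two phase conductivities below are illustrative values):

```python
import numpy as np

# 1D steady cell problem: d/dx( k(x) dT/dx ) = 0 on ]0,1[, T(0)=0, T(1)=1.
# The resulting flux is the homogenized conductivity; for a layered medium
# it must equal the harmonic mean of k.
N = 400                                    # cells
x = (np.arange(N) + 0.5) / N               # cell centres
k = np.where(x % 0.1 < 0.05, 10.0, 1.0)    # two alternating phases (illustrative)

kf = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])   # harmonic interface conductivities
h = 1.0 / N
A = np.zeros((N, N)); b = np.zeros(N)
for i in range(N):
    if i > 0:
        A[i, i] += kf[i-1] / h**2; A[i, i-1] -= kf[i-1] / h**2
    else:                                  # boundary T(0)=0 at half-cell distance
        A[i, i] += 2.0 * k[i] / h**2
    if i < N - 1:
        A[i, i] += kf[i] / h**2; A[i, i+1] -= kf[i] / h**2
    else:                                  # boundary T(1)=1
        A[i, i] += 2.0 * k[i] / h**2; b[i] += 2.0 * k[i] / h**2
T = np.linalg.solve(A, b)

flux = 2.0 * k[-1] * (1.0 - T[-1]) / h     # flux at the right boundary
k_harmonic = 1.0 / np.mean(1.0 / k)
print(flux, k_harmonic)
```

The computed boundary flux plays the role of the homogenized conductivity and matches the harmonic mean up to round-off; in 3D the three problems of Eq. (2) provide the full conductivity tensor in the same way.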
Due to the characteristic size of the fibres, a very fine mesh is required to solve accurately the thermal models related to Eq. (2). Thus, it is usual to define 3D models consisting of billions of nodes. Nowadays, the performance of computers allows solving, in a reasonable time, problems that rarely exceed a few million degrees of freedom (dof).
In order to circumvent this difficulty we propose to use a separated representation in combination with a tensor product reduced approximation basis, which leads to a discrete model of size N × D (N being the number of nodes in each space direction and D the dimension of the space) instead of the N^D characteristic of grid-based discrete models. Thus, if we suppose N ≈ 10^5 and D = 3, the size of the discrete model will be 3 × 10^5 instead of 10^15.
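The storage gain can be made concrete with a small sketch: a field stored as Q separated modes over D directions of N nodes requires Q·N·D values, while its evaluation on the full grid needs N^D entries (N is kept small here only so that the full grid fits in memory):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, Q = 20, 3, 4                    # 1D nodes, dimensions, separated modes

# Separated storage: Q modes x D directions x N nodes each
factors = rng.standard_normal((Q, D, N))
separated_size = Q * D * N            # 240 stored values
full_size = N ** D                    # 8000 grid values (10^15 for N=1e5, D=3)

# Reconstruct the full grid tensor from the separated factors
T_full = np.zeros((N,) * D)
for q in range(Q):
    T_full += np.einsum('i,j,k->ijk', *factors[q])

print(separated_size, full_size, T_full.shape)
```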
In the following paragraphs we summarize the main elements of this technique, which we have successfully applied to the solution of the multidimensional problems usually encountered in kinetic theory models of complex fluids [1].
We consider the steady conduction heat transfer problem

−∇ · (k(x) ∇T) = g(x) in Ω = Ω_rve = ]0, L[³

with the temperature vanishing on the domain boundary, ∂Ω_D = ∂Ω, T(x ∈ ∂Ω_D) = 0 (the non-homogeneous case will be addressed later), where the temperature T and the microscopic thermal conductivity tensor k depend on the space coordinates x = (x1, x2, x3). The meaning of the source term g(x) will be addressed later.
The problem solution can be written in the separated form:

T(x1, x2, x3) = Σ_{j=1}^{∞} α_j F_{1j}(x1) F_{2j}(x2) F_{3j}(x3)

where F_{kj} is the j-th basis function, with unit norm, which only depends on the k-th coordinate.
It is well known that the solutions of numerous problems can be accurately approximated using a finite (sometimes very small) number of approximation functions, i.e.

T(x1, x2, x3) ≈ Σ_{j=1}^{Q} α_j F_{1j}(x1) F_{2j}(x2) F_{3j}(x3)

The previous expression implies the same number of approximation functions in each dimension, but each of these functions could be expressed in discrete form using a different number of parameters (nodes in the 1D grids). Now, an appropriate numerical procedure is needed for computing the coefficients α_j as well as the Q approximation functions in each dimension.
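The rapid decay of the number of required modes can be observed on any smooth field: sampling it on a 2D grid, the singular value decomposition provides the best rank-Q separated approximation, and the error drops quickly with Q. A small sketch with an arbitrarily chosen illustrative field:

```python
import numpy as np

# A smooth 2D field sampled on a grid: its SVD provides, for each rank Q,
# the best separated approximation in the Frobenius norm.
n = 200
x = np.linspace(0.0, 1.0, n)
F = np.exp(np.outer(x, x)) * np.sin(np.pi * np.add.outer(x, x))

U, s, Vt = np.linalg.svd(F, full_matrices=False)
errs = []
for Q in (1, 3, 5):
    FQ = (U[:, :Q] * s[:Q]) @ Vt[:Q]          # rank-Q separated approximation
    errs.append(np.linalg.norm(F - FQ) / np.linalg.norm(F))
print(errs)                                    # the error decreases rapidly with Q
```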
The proposed numerical scheme is an iterative procedure that solves, at each iteration n, the following two steps.

Step 1: projection. Introducing the approximations of T and of the test function T* into the variational formulation, we assume that g(x) and k(x) can be written in a separated form analogous to the one used for T. This kind of separated representation can be easily obtained using a singular value decomposition (in 2D) or an alternating least squares technique in the general multidimensional case. For this purpose we only need to know the values of the scalar (or tensor) field in a cloud of nodes. The resulting equation, Eq. (9), involves integrals of products of three functions, each one defined in a different dimension. Let ∏_{k=1}^{3} h_k(x_k) be one of these functions to be integrated. The integral over Ω can be performed by integrating each function on its 1D definition interval and then multiplying the three computed integrals:

∫_Ω ∏_{k=1}^{3} h_k(x_k) dx = ∏_{k=1}^{3} ∫_0^L h_k(x_k) dx_k

which makes a fast numerical 1D integration possible. Now, due to the arbitrariness of the coefficients α*_j, Eq. (9) allows computing the n approximation coefficients α_j by solving the resulting linear system of size n × n. This problem is linear and moreover rarely exceeds the order of tens of degrees of freedom. Thus, even if the resulting coefficient matrix is densely populated, the time required for its solution is negligible with respect to the one required for performing the approximation basis enrichment (step 2).
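The factorized integration rule above can be verified numerically: for a product of 1D functions, the product of three 1D trapezoid integrals coincides with the trapezoid rule applied on the full 3D grid. A minimal sketch with illustrative factors:

```python
import numpy as np

n, L = 64, 1.0
x, h = np.linspace(0.0, L, n, retstep=True)
w = np.full(n, h); w[0] = w[-1] = h / 2.0           # 1D trapezoid weights

h1, h2, h3 = np.sin(np.pi * x), np.exp(-x), x**2    # illustrative 1D factors

separated = (w @ h1) * (w @ h2) * (w @ h3)          # three 1D integrals
H = np.einsum('i,j,k->ijk', h1, h2, h3)             # full 3D grid (n^3 values)
full = np.einsum('ijk,i,j,k->', H, w, w, w)         # tensor-product rule

print(separated, full)                               # identical up to round-off
```

The 1D route costs O(n) operations per factor instead of the O(n³) of the full-grid quadrature, which is what makes the projection step cheap.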
Step 2: enrichment. The approximation basis is enriched using the approximation of T given by:

T(x1, x2, x3) = Σ_{j=1}^{n} α_j F_{1j}(x1) F_{2j}(x2) F_{3j}(x3) + R_1(x1) R_2(x2) R_3(x3)

This leads to a non-linear variational problem whose solution provides the three functions R_k(x_k). The functions F_{k(n+1)}(x_k) are finally obtained by normalizing, after convergence of the non-linear problem, the functions R_1, R_2, R_3.
To solve this problem we introduce a discretization of the functions R_k(x_k). Each of these functions is approximated using a 1D finite element description. If we assume that P_k nodes are used to construct the interpolation of function R_k(x_k) on the interval [0, L], then the size of the resulting discrete non-linear problem is Σ_{k=1}^{3} P_k. The price to pay for avoiding a mesh of the whole domain is the solution of a non-linear problem. However, the size of these non-linear problems remains moderate and no particular difficulty has been found in their solution.
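The whole procedure (enrichment by alternating 1D problems, then retention of the converged mode) can be sketched on the simplest instance: a 2D Poisson problem −ΔT = g on ]0,1[² with T = 0 on the boundary and a separable source. All discretization choices below (finite differences instead of finite elements, tolerances, number of enrichment attempts) are illustrative simplifications of the scheme described in the text:

```python
import numpy as np

# Enrichment for -Laplacian(T) = g on ]0,1[^2, T = 0 on the boundary, with a
# separable source g(x,y) = g1(x) g2(y).  Each new mode R(x) S(y) is obtained
# by alternating between two 1D problems; everything reduces to 1D vectors.
n = 61
x, h = np.linspace(0.0, 1.0, n, retstep=True)
xi = x[1:-1]; m = n - 2                         # interior nodes
A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h**2      # FD operator for -d2/dx2

g1 = 2.0 * np.pi**2 * np.sin(np.pi * xi)        # chosen so the exact solution
g2 = np.sin(np.pi * xi)                         # is T = sin(pi x) sin(pi y)

def ip(u, v):                                   # integral of u*v (trapezoid;
    return h * (u @ v)                          # boundary values are zero)

X, Y = [], []                                   # modes retained so far
for mode in range(3):                           # enrichment loop
    R = np.sin(np.pi * xi); S = R.copy()        # nonzero starting guess
    for _ in range(100):                        # alternating-direction loop
        # 1D problem in x, S frozen: (int S^2) A R + (int S'^2) R = rhs
        rhs = ip(g2, S) * g1
        for Xk, Yk in zip(X, Y):
            rhs -= ip(Yk, S) * (A @ Xk) + h * (Yk @ (A @ S)) * Xk
        R_new = np.linalg.solve(ip(S, S) * A + h * (S @ (A @ S)) * np.eye(m), rhs)
        # 1D problem in y, R frozen (the roles of x and y are symmetric)
        rhs = ip(g1, R_new) * g2
        for Xk, Yk in zip(X, Y):
            rhs -= ip(Xk, R_new) * (A @ Yk) + h * (Xk @ (A @ R_new)) * Yk
        S_new = np.linalg.solve(ip(R_new, R_new) * A
                                + h * (R_new @ (A @ R_new)) * np.eye(m), rhs)
        done = np.linalg.norm(R_new - R) + np.linalg.norm(S_new - S) < 1e-10
        R, S = R_new, S_new
        if done:
            break
    if np.linalg.norm(np.outer(R, S)) > 1e-10:  # discard noise-level modes
        X.append(R); Y.append(S)

T = sum(np.outer(Xk, Yk) for Xk, Yk in zip(X, Y))
T_exact = np.outer(g2, g2)                      # sin(pi x) sin(pi y) on the grid
print(np.abs(T - T_exact).max())                # only FD discretization error remains
```

For this particular source the first mode already captures the discrete solution, so the remaining enrichment attempts are discarded as noise; a general g(x, y) would retain several modes.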
The technique just proposed exhibits fourth-order convergence when the Lebesgue norm is considered and the nodes are used as integration points. Its extension to transient problems is simple and direct [2], as is its application to non-linear models [4]. We come back to these properties in the next section.
As previously indicated, usual homogenization techniques require the solution of three problems with prescribed boundary conditions, which after an appropriate change of variable take the form of Eq. (15), whose second member takes the role of function g(x) in the previous discussion.
However, in order to extend this procedure to more general boundary conditions, another possibility, under consideration at present, lies in the use of Lagrange multipliers or penalization techniques. The first one could seem more elegant, but it introduces a new unknown field (the Lagrange multiplier) that must be expressed in a separated representation and then introduced into the resulting mixed variational formulation. In this case we must also address the question of the LBB stability condition, which could imply different approximations for the different fields. To avoid, or at least postpone, the analysis of this issue, the simplest and most natural way to enforce general initial or boundary conditions is a penalization technique, which, in the case of Eq. (2) with a vanishing source term, results in a penalized formulation.

This problem is solved in different multidimensional spaces considering the separated representation and an increasing number of nodes for the functional approximation in each dimension (for the sake of simplicity we consider that all the one-dimensional functional approximations are defined from the same number of nodes nn).
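The penalization idea can be sketched in 1D: instead of eliminating the Dirichlet row, a large coefficient is added to the discrete operator so that the boundary value is enforced approximately, keeping the formulation unconstrained (the penalty coefficient below is an illustrative choice):

```python
import numpy as np

# Penalized Dirichlet condition on a 1D steady conduction problem:
# -T'' = 0, with T(1) = 0 imposed strongly and T(0) = Tg imposed by penalty.
n = 101
x, h = np.linspace(0.0, 1.0, n, retstep=True)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.zeros(n)

Tg, penalty = 1.0, 1.0e8           # prescribed value, penalty coefficient
A[0, 0] += penalty                 # penalty term instead of eliminating the row
b[0] += penalty * Tg
A[-1, :] = 0.0; A[-1, -1] = 1.0    # strong Dirichlet at the other end
b[-1] = 0.0

T = np.linalg.solve(A, b)
print(abs(T[0] - Tg))              # constraint violation, shrinks as penalty grows
```

The constraint violation decreases as the penalty coefficient grows, at the price of a worsening condition number; this trade-off is what makes the Lagrange multiplier route attractive despite the LBB question.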
The error E is quantified from the norm defined in Eq. (19). Figure 1 depicts the error versus the size of the one-dimensional space discretization, which in this example results in h = 2π/(nn − 1), for different multidimensional spaces. A convergence rate of 4 (slope in the log-log representation) is noticed, twice the quadratic rate characteristic of piecewise linear approximations.
Although at present the code is not optimized, we report in Figure 2 the CPU time associated with each solution. We can notice that even in 100 dimensions, using 1000 degrees of freedom in each dimension (note that the size of the resulting model is 10^5 when the separated representation is used, instead of the 10^300 required in the framework of the finite element or any other grid-based method), the CPU time remains of the order of 1000 seconds. A slope of two is noticed that seems independent of the space dimension: the computing time increases quadratically with the number of nodes used for the one-dimensional functional approximations.

CONCLUSION
This paper explores the ability of separated representations to perform high resolution computations. We have shown that the use of separated representations allows a major reduction of the CPU time. At present the procedure is limited to the linear case, but its extension to more complex geometries with general boundary conditions, as well as to the homogenization of non-linear behaviours, constitutes work in progress.