Review on Discretization Techniques for Complex Fluid Flow Models: Past, Present and Future

In the last decades several new and advanced numerical strategies have been proposed for solving the flow models of complex fluids. Most of them were based on the classical discretization techniques (finite elements, finite volumes, finite differences, spectral methods, meshless approaches, ...) applied to the macroscopic descriptions of such flows (differential and integral models), where special advances were introduced to account for the mixed character of the associated variational formulations as well as to stabilize the advection terms in the motion and constitutive equations. Recently, micro-macro approaches have been applied more and more: they allow closure relations to be avoided and describe the microscopic physics better. These models are based on kinetic theory, and their main difficulty is the curse of dimensionality: the microstructure conformation is defined in a multidimensional space where standard discretization techniques fail. To overcome this difficulty, stochastic techniques (inspired by Monte Carlo methods) were introduced, but the control of the statistical noise and the low convergence order are among their main drawbacks. Other strategies have been proposed recently, for example those based on sparse grids and on separated representations, which allow the aforementioned difficulties to be circumvented. Meanwhile, models are increasingly focused on the microscopic scale, where they are formulated in terms of Brownian or molecular dynamics. They describe the molecular dynamics very precisely, but the computing time remains their main drawback. Thus, in the next years new efforts must be devoted to reducing the computing time involved in microscopic simulations and to defining bridges between the different description scales.

A. Ammar, F. Chinesta, E. Cueto and T. Phillips

BRIEF REVIEW ON COMPLEX FLUIDS MODELLING
Many natural and synthetic fluids are viscoelastic materials, in the sense that the stress endured by a macroscopic fluid element depends upon the history of the deformation experienced by that element. Notable examples include polymer solutions and melts, liquid crystalline polymers and fibre suspensions. Rheologists thus face a challenging non-linear coupling between flow-induced evolution of molecular configurations, macroscopic rheological response, flow parameters (such as the geometry and boundary conditions) and final properties. Theoretical modelling and methods of computational rheology have an important role to play in elucidating this coupling [17].
Atomistic modelling is the most detailed level of description that can be applied today in rheological studies, using techniques of non-equilibrium molecular dynamics. Such calculations require enormous computer resources, and they are therefore currently limited to flow geometries of molecular dimensions. Consideration of the macroscopic flows found in processing applications calls for less detailed mesoscopic models, such as those of kinetic theory.
Models of kinetic theory provide a coarse-grained description of molecular configurations wherein atomistic processes are ignored. They are meant to display, in a more or less accurate fashion, the important features that govern the flow-induced evolution of configurations. Over the last few years, different models related to dilute polymers have been evaluated in simple flows by means of stochastic simulation or Brownian dynamics methods. In recent years, the kinetic theory of entangled systems, such as concentrated polymer solutions and polymer melts, has seen major developments that go well beyond the classical reptation tube model developed by Edwards, de Gennes and Doi.
Kinetic theory models can be very complicated mathematical objects. It is usually not easy to compute their rheological response in rheometric flows, and their use in numerical simulations of complex flows has long been thought impossible. The traditional approach has been to derive from a particular kinetic theory model a macroscopic constitutive equation that relates the viscoelastic stress to the deformation history. One then solves the constitutive model together with the conservation laws using a suitable numerical method, to predict velocity and stress fields in complex flows. The majority of constitutive equations used in continuum numerical simulations are indeed derived from (or at least very much inspired by) kinetic theory.
Indeed, the derivation of a constitutive equation from a model of kinetic theory usually involves closure approximations of a purely mathematical nature, such as decoupling or pre-averaging. It is now widely accepted that closure approximations have a significant impact on rheological predictions for dilute polymer solutions or fibre suspensions.
In this context, micro-macro methods of computational rheology that couple the coarse-grained molecular scale of kinetic theory to the macroscopic scale of continuum mechanics have an important role to play. This approach is much more demanding in computer resources than more conventional continuum simulations that integrate a constitutive equation to evaluate the viscoelastic contribution of the stress tensor. On the other hand, micro-macro techniques allow the direct use of kinetic theory models and thus avoid the introduction of closure approximations.
Since the early 1990s the field has developed considerably following the introduction of the CONNFFESSIT method by Öttinger and Laso [16]. Being relatively new, micro-macro techniques have been implemented only for models of kinetic theory with few configurational degrees of freedom, such as non-linear dumbbell models of dilute polymer solutions and single-segment tube models of linear entangled polymers.
Kinetic theory provides two basic building blocks: the diffusion or Fokker-Planck equation that governs the evolution of the distribution function (giving the probability distribution of configurations) and an expression relating the viscoelastic stress to the distribution function. The Fokker-Planck equation has the general form:

$$\frac{d\psi}{dt} = -\frac{\partial}{\partial \boldsymbol{X}} \cdot (\boldsymbol{A}\,\psi) + \frac{1}{2}\,\frac{\partial}{\partial \boldsymbol{X}}\frac{\partial}{\partial \boldsymbol{X}} : (\boldsymbol{D}\,\psi) \qquad (1)$$

where $\frac{d\psi}{dt}$ is the material derivative and the vector $\boldsymbol{X}$, of dimension $N$, defines the coarse-grained configuration. The factor $\boldsymbol{A}$ is an $N$-dimensional vector that defines the drift or deterministic component of the molecular model. Finally, $\boldsymbol{D}$ is a symmetric, positive definite $N \times N$ matrix that embodies the diffusive or stochastic component of the molecular model. In general both $\boldsymbol{A}$ and $\boldsymbol{D}$ (and in consequence the distribution function $\psi$) depend on the physical coordinates $\boldsymbol{x}$, on the configuration coordinates $\boldsymbol{X}$ and on the time $t$.
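To make the structure of (1) concrete, here is a minimal sketch: a one-dimensional configuration space ($N = 1$) with the illustrative choices $A(X) = -X$ and $D = 1$ (an Ornstein-Uhlenbeck model, not any particular polymer model), discretized with an explicit conservative finite-difference scheme; grid sizes and time step are arbitrary illustrative values.

```python
# 1-D Fokker-Planck sketch: d(psi)/dt = -d(A psi)/dX + (D/2) d2(psi)/dX2,
# with the hypothetical drift A(X) = -X and diffusion D = 1.
n, L = 51, 3.0                       # grid points, half-width of config. domain
dx = 2 * L / (n - 1)
x = [-L + i * dx for i in range(n)]
D, dt, steps = 1.0, 0.002, 2500      # dt below the stability limit dx**2/(2 D)

psi = [1.0 / (n * dx)] * n           # uniform start, sum(psi)*dx = 1

def flux(i):
    """Flux at the midpoint between cells i and i+1."""
    xm = 0.5 * (x[i] + x[i + 1])
    pm = 0.5 * (psi[i] + psi[i + 1])
    return (-xm) * pm - 0.5 * D * (psi[i + 1] - psi[i]) / dx

for _ in range(steps):
    F = [flux(i) for i in range(n - 1)]
    new = psi[:]
    for i in range(n):
        fl = F[i - 1] if i > 0 else 0.0      # zero-flux left boundary
        fr = F[i] if i < n - 1 else 0.0      # zero-flux right boundary
        new[i] = psi[i] - dt * (fr - fl) / dx
    psi = new

mass = sum(psi) * dx                 # conserved by the flux form
var = sum(p * xi ** 2 for p, xi in zip(psi, x)) * dx
# the steady state is proportional to exp(-X**2), so var tends to D/2 = 0.5
```

The conservative (flux) form keeps the distribution normalised exactly, which is one practical advantage of deterministic Fokker-Planck solvers over stochastic sampling.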
The second building block of a kinetic theory model is an expression relating the distribution function and the stress. It takes the form:

$$\boldsymbol{\tau}_p = \int_C \boldsymbol{g}(\boldsymbol{X})\, \psi \, d\boldsymbol{X} \qquad (2)$$

where $C$ represents the configuration space and $\boldsymbol{g}(\cdot)$ is a model-dependent tensorial function of configuration. In a complex flow, the velocity field is a priori unknown and the stress fields are coupled through the conservation laws. In the isothermal and incompressible case the conservation of mass and the momentum balance are then expressed (neglecting the body forces) by:

$$\nabla \cdot \boldsymbol{v} = 0, \qquad \rho \frac{d\boldsymbol{v}}{dt} = -\nabla p + \nabla \cdot (\boldsymbol{\tau}_p + 2\eta_s \boldsymbol{d}) \qquad (3)$$

where $\rho$ is the fluid density, $p$ the pressure and $2\eta_s \boldsymbol{d}$ a purely viscous component ($\eta_s$ being a solvent viscosity and $\boldsymbol{d}$ the strain rate tensor). The set of coupled equations (1)-(3), supplemented with suitable initial and boundary conditions in both physical and configuration spaces, is the generic multiscale formulation. Three basic approaches have been adopted for exploiting the generic multiscale model:

1. The continuum approach, wherein a constitutive equation of continuum mechanics that relates the viscoelastic stress to the deformation history is derived from, and replaces altogether, the kinetic theory model (1) and (2). The derivation process usually involves closure approximations. The resulting constitutive model takes the form of a differential, integral or integro-differential equation. It yields molecular information in terms of averaged quantities, such as the second moment of the distribution, $\int_C \boldsymbol{X} \otimes \boldsymbol{X}\, \psi \, d\boldsymbol{X}$.

2. The Fokker-Planck approach, wherein one solves the generic problem (1)-(3) as such, in both configuration and physical spaces. The distribution function is thus computed explicitly as a solution of the Fokker-Planck equation (1). The viscoelastic stress is computed from (2).
3. The stochastic approach, which draws on the mathematical equivalence between the Fokker-Planck equation (1) and the following Ito stochastic differential equation:

$$d\boldsymbol{X} = \boldsymbol{A}\, dt + \boldsymbol{B}\, d\boldsymbol{W} \qquad (4)$$

where $\boldsymbol{D} = \boldsymbol{B}\boldsymbol{B}^T$ and $\boldsymbol{W}$ is a Wiener stochastic process of dimension $N$. In a complex flow, the stochastic differential equation (4) applies along individual flow trajectories; the time derivative is thus a material derivative. Instead of solving the deterministic Fokker-Planck equation (1), one solves the associated stochastic differential equation (4) for a large ensemble of realizations of the stochastic process $\boldsymbol{X}$ by means of a suitable numerical technique. The distribution function is not computed explicitly, and the viscoelastic stress (2) is readily obtained as an ensemble average.
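The stochastic approach can be sketched for the same illustrative one-dimensional model as above ($A(X) = -X$, $B = 1$, hence $D = 1$): an explicit Euler-Maruyama discretization of (4) over an ensemble of realizations, with a second moment recovered as an ensemble average in the spirit of (2). Ensemble size and time step are arbitrary illustrative choices.

```python
import math
import random

# Euler-Maruyama integration of dX = A dt + B dW (equation (4)) for the
# hypothetical 1-D model A(X) = -X, B = 1, over an ensemble of M realizations.
random.seed(0)
M, dt, steps = 5000, 0.01, 500       # ensemble size, time step, t_end = 5
sq = math.sqrt(dt)                   # scale of the Wiener increment
X = [0.0] * M                        # all realizations start at X = 0

for _ in range(steps):
    X = [xi - xi * dt + sq * random.gauss(0.0, 1.0) for xi in X]

# a stress-like second moment as an ensemble average, in the spirit of (2);
# the exact stationary value for this model is D/2 = 0.5
mean = sum(X) / M
tau = sum(xi * xi for xi in X) / M
```

With $M = 5000$ realizations the estimate still fluctuates at the percent level around 0.5, which is precisely the statistical-noise issue discussed below.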
For more details concerning the micro-macro approach the reader can refer to the excellent review paper [11] and the references therein.
The continuum approach has been adopted throughout the development of computational rheology. The first simulations were obtained in the early 1980s. Two decades later, macroscopic numerical techniques based upon the continuum approach remain under active development.
The control of the statistical noise is a major issue in micro-macro simulations based on the stochastic approach. Moreover, to reconstruct the distribution one needs to operate with an extremely large number of particles; however, in general, only the moments of the distribution are required, and these can be computed using a much smaller population of particles. Thus, some stochastic simulations of multi-bead-spring (MBS) models have been carried out successfully; see for example [22]. These problems do not arise at all in the Fokker-Planck approach. The difficulty, however, is that the Fokker-Planck equation (1) must be solved for the distribution function in both physical and configuration spaces. This necessitates a suitable discretization procedure for all relevant variables, namely position x, configuration X and time t. Until now, the dimensionality of the problem could be daunting, and consideration of molecular models with many configurational degrees of freedom did not appear feasible. This probably explains why relatively few studies based on the Fokker-Planck approach have appeared in the literature, until very recently at least. In [10] [14] the resolution of the Fokker-Planck equation involving a moderate number of dimensions is considered. Another deterministic particle approach, very close to that proposed in [9], was analyzed in [4] in the context of MBS models using smooth particles, but it was noticed that the impact of smoothing on the solution is in fact significant.
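The slow decay of the statistical noise can be illustrated numerically: the standard error of an ensemble average decreases only as $1/\sqrt{M}$ with the number $M$ of realizations, so each extra digit of accuracy costs a hundred times more particles. A small sketch with illustrative sizes:

```python
import random
import statistics

# Empirical check of the 1/sqrt(M) noise decay of ensemble averages:
# increasing the ensemble 16x should shrink the scatter roughly 4x.
random.seed(1)

def spread(M, trials=200):
    """Empirical std of the ensemble-average estimator over repeated trials."""
    means = [sum(random.gauss(0.0, 1.0) for _ in range(M)) / M
             for _ in range(trials)]
    return statistics.pstdev(means)

s_small = spread(100)
s_large = spread(1600)
ratio = s_small / s_large            # expected to be close to sqrt(16) = 4
```

This is exactly why deterministic Fokker-Planck solvers are attractive when only a few moments are needed but high accuracy is required.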
Kinetic theory models involving the Fokker-Planck equation can be accurately discretized using a mesh support (finite elements, finite differences, finite volumes, spectral techniques, ...). However, these techniques involve a high number of approximation functions. In the finite element framework, widely used in complex flow simulations, each approximation function (also known as a shape function) is related to a node that defines the associated degree of freedom. When the model involves high-dimensional spaces (including physical and conformation spaces and time), standard discretization techniques fail due to the excessive computation time required to perform accurate numerical simulations. To alleviate the computational effort some reduction strategies have been proposed, as is the case of the Partial Solution of the Cyclic Reduction (PSCR) [12] [19] [20], which solves linear systems by using techniques of separation of variables, partial solution and projection [1] [13] [23]. Despite the significant reduction of the computational effort, this technique can only be applied for solving particular partial differential equations, and it also fails in the multidimensional case.
Another appealing strategy that allows alleviating the computational effort is based on the use of reduced approximation bases within an adaptive procedure. The new approximation functions are defined in the whole domain and contain the most representative information of the problem solution. Thus, the number of degrees of freedom involved in the resolution of the Fokker-Planck equation is drastically reduced. The construction of these new approximation functions is done with an 'a priori' approach, which combines a basis reduction (using the Karhunen-Loeve decomposition) with a basis enrichment based on the use of some Krylov subspaces. This strategy has been successfully applied, in some of our former works, to solve kinetic theory models defined on the surface of the unit sphere (for simulating short fibre suspensions or liquid crystalline polymers) [18] as well as 3D models describing FENE molecular models [3].
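The basis-reduction half of such a strategy can be sketched as follows: assuming a matrix of solution snapshots is available, the Karhunen-Loeve (POD) basis is given by its dominant left singular vectors, and the fields are then represented with only a handful of global functions. The snapshot data below is synthetic and purely illustrative, and the Krylov enrichment stage of the actual adaptive algorithm is not shown.

```python
import numpy as np

# Karhunen-Loeve (POD) basis extraction from solution snapshots via the SVD.
# Illustrative data: snapshots built from two underlying modes plus tiny noise.
rng = np.random.default_rng(0)
n, m = 200, 40                                    # grid size, snapshot count
t = np.linspace(0.0, 1.0, m)
x = np.linspace(0.0, 1.0, n)
Q = (np.outer(np.sin(np.pi * x), 1.0 + t)
     + np.outer(np.sin(3.0 * np.pi * x), t ** 2)
     + 1e-6 * rng.standard_normal((n, m)))

U, s, _ = np.linalg.svd(Q, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 1.0 - 1e-8)) + 1  # modes kept (here r = 2)
B = U[:, :r]                                      # reduced global basis

# relative error of representing all snapshots in the reduced basis
err = np.linalg.norm(Q - B @ (B.T @ Q)) / np.linalg.norm(Q)
```

Two global functions reproduce forty 200-point snapshots almost exactly, which is the degree-of-freedom reduction the adaptive procedure exploits.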
An important drawback of such an approach is the fact that the approximation functions are defined in the whole domain and, until now, the simplest way to represent such a function has been to give its value at some points of the domain of interest, its value at any other point being defined by interpolation. Sometimes the treated model is highly multidimensional, and in this case the possibility of describing functions from their values at the nodes of a mesh or a grid in the domain of interest is simply prohibitive. Some attempts exist concerning the treatment of multidimensional problems. The interested reader can refer to [8] for a review of sparse grid methods involving sparse tensor product spaces; but despite its optimality, the interpolation is defined in the whole multidimensional domain, and consequently only problems defined in spaces of dimension of the order of tens can be treated [2]. In [7] multidimensional problems are revisited and deeply analyzed, and for this purpose new mathematical entities are introduced to be applied in the numerical treatment of such problems. In conclusion, the problems related to models defined in multidimensional spaces remain open, and new efforts must be devoted to reaching significant improvements in the next years.
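The storage argument can be made concrete: a full tensor-product grid with nn nodes per coordinate requires nn^N values, which becomes prohibitive already for moderate dimension N (the numbers below are illustrative):

```python
# Why node-based representation fails in high dimension: a full tensor grid
# with nn nodes per coordinate stores nn**N values.
nn = 100                              # nodes per conformation coordinate
for N in (1, 2, 3, 6, 10):
    nodes = nn ** N
    gb = nodes * 8 / 1e9              # memory as 8-byte floats, in GB
    print(f"N = {N:2d}: {nodes:.3e} nodes, {gb:.3e} GB")
```

Already at N = 10 the grid needs 10^20 values, far beyond any foreseeable memory, which motivates the separated representations discussed next.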
Some of the most widely used kinetic theory models defined from a Fokker-Planck equation can be expressed in a separated form. In this case the definition of tensor product approximation spaces works perfectly and allows the difficulties related to the multidimensional character of kinetic theory models to be circumvented, as proved in [5]. This technique consists of the use of a separated representation of the molecular conformation distribution which, introduced in the variational formulation of the Fokker-Planck equation, leads to an iteration procedure that involves at each step the resolution of a small-size non-linear problem. Thus, the number of degrees of freedom involved in the discretization of the Fokker-Planck equation can be reduced from $(n_n)^N$ (required in the usual grid/mesh-based strategies) to $n_n \times N$ ($n_n$ being the number of nodes involved in the discretization of each conformation coordinate and $N$ the dimension of the conformation space).
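A minimal sketch of such a separated representation, on a stand-in problem rather than a real kinetic theory model: a 2-D Poisson-like system with a Kronecker-product operator is approximated greedily as a sum of rank-one terms F_k ⊗ G_k, each term obtained by an alternating-directions iteration between small one-dimensional problems. The operator, sizes and iteration counts are illustrative assumptions.

```python
import numpy as np

# Greedy separated representation u ~ sum_k F_k (x) G_k for the Kronecker
# system (A1 kron I + I kron A1) u = fx kron fy; A1 is a 1-D Laplacian.
n = 20
h = 1.0 / (n + 1)
A1 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
fx, fy = np.ones(n), np.ones(n)
Fs, Gs = [], []

def solve_F(G):
    """1-D problem for F with G fixed (Galerkin projection of the residual)."""
    b = fx * (G @ fy) - sum((G @ Gk) * (A1 @ Fk) + (G @ (A1 @ Gk)) * Fk
                            for Fk, Gk in zip(Fs, Gs))
    return np.linalg.solve((G @ G) * A1 + (G @ (A1 @ G)) * np.eye(n), b)

def solve_G(F):
    """Symmetric 1-D problem for G with F fixed."""
    b = fy * (F @ fx) - sum((F @ Fk) * (A1 @ Gk) + (F @ (A1 @ Fk)) * Gk
                            for Fk, Gk in zip(Fs, Gs))
    return np.linalg.solve((F @ F) * A1 + (F @ (A1 @ F)) * np.eye(n), b)

for _ in range(5):                    # enrichment loop: add one rank-one term
    G = np.ones(n) / np.sqrt(n)
    for _ in range(8):                # alternating-directions fixed point
        F = solve_F(G)
        G = solve_G(F)
        G = G / np.linalg.norm(G)
    F = solve_F(G)                    # make the pair consistent with final G
    Fs.append(F)
    Gs.append(G)

u_sep = sum(np.outer(Fk, Gk) for Fk, Gk in zip(Fs, Gs))
u_ref = np.linalg.solve(np.kron(A1, np.eye(n)) + np.kron(np.eye(n), A1),
                        np.kron(fx, fy)).reshape(n, n)
rel_err = np.linalg.norm(u_sep - u_ref) / np.linalg.norm(u_ref)
```

Each enrichment only ever solves n-by-n systems, so the work scales with $n_n \times N$ rather than with the $(n_n)^N$ unknowns of the full grid, which is the point of the separated approach.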
In [5] we considered the steady state solution of some classes of multidimensional partial differential equations. The multidimensional transient Fokker-Planck equations could be considered within an incremental time discretization. However, since time is just another coordinate, one could expect a coupled space-time resolution. The main difficulties related to such an approach lie in the fact that, the initial condition being non-zero, the procedure proposed in [5] cannot be applied in a direct manner. Another difficulty lies in the fact that space approximations are built using standard piecewise functions, and when this kind of approximation is also retained to construct the time interpolation the known inconsistency related to centred differences of time derivatives is encountered. In [6] we propose alternatives to circumvent both difficulties, and thus to allow an efficient treatment of multidimensional transient kinetic theory models.
The novelty of this technique means that many questions are still open: (i) the treatment of non-linear Fokker-Planck equations; (ii) optimal basis enrichment; (iii) richer physical spaces; (iv) the analysis of complex flows involving a non-homogeneous solution in the physical space; (v) general initial conditions; (vi) the analysis of convergence; (vii) the stabilization of advection operators; ... as well as its coupling with efficient flow solvers, for example those based on meshless approaches [15] or fast lattice Boltzmann solvers.