Robust Model Calibration Using Deterministic and Stochastic Performance Metrics

The aeronautics industry has benefited from the use of numerical models to supplement or replace the costly design-build-test paradigm. These models are often calibrated using experimental data to obtain optimal fidelity-to-data, but compensating effects between calibration parameters can complicate the model selection process due to the non-uniqueness of the solution. One way to reduce this ambiguity is to include a robustness requirement in the selection criteria. In this study, info-gap decision theory is used to represent the lack of knowledge resulting from compensating effects, and a robustness analysis is performed to investigate the impact of uncertainty on both deterministic and stochastic fidelity metrics. The proposed methodology is illustrated on an academic example representing the dynamic response of a composite turbine blade.


Introduction
Ceramic matrix composite (CMC) turbine blades have been developed by HERAKLES [1]. They consist of ceramic fibers embedded in a ceramic matrix. These textile composites exhibit high resistance to extremely high temperatures (1000 °C), low density, and good fracture toughness compared to conventional metallic alloys. Finite element models (FEM) applied to the design of composite parts are under investigation in order to obtain predictive simulations.
Model calibration is commonly used to improve the correlation between FEM and measured data. Typically, fidelity-to-data is optimized in such approaches, even though compensating effects between model parameters may lead to subspaces of fidelity-equivalent solutions [2]. These effects tend to hide missing physics in the model form and create ambiguity in the model selection process. A model is considered predictive if it remains in a subdomain limited by a satisfying boundary at the required performance level. A satisfying boundary delimits a family of plausible models displaying the same comparable measure of fidelity [3]. A recent paper proposed to complete a deterministic fidelity error metric with a robustness criterion quantifying the impact of compensating effects on the fidelity-to-data [3]. This paper extends that approach to illustrate that it remains relevant for stochastic error metrics as well. Info-gap decision theory is used to provide a formal framework for investigating the impact of unknown compensating effects on both deterministic and stochastic fidelity metrics. A simplified model of a turbine blade is studied to illustrate the proposed methodology using simulated test results.

Fidelity-to-Data Metrics for Model Calibration
The fidelity of simulated outcomes to experimentally identified outcomes can be quantified with both deterministic and stochastic metrics. For example, a common deterministic metric is the normed Euclidean distance D_E:

    D_E = || v_m - v_a ||,

where v_m is a vector containing the experimentally identified outcomes (n outputs) and v_a the homologous vector containing the analytical outcomes. When population data are available for simulations and/or experiments, stochastic metrics can be implemented. In this case, the model provides uncertain outputs and can be calibrated using stochastic approaches such as covariance adjustment [4], Gibbs sampling [5], or the Metropolis-Hastings algorithm [6]. In this paper, the Bhattacharyya distance D_B is used to quantify the fidelity-to-data for multivariate features [7], but other metrics could be used as well. The Bhattacharyya metric takes into account the differences between both the means and the covariances of the simulated and experimental distributions:

    D_B = (1/8) (v̄_m - v̄_a)^T Σ^{-1} (v̄_m - v̄_a) + (1/2) ln( det Σ / sqrt(det Σ_a · det Σ_m) ),

where v̄_m is the vector containing the mean values of the measured outcomes and v̄_a the mean values of the simulated outcomes. The pooled matrix Σ = (Σ_a + Σ_m)/2 combines the covariance matrices of the simulated and experimental outcomes.
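As a concrete illustration, both metrics can be sketched in a few lines of Python with NumPy. The function names are our own, the Euclidean distance is taken as the plain vector norm of the residual, and the Bhattacharyya expression assumes Gaussian distributions, consistent with the mean/covariance formulation above; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def euclidean_fidelity(v_m, v_a):
    """Euclidean distance between measured (v_m) and analytical (v_a) outcomes."""
    v_m, v_a = np.asarray(v_m, float), np.asarray(v_a, float)
    return float(np.linalg.norm(v_m - v_a))

def bhattacharyya_fidelity(mean_m, cov_m, mean_a, cov_a):
    """Bhattacharyya distance between two multivariate Gaussian distributions,
    using the pooled covariance Sigma = (Sigma_a + Sigma_m) / 2."""
    d = np.asarray(mean_m, float) - np.asarray(mean_a, float)
    cov_p = 0.5 * (np.asarray(cov_m, float) + np.asarray(cov_a, float))
    # Mean term: (1/8) * d^T Sigma^{-1} d
    term_mean = 0.125 * d @ np.linalg.solve(cov_p, d)
    # Covariance term via log-determinants for numerical stability
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_m = np.linalg.slogdet(np.asarray(cov_m, float))
    _, logdet_a = np.linalg.slogdet(np.asarray(cov_a, float))
    term_cov = 0.5 * (logdet_p - 0.5 * (logdet_m + logdet_a))
    return float(term_mean + term_cov)
```

Note that the Bhattacharyya distance vanishes when the two distributions coincide and grows with both mean offsets and covariance mismatch, which is what makes it attractive as a stochastic fidelity metric.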

Info-Gap Theory
Info-gap decision theory, developed by Ben-Haim [8], has been used to study a wide range of problems in various disciplines including climate models [9] and medical research [10]. The purpose of info-gap is to provide tools for decision-makers in order to assess risks and opportunities of model-based decisions under a severe lack of information. The reader is referred to [8] for more detailed information.
In the info-gap approach, a horizon of uncertainty α quantifies the difference between what is known and what needs to be known to make an informed decision. When considering fidelity-to-data R, the worst-case fidelity R̂, for a given horizon of uncertainty α_i, is evaluated by solving the following optimization problem [3]:

    R̂(α_i) = max_{u ∈ U(α_i, ũ)} R(u),

where ũ is the vector of calibrated best-estimate parameters of the simulation. The robustness function expresses the greatest level of uncertainty at which performance remains acceptable:

    α̂(R_c) = max { α : R̂(α) ≤ R_c },

where α̂ is the maximum horizon for which the performance requirement R ≤ R_c is satisfied. The notional diagram in Fig. 18.1 illustrates how the robustness curve is obtained. Let u_1 and u_2 be the calibration parameters. The point α_A = 0 corresponds to the nominal model with no uncertainty and R(u_1A, u_2A) to the corresponding performance. As the horizon of uncertainty increases to α_B, the worst-case performance over the uncertain domain is given by R(u_1B, u_2B), and so on. In practice, the robustness curve is obtained by solving an optimization problem with an appropriate algorithm based on either local or global approaches. For example, in [11], the authors search the space using a factorial design.
(Fig. 18.1: notional robustness curve, plotting the horizon of uncertainty against worst-case performance.)
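The nested info-gap optimizations can be sketched with a simple grid search, in the spirit of the factorial-design search used in [11]. The hyper-rectangular shape of the uncertainty set U(α, ũ), the grid resolution, and the function names are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def worst_case_fidelity(R, u_nominal, alpha, n_grid=11):
    """Inner problem: worst (largest) fidelity error R over a hyper-rectangular
    uncertainty set of half-width alpha centered on the nominal parameters."""
    u_nominal = np.asarray(u_nominal, float)
    axes = [np.linspace(u - alpha, u + alpha, n_grid) for u in u_nominal]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    grid = grid.reshape(-1, len(u_nominal))
    return max(R(u) for u in grid)

def robustness(R, u_nominal, R_c, alphas):
    """Outer problem: largest horizon alpha whose worst-case error stays
    at or below the required performance level R_c."""
    alpha_hat = 0.0
    for a in sorted(alphas):
        if worst_case_fidelity(R, u_nominal, a) <= R_c:
            alpha_hat = a
        else:
            break  # worst-case error is monotone in alpha, so we can stop
    return alpha_hat
```

For a convex error function the worst case sits on the boundary of the uncertainty set, so a coarse grid already locates it well; in general, a global optimizer would replace the grid search.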

Numerical Application
Ceramic composite textiles are made of sets of SiC tows sewn into a pattern, as shown in Fig. 18.2. A ceramic matrix is infiltrated into the fibrous preform to enhance the mechanical properties. Composite textiles can be considered periodic mesoscopic structures characterized by their smallest elementary pattern, called the representative unit cell. A mesoscopic analysis allows the homogenized properties of the composite [12] to be computed. The mesoscopic structure of the studied material has been probed using an X-ray micro-tomograph, and its behavior characterized through multi-instrumented experiments [13, 14].
These properties are then implemented in a numerical model to simulate the structural responses. It is not uncommon for the nominal test-analysis error to be unacceptably large. Given the uncertainties in the manufacturing process and experiments, stochastic calibration approaches are best adapted to improve model fidelity. Model form error (MFE), due to poorly modeled boundary conditions, geometry, and material properties, is also an important source of discrepancy between simulated and experimental outcomes and remains a challenge in the calibration process.
The purpose of this section is to illustrate the impact of compensating effects and model form error on model selection with both optimal and robust strategies. A simple two-parameter example is studied, representing a free-free thin CMC plate with orthotropic properties. The calibration parameters are the Young's moduli E_1 and E_2, while the other parameters are assumed to be known. The experimental data are simulated based on the nominal model parameters. The fidelity error metric is formulated with respect to the eigenfrequencies of the structure. The deterministic and stochastic calibration metrics for the first ten eigenfrequencies are calculated over the domain [0.6, 1.4], discretized into 21 levels, corresponding to the correction coefficients for the parameters E_1 and E_2. The center point E_1 = E_2 = 1 thus defines the nominal design used to simulate the tests. In order to reduce calculation costs for the population-based fidelity metric, a neural network surrogate model is constructed from a preliminary full factorial design. The Bhattacharyya metric is then evaluated at each design point based on 10^5 Monte Carlo samples.
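A minimal sketch of this population-based evaluation follows. The closed-form surrogate_freqs function is a hypothetical stand-in for the neural-network surrogate of the FE model, and the 2% input scatter is an assumed value; only the 21-level grid over [0.6, 1.4] and the Monte Carlo estimation of output statistics mirror the setup described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_freqs(e1, e2, n_modes=10):
    """Hypothetical surrogate: each eigenfrequency scales with the square root
    of a mode-dependent mix of the two stiffness correction coefficients."""
    weights = np.linspace(0.2, 0.8, n_modes)
    base = np.linspace(100.0, 1000.0, n_modes)  # assumed nominal frequencies (Hz)
    return base * np.sqrt(weights * e1 + (1.0 - weights) * e2)

def sampled_stats(e1, e2, n_samples=10_000, scatter=0.02):
    """Propagate assumed 2% input scatter through the surrogate and estimate
    the output mean vector and covariance matrix from Monte Carlo samples."""
    e1_s = rng.normal(e1, scatter * e1, n_samples)
    e2_s = rng.normal(e2, scatter * e2, n_samples)
    freqs = np.array([surrogate_freqs(a, b) for a, b in zip(e1_s, e2_s)])
    return freqs.mean(axis=0), np.cov(freqs, rowvar=False)

# 21-level full factorial grid of correction coefficients over [0.6, 1.4]
levels = np.linspace(0.6, 1.4, 21)
```

At each of the 21 × 21 design points, the estimated mean and covariance would then feed the Bhattacharyya metric against the experimental statistics, which is affordable only because the surrogate replaces the full FE solve.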

Test Case Without Model Form Error
In this first case, we assume that the numerical eigenfrequencies are obtained with a perfect model, that is to say, without any model form error. The fidelity metrics are plotted in the space of the two parameters (Fig. 18.3). The contours indicate fidelity-equivalent solutions and define the satisfying boundaries. The square marker designates the global minimum error and represents the optimal solution. As expected, this marker coincides exactly with the design point used to generate the experimental data for both distance metrics. In Fig. 18.3a, there is a large area where the error remains below 5%, illustrating that calibrated solutions can be very different for equivalent levels of fidelity. These compensating effects are inevitable even in the absence of bias in the model prediction. In Fig. 18.3b, the isocontours form nested contours centered on [1, 1]. The reference outputs at E_1 = E_2 = 1 and selected numerical results are also tabulated.

Test Case with Model Form Error
In this case, we assume that the model contains a model form error and is unable to perfectly simulate the observed physics. In this academic example, the shear modulus G_12 is overestimated by about 20%. The experimental eigenfrequencies remain the same as before. Figure 18.4 displays the new satisfying boundary maps.
The shape of the isocontours for the Euclidean metric is similar to the previous case (Fig. 18.3a), with a noticeable shift toward the upper left of the domain. Thus the optimal parameters are no longer located at (1, 1), which now yields a 2.9% error, but at (0.72, 1.12) with 1.5%. The Bhattacharyya metric displays more complex isocontour lines and a shifted optimal solution as well. The implication of this shift is that any calibration algorithm based uniquely on optimizing the fidelity will invariably converge to the wrong solution. Although there is clearly no simple solution to this problem, it is possible in some cases to attenuate the impact of compensating effects on model fidelity.

Robustness to Compensating Effects
Given the inevitable presence of compensating effects between calibration parameters, with and without model form error, the approach proposed in [3] has been adopted here with the objective of demonstrating that it remains relevant for both deterministic and stochastic fidelity metrics. In this context, the info-gap uncertainty represents the lack of knowledge in the parameter space due to compensating effects, and the goal is to find the most robust sets of parameters (E_1, E_2) ensuring an acceptable level of fidelity.
With respect to the previous test case without model form error, the corresponding robust fidelity metric is shown in Fig. 18.5 for a horizon of uncertainty α = 10%. These contours are obtained by evaluating, for each design point (E_1^i, E_2^i), the worst-case fidelity within the uncertain subdomain, a square of sides 0.1 × 0.1 centered about the point. The contours have not changed significantly in shape, but the overall errors have of course increased. The round marker indicates the best robust calibrated solution, which is different from the earlier optimal solution. These contours clearly depend on the horizon of uncertainty, and increasing α will necessarily lead to larger fidelity errors and possibly different robust solutions.
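The construction of such a robust fidelity map can be sketched as follows. The quadratic toy error R_toy is purely illustrative (it stands in for the Euclidean or Bhattacharyya metric evaluated through the model), while the square uncertainty subdomain of sides α × α centered on each design point follows the description above.

```python
import numpy as np

def robust_fidelity_map(R, levels, alpha, n_sub=5):
    """Worst-case (info-gap) fidelity error over a square subdomain of sides
    alpha x alpha centered on each design point of a 2-D grid."""
    offsets = np.linspace(-alpha / 2.0, alpha / 2.0, n_sub)
    worst = np.empty((len(levels), len(levels)))
    for i, e1 in enumerate(levels):
        for j, e2 in enumerate(levels):
            worst[i, j] = max(R(e1 + du, e2 + dv)
                              for du in offsets for dv in offsets)
    return worst

# Illustrative (assumed) fidelity error: quadratic bowl centered on the nominal design
R_toy = lambda e1, e2: (e1 - 1.0) ** 2 + (e2 - 1.0) ** 2

levels = np.linspace(0.6, 1.4, 21)
robust_map = robust_fidelity_map(R_toy, levels, alpha=0.1)
# Robust-optimal design point: minimizer of the worst-case error
i_star, j_star = np.unravel_index(np.argmin(robust_map), robust_map.shape)
```

The minimizer of robust_map is the round marker of the figures: the design whose worst-case error over its uncertainty square is smallest, which in general differs from the fidelity-optimal design.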
The robust fidelity isocontours for the second test case, containing model form error, are shown in Fig. 18.6, with markers for the reference solution (cross), the optimal solution (square), and the robust-optimal solution (circle). A summary of these solutions is tabulated for comparison. In this example, these solutions are clearly different, and although it is generally impossible to know the true solution, it is possible to improve the robustness of the solution to compensating effects for both deterministic and stochastic fidelity metrics.

Conclusion
Model calibration based uniquely on fidelity-to-data metrics falls prey to compensating effects between model parameters, due to the presence of model form errors and subspaces of fidelity-equivalent solutions. This paper extends earlier work that includes a robustness criterion in the model selection process and illustrates the relevance of such an approach for both deterministic and stochastic metrics. An academic test case representing a composite turbine blade is studied in a notional way with simulated test data. Two calibration metrics have been investigated, a deterministic Euclidean error and a statistical Bhattacharyya error, and both are seen to possess fidelity-optimal and robust-optimal solutions that are distinct from one another.