# Combustion kinetic model uncertainty quantification, propagation and minimization (2022)

## Introduction

“Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.” [1]

G. E. P. Box and N. R. Draper (1987)

In complex problems many aspects of the system are not exactly known and may never be known exactly. Given the high nonlinearity and dimensionality of such systems, one must ask how it is possible to use models of them to explore the intricate nature of a phenomenon and make useful predictions. The approach to developing detailed kinetic models (mechanisms) of fuel combustion involves compiling a set of elementary reactions whose rate parameters may be determined from individual rate measurements, reaction-rate theory, or a combination of both. For large hydrocarbon fuels, many of the reaction pathways and rates must be based on extrapolation of knowledge of smaller-species reactions. These methods have uncertainties. Whether the collective uncertainties can be small enough to meet a given chemical accuracy and to satisfy the needs of a particular combustion simulation continues to be an open question. Of course, model uncertainties may also be the result of incomplete physics and missing reaction pathways. It has been shown, and will be reiterated in this review article, that even if the reaction model is complete, the underlying rate-coefficient uncertainty generally precludes the possibility of predicting relevant combustion properties of a fuel a priori [2].

Let us consider the rate coefficient of the quintessential reaction of combustion, H + O2 ↔ O + OH, with rate constant k1. According to the NIST chemical kinetics database [3], k1 has been examined in at least 77 independent experimental studies in the forward or reverse directions. The rate expressions have been reviewed and evaluated at least 31 times. A historical view of the uncertainty in k1, depicted in Fig. 1, is revealing both in terms of past achievements and future challenges. The tremendous advances in laser diagnostics and shock tube techniques in the late 1980s and early 1990s [14] brought significant improvement in the accuracy of k1. Today, the best experiments give a two-standard-deviation uncertainty in k1 of better than 15% over the temperature range of 1100 K–3370 K [12]. Among elementary reactions of combustion relevance, this precision is by far the highest achieved. It is truly an astonishing achievement considering that k1 itself spans two orders of magnitude over that temperature range. Yet, such precision may still not be enough for combustion simulations. For example, if a particular combustion response has a logarithmic sensitivity coefficient (see Section 1.2 for definition) of 0.2 with respect to k1, a value typical of the laminar flame speed, the uncertainty of k1 alone causes the predicted combustion response to be uncertain by ±3%. The rate coefficients of other reactions are, in general, less certain than k1, even though they generally do not impact combustion predictions as much as k1 does.
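The arithmetic behind this estimate can be sketched in a few lines; the sensitivity value of 0.2 and the factor-1.15 (15%) uncertainty are the figures quoted above, and the propagation rule follows directly from the definition of the logarithmic sensitivity coefficient (Section 1.2):

```python
import math

def prediction_uncertainty(log_sensitivity, rate_uncertainty_factor):
    """Propagate a rate-coefficient uncertainty factor f through a
    logarithmic sensitivity coefficient S: since d(log y) = S * d(log k),
    the response uncertainty factor is f**S."""
    return rate_uncertainty_factor ** log_sensitivity

# k1 known to within 15% (factor 1.15), flame-speed sensitivity ~0.2
f_y = prediction_uncertainty(0.2, 1.15)
print(f"response uncertainty: +/-{(f_y - 1) * 100:.1f}%")
```

A factor of 1.15 raised to the power 0.2 is about 1.028, i.e., roughly the ±3% quoted above.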

As a further example, Fig. 2 illustrates a model problem of ethylene oxidation in a perfectly stirred reactor at a pressure of 30 bar and a constant temperature of 1200 K, and in particular, the prediction uncertainties exhibited with a representative reaction kinetic model [16]. The figure presents a Monte Carlo sampling of the uncertainties of the rate coefficients of the reactions considered in the model and their impact on the concentrations of OH, H2, H2O, CO and CO2 as a function of the mean residence time. Figures in the left panel show the results obtained using uncertainty factors currently known for each reaction, whose values range from 1.15 for k1, 1.2 to 2 for 8% of the reactions, 2 and 3 each for one third of the reactions, and 5 to 10 for the remaining 20% of the reactions. The right panel of Fig. 2 shows the corresponding uncertainty assuming all rate coefficients to be hypothetically accurate to within 15%. Two observations can be made from these plots. First, the current uncertainties are still too large for the model to be predictive. For example, in the hysteresis region, the combined uncertainty in the rate parameters produces a prediction uncertainty of more than an order of magnitude in residence time and two orders of magnitude in concentration. Second, even if all rate parameters could be determined to within 15% uncertainty and all reaction pathways were accounted for, the uncertainty of the model prediction is still not negligible. As shown in the figure, the extinction and ignition times can still be uncertain by as much as a factor of 2. Of course, not every rate coefficient needs to have an uncertainty as small as 15%. The uncertainty in the simulation results usually comes from the uncertainty in the rate parameters of just a handful of reactions.
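The sampling procedure behind such plots can be illustrated with a deliberately minimal sketch. The two-step mechanism, rate values, and log-uniform reading of the uncertainty factor below are illustrative assumptions, not the ethylene model of Ref. [16]:

```python
import math
import random

def B_conc(k1, k2, t):
    """[B](t) for the sequential first-order system A -> B -> C with
    [A](0) = 1 (closed-form solution of the rate equations)."""
    return k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))

def sample_rate(k_nominal, f):
    """Draw k log-uniformly from [k/f, k*f], one common reading of a
    multiplicative uncertainty factor f."""
    return k_nominal * f ** random.uniform(-1.0, 1.0)

random.seed(0)
k1_nom, k2_nom, t = 1.0, 0.3, 2.0
samples = sorted(B_conc(sample_rate(k1_nom, 3.0), sample_rate(k2_nom, 3.0), t)
                 for _ in range(10_000))
lo, hi = samples[250], samples[9750]   # ~95% of the sampled predictions
print(f"nominal [B] = {B_conc(k1_nom, k2_nom, t):.3f}, 95% band [{lo:.3f}, {hi:.3f}]")
```

Repeating this at many residence times and collecting the bands produces uncertainty envelopes of the kind shown in Fig. 2, only with a full kinetic model in place of the closed-form toy solution.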
As will be discussed in further detail in Section 3, both observations underscore the key obstacles to combustion chemistry model development and the need for a proper consideration of model uncertainty, both now and in the future.

Uncertainty quantification (UQ) is the science of quantifying and minimizing uncertainties in computer experiments and simulations [17], [18]. It combines physical observations of the system, computational models, and expert opinions to make predictions about the system and, equally importantly, to determine the confidence of these predictions. As a mathematical science, UQ is rooted deeply in statistical mathematics. UQ methods, such as the Bayesian uncertainty approach [19], provide statistical alternatives to deterministic numerical integration.

One typical example in which UQ is essential is weather forecasting. Uncertainty is a fundamental property of weather and seasonal climate predictions. Today, no forecast is complete without a description of its uncertainty [20]. In other words, for such a complex phenomenon, we are forced to make only approximate yet still useful predictions because of the uncertainties intrinsic to the model and its numerical solutions. Although it is not the authors' view that the problem of weather forecasting is identical in nature to combustion, the two do share similarities: some weather-related prediction uncertainties stem from limitations in our knowledge of the physical processes, while others come from uncertainties in the parameters of the model and of the experiments with which we “calibrate” the model, a situation not entirely different from combustion chemistry.

Indeed, one of the key challenges of combustion chemistry lies in the handling and reduction of uncertainties in the reaction pathways and their rate parameters, and in developing an efficient approach to assessing the impact of these uncertainties on our ability to predict a combustion phenomenon. This review paper highlights recent advances in UQ applications in combustion chemistry analysis. It also outlines the challenges for future progress in combustion chemistry in the context of uncertainty quantification.

Combustion chemistry through numerical detailed kinetic modeling started from the work of Dixon–Lewis some fifty years ago [21], though studies that used the principles of elementary reaction mechanisms and kinetics to understand the chemistry of pyrolysis, oxidation and combustion started much earlier (e.g., Refs. [22], [23], [24], [25]). Since then, many kinetic models for the combustion of hydrocarbon fuels have been proposed (see, e.g., Refs.[26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48], [49], [50], [51], [52], [53], [54], [55], [56], [57], [58], [59], [60], [61], [62], [63], [64], [65], [66], [67], [68], [69], [70], [71], [72], [73], [74], [75], [76], [77], [78], [79], [80], [81], [82], [83], [84], [85], [86], [87], [88], [89], [90] for work published prior to 2000). The triumph and progress over that period of time have been reviewed from different perspectives (see, e.g., Refs. [91], [92], [93]). The success of the earlier work has encouraged a wide range of studies in more recent years into the combustion chemistry of large-molecular weight, liquid hydrocarbon fuels [94], [95], [96], [97], [98]. Meanwhile, the community also realized the need to revisit the foundational part of the combustion chemistry from time to time as new, more accurate understanding emerges. Examples include the recent efforts in the H2/CO reaction mechanism (e.g., Refs.[12], [99], [100], [101], [102], [103], [104], [105], [106], [107], [108], [109], [110], [111]). Such effort has also manifested in persistent efforts to update rate coefficients through comprehensive evaluations [7], [8], [10], [112], [113], [114], [115], [116], [117], [118].

A feature common to past efforts is the lack of a quantitative measure of the uncertainties underlying a kinetic model. A problem parallel to this is the “many-model” problem, or model proliferation [119]. In fact, these “many” models can be viewed as statistical samples of the underlying parameter uncertainty. Consider some of the reaction models of H2 oxidation published before a comprehensive set of laminar burning rates of hydrogen at high pressures became available [106]. The reaction models considered here [100], [101], [103], [104], [105], [120] share the same set of reactions and were all “calibrated” in one way or another against the laminar flame speed of hydrogen–air mixtures at atmospheric pressure. They differ only in the choices of rate coefficients. When compared to the laminar mass burning rate of hydrogen in an oxygen–helium mixture, however, Burke et al. [106], [121] noted that the predictions of the available reaction models diverge toward elevated pressures. As shown in Fig. 3, the predictions of these models in fact lie within the uncertainty band predicted for an arbitrary, nominal model (e.g., the trial model of Davis et al. [101]). Hence, the differences in the predictions do not imply any missing physics. Rather, they are the result of rate parameter uncertainties whose impact is amplified toward high pressures. While the problem just described is entirely expected, it is troublesome that a user is often left with the dilemma of what model to choose; unfortunately, the ultimate selection is often made on an ad hoc basis, because a criterion for choosing a model suitable for a particular application is unavailable.

Development of chemical kinetic models of combustion is not a linear process of reaction pathway and rate postulation followed by the treatment of a mathematical inverse problem. It is very clear that, during the early stage of kinetic mechanism studies, the underlying complexity requires the problem to be tackled piecewise under a degree of chemical isolation. Two important advances have become instrumental to combustion chemistry research: the use of electronic structure calculations and reaction rate theories [92], [122], [123], [124], and advances in laser diagnostics coupled with the shock tube technique [14]. Attempts were also made at addressing the comprehensiveness of a reaction model in terms of the range of thermodynamic states the model covers and the range of combustion phenomena it describes [125].

Given the advances just discussed, one has to ask whether it is possible to rely entirely on first-principles approaches combined with validation against highly accurate experiments to obtain a predictive kinetic model of fuel combustion. Currently, the best demonstrated uncertainty of shock tube experiments is around 15% for rate coefficients at high temperatures [12]. Ab initio theories (e.g., the coupled cluster CCSD(T) method) target a chemical accuracy of ±1 kcal/mol for species thermochemistry and reaction energy barriers (see, e.g., Refs. [126], [127]). This accuracy gives an expected uncertainty factor of 1.65, 1.4 and 1.3 in the rate coefficient at 1000 K, 1500 K and 2000 K, respectively. Combined with the inaccuracies in the vibrational frequencies and the treatment of internal rotations in transition state theory or Rice–Ramsperger–Kassel–Marcus theory/master equation modeling (see, e.g., Refs. [122], [124], [128]), we expect the total uncertainty in rate predictions to be no better than 50% above 1000 K (see Refs. [129], [130], [131] for some recent examples). As was shown in Fig. 2, even if the rate coefficients are all known to within 15%, the uncertainty in predictions of many practical combustion phenomena can still be considerable. In summary, model construction using ab initio theories and/or well-designed experiments allowing for submodel isolation can make notable progress, but by itself will not be enough to achieve truly predictive modeling.
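The quoted uncertainty-factor values follow from propagating a ±1 kcal/mol barrier uncertainty through the Arrhenius exponential, f = exp(ΔE/RT); a minimal check:

```python
import math

R_CAL = 1.987  # gas constant, cal/(mol*K)

def uncertainty_factor(dE_kcal, T):
    """Multiplicative uncertainty factor in k(T) implied by an
    energy-barrier uncertainty of +/- dE_kcal (Arrhenius form)."""
    return math.exp(dE_kcal * 1000.0 / (R_CAL * T))

for T in (1000.0, 1500.0, 2000.0):
    print(f"T = {T:.0f} K: f = {uncertainty_factor(1.0, T):.2f}")
```

The same relation explains why a barrier uncertainty that is benign at flame temperatures balloons when a rate expression is extrapolated to low temperatures.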

In fact, a complete reaction model, for which each and every rate parameter is sufficiently chemically accurate, will probably never exist. There are two fundamental reasons that support this notion. First, the completeness of a kinetic model cannot be determined a priori; the relevant question to ask is whether the kinetic model can describe known physical observations. Second, even for simple models like that of H2/CO combustion, the kinetic uncertainties and the dimensionality of the uncertainty space are too large to “pin” the model to a unique point in the uncertainty space. This situation is exemplified by the earlier discussion of the mass burning rate of hydrogen. Under this condition, a predictive model can be represented at best by a collection of regions on an acceptable uncertainty hypersurface, on which every point (or rate parameter combination) gives statistically acceptable predictions for a prescribed set of combustion data. The feasible set, a term from optimization theory recently introduced to the combustion field, defines this rate parameter set [132]. In this context the many competing reaction models in the literature may be viewed as points distributed in the kinetic uncertainty space, some of which lie on the acceptable uncertainty surface, and others that may not. A rational goal for developing a predictive reaction model cannot be the identification of a single point (equivalent to a given reaction model), for such a point can be only as arbitrary as any of an infinite number of other points on the acceptable kinetic hypersurface. Rather, further progress in combustion reaction kinetics must be defined as a progressive reduction of the size of the uncertainty surface using theoretical and experimental tools currently available or to be developed in the future.

A widely used method to understand how the solution of a chemical kinetic model depends on the model parameters, and notably the reaction rate coefficients, is sensitivity analysis [17], [35], [51], [133], [134], [135]. A first-order local sensitivity analysis calculates the derivative of a model response with respect to model parameters. Higher-order sensitivity determines the joint impact of two or more parameters on a model prediction. As a diagnostic tool, sensitivity analysis is used to uncover those reactions that have the greatest influence on a global combustion property or a local property, e.g., what reactions impact the concentration of a species or temperature. The various sensitivity methods and their relationships to UQ have been discussed in great detail in a review paper by Turányi [133]. UQ is closely related to sensitivity analysis, and in many ways UQ is a natural extension of sensitivity analysis. For example, as first used by Warnatz [136] and further discussed by Turányi et al. [137], the product of the first-order sensitivity coefficient and the uncertainty in the rate coefficient measures the influence of the uncertainty in an individual rate parameter on the uncertainty in a kinetic model prediction.

One of the commonly used sensitivity measures is the logarithmic sensitivity coefficient. As discussed by Gardiner [138], the sensitivity coefficient of the ith computed quantity or model prediction $y_i$ with respect to the jth rate parameter $x_j$ is

$$S_{i,j}=\frac{\log y_i-\log y_i'}{\log x_j-\log x_j'}\tag{1}$$

where $x_j'$ denotes that parameter j has been altered from some reference value and $y_i'$ denotes the computed quantity i calculated with this modified parameter. If the perturbation is sufficiently small, Eq. (1) is equivalent to the local slope on a log–log plot of the computed quantity versus the rate parameter,

$$S_{i,j}=\frac{\partial \log y_i}{\partial \log x_j}=\frac{x_j}{y_i}\,\frac{\partial y_i}{\partial x_j}\tag{2}$$
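Eq. (1) is straightforward to evaluate by brute force. The sketch below does so by central differencing on a log–log scale; the power-law model is a hypothetical test case whose exact logarithmic sensitivities are simply its exponents:

```python
import math

def log_sensitivity(model, x, j, rel_step=0.01):
    """Brute-force logarithmic sensitivity S_j = dlog(y)/dlog(x_j),
    i.e., Eq. (1) evaluated with a small symmetric perturbation."""
    x_up, x_dn = list(x), list(x)
    x_up[j] *= (1.0 + rel_step)
    x_dn[j] /= (1.0 + rel_step)
    return (math.log(model(x_up)) - math.log(model(x_dn))) \
        / (2.0 * math.log(1.0 + rel_step))

# Hypothetical power-law response y = x0^0.2 * x1^-0.5 as a check:
model = lambda x: x[0] ** 0.2 * x[1] ** -0.5
print(log_sensitivity(model, [1.0, 2.0], 0))  # ~0.2
print(log_sensitivity(model, [1.0, 2.0], 1))  # ~-0.5
```

For a real kinetic model, `model` would wrap a full simulation, which is why the more efficient methods surveyed below were developed.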

As an invaluable tool, the concept of sensitivity analysis was introduced in the late 1970s and early 1980s. Beyond the brute-force method, several numerically efficient methods were introduced [35], [139], [140], [141], [142], [143], [144], [145], [146], [147], [148], [149], [150], [151], [152], [153], [154], [155], [156]. Shuler and coworkers [140], [141], [142], [143], [144] championed the method of Fourier amplitude analysis for multiparameter model systems. Rabitz and coworkers [145], [146], [147] introduced the Green's function method of sensitivity analysis in chemical kinetics. The “Direct Method” of Dickinson and Gelinas [148] introduced the use of the Jacobian of the model problem in sensitivity calculations. The method most commonly used today was originally proposed by Stewart and Sorenson [139]; it fits naturally into the solution methods used for problems frequently encountered in combustion chemistry and works well for both time-independent and time-dependent problems. More importantly, the method was implemented in the CHEMKIN suite of codes [157], which has been the mainstream code for combustion chemistry analysis over the last thirty years. The reader is referred to the review paper of Miller et al. [51] for further details.

Consider a general dynamical system $\Xi$ in vector notation,

$$\Xi(\mathbf{s},t,\mathbf{y};\mathbf{x})=L(\mathbf{s},t,\mathbf{y};\mathbf{x})-f(\mathbf{s},t;\mathbf{x})=0\tag{3}$$

where $t$ is the time, $L$ represents the conservation of various fluid-mechanical and thermodynamic properties and of species mass fractions, $\mathbf{y}=\mathbf{y}(\mathbf{s},t;\mathbf{x})$ is the solution vector of the system or model response, and $f=f(\mathbf{s},t;\mathbf{x})$ is the source term; both depend on the position $\mathbf{s}$ and time $t$, and depend parametrically on the parameters $\mathbf{x}$, which can be uncertain. Here the source term $f$ is essentially a chemical kinetic model applied to a specific problem governed by $\Xi$. In combustion chemistry, Eq. (3) can be a set of conservation equations of a flame, or a set of rate equations describing the time evolution of species and thermodynamic quantities. By the chain rule, we have

$$\frac{\partial \Xi}{\partial y_i}\frac{\partial y_i}{\partial x_j}+\frac{\partial \Xi}{\partial x_j}=0\tag{4}$$

In the above equation, $\partial\Xi/\partial y_i$ is the Jacobian $J$, which is readily available as part of the solution method for both initial-value and boundary-value problems. Hence, the sensitivity coefficients for a stationary system may be conveniently calculated as

$$J\,\frac{\partial y_i}{\partial x_j}=-\frac{\partial \Xi}{\partial x_j}\tag{5}$$
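As an illustration of Eq. (5), the sketch below computes steady-state sensitivities for a hypothetical two-species stirred-reactor system (inflow of A, reactions A → B → C, outflow); the residual, rate values, and residence time are invented for the example:

```python
import numpy as np

# Steady stirred reactor: residual Xi(y; x) = 0 with y = (a, b) and
# uncertain parameters x = (k1, k2); a_in and tau are fixed, made-up values.
a_in, tau = 1.0, 2.0

def residual(y, k1, k2):
    a, b = y
    return np.array([(a_in - a) / tau - k1 * a,
                     -b / tau + k1 * a - k2 * b])

def steady_state(k1, k2):
    a = a_in / (1.0 + k1 * tau)
    b = k1 * a * tau / (1.0 + k2 * tau)
    return np.array([a, b])

def sensitivities(k1, k2):
    a, b = steady_state(k1, k2)
    J = np.array([[-1.0 / tau - k1, 0.0],
                  [k1, -1.0 / tau - k2]])   # J = dXi/dy
    dXi_dk1 = np.array([-a, a])             # dXi/dk1
    return np.linalg.solve(J, -dXi_dk1)     # Eq. (5): J (dy/dk1) = -dXi/dk1

print(sensitivities(0.5, 0.3))  # (da/dk1, db/dk1)
```

The linear solve reuses the same Jacobian already factored by the nonlinear solver, which is why this route is essentially free once the steady solution is in hand.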

Analysis of the first-order sensitivity coefficients, whether they are obtained from brute-force calculation or from Eq. (5), quickly yielded some very useful understanding of combustion chemistry. For example, as Fig. 4 shows, the reaction rate coefficients that impact a given combustion response are usually very few in number compared to the total number of reactions considered in a kinetic model; the ranked sensitivity coefficients form the basis for the selection of active rate parameters for model uncertainty minimization, as will be discussed later. Although it is not used frequently, higher-order sensitivity has been discussed [147], [158]. In particular, the second-order sensitivity coefficients indicate the importance of parameter coupling, which can be crucial to kinetic model refinement [57].

Among the pioneering work of uncertainty-related analysis in combustion chemistry, the most influential work has been that of Michael Frenklach, who was the first to systematically treat kinetic parameter uncertainties [2], [17], [35], [37], [159]. In his studies of systematic optimization of detailed kinetic models [17], [159], Frenklach addressed the fundamental issue concerning the role of underlying parameter uncertainty in kinetic modeling and asked whether it is “possible to adjust a large-scale dynamic model in a systematic manner with a reasonable amount of effort.” In order to address this question, the method of solution mapping was introduced as a quantitative way to express model predictions as functions of rate parameters; a response surface is generated which contains the first- and second-order sensitivity information explicitly. The role of fundamental combustion experiments was reinterpreted as not only to serve as a measure for comparison with model predictions, but also to provide statistically meaningful information that is used for constraining the joint parameter uncertainty in a kinetic model. It is this reinterpretation that allows for systematic development of the model through multi-parameter optimization against experimental data. Finally, the concept of non-uniqueness of parameter choices, or the feasible region, was introduced. The last point lays the foundation for what is a renewed concept of the feasible parameter set in his later publications [119], [132].
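The solution-mapping idea can be sketched as a least-squares fit of a second-order polynomial response surface to a handful of model evaluations; the toy exponential "model" and the two normalized parameters below are hypothetical stand-ins for a kinetic simulation and its active rate parameters:

```python
import numpy as np

# Fit y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 at a
# factorial design; x1, x2 are normalized rate parameters on [-1, 1].
model = lambda x1, x2: np.exp(0.2 * x1 - 0.1 * x2 + 0.05 * x1 * x2)

pts = np.array([(a, b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)])
x1, x2 = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, model(x1, x2), rcond=None)

def surrogate(a, b):
    """Cheap polynomial replacement for the full model, usable inside
    optimization or Monte Carlo loops."""
    return np.array([1.0, a, b, a * a, b * b, a * b]) @ beta

print(model(0.5, -0.5), surrogate(0.5, -0.5))
```

The fitted coefficients carry the first- and second-order sensitivity information explicitly, which is exactly what makes the response surface usable for multi-parameter optimization against experimental data.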

The aforementioned studies form the theoretical and practical foundations for the much-celebrated GRI-Mech effort of the 1990s [160], [161], [162]—a successful drive of the systematic treatment of kinetic parameter uncertainty through collaborative and coordinated modeling and experimentation, from fundamental rate parameter evaluation, reaction rate theory application, initial trial model testing, to designing experiments for measurements of key rate parameters and critical combustion properties, and model optimization and interactive dissemination. It also inspired a large range of UQ research that has been conducted over the last decade.

An early effort that bridges the UQ analysis of chemical kinetics to sensitivity analysis is the work of Tamás Turányi. In his 1990 review article, Turányi [133] discussed the various sensitivity methods available at that time, but the real significance of the article lies in extending sensitivity analysis to quantitative uncertainty analysis, coupled kinetic parameter estimation, experimental design, and mechanism reduction.

McRae and coworkers [163] championed the Deterministic Equivalent Modeling Method (DEMM) to represent the stochastic distribution of kinetic model outputs. The method relies on a representation of parametric uncertainty via polynomial chaos expansions, and utilizes orthogonal collocation to calculate the distributions of the model responses. A series of studies by Najm, Ghanem, and Knio [164], [165], [166] further advanced the method of spectral stochastic uncertainty quantification, and showed that polynomial chaos methods and conventional sensitivity analysis provide similar first-order information, but polynomial chaos provides better higher-order information critical to uncertainty quantification [166]. In particular, confidence intervals on sensitivity coefficients, which can be uncertain themselves, may be calculated.
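A one-parameter version of such a polynomial chaos expansion can be sketched with probabilists' Hermite polynomials; the log-normal parameter and power-law response below are invented for illustration and are not taken from Refs. [163], [164], [165], [166]:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Hypothetical response y(k) = k**0.3 with log-normal parameter
# k = exp(sigma * xi), xi ~ N(0, 1).
sigma = 0.4
model = lambda xi: np.exp(sigma * xi) ** 0.3

order = 4
nodes, weights = hermegauss(order + 4)    # Gauss rule for weight e^{-x^2/2}
weights = weights / np.sqrt(2.0 * np.pi)  # normalize to the N(0,1) measure

# Galerkin projection: c_n = E[y He_n] / E[He_n^2], with E[He_n^2] = n!
coeffs = []
for n in range(order + 1):
    basis = np.zeros(n + 1)
    basis[n] = 1.0
    coeffs.append(np.sum(weights * model(nodes) * hermeval(nodes, basis))
                  / math.factorial(n))

mean = coeffs[0]
variance = sum(c * c * math.factorial(n) for n, c in enumerate(coeffs) if n > 0)
print(f"mean = {mean:.5f}, variance = {variance:.6f}")
```

The zeroth coefficient is the predicted mean, the sum of c_n² n! over n ≥ 1 is the variance, and the first-order coefficient carries the same information as a first-order sensitivity coefficient, consistent with the observation of Ref. [166].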

In a recent paper by Frenklach [119], the role of uncertainty analysis and reduction has been discussed in the larger context of model development through the Process Informatics Model (PrIMe). A comprehensive discussion of the role and methods of sensitivity and uncertainty analyses in combustion chemistry modeling may be found in Tomlin [167]. The intent of the current paper is to provide a self-contained review of the mathematical principles and methods of uncertainty quantification and their application to highly complex, multi-parameter combustion chemistry problems. We intend to provide a comprehensive review and classification of the various UQ methods and to illustrate their application both to forward uncertainty quantification and propagation and to the inverse problem of constraining model uncertainty.

The terms “error” and “uncertainty” have often been used interchangeably in many different ways. For instance, as discussed by Raman and co-workers [168], “error” is used in the same sense as the terms “experimental uncertainty” and “model form error” defined below, while “uncertainty” means only “parametric uncertainty.” In this paper, we adopt the following definitions. Uncertainty refers to deficiencies of an experimental measurement or model prediction that are caused by a lack of precise knowledge. Error, on the other hand, is a recognizable, recoverable, deterministic deficiency that is not due to lack of knowledge. Examples of error include a mistyped rate parameter, a mismatch of the thermochemical properties of an isomer due to confusion of species nomenclature, a wrong value in the initial condition of a simulation, rate constants of an elementary reaction not obeying the principle of detailed balancing, and a convergence error that can be resolved by reducing the computational error tolerance.

Uncertainty has two possible sources, commonly categorized as aleatory and epistemic. Aleatory uncertainty is inherent to a probabilistic process; it is irreducible and must be characterized by a probability distribution. Epistemic uncertainty is due to limited knowledge, e.g., an incomplete understanding of the underlying physics (missing pathways) or an imprecise evaluation of a rate constant. Unless otherwise indicated, what we discuss here is epistemic uncertainty.

The sources of model uncertainty are many, and they may be classified into several categories. Oberkampf and Roy [169] divide the sources of uncertainty into three major groups: model input uncertainty, model form error, and numerical uncertainty. Numerical uncertainty comes from the procedures used to solve a particular physical problem with a given model. In general, numerical uncertainty can be minimized by careful convergence tests. For example, it has long been known that, as an eigenvalue problem, the laminar flame speed or the mass burning rate can be a strong function of the computational domain size and local gradients. In principle, a properly converged solution requires sensitivity tests with respect to the domain size and grid refinement parameters. The other two categories can be subdivided further.

Model input uncertainty refers to uncertainty in the data used to build the model, that is, the notion that a model will always have some inherent “fuzziness.” It can be split into parameter uncertainty and experimental uncertainty. Parameter uncertainty stems from chemical kinetic rate parameters and thermodynamic or transport properties whose exact values are unknown, cannot be known beyond a certain accuracy, or are only approximate due to assumptions in the fundamental theory (e.g., the rigid-rotor, harmonic-oscillator treatment of partition functions). Experimental uncertainty refers to the measurement uncertainty of the fundamental combustion properties that are used to constrain the model. This uncertainty is best quantified by careful assessment of similar errors in different apparatuses and by repeating measurements across a range of experimental apparatuses in one or more laboratories.

Model form error refers to uncertainties in the assumptions inherent in developing the model. It can be split into structural uncertainty and interpolation/extrapolation uncertainty. Structural uncertainty is the result of a lack of knowledge of the underlying physics. Examples include missing reaction pathways, incomplete reaction descriptions, or ill-formulated conservation equations. Interpolation and extrapolation uncertainty is the result of a lack of available laboratory experiments for a particular set of thermodynamic conditions. The same category of uncertainty applies to elements of reaction model development that employ reaction class rules or analogous reactions. In the context of UQ of chemical kinetics, interpolation and extrapolation uncertainties can be related to parameter uncertainty. For example, extrapolating a high-temperature shock-tube measurement of a rate coefficient to low temperatures can lead to tremendously enlarged uncertainty at low temperatures because of the Arrhenius behavior, and in some cases, simple extrapolation can violate collision-theory limits and introduce structural uncertainties. It is for this reason that rate coefficient extrapolations are appropriate only when they are guided, at least to some extent, by reaction rate theories.

The focus of the current review is on uncertainty quantification resulting from model input uncertainties, although a proper treatment of these uncertainties can lead to useful conclusions about model form errors. For example, failure of a chemical reaction model to predict a particular combustion measurement, within its experimental uncertainty and within the uncertainty bounds of the model rate parameters (and thermodynamic and transport properties), suggests that the model is deficient beyond parameter uncertainty: the model may have missing reaction pathways or incomplete reaction descriptions, or the accuracy of the experiment is in question.

Before we start to discuss the various UQ methods for combustion chemistry analysis, we wish to make a brief remark about what is meant by model validation and verification, as these terms have been frequently used in the literature. There has been much discussion about rigorous definitions for these terms. A review is provided in the book, Verification and Validation in Scientific Computing by Oberkampf and Roy [169], which we will summarize here. The first definitions of verification and validation were provided by the Society for Computer Simulation (SCS) [170], in a short article providing concise definitions for a number of terms. In this work, SCS divided modeling and simulation into three key components, which were reality, the conceptual model, and the computerized model. Reality was defined as what you can see and touch. The conceptual model is the mathematical description of reality. The computerized model is the executable code that implements the conceptual model.

Multiple definitions of verification have been used in computer modeling, differing in detail but similar in substance to the definition of the SCS, which is “substantiation that a computerized model represents a conceptual model within specified limits of accuracy.” As an example, the American Society of Mechanical Engineers' Guide for Verification and Validation in Computational Solid Mechanics divides verification into two components. These are code verification, which ensures that the computer code is correctly implemented (i.e., there are no mistakes), and solution verification, which ensures that the numerical solution agrees with known solutions to within certain tolerances and also addresses questions of numerical stability. For instance, in certain special cases, the conceptual model will admit analytical solutions, to which solutions of the computerized model must converge for it to be verified.

As with verification, model validation has many definitions, all similar to the SCS's, “substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model.” This is divided by Oberkampf and Trucano [171] into three aspects, which are comparison with experimental data, extrapolation of the model and associated uncertainty to its intended domain, and assessing whether the model meets certain precision requirements in its intended domain. It is the first aspect that is usually meant by “validation” in the combustion community. As a general practice, unfortunately, the community usually ignores the second and third aspects of validation.

Validation includes extrapolation because experimental data may not be available in the intended domain of the model. This means that the uncertainty of the model must be estimated over that domain. If the model is later to be optimized against the validation data, then the corresponding measurement uncertainty must also be estimated and propagated. Without an uncertainty estimate, it can be said that the model is being compared with or tested against experiment, but it cannot be said to be validated.
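The kind of uncertainty estimate this requires can be sketched with a toy forward propagation (an illustrative Monte Carlo construction, not the article's method). Echoing the example in the Introduction, suppose a response R depends on a rate coefficient k with logarithmic sensitivity s = 0.2, i.e., R ∝ k^s, and k carries a two-standard-deviation uncertainty of ±15%:

```python
import math
import random

random.seed(0)

# Toy forward UQ: response R with logarithmic sensitivity s to a
# rate coefficient k, so ln R = s * ln k + const.
s = 0.2                  # logarithmic sensitivity (flame-speed-like)
f_2sigma = 1.15          # 2-sigma uncertainty factor of k (+/-15%)
sigma_lnk = math.log(f_2sigma) / 2.0

# Sample ln k perturbations and propagate them to ln R.
samples = [s * random.gauss(0.0, sigma_lnk) for _ in range(100_000)]
sigma_lnR = (sum(x * x for x in samples) / len(samples)) ** 0.5
print(f"2-sigma uncertainty in R: +/-{200 * sigma_lnR:.1f}%")
```

The propagated two-standard-deviation uncertainty comes out near ±3%, consistent with the linear estimate quoted in the Introduction; for realistic models with many correlated rate parameters, the sampling is over the full joint distribution rather than a single coefficient.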

An important distinction between verification and validation is that verification is a well-posed problem with a definite answer. In the case of code verification, for instance, the computerized model is compared with a known (possibly analytical) solution, and the numerical solution either agrees with it or it does not. Validation, on the other hand, is ill-posed. An experiment gives an answer, but that answer is subject to interpretations and assumptions. It can never be known whether those assumptions hold in all cases, and as such the validation data are no more "true" in any concrete sense than are the predictions of the model.

## Cited by (211)

• Real gas effect on steady planar detonation and uncertainty quantification

2022, Combustion and Flame

The steady planar detonation in hydrogen/air mixture was simulated at elevated pressure. The real gas effect was incorporated by considering the Peng-Robinson equation of state (EoS), nonideal thermodynamic functions and reaction kinetic laws. The nonideal EoS and corresponding thermodynamic functions increase the Chapman-Jouguet (CJ) velocity but decrease the post-shock temperature compared to the ideal gas model, which induces a change of the induction time and distance. The effect of the nonideal reaction rate law depends on the magnitudes of the compressibility factor and fugacity coefficient and tends to shorten both the induction time and induction distance. These effects counterbalance each other in the complete real gas model, which leads to induction time and distance close to the ideal model results. The total heat release was also found to be reduced for the complete model, but the key reactions and their relative importance remain the same as when using the ideal gas model. The nonideal portion of heat release is minor compared with its ideal portion as temperature increases and pressure drops in the reaction zone. The uncertainty of the real gas model, originating from the uncertainty of the parameters in the EoS, was quantified using a Monte Carlo sampling approach. The uncertainty increases linearly with initial pressure and is mainly determined by the species with higher uncertainty factor, larger mole-fraction-weighted molecular attraction and covolume parameters. Compared to the uncertainty caused by the chemical reaction model, real gas model uncertainty is negligible at low pressure, but becomes of the same order of magnitude at elevated pressure.

• OptEx: An integrated framework for experimental design and combustion kinetic model optimization

2022, Combustion and Flame

Computational fluid dynamics (CFD) simulation in the design of combustion devices imposes increased demand on combustion kinetic models with acceptable uncertainties. Model optimization is often utilized to constrain the model parameters with experimental data to reduce the prediction uncertainties. Since it is unaffordable to conduct experiments under all the conditions of concern, experimental design approaches are proposed to find the most valuable experiments to be conducted. An integrated computational framework, OptEx (Optimal Experiments), is proposed to facilitate applications of experimental design, data clustering, and model optimization with optimal experimental data. Specifically, this framework integrates the functions of dimension reduction, global sensitivity analysis, forward uncertainty quantification, model-analysis-based experimental design, and model optimization. The sharing of data and surrogate models between different modules significantly improves the computational efficiencies of model analysis, experimental design and model optimization. Two case studies of a methanol system are utilized to demonstrate the functionalities of OptEx. First, experimental designs are performed based on sensitivity entropy and surrogate model similarity analysis to identify informative yet independent experiments. Second, experimental data clustering with OptEx is demonstrated by grouping massive experiments and selecting optimal experiments for each group. For both cases, small optimal experimental datasets are utilized for model optimization, yielding an optimized model with smaller uncertainty bounds. Meanwhile, the input parameter uncertainties are significantly reduced and parameter correlations are identified via the joint probability distributions of optimized parameters.

• Efficiency of uncertainty propagation methods for moment estimation of uncertain model outputs

2022, Computers and Chemical Engineering

Uncertainty quantification and propagation play a crucial role in designing and operating chemical processes. This study computationally evaluates the performance of commonly used uncertainty propagation methods based on their ability to estimate the first four statistical moments of model outputs with uncertain inputs. The metric used to assess the performance is the minimum number of model evaluations required to reach a certain confidence level for the moment estimates. The methods considered include Monte Carlo simulation, numerical integration, and expansion-based methods. The true values of the moments were calculated by high-density sampling with Monte Carlo simulations. Ninety-five functions with different characteristics were used in the computational experiments. The results reveal that, despite their accuracy, the performance of numerical integration methods deteriorates quickly as the number of uncertain inputs increases. The Monte Carlo simulation methods converge to the moments' true values with the minimum number of model evaluations when model characteristics are not considered or known.

• Correlation in quantum chemical calculation and its effect on the uncertainty of theoretically predicted rate coefficients and branching ratios

2022, Combustion and Flame

Transition state theory (TST) and the RRKM/master equation (ME) method are well established in developing combustion models. Uncertainties in input parameters, such as energies and vibrational frequencies produced by quantum chemical computations, largely contribute to uncertainties of the theoretically predicted rate coefficients. The potential correlations among these parameters and their effects on the uncertainty of rate coefficients have been investigated in the present work. The correlations in quantum chemical computations on H abstraction, H addition and thermal decomposition reactions, regarding radical sites, molecular types and reaction types, are investigated and quantified using Pearson correlation coefficients. The results show that notable correlations exist in energies and imaginary frequencies. The correlation factors are then incorporated into the global uncertainty analysis for two typical TST and RRKM/ME computation systems, i.e., H abstraction reactions of acetaldehyde by the HO2 radical (TST) and the multi-well multi-channel reactions on the C4H7 potential energy surface (RRKM/ME), to unravel the effects of correlation on the uncertainties of rate coefficients and branching ratios. Compared with random independent sampling of the input parameters, including correlations among input parameters largely reduces the predicted uncertainties, with the largest reduction of ∼30% for absolute rates and ∼45% for the branching ratio in TST calculations, and ∼33% for absolute rates and ∼50% for branching ratios in RRKM/ME calculations. The uncertainty propagation behavior with and without the correlation in the input parameters was further uncovered by sensitivity analysis. In the TST calculation, the uncertainty reduction for absolute rate coefficients solely originates from the reduction of the sampling space, while the uncertainty reduction for the branching ratio originates from both the reduced parameter space and the cancellation of sensitive parameters. In the RRKM/ME calculation, the uncertainty reduction in rate coefficients arises only from the reduced sampling space, but the uncertainty reduction in branching ratios is due to both the parameter cancellation effect and correlation.

• Combustion machine learning: Principles, progress and prospects

2022, Progress in Energy and Combustion Science

Progress in combustion science and engineering has led to the generation of large amounts of data from large-scale simulations, high-resolution experiments, and sensors. This corpus of data offers enormous opportunities for extracting new knowledge and insights—if harnessed effectively. Machine learning (ML) techniques have demonstrated remarkable success in data analytics, thus offering a new paradigm for data-intense analyses and scientific investigations through combustion machine learning (CombML). While data-driven methods are utilized in various combustion areas, recent advances in algorithmic developments, the accessibility of open-source software libraries, the availability of computational resources, and the abundance of data have together rendered ML techniques ubiquitous in scientific analysis and engineering. This article examines ML techniques for applications in combustion science and engineering. Starting with a review of sources of data, data-driven techniques, and concepts, we examine supervised, unsupervised, and semi-supervised ML methods. Various combustion examples are considered to illustrate and to evaluate these methods. Next, we review past and recent applications of ML approaches to problems in combustion, spanning fundamental combustion investigations, propulsion and energy-conversion systems, and fire and explosion hazards. Challenges unique to CombML are discussed and further opportunities are identified, focusing on interpretability, uncertainty quantification, robustness, consistency, creation and curation of benchmark data, and the augmentation of ML methods with prior combustion-domain knowledge.

• Automatically generated model for light alkene combustion

2022, Combustion and Flame

Light alkenes are common combustion intermediates for a variety of fuels. Therefore, understanding their oxidation and pyrolysis chemistry is key to building detailed mechanisms for heavier fuels. This work was focused on the development and evaluation of a detailed kinetic mechanism suitable for the combustion of light alkenes up to C4 without tuned parameters; instead, the parameter values come from first principles or direct measurements. The generated mechanism accurately estimates the laminar burning velocity (Su) and ignition delay time (IDT) of light alkenes available in the literature, which represent fundamental combustion properties, over a wide range of conditions. Because each parameter is thought to have a physically realistic value, not tuned to these measurements, the new model could be used as a sub-mechanism in models for other applications.

The reaction network was generated with the open-source Reaction Mechanism Generator (RMG) software. Sensitivity analyses were performed under wide ranges of temperatures and pressures, allowing for the identification of the most impactful species and reactions. Based on these, a comprehensive thermochemistry database, including calculations on 550 molecules performed in this work at the CBS-QB3 level of theory, and a kinetic library, including theoretically-derived reaction rates retrieved from the literature, were built and used in the mechanism generation. The developed mechanism was compared against several existing detailed kinetic mechanisms for ethene, propene, 1-butene, 2-butene, and isobutene. The newly generated model is the most accurate among the ones analyzed, in terms of fractional bias and normalized mean square error. Hence, this new model was used to analyze the chemistry of alkene combustion. Key rate coefficients were compared, to identify the cause of deviations between the models and possible areas for further improvements.


## Recommended articles (6)

• Research article

Determining predictive uncertainties and global sensitivities for large parameter systems: A case study for n-butane oxidation

Proceedings of the Combustion Institute, Volume 35, Issue 1, 2015, pp. 607-616

A global sampling approach based on low discrepancy sequences has been applied in order to propose error bars on simulations performed using a detailed kinetic model for the oxidation of n-butane (including 1111 reactions). A two-parameter uncertainty factor has been assigned to each considered rate constant. The cases of ignition and oxidation in a jet-stirred reactor (JSR) have both been considered. For the JSR, not only the reactant mole fraction has been considered, but also that of some representative products. A temperature range from 500 to 1250K has been studied, including the negative temperature coefficient (NTC) region where the predictive error bars have been found to be the largest. It is in this temperature region that the largest number of reactions contribute to the overall output errors. A global sensitivity approach based on high dimensional model representations (HDMR) has then been applied in order to identify those reactions which make the largest contributions to the overall uncertainty of the simulated results. The HDMR analysis has been restricted to the most important reactions based on a non-linear screening method, using Spearman Rank Correlation Coefficients at all studied temperatures. The final global sensitivity analysis for predicted ignition delays illustrates that the key reactions are mainly included in the primary mechanism, for temperatures from 700 to 900K, and in the C0–C2 reaction base at higher temperatures. Interestingly, for predicted butane mole fractions in the JSR, the key reactions are almost exclusively from the reaction base, whatever the temperature. The individual contribution of some key reactions is also discussed.

• Research article

Global uncertainty analysis for RRKM/master equation based kinetic predictions: A case study of ethanol decomposition

Combustion and Flame, Volume 162, Issue 9, 2015, pp. 3427-3436

A precise understanding of the accuracy of reaction rate constants, whether determined experimentally or theoretically, is of considerable importance to kinetic modelers. While the uncertainties of experimentally measured rate constants are commonly provided, the "error bars" of computed (temperature- and pressure-dependent) rate constants are rarely evaluated rigorously. In this work, global uncertainty and sensitivity analysis is applied to the propagation of the uncertainties in the input parameters (e.g., barrier heights, frequencies, and collisional energy transfer parameters) to those in the rate constants computed by the RRKM/master equation method for the decomposition of ethanol. This case study provides a systematic exploration of the effect of temperature and pressure on the parametric uncertainties in RRKM/master equation calculations for a prototypical single-well multiple-channel dissociation. In the high pressure limit, the uncertainties in the theoretical predictions are controlled by the uncertainties in the input parameters involved in the transition state theory calculations, with the most important ones being those describing the energetics of the decomposition. At lower pressures, where fall-off is important, the uncertainties in the collisional energy transfer parameters play a significant role, particularly for the higher energy of the two channels. Remarkably, the competition between dissociation and collisional excitation leads to uncertainties of more than a factor of 100 in the predictions for the higher energy channel. These large uncertainties are related to the need for large-scale single-collision-induced transitions in energy in order to produce the higher energy products in the low pressure limit. The present study illustrates the value of detailed qualitative and quantitative studies of the uncertainties in theoretical kinetics predictions.

• Research article

Uncertainty quantification of reaction mechanisms accounting for correlations introduced by rate rules and fitted Arrhenius parameters

Combustion and Flame, Volume 160, Issue 9, 2013, pp. 1583-1593

We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel–air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel–air mixture at constant pressure. We outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. We examine the uncertainties in both the Arrhenius parameters and in predicted ignition time, outlining the role of correlations, and considering both accuracy and computational efficiency.

• Research article

Joint probability distribution of Arrhenius parameters in reaction model optimization and uncertainty minimization

Proceedings of the Combustion Institute, Volume 37, Issue 1, 2019, pp. 817-824

The method of uncertainty minimization by polynomial chaos expansions is extended to Arrhenius prefactor and activation energy co-optimization and uncertainty minimization. A covariance matrix is formulated to describe the joint probability distribution of the reaction rate parameters. The method is tested on a recently proposed foundational fuel chemistry model using 60 H2 and H2/CO flame speeds as the targets. The results show that co-optimizing A and Ea did not produce appreciable improvements in the ability of the reaction model to predict the flame targets. It does yield a reduction in the temperature-dependent uncertainty band of the rate coefficients of several key reactions. The importance of additional experimental and theoretical studies of the CO+OH→CO2+H, HO2+H→H2+O2 and HO2+H→2OH reactions is highlighted.

• Research article

Skeletal reaction model generation, uncertainty quantification and minimization: Combustion of butane

Combustion and Flame, Volume 161, Issue 12, 2014, pp. 3031-3039

Skeletal reaction models for n-butane and iso-butane combustion are derived from a detailed chemistry model through directed relation graph (DRG) and DRG-aided sensitivity analysis (DRGASA) methods. It is shown that the accuracy of the reduced models can be improved by optimization through the method of uncertainty minimization by polynomial chaos expansion (MUM-PCE). The dependence of model uncertainty on the model size is also investigated by exploring skeletal models containing different numbers of species. It is shown that this dependence is governed by the completeness of the model. In principle, for a specific simulation the uncertainty of a complete model, which includes all reactions important to its prediction, is convergent with respect to the model size, while the uncertainty calculated with an incomplete model may display unpredictable correlation with the model size.

• Research article

Mechanism optimization based on reaction rate rules

Combustion and Flame, Volume 161, Issue 2, 2014, pp. 405-415

Accurate chemistry models form the backbone of detailed computational fluid dynamics (CFD) tools used for simulating complex combustion devices. Combustion chemistry is often very complex, and chemical mechanisms generally involve more than one hundred species and one thousand reactions. In the derivation of these large chemical mechanisms, a large number of reactions typically appears for which rate data are not available from experiment or theory. Rate data for these reactions are then often assigned using so-called reaction classes. This method categorizes all possible fuel-specific reactions as classes of reactions with prescribed rules for the rate constants, which ensures consistency in the chemical mechanism. In rate parameter optimizations found in the published literature, rate constants of single elementary reactions are usually systematically optimized to achieve good agreement between model performance and experimental measurements. However, it is not kinetically reasonable to modify the rate parameters of single reactions, because this violates the consistency of rate parameters of kinetically similar reactions. In this work, the rate rules that determine the rates for reaction classes are calibrated instead of the rates of single elementary reactions, leading to a chemically more consistent model optimization. This is demonstrated by optimizing an n-pentane combustion mechanism. The rate rules are studied with respect to reaction classes, abstracting species, broken C–H bonds, and ring strain energy barriers. Furthermore, the uncertainties of the rate rules and model predictions are minimized, and the pressure dependence of reaction classes dominating low temperature oxidation is optimized.

