Title | Predictivity: Definition and Application to a Tangentially Fired Combustion System |

Conference | 2018 AFRC Industrial Combustion Symposium |

Creator | Parra-Alvarez, J. |

Contributor | Isaac, B., Smith, S., Smith, P. |

Date | 2018-09-17 |

Subject | Machine-learning, V&UQ, Oxy-coal combustion |

Description | Paper from the AFRC 2018 conference titled Predictivity: Definition and Application to a Tangentially Fired Combustion System |

Abstract | Quantification of uncertainty and prediction under uncertainty play a bigger role in simulation science today than they did 10 years ago. Since the seminal work of Kennedy and O'Hagan it has been understood that point-value estimates from model predictions lack real engineering value if not provided with valid intervals within which the model is capable of forecasting. In this work, we used tools common in artificial intelligence and machine learning to define predictivity and determine the predictive distribution function for the input parameters that control the quantities of interest (QOI's) of a tangentially-fired coal combustion boiler: heat flux to the wall, gas temperature and oxygen concentration. The predictive posterior distribution informs modelers and experimentalists of the ranges of simulation input parameters within which the model produces useful predictions. |

Type | Event |

Format | application/pdf |

Rights | No copyright issues exist |

OCR Text | Predictivity: Definition and Application to a Tangentially Fired Combustion System

John Parra-Álvarez, Benjamin Isaac, Sean Smith, Philip Smith
Institute for Clean and Secure Energy (ICSE), 155 S 1452 E Room 380, Salt Lake City, UT, University of Utah
July 11th, 2018

Abstract

Quantification of uncertainty and prediction under uncertainty play a bigger role in simulation science today than they did 10 years ago. Since the seminal work of Kennedy and O'Hagan 1 it has been understood that point-value estimates from model predictions lack real engineering value if not provided with valid intervals within which the model is capable of forecasting. In this work, we used tools common in artificial intelligence and machine learning to define predictivity and determine the predictive distribution function for the input parameters that control the quantities of interest (QOI's) of a tangentially-fired coal combustion boiler: heat flux to the wall, gas temperature and oxygen concentration. The predictive posterior distribution informs modelers and experimentalists of the ranges of simulation input parameters within which the model produces useful predictions.

Keywords: Machine-learning, V&UQ, Oxy-coal combustion

1 Introduction

Several sectors that are key components of today's economy are moving towards adopting machine-learning tools to improve efficiency, productivity and data utilization. This trend is most visible in fields such as robotics, education, healthcare, transportation, financial services, business and marketing. 2,3 The examples in these fields are almost ubiquitous and easy to identify, and they are all related to a better understanding of the characteristics of the data that allows one to forecast better strategies and influence decisions in a particular field. This trend is slowly making its entrance into more technical fields of engineering.
One such field is combustion, and one of the applications in which machine-learning can make an impact is in the different aspects of boiler design, control and optimization. Some examples are found in recent research within this field. 4-7

Characterizing and understanding the different sources of data is the first step in proposing a methodology for forecasting. In combustion engineering the main source of data comes from experiments performed, in the best case, at the same conditions as the system. Unfortunately, most of the time this is not the case and data usually comes from pilot systems that assimilate only some of the characteristics of the real system. 8 Usually the cost of the experiments and the difficulty of obtaining measurements are the determining factors in obtaining useful data. On the other hand, data that comes from simulations is usually taken with skepticism due to modeling assumptions and tuning parameters that might not represent the complexities of the underlying phenomena. In this regard, the field of verification and uncertainty quantification (V/UQ) uses tools common to the field of machine-learning to offer solutions to the consideration of both simulation and experimental data, by quantifying the uncertainty carried in these data sets in order to make any assessment about the system under consideration. The field of V/UQ is fairly recent 9-11 and, within the combustion field, examples are found in optimization and control, where the concepts of machine-learning can be readily adapted; 4,5 these concepts apply to combustion in general. In particular, oxy-coal combustion can benefit from this methodology, especially given the challenges that it faces from the design point of view. Based on prior information and data collected from the system, V&UQ estimates uncertainties of the system, producing upper and lower bounds of confidence within which predictions might be possible.
In this paper, we use a V/UQ methodology to determine uncertainty bounds that define the model space for which experimental data and simulation agree. These bounds will help us define predictivity as the process of using a body of experimental evidence, provided at near-by conditions, to forecast the behavior of a system for which there is no experimental data, and to provide an estimation of the error in the prediction. The prediction error is naturally limited by both model-form error (which is analyzed through traditional validation) and parameter uncertainty. In the following we describe the oxy-coal system and how the simulation and experimental data, in conjunction with a Bayesian methodology, can be used to produce forecasting.

2 Definition of Predictivity

2.1 Hierarchical Validation Approach

At the Carbon Capture Multidisciplinary Simulation Center (CCMSC), we employ an overall validation approach that includes uncertainty propagation across multiple scales and simultaneous validation across different sets of experimental and simulation data for air and oxy-combustion systems. The key physics components for these kinds of systems include particle combustion, ash transformations, multiphase flow, radiation and LES turbulence. An illustration of this process is shown in figure 2.1.
[Figure 2.1: Hierarchy of validation based on simulation and experimental data from fundamental building models. The hierarchy spans full-scale design prediction (500 MWe pressurized oxy-coal design, 1.2 GWe USC boiler design), pilot-scale validation (15 MWth oxy-coal boiler), lab-scale validation (1.5 MWth UofU air-coal atmospheric furnace), the resolved scale (numerical uncertainty, verification: multiphase flow CFD/LES, radiation) and the subgrid model scale (model-form uncertainty, validation: devolatilization, char oxidation, soot formation, ash transformation).]

The lower bricks represent complex physics and models whose uncertainty has been characterized and will be propagated to the upper bricks containing the systems under study. The body of evidence propagated from the lower levels to the upper levels will characterize the uncertainty of the system; this allows one to quantify the overall uncertainty of the systems and obtain predictions with their corresponding uncertainty. The premise of this hierarchical validation is relatively simple: interpolation of the physics among bricks with extrapolation of scales among systems. In order to achieve this premise, a body of complex tools utilized in machine-learning fields is used. In the following we explain some of the mathematical background of these tools, namely Bayesian inference (BI) and bound-to-bound data collaboration (B2B-DC). In essence, the definition of predictivity comes as a result of the amount of learning from BI and B2B-DC, in which data (counted as both experiments and simulations) drive forecasting of the process and provide optimized model inputs with which it is possible to make predictions. These intervals come as a form of "allowable uncertainty" based on the uncertainties of the experimental data and the uncertainty of the model form used in the simulations.

2.2 Bayesian Inference

BI is a common tool in machine-learning processes and presents powerful and intuitive concepts to convert a learning process from data into algorithms.
For a general overview refer to Gelman et al. 12, Barber 13 and Frenklach et al. 14. The underlying principle in formulating BI is a clear understanding of Bayes' theorem for problems in which information (I) is available for the learning process. Let Y be a response variable, the target of the learning process. Let X be the input models and parameters that are responsible for the underlying learning structure of the process. A mathematical statement relating the response and the input models and parameters can be put as:

f(X|Y, I) ∝ f(Y|X, I) × f(X|I)    (2.1)

where the left-hand side is the inverse problem, the first factor on the right is the forward problem and the second factor is the parameter prior. For this specific problem, this is equivalent to:

P(X, H|Yd) ∝ ℓ(X, H; Yd) × P(X0) × P(Σ0)    (2.2)

that is, posterior distribution ∝ likelihood function × prior distributions, in which X is the input parameters for the model forms and Yd is the experimental and simulation data. Also, H is the bandwidth of the kernel density used to represent the likelihood function (for instance, if the kernel is a Gaussian distribution, H is related to the covariance of the distribution). In BI, one of the main aims is to find X and H based on the inputted data. The last two terms in equation (2.2) correspond to the prior information about input parameters. These terms reflect one's knowledge of the system before any data is taken into account. Prior knowledge can come from different sources; experts' opinions and conclusions from previous analyses of the system count as prior knowledge once included in the overall current analysis of the system. Informative priors include specific knowledge about the system that might provide a better assessment of the system in general. On the other hand, when knowledge of the system is only general and even fragmented or scarce, the use of non-informative priors is necessary. 12 In BI, the likelihood function measures the similarity between simulated and observed data.
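The proportionality in equation (2.2) can be illustrated with a minimal grid-based sketch. Everything here is hypothetical (one parameter, one datum, a made-up linear forward model), not the paper's 12-parameter BSF problem:

```python
import numpy as np

# Minimal 1-D sketch of eq. (2.2): posterior ∝ likelihood × prior.
x_grid = np.linspace(0.0, 1.0, 1001)                 # candidate values of X
dx = x_grid[1] - x_grid[0]

prior = np.exp(-0.5 * ((x_grid - 0.5) / 0.2) ** 2)   # informative Gaussian prior

def forward_model(x):
    """Stand-in forward model: maps the input parameter to a predicted QOI."""
    return 2.0 * x + 1.0

y_data, h = 2.2, 0.1                                 # observed QOI, bandwidth H
likelihood = np.exp(-0.5 * ((y_data - forward_model(x_grid)) / h) ** 2)

posterior = likelihood * prior
posterior /= posterior.sum() * dx                    # normalize on the grid

x_map = x_grid[np.argmax(posterior)]                 # most likely input value
```

Because the likelihood here is sharper than the prior, the posterior mode lands between the prior mean (0.5) and the value that exactly reproduces the datum (0.6), closer to the latter.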
Assuming that there is a simulation model for the physical system that is a function of some model parameters, with the mathematical form:

Ym = f(X, θ),    (2.3)

then the likelihood function can be written as:

ℓ(X; Y) = |H|^(−1/2) K(H^(−1/2)(Y − Ym)) = |H|^(−1/2) K(H^(−1/2)(Y − f(X, θ)))    (2.4)

where K(·) is a multivariate kernel density and H is the bandwidth of the kernel. Usual examples of kernel distributions are Gaussian, Weibull, Gamma, χ2 and others. 15 The posterior distribution takes the form:

P(X, H|Y) ∝ |H|^(−1/2) K(H^(−1/2)(Y − f(X, θ))) × P(X0) × P(Σ0)    (2.5)

Once the ultimate mathematical form of equation (2.5) has been established, the purpose of BI becomes clearer: through computational routines, solve an optimization problem in order to find the set of parameters X that could span the data Y presented as evidence. In a later section the definitive mathematical form of the posterior will be derived according to the assumptions made for this system.

2.3 Consistency Analysis

Equation (2.5) represents a BI framework in which it is statistically possible to span the ranges of uncertainty from both the experimental and the simulation data. These uncertainty bounds can be further reduced to produce useful predictions by incorporating an additional constraint in the probabilistic space spanned by the posterior distribution. This constraint is developed and used in Feeley et al. 16 to perform a consistency analysis between models and experiments in their B2B-DC methodology. The constraint takes the form:

γ_up ≥ |Y − f(X, θ)| ≥ γ_low    (2.6)

in which γ_up and γ_low correspond to the upper and lower bounds of uncertainty from both the BI analysis and the experimental data. In essence, BI helps delineate the region of probability that spans the uncertainties of the experimental data, while B2B-DC finds consistency between experiments and models within such a region. Additional information can be found in Frenklach et al.
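Equation (2.4) can be sketched numerically with a Gaussian kernel and a diagonal bandwidth matrix H. The observation vector and bandwidths below are hypothetical stand-ins for scaled QOI measurements:

```python
import numpy as np

# Sketch of the kernel likelihood of eq. (2.4) with a Gaussian kernel K.
def gaussian_kernel(u):
    """Standard multivariate Gaussian kernel K(u)."""
    d = u.shape[0]
    return (2.0 * np.pi) ** (-d / 2.0) * np.exp(-0.5 * u @ u)

def kernel_likelihood(y_obs, y_model, H):
    """ℓ = |H|^(-1/2) K(H^(-1/2) (Y - Ym)), eq. (2.4)."""
    # For a diagonal H, H^(-1/2) is simply 1/sqrt of the diagonal entries.
    H_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(H)))
    return np.linalg.det(H) ** -0.5 * gaussian_kernel(H_inv_sqrt @ (y_obs - y_model))

y_obs = np.array([1.0, 2.0])          # e.g. scaled temperature and O2 readings
H = np.diag([0.04, 0.09])             # bandwidths ~ squared measurement errors

l_exact = kernel_likelihood(y_obs, y_obs, H)       # model reproduces the data
l_off = kernel_likelihood(y_obs, y_obs + 0.5, H)   # model off by 0.5 in each QOI
```

The likelihood is maximal when the surrogate prediction matches the data exactly and decays as the mismatch grows relative to the bandwidth, which is what makes H the knob that encodes measurement uncertainty.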
14.

3 Description of the System

The combustion system under study has several benefits in terms of overall efficiency and carbon capture and utilization. In an oxy-combustion process, the fuel (coal) is burnt in a mixture of pure oxygen, recirculated carbon dioxide and other combustion gases. In most cases, nitrogen is extracted from atmospheric air in an air separation unit (ASU), producing a combustion process with combustion gases consisting mostly of carbon dioxide, water vapor and other fuel-specific gases that can be used in gas fields or further processed for other industrial processes. A summary of this process is shown in figure 3.1. Recirculated gas is required in order to control the overall temperature of the process and the heat transfer characteristics.

[Figure 3.1: Comparison of oxy-coal combustion (air → ASU → >21% O2, <79% N2, with flue-gas recirculation) and air-coal combustion (21% O2, 79% N2). Flue-gas composition, oxy-coal (% wt / % vol): N2 6.23/7.81, O2 3.33/3.65, CO2 73.1/58.26, inerts 1.61/2.29, H2O 14.25/27.74, SO2 0.48/0.26. Air-coal (% wt / % vol): N2 68.25/72.33, O2 5.02/4.66, CO2 20.35/13.73, inerts 1.14/0.85, H2O 5.07/8.35, SO2 0.16/0.07.]

3.1 Tangentially-Fired Oxy-coal Boiler

The boiler-simulator facility (BSF) is a 15 MWth capacity pilot unit formerly located with ALSTOM-GE. The BSF is an atmospheric-pressure, balanced-draft combustion facility designed to simulate the temperature-stoichiometry history of typical utility boilers. 17,18 The furnace is fired from four corners, with two levels of separated overfire air and three levels of coal injection, in which mixing occurs mainly throughout the boiler instead of near the burner nozzle. The heat-transfer surfaces are cooled by a surrounding water jacket, and the steam generated is vented off at atmospheric pressure with a constant sink temperature of 212 °F. Figure 3.2 shows the typical configuration of tangentially fired systems.
The BSF is capable of switching from air to oxy-combustion, with modifications that include, most importantly, an oxygen supply and injection system and a gas recirculation system. This work focuses on the prediction of three overall quantities of interest (QOI's) for a tangentially-fired oxy-coal combustion system. These QOI's correspond to measurements of heat flux at specific locations on the walls, gas temperature at different boiler heights and oxygen concentration at different boiler heights.

3.2 Overall Description of the Experimental Data

The instrumentation in the BSF measures all flows, temperatures and pressures. Information such as combustion air, fuel input, mass flow inlet rates, air preheat temperatures, exit gas temperatures, fan-speed settings, etc., is recorded and processed by data-acquisition systems for process control and monitoring of critical operation. Four waterwall test panels are installed at three elevations to monitor heat absorption; six upper-furnace loops are installed across the vertical furnace outlet plane to continuously measure the upper-furnace heat absorption rate and its variations across the furnace. In addition to the continuously logged data, probing and sampling is conducted at selected test conditions and furnace locations. Furnace and convection-pass temperatures are measured using suction pyrometers. Heated gas-extraction probes are used with a dedicated gas-analysis system to measure in-furnace gas compositions. Incident heat fluxes to the furnace walls are measured using total heat-flux meters manufactured by the International Flame Research Foundation (IFRF). 19 Figure 3.3 shows the furnace planes typically characterized. 17,18 Measurement planes are labeled using a nomenclature that identifies planes by their distance, in inches, from the bottom of the windbox. Total heat flux and radiant heat flux are measured at a fixed location 2" into the furnace.
Gas sampling and suction-pyrometer measurements are conducted on a grid pattern at each measurement plane. Figure 3.4 presents the averages of probed data taken at different locations and different times, as previously described, with their corresponding estimated uncertainty. The plots represent measurements of heat flux to selected panels, gas temperature and oxygen concentration at selected ports. With BI+B2B-DC, we will analyze this uncertainty combined with the simulation data and produce updated ranges that will inform both the modeler and the experimentalist about the overall uncertainty of the system and its ability to forecast results.

[Figure 3.2: Configuration of tangentially-fired systems with separated over-fire air: a) burner layers, b) top view, c) windbox/burner. 17,18]

[Figure 3.3: BSF furnace probe measurement planes.]

3.3 Overall Description of the Simulation Data

The simulation tool used to represent the complex physics processes occurring inside the oxy-coal system is ARCHES, which is a component of the UINTAH computational framework, 20-22 developed to solve partial differential equations on structured grids on systems of hundreds of thousands of processors. ARCHES solves conservation equations for mass, momentum, energy and solid phases using a low-Mach, pressure-projected, variable-density code. Turbulence is modeled using the Large-Eddy Simulation (LES) approach with the dynamic-Smagorinsky closure model for the random fluctuations in the momentum equation. Advection terms in the transport equations are discretized using a hybrid approach between upwind and central differencing. A forward-Euler scheme is used to evolve the discretized equations in time.

The solid phase (coal and ash particles) is represented in an Eulerian framework through the direct quadrature method of moments (DQMOM), 23 introducing evolution equations for particle variables such as velocity, size, raw coal mass, char mass, enthalpy and maximum temperature that ultimately describe the high-dimensional particle property distribution through its moments. Coal particle models include devolatilization and char oxidation, which will be discussed in a later section. Gas-phase reactions were modeled using a mixture fraction approach with three streams: a primary stream, a secondary stream and a coal off-gas stream that relates the products of devolatilization and char oxidation from the coal particles.
Transport equations are solved for the mixture fractions formulated in terms of the proposed streams, and a lookup table relating state variables is computed based on the obtained mixture fractions. Radiation is solved using the discrete ordinates method with 8 ordinates and 80 directions. A more detailed explanation of the models used in this simulation can be found elsewhere. 24,25 One simulation of the BSF costs 740,000 CPU-hours on 3,000 cores with a resolution of 2 cm and a total number of cells around 17,000,000. The simulation runs up to 28 seconds of simulated time, achieving steady state at approximately 20 seconds. The last 5 seconds were used to take averages of the variables of interest (QOI's). Figure 3.5 shows some of the simulation results for oxygen concentration, gas temperature and heat flux to the walls. The first row in figure 3.5 shows volume-rendered images of temperature and oxygen concentration, while the second row shows slices through several selected planes of the domain. For the heat flux, the figures look very similar; the difference is that the one in the first row is a snapshot at a particular time, while the second is an average of the last 5 seconds of simulation: 22 - 27 seconds.

[Figure 3.4: Averages of different samples of the experimental measurements.]

4 Application of Predictivity to an Oxy-coal Combustion System

The concepts discussed in section 2 will now be applied to the boiler-simulator facility (BSF) that was described in section 3. As pointed out before, the concept of predictivity presented in this paper has its basis in the uncertainty quantification (V&UQ) analysis, and the methodology presented here will define the allowable ranges of uncertainty for which predictions are useful given the current state of knowledge and body of evidence. The specifics of the mathematical developments presented in previous sections will be developed in greater detail in the present section.
4.1 Verification and Uncertainty Quantification

As mentioned previously, the V&UQ methodology is the critical step to accomplish predictivity. The aim is to find the ranges of uncertainty for which forecasting, based on available models and experimental data, is useful. A methodology that allows one to do that is uncertainty quantification (UQ), whose main objective is also to find the accuracy of the models that represent the QOI's or the experimental data in general. Different descriptions of this methodology can be found in published work. 27-32 Standards have even been put in place by ASME for verification and validation of CFD codes. 33 In this work we are going to focus on the methodology proposed by Schroeder, 32 which is a modification of the one proposed by the National Institute of Statistical Sciences (NISS). 28,29 The framework is based on six fundamental steps, as shown in figure 4.1.

[Figure 3.5: Different snapshots of simulation results for temperature, heat flux and oxygen concentration. The images were created using the visualization software VisIt from LLNL. 26]

In step 1, the quantities that are going to be subjected to validation are selected. Input and expertise from both sides, simulation and experiments, is required. Usually there has to be a compromise between what can be measured and what can be simulated. This step also includes the selection of models and model parameters involved in the representation of the selected QOI's. In step 2, prior uncertainties are assigned to the most important model parameters chosen in step 1. A ranking based on sensitivity rates the overall importance of the parameters and represents an overall view of the prior uncertainties in the system. In step 3, experimental data is collected and an estimation of the experimental errors is analyzed. A design of experiments (DOE) is proposed to estimate how much simulation data is required to perform step 4; the simulation data is then generated from the proposed models under study.
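A design of experiments of the kind used in step 3 can be sketched with a simple Latin hypercube over a few of the parameter ranges from the IU map. The ranges are taken from table 1 of the paper; the sampler itself is a generic sketch, not necessarily the tool the authors used:

```python
import numpy as np

# Step-3 sketch: Latin hypercube design over three Table 1 parameter ranges.
rng = np.random.default_rng(7)

bounds = [
    (0.1, 0.9),        # vHiT: high-temperature volatile yield
    (1300.0, 1600.0),  # Tmu: devolatilization temperature parameter, K
    (14.0, 163.0),     # AO2: char oxidation pre-exponential factor, O2
]

def latin_hypercube(n_samples, bounds, rng):
    """One stratified draw per slice of [0,1) in each dimension, then rescale."""
    dim = len(bounds)
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):               # decouple the strata across dimensions
        u[:, j] = rng.permutation(u[:, j])
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)

design = latin_hypercube(50, bounds, rng)   # 50 candidate simulation runs
```

The stratification guarantees that each one-dimensional projection of the design covers its full range evenly, which is what makes a modest number of expensive CFD runs informative for surrogate fitting.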
In step 4, the simulation data is post-processed and surrogate models are generated based on the processed data-set. The purpose of the surrogate models is to be able to perform thousands of evaluations of a cheap substitute that is able to represent the QOI's with (ideally) minimal introduced uncertainty. A large number of evaluations is needed in step 5, where the statistical analyses are introduced. The complexity of the surrogate models will largely depend on the DOE; the more comprehensive the design, the more general the surrogate models obtained. This is crucial for systems in which non-linear behaviors dominate the response of the QOI's. In step 5, BI and B2B-DC are performed over a large sample of input model parameters. Essentially, this helps delineate the high-dimensional region of model parameters that spans the response of the QOI's. This step is crucial for the whole analysis since it provides the intervals, constrained by the system's overall uncertainty, in which it is possible to make predictions. Finally, in step 6, an evaluation of the overall methodology is performed, in which consideration of model corrections/updates, experimental data re-collection/addition and updates to prior uncertainties is taken into account.

[Figure 4.1: Six-step methodology for uncertainty quantification. Step 1: selection of quantities of interest (QOI's); Step 2: construction of the input uncertainty map (IU map); Step 3: collection of experimental and simulation data; Step 4: construction of surrogate models; Step 5: Bayesian inference and consistency analysis; Step 6: feedback and feedforward.]

In the following, we summarize the most important results for each described step applied to the system described in section 3.

4.1.1 Selection of QOI's and Construction of the IU Map

Prior to the analysis presented in this paper, a qualitative ranking of different model parameters was made to create an IU map; these parameters were used in an impact-factor study 24 for a ranked sensitivity.
The results of steps 1 and 2 for the BSF are summarized in table 1. The ranking yields the 12 most important parameters, presented in table 2. The selected QOI's and their experimental data are described in section 3.2.

4.1.2 Collection of Data and Surrogate Modeling

The description of the type of data collected for the QOI's (step 3) and the tools used for that purpose is presented in section 3.2. This section will focus on the description of the surrogate models (step 4).

Table 1: IU map

Model | Parameter | Priority | Range | Nominal | Comments
Devolatilization | vHiT | 4 | 0.1 - 0.9 | 0.54 | High-temperature volatile yield.
Devolatilization | Tµ | 4 | 1300 - 1600 | 1500 | K, temperature parameter from the volatile yield model.
Devolatilization | Tσ | 4 | 700 - 1000 | 800 | K, temperature parameter from the volatile yield model.
Devolatilization | ThardB | 4 | 1000 - 2000 | 1500 | K, critical temperature for breaking carbon bonds.
Char oxidation | ACO2 | 4 | 506 - 2195 | 1053.3 | g/cm2 atm O2, pre-exponential factor, CO2 reaction.
Char oxidation | ECO2 | 4 | 17984 - 78196 | 48090 | cal/gmol, activation energy, CO2 reaction.
Char oxidation | AO2 | 4 | 14 - 163 | 47.5 | g/cm2 atm O2, pre-exponential factor, O2 reaction.
Char oxidation | EO2 | 4 | 13574 - 27361 | 20468 | cal/gmol, activation energy, O2 reaction.
Char oxidation | AH2O | 2 | 3000 - 10000 | 7614 | g/cm2 atm O2, pre-exponential factor, H2O reaction.
Char oxidation | EH2O | 2 | 40000 - 70000 | 60000 | cal/gmol, activation energy, H2O reaction.
Wall models | Avisc | 5 | −38 - −31 | −34.1 | Pre-exponential factor, particle viscosity model; in natural log space.
Wall models | Kwall | 5 | 2.9 - 3.6 | 3.45 | W/Km, effective thermal conductivity.
Wall models | tsdep | 5 | 1 - 10 | 4.5 | Days, time scale of ash deposition.
Wall models | Tslag | 5 | 1400 - 1600 | 1525 | Kelvin, melting temperature of ash.
Wall models | enamel | 3 | 0.1 - 0.7 | 0.6 | Porosity of the inner layers of deposits.
Wall models | δenamel | 5 | 0.5 - 1.5 | 1.0 | mm, thickness of the inner deposit layer.
Wall models | ydep | 2 | unspecified | - | Composition (mass fraction) of the deposits.
Wall models | ρdep | 2 | 500 - 3000 | 2000 | kg/m3, density of the ash deposits.
Radiation | Nθ, Nφ | 2 | s4 - s16 | s8 | Angular discretization of the space.
Radiation | Fs | 2 | 1 - 100 | 20 | Radiation solve frequency: number of iterations before solving DO.
Radiation | Cabs | 4 | 0.5 - 2 | 1.0 | Absorption coefficient of gas and particles.
Soot | ytar | 1 | 0.1 - 0.3 | 0.2 | Mass fraction of tar yield.

In equation (2.3), θ refers to a surrogate representation of the simulation data. Due to the complexity and computational cost of the simulations, BI relies on an approximate representation of the simulation data through surrogate models that best represent the hyperspace spanned by the current simulation data. These surrogate models allow one to span uncertainty regions at a lower cost than the simulations themselves. This is especially useful when sampling from the posterior distribution is required, allowing one to sample hundreds of thousands of points from the input hyperspace without re-running the simulations. There are many forms of surrogate models that allow a practical representation of the hyperspace spanned by the simulations; the two main categories are linear and non-linear. Non-linear surrogates can range from high-order polynomials to Gaussian-process regression. On the other hand, linear surrogates represent hyper-planes that pass through the hyper-surface in a least-squares sense. In this work, our main focus is to define the region of uncertainty for which predictions are allowed. Keep in mind that, in order to capture more of the complexities of the space spanned by the simulations, non-linear surrogate models are required. As a first approximation, we consider only linear surrogate models for characterizing the simulation data; in the least-squares sense, we would be able to span, statistically, the region of uncertainty, as will be demonstrated later.

Table 2: Description of models and corresponding parameters selected as most sensitive

Model | Parameters
Ash deposition | P1: pre-exponential factor of the viscosity of the particles (Avisc); P2: time scale for ash deposition (tsdep); P3: thickness of the initial deposits on unseasoned boilers (δenamel); P4: temperature of the melting deposits (Tslag)
Heat transfer properties of particles and ash deposits | P5: thermal conductivity of ash deposits (Kwall); P6: absorption coefficient of particles (Cabs)
Char oxidation | P7: pre-exponential factor for the O2 reaction (AO2); P8: activation energy for the O2 reaction (EO2); P9: pre-exponential factor for the CO2 reaction (ACO2); P10: activation energy for the CO2 reaction (ECO2)
Maximum yield in coal devolatilization | P11: mean temperature of the yields (Tµ); P12: melting temperature of the carbon bonds (ThardB)

Therefore, the function f(X, θ) takes the form:

f(X, θ) = Xθ + β = x_p1,j θ_p1,j + x_p2,j θ_p2,j + x_p3,j θ_p3,j + · · · + x_p12,j θ_p12,j + β_j    (4.1)

which defines hyper-planes in the regions spanned by the simulation data, with θ being the slopes in each coordinate (corresponding to each of the 12 parameters) of the hyper-space, and β an independent constant. The index j corresponds to a specific QOI.

4.1.3 Bayesian Inference and Consistency Analysis

For the analysis of step 5 it is important to clarify the mathematical details of BI and B2B-DC. We do that in the following subsections.

Incorporation of priors. Forecasting in BI requires the use of prior information as a means to obtain useful predictions. Ideally, precise prior information about the models in the lower bricks of the hierarchy (figure 2.1) would yield precise and useful information about the uncertainty of the system under study, which in turn would allow one to predict quantities in the studied system with their associated uncertainty. In our study, prior information comes as particular knowledge of the associated uncertainty of the models and systems located below the system under study in the proposed hierarchy.
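The least-squares fit behind the linear surrogate of equation (4.1) can be sketched on a synthetic data set. The 12 scaled inputs and the single QOI below are stand-ins for the ARCHES runs, not the paper's actual data:

```python
import numpy as np

# Least-squares fit of the linear surrogate of eq. (4.1): f(X, θ) = Xθ + β.
rng = np.random.default_rng(0)
n_runs, n_params = 40, 12

X = rng.uniform(-1.0, 1.0, size=(n_runs, n_params))   # scaled input samples
theta_true = rng.normal(size=n_params)                # hidden "true" slopes
beta_true = 0.7
y = X @ theta_true + beta_true                        # noiseless synthetic QOI

# Augment X with a column of ones so the intercept β is fit jointly with θ.
A = np.hstack([X, np.ones((n_runs, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
theta_hat, beta_hat = coef[:-1], coef[-1]
```

With more runs than unknowns and a noiseless linear response, the fit recovers the slopes and intercept exactly; for real simulation data the residual of this fit is one measure of the surrogate's introduced uncertainty.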
As shown in figure 2.1, prior information about the uncertainty of char oxidation and devolatilization, as well as the gathered collective uncertainty of the L1500, would serve as inputs to determine the overall uncertainty of the BSF. In turn, this uncertainty can be converted into intervals of prediction in which forecasting becomes more meaningful and modeling efforts become more valuable. A precise description of the degree of knowledge obtained from the lower bricks, including the L1500 system, can be found in Diaz-Ibarra et al. 24 and Adamczyk et al. 25. In order to describe mathematically the last two terms of equation (2.2), additional assumptions are required. One of those assumptions is the use of conjugate priors, in which the prior and the posterior correspond to the same family of distributions. One of the main advantages of this choice is that it avoids complicated calculations because both the prior and the posterior are known analytically. Here, we have adopted normal distributions for both the prior and the posteriors. Let the prior for the input parameters be distributed as:

P(X) ∼ N(µ0, Σ0)    (4.2)

where the mean and the variance are chosen from a Jeffreys distribution:

p(µ) = c,    p(Σ0) = √(det I(Σ0)) = det(Σ0)^(−(L+1)/2)    (4.3)

where c is a constant and L is the number of response variables of the system.

Posterior mean and covariance matrix. The exact definition of the likelihood function requires some assumptions, common for experimental data-sets, that help specify the problem mathematically. One of these assumptions is that the experimental data is normally distributed, which is not unrealistic since data is usually taken with the same calibrated device under similar circumstances:

Ye ∼ N(µls, Σd)    (4.4)

Under these conditions the likelihood function described in equation (2.4) can be written as:

ℓ(X, Σ; Y) = (2π)^(−NL/2) det(Σ)^(−N/2) × exp[−(1/2)(Y − f(X, θ))^T Σ^(−1) (Y − f(X, θ))].    (4.5)
With the assumptions of the prior and likelihood in place, the posterior distribution (2.5) can be written as:

$$P(\mathbf{X}, \boldsymbol{\Sigma}\,|\,\mathbf{Y}) \propto (2\pi)^{-NL/2} \det(\boldsymbol{\Sigma})^{-(N+L+1)/2} \exp\left[-\tfrac{1}{2}\,\mathrm{tr}[\boldsymbol{\Sigma}^{-1} S(\boldsymbol{\theta})]\right] \exp\left[-\tfrac{1}{2}\,(\mathbf{X} - \boldsymbol{\mu}_0)^T \boldsymbol{\Sigma}_0^{-1} (\mathbf{X} - \boldsymbol{\mu}_0)\right] \tag{4.6}$$

Let us define a positive-definite matrix S as:

$$S(\boldsymbol{\theta}) = (\mathbf{Y} - f(\mathbf{X},\boldsymbol{\theta}))^T (\mathbf{Y} - f(\mathbf{X},\boldsymbol{\theta})) \tag{4.7}$$

With this definition of S, the likelihood function can also be rewritten as:

$$\ell(\mathbf{X}, \boldsymbol{\Sigma}; \mathbf{Y}) = (2\pi)^{-NL/2} \det(\boldsymbol{\Sigma})^{-N/2} \exp\left[-\tfrac{1}{2}\,\mathrm{tr}[\boldsymbol{\Sigma}^{-1} S(\boldsymbol{\theta})]\right]. \tag{4.8}$$

Equation (4.6) needs to be reduced even further by combining the exponential terms into a single term. Applying the Fisher-Neyman factorization for sufficient statistics, the two exponential terms can be combined into one, providing the form of the posterior mean distribution and the form of the posterior variance. The algebra for this factorization can be found elsewhere. 12 The posterior mean and variance can be written as:

$$\boldsymbol{\Sigma}_p = \left(\boldsymbol{\theta}^T \boldsymbol{\Sigma}_d^{-1} \boldsymbol{\theta} + \boldsymbol{\Sigma}_0^{-1}\right)^{-1}, \qquad \mathbf{X}_p = \boldsymbol{\Sigma}_p \left(\boldsymbol{\theta}^T \boldsymbol{\Sigma}_d^{-1} \mathbf{Y}_d - \boldsymbol{\theta}^T \boldsymbol{\Sigma}_d^{-1} \boldsymbol{\beta} + \boldsymbol{\Sigma}_0^{-1} \boldsymbol{\mu}_0\right) \tag{4.9}$$

In the limiting case where Σ0 is wide enough and the normality assumptions listed before hold, the posterior Σd can be sampled from a scaled-inverse χ² distribution, whose estimator in terms of the variables of this problem is:

$$\boldsymbol{\Sigma}_d = \frac{\nu s^2 (\boldsymbol{\theta}^T \boldsymbol{\theta})^{-1}}{\nu - 2} \tag{4.10}$$

where ν = L − N and

$$s^2 = \frac{(\mathbf{Y} - f(\mathbf{X},\boldsymbol{\theta}))^T (\mathbf{Y} - f(\mathbf{X},\boldsymbol{\theta}))}{\nu}. \tag{4.11}$$

Equation (4.9) summarizes the results for the posterior distributions given the assumptions proposed for this system. The mathematical description obtained in this section is a simplified approximation of the mathematical machinery usually found in machine-learning applications. One common application of these kinds of tools is in computer vision with Bayesian deep learning, where the surrogate models correspond to neural networks and the associated uncertainty of the system helps to improve predictions made by the digital twin. 34,35 The result of this analysis is summarized graphically in figure 4.2.
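The conjugate-normal update of equation (4.9) reduces to a few lines of linear algebra. The sketch below is a direct transcription of those two formulas; the function name and the small example dimensions are illustrative assumptions, not part of the paper:

```python
import numpy as np

# Minimal sketch of equation (4.9):
#   Sigma_p = (theta^T Sigma_d^{-1} theta + Sigma_0^{-1})^{-1}
#   X_p     = Sigma_p (theta^T Sigma_d^{-1} Y_d - theta^T Sigma_d^{-1} beta
#                      + Sigma_0^{-1} mu_0)
def conjugate_posterior(theta, beta, Y_d, Sigma_d, mu_0, Sigma_0):
    Sd_inv = np.linalg.inv(Sigma_d)
    S0_inv = np.linalg.inv(Sigma_0)
    Sigma_p = np.linalg.inv(theta.T @ Sd_inv @ theta + S0_inv)
    X_p = Sigma_p @ (theta.T @ Sd_inv @ (Y_d - beta) + S0_inv @ mu_0)
    return X_p, Sigma_p
```

In the wide-prior limit (Σ0 large), the posterior mean collapses to the data-driven least-squares estimate, which is the sanity check used for this kind of update.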
This figure corresponds to a matrix plot in which the diagonal is the marginal distribution of each input parameter in table 2, showing the most likely value and its variance. The off-diagonals of the matrix plot correspond to the conditional distributions among the different input parameters of the system, correlated by pairs.

Note in table 1 the wide range of orders of magnitude that the input parameters take; in general, this affects the numerical behavior of the solution, making it difficult to obtain meaningful results. To fix this, transformations have been applied to some parameters in order to reduce their effects on the numerical behavior of the solution. For instance, the following non-linear transformation has been applied to the char oxidation parameters:

$$
\begin{aligned}
A_{O_2} &= \exp\left[1.54\,(PC1_{O_2}) - 0.308\,(PC2_{O_2}) + 3.86\right] \\
\frac{E_{O_2}}{R} &= 2890\,(PC1_{O_2}) + 579\,(PC2_{O_2}) + 10300 \\
A_{CO_2} &= \exp\left[0.326\,(PC1_{CO_2}) - 0.0814\,(PC2_{CO_2}) + 6.96\right] \\
\frac{E_{CO_2}}{R} &= 4040\,(PC1_{CO_2}) + 1010\,(PC2_{CO_2}) + 24200
\end{aligned} \tag{4.12}
$$

With this transformation the values of the new parameters are now −2 < PC < 2. Similarly, base-10 and natural logarithms have been taken for the pre-exponential factor of the particle viscosity model and the thickness of the inner deposit layer.

Consistency analysis. Once the high-dimensional region of probability is defined (mathematically by equations (4.9) and graphically by figure 4.2), it is possible to perform a B2B-DC analysis to establish, within this region, the subspace that is consistent with the uncertainty of the experimental data. To accomplish this, 800,000 random, normally distributed points were sampled from the posterior distribution found with equations (4.9). The reader should keep in mind that a point in this high-dimensional space corresponds to a vector of 12 components (one for each parameter).
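The O2 branch of the transformation in equation (4.12) can be sketched as below. One assumption is flagged explicitly: the OCR is ambiguous about where the gas constant R sits, and this sketch reads the activation-energy lines as E/R (temperature units), so E is recovered by multiplying by R. The function name is hypothetical:

```python
import numpy as np

# Sketch of the O2 half of equation (4.12); the CO2 pair follows the same
# pattern with its own coefficients. Assumes E_O2 / R equals the linear
# combination, so E_O2 is recovered by multiplying by R (J/mol/K).
def o2_kinetics_from_pc(pc1, pc2, R=8.314):
    """Map normalized parameters (-2 < PC < 2) back to Arrhenius constants."""
    A_O2 = np.exp(1.54 * pc1 - 0.308 * pc2 + 3.86)
    E_O2 = R * (2890.0 * pc1 + 579.0 * pc2 + 10300.0)
    return A_O2, E_O2
```

Working in the normalized PC variables keeps all sampled parameters on comparable scales, which is exactly the numerical-conditioning motivation given above.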
The surrogate models in equations (4.1) were evaluated on the sampled points, and many possible responses of the QOI's (heat flux, temperature and oxygen concentration) were spanned by these samples. The specific consistency test was achieved by applying the constraint in equation (2.6). A more compact way to write this constraint is given by:

$$\frac{\left|\mathbf{Y} - f(\mathbf{X},\boldsymbol{\theta})\right|}{\gamma} \leq \Delta \tag{4.13}$$

If ∆ ≤ 1, then a consistent point is found and recorded as part of the high-dimensional region that describes the parameter space. The set of points that fulfill the constraint (4.13) is a subset of the original sample space; this high-dimensional region is what we call the consistent space. A way to visualize such complexity is presented in figure 4.3, in which the parallel lines are a 1D map of the respective coordinate in the input parameter space.

Figure 4.2: Correlation matrix from the posterior distribution.

Figure 4.3: Input parameter space. The region between the magenta lines corresponds to the region spanned by BI. The region between the green lines corresponds to the consistency region found by B2B-DC. The black line corresponds to the posterior mean X found from equation (4.9).

Figure 4.4 shows the results of the BI spanning and the B2B-DC consistency. The blue and red bars correspond to the uncertainty in the QOI's found from the BI analysis, while the green bars correspond to the consistent region for the QOI's found from the B2B-DC consistency analysis. In other words, the blue region delineates the region in the QOI's space that spans the uncertainty of both the experimental data and the simulation data. This region has a one-to-one correspondence with the magenta region delineated by the input parameters in figure 4.3. The green region delineates the sub-space for which, given a set of parameters, the predictions of the models are useful and forecasting is possible.

Figure 4.4: QOI's space. Blue and green bars correspond to the uncertainty spanned by BI and B2B-DC, respectively.
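The sample-evaluate-filter loop described above can be sketched in a few lines. Everything below (posterior mean, covariance, surrogate slopes, data bound γ) is a made-up placeholder standing in for the quantities obtained earlier; only the structure of the Δ ≤ 1 test follows the text:

```python
import numpy as np

# Hypothetical sketch of the B2B-DC-style consistency filter: draw candidate
# parameter vectors from the posterior of equation (4.9), evaluate the linear
# surrogate, and keep points whose response stays within the data bound gamma.
rng = np.random.default_rng(1)

n_samples, n_params = 100_000, 12          # the paper samples 800,000 points
mu_p = np.zeros(n_params)                  # placeholder posterior mean
Sigma_p = 0.1 * np.eye(n_params)           # placeholder posterior covariance
theta = rng.normal(size=n_params)          # placeholder surrogate slopes
beta, y_obs, gamma = 0.5, 0.5, 1.0         # placeholder offset, QOI datum, bound

X = rng.multivariate_normal(mu_p, Sigma_p, size=n_samples)
delta = np.abs(y_obs - (X @ theta + beta)) / gamma
consistent = X[delta <= 1.0]               # the "consistent space"
```

With several QOI's, the same filter is applied per response and the consistent space is the intersection of the individual constraint sets.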
4.1.4 Feedback and feed-forward

Up to this point, we have established a methodology that allowed us to define ranges around the experimental data and the model input parameters (figures 4.3 and 4.4, respectively) for which the proposed fundamental models in the lower bricks of the hierarchy (figure 2.1), presented as a body of evidence, allow us to make predictions given the current uncertainty.

Several assumptions came into play in order to produce the results shown in figures 4.3 and 4.4. One of the major ones is the surrogate-modeling strategy used to represent the simulation data. Linear surrogate models are suitable to represent systems where evidence shows that the interaction between models, parameters and responses is fairly linear. The complexity of the BSF, in terms of physics and possible interactions between models and parameters, means that non-linear effects of the system can be hidden when simulation data is represented via linear surrogate models. This issue can be seen in figure 4.3, where a portion of the consistent region seems to lie outside the magenta region, and the predicted posterior mean X (equation (4.9)) seems to lie outside the consistent region at the parameter tSB. For this reason, more complex surrogate methodologies that could capture such non-linear effects are required. Gaussian-process interpolation (GP) and rational-quadratic polynomials are tested methodologies 24,30 used in V&UQ analysis. Future analysis will include these methodologies, which might help understand (expand or reduce) regions of uncertainty in the system. It is also important to reduce the uncertainty in the experimental data; the more accurate the data, the more accurate the predictions made with the proposed physics. This is especially true for the oxygen measurements, whose greater uncertainty affects the region of consistency with the data.

Figure 4.5: Prediction case. The posterior mean was used to run an additional simulation, represented by the black line.
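As an illustration of the non-linear alternative mentioned above, a minimal Gaussian-process interpolant can be written with a squared-exponential kernel. This is a generic textbook sketch, not the paper's implementation; the kernel choice, length scale and noise level are assumptions:

```python
import numpy as np

# Illustrative 1D Gaussian-process interpolation with a squared-exponential
# (RBF) kernel, one of the non-linear surrogate alternatives mentioned above.
def rbf_kernel(a, b, length=0.1):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, length=0.1, noise=1e-6):
    """Posterior mean of a zero-mean GP conditioned on the training data."""
    K = rbf_kernel(x_train, x_train, length) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_test, x_train, length)
    return k_star @ np.linalg.solve(K, y_train)
```

Unlike the hyper-planes of equation (4.1), a GP surrogate reproduces the training responses exactly (up to the noise term) and bends between them, which is why it can capture curvature that a linear fit misses.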
4.2 Analysis and Forecasting

An additional step is taken after obtaining the posterior mean for the input parameters (black line in figure 4.3): run a simulation with that set of parameters and compare the results against the consistency region found in the previous analysis. Figure 4.5 shows these results. In general, these results are encouraging. On one hand, they establish that the current state of knowledge of the system (translated as models that represent the physics in the system) is potentially accurate and useful to represent very complex systems. On the other hand, they encourage us to explore more accurate methodologies to obtain the surrogate models. The field of machine learning presents several alternatives in this area, especially in deep learning. 34 Bayesian neural networks 6,36 could finally make a breakthrough in the field of simulation sciences and help update the concept of forecasting with the predictivity defined here.

5 Conclusions and Future Work

In this work, the concept of predictivity was defined based on the concepts and methods of the verification and uncertainty quantification (V&UQ) field. The V&UQ methodology used is a modification of the one proposed in Schroeder 32 and Diaz-Ibarra et al. 24 Tools common to the field of machine learning were used to define the probabilistic region that spans the uncertainty of both the experimental and the simulation data. The resulting information of this analysis was used to run an additional simulation and assess the accuracy of the consistency region. The results are very encouraging in the sense that the prediction obtained follows closely the trends of the consistent region and forecasting is possible. Additional research is required in order to better represent the surrogate models and improve the representation of the high-dimensional sub-spaces obtained by BI and B2B-DC.

References

[1] Kennedy, M. C.; O'Hagan, A. Bayesian Calibration of Computer Models.
Journal of the Royal Statistical Society: Series B 2001, 63, 425-464.

[2] 10 Ways Machine Learning Is Revolutionizing Manufacturing In 2018. https://www.forbes.com/sites/louiscolumbus/2018/03/11/10-ways-machine-learning-is-revolutionizing-manufacturing-in-2018/, Accessed: 2018-01-30.

[3] Lee, J.; Kao, H.-A.; Yang, S. Service Innovation and Smart Analytics for Industry 4.0 and Big Data Environment. Procedia CIRP 2014, 16, 3-8. Product Services Systems and Value Creation: Proceedings of the 6th CIRP Conference on Industrial Product-Service Systems.

[4] Schaffernicht, E.; Stephan, V.; Debes, K.; Gross, H.-M. Machine Learning Techniques for Self-organizing Combustion Control. Proceedings of the 32nd Annual Conference on Artificial Intelligence. 2009; pp 1-8.

[5] Tracey, B.; Duraisamy, K.; Alonso, J. J. Application of Supervised Learning to Quantify Uncertainties in Turbulence and Combustion Modeling. 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition. 2013; pp 1-18.

[6] Vaughan, A.; Bohac, S. V. Real-time, adaptive machine learning for non-stationary, near chaotic gasoline engine combustion time series. Neural Networks 2015, 70, 18-26.

[7] Zhai, Y.-J.; Yu, D.-W.; Guo, H.-Y.; Yu, D. Robust air/fuel ratio control with adaptive DRNN model and AD tuning. Engineering Applications of Artificial Intelligence 2010, 23, 283-289.

[8] Moradipari, A.; Shahsavari, S.; Esmaeili, A.; Marvasti, F. Using Empirical Covariance Matrix in Enhancing Prediction Accuracy of Linear Models with Missing Information. ArXiv e-prints 2016.

[9] Helton, J. C. Uncertainty and sensitivity analysis in the presence of stochastic and subjective uncertainty. Journal of Statistical Computation and Simulation 1997, 57, 3-76.

[10] O'Hagan, A.; Kennedy, M. C.; Oakley, J. E. Uncertainty analysis and other inference tools for complex computer codes. Bayesian Statistics 6: Proceedings of the Sixth Valencia International Meeting. 1999; pp 503-524.

[11] Saltelli, A.; Chan, K.; Scott, E.
M. Sensitivity Analysis, 1st ed.; John Wiley & Sons, 2000.

[12] Gelman, A.; Carlin, J.; Stern, H.; Dunson, D.; Vehtari, A.; Rubin, D. Bayesian Data Analysis, 3rd ed.; Chapman & Hall/CRC Texts in Statistical Science; CRC Press, 2013.

[13] Barber, D. Bayesian Reasoning and Machine Learning; Cambridge University Press, 2012.

[14] Frenklach, M.; Packard, A.; Garcia-Donato, G.; Paulo, R.; Sacks, J. Comparison of Statistical and Deterministic Frameworks of Uncertainty Quantification. SIAM/ASA Journal on Uncertainty Quantification 2016, 4, 875-901.

[15] Abdessalem, A. B.; Jenson, F.; Calmon, P. Quantifying uncertainty in parameter estimates of ultrasonic inspection system using Bayesian computational framework. Mechanical Systems and Signal Processing 2018, 109, 89-110.

[16] Feeley, R.; Seiler, P.; Packard, A.; Frenklach, M. Consistency of a Reaction Dataset. J. Phys. Chem. A 2004, 108, 9573-9583.

[17] Kluger, F.; Moenckert, P.; Stamatelopoulos, G.-N.; Levasseur, A. Alstom's Oxy-Combustion Technology Development - Update on Pilot Plants Operation. 35th International Technical Conference on Clean Coal and Fuel Systems. Sheraton Sand Key, Clearwater, FL, USA, 2010.

[18] Edberg, C.; Levasseur, A.; Andrus, H.; Kenney, J.; Turek, D.; Kang, S. Pilot Scale Facility Contributions to Alstom's Technology Development Efforts for Oxy-Combustion for Steam Power Plants. AFRC Industrial Combustion Symposium. Kauai, Hawaii, 2013.

[19] International Flame Research Foundation. https://www.ifrf.net, Accessed: 2018-01-30.

[20] Berzins, M.; Luitjens, J.; Meng, Q.; Harman, T.; Wight, C.; Peterson, J. Uintah: A scalable framework for hazard analysis. Proceedings of the 2010 TeraGrid Conference, Pittsburgh, 2010, 1-3.

[21] Davison de St Germain, J.; McCorquodale, J.; Parker, S.; Johnson, C. Uintah: A massively parallel problem solving environment.
Proceedings of the 9th International Symposium on High-Performance Distributed Computing 2002, 719-734.

[22] Uintah Software. http://www.uintah.utah.edu, Accessed: 2018-01-30.

[23] Marchisio, D. L.; Fox, R. O. Computational Models for Polydisperse Particulate and Multiphase Systems; Cambridge Series in Chemical Engineering; Cambridge University Press, 2013.

[24] Diaz-Ibarra, O. H.; Spinti, J.; Fry, A.; Isaac, B.; Thornock, J.; Hradisky, M.; Smith, S.; Smith, P. A Validation/Uncertainty Quantification Analysis for a 1.5 MW Oxy-Coal Fired Furnace: Sensitivity Analysis. Journal of Verification, Validation and Uncertainty Quantification, ASME 2018, 3, 011004.

[25] Adamczyk, W. P.; Isaac, B.; Parra-Alvarez, J.; Smith, S. T.; Harris, D.; Thornock, J. N.; Zhou, M.; Smith, P. J.; Zmuda, R. Application of LES-CFD for predicting pulverized-coal working conditions after installation of NOx control system. Energy 2018, 160, 693-709.

[26] VisIt. https://wci.llnl.gov/simulation/computer-codes/visit, Accessed: 2018-01-30.

[27] Aughenbaugh, J. M.; Paredis, C. J. J. The value of using imprecise probabilities in engineering design. Journal of Mechanical Design 2006, 128, 969.

[28] Bayarri, M. J.; Berger, J. O.; Higdon, D.; Kennedy, M. C.; Kottas, A.; Paulo, R.; Sacks, J.; Cafeo, J. A.; Cavendish, J. C.; Lin, C. H.; Tui, J. A Framework for Validation of Computer Models. National Institute of Statistical Sciences, Research Triangle Park, NC, 2002; NISS Technical Report No. 128.

[29] Bayarri, M. J.; Berger, J. O.; Paulo, R.; Sacks, J.; Cafeo, J. A.; Cavendish, J. C.; Lin, C. H.; Tui, J. A Framework for Validation of Computer Models. Technometrics 2007, 49, 138-154.

[30] Jatale, A.; Smith, P. J.; Thornock, J. N.; Smith, S. T.; Spinti, J. P.; Hradisky, M. Application of a Verification, Validation and Uncertainty Quantification Framework to a Turbulent Buoyant Helium Plume. Flow Turbul. Combust. 2015, 95, 143-168.
[31] Russi, T.; Packard, A.; Frenklach, M. Uncertainty Quantification: Making Predictions of Complex Reaction Systems Reliable. Chem. Phys. Lett. 2010, 499, 1-8.

[32] Schroeder, B. B. Scale-bridging model development and increased model credibility. Ph.D. thesis, The University of Utah, 2015.

[33] ASME V&V 20-2009. Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer; ASME, 2009.

[34] Kendall, A.; Gal, Y. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? Advances in Neural Information Processing Systems (NIPS). 2017.

[35] Kendall, A.; Gal, Y.; Cipolla, R. Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018.

[36] Ling, J.; Kurzawski, A.; Templeton, J. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. Journal of Fluid Mechanics 2016, 807, 155-166. |

Metadata Cataloger | Catrina Wilson |

ARK | ark:/87278/s6p3185p |

Setname | uu_afrc |

Date Created | 2018-12-12 |

Date Modified | 2018-12-12 |

ID | 1389186 |

Reference URL | https://collections.lib.utah.edu/ark:/87278/s6p3185p |