Title | Oxy-Coal Power Boiler Simulation and Validation Through Extreme Computing |
Creator | Smith, P.J. |
Contributor | Thornock, J.; Wu, Y.; Smith, S. |
Date | 2014-09-09 |
Spatial Coverage | Houston, Texas |
Subject | 2014 AFRC Industrial Combustion Symposium |
Description | Paper from the AFRC 2014 conference titled Oxy-Coal Power Boiler Simulation and Validation Through Extreme Computing by P.J. Smith. |
Abstract | High carbon content is fast becoming a fuel quality constraint around the globe. We have studied the operation of a 15 MW boiler under oxy-coal firing with flue gas recirculation for carbon capture. This study was performed using high performance computing (hpc) at the extreme scale (two thousand processors per simulation) and dynamic large eddy simulation (LES). The HPC and LES in this study allowed for 1-2 cm resolution of the turbulence within the boiler and temporal resolution at the microsecond time scale. All of the scales of the particle transport and reaction are fully resolved (DNS) for the entire particle size distribution. Particle size segregation and clustering are spatially and temporally resolved. This study targeted a formal validation and uncertainty quantification for the heat flux as the specific quantity of interest as a function of wall thermal conductivity, fuel feed rate, and coal reactivity. Our validation approach identified the region of consistency between experimental and simulation data, thus providing insight into the experimental measurements that could not be achieved without concurrent simulation and validation. This simulation and validation approach provides a path to accelerating deployment of new technologies. |
Type | Event |
Format | application/pdf |
Rights | No copyright issues exist. |
OCR Text | THE INSTITUTE FOR CLEAN AND SECURE ENERGY, THE UNIVERSITY OF UTAH AND ALSTOM POWER

Oxy-Coal Power Boiler Simulation and Validation Through Extreme Computing

P. Smith, J. Thornock, Y. Wu, S. Smith, B. Isaac: The University of Utah, and P. Chapman, D. Sloan, D. Turek, Y-M. Chen, A. Levasseur: Alstom Power

Abstract

High carbon content is fast becoming a fuel quality constraint around the globe. We have studied the operation of a 15 MW boiler under oxy-coal firing with flue gas recirculation for carbon capture. This study was performed using high performance computing (hpc) at an extreme scale (thousands of processors per simulation) and dynamic large eddy simulation (LES). The HPC and LES in this study allowed for 1-2 cm resolution of the turbulence structure within the boiler and its temporal resolution at the microsecond time scale. All of the scales of the particle transport and reaction are fully resolved (DNS) for the entire particle size distribution. Particle size segregation and clustering are spatially and temporally resolved. This study targeted a formal validation and uncertainty quantification using 50 different measurements of the heat flux, temperature and O2 concentration as the specific quantities of interest as a function of wall thermal conductivity, fuel feed rate, and coal reactivity. Our validation approach identified the region of simultaneous consistency between experimental and simulation data, according to the consistency constraint

le <= [ym(x) - ye] <= ue,

where the defect between the quantity of interest as predicted by the model and as measured by the experiment is bounded by the measurement uncertainty. The experimental measurements were collected in the Boiler Simulator Facility, owned and operated by Alstom Power in Windsor, Connecticut. This formal validation/uncertainty-quantification produces uncertainty bounds on the observed quantities of interest (the measurements) that could not be achieved without concurrent simulation and validation. The information gain from this analysis is shown for each of the quantities of interest (heat flux, temperature, O2 concentration) in the three accompanying figures. This simulation and validation approach provides a predictivity path for accelerating deployment of new low carbon technologies.

[Three accompanying figures: Local Temperature [K], Heat Flux [BTU/(ft2 hr)], and Oxygen [%], each plotted against Measurement Index, with series: Experiments; Simulation Range; Mean of Simulation; All Consistent Models & Params; Mean of All Consistent.]

Introduction

Richard Smalley, the 1996 Nobel laureate in chemistry, has identified energy as the number one problem of humanity and the accessibility of cheap energy as a path to solving most of the top ten problems of the world.1 Domestic coal offers the potential for reaching secure inexpensive sources of fuel for at least many hundreds of years,2 but only if new technologies are used to extract and utilize this energy resource in ways that protect the environment. On June 2, 2014, the U.S.
EPA proposed state-specific rate-based goals for carbon dioxide emissions from the power sector, as well as guidelines for states to follow in developing plans to achieve the state-specific goals.3 This rule, as proposed, will require significant technological changes in existing coal-fired power plants in order to reduce carbon dioxide emissions in the United States. Breakthroughs in clean energy can provide benefits to the economy, security, environment, and jobs.4 These breakthroughs must be implemented on a time scale that is unprecedented in the energy sector. The only way to meet these time scales is for energy advancements and implementation to be 'revolutionized' through simulation-based science.5 Exascale simulation science offers the potential not only for discovering new technologies but for more rapidly implementing new inventions at a scale sufficient to displace or augment existing energy options. The World Bank's review of the factors that impede the deployment of new energy technologies6 identified the risks associated with new technology as "a mountain of death." Assessing and reducing this risk is essential to the licensing, funding and construction of new energy projects. Exascale computer simulation with formal uncertainty quantification appears to be a promising option for rapid risk assessment and reduction. The World Bank review also shows that the relatively low level of research, development and deployment funding is a major impediment that must be overcome to accelerate innovation in the energy sector. In May of 2014, the U.S. DOE-NNSA funded the Carbon Capture Multidisciplinary Simulation Center (CCMSC) to address this impediment through the development of exascale simulation and uncertainty quantification tools for the design, risk assessment and deployment of high efficiency advanced ultra-supercritical oxy-coal power boilers. This paper describes the state of the high performance computing (hpc) predictivity for this application.

1. Smalley, R., Future Global Energy Prosperity: The Terawatt Challenge. Material Matters 2005, 30.
2. S. Shafiee, E. T., An Econometrics View of Worldwide Fossil Fuel Consumption and the Role of the U.S. Energy Policy 2008, 36.
3. http://www2.epa.gov/carbon-pollution-standards/clean-power-plan-proposed-rule.
4. http://www.google.org/energyinnovation/speed.html.
5. National Science Foundation (U.S.), Simulation-based engineering science: revolutionizing engineering science through simulation. National Science Foundation: Washington, D.C., 2006; pp. 88. Digital PDF file. http://purl.access.gpo.gov/GPO/LPS72566.
6. World Bank, http://publications.worldbank.org/ecommerce/catalog/product?item_id=8210192.

The Problem

For the past dozen decades the stable price of coal, as compared to the price volatility of other fuels, has made it the U.S. fuel of choice for producing low cost power (Fig. 1). While the U.S. cannot produce enough oil to feed its domestic needs, it has enormous reserves of coal. According to the U.S. EIA, the U.S. has the largest recoverable coal reserves in the world.7 The U.S. is capable of meeting domestic demand for coal for roughly 290 years (260 billion tons total/890 million tons of coal consumed in 2012).8 During 2013, 39% of the power generation in the U.S. came from coal.9 Coal-fueled power increased by 4.8% in 2013 over 2012.10 Coal generation is projected to provide 40.5 percent of U.S.
electricity in 2014 and 38.9 percent in 2015.11 By the end of 2012, there were 557 coal-fueled power plants in the U.S.12 Since the beginning of 2011, 14 new coal units (totaling 8,890 MW) have begun operation. Two additional plants (totaling 644 MW) are projected to begin operating by the end of 2014.13

Figure 1 Price of electric power generation fuels in 2005 chained dollars/million BTUs (from EIA). [Series: Crude Oil, Natural Gas, Coal; x-axis: 1950-2010; y-axis: 0-16 Chained (2005) Dollars per Million BTUs.]

7. U.S. Energy Information Administration (EIA), International Energy Outlook 2013.
8. Ibid.
9. EIA, Electric Power Monthly, February 2014 edition, with data for December 2013.
10. Ibid.
11. EIA, Short Term Energy Outlook, March 2014.
12. EIA, Count of Electric Power Industry Power Plants By Sector, by Predominant Energy Sources Within Plant, Electric Power Annual 2012, December 2013.
13. EIA, Electric Power Monthly, February 2012, 2013, and 2014.

However, on March 27, 2012, the United States Environmental Protection Agency (EPA) released proposed new source performance standards for emissions of carbon dioxide (CO2) for new coal-fired electric utility generating units. The combustion of petroleum products is the largest source of CO2 in the U.S., mainly as transportation fuels. Coal combustion is the second largest source of greenhouse gases (GHGs) and the country's largest stationary source emitter, and thus the target of EPA's new source performance standards. The new source requirements state that "new fossil fuel-fired power generation units greater than 25 megawatt electric (MWe) must meet an output-based standard of 1,000 pounds of CO2 per MW-hour." As a result of these new requirements, the EPA states that they do not project any new coal-fired power generation plants to be built without carbon capture through 2030. "New coal-fired or pet coke-fired units could meet the standard either by employing carbon capture and storage (CCS) of approximately 50% of the CO2 in the exhaust gas at startup, or through later application of more effective CCS to meet the standard on average over a 30-year period."14 This new standard has the coal industry scrambling to rapidly deploy the lowest cost technology to capture CO2 in coal-fired power plants. In addition to new coal power plant performance standards, as already noted, in June of this year the U.S. EPA announced plans to regulate CO2 emissions from existing power plants. Specifically, the announcement states: "Under Clean Air Act (CAA) section 111(d), state plans must establish standards of performance that reflect the degree of emission limitation achievable through the application of the 'best system of emission reduction' (BSER) that, taking into account the cost of achieving such reduction and any non-air quality health and environmental impact and energy requirements, the Administrator determines has been adequately demonstrated. Consistent with CAA section 111(d), this proposed rule contains state-specific goals that reflect the EPA's calculation of the emission reductions that a state can achieve through the application of BSER. The EPA is using the following four building blocks to determine state-specific goals:

1. Reducing the carbon intensity of generation at individual affected electrical generation units (EGUs) through heat-rate improvements.
2. Reducing emissions from the most carbon-intensive affected EGUs in the amount that results from substituting generation at those EGUs with generation from less carbon-intensive affected EGUs (including natural gas combined cycle [NGCC] units that are under construction).
3. Reducing emissions from affected EGUs in the amount that results from substituting generation at those EGUs with expanded low- or zero-carbon generation.
4. Reducing emissions from affected EGUs in the amount that results from the use of demand-side energy efficiency that reduces the amount of generation required.

The proposed rule also contains emission guidelines for states to use in developing plans that set their standards of performance."15 Alstom R&D has given its perspective on these recent EPA rulings in a Power Engineering publication.16 Environmental footprints from combustion come not only in the form of GHGs but from other pollutants that are hazardous to human health. On July 6, 2011, the US EPA finalized the Cross-State Air Pollution Rule (CSAPR), requiring a reduction in power plant emissions like SOx and NOx. The history of the EPA and power generation over the past twenty years has shown that the combination of regulation and technology advances has reduced total NOx emissions from power plants in the U.S. by a factor of 4 and SOx emissions by a factor of 3, even while power generation capacity has increased by 30%.

14. Standards of Performance for Greenhouse Gas Emissions for New Stationary Sources: Electric Utility Generating Units; EPA: 2012.
15. http://www2.epa.gov/sites/production/files/2014-06/documents/20140602ria-clean-power-plan.pdf, pages ES-1 and ES-2.
16. "Executives Discuss the Future of Coal in North America," by Sharryn Dotson (Associate Editor), in the July (2014) issue of Power Engineering, pg. 20.

The Solution

Oxy-combustion is emerging as a lower-cost first-generation technology solution for both carbon capture17 and simultaneous reduction of NOx and SOx emissions. Advanced Ultra-SuperCritical (AUSC) power generation units have the potential for increasing conversion efficiency to offset the parasitic load of the separation of oxygen from air in an oxy-fired unit.18 President Obama has presented a national plan19 requiring that barriers to the widespread, safe, and cost-effective deployment of CCS be overcome within 10 years. Such an aggressive schedule will require a large-scale effort for rapid deployment of new technologies. Exascale computing should help to accelerate current rates of deployment to meet this timeline. The CCMSC has the mission of using exascale predictive simulation science to rapidly design and deploy new technology for secure electric energy generation; namely, a high efficiency AUSC oxy-coal power boiler. This overarching problem integrates a group of multidisciplinary scientists and engineers and is partnered with Alstom Power, one of the world's largest power generation companies,20 to provide experimental data for validation with uncertainty quantification analysis (V/UQ). Alstom is engaged in a companion study sponsored, in part, by the US DOE, to collect experimental data through a comprehensive test program focusing on tangentially fired boiler development.21 In this partnership the CCMSC is providing exascale predictive science expertise (i.e.
with validation and uncertainty quantification, V/UQ) to more rapidly deploy new technologies for reducing GHGs, NOx, SOx, and costs from power generation. The CCMSC overarching problem uses predictive science with UQ to explore fire-side design constraints of a 350MWe high efficiency AUSC oxy-coal power boiler. To this end, the center integrates a group of computer scientists, engineers, statisticians, applied mathematicians, and combustion scientists into three teams: the Exascale Team, the Predictive Science/Physics Team, and the V/UQ Team. We use large eddy simulations (LES) of the multiphase reacting flow in the oxy-coal fired power boiler. Our specific intended use of the simulation is to predict fireside performance of an ultra-supercritical oxygen-fired coal boiler: to predict the heat flux distribution and to predict the residual carbon in the ash (also called loss on ignition, LOI) with uncertainty bounds. Secondarily, we desire to also predict NOx and SOx concentrations in the boiler. Current uncertainties in oxy-coal firing center on the heat flux distribution. Can the pulverized-coal boiler firebox be made to produce the same heat flux distribution under oxygen firing with recycled flue gas as do new state-of-the-art AUSC air-fired boilers? What is the impact on NOx and SOx production? Much recent research has gone into materials science requirements for withstanding the high temperatures of air-fired AUSC designs. The CCMSC does not include materials science research, but couples AUSC operations with oxy-firing and focuses on predicting fireside performance.

17. Energy.gov http://www.energy.gov/9309.htm.
18. McCauley, K. J.; Moorman, S. A., Oxy-Coal Combustion for Low Carbon Electric Power Generation. In Fifth International Conference on Clean Coal Technologies, 2011.
19. whitehouse.gov http://www.whitehouse.gov/the-press-office/presidential-memorandum-a-comprehensive-federal-strategy-carbon-capture-and-storage.
20. Close to 25% of the world's power production capacity uses Alstom technology.
21. Project partners include the US DOE, the Illinois Clean Coal Institute, and ten electric utility companies. They are evaluating oxy-combustion design options including gas recycle configurations and oxygen injection schemes. The purpose of the partnership is to design a full scale (100-350 MWe) oxy-coal power plant.

We use hierarchical validation to obtain simultaneous consistency between a set of selected experiments at different scales embodying the key physics components of the overarching problem. We extrapolate the uncertainty obtained from the validation/uncertainty quantification (V/UQ) of the sub-scale, sub-physics analysis to a prediction of the full-scale 350MWe boiler that is consistent with all of the experiments and with all of the validation metrics of our validation hierarchy simultaneously. The data set at the largest scale of the V/UQ process is that from the 15MWth boiler simulator facility (BSF) operated by Alstom Power. A picture of this facility is shown in Fig. 2. A volume rendered image of the oxygen concentration and the temperature from a sample simulation is also shown in Figure 2. The computational size of this sample simulation was (a back-of-envelope check of these numbers follows the list):

- 14,400 cores on Kraken
- boiler size: 13.25m x 3.35m x 2.9m
- mesh resolution: cells of 1cm per side = 130 million cells
- residence time: 3 seconds
- time required to reach representative steady-state: 30 seconds of real time
- simulation time step: 1e-5 seconds
- wall-clock time per time step: 14-60 seconds
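A quick arithmetic check of the cell and time-step counts quoted above (a minimal Python sketch; the dimensions, 1 cm spacing, time step, and 30 s of real time are the values from the list, and nothing else is assumed):

    # Back-of-envelope check of the BSF sample-simulation numbers quoted above.
    Lx, Ly, Lz = 13.25, 3.35, 2.9        # boiler dimensions [m], from the list
    dx = 0.01                            # mesh spacing [m], 1 cm per side
    cells = (Lx / dx) * (Ly / dx) * (Lz / dx)
    print(f"cells = {cells:.3e}")        # ~1.29e8, i.e. ~130 million cells

    dt = 1e-5                            # simulation time step [s]
    t_real = 30.0                        # real time to representative steady state [s]
    print(f"time steps = {t_real / dt:.1e}")   # ~3e6 steps, at 14-60 s wall clock each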
Figure 2 Alstom Boiler Simulator Facility (BSF; left panel: Alstom pilot test facility) with an LES sample simulation of the coal-combustion process shown in the right two panels. The colored panel on the right shows the temperature field at one time step at one plane in the simulation. The middle panel shows a volume rendering of the O2 concentration in the boiler and was generated using visualization tools developed by CCMSC visualization participants.

The Pulverized Coal Large Eddy Simulation

The flow field within an Alstom tangentially-fired coal boiler is a highly turbulent system of multiphase combustion reactions and intense radiative heat transfer. Many of the operating parameters of interest to the plant engineers/operators of the coal boiler are considered only after sampling for long times. These time-averaged parameters, however, are a cumulative result of several short time scale phenomena. In fact, much of the short time scale information has a tremendous impact on fundamental outcomes of the boiler itself. Take, for instance, near-burner flame stability. Flame stability directly affects the flame shape and the subsequent formation of pollutant NOx zones resulting from air ingress into the boiler. In addition, flame stability in oxy-coal combustion is greatly affected by local O2 concentrations, which ultimately affect CO2/O2 concentrations throughout the boiler. These fields in turn greatly influence the net heat transfer to the boiler tubes. By using an LES code as the computational fluid-dynamics (CFD) framework, developed at the University of Utah over the past 15 years, we are able to compute as many of these temporal and spatial scales as is both possible on an hpc machine and necessary for the prediction as measured by formal V/UQ. The turbulence in the coal boiler is characterized by the turbulent length scales, expressed as wave number k (1/length scale), and the turbulent kinetic energy spectrum EU(k). Resolving all turbulent scales in the boiler is intractable, even at exascale, given the size and complication of the target application. In LES modeling, a low-pass filter is applied to the governing equations, resulting in a portion of the energy spectrum that is resolved and a portion that requires modeling approximations to account for the eventual dissipation of energy at the smallest scales. In general, LES aims to capture a significant portion of the energy-containing eddies directly (~80%). The frequency location where the handoff occurs between grid-resolved and sub-grid modeled is the cut-off frequency. Many LES models have a particular advantage in that as more resolution is added to a fixed problem, the filter width is decreased and dependence on the sub-grid model is decreased. Clearly, the move towards extreme computing will greatly aid in the accuracy of LES models. For example, two constructed model spectra22 for our target simulation are shown in Fig. 3 and represent characteristic energy spectra for two important regions of our application: the flame envelope and the near nozzle region.
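Before turning to the specific parameters, a minimal sketch of how such a resolved-energy fraction can be read off a model spectrum may be useful. The sketch below integrates Pope's model spectrum up to the filter cut-off wave number; the integral length scale and Kolmogorov scale chosen here are illustrative assumptions, not the values used to construct Fig. 3, so the printed percentages need not match those quoted below.

    import numpy as np

    # Fraction of turbulent kinetic energy resolved by an LES filter, using
    # Pope's model spectrum (Turbulent Flows, 2000).  L and eta are assumed.
    def model_spectrum(k, L=1.0, eta=1e-4, p0=2.0, beta=5.2, cL=6.78, ceta=0.40):
        fL = (k * L / np.sqrt((k * L)**2 + cL))**(5.0/3.0 + p0)   # energy-containing range
        feta = np.exp(-beta * (((k * eta)**4 + ceta**4)**0.25 - ceta))  # dissipation range
        return k**(-5.0/3.0) * fL * feta   # the constant C*eps^(2/3) cancels in the ratio

    k = np.logspace(-1, 6, 4000)           # wave numbers [1/m]
    E = model_spectrum(k)

    def resolved_fraction(delta):
        kc = np.pi / delta                 # cut-off wave number for filter width delta
        return np.trapz(E[k <= kc], k[k <= kc]) / np.trapz(E, k)

    for delta in (0.01, 0.001):            # 1 cm and 1 mm filter widths
        print(f"delta = {delta*100:g} cm: {100*resolved_fraction(delta):.0f}% resolved")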
Parameters for the model spectra were chosen based on the geometric dimensions of the 350MWe boiler. The left-most spectrum (shown in black) represents the turbulent kinetic energy in the region of the main flame envelope. The right-most spectrum (shown in blue) represents the energy for the near-nozzle region. On the plot, two filter widths (mesh spacings) are shown with the dotted red and green lines. The red line shows the filter width of the current demonstration problem used on the 15MW BSF. For a 1cm filter width, roughly 97% of the energy in the flame envelope would be captured by the LES model. A 1mm filter width captures all turbulent scales in the flame envelope. The near nozzle region, because of the finer geometric scales, produces much smaller turbulent scales. The 1cm and 1mm filter widths for this region of the flow capture 20% and 80% of the energy spectra, respectively. It is through extreme computing at the exascale that near direct numerical simulation (DNS) is achieved for the flame envelope while obtaining practical LES for the near burner regions.

22. Pope, S. B., Turbulent flows. Cambridge University Press: Cambridge; New York, 2000; p 771.

Figure 3 Normalized model energy spectra for flame envelope and near nozzle regions of the target application. Dotted lines show cut-off frequencies for LES at 1cm and 1mm. [Axes: Wave Number [1/m] vs. Kinetic Energy; curves: Flame Envelope, Near Nozzle; cut-off lines at 1cm and 1mm.]

Development of the LES CFD component has been built upon the 15+ years of existing code development. In the predictive simulation of the proposed objective, an oxy-fired pulverized-coal burner for AUSC power generation, the critical process of radiation is highly dependent on precise prediction of particle position, size and composition. Accurate modeling of the particle transport and dynamics must incorporate the particle size distribution's polydispersity and each particle's interaction with the variety of turbulent fluid time scales. The two most common approaches in the multiphase-flow community are the multifluid model and Lagrangian particle tracking. The multifluid model offers a fixed computational workload on an Eulerian mesh but requires gross assumptions for closure. The closure models relate to the equilibrium assumption: that the velocities of particles within a given dispersed phase are all equal or follow an equilibrium distribution. The Lagrangian methods easily describe any non-equilibrium velocities; however, the workload and communications in parallel computations can quickly become unreasonable. This is particularly true when accounting for particle-particle interactions. Currently, the Arches framework makes use of the direct quadrature method of moments (DQMOM).23 DQMOM is solved on an Eulerian mesh and avoids the potential number-of-particles-squared scaling of computational work that can be required for particle-particle interactions in a Lagrangian particle tracking method.

23. Fox, R. O.; Vedula, P., Quadrature based Moment Model for Moderately Dense Polydisperse Gas-Particle Flows. Industrial & Engineering Chemistry Research 2010, 49, 5174-5187.

DQMOM calculates the particle position, diameter, and composition, which in turn are used by the radiation modeling effort.
The particle-dynamics submodels, including drag, devolatilization, char oxidation, moisture evaporation, etc., all appear as source terms in the DQMOM transport equations. In the past, many LES studies of particle-laden flows have neglected sub-grid scale drag effects. We argue that this assumption is dependent on the turbulence properties as well as local LES resolution, and that the simplicity of this assumption is another motivator for exascale computation. A general theoretical justification for this assumption in LES can be made by applying a Stokes number analysis to the turbulent kinetic energy spectrum. The spectrum, denoted EU, is convenient for this analysis since it breaks down the turbulent fluctuations into contributions of different sizes, with each size exhibiting a corresponding eddy-turnover time. When the eddy-turnover time is used as the fluid characteristic time in the Stokes number, and when each wave number in the turbulent-energy spectrum has an associated eddy-turnover time, then a single particle of fixed radius and density will exhibit a unique Stokes number value for each different wave number. To illustrate this concept, the model energy spectrum for isotropic turbulence is shown in Fig. 4. This figure shows dotted lines corresponding to the wave numbers for which the eddy turnover time is equal to each particle's relaxation time, i.e., the Stokes number equals one for that particle. For wave numbers much larger than the dotted line, the eddy turnover time is much smaller than the particle relaxation time, and the particle behaves ballistically (with respect to the fluctuations at those wave numbers). On the other hand, for wave numbers much smaller than the dotted line, the eddy-turnover time is much larger than the particle relaxation time and the particle behaves as a tracer element. It is only for a relatively narrow bandwidth that the turbulence interacts non-trivially with the particle. The implications of this concept on LES modeling for the pulverized coal combustion simulations are also illustrated in the three plots of Fig. 4. Three particle sizes are considered spanning the distribution in the boiler, one for each of the three plots, with the overlap bandwidth of non-trivial Stokes number illustrated in relation to the Nyquist cut-off. The small particles, top plot, behave as tracers for all of the resolved scales. Therefore, the drag of particles in this unresolved regime should be modeled in the subgrid-scale flux. In the extreme of zero Stokes number, the subgrid-scale flux is often modeled using gradient diffusion.

Figure 4 Illustration of different particle kinematic regimes as a function of wave number for 3 fixed particle relaxation times, showing implications on LES modeling. [Axes: Wave Number [1/m] vs. Kinetic Energy EU; annotations: model energy spectra, Kolmogorov scale; panels: small, medium, and large particles.]

The large particles, bottom plot, are in a regime such that they are ballistic with regard to all the unresolved scales: their trajectories are essentially unaffected by the presence or absence of the unresolved scales. For this reason particles in the resolved regime should be modeled using the drag directly, as done in direct numerical simulations. The middle plot shows particles of intermediate size for which some of the non-trivial drag effects are resolved and some are not.
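A minimal sketch of the Stokes-number bookkeeping behind Fig. 4: for a given particle diameter, compute the Stokes relaxation time and the wave number at which an inertial-range eddy turnover time matches it (St = 1). The particle density, gas viscosity, and dissipation rate are assumed illustrative values, not numbers taken from the BSF simulations.

    import numpy as np

    rho_p = 1300.0    # coal particle density [kg/m^3] (assumed)
    mu_g  = 5e-5      # hot flue-gas dynamic viscosity [Pa s] (assumed)
    eps   = 10.0      # turbulent dissipation rate [m^2/s^3] (assumed)

    def tau_particle(d):
        """Stokes relaxation time of a particle of diameter d [m]."""
        return rho_p * d**2 / (18.0 * mu_g)

    def k_stokes_one(d):
        """Wave number where the eddy turnover time, tau_k ~ eps^(-1/3) k^(-2/3),
        equals the particle relaxation time, i.e. St = 1 at that wave number."""
        return (eps**(-1.0/3.0) / tau_particle(d))**1.5

    for d_um in (10, 30, 90):                  # small / medium / large particles
        d = d_um * 1e-6
        k1 = k_stokes_one(d)
        print(f"d = {d_um:3d} um: tau_p = {tau_particle(d):.2e} s, "
              f"St=1 at k ~ {k1:.1e} 1/m (eddy size ~ {2*np.pi/k1:.1e} m)")

With these assumed values the 90 micron particles cross St = 1 at eddies of a few centimeters (marginally resolved at 1-2 cm spacing), while the 10 micron particles cross it far below the grid scale and behave as tracers for all resolved eddies, consistent with the regimes sketched in Fig. 4.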
As the scale of the computational effort increases, the Nyquist cut-off moves to smaller lengths and, as a result, particles of smaller size move from one regime to another. This regime change, when described by an appropriate change in modeling approach, results in more accurate and higher fidelity computations. The advantage of an hpc LES simulation is that with increasing computational power more of the particle drag effects are resolved directly. They are not modeled. For the computations of this paper we are resolving all of the particle drag effects for all relevant particles (all particles greater than 10 microns in diameter) in the system.

Figure 5 Particle number density as computed from the 3K-core LES at the same time step in the Alstom BSF. The particles on the left are 90 micron diameter particles; those on the right, 30 micron diameter.

This distribution of Stokes numbers as a function of particle size and eddy size results in a segregation of particles in the pulverized coal boiler. Extreme computing with LES can capture this effect. Figure 5 shows volume rendered images of the particle number density for two different particle sizes at the same time step in the corner-fired Alstom boiler. The larger particles concentrate (blue color) along edges of eddies of much different size than the smaller particles, which, while still concentrating along edges of coherent structures, are much more uniformly dispersed than the larger particles. Coal particle reactions include moisture evaporation, devolatilization, heterogeneous char combustion, and associated physical changes in the particle such as diameter and density changes. Simple models typically have assumed heat-transfer-limited conditions; herein we include the effects of high mass transfer, which inhibits heat transfer and reaction. At the high particle heating rates and high reaction rates in the oxy-fired boiler, these effects are significant. Analogous to the particle drag effects, LES with extreme computing (temporal resolution smaller than milliseconds and spatial resolution on the order of 1 cm) allows us to resolve all of the heterogeneous devolatilization and char oxidation rates without having to resort to subgrid-scale models for mixing and reaction. For the devolatilization model in this study we are using an approach derived from the Chemical Percolation Devolatilization (CPD) model by Yamamoto et al.24 For char oxidation, we have employed the approach by Shaddix for oxy-coal combustion.25 Gas-phase reactions are currently modeled in Arches with two mixture fractions and one heat loss parameter, assuming equilibrium chemistry. While this initial implementation is being investigated, the eventual goal is to see if detailed but reduced chemistry models are necessary to use with the LES model in order to describe the primary objective of wall heat transfer.

Figure 6 Volume rendered image of the gas temperature field in the BSF simulation. Note that the simulation has a different temperature for each of the particles in the distribution and for the gases.

24. Kenji Yamamoto, Tomoya Murota, Teruyuki Okazaki, Masayuki Taniguchi, "Large eddy simulation of a pulverized coal jet flame ignited by a preheated gas flow," Proceedings of the Combustion Institute 33 (2011) 1771-1778.
25. Hecht, E.; Shaddix, C. R.; Molina, A., In Effect of CO2 gasification reaction on oxy-combustion of pulverized coal char, Combustion Institute, 2011; pp 1699-1706.
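To make the idea of a resolved particle-reaction source term concrete, the sketch below integrates a deliberately simple single-first-order-rate (SFOR) devolatilization step at the LES time step. This is a stand-in for illustration only, not the CPD-derived model of Yamamoto et al. actually used in the study; the rate constants, ultimate yield, and particle temperature history are all assumed.

    import numpy as np

    # SFOR devolatilization sketch: dV/dt = k(Tp) * (V_inf - V), advanced explicitly.
    A, E, R = 2.0e5, 7.4e7, 8314.0   # pre-exponential [1/s], activation energy [J/kmol] (assumed)
    V_inf = 0.45                      # ultimate volatile yield, mass fraction (assumed)

    dt = 1e-5                         # LES time step [s], as quoted in the text
    t, V = 0.0, 0.0                   # time, volatile mass fraction released
    while t < 0.05:                   # 50 ms of particle history
        Tp = 400.0 + (1600.0 - 400.0) * min(t / 0.02, 1.0)   # assumed heating ramp [K]
        k = A * np.exp(-E / (R * Tp))        # Arrhenius rate [1/s]
        dVdt = k * (V_inf - V)               # the source term handed to the DQMOM transport
        V += dVdt * dt
        t += dt
    print(f"volatiles released after {t*1e3:.0f} ms: {V:.3f} of particle mass")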
The major mode of heat transfer in the firebox of a coal-fired boiler is by radiation. The design and operation of the firing system for an oxy-combustion technology will require accurate radiative heat transfer simulations for environments that have not yet been studied (increased CO2 concentrations, higher temperatures, different radiative properties for new metal alloys, etc.). Our simulation experience shows that radiative heat transfer computations take 1/3 to 2/3 of the computational load of a boiler simulation. We have been using the Discrete Ordinates Method (DOM, a modeling method developed at LANL for neutron transport) and were one of the first to deploy this method for radiative heat transfer in CFD applications.26,27 In the multiphase scattering media of the pulverized-coal boiler there are many radiation sources (radiating gases like CO2, H2O, NO2, SO2, etc., as well as coal, char, soot and ash particles). It is a computational challenge to obtain the spectral radiation properties of all the combustion gases and the particulates as a function of concentration and temperature. We have implemented an FSK algorithm28 for gases coupled with a coal and soot radiation properties model for the coal, char, and ash mixture.29

Figure 7 Volume rendered image of the local heat flux field in the BSF simulation.

26. Jamaluddin, A.; Smith, P., Predicting radiative transfer in rectangular enclosures using the discrete ordinates method. Combustion Science and Technology 1988, 59 (4-6), 321-340.
27. Adams, B.; Smith, P., Three dimensional discrete ordinates modelling of radiative transfer in a geometrically complex furnace. Combustion Science and Technology 1993, 88 (5-6), 293-308.
28. Modest, M. F.; Zhang, H., The full-spectrum correlated-k distribution for thermal radiation from molecular gas-particulate mixtures. ASME Journal of Heat Transfer 2002, 124 (1), 30-38.
29. Simeonova, L. Numerical Implementation of Models for Radiative Properties of Molecular Gases and Particulate Media in Combustion Applications. University of Utah, 2012.
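As a flavor of the discrete-ordinates idea, the sketch below solves a deliberately minimal 1-D problem: a gray, absorbing-emitting (non-scattering) medium between two black walls, swept along Gauss-Legendre ordinates to obtain the incident radiative flux on one wall. The production simulations use 3-D DOM with FSK spectral properties; every number here (absorption coefficient, temperatures, grid) is an assumed placeholder.

    import numpy as np

    sigma = 5.670e-8                   # Stefan-Boltzmann constant [W/m^2-K^4]
    kappa = 0.3                        # gray absorption coefficient [1/m] (assumed)
    L, n = 3.0, 300                    # slab thickness [m] and number of cells
    dx = L / n
    T = np.full(n, 1700.0)             # uniform gas temperature [K] (assumed)
    Tw1 = 600.0                        # far-wall (black) temperature [K] (assumed)
    Ib = sigma * T**4 / np.pi          # blackbody intensity of the medium

    mu, w = np.polynomial.legendre.leggauss(4)   # ordinates and weights on [-1, 1]
    q0 = 0.0                           # incident radiative flux on wall at x = 0
    for m, wm in zip(mu, w):
        if m >= 0:
            continue                   # only mu < 0 directions strike the wall at x = 0
        I = sigma * Tw1**4 / np.pi     # start from the far wall's emission
        for i in range(n - 1, -1, -1): # implicit upwind march of dI/ds = kappa*(Ib - I)
            a = kappa * dx / abs(m)
            I = (I + a * Ib[i]) / (1.0 + a)
        q0 += 2.0 * np.pi * wm * abs(m) * I
    print(f"incident flux on wall x=0 ~ {q0/1e3:.0f} kW/m^2")

With these placeholder values the wall sees roughly 0.8 of the blackbody flux at the gas temperature, the expected behavior for an optical depth near one.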
Boiler Simulation Validation with Uncertainty Quantification

We have performed an analysis using both the available sets of experimental data and the sets of LES simulation data and requiring consistency between both data sets. This analysis transforms the validation process from a subjective evaluation of whether the simulation data agree with the experimental data into a quantitative consistency test. Thus, like a mass or energy balance check on experimental measurements, the consistency between the simulation and the experiment can inform the design engineer about the bounds on what might be known or not known about the particular pilot or full-scale combustion system. This study targeted a formal validation and uncertainty quantification using 50 different measurements of the heat flux, temperature and O2 concentration as the specific quantities of interest as a function of wall thermal conductivity, fuel feed rate, and coal reactivity. Our validation approach identified the region of simultaneous consistency between all experimental and simulation data, according to the consistency constraint

le <= [ym(x) - ye] <= ue,

where the defect between the quantity of interest as predicted by the model and as measured by the experiment is bounded by the measurement uncertainty. The experimental measurements were collected in the Boiler Simulator Facility (BSF), owned and operated by Alstom Power in Windsor, Connecticut. To identify the most sensitive parameters affecting these quantities of interest we drew on the collective prior knowledge of the joint University/Alstom team. An input uncertainty map (I/U map) was created containing the various numerical, modeling and scenario parameters that might impact the heat transfer, with estimated bounds or uncertainty intervals added for those parameters where prior knowledge was sufficient to provide such estimates. The parameters were ranked or prioritized based on their projected impact on temperature and wall heat flux. This I/U map represents only an initial effort to construct such a map, and would be expected to evolve as additional analysis and scrutiny are applied with each successive round of V/UQ. The highest priority parameters are the scenario boundary conditions, and the scenario parameters selected that have the greatest impact on the wall heat flux are the flow rates: the coal, oxygen and recycle flow rates. The wall thermal resistance in the main furnace was identified as having a very high impact on the heat flux observable. Model parameters with lesser impacts are postulated to be radiation and homogeneous and heterogeneous reaction parameters. Prior research30 indicated that the devolatilization rates are likely the most important model parameters. There are four different wall-insulation zones as a function of vertical height in the BSF, with the upper furnace wall being un-insulated, bare steel, as shown in Figure 8. This figure also shows the location of the three levels (labeled L1, L2, and L3) where the average plane temperature was experimentally measured for the V/UQ study. The thermal boundary conditions for the simulation specified the backside temperature of the heat transfer surface to be at 375 K, corresponding to the boiling water temperature in which the BSF sits. The wall thermal resistance (the most sensitive parameter from the I/U map) is then the ratio of the parameters Dx/kw, where Dx is the thickness of the resistance material(s) and kw is the wall thermal conductivity. For this study, the uncertainty in kw for the main combustor was considered to be the most sensitive parameter for all the quantities of interest. Table 1 lists the kw, Dx, and inner surface temperature values used for all of the cases simulated.

Table 1 Information on wall model parameters for thermal boundary conditions for the BSF simulations.

Location      | Thermal Conductivity [W/m-C (Btu/hr-ft-F)] | Wall Material          | Wall Thickness [m (in)]
Upper Furnace | 5.2 (3.0)                                  | Bare Steel             | 0.018 (0.71)
Wing Walls    | 5.2 (3.0)                                  | Bare Steel             | 0.018 (0.71)
Tube Banks    | 5.2 (3.0)                                  | Bare Steel             | 0.018 (0.71)
Arch          | 0.52 (0.3)                                 | Cerawool Blanket       | 0.053 (2.1)
Main Furnace  | 2.6-4.8 (1.5-2.5)                          | RAM 90                 | 0.065 (2.5)
Hopper        | 1.4 (0.8)                                  | H+W Tufshot and GUN 20 | 0.13 (5.2)
Panels [Note] | 0.52 (0.3)                                 | Cerawool Blanket       | 0.15 (6.1)
Note: Insulated roof explosion panels (8 in number) above the main furnace.

The simulations were then performed spanning the range of the uncertain thermal conductivity of the main furnace. A few surfaces, such as the nozzles themselves and a portion of the corner plate through which the nozzles protrude, were prescribed with adiabatic thermal boundary conditions. All of the superheater platens and the tubes in the economizer bank in the backpass were individually represented, made possible by the hpc simulation.

Figure 8 Schematic of thermal boundary conditions in the BSF. [Labels: measurement levels L1, L2, L3; zones: Upper Furnace (Bare Steel), Arch (2" Cerawool Blanket), Main Combustor, Hopper.]

30. Smith, J.D., Smith, P.J., Hill, S.C., "Parametric Sensitivity Study of a CFD-Based Coal Combustion Model," AIChE Journal, Vol. 39, No. 10, October (1993).
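A minimal sketch of the Dx/kw resistance calculation just described, using the main-furnace entries from Table 1; the inner-surface temperature assumed here is an illustrative placeholder, not a simulation result.

    # Wall thermal resistance R = dx/kw and the implied conductive flux through
    # the main-furnace refractory.  dx and the kw range are from Table 1; the
    # inner-surface temperature is an assumed illustrative value.
    dx_wall = 0.065                    # RAM 90 thickness [m] (Table 1)
    T_back = 375.0                     # backside boiling-water temperature [K] (text)
    T_inner = 1450.0                   # assumed inner-surface temperature [K]

    for kw in (2.6, 3.7, 4.8):         # W/m-K, spanning the Table 1 uncertainty range
        R = dx_wall / kw               # wall thermal resistance [m^2-K/W]
        q = (T_inner - T_back) / R     # conductive heat flux through the wall [W/m^2]
        print(f"kw = {kw:.1f} W/m-K: R = {R:.4f} m^2-K/W, q = {q/1000:.0f} kW/m^2")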
Each of the simulations performed to map out the uncertainty space was constructed with 2 cm resolution and had 17.414 million cells. Based on a preliminary investigation of the computing efficiency and associated wall-clock time for the baseline case on a supercomputer cluster, about (20.4)^3 cells per processor was selected for the simulation runs. Each computation was run on 2,008 processors (approximately 126 nodes, with each node containing 16 processors). One large case was constructed with a resolution of 1 cm, resulting in 137.216 million cells. This latter case required 9,920 processors (or 620 nodes) to run in parallel. After the code optimization analyses and several necessary initial tests, the final version of the baseline case was started in August of 2013 on the ORNL Titan machine. The typical time step in the LES simulations was about 3.0x10^-5 seconds. It took 2,930,850 CPU hours to acquire a physical time of 22 seconds of simulation results. Accordingly, the estimated simulation cost is about 2.5-3 million CPU hours for each of the V/UQ cases (to achieve the requisite 22 seconds of real time). The physical time of about 22 seconds is about 3 times longer than the boiler residence time. The initial testing showed that the evacuation time of the boiler was about 8 seconds (depending on the wall boundary conditions and the initial conditions). After each case was run, the results were time-averaged over the last 5 seconds of the simulation history, and then further post-processed for use in the V/UQ studies. To illustrate the V/UQ consistency concept, we show in Figure 9 a consistency analysis for one variable, the average temperature across an entire plane, from both the simulations and the measurements at the three levels, L1, L2, and L3 (shown in Figure 8), for the Alstom BSF. Each of the solid lines (blue, red, and green) represents a surrogate model that interpolates the values predicted by the full LES simulation, which was performed at the three values of the thermal conductivity shown in Figure 9. There is a separate surrogate model for each of the 3 planes. The dashed lines (of the same color) represent the average experimental temperature at each of the 3 planes, and the dotted lines represent uncertainty bands (+/- 30 K) on each side of the mean temperature. The consistency region (designated by the bold, black line segment) is that region of thermal conductivity (in the vicinity of 2.25 Btu/hr-ft-F) that simultaneously satisfies each of the three (average) temperature data set units, given the assigned upper and lower experimental uncertainties.

Figure 9 Consistency region for averaged (planar) experimental and simulation data for temperature (K) as a function of the uncertainty in the thermal conductivity (Btu/hr-ft-F) of the main furnace (see Table 1).
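A minimal sketch of the Figure 9 construction: one piecewise-linear surrogate per plane through LES results at three simulated conductivities, intersected with the +/- 30 K experimental bands to find the consistency region. The temperatures below are invented placeholders chosen only to make the mechanics visible, not the actual BSF data.

    import numpy as np

    kw_sim = np.array([1.5, 2.0, 2.5])                  # simulated kw [Btu/hr-ft-F]
    y_sim = {"L1": np.array([1710.0, 1660.0, 1615.0]),  # invented LES plane-average T [K]
             "L2": np.array([1650.0, 1605.0, 1565.0]),
             "L3": np.array([1560.0, 1520.0, 1480.0])}
    y_exp = {"L1": 1640.0, "L2": 1590.0, "L3": 1505.0}  # invented measured averages [K]
    band = 30.0                                          # experimental uncertainty [K]

    kw_grid = np.linspace(kw_sim[0], kw_sim[-1], 1001)
    consistent = np.ones_like(kw_grid, dtype=bool)
    for plane, ys in y_sim.items():
        surrogate = np.interp(kw_grid, kw_sim, ys)       # piecewise-linear surrogate
        consistent &= np.abs(surrogate - y_exp[plane]) <= band
    if consistent.any():
        print(f"consistency region: kw in [{kw_grid[consistent].min():.2f}, "
              f"{kw_grid[consistent].max():.2f}] Btu/hr-ft-F")
    else:
        print("no consistent kw: data set inconsistent at these bounds")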
As a guiding principle, our V/UQ is based on the scientific method. By this we mean that, first, observations must inform theory, thereby increasing confidence that the model's description of reality is accurate. Then, theory can be applied to situations where observations are lacking: this is the 'raison d'etre' for all theory. To implement this principle, we combine two disciplines: validation and uncertainty quantification. Traditional engineering validation asks physically relevant questions, but has commonly implemented ad hoc procedures such as the "eyeball norm." Statistical UQ has developed a rigorous set of techniques for asking questions about parameters for distributions. Our approach to V/UQ is to make use of the tools created by the UQ community and adapt them to answer the questions posed by engineers. Verification, in addition to the roles described above, is then used to inform the V/UQ analysis. Our specific methodology for this analysis follows the procedure of the National Institute of Statistical Sciences31 (NISS):

1. Delineate all sources of uncertainty with an Input/Uncertainty Map (I/U Map) for unknown parameters and their sensitivity. Unknown parameters include experimental/scenario parameters, modeling parameters, and numerical parameters (the resulting uncertainty of which is determined by verification). For the I/U map for the BSF we identified 40 total parameters, which we narrowed by screening analysis to the most sensitive few.
2. Define evaluation/decision criteria: these are the uncertain quantities that are ultimately of interest and most directly influence a risk analysis. For this overarching problem these are the heat flux and temperature profiles in the boiler.
3. Perform a design of experiments for the experimental data and for the simulations. Expense limits our data to the sparse regime, particularly for hpc simulations with LES.
4. Develop surrogate models to enable fast model evaluation between the sparse data points.
5. Perform analysis: compare simulation against experiment to quantify uncertainty and degree of validation.
6. Draw conclusions, provide feedback, and make decisions.

However, in large-scale engineering problems, the experiments are often expensive and crude. This results in the potential for significant bias. To deal with this, we have adopted the consistency approach of Frenklach and Packard from the University of California at Berkeley32 in place of the analysis step proposed by NISS. The consistency analysis is a mathematical statement limiting error. It is a constraint on the UQ process that requires the difference (the 'defect') between the model result, ym, and the experimental mean, ye, to be bounded by the experimental uncertainty, le and ue. If the defect is within the prescribed bounds, the model and experiment are termed "consistent" for the specified bounds. Furthermore, the model may depend on unknown parameters, x, from the I/U map with their own physically imposed bounds, alpha and beta. Certain values of the model parameters will result in inconsistency between model and experiment while other values may result in consistency; this defines a consistent set for the model parameters. When multiple experiments (indexed e) are included in a data set, E, the model parameter's consistent space is the intersection of that for the individual experiments:

le <= [ym(x) - ye] <= ue for each e in E, when alpha_i <= x_i <= beta_i for all i = 1, ..., n.

Selection of the experimental bounds, le and ue, is critical.

31. M. J. Bayarri, J.O. Berger, R. Paulo et al. Technometrics, 49 (2007) 138-154.
We prefer a preliminary Bayes analysis of errors, and we associate the values of these bounds with the Bayesian credible interval. While the primary goal of the consistency analysis is identification of a consistent model-parameter set across the entire data set, an inconsistent outcome of the analysis offers the opportunity for experimentalists and simulation scientists to sit down together and identify the source(s) of bias. However, to overcome the inconsistent result, the bias must be corrected and the procedure repeated. Once the consistent set is identified, it is used as a constraint in a full Bayesian analysis for the probability-distribution function of the model parameters. This posterior distribution function is used, in turn, for the Bayesian posterior predictive, which includes parameter uncertainty but not measurement error. Any single simulation output is not predictive by itself without demonstrating the effect of the full range of model parameter uncertainty on the results of the predictive simulation. The consistency analysis answers the engineering questions, "Do the experimental observations validate the theory?" and "Using the theory, do the experimental observations validate each other?" On the other hand, the Bayesian analysis answers the probabilistic question, "What is the state of our knowledge/uncertainty about the parameters?" Incident heat flux is available at 26 ports, and temperature data are available at 116 measurement locations on three levels (L1, L2, L3). The experimental temperature data were averaged over the entire plane at each level, so that the final set of measured data for the V/UQ analysis consisted of 29 points: 3 temperature points (one averaged temperature for each plane) and 26 heat flux data points. For the oxygen molar concentration (mole %, wet) data collection, a total of 48 measurement points were collected. Simultaneous consistency is required for each of the local temperature measurements, the average planar data, the heat flux data points and the oxygen measurement points. For a few of the samples, the probe was inserted incrementally along the traverse, and then, when the probe was being extracted, a measurement was repeated. Although these additional measurements are not ideal replicates, in the sense that they were not spaced (substantially) in time and taken by different personnel, etc., they are still useful replicate data points in our uncertainty analysis. These replicates were used to help ascertain uncertainty in the measurements for the V/UQ analysis.

32. T. Russi, A. Packard, M. Frenklach. Chem. Phys. Letters, 499 (2010) 1-8.
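A minimal sketch of one way such insertion/extraction replicate pairs could inform the experimental bounds le and ue. The oxygen readings below are invented, and the full treatment preferred in the text would be a Bayesian credible interval rather than this simple paired-difference estimate.

    import numpy as np

    # Invented insert/extract replicate pairs [mole %, wet]; the paired
    # difference has variance 2*sigma^2 for independent readings, so the
    # per-reading scatter is std(diff)/sqrt(2).
    replicates = np.array([[3.1, 3.4], [2.8, 2.6], [4.0, 4.3],
                           [3.5, 3.3], [2.9, 3.2]])
    diffs = replicates[:, 1] - replicates[:, 0]
    s = diffs.std(ddof=1) / np.sqrt(2.0)
    print(f"replicate-based standard uncertainty ~ {s:.2f} mole %")
    print(f"suggested bounds: +/- {2.0*s:.2f} mole % (roughly a 95% interval)")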
This formal validation/uncertainty-quantification produces uncertainty bounds on the observed quantities of interest (the measurements) that could not be achieved without concurrent simulation and validation. The information gain from this analysis is shown for each of the quantities of interest (heat flux, temperature, O2 concentration) in Figures 10 through 12. For the heat flux measurements in Figure 10, the simulation (blue) error bars are largely encompassed within the experimental (red) error bars, so that the consistency band (green) is closer to the central portion of the experimental error bars for most of the measurement port locations. The initial experimental uncertainty for the heat flux was estimated to be +/- 6.3x10^4 W/m^2 (or +/- 2.0x10^4 Btu/hr-ft^2), and that value was found appropriate to ensure the existence of a consistency region.

Figure 10 Validation from Alstom BSF experiments for heat flux with uncertainty before and after simultaneous consistency between heat flux, temperature and oxygen experimental data and LES simulation data. [Axes: Heat Flux [BTU/(ft2 hr)] vs. Measurement Index; series: Experiments, Simulation Range, Mean of Simulation, All Consistent Models & Params, Mean of All Consistent.]

Rather than use averaged planar temperatures in the overall consistency analysis, the next step in complexity was to increase the number of individual data set units by using all of the local temperature data (available at 104 measurement locations on levels L1, L2 and L3) instead of the averaged planar data. The incident heat flux data (available at 26 ports) were retained in the analysis. The mole % (wet) oxygen concentration data (available at 48 measurement points) were also ultimately added as additional ye. A consistency analysis was performed on all of the point-wise data (simultaneously). In order to achieve consistency using all of the temperature data, the temperature error bars had to be +/- 600 K. This degree of error was deemed unreasonable. Thus, the data set was determined to be inconsistent. Examining the experimental data, we identified several measurements that were reported to be 500 K lower than the body of the data. On Plane L1, there are 8 points near the front and rear walls that exhibit very low temperatures; this is presumably due to ambient cold air leaking through the measurement ports. These low-temperature data points are not mutually consistent with the rest of the data (planar and local temperature, heat flux and oxygen concentration) and thus were determined to be outliers / bad data. In this way a total of 93 temperature points (out of the available 104 points) were determined to be consistent.

Figure 11 Validation from Alstom BSF experiments for local temperature with uncertainty before and after simultaneous consistency between heat flux, temperature and oxygen experimental data and LES simulation data. [Axes: Local Temperature [K] vs. Measurement Index; series: Experiments, Simulation Range, Mean of Simulation, All Consistent Models & Params, Mean of All Consistent.]

The V/UQ analysis made it possible to identify a few oxygen data points which were inconsistent with all measurements and simulations. On Plane L1 (between the windbox and lower OFA nozzles), 11 measurement points exhibit high oxygen levels (e.g., 3.5 to 5.5 mole % (wet)) in a region near the wall, which are inconsistent. It is uncertain where the high oxygen came from (at least in that particular region of the plane). Speculative reasons include: (a) CFS jets from a lower elevation; (b) ambient air leakage coming in with the probe; (c) the actual char burnout rate may be lower than in the simulations, producing excess oxygen locally; (d) unburned particles may be in the core, whereas the CFS gas is around the periphery of the boiler; etc.
The 11 measurement points were labelled temporarily inconsistent, until more of the parameter space can be studied in the future. In other words, since the thermal conductivity is the only active variable in this study, other active variables may also have a significant impact on the oxygen at L1. The data deemed to be inconsistent in the present analysis may be found to be mutually consistent when the analysis is extended to include other active variables.

Figure 12 Validation from Alstom BSF experiments for local oxygen concentration with uncertainty before and after simultaneous consistency between heat flux, temperature and oxygen experimental data and LES simulation data. [Axes: Oxygen [%] vs. Measurement Index; series: Experiments, Simulation Range, Mean of Simulation, All Consistent Models & Params, Mean of All Consistent.]

Conclusions

Extreme computing, or high performance computing (hpc), of a pulverized oxy-coal boiler has been shown to allow the simulation engineer to reduce the level of model approximations that must be made to perform an overall simulation. Thus, hpc resolves more of the physics and chemistry in the coal combustion boiler. Much of the turbulence is directly computed, reducing the requirements for approximate turbulence models. Many of the reactions (all of the slow reactions) are resolved directly (including all of the particle devolatilization and heterogeneous oxidation reactions), reducing the reaction or combustion modeling that is needed for the simulation. All of the particle-gas momentum exchange and drag interactions are resolved directly, thus reducing the modeling required for these complex multiphase flow interactions. Much of the thermal exchange within the boiler, including the different particle and gas temperatures and the radiation exchange within the boiler, is resolved directly as well. This increased modeling fidelity provides increased confidence in the ability to predict thermal performance of a power boiler for the purpose of design, deployment or retrofit for oxy-combustion applications. Extreme computing not only resolves more chemistry and physics, but also resolves more spatial and temporal scales. Herein we have demonstrated simulations of the Alstom BSF using ~3000 processors. This level of extreme computing allowed for spatial resolution of 1 to 2 cm and temporal resolution of ~10^-6 seconds. Additionally, extreme computing allows for multiple simulations to be performed to understand the operation of a single boiler design or operating configuration. This allows for the exploration of the uncertainty in the operating and modeling parameter space to define a region of uncertainty in both the experiment and simulation space. A formal process of identifying the consistent subspace of all uncertainties in experiments and simulations provides the engineer with quantified information for validation. This extreme computing and validation approach provides a predictivity path for accelerating deployment of new low carbon technologies. |
ARK | ark:/87278/s6n044rf |
Setname | uu_afrc |
ID | 14393 |
Reference URL | https://collections.lib.utah.edu/ark:/87278/s6n044rf |