Title | Modeling Combustion Efficiency for Industrial Flares: Implementation of RCCE in LES for a better CE prediction |
Creator | Thornock, J.N. |
Contributor | Smith, Philip, and Smith, Sean |
Date | 2013-09-25 |
Spatial Coverage | Kauai, Hawaii |
Subject | AFRC 2013 Industrial Combustion Symposium |
Description | Paper from the AFRC 2013 conference titled Modeling Combustion Efficiency for Industrial Flares: Implementation of RCCE in LES for a better CE prediction by Jeremy Thornock |
Abstract | In this paper, we demonstrate the use of a rate-controlled constrained equilibrium (RCCE) concept with large eddy simulation (LES) for performing predictions of combustion efficiency (CE) for industrial flares. Given that LES explicitly represents a wide range of turbulent time and length scales, resolving roughly 80% of the turbulent kinetic energy, we implement the RCCE concept using a slow rate-limiting step and a subsequent fast step. The slow step is LES resolved and is modeled using the grid-resolved mass fractions combined with the Westbrook/Dryer global reaction rate. For the fast step we use chemical equilibrium consistent with RCCE. The sub-grid scale model informs the resolved LES scale process, tightly coupling the combustion across all scales. In this manner, the model takes advantage of the resolved scale information to better represent the combustion processes occurring across a range of time scales. The model has several desirable features, including accounting for flame extinction and ignition consistent with the so-called flammability nose plots for a given fuel. Such features are desirable for representing the important physical processes affecting the overall CE. We demonstrate our RCCE implementation by performing CE predictions through a constrained optimization process that pairs experimentally measured data with output from the LES simulation tool. The analysis examines a multi-dimensional space where model and scenario input parameters to the LES model are varied within prescribed error bounds to produce a bounded output across several measurable quantities. These bounded data are then compared to experimentally observed data. Regions of data consistency provide a bounded parameter space. The bounded parameter space can then be used to make predictions of flares where no data are available and provide an estimate of the uncertainty of the prediction. |
Type | Event |
Format | application/pdf |
Rights | No copyright issues |
OCR Text | Modeling Combustion Efficiency for Industrial Flares: Implementation of RCCE in LES for a better CE prediction
J.N. Thornock, P.J. Smith, S. Smith
September 4, 2013

Abstract
In this paper, we demonstrate the use of a rate-controlled constrained equilibrium (RCCE) concept with large eddy simulation (LES) for performing predictions of combustion efficiency (CE) for industrial flares. Given that LES explicitly represents a wide range of turbulent time and length scales, resolving roughly 80% of the turbulent kinetic energy, we implement the RCCE concept using a slow rate-limiting step and a subsequent fast step. The slow step is LES resolved and is modeled using the grid-resolved mass fractions combined with the Westbrook/Dryer global reaction rate. For the fast step we use chemical equilibrium consistent with RCCE. The sub-grid scale model informs the resolved LES scale process, tightly coupling the combustion across all scales. In this manner, the model takes advantage of the resolved scale information to better represent the combustion processes occurring across a range of time scales. The model has several desirable features, including accounting for flame extinction and ignition consistent with the so-called flammability nose plots for a given fuel. Such features are desirable for representing the important physical processes affecting the overall CE. We demonstrate our RCCE implementation by performing CE predictions through a constrained optimization process that pairs experimentally measured data with output from the LES simulation tool. The analysis examines a multi-dimensional space where model and scenario input parameters to the LES model are varied within prescribed error bounds to produce a bounded output across several measurable quantities. These bounded data are then compared to experimentally observed data. Regions of data consistency provide a bounded parameter space. The bounded parameter space can then be used to make predictions of flares where no data are available and provide an estimate of the uncertainty of the prediction.

1 Introduction
Flaring of waste gases is common throughout industry. Despite their prevalence, however, the performance of the flaring operation (combustion process and subsequent hydrocarbon destruction) can be difficult to assess given the modes in which flares typically operate. For example, flares are typically employed in the open atmosphere and are subject to the prevailing atmospheric conditions. When subject to extreme atmospheric conditions, the flare performance can be degraded and the flare destruction efficiency (i.e., the combustion of the unwanted gas stream) seriously affected. Flare performance can also suffer from human intervention through poor set points in day-to-day operation. For example, the use of steam or air assist in gas flaring is intended to encourage turbulent mixing and thus enhanced combustion. It has recently been shown [1], however, that flares can easily be over-assisted, having a negative effect on flare performance, especially at low fuel flow rates.
Determining the actual performance of a flare is difficult. This is simply due to the mode in which flares operate (high in the air with poor accessibility) and the fact that combustion in the open atmosphere presents many obvious challenges for obtaining reliable measurements.
Recent work on obtaining optical measurements shows some promise; however, many challenges remain, most notably performing rigorous validation and uncertainty quantification on these measurements. One drawback of optical measurements is that they measure quantities along a line of sight. Obtaining true combustion efficiency requires more than line-integrated measurements. Measurement of combustion efficiency requires knowledge of the mass flow of the system and an integration over a volume bounding the flare and the downstream mixture of combustion products mixed with excess air. That is, a full mass balance must be performed to get the real combustion performance of any flare.
Beyond pure measurements, simulation is an attractive solution for evaluating flare performance. Simulations are relatively cheap to perform and can be run with various atmospheric and operating scenarios. Assessing the accuracy of simulation predictions is paramount to using simulation as a tool for measuring flare performance and design. Much of the assessment of past attempts at modeling flare behavior has been done with so-called view-graph norm techniques. That is, plots of experimentally measured data, in the best cases with measurement error bars, are plotted along with the simulation results. The engineer is then left to make subjective decisions about the validity of the simulation result. It is in this procedure that modeling and simulation typically come up short. The subsequent use of the model for prediction, design, and scale-up is then based on this somewhat unsatisfying procedure and represents risk for decision makers where high financial risk and/or human health and welfare hang in the balance.
In this paper, we explore the use of a large eddy simulation (LES) [8] model developed at the University of Utah for performing simulations of flares. We chose LES over the industrial-standard approach (Reynolds-averaged Navier-Stokes, RANS) for one simple reason: the LES approach takes greater advantage of existing computer power to resolve/represent more time and length scales than RANS. The increased resolution of these scales allows for better computational accuracy.
We have developed in our LES code a chemistry model that takes advantage of the increased representation of the temporal scales. This model is best described as a rate-controlled constrained equilibrium (RCCE) approach [6]. Modeling combustion is a difficult task. On one hand, direct representation of the complete reaction set would represent the most accurate solution. This approach, however, is difficult for LES because the reaction set is typically composed of several tightly coupled reactions with rates that span a range of timescales and are often not well defined. The numerical stiffness present in such a system leads to an inability to obtain a converged result. The other extreme in combustion modeling is to use chemical equilibrium, wherein the entire combustion state-space is represented through the use of a single parameter: the fuel mixture fraction. This approach, while straightforward and cheap, assumes instantaneous chemical equilibrium of the gas mixtures and thus assumes no dependence on the resolved LES time scale. The use of full chemical equilibrium is not appropriate in cases where slower time scales have a significant impact on the combustion chemistry, as in the case of industrial flares.
For example, use of equilibrium chemistry will always produce a combustion efficiency of 100%, regardless of the modeled atmospheric conditions. The RCCE model attempts to inject a time-scale dependence by representing a subset of the full chemical mechanism directly and then using the result to constrain the resulting equilibrium concentration. In other words, the resolved rates on the CFD mesh are the rate-limiting steps for the combustion. All other combustion reactions proceed instantaneously to chemical equilibrium as constrained by the products of the slow reactions.
Our implementation of RCCE envisions two distinct chemistry steps: 1) the progress of a set of slow reactions occurring at timescales that are as slow as or slower than the resolved LES timescale, and 2) an instantaneous equilibration of the chemistry occurring at timescales much faster than the resolved LES timescale. Step 1 is accomplished by solving for a set of filtered scalar species and their respective rate terms. This is the rate-controlled step. The remaining chemistry and the resultant equilibrium concentrations are then constrained by the resolved scalar concentrations. This last process is simply a minimization of Gibbs free energy subject to an element and energy balance. With this model, we are able to incorporate ignition and quenching behavior because we have implied a timescale through Step 1. Such behavior is critical in predicting the effects of flare fuel stripping, where aerodynamic effects between the flare stack and head, along with interactions with the plume of hot combustion gases, cause eddies to strip unburnt fuel away from the combustion zone. Subsequent dilution with air and cooling of the stripped fuel results in permanent combustion inefficiency. In our RCCE model, additional constraints are placed on the rate terms for the LES resolved processes. Most notably, we have included the effect of a flammability limit. Regions of the mixture that fall outside the flammability limit are not allowed to progress, thus allowing for further quenching.
In this paper, we evaluate our RCCE/LES model using measurements of flares in wind tunnels as measured by Gogolek and Hayden [5]. We employ a rigorous Bayesian methodology for assessing the data consistency region. That is, we seek to find the range of parameters that are consistent with all measured results as bounded by the measurement error. This bounded region on the simulation inputs thus implies a range on the simulation outputs. An advantage of this approach is that the analysis includes all measured data (where appropriate) simultaneously to provide error quantification, rather than considering each measurement individually. Subsequent predictive simulations can borrow these bounds to make quantified statements about the model's capability for flare efficiency assessment, design, and scale-up.

2 Approach
2.1 LES Model with RCCE
Simulations were performed using Arches [10], an in-house LES tool resulting from a 10-year partnership with the Department of Energy and the University of Utah. It is a massively parallel code that solves conserved quantities (mass, momentum, energy, and scalars) spatially and temporally in a turbulent flow field, allowing for detailed and accurate simulations of fires and flames. The code is integrated into a C++ framework called Uintah [3], which provides large-scale parallelization tools for physics components. Arches is maintained in a repository, and distribution is freely available.
The gas phase is solved in an Eulerian manner by a finite volume method with a pressure-projection approach. Favre-filtered governing equations for continuity, momentum, enthalpy, and scalar transport are solved. A second-order central difference scheme is used to calculate the convection and diffusion terms of the transport equations. The sub-grid scale (SGS) stress tensor is calculated by a dynamic Smagorinsky SGS model. For the enthalpy equation, radiation and heat exchange between the gas and the wall boundary conditions are taken into account. The radiation source term is calculated by the discrete ordinates radiation method.
The combustion chemistry is represented using a rate-controlled constrained equilibrium (RCCE) approach. We conceptualize the approach with the following expression,

$$\mathrm{C_xH_y} + \mathrm{O_2} \xrightarrow{\text{slow}} \text{intermediates} \xrightarrow{\text{fast}} \text{products}, \qquad (1)$$

where we have used $\mathrm{C_xH_y}$ to generically represent the fuel hydrocarbon. The rate processes are indicated with the respective labels. The slow process is postulated to occur at the resolved grid scale with a rate that can be computed from LES filtered quantities. The fast process is allowed to proceed to equilibrium as constrained by the amount of $\mathrm{C_xH_y}$ available from the slow process. Thus, the fast process is simply a minimization of Gibbs free energy with the usual constraints on mass and energy. The slow rate process can be modeled with a variety of user-prescribed rate processes. In this work, we have chosen to use the empirical global model of Westbrook and Dryer (WD) [11]. This model is a simple one-step mechanism that is defined for a wide set of hydrocarbons. The WD rate is computed as

$$\frac{d[\mathrm{C_xH_y}]}{dt} = -A \exp(-E/RT)\,[\mathrm{C_xH_y}]^m\,[\mathrm{O_2}]^n, \qquad (2)$$

where $A$ is the pre-exponential factor, $E$ is the activation energy, $T$ is the gas temperature, and $R$ is the universal gas constant. Suggested empirical values of the preceding variables, along with the exponents $m$ and $n$, are found in the literature.
To further constrain our model, we make use of flammability limits as described by Zabetakis [13]. That is, homogeneous fuel-oxidizer mixtures can only propagate flames from an ignition source within a somewhat narrow range of concentrations. The presence of an inert can alter the flammability region as a function of the inert concentration. The resulting flammability diagrams as reported are often described as the Zabetakis "nose plots", due to their shape resembling that of a human nose. We have incorporated the use of the flammability diagrams by allowing the slow reaction to proceed only for mixtures falling within the flammability envelope. This constraint is determined on a computational cell-by-cell basis and can be viewed as a mixing constraint on the combustion model. While we anticipate that the LES will resolve a larger portion of the scalar energy spectrum, we do recognize that the use of the flammability diagram assumes a homogeneous mixture and thus implies a perfectly mixed state at the sub-grid scale. At this time, we have not attempted to apply a presumed probability distribution function to account for any sub-grid inhomogeneity.
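To illustrate the slow step, the following minimal Python sketch evaluates a Westbrook-Dryer style rate (Eq. 2) and gates it with a simple flammability-envelope check. The default parameter values follow the ranges used in this study (Table 1), the interpretation of the "ET" entry as an activation temperature E/R is an assumption, and the single-composition envelope test is only a crude stand-in for the inert-dependent Zabetakis diagram; none of this is the Arches implementation.

```python
import math

def wd_rate(c_fuel, c_o2, T, A=2.0e9, E_over_R=2100.0, m=-0.3, n=1.3):
    """Westbrook-Dryer style global rate, Eq. (2): d[CxHy]/dt (negative = consumption).
    Defaults follow the ranges in Table 1; E_over_R is assumed to be the
    activation temperature E/R in Kelvin."""
    if c_fuel <= 0.0 or c_o2 <= 0.0:
        return 0.0  # nothing left to react
    return -A * math.exp(-E_over_R / T) * c_fuel**m * c_o2**n

def slow_step_rate(c_fuel, c_o2, T, x_fuel, lower_fl=0.01, upper_fl=0.08):
    """Slow (LES-resolved) RCCE step: the WD rate is allowed to progress only
    when the local fuel fraction lies inside the flammability envelope
    (a crude placeholder for the full Zabetakis nose-plot)."""
    if not (lower_fl <= x_fuel <= upper_fl):
        return 0.0  # outside the envelope: the mixture is treated as quenched
    return wd_rate(c_fuel, c_o2, T)
```

The fast step would then re-equilibrate the remaining composition through a constrained Gibbs free-energy minimization, which is not sketched here.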
2.2 V/UQ Through Data Consistency
As a guiding principle, our validation/uncertainty quantification (V/UQ) approach is based on the scientific method. By this we mean that, first, observations must inform theory, thereby increasing confidence that the model's description of reality is accurate. Then, theory can be applied to situations where observations are lacking. To implement this principle, we combine three disciplines: validation, uncertainty quantification, and verification. Traditional engineering validation asks physically relevant questions, but has commonly implemented ad hoc procedures such as the "eyeball" or "view-graph" norm. Statistical uncertainty quantification has developed a rigorous set of techniques for asking questions about distribution parameters. Our approach to V/UQ is to make use of the tools created by the UQ community and adapt them to answer the questions posed by engineers. Our specific methodology for this analysis follows the six-step procedure of Sacks et al. [2]:
1. Specify an input/uncertainty (I/U) map for unknown parameters and their sensitivity. Unknown parameters include experimental/scenario parameters, modeling parameters, and numerical parameters (the resulting uncertainty of which is determined by verification).
2. Define evaluation/decision criteria; these are the uncertain quantities that are ultimately of interest and most directly influence a risk analysis.
3. Perform a design of experiments (DOE) for the experimental data and for the simulations. Expense limits our data to the sparse regime, particularly for simulations at the peta- to exa-scale.
4. Develop surrogate models to enable fast model evaluation between the sparse data points.
5. Perform the analysis: compare simulation against experiment to quantify uncertainty and the degree of validation.
6. Draw conclusions, provide feedback, and make decisions.
However, in large-scale engineering problems, the experiments are often expensive and crude. This results in the potential for significant bias. The analysis step proposed by Sacks attributes all bias to the simulation. Thus, in place of the analysis step proposed by Sacks, we have adopted the consistency approach of Frenklach [4, 9, 12]. This choice is influenced by the philosophy of Oberkampf and his emphasis on validation metrics [7].
The consistency analysis is a mathematical statement limiting error. It is a constraint on the UQ process that requires the difference (the 'defect') between the model result, $y_m$, and the experimental mean, $y_e$, to be bounded by the experimental uncertainty, $l_e$ and $u_e$. When the difference is bounded, the model and experiment are termed "consistent" for the specified bounds. Furthermore, the model may depend on unknown model parameters, $x$, with their own physically imposed bounds, $\alpha$ and $\beta$. Certain values of the model parameters will result in inconsistency between model and experiment, while other values may result in consistency; this defines a consistent space for the model parameters. When multiple experiments (indexed $e$) are included in a data set, $E$, the model parameters' consistent space is the intersection of that for the individual experiments:

$$l_e \le \left[ y_m(x) - y_e \right] \le u_e \quad \text{for each } e \in E, \qquad (3)$$

when

$$\alpha_i \le x_i \le \beta_i \quad \text{for all } i = 1, \ldots, n. \qquad (4)$$

Selection of the experimental bounds, $l_e$ and $u_e$, is critical. We prefer a preliminary Bayesian analysis of errors, and we associate the values of these bounds with the Bayesian confidence interval. While the primary goal of the consistency analysis is identification of a consistent model-parameter space across the entire data set, an inconsistent outcome of the analysis offers the opportunity for experimentalists and simulation scientists to sit down together and identify the source(s) of bias. However, to overcome an inconsistent result, the bias must be corrected and the procedure repeated.
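As a concrete illustration of the consistency test in Eqs. (3)-(4), the short Python sketch below checks whether a candidate parameter vector lies in the consistent set. The data structures (plain sequences of model outputs, experimental means, and bounds) and the function name are assumptions made for illustration, not part of the analysis software used here.

```python
def is_consistent(x, alpha, beta, y_model, y_exp, lower, upper):
    """Return True if x satisfies Eq. (4) (alpha_i <= x_i <= beta_i for all i)
    and the model-vs-experiment defects satisfy Eq. (3) for every experiment e:
    lower_e <= y_model_e - y_exp_e <= upper_e.

    y_model holds the model outputs already evaluated at x, for example via the
    surrogate models described in step 4 above."""
    if not all(a <= xi <= b for xi, a, b in zip(x, alpha, beta)):
        return False  # outside the physically imposed parameter bounds
    return all(l <= ym - ye <= u
               for ym, ye, l, u in zip(y_model, y_exp, lower, upper))
```

The consistent parameter space is then the set of all x for which this test passes across the whole data set E; in practice it is mapped out by sampling x and evaluating inexpensive surrogates of the LES outputs.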
Once the consistent space is identified, it is used as a constraint in a full Bayesian analysis for the probability distribution function of the model parameters. This posterior distribution function is used, in turn, for the Bayesian posterior predictive, which includes parameter uncertainty but not measurement error. Any single simulation output is not predictive by itself without demonstrating the effect of the full range of model parameter uncertainty on the results of the predictive simulation.

3 LES Simulations
A series of LES simulations were performed on our local University of Utah cluster. Each simulation required about 55K CPU hours and typically completed in 48 hours. The geometry of the setup is representative of the Flare Test Facility experiments performed by Gogolek and Hayden [5]. This includes a main wind-tunnel section with an internal flow-direction length of 8.2 m and a cross section of 1.2 m x 1.8 m. Note that for this work, the height was fixed at 1.8 m, although the facility does include the capability to vary the ceiling height. A section of the flue-gas exit was also represented, which includes a 0.772 m x 1.2 m cross-sectional area. For all simulations, a flare pipe was placed 1 m from the wind inlet of the wind tunnel. The height of the flare pipe was set to 1.016 m. The pipe had a diameter of four inches with a mass flow rate of natural gas (91% CH4, 1.6% CO2, 3% N2, 4.4% higher hydrocarbons by mass) set to 0.005556 kg/s. The wind boundary condition at the front of the facility was set to a constant velocity across the cross section of the tunnel. Standard air, adjusted to 350 ppm of CO2, was used at the inlet. A uniform LES mesh of 1.025 cm was used across the entire geometry. Figure 1 shows the wind tunnel with temperature field results from a sample calculation.
Figure 1: Computational representation of the Flare Test Facility showing an instantaneous slice of the temperature field. In this figure, the crosswind was set to 3 m/s.
For this study, we have narrowed the V/UQ scope to include two parameters: the pre-exponential factor, A, from the WD model and the cross-wind speed. The cross-wind speed was an obvious choice given the range of the experimental data. As for the rate parameter, several scoping runs were performed to examine the sensitivity of the species concentrations to the various parameters. It was determined that the simulation output was, as expected, highly sensitive to A and ET. While all rate parameters could be included in the V/UQ study, we chose only to vary A for this demonstration. Future work will include adding more parameters to the study. Once a reasonable range for A was determined, we held all other rate parameters constant. Experience from the scoping runs helped guide adjustment of some parameters from suggested literature values. The final range for A, along with the other rate parameters, is shown in Table 1.

Parameter | Values Used | Suggested Literature Values
A | 2.0E9-5E9 | 1.3E8
ET | 2100 | 24358
x | 1 | 1
y | 4 | 4
m | -0.3 | -0.3
n | 1.3 | 1.3
lower_fl | 0.01 | 0.025
upper_fl | 0.08 | 0.08
Table 1: Westbrook-Dryer rate parameters and flammability limits used in this study.

Volume-rendered images at a fixed time of the temperature, volume fraction of fuel, and local stripping factor are shown in Figure 2. The local stripping factor is defined as the ratio of the unreacted fuel to the total amount of fuel available locally if no reaction had occurred. The stripping factor can help identify where local combustion inefficiencies are occurring. Note that a weighting function was applied over the scales to highlight various features or make the image clearer. The weighting function is shown to the left of the color bar and is illustrated by the grey curve. These plots are shown mostly to give a sense of the range of scales captured by the LES and to illustrate the type of information that might be computed from the simulations.
Figure 2: Volume-rendered images of (a) the temperature, (b) the volume fraction of fuel, and (c) the local stripping factor for a 3 m/s crosswind case.
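The stripping-factor diagnostic defined above can be written compactly; a minimal sketch follows, assuming the unreacted fuel mass fraction and the fuel mass fraction that would exist locally without reaction (for example, reconstructed from the local mixture fraction) are available for each cell. The function and variable names are illustrative, not the Arches implementation.

```python
def local_stripping_factor(y_fuel_unreacted, y_fuel_if_unreacted, eps=1e-12):
    """Ratio of the unreacted fuel to the total fuel that would be present
    locally if no reaction had occurred.  Values near 1 mark fluid whose fuel
    has been stripped from the combustion zone without burning; values near 0
    mark fluid whose fuel has been consumed."""
    if y_fuel_if_unreacted < eps:
        return 0.0  # no fuel would be present here in any case
    return y_fuel_unreacted / y_fuel_if_unreacted
```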
In order to compare to the measured experimental results, values of CO2, O2, and CH4, along with density and velocity, were extracted along a one-dimensional line horizontally across the flue stack exit and temporally and spatially averaged. We will refer to this as the instrument model. The intent of the instrument model was to represent the Flare Test Facility species sampling probe as the experimental data were measured. After an apparent steady state was reached in the LES simulation, the instrument model began to sample over a specified time interval. After sufficient data had been collected, the single-point measurement of any species was computed as

$$\langle \phi \rangle = \frac{\overline{\widetilde{\dot{m}\,\phi}}}{\overline{\widetilde{\dot{m}}}}, \qquad (5)$$

where the tilde represents a time-averaging procedure and the bar represents a spatial mean over the line.
We have discovered that the results appear to be very sensitive to the instrument model, especially the exact placement of the probe. Varying the lateral position of the probe by just a few centimeters had a noticeable effect on the final values of $\langle \phi \rangle$. Given this behavior, it is possible to include the lateral position (and potentially other unknowns regarding the actual probe position) in the V/UQ study. This inclusion of the instrument model could then deduce the actual placement of the probe, given that a consistent data set is discovered. However, at the time this paper was delivered for the conference proceedings, not all LES data had been collected. Thus, we demonstrate the results of two of the LES simulations as they compare to the actual measured data in Figure 3, using one placement of the probe for two simulations at a crosswind speed of 3 m/s. Given the lack of LES data at this time, we are unable to present the final V/UQ analysis, but we anticipate being able to present the work in the oral presentation. The authors of this paper welcome any follow-up inquiry into the V/UQ study.
Figure 3: Preliminary LES results of species concentrations compared to the experimentally measured values. Experimental data are shown with red circles, while the LES results are shown with blue squares.
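A minimal sketch of the instrument-model averaging in Eq. (5) is given below, assuming the sampled species value, density, and velocity along the probe line are stored as time-by-position arrays; the array layout and names are assumptions for illustration, not the Arches post-processing code.

```python
import numpy as np

def instrument_model_average(phi, rho, vel):
    """Mass-flux-weighted probe average in the spirit of Eq. (5).

    phi, rho, vel : arrays of shape (n_time, n_points) sampled along the
    one-dimensional probe line across the flue exit.  The tilde (time average)
    is applied first, then the bar (spatial mean over the line)."""
    mdot = rho * vel                        # local mass flux at each sample
    num = (mdot * phi).mean(axis=0).mean()  # bar of tilde(mdot * phi)
    den = mdot.mean(axis=0).mean()          # bar of tilde(mdot)
    return num / den
```

With such a routine, shifting the probe laterally simply means re-extracting phi, rho, and vel along a different set of points, which is how the probe-placement sensitivity noted above could be folded into the V/UQ study.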
4 Acknowledgements
This research was sponsored by the National Nuclear Security Administration under the Accelerating Development of Retrofittable CO2 Capture Technologies through Predictivity program through DOE Cooperative Agreement DE-NA0000740 and by an STTR grant, Sensors and Wireless Networks for Industry Applications, Spectral Sciences Inc. (SSI), DE-SC0003373.

References
[1] D.T. Allen. TCEQ 2010 flare study. Technical report, Texas Commission on Environmental Quality, 2010.
[2] M. J. Bayarri, J. O. Berger, D. Higdon, M. C. Kennedy, A. Kottas, R. Paulo, J. Sacks, J. A. Cafeo, J. Cavendish, C. H. Lin, and J. Tu. A framework for validation of computer models. Technical report, 2002.
[3] M. Berzins, J. Luitjens, Q. Meng, T. Harman, C.A. Wight, and J.R. Peterson. Uintah - a scalable framework for hazard analysis. In TG '10: Proceedings of the 2010 TeraGrid Conference, New York, NY, USA, 2010. ACM.
[4] R. Feely, P. Seiler, A. Packard, and M. Frenklach. Consistency of a reaction dataset. Journal of Physical Chemistry A, 108:9573-9583, 2004.
[5] P.E.G. Gogolek and A.C.S. Hayden. Performance of flares in crosswind with nitrogen dilution. Journal of Canadian Petroleum Technology, 43:43-47, 2004.
[6] J.C. Keck and D. Gillespie. Rate-controlled partial-equilibrium method for treating reacting gas mixtures. Combustion and Flame, 17:237-241, 1971.
[7] William L. Oberkampf and Matthew F. Barone. Measures of agreement between computation and experiment: Validation metrics, 2006.
[8] Stephen B. Pope. Turbulent Flows. Cambridge University Press, 2000.
[9] T. Russi, A. Packard, and M. Frenklach. Uncertainty quantification: Making predictions of complex reaction systems reliable. Chemical Physics Letters, 499(1):1-8, 2010.
[10] J. Spinti, J. Thornock, E. Eddings, P. Smith, and A. Sarofim. Heat Transfer to Objects in Pool Fires, volume 20, chapter 3, pages 69-136. WIT Press, Southampton, UK, 2008.
[11] C.K. Westbrook and F.L. Dryer. Chemical kinetic modeling of hydrocarbon combustion. Combustion Science and Technology, 10:1-57, 1984.
[12] X. You, T. Russi, A. Packard, and M. Frenklach. Optimization of combustion kinetic models on a feasible set. Proceedings of the Combustion Institute, 33:509-516, 2011.
[13] M.G. Zabetakis. Flammability characteristics of combustible gases and vapors. Bulletin 627, Bureau of Mines, 1965. |
ARK | ark:/87278/s6cp0230 |
Setname | uu_afrc |
ID | 14364 |
Reference URL | https://collections.lib.utah.edu/ark:/87278/s6cp0230 |