A discontinuity in particle number density in the size range of 2.0-5.0 microns is apparent for the cold-flow measurement of the coal-water slurry shown here and in other results. At first glance, these results and the appearance of other peaks in the distribution may appear to be artifacts or errors in the measuring technique. However, the results for pulverized coal do not show these large peaks or discontinuities, and we also observe that the discontinuity diminishes at longer residence times for the coal-water slurry particles. Furthermore, mass balances are routinely obtained (discussed in a later section) and show good agreement between the optical counter measurements and vacuum sample measurements. These results, along with careful size calibration checks with latex particles, give us confidence that the counter technique is accurate.

Other features of these distributions should be noted. The last data point plotted for the largest particle size represents the statistical limit for accurate particle counting; the count rate for larger particles is too low to be statistically significant. Although the frequency distribution tends to zero above this last data point, it is not possible to quantify the slope.

The number and volume frequency terms used here are defined as dN/dlog D (#/cm³) and dV/dlog D (particle volume/cm³), respectively. The use of log-frequency as the distribution variable gives the absolute number or volume density of particles per unit of flow volume, independent of instrument size range capability or instrument resolution. This is an important point, since many instruments provide a normalized frequency distribution that depends on the instrument range itself rather than on the intrinsic aerosol distribution.
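The log-frequency definitions above can be sketched numerically. The bin edges, counts, and sample volume below are hypothetical illustration values, not data from this study; the sketch shows how dN/dlog D is formed from raw per-bin counts and why it is independent of the instrument's bin widths:

```python
import numpy as np

# Hypothetical optical-counter output (illustrative values only, not data
# from the study): size-bin edges in microns and raw particle counts.
bin_edges = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # particle diameter, microns
counts    = np.array([12000, 9000, 4000, 800])       # particles counted per bin
sample_volume_cm3 = 50.0                             # assumed sampled flow volume

# Absolute number density per unit flow volume, per decade of diameter:
# dN/dlogD = (counts / volume) / (log10(D_upper) - log10(D_lower)).
# Dividing by the bin's log-width removes the dependence on bin size,
# so instruments with different resolutions give comparable curves.
dlogD = np.diff(np.log10(bin_edges))
dN_dlogD = counts / sample_volume_cm3 / dlogD        # #/cm^3 per log-decade

# Each data point is plotted at the logarithmic (geometric) mean of its bin,
# as described in the text.
bin_centers = np.sqrt(bin_edges[:-1] * bin_edges[1:])
```

A volume-frequency curve dV/dlog D follows the same construction, with counts weighted by per-particle volume before dividing by the bin's log-width.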
Normalized results thus tend to show a true but confusing increase in mean particle size with char burnout, because of the more rapid loss of small particles. Our detailed distributions show this effect directly, along with the more physically intuitive loss in particle number density with char burnout.

Each data set is based on a total particle count of 15,000-30,000, depending on the available count rate. The optimal method of processing the spectra is addressed in detail by Holve (1983). Briefly, the data processing algorithm accounts for calibration, deconvolution, and statistical counting errors. The objective of the algorithm is to process the data so that size resolution and frequency resolution are optimized; the error estimation computes frequency values only when the combined error of size and number density is minimized. One can estimate the resolution of the frequency data by noting the spacing between data points: closer spacing on the size coordinate represents greater resolution. Each data point is placed at the logarithmic mean of its size bin width and thus has an accuracy better than the resolution. In general, for each data point presented, the sizing error is of the order of 5%, and the number density error is approximately 20%. Thus, the irregular and non-monotonic distributions shown here are statistically significant. In all cases the number density error can be improved by taking a larger data set.
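The counting-statistics argument above can be illustrated with a simple Poisson model. The assumption that a bin's relative number-density error scales as 1/√n, with n the particle count in that bin, is consistent with (though not explicitly stated in) the text:

```python
import math

# Poisson counting statistics: for n independently counted particles in a
# bin, the standard deviation is sqrt(n), so the relative uncertainty of the
# bin's number density is 1/sqrt(n). This is an illustrative model of the
# statistical-counting-error argument, not the authors' exact algorithm.
def relative_count_error(n: int) -> float:
    """Relative (fractional) uncertainty of a bin with n counts."""
    return 1.0 / math.sqrt(n)

# About 25 counts in a bin gives the ~20% number-density error quoted in
# the text; quadrupling the data set halves the error, which is why a
# larger data set always improves the number density error.
low  = relative_count_error(25)    # 0.2
high = relative_count_error(100)   # 0.1
```

This scaling also explains the statistical limit at the largest particle size: when the count rate drops, n per bin becomes too small for the frequency value to be significant.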