These are the most common and simplest instruments for measuring temperature. A thermocouple consists of two dissimilar metallic elements connected to form a closed loop. The temperature difference between the two junctions is obtained by measuring the voltage generated, using a high impedance voltmeter.

If one junction (the cold junction) is maintained at a standard reference temperature (0°C), then the temperature of the other (hot) junction may be determined from the thermo-electric properties of the materials. In industrial applications it is not practical to have a cold junction, so this is replaced with a simple electronic correcting device such as a thermistor.

Different materials have different sensitivities and ranges. Below is a list of commonly used thermocouples.


+                   -                     Max Temp (°C)    Sensitivity (µV/°C) @ 500°C

Copper (Cu)         Constantan (CuNi)     ~400             46 (at 100°C)

Chromel (NiCr)      Alumel (NiAl)         ~1100            -

Pt - Rh(13%)        Platinum (Pt)         ~1600            -

Ir - Rh(40%)        Iridium (Ir)          ~2100            -

Tungsten (W)        Molybdenum (Mo)       -                -

The voltage / temperature relationship for each junction is well known, so no further calibration is required. The thermocouple wires are usually very small and fragile and the junction can suffer from corrosion, so the thermocouple is usually protected by a metal sheath and insulated internally with alumina powder. There are special cases when this is not done, such as when a high frequency response is required, since the sheathing increases the response time considerably.

Thermocouples rely on heat transfer from the combustion gases to the thermocouple bead. This is mainly by convection. The assumption (hope!) is that the bead temperature is the same as the gas temperature. However, an error is introduced due to radiation from the bead to its surroundings. A heat balance is set up such that:

convection to the bead = radiation from the bead, i.e. hc(Tgas - Tbead) = εσ(Tbead⁴ - Twall⁴)

Where hc is the convective heat transfer coefficient, ε is the thermocouple emissivity and σ is the Stefan-Boltzmann constant (σ = 5.6697×10⁻⁸ W/m²K⁴).

Conduction down the thermocouple legs can also play a part but is minimised by inserting a long length of the thermocouple into the gas.

The convective heat transfer coefficient, hc, is dependent on the local flow conditions and is expressed in terms of Nusselt number correlations.

Where Nu is the Nusselt number = hcD/K, Re is the Reynolds number = ρuD/µ and Pr is the Prandtl number = Cpµ/K (K is the gas thermal conductivity).

For combustion gases Pr = 0.7 (approx). Chedaille and Braud give the following correlations. More extensive correlations may be obtained from Holman.

For a thermocouple perpendicular to the flow:

For a thermocouple placed along the direction of flow: Nu = 0.085 Re^0.674

It is always best to augment heat transfer as much as possible, by means of high velocity, increased turbulence etc. The above correlations show that below a Reynolds number of 15000 it is better to use a thermocouple perpendicular to the flow. Above this, an axial design is better.


A thermocouple with sheath diameter 2 mm is placed axially in a flow of combustion gases; the measured temperature is 1100 K. The gas is flowing at 40 m/s, the sheath emissivity = 0.2, k = 0.0883 W/mK, gas viscosity = 54×10⁻⁶ kg/ms and density = 0.235 kg/m³. Calculate the true gas temperature if the probe is surrounded by ambient walls at 300 K.

So the Reynolds number Re = 0.235 × 40 × 0.002 / 54×10⁻⁶ = 348

hence Nu = 0.085 × Re^0.674 = 4.39

So hc = 4.39 × 0.0883 / 0.002 = 193.8 W/m²K

The heat balance results in the following relationship:

hc(Tgas - Tmeas) = εσ(Tmeas⁴ - Twall⁴),  hence  Tgas = Tmeas + εσ(Tmeas⁴ - Twall⁴)/hc

The actual gas temperature = 1185 K. Hence, the measured value is about 85 K below the actual gas temperature, an error of just over 7%. This is not a hypothetical case; it could easily happen, for example when hot combustion gases are drawn past a water-cooled probe. The problem gets worse as the temperature increases: radiation increases with temperature to the fourth power, convection only linearly.
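The arithmetic above can be checked with a short script. This is a sketch using exactly the values and the axial correlation quoted in the notes:

```python
# Radiation error of an axially mounted thermocouple (worked example above).
sigma = 5.6697e-8                   # Stefan-Boltzmann constant, W/m^2K^4

# Gas and probe properties from the example
rho, u, D = 0.235, 40.0, 0.002      # density kg/m^3, velocity m/s, sheath dia m
mu, k = 54e-6, 0.0883               # viscosity kg/ms, gas conductivity W/mK
eps = 0.2                           # sheath emissivity
T_meas, T_wall = 1100.0, 300.0      # measured bead temp and wall temp, K

Re = rho * u * D / mu               # Reynolds number (~348)
Nu = 0.085 * Re**0.674              # axial correlation from the notes
hc = Nu * k / D                     # convective coefficient, W/m^2K

# Heat balance: hc*(T_gas - T_meas) = eps*sigma*(T_meas^4 - T_wall^4)
T_gas = T_meas + eps * sigma * (T_meas**4 - T_wall**4) / hc
```

Running this reproduces the quoted figures: Re ≈ 348, Nu ≈ 4.39, hc ≈ 193.8 W/m²K and a true gas temperature of about 1185 K.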



The greatest problem with measuring gas temperatures is combating radiation loss. This cannot be ignored and should always be estimated to make sure the error is not excessive. There are a number of commonly used methods. The simplest is to reduce the size of the thermocouple, since the convective heat transfer coefficient, hc, increases as the size reduces. In the limit you can have a bare, fine wire thermocouple with the protective metal sheath removed. These can be as small as 50 µm in diameter but are VERY fragile and are normally only used in research, not in practical devices.

The next simplest method is to surround the probe with a radiation shield:

Fig 1.2 Shielded Thermocouple

The gas is free to pass through the shield and the shield and thermocouple are heated. The thermocouple bead radiates to the shield which is much hotter than the surrounding walls. Thus the radiative loss and hence temperature error is significantly reduced. The shield itself radiates to the walls.

The next level of sophistication is to use a suction pyrometer, fig 1.3. If the gas velocity around the probe is low, even shielding doesn't help matters greatly since convective heat transfer is very low. Suction pyrometers draw the gas into a probe by suction, locally accelerating it around the sensor. At the sensor the flow is locally decelerated but is not brought fully to rest. A "recovery" temperature, Tr, is measured which takes a value between the static temperature, T, and the stagnation temperature, Tt. The probe recovery factor is denoted r and defined: r = (Tr - T)/(Tt - T)


The stagnation temperature is given by Tt = T(1 + (γ-1)/2·Ma²), where γ is the ratio of specific heats of the gas (1.4 for air). The maximum possible heat transfer occurs when the flow is sonic (Ma = 1). "Sonic pyrometers" achieve this by having an extremely low suction pressure and having the sensor situated in the throat of a nozzle. Because of the constant Mach number, the recovery factor is relatively independent of variations in the gas conditions outside the probe. In this case, the stagnation temperature may be related to the measured recovery temperature thus: Tt = Tr (1 + (γ-1)/2) / (1 + r(γ-1)/2)

Individual sonic pyrometers need calibration to determine the value of r which is affected by probe design and heat transfer. Typically Tr is 2.5% below the stagnation temperature corresponding to a recovery factor of 0.85.
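As a sketch, the 2.5% figure can be reproduced from the definition of r. The relation below is reconstructed from Tr = T + r(Tt - T) together with Tt/T = 1 + (γ-1)/2 at Ma = 1; it is an assumption consistent with the quoted numbers, not taken verbatim from the notes:

```python
# Sonic pyrometer: relate measured recovery temperature Tr to stagnation
# temperature Tt at Ma = 1 (relation reconstructed, see lead-in above).
g = 1.4                      # ratio of specific heats (air)
r = 0.85                     # typical probe recovery factor

# Ratio Tr/Tt at the sonic throat
ratio = (1 + r * (g - 1) / 2) / (1 + (g - 1) / 2)
deficit = 1 - ratio          # fraction by which Tr lies below Tt

Tr = 1200.0                  # example measured recovery temperature, K
Tt = Tr / ratio              # inferred stagnation temperature, K
```

With r = 0.85 the deficit comes out at exactly 2.5%, matching the figure quoted above.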

Pyrometers are limited in their temperature range by the materials of construction. If made from Inconel (a high temperature superalloy) they can measure temperatures of up to 1200°C.



If higher temperature measurement is required, sonic pyrometers may be replaced with venturi pyrometers, fig 1.4. The hot gas at temperature, T1, is aspirated through a water cooled probe which has two venturis installed - one at the hot end, the other at the cold end. The "cold" end (max temp 1600°C) has a platinum thermocouple to measure the temperature, T2, of the cooled gas. Two differential pressure transducers also measure the pressure drops ΔP1 and ΔP2 across the two venturis. These are related to the mass flow rate via the following expression: ṁ = k1·√(ρ1·ΔP1) = k2·√(ρ2·ΔP2)

where ρ is the gas density and k1, k2 are the venturi constants.

Using the ideal gas law with p1 = p2 (so that ρ is proportional to 1/T), the gas temperature T1 may be found from the following equation: T1 = T2·k²·ΔP1/ΔP2

Where k = (k1/k2). This value is an instrument constant which is determined by calibration: air is drawn through at ambient temperature so that T1 = T2. Temperatures up to 2500°C can be measured with an accuracy of ±2%. Uncertainties of the method are discussed at length in Chedaille and Braud (chapter 2).
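A minimal sketch of the venturi pyrometer calculation, assuming the reconstructed relation T1 = T2·k²·ΔP1/ΔP2; the instrument constant and pressure drops below are hypothetical, purely for illustration:

```python
# Venturi pyrometer: infer hot-end gas temperature T1 from the two venturi
# pressure drops.  Relation reconstructed from the mass balance
# k1*sqrt(rho1*dp1) = k2*sqrt(rho2*dp2) with p1 = p2 (so rho ~ 1/T):
#     T1 = T2 * k**2 * dp1 / dp2,   k = k1/k2 (calibration constant)
# All numbers below are illustrative, not from the notes.
k = 1.05                   # instrument constant from ambient calibration (hypothetical)
T2 = 400.0                 # cooled-gas temperature at the cold venturi, K
dp1, dp2 = 900.0, 220.0    # pressure drops across hot and cold venturis, Pa

T1 = T2 * k**2 * dp1 / dp2   # inferred hot gas temperature, K
```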




An alternative method of measurement of temperature is to measure the resistance of a fine wire. As temperature increases so does resistance. Hence, if a fine wire of known diameter, length and thermal properties is placed in an electric circuit, the resistance R=V/I, may be related to temperature.

These devices may be used in exactly the same conditions as thermocouples and are made from the same materials. They also suffer the same problems of heat transfer. Thermocouples are usually used instead since the signal processing is slightly simpler; however, resistance wires are still widely used. Calibration is not necessary but the voltage / resistance characteristic must be known.
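As an illustration, a measured resistance can be converted to temperature with the common linear platinum approximation R = R0(1 + αT); the Pt100 constants and the measured values below are assumptions, not from the notes:

```python
# Resistance thermometry sketch: convert a measured R = V/I to temperature
# using the standard linear platinum approximation R = R0*(1 + alpha*T).
# R0 and alpha are the usual Pt100 values (assumed here for illustration).
R0 = 100.0        # resistance at 0 C, ohms (Pt100)
alpha = 3.85e-3   # temperature coefficient, 1/C

V, I = 2.77, 0.020          # measured volts and amps (illustrative)
R = V / I                   # measured resistance, ohms
T = (R / R0 - 1) / alpha    # inferred temperature, C
```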



These instruments are based on the well established fact that the speed of sound through an ideal gas may be expressed as follows: c = √(γRT/M), where R is the universal gas constant and M the molar mass of the gas.

If we have a sound sender and receiver installed in a furnace then the time of flight, t, of the sound may be related to the mean path sound velocity (and hence temperature) as follows: c = L/t, so T = (L/t)²·M/(γR), where L is the path length.
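A sketch of the time-of-flight calculation, assuming c = √(γRT/M) and an air-like gas; the path length and temperature are illustrative:

```python
import math

# Acoustic pyrometry sketch: recover the mean path temperature from the
# measured time of flight, using c = sqrt(gamma*R*T/M) and c = L/t.
R = 8.314         # universal gas constant, J/molK
gamma = 1.4       # ratio of specific heats (air-like gas, assumed)
M = 0.0289        # molar mass, kg/mol (air-like gas, assumed)

L = 10.0          # sender-receiver path length, m (illustrative)
T_true = 1500.0   # suppose the mean gas temperature is 1500 K

c = math.sqrt(gamma * R * T_true / M)   # speed of sound at T_true
t = L / c                               # simulated time of flight, s

T = (L / t)**2 * M / (gamma * R)        # temperature recovered from t
```

In practice the uncertainty comes from γ and M, which depend on the (variable) gas composition, as discussed below.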

The main difficulty with the technique is the discrimination between the sound originating from the sender and that originating from the combustion itself. This is achieved by the use of very high power sound sources and digital signal processing techniques. A further uncertainty is the exact composition of the gas, which is required in order to calculate γ and M. In practice these will be variable. Kleppe, however, has demonstrated that the associated errors still allow measurement to an accuracy of ±2%.

Calibration is not required although certain corrections may be required to compensate for the sound passing through the furnace refractory etc.

Two big advantages of the technique are that the probes are sited remote from the combustion and that an actual property of the gas is measured (hence it does not rely on heat transfer as thermocouples do). Also, a mean path temperature is measured rather than the temperature at a single point, which reduces the chance of an unrepresentative measurement being made. The main advantage, however, is the ability to measure a two dimensional map of temperature. If a number of sender / receiver pairs are situated around the perimeter of a furnace, the mean path temperatures of the paths may be reconstructed tomographically to determine a two dimensional map of temperature.



Another very important class of temperature measurement device relies on the optical properties of the object of interest as a function of temperature. These are termed radiation pyrometers and are dealt with separately in the "Radiative Heat Transfer" module (second semester).



Relative newcomers to the field of temperature measurement are laser diagnostic techniques. These methods rely on very high energy lasers exciting particular vibrational and rotational states of molecules. The method which has had most success to date is CARS (coherent anti-Stokes Raman spectroscopy). At present these methods do not offer a practical alternative to conventional temperature measurement techniques due to cost, complexity and portability. They are worth mentioning, however, since laser technology is advancing at a great rate.



A very widely used temperature measurement technique is thermal paint. This is used where the engineer is concerned with a particular combustion device having a 'hot spot'. These occur if the flame aerodynamics cause hot combustion products to play directly onto a metal surface, substantially reducing its life.

Thermal paints consist of acrylic complexes which go through a series of irreversible colour changes depending on exposure time and temperature. A data sheet of one paint (Thermographic C3A Yellow) is included with these notes. The combustion chamber of interest is covered with the paint and then put through a timed combustion test. At the end of the test, the combustor is covered with a coloured contour map. Each colour contour refers to a particular temperature range. The temperature range depends on the length of time of the test.

From experience, thermal paints do not work very well in the presence of combustion (even though some are marketed as such). The usual method is to paint the outside of the combustion chamber. For this reason, they are very useful in gas turbine applications but not so good for large scale refractory lined furnaces.




The measurement of mixture composition is one of the most important experimental techniques in combustion. It can be done in a variety of ways depending on the complexity and detail required. The majority of systems rely on the analysis of a sample taken from the point of interest. Therefore the first objective with any device is to ensure a representative sample is obtained. This is not as simple as it may seem.


The majority of conventional gas sampling techniques require a sample of the mixture to be withdrawn from the stream to be analysed continuously or in batches at a later date. This may seem like a trivial task but it isn't. Care must be taken to ensure that the sample is quenched before the measurement is made and also care must be taken to ensure a representative sample is taken.

The gases sampled often come from very hot environments and, in certain cases, may contain products of incomplete combustion. Precautions must be taken to ensure that the chemical reactions are, as far as possible, frozen and not allowed to continue further. This is impossible for the unstable compounds of combustion such as CH or OH radicals; however, for the main gaseous species such as CO, CO2 etc. it can be done by one of two methods: cooling or pressure reduction (or both). In the first case, cooling reduces the reaction rates considerably, preventing further reaction from occurring. In the second case, a large pressure reduction reduces the chance of molecular collisions, slowing reaction.

This can be adequately achieved if the sampling orifice is designed such that it has a distinct throat at the point of sampling such that the throat velocity is sonic. This results in a large pressure drop across the throat and reduces the density of the gas greatly, lowering the chance of further reaction. In normal circumstances this should be sufficient.

However, if the probe must penetrate far into the combustion system (over 0.5 m), additional cooling may be required to prevent the reaction continuing in the tube. In certain circumstances, the coolant must be some form of heated oil, as a water based system would lower the temperature of the gas below the dew point, condensing water and possibly some other gases of interest (especially the acid gases SOx, NOx, HCl etc.). Sampling lines after the probe are normally heated (150°C) to prevent this from occurring downstream.

It is important to note that no method other than extremely rapid sampling and analysis may be used to prevent the gaseous compounds finding a new equilibrium.

Once the sample is taken, further conditioning may also be required downstream to remove any unwanted components in the sample. Two common operations are to remove water and particulates from the sample since they can affect the operation of certain analysers.

Particulates are removed with simple glass fibre or packed paper filter units (heated or unheated). Water may be removed using a cross flow, water cooled condenser unit if there is no interest in water soluble components, which will also be removed in the process. If soluble components are of interest then more sophisticated membrane filter units may be used, which are claimed not to suffer the same problems. Further water removal is often achieved by the use of chemical dryers such as calcium chloride, which remove the last traces of water still bound in the vapour phase.

Gas sampling is much more complex if the particulate component of the gas is that which is of interest. The difficulty arises due to the different mobilities of particles depending on size and shape. Considering fig 2.1, if the probe causes any disturbance to the flow because it is accelerating or decelerating it, the gas (and particles) must change course in order to enter the probe. If the sampling velocity is higher than the gas velocity, the streamlines around the probe converge; divergence occurs if the velocity is lower. Heavy, less mobile particles will be unable to change course to match this, whereas smaller, more mobile particles behave more like molecules and easily change course.

The result is that, in either case, an unrepresentative sample of particles will be drawn. Only when the velocity of the sampling exactly matches that of the flow will the sample be representative. This is called "isokinetic" sampling. If the sampling velocity is too high, the size distribution will be weighted towards the smaller particles, if too low, it will be biased towards the larger particles. It should be noted that the condition for isokinetic sampling can interfere with the condition for quenched reactions.
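For illustration, the pump flow needed for isokinetic sampling is simply the local gas velocity times the nozzle area; the velocity and nozzle diameter below are hypothetical:

```python
import math

# Isokinetic sampling sketch: the sample flow that makes the nozzle inlet
# velocity equal the local gas velocity.  Numbers are illustrative.
u_gas = 12.0            # local gas velocity, m/s
d = 0.008               # sampling nozzle inner diameter, m

A = math.pi * d**2 / 4          # nozzle open area, m^2
Q = u_gas * A                   # required sample flow, m^3/s
Q_lpm = Q * 1000 * 60           # same flow in litres per minute
```

Sampling faster than this biases the sample towards small particles; slower biases it towards large ones, as described above.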



Most gas analysis instrumentation requires frequent calibration, usually once per day if accuracy is required. This is achieved by means of calibration gases. These have a very accurately measured ratio of the gas of interest in a balance of another gas (usually nitrogen). This is used to calibrate the higher end of the range. A zero is also usually required (air is often used for this where appropriate). The calibration procedure is as follows:

1. Sample the calibration (span) gas and check the instrument is reading the expected value. If not adjust the "span" or "range" control of the instrument until it does.

2. Sample the zero gas or air (as appropriate). Check to make sure the instrument reads 0. If not adjust the "zero" control of the instrument until it does.

3. Re-check the span as in step 1. Repeat the loop until no change is required, then lock the positions of the zero and span controls.



A summary of the most common measurement techniques for sampled gas is given below.


These are probably the cheapest and most widely used analysers for continuous sampling of mixtures in industry and research. They rely on the fact that gases only have the ability to absorb and emit radiation in discrete bands in the radiation spectrum (see the radiative heat transfer module - sem 2). Hence a gas may be detected based on its absorption in a very specific wavelength range; for example CO2 has very strong absorption peaks at wavelengths of 2.8 and 4.4 µm. Care is required since another gas may also absorb in the same range, which would result in erroneous readings. It is for this reason that CO and CO2 analysers cannot operate in the presence of water and must have a dry sample.

The principle of the analyser is given in figure 2.2. The radiation emitted from an infra-red source, filtered if necessary, traverses the measurement chamber which contains the gas to be analysed, and undergoes absorption there according to the wavelength characteristics of the gas. A second chamber, which contains a non absorbing reference gas, is placed in parallel to the first.

At the opposite end a sensing cell receives the transmitted radiation from both chambers. This consists of a cell filled with the gas of interest but split in the middle by a flexible membrane. The gas absorbs the incoming radiation, heats up and expands. However, the radiation on the sample side has already been partially absorbed so the gas on that side of the detector does not heat and expand so much. The membrane therefore moves in proportion to the amount of gas in the sample and may be detected. Frequent calibration is required.

Other variants of these devices exist including those which have electronic detectors. Commercial devices have a thermostatically controlled temperature to prevent external effects and also have other features to improve accuracy such as a rotating shutter and a common IR source.

Accuracies to within a few percent of full scale are expected and measurement ranges are between 0-0.05% and 0-100%. The range depends on the size of the tubes used and hence cannot be adjusted greatly with electronics.

Gases which are commonly detected in this way are CO, CO2, CH4, NO, HCl, SO2, other hydrocarbons etc. An analyser must be bought for each component. This method is normally only preferred for CO, CO2.

A recent innovation in infra red analysis is the development of in situ analysers. Instead of measuring the absorption of IR radiation in analyser cells, it is measured across the stack of a combustion system. The individual components are discriminated by means of narrow band pass optical filtering. The problem of interference between bands (for example water) is overcome by measuring that component and compensating. These systems have the advantage of requiring no sampling and being effectively instantaneous. They are reported (Lord) to be sensitive to gas levels of 5-15 ppm due to the large path lengths across the stack.



Again, these are very cheap and widely used. They are based on the paramagnetic properties of certain molecules, the most notable of which is oxygen (and, to a lesser extent, NO). These molecules display a positive magnetic susceptibility, χ, which decreases with temperature following Curie's law χ = C/T, where C is a constant. All other gases are diamagnetic (χ < 0 and independent of T).

There are different versions but the device operates in the following way. The sample flows through a donut shaped tube with a central, heated joining pipe, fig 2.3. The joining pipe has a strong magnet which attracts the paramagnetic molecules. These are immediately displaced since they heat up and become less susceptible. The result is a so called "magnetic wind". The strength of the wind is proportional to the amount of the paramagnetic gas present. This may be detected by a velocity measurement device such as a hot wire in the flow. Frequent calibration is required.

Paramagnetic analysers have an accuracy similar to IR analysers and can be made very robust and compact. They are not sensitive to water, but it is usually advisable to remove the water since it does behave differently to other gases and can condense in the apparatus. Paramagnetic analysers are only commercially available for oxygen measurement.



FIDs are another commonly found industrial analyser, used to detect the presence of hydrocarbon species in combustion gases. They are reliable and capable of measuring a very wide range of concentrations.

They are based on the observation that, in a hydrogen flame, the degree of ionisation is a function of the concentration of hydrocarbon species. The instrument, fig 2.4, has a small flame of hydrogen-air (or hydrogen/nitrogen and air). The sample is mixed with the hydrogen upstream and burns with it. Above the flame two electrodes monitor the ionisation of the flame. The ionising current is proportional to concentration.

The instrument is unable to differentiate species but responds to all in relation to the number of carbon atoms that molecule has. Usually in combustion products there is a complex cocktail of various hydrocarbon molecules. The FID simply measures the total concentration. As a result the measurement is often expressed as equivalent methane concentration. In other words, the concentration of methane that would cause the same degree of ionisation as the sample.

The instruments are capable of a very wide range of mixture detection (gain adjusted by increasing the ionising current). Ranges on a single instrument can be from 0-4 ppm up to 0-10% with accuracy in all ranges better than 1% FSD. The analysers are not affected by water vapour in the sample. Calibration is required.



This class of detectors is again commonly found in industry and is the recommended way of measuring the oxides of nitrogen and sulphur. They are based on a phenomenon once classified as a laboratory curiosity: the chemiluminescent reaction between nitric oxide (NO) and ozone (O3).

About 10% of the nitrogen dioxide produced is in an electronically excited energy state. The transition of this state to the normal state gives rise to the emission of a photon of wavelength between 0.3 and 0.6 µm. This emission, measured with a photomultiplier, is proportional to the flow rate of nitric oxide through the system and hence forms the basis of the measurement, fig 2.5.

In practice only a very small proportion of the nitrogen dioxide molecules emit the photon, they can also lose the energy in collisions with other molecules. In particular water and carbon dioxide are very efficient at deactivating them. This can result in quite a large error in the measurement. Hence, in order to reduce the chance of collision, the reaction chamber is kept at a very low pressure with the aid of a vacuum pump.

The analyser can only detect NO, not any other oxide of nitrogen. However, it can easily be equipped with a NOx converter which converts all oxides of nitrogen to NO. The difference between the readings with and without the converter gives an indication of the ratio of the two molecules.

Because the analysers have a photomultiplier which has variable gain, extremely wide ranges are possible. Modern machines have ranges from 0-10ppm to 0-10000ppm with error less than 1%. They are also unaffected by water. As well as the calibration gas, they also require a supply of pure oxygen to make ozone. Calibration is required.



This technique is much more sophisticated than those discussed so far and has the ability to distinguish between and measure all molecules rather than just a single one or a single group.

Mass spectrometers are not generally suited to continuous industrial sampling unless there is a specific need for their particular advantages, in particular the multicomponent capability. However, some relatively cheap spectrometers are now available on the market (e.g. VG Micromass). The principle of operation is shown in fig 2.7. A very small fraction of the sampled gas is drawn into the device where it is reduced in pressure to around 10⁻⁵ torr. The molecules are then ionised with an electron source. Once charged, they are accelerated through a strong electric field into the main body of the device where there is a second, perpendicular field. This causes their path to follow a circle whose radius depends on the strength of the second field and the charge / mass ratio of the molecule.

An ion collector is situated in the main chamber at a position corresponding to one particular radius. This collector generates a signal proportional to the number of molecules collected. With the collector fixed, a progressive modification of the electric field intensity makes it possible to collect the ions corresponding to a particular charge / mass ratio and hence to discriminate between molecules based on their molecular weight. The size of the collector voltage is proportional to the concentration of that molecule.

An important problem with the technique is that a number of molecules important to the combustion engineer have the same molecular weight; for example N2, CO and C2H4 all have a molecular weight of 28. This is resolved by the fact that the process of ionising the molecules smashes them into their constituent parts. For example, fig 2.8 shows a trace for pure methane (M=16). There is a peak at 16 but also at 15 (CH3), 14 (CH2) etc. There is also the possibility of having multiply charged molecules.

Each molecule has a distinct and repeatable characteristic spectrum. With the use of computer post processors (sold with the machine), even if a mixture of molecules is present, quantitative discrimination is possible. The technique however is not infallible and subject to drift. Frequent calibration is required.

There is no clear cut sensitivity for the instrument since this depends on its quality and also on the molecule. For example, hydrogen is much more difficult to ionise than other molecules, reducing sensitivity. The manufacturer should be asked to specify the sensitivity of the machine to a particular molecule.



This technique probably allows the most complete gas analysis. Like the mass spectrometer it is capable of multicomponent mixture measurement, but unlike the mass spec it suffers none of the problems of discrimination.

The technique is not suitable for continuous measurement but requires batch samples. This makes it unsuitable for general industrial / control applications, but for research and development it is unsurpassed in its abilities.

The principle is illustrated in fig 2.9. The main component of the device is a column filled with an adsorbent solid material, or a liquid on a solid inert base. Through this, a constant stream of carrier gas (usually nitrogen or helium) passes. The adsorbent material has the property of adsorbing different gaseous species at different rates, with the result that different gases travel through the column at different speeds. A small quantity of the gas sample mixture is injected into one end of the column in the carrier gas. The gases eventually re-appear at the other end but, because they travel at different rates, each gas appears at a different retention time. The output of a chromatograph thus appears as a series of spikes at different retention times corresponding to each molecular species, fig 2.10. Each gas molecule has a specific retention time and may be discriminated on this basis.

At the exit of the column is some form of mass detection device to determine the amount of gas. Common ones are hot wires, FIDs, electrical conductivity cells, mass spectrometers etc. Each is suited to a particular range of gases. The amount of each gas can be calculated by integrating the area under the curve.
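As a sketch, the peak area can be obtained by simple trapezoidal integration of the sampled detector signal; the signal values below are invented for illustration:

```python
# Gas chromatograph sketch: the amount of a species is proportional to the
# area under its detector peak.  Trapezoidal integration of a uniformly
# sampled signal (illustrative values).
signal = [0.0, 0.2, 1.5, 4.0, 1.5, 0.2, 0.0]   # detector output, arbitrary units
dt = 0.5                                        # sampling interval, s

# Trapezoidal rule: sum of (y[i] + y[i+1])/2 * dt over each interval
area = sum((signal[i] + signal[i + 1]) / 2 * dt
           for i in range(len(signal) - 1))
```

The area is then converted to a concentration via the calibration described below.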

The adsorbent material in the column also depends on the gases of interest. Common ones are silicates, activated charcoal, silica gel and alumina. Care must be taken since some adsorbents react with certain components; for example, silicates and activated charcoal adsorb CO2 and certain hydrocarbons irreversibly. This also has the effect of limiting the life of the column.

The method still requires calibration in order to relate the size of the spikes to the gas concentration. Once calibrated it is very repeatable and accurate to better than 0.5% FSD. This is again dependent on the quality of instrument. The method offers excellent multicomponent gas analysis on a discontinuous basis.




Particles are the most publicly despised but often the most innocuous of all pollutants from combustion. The reason is simply that they are the thing that people can see in a flue if a burner is running fuel rich or off condition - black smoke. There is therefore a lot of interest in their measurement.

The size range of normal smoke is of the order 0.01 to 1 µm. This should be compared with that for pulverised coal of between 1 and 250 µm. Quite different methods of classification are therefore used.

Before considering the methods of measurement we must consider how to express particulate concentration usefully. Since particulates are polydisperse in nature (i.e. they are not of a single size but have particles with a range of sizes), they are characterised by a series of mass fractions in particular size ranges. Such experimental data is not easily interpreted; it is more convenient to have a single average measurement for easy comparison. However, the average diameter is not always appropriate since the majority of the mass of the particles is in the higher diameter ranges. On the other hand, if mobility is of interest, then the lower sizes are most important. Hence there are a number of alternative "mean" diameters. They are listed below:







Name                      Symbol     Definition

Linear mean diameter      D or D1    Arithmetic mean diameter of the particles

Area length diameter      -          Diameter of a particle having the same surface / diameter ratio as the entire cloud

Volume surface diameter   -          Diameter of a particle having the same surface area / volume ratio as the entire cloud (Sauter mean diameter)

Area diameter             -          Diameter of a particle whose surface area is equal to the surface area of all the particles in the cloud

Volume (mass) diameter    -          Diameter below or above which lies 50% of the mass of the particles
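For example, the Sauter mean (volume-surface) diameter can be computed from binned count data as D32 = Σni·di³ / Σni·di²; the counts and bin sizes below are illustrative:

```python
# Sauter mean (volume-surface) diameter of a polydisperse cloud:
#     D32 = sum(n_i * d_i^3) / sum(n_i * d_i^2)
# Bin diameters and particle counts are illustrative only.
d = [10e-6, 50e-6, 120e-6]     # bin diameters, m
n = [500, 200, 20]             # particle counts per bin

D32 = (sum(ni * di**3 for ni, di in zip(n, d))
       / sum(ni * di**2 for ni, di in zip(n, d)))
```

Note how D32 (about 72 µm here) is weighted well above the most numerous (10 µm) particles, illustrating why the choice of mean matters.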




For pulverised coal characterisation, or generally when the particles are in the larger size ranges and not prone to coagulation, sieving or centrifuging offers the simplest and cheapest method of characterisation.

A weighed sample is passed through a series of sieves or centrifugal separations to separate the different size ranges from each other. The mass fraction of each range may then be calculated by weighing. This is an extremely simple and reliable ex-situ technique.
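A minimal sketch of the bookkeeping involved; the sieve apertures and retained masses below are made-up values:

```python
# Mass fractions from a sieve analysis (illustrative data only).
# Each entry: (sieve aperture in um, mass retained on that sieve in g).
retained = [(250, 2.1), (150, 8.4), (75, 15.0), (45, 9.3), (0, 5.2)]

total = sum(m for _, m in retained)   # total sample mass recovered
for aperture, m in retained:
    frac = m / total                  # mass fraction in this size range
    print(f">= {aperture:3d} um : {100 * frac:5.1f} %")
```

Comparing the recovered total against the original weighed sample also gives a simple check on losses during the separation.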


Due to the "sticky" nature of soot and its small size, the above method is not suitable. Other in-situ methods are available but are much more complex. Industry requires a simple test of smoke from a furnace, and hence the smoke number tests were developed. The commonest are the Ringelmann and Bacharach tests.

The Ringelmann test involves the operator comparing the smoke from the stack with a chart of graded greyness. The grades run from 0 to 5: 0 corresponds to no visible soot in the plume, 5 to completely black. This method is simple but very subjective. It depends on the observer's judgement, the local atmospheric and light conditions and on the width of the plume.

The next level of sophistication is the Bacharach smoke test. In this case a metered quantity of gas is drawn through a piece of standard filter paper by a calibrated hand or electric pump. The filter turns grey due to soot. This is compared with a standard Bacharach chart, this time running from 0 to 10. The lower the number, the lower the soot concentration. This method is better than the Ringelmann method, but it is still very subjective.



An advance on both the above methods is the technique of light obscuration. If we have a source of light shining through an optically dense (turbid) region, fig 2.7, then the light absorption is described by the Beer-Lambert law:

I / Io = exp(-K fv L)

Where Io is the original light intensity, I is the transmitted light intensity, K is a constant relating to the optical properties of soot, L is the path length and fv is the soot volume fraction. This expression is true only if the particulate matter is the only species absorbing light and the particles are very small in comparison to the light wavelength. This is possible if the red wavelength range is chosen, λ ≈ 650 nm.

If this is the case and the light source is constant, the obscuration depends only on the soot content. New diode laser light sources are cheap and very stable. The devices are calibrated by measuring Io in the absence of soot. Obscuration is usually expressed as a percentage light decrease rather than as a soot volume fraction, since the latter would require further calibration. Care must be taken with these devices to ensure the windows and optical components do not become dirty with time through soot deposition; this is achieved with an air purge.
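Inverting the Beer-Lambert relation gives the soot volume fraction directly from a measured obscuration. A minimal sketch, in which the value of the optical constant K is purely illustrative:

```python
import math

# Invert the Beer-Lambert law, I = Io * exp(-K * fv * L), to recover the
# soot volume fraction from a measured obscuration. The value of K used
# below is an assumed placeholder, not a recommended soot constant.
def soot_volume_fraction(I, Io, K, L):
    """I, Io: transmitted / incident intensity; K: optical constant (1/m);
    L: path length (m). Returns the soot volume fraction fv."""
    return math.log(Io / I) / (K * L)

obscuration = 0.20                                  # 20 % light decrease
fv = soot_volume_fraction(1.0 - obscuration, 1.0, K=5.0e6, L=0.5)
print(f"fv = {fv:.3e}")
```

This also shows why the calibration measurement of Io matters: any drift in the source or dirt on the windows appears directly as a spurious change in fv.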



These techniques are also in-situ and based on an extension of the theory that governs the light obscuration techniques. The theory is based on the scattering of light by particles, the Mie theory (Kerker), and is strictly limited to spherical particles although it is generally (if dubiously) applied to particles of different shape.

The theory describes the intensity of light scattered by monodisperse, small particles from a beam of intensity, Io. The scattered light is predicted in all directions and is dependent upon angle, particle optical characteristics, particle diameter and light wavelength.

Hence, if light is measured at a number of different angles and polarisations to the original light beam, the ratios of intensity will be dependent on diameter. A commonly used instrument based on this technology (which originated at the University of Sheffield Chem Eng Dept) is the Malvern particle sizer. This instrument, fig. 2.8, measures the small angle scattering from the particles in a flow. A multi-element (dart board style) detector measures the Fraunhofer diffraction pattern of concentric rings from laser light passed through the aerosol and focused through a Fourier transform lens. The ring pattern is fitted via the theory to the particle diameter, and the best fit is output as the measured value. Modern instruments are capable of discriminating between diameters of 0.5 to 2000 µm.
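The inverse relationship between particle diameter and ring radius can be illustrated with the classical Fraunhofer result for the first dark ring, sin θ ≈ 1.22 λ / d. The wavelength and lens focal length below are assumed for illustration, not the Malvern instrument's actual optics:

```python
# Fraunhofer diffraction: the first dark ring from a particle of diameter d
# sits at sin(theta) ~ 1.22 * lambda / d, so on the detector plane (lens
# focal length f) the ring radius is roughly r = 1.22 * lambda * f / d.
# Wavelength and focal length are illustrative assumptions.
wavelength = 633e-9     # He-Ne laser (m), assumed
focal_length = 0.1      # Fourier transform lens (m), assumed

def ring_radius(d):
    """Radius of the first diffraction minimum on the detector plane (m)."""
    return 1.22 * wavelength * focal_length / d

for d_um in (1, 10, 100):
    r_mm = 1e3 * ring_radius(d_um * 1e-6)
    print(f"d = {d_um:4d} um -> first dark ring at r = {r_mm:.3f} mm")
```

Small particles throw their rings far out on the detector and large particles keep them near the axis, which is why a multi-element concentric-ring detector can separate the sizes.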

A problem with the technique is that it has no way of distinguishing between the different sizes in a polydisperse aerosol and so a size distribution is assumed. Often the Rosin Rammler distribution is used (as it is in Fluent).

v = 1 - exp[-(x/b)^q]

Where v is the total volume contained in drops of diameter less than x. The parameters b and q thus describe the particle size distribution characteristics. The fitting technique is again used to find the distribution which best fits the measured data. However, there is no method of checking how representative the chosen distribution is (apart from the goodness of fit). This is potentially dangerous.
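The cumulative form of the Rosin-Rammler distribution can be sketched as follows; the values chosen for b and q are illustrative only:

```python
import math

# Rosin-Rammler cumulative distribution: v(x) = 1 - exp(-(x/b)**q),
# where v is the volume fraction in drops of diameter less than x.
# b (size parameter) and q (spread parameter) are illustrative values.
def rosin_rammler(x, b=50.0, q=2.0):
    """Cumulative volume fraction below diameter x (same units as b)."""
    return 1.0 - math.exp(-((x / b) ** q))

for x in (10, 50, 100):
    print(f"x = {x:3d} -> v = {rosin_rammler(x):.3f}")
```

By construction v(b) = 1 - 1/e ≈ 0.632, so b fixes the overall size scale while q controls how sharply the distribution rises about it.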

The technique also suffers since it requires knowledge of the refractive index of the particles. This value for soot has been reviewed by a number of authors by different methods and there is little consensus of opinion.






Chedaille and Braud (1972) "Industrial Flames Vol 1: Measurements in Flames", Edward Arnold Publishers Ltd.

Eckbreth (1981) "Recent advances in laser diagnostics for temperature and species concentration in combustion", 18th Symp (Int) on Comb, The Comb Inst.

Eckbreth (1988) "Laser diagnostics for combustion temperature and species", Abacus Press.

Fristrom and Westenberg (1965) "Flame Structure", McGraw Hill.

Greenhalgh (1988) "Quantitative CARS spectroscopy", Chap 5 in "Advances in Non-Linear Spectroscopy" (Eds. Clark and Hester), Wiley.

Holman (1971) "Experimental Methods for Engineers", McGraw Hill.

Holman (1981) "Heat Transfer - 2nd edition", McGraw Hill.

Kerker (1969) "The Scattering of Light and Other Electromagnetic Radiation", Academic Press.

Kleppe, J.A. "Engineering Applications of Acoustics", Artech House Press, Boston, London.

Lord, H.C. (1987) "Resource Recovery / Waste Incineration Emissions Control Optimisation", Proc 6th Annual Control Engineering Conf, pp. 419-423.

Nuspl, S.P., Szmania, E.P. and Kleppe, J.A. (1989) "Acoustic Pyrometer", United States Patent No. 4,848,924, Jul. 1989.

Strahle (1993) "An Introduction to Combustion", Gordon and Breach.