25-28 June
Petroleum Training Centre
Ron Frend
Bernoulli’s Theorem 33
How pressure and velocity interact 33
Liquid Flow 35
Flow Units 35
Restriction Flow Sensors 36
Reynolds Number 41
SOME NOTES FOR THE METRIC PIPE FRICTION CHART SHOWN BELOW 43
Performance Curves 49
Thermal Conductivity 59
Insulation 62
Heat transfer coefficients and calculation 63
Heat exchangers, type and sizing 65
Steam Reboilers 69
Condensers and sub-cooling 70
Introduction to energy recovery 73
Crude Distillation 96
Catalytic Cracking 99
Introduction 99
FCC Process Configuration 100
Main Characteristics 100
Equipment in FCC 101
Feedstock & Yield 101
Conclusion 101
Catalysis 103
TABLES
TABLE 1 THERMAL CONDUCTIVITY PROPERTIES 60
TABLE 2 TYPICAL STREAM DATA 81
TABLE 3 TYPES OF LIQUID/GAS SEPARATORS 121
TABLE 4 COMPARISON OF THE DOP AND LASE 123
A Process Flow Diagram (PFD), also called a System Flow Diagram (SFD), shows the
relationships between the major components in the system. A PFD also tabulates
process design values for the components in different operating modes, typically
minimum, normal and maximum. A PFD does not show minor components, piping
systems, piping ratings or designations.
A PFD typically includes:
• Process piping
• Major equipment symbols, names and identification numbers
• Control valves and valves that affect operation of the system
• Interconnections with other systems
• Major bypass and recirculation lines
• System ratings and operational values such as minimum, normal and maximum
flow, temperature and pressure
• Composition of fluids
A PFD does not normally show:
• pipe class
• pipe line numbers
• minor bypass lines
• isolation and shutoff valves
• maintenance vents and drains
• relief and safety valves
• code class information
• seismic class information
A P&ID, by contrast, shows all piping, including the physical sequence of branches,
reducers, valves, equipment, instrumentation and control interlocks.
Thermodynamics
The First Law of Thermodynamics states:
Energy can neither be created nor destroyed, only altered in form.
For any system, energy transfer is associated with mass and energy crossing the
control boundary, external work and/or heat crossing the boundary, and the change
of stored energy within the control volume. The mass flow of fluid is associated with
the kinetic, potential, internal, and "flow" energies that affect the overall energy
balance of the system. The exchange of external work and/or heat completes the
energy balance.
The First Law of Thermodynamics is referred to as the Conservation of Energy
principle, meaning that energy can neither be created nor destroyed, but rather
transformed into various forms as the fluid within the control volume is being studied.
The energy balance spoken of here is maintained within the system being studied.
The system is a region in space (control volume) through which the fluid passes. The
various energies associated with the fluid are then observed as they cross the
boundaries of the system and the balance is made.
A system may be one of three types: isolated, closed, or open. The open system,
the most general of the three, indicates that mass, heat, and external work are
allowed to cross the control boundary. The balance is expressed in words as: all
energies into the system are equal to all energies leaving the system plus the change
in storage of energies within the system.
Remember that energy in thermodynamic systems is composed of
• kinetic energy (KE),
• potential energy (PE),
• internal energy (U), and
• flow energy (PL); as well as
• heat and work processes.
For most industrial plant applications, which most frequently use cycles, there is no
change in storage (i.e. heat exchangers do not swell while in operation).
In equation form, the balance appears as indicated in the heat balance figure below:
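The heat balance figure is summarised by the steady-flow energy equation. A standard per-unit-mass form, consistent with the energy terms listed above (written here in English units with the conversion constants g_c and J), is:

```latex
% Steady-flow energy balance for an open-system control volume,
% per unit mass of working fluid (no change in stored energy):
h_{in} + \frac{V_{in}^2}{2 g_c J} + \frac{g\, z_{in}}{g_c J} + q
  = h_{out} + \frac{V_{out}^2}{2 g_c J} + \frac{g\, z_{out}}{g_c J} + w
```

where h is enthalpy (internal plus flow energy), V velocity, z elevation, q the net heat added to the fluid and w the net work done by the fluid.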
Example 1:
Open System Control Volume
The enthalpies of steam entering and leaving a steam turbine are 1349
Btu/lbm and 1100 Btu/lbm, respectively.
The estimated heat loss is 5 Btu/lbm of steam.
The flow enters the turbine at 164 ft/sec at a point 6.5 ft above the discharge
and leaves the turbine at 262 ft/sec.
Determine the work of the turbine
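As a check on Example 1, the steady-flow energy balance can be evaluated numerically. The conversion factors (778.16 ft·lbf per Btu, g_c = 32.174) are standard English-unit constants; the script itself is an illustrative sketch, not part of the original solution.

```python
# Work of the turbine from the steady-flow energy balance, per lbm of steam.
G_C = 32.174        # lbm·ft/(lbf·s^2)
J = 778.16          # ft·lbf per Btu

h_in, h_out = 1349.0, 1100.0   # enthalpies, Btu/lbm
q_loss = 5.0                   # heat lost to surroundings, Btu/lbm
v_in, v_out = 164.0, 262.0     # velocities, ft/s
z_in, z_out = 6.5, 0.0         # elevations, ft (inlet 6.5 ft above discharge)

ke = (v_in**2 - v_out**2) / (2 * G_C * J)   # kinetic-energy term, Btu/lbm
pe = (z_in - z_out) / J                      # potential-energy term, Btu/lbm
work = (h_in - h_out) + ke + pe - q_loss     # work delivered, Btu/lbm
print(f"Turbine work ≈ {work:.1f} Btu/lbm")  # ≈ 243.2 Btu/lbm
```

The enthalpy drop dominates; the kinetic and potential terms contribute less than 1 Btu/lbm between them.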
It turns out that head is a very convenient term in the pumping business. Pressure is
not as convenient a term because the amount of pressure that the pump will deliver
depends upon the weight (specific gravity) of the liquid being pumped and the
specific gravity changes with the fluid temperature and concentration.
Each litre of liquid has weight, so we can easily calculate the kilograms per minute
being pumped. Head or height is measured in meters, so if we multiply these two
together we get kilogram meters per minute, which converts directly to work at the
rate of approximately 6,120 kgm/min = 1 kilowatt.
If you are more comfortable with metric horsepower units, you should know that
735.5 watts make one metric horsepower.
If you refer to the figure below you should get a clear picture of what is meant by
static head. Please note that we always measure from the centreline of the pump to
the highest liquid level.
To calculate head accurately we must calculate the total head on both the suction
and discharge sides of the pump. In addition to the static head we will learn that there
is a head caused by resistance in the piping, fittings and valves called friction head,
and an additional head caused by any pressure that might be acting on the liquid in
the tanks, including atmospheric pressure. This head is called "surface pressure
head".
Once we know all of these heads it gets simple. We subtract the suction head from
the discharge head and the head remaining will be the amount of head that the pump
must be able to generate at its rated flow.
As we make these calculations you must be sure that all your calculations are made
in either "meters of liquid gauge" or "meters of liquid absolute". In case you have
forgotten "absolute" means that you have added atmospheric pressure (head) to the
gauge reading.
Normally head readings are made in gauge readings and we switch to the absolute
readings only when we want to calculate the net positive suction head available
(NPSHA) to find out if our pump is going to cavitate. We use the absolute term for
these calculations because we are often calculating a vacuum or using negative
numbers.
We will begin by making some actual calculations. You will not have to look up the
friction numbers because I am going to give them to you, but you can find them in a
number of publications and these charts:
The next illustration (Figure #2) shows that the discharge head is still measured to
the liquid level, but you will note that it is now below the maximum height of the
piping.
Although the pump must deliver enough head to get up to the maximum piping height
it will not have to continue to deliver this head when the pump is running because of
the "siphon effect". There is of course a maximum siphon effect. It is derived from the
formula to convert pressure to head:
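A standard form of the pressure-to-head conversion (for a liquid of density ρ), together with the siphon limit it implies for cold water at sea level, is:

```latex
h = \frac{P}{\rho\, g}
\qquad\text{e.g.}\quad
h_{max} \approx \frac{101{,}325\ \text{Pa}}
{1000\ \text{kg/m}^3 \times 9.81\ \text{m/s}^2} \approx 10.3\ \text{m}
```

so atmospheric pressure can sustain a siphon of roughly 10.3 meters of water; denser liquids give proportionally less.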
• The suction head is negative because the liquid level in the suction tank is
below the centreline of the pump:
o hss = -2 meters
• The suction tank is open so the suction surface pressure equals atmospheric
pressure :
o hps = 0 meters gauge
In these examples you will not be calculating the suction friction head. When you
learn how, you will find that there are two ways to do it:
• You would look at the charts and add up the K factors for the various fittings
and valves in the piping. You would then multiply these K factors by the
velocity head that is shown for each of the pipe sizes and capacities. This
final number would be added to the friction loss in the piping for the total
friction head.
• Or, you can look at a chart that shows the equivalent length of pipe for each
of the fittings and add this number to the length of the piping in the system to
determine the total friction loss.
For this example, I will tell you the total friction head on the suction side of the pump
is:
• The total suction head is going to be a gauge value because atmosphere was
given as 0,
o hs = hss + hps - hfs = - 2 + 0 - 1.5 =
- 3.5 meters of liquid gauge at rated flow
Head = hd - hs
= 47 - (-3.5)
= 50.5 meters of liquid gauge at rated flow
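The arithmetic for this first example is short enough to script directly (hd = 47 meters is the discharge-side total used in the text):

```python
# Total-head calculation for the first example, values from the text.
hss = -2.0   # static suction head, m (liquid level below pump centreline)
hps = 0.0    # suction surface pressure head, m gauge (open tank)
hfs = 1.5    # suction friction head, m

hs = hss + hps - hfs   # total suction head, metres of liquid gauge
hd = 47.0              # total discharge head from the text, metres gauge

head = hd - hs         # head the pump must generate at rated flow
print(hs, head)        # -3.5 50.5
```

Note the double negative: subtracting a negative suction head adds to the total the pump must produce.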
Our next example involves a few more calculations, but you should be able to handle
them without any trouble.
If we were pumping from a vented suction tank to an open tank at the end of the
discharge piping we would not have to consider vacuum and absolute pressures. In
this example we will be pumping from a vacuum receiver that is very similar to the
hotwell we find in many condenser applications.
Again, to make the calculations you will need some pipe friction numbers that are
available from charts:
I will give you the friction numbers for the following examples.
Specifications:
• Transferring 300 m3/hr weak acid from the vacuum receiver to the storage
tank
• Specific Gravity = 0.98
• Viscosity = equal to water
• Piping = all 150 mm Schedule 40 steel pipe
• Discharge piping rises 15 meters vertically above the pump centreline and
then runs 135 meters horizontally. There is one 90° elbow in this line
• Suction piping has 1.5 meters of pipe, one gate valve, and one 90° elbow, all
of which are 150 mm in diameter.
• The minimum level in the vacuum receiver is 2 meters above the pump
centreline.
• The pressure on top of the liquid in the vacuum receiver is 500 mm of
mercury, vacuum.
Now that you have all of the necessary information we will begin by dividing the
system into two different sections using the pump as the dividing line.
• The suction side of the system shows a minimum static head of 2 meters
above suction centreline. Therefore, the static suction head is
o hss = 2 meters
• Using the first conversion formula, the suction surface pressure is:
o hps = -(500/1000) x 13.6 / 0.98 = -6.94 meters of liquid
• The suction friction head (hfs) equals the sum of all the friction losses in the
suction line. If you reference the metric pipe friction loss table you will find
that the friction loss in 150 mm pipe at 300 m3/hr is 9 meters per 100
meters of pipe.
In addition to the pipe itself, friction losses occur in fittings such as:
• Check valves
• Foot valves
• Strainers
• Sudden enlargements
• Shut-off valves
• Entrance and exit losses
Friction loss in 150 mm pipe at 300 m3/hr, from the chart, is 9 meters per 100 meters
of pipe.
The discharge friction head is the sum of the above losses, that is:
Our next example will be the same as the one we just finished except that there is an
additional 3 meters of pipe and another 90° flanged elbow in the vertical leg.
The total suction head will be the same as in the previous example. Take a look at
the figure below
Nothing has changed on the suction side of the pump so the total suction head will
remain the same:
• The static discharge head (hsd) will change from 15 meters to 12 meters
since the highest liquid surface in the discharge is now only 12 meters above
the pump centreline. This value is based on the assumption that the vertical
leg in the discharge tank is full of liquid and that as this liquid falls it will tend
to pull the liquid up and over the loop in the pipeline. This arrangement is
called a siphon leg.
• The discharge surface pressure is unchanged:
o hpd = 0 meters
• The additional 3 meters of pipe and the additional elbow will increase the
friction loss in the discharge pipe.
The friction loss in the additional elbow = 3.4 / 100 x 9 = 0.31 meters
Head = hd - hs
= 26.39 - (-5.78)
= 32.17 meters at 300 m3/hr.
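The whole second example can be reproduced in a short script. The 3.4 m equivalent length per 90° elbow comes from the chart value used in the text; the suction friction head of 0.84 m is inferred so the totals match the results quoted above, and the rounding mirrors the hand calculation.

```python
# Second worked example: vacuum receiver to storage tank, 300 m3/hr in 150 mm pipe.
FRICTION = 9.0 / 100   # metres of head per metre of pipe (9 m per 100 m, from chart)

# Suction side
hss = 2.0                                  # static suction head, m
hps = round(-(500 / 1000) * 13.6 / 0.98, 2)  # 500 mm Hg vacuum as metres of liquid (SG 0.98)
hfs = 0.84                                 # suction friction head (inferred from text), m
hs = round(hss + hps - hfs, 2)             # total suction head, m gauge

# Discharge side (siphon leg: static head 12 m; 15 + 135 + 3 = 153 m of pipe; two elbows)
hsd, hpd = 12.0, 0.0
elbow = round(3.4 * FRICTION, 2)           # 3.4 m equivalent length per 90° elbow
hfd = round(153 * FRICTION + 2 * elbow, 2)
hd = round(hsd + hpd + hfd, 2)

head = round(hd - hs, 2)
print(hd, hs, head)   # 26.39 -5.78 32.17
```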
General Concept:
The Bernoulli effect is simply a result of the conservation of energy. The work done
on a fluid (a fluid is a liquid or a gas), the pressure times the volume, is equal to the
change in kinetic energy of the fluid.
General Facts:
Where there is slow flow in a fluid, you will find increased pressure.
Where there is increased flow in a fluid, you will find decreased pressure.
In a real flow, friction plays a large role: often a large pressure drop (decrease in
pressure) is needed just to overcome friction.
This is the case in your house. Most water pipes have small diameters (large friction),
hence the need for "water pressure" - it is the energy from that pressure drop that
goes to overcoming friction.
Example
When a liquid runs freely through a pipe of constant area (B), to which three
ascension pipes (D, E, F) are connected, the static pressure will decrease along the
dashed line towards the outlet (Fig. 1). The pressure decreases as a result of friction
loss in the horizontal pipe.
Fig. 1
In (Fig.2) the area has been changed in two places, with a thinner pipe at section (G)
and a thicker pipe at section (H). The following occurs:
Section (G)
The resultant constriction causes the liquid to move at a higher speed, increasing the
dynamic pressure, with the result that the static pressure in pipe (D) falls below the
dashed line.
Section (H)
In section (H), which has a much larger area, the static pressure rises above the
dashed line, the speed of the liquid having decreased due to the larger area, with the
result that the dynamic pressure will be decreased.
Fig. 2
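The behaviour at section (G) can be illustrated numerically with Bernoulli's energy balance. The pipe sizes and flow rate below are hypothetical, and friction is neglected:

```python
# Bernoulli: static pressure drop through a constriction (friction neglected).
# Pipe diameters and flow rate are illustrative, not taken from the figure.
import math

rho = 1000.0   # water density, kg/m^3
q = 0.01       # volumetric flow rate, m^3/s

def velocity(d):
    """Mean velocity (m/s) in a pipe of inside diameter d (m)."""
    return q / (math.pi * d**2 / 4)

v_main = velocity(0.05)           # 50 mm main pipe (B)
v_constriction = velocity(0.025)  # 25 mm constriction (G): 4x the velocity

# Conservation of energy: P1 + rho*v1^2/2 = P2 + rho*v2^2/2
dp = rho * (v_constriction**2 - v_main**2) / 2   # static pressure drop, Pa
print(f"Static pressure falls by {dp/1000:.1f} kPa in the constriction")
```

Halving the diameter quadruples the velocity, so the dynamic pressure rises sixteen-fold and the static pressure in riser (D) falls correspondingly.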
Laminar and turbulent flow are the two flow regimes most commonly encountered in
liquid flow measurement operations, but there is also transitional flow.
If we want to calculate the Reynolds number, we can use the following equation:
R = (3160 x Q x G) / (D x µ)
where:
• Q = flow rate in US gallons per minute
• G = specific gravity of the liquid
• D = inside pipe diameter in inches
• µ = viscosity in centipoise
• When the Reynolds number is less than 2000, flow will be described as
laminar
• When the Reynolds number is greater than 4000, flow will be described as
turbulent
• When the Reynolds number is in the range of 2000 to 4000 the flow is
considered transitional.
Viscosity can be a major factor that affects the value of the Reynolds number.
For example, highly viscous hydraulic oils may exhibit laminar flow in most
conditions, while fluids like water will usually be turbulent.
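A small sketch of the chart formula and the regime thresholds above (assuming the usual US-customary units for this form: Q in gpm, D in inches, µ in centipoise):

```python
# Reynolds number from the chart formula: R = 3160 * Q * G / (D * mu),
# with Q in US gpm, G the specific gravity, D in inches, mu in centipoise.
def reynolds(q_gpm, sg, d_in, mu_cp):
    return 3160 * q_gpm * sg / (d_in * mu_cp)

def regime(r):
    if r < 2000:
        return "laminar"
    if r > 4000:
        return "turbulent"
    return "transitional"

# Water (SG 1.0, ~1 cP) at 100 gpm in a 2-inch line: clearly turbulent.
r_water = reynolds(100, 1.0, 2.0, 1.0)
# A viscous hydraulic oil (~200 cP, SG 0.87) in the same line: laminar.
r_oil = reynolds(100, 0.87, 2.0, 200.0)
print(regime(r_water), regime(r_oil))   # turbulent laminar
```

The two calls illustrate the point made above: viscosity alone is enough to push the same pipe and flow rate from one regime to the other.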
As the impeller churns the water (see figure above), it purges air from the casing
creating an area of low pressure, or partial vacuum, at the eye (centre) of the
impeller. The weight of the atmosphere on the external body of water pushes water
rapidly through the hose and pump casing toward the eye of the impeller.
Centrifugal force created by the rotating impeller pushes water away from the eye,
where pressure is lowest, to the vane tips where the pressure is highest. The velocity
of the rotating vanes pressurizes the water forced through the volute and discharges
it from the pump.
Water passing through the pump brings with it solids and other abrasive material that
will gradually wear down the impeller or volute. This wear can increase the distance
between the impeller and the volute resulting in decreased flows, decreased heads
and longer priming times. Periodic inspection and maintenance is necessary to keep
pumps running like new.
Typically, seals are cooled by water as it passes through the pump. If the pump is dry
or has insufficient water for priming, the mechanical seal can be damaged. Oil-
lubricated and occasionally grease-lubricated seals are available on some pumps;
these provide positive lubrication in the event that the pump is run without water. The
seal is a common wear part that should be periodically inspected.
The capacity, or amount of fluid you are pumping, varies directly with the speed ratio.
Example: 100 Gallons per minute x 2 = 200 Gallons per minute
Or in metric, 50 Cubic meters per hour x 0,5 = 25 Cubic meters per hour
The head varies by the square of the number.
Example: a 50 foot head x 4 (2²) = 200 foot head
Or in metric, a 20 meter head x 0,25 (0,5²) = 5 meter head
The horsepower required changes by the cube of the number.
Example: a 9 horsepower motor was required to drive the pump at 1750 rpm. How
much is required now that you are going to 3500 rpm?
We would get: 9 x 8 (2³) = 72 horsepower is now required.
Likewise, if a 12 kilowatt motor were required at 3000 rpm and you decreased
the speed to 1500, the new kilowatts required would be: 12 x 0,125 (0,5³) =
1,5 kilowatts for the lower rpm.
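The three affinity rules above can be collected into one helper; the values mirror the worked examples in the text.

```python
# Pump affinity laws: flow scales with the speed ratio,
# head with its square, power with its cube.
def affinity(flow, head, power, ratio):
    """Return (flow, head, power) after a speed change by `ratio`."""
    return flow * ratio, head * ratio**2, power * ratio**3

# Doubling speed (1750 -> 3500 rpm): 100 gpm, 50 ft, 9 hp pump.
print(affinity(100, 50, 9, 2.0))   # (200.0, 200.0, 72.0)
# Halving speed (3000 -> 1500 rpm): 50 m3/hr, 20 m, 12 kW pump.
print(affinity(50, 20, 12, 0.5))   # (25.0, 5.0, 1.5)
```

The cubic power law is why even a modest speed increase can overload a motor.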
In other words Butane at this temperature would not vapourize as long as I had the
above absolute heads available at the suction side of the pump.
Heat Transfer And Reaction Engineering
Heat Transfer
• Thermal conductivity
• Conduction and convection
• Insulation
• Heat transfer coefficients and calculation
• Heat exchangers, type and sizing
• Steam reboilers
• Condensers and sub-cooling
• Introduction to energy recovery
Thermal Conductivity
In physics, thermal conductivity, k, is the intensive property of a material that indicates its
ability to conduct heat.
Examples
In metals, thermal conductivity approximately tracks electrical conductivity, as the freely
moving valence electrons transfer not only electric current but also heat. However, this
correlation does not hold for all materials, as shown in the table below, where highly
electrically conductive silver is less thermally conductive than diamond, which is
an electrical insulator.
Thermal conductivity is not a simple property, and depends intimately on structure and
temperature. For instance, pure, crystalline substances also exhibit highly variable thermal
conductivities along different crystal axes. One particularly notable example is sapphire, for
which the CRC Handbook reports a thermal conductivity perpendicular to the c-axis of 2.6
W·m⁻¹·K⁻¹ at 373 K, and 6000 W·m⁻¹·K⁻¹ at 35 K for an angle of 36 degrees to the c-axis.
Air and other gases are generally good insulators, in the absence of convection. Therefore,
many insulating materials function simply by having a large number of gas-filled pockets
which prevent large-scale convection. Examples of these include polystyrene (styrofoam)
and silica aerogel.
Thermal conductivity is clearly an important quantity for construction and related fields.
However, materials used in such trades are rarely subjected to chemical purity standards.
Several construction materials' k values are listed below. These should be considered
approximate due to the uncertainties related to material definitions.
The following table is meant as a small sample of data to illustrate the thermal conductivity of
various types of substances. For more complete listings of measured k-values, see the
references.
Table 1 Thermal conductivity properties

Material          Thermal conductivity   Temperature   Notes
                  (W·m⁻¹·K⁻¹)            (K)
Diamond           1,000                  273           Type I diamond
Silver            429                    300           Highest electrical conductivity of any metal
Iron, pure        80.2                   300
Stainless steel   14                     273
Limestone         1.3
Ice               2.2                    273
Soil              0.2-1.1
Oak               0.16                   298
Rubber (92%)      0.16                   303
For general scientific use, thermal conductance is the quantity of heat that passes in unit
time through a plate of particular area and thickness when its opposite faces differ in
temperature by one degree. For a plate of thermal conductivity k, area A and thickness L this
is kA/L, measured in W·K⁻¹. This matches the relationship between electrical conductivity
(A·m⁻¹·V⁻¹) and electrical conductance (A·V⁻¹).
There is also a measure known as heat transfer coefficient: the quantity of heat that passes
in unit time through unit area of a plate of particular thickness when its opposite faces differ
in temperature by one degree. The reciprocal is thermal insulance. In summary:
The heat transfer coefficient is also known as thermal admittance, but this term has other
meanings.
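The kA/L relation above can be sketched numerically; the wall dimensions and the k value below are illustrative assumptions, not taken from the table.

```python
# Thermal conductance of a plate is kA/L (W/K); heat flow is conductance * dT.
def conductance(k, area, thickness):
    """k in W/(m·K), area in m^2, thickness in m; returns W/K."""
    return k * area / thickness

# A hypothetical 10 m^2 wall of 100 mm mineral wool (k ≈ 0.04 W/(m·K)):
c = conductance(0.04, 10.0, 0.1)   # conductance, W/K
q = c * 20.0                       # heat flow for a 20 K face-to-face difference, W
print(c, q)                        # 4.0 80.0
```

Doubling the thickness halves the conductance, which is the basic lever insulation designers work with.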
Conduction & Convection
Conduction:
In metals, the dominant method of conduction is through the movement of electrons. This
method of conduction does not operate in non-metals because they have no free
electrons (graphite is an exception). When a metal is heated, the atoms closest to the heat
source vibrate more rapidly. Free electrons collide with these atoms and gain more
kinetic energy (movement energy). The electrons therefore move around faster and
collide with other free electrons, which then gain more kinetic energy. Kinetic energy is
therefore transferred between the electrons and through the metal, from the point closest
to the heat source towards points further away. The electrons all travel very short
distances but are very fast moving, so conduction of heat through a metal happens very quickly.
In metals and in insulators, there is conduction of heat due to the vibration of the atoms.
As atoms closest to the heat source absorb heat/thermal energy, they make their
neighbouring atoms vibrate more rapidly which then in turn make their neighbouring
atoms vibrate more.
Examples of conduction:
The wire gauzes used on tripods are metal, and therefore good heat conductors.
Gauzes on cookers are also metal so that heat is conducted quickly and food is cooked
fast.
Poor thermal conductors (insulators) are used for saucepan handles so that they don't
heat up and can still be handled.
Metals are used for containers which heat liquids, e.g. pans and kettles on hobs.
Air is a poor conductor therefore materials that trap air are used for insulation in lofts and
hot water cylinders.
Convection:
Particles close to the heat source gain kinetic energy, and the fluid expands as it
heats up. The warm fluid becomes less dense than the surrounding cool fluid, so it
rises and displaces the cooler fluid. The cooler particles are more dense, so they fall
and move towards the heat source to take the place of the warm particles. They then
heat up and rise, while other particles cool down and fall.
Example of convection:
Convection is used in fridges to cool the contents. Heat is carried away, which is why
the back of a fridge is always warm.
Atmospheric winds.
Insulation
Insulation is any material used to reduce or “slow down” or “resist” the flow of energy. There
are several different types of insulators:
A material may insulate well in more than one way. Some materials, such as diamond, are
superb insulators in one way (electrical), but extremely poor insulators in another way
(thermal). A purified synthetic diamond conducts heat even better than copper, and has the
highest thermal conductivity of any known solid at room temperature. Thus it is the worst
thermal insulator known that's solid at room temperature.
Heat is the internal kinetic, vibrational energy that all materials contain (except at absolute
zero). Heat spontaneously flows from a high temperature region to a low temperature region,
and the greatest heat flow occurs through the path of least resistance.
Insulation exists in most large appliances, for example, in ovens, refrigerators, freezers, and
water heaters. In some cases, the insulation serves to prevent heat loss to the environment.
In other cases, it serves to prevent heat gain from the environment.
Heat transfer coefficients and calculation
The heat transfer coefficient is used as an empirical "fudge factor" in calculating heat
transfer in thermodynamics. The heat transfer coefficient is often calculated from the
Nusselt number (a dimensionless number). Below it is used to find the heat lost from a hot
tube to the surrounding area:
Q = h · A · ∆T
where
• Q = power input or heat lost
• h = overall heat transfer coefficient
• A = outside surface area of tubing
• ∆T = difference in temperature between tubing surface and surrounding area
There are different heat transfer correlations for different fluids, flow regimes, and
thermodynamic conditions. A common example, pertinent to many power plant
efficiency and thermal hydraulic calculations, is the Dittus-Boelter heat transfer
correlation, valid for water in a circular pipe with Reynolds numbers between 10 000 and
120 000 and Prandtl numbers between 0.7 and 120. An example is shown below where it is
used to calculate the heat transfer from a tubing wall to water:
Nu = 0.023 Re^0.8 Pr^0.4, so that h = (k / DH) Nu
where
• Pr = Prandtl number = Cp µ / k
• Re = Reynolds number = ṁ DH / (µ A)
• DH = hydraulic diameter
• ṁ = mass flow rate
• µ = water viscosity
• Cp = heat capacity at constant pressure
• k = thermal conductivity of the water
• A = cross-sectional area of flow
The heat transfer coefficient has SI units of watts per square meter-kelvin (W·m⁻²·K⁻¹).
Often it can be estimated by dividing the thermal conductivity by a length scale. Heat
transfer coefficients add inversely, like resistances, so a coefficient can be thought of as a
thermal resistance. Shown below is an addition of heat transfer coefficients where one is
estimated as a thermal conductivity divided by a length scale (the tubing thickness):
Q = A ∆T / (1/h + t/k)
where
• Q = power input
• h = heat transfer coefficient
• t = tubing thickness
• k = thermal conductivity of metal tube
• A = cross-sectional area of flow
• ∆T = difference in temperature between outer wall of tubing and sample water.
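The inverse addition of coefficients can be sketched as follows; the film coefficient and wall values below are hypothetical, chosen only to show the series-resistance arithmetic.

```python
# Heat transfer coefficients add inversely, like series resistances:
# 1/U = 1/h + t/k, and then Q = U * A * dT.
def overall_coefficient(h, t, k):
    """Convective film coefficient h (W/m^2·K) in series with a wall of
    thickness t (m) and thermal conductivity k (W/m·K)."""
    return 1.0 / (1.0 / h + t / k)

# Hypothetical values: water-side film h = 5000 W/(m^2·K),
# 2 mm steel tube wall with k = 16 W/(m·K).
u = overall_coefficient(5000.0, 0.002, 16.0)
q = u * 0.5 * 30.0   # heat flow for 0.5 m^2 of tube and a 30 K difference, W
print(f"U = {u:.0f} W/m^2.K, Q = {q:.0f} W")
```

Note that the overall coefficient is always smaller than the smallest contributing coefficient, just as a series resistance exceeds each resistor.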
Heat exchangers, type and sizing
A heat exchanger is a component that allows the transfer of heat from one fluid (liquid or
gas) to another fluid. Reasons for heat transfer include the following: 1. To heat a cooler
fluid by means of a hotter fluid 2. To reduce the temperature of a hot fluid by means of a
cooler fluid 3. To boil a liquid by means of a hotter fluid 4. To condense a gaseous fluid by
means of a cooler fluid 5. To boil a liquid while condensing a hotter gaseous fluid Regardless
of the function the heat exchanger fulfills, in order to transfer heat the fluids involved must be
at different temperatures and they must come into thermal contact. Heat can flow only from
the hotter to the cooler fluid.
In a heat exchanger there is no direct contact between the two fluids. The heat is
transferred from the hot fluid to the metal isolating the two fluids and then to the cooler fluid.
Figure 20 Tube & Shell Heat Exchanger
Plate
A plate type heat exchanger, as illustrated below, consists of plates instead of tubes to
separate the hot and cold fluids. The hot and cold fluids alternate between each of the plates.
Baffles direct the flow of fluid between plates.
Because each of the plates has a very large surface area, the plates provide each of the
fluids with an extremely large heat transfer area. A plate type heat exchanger is therefore
capable of transferring much more heat than a similarly sized tube and shell heat
exchanger.
Due to the high heat transfer efficiency of the plates, plate type heat exchangers are usually
very small when compared to a tube and shell type heat exchanger with the same heat
transfer capacity. Plate type heat exchangers are not widely used because of the inability to
reliably seal the large gaskets between each of the plates. Because of this problem, plate
type heat exchangers have only been used in small, low pressure applications such as on oil
coolers for engines. However, new improvements in gasket design and overall heat
exchanger design have allowed some large scale applications of the plate type heat
exchanger. As older facilities are upgraded or newly designed facilities are built, large plate
type heat exchangers are replacing tube and shell heat exchangers and becoming more
common.
Figure 21 Plate Type Heat Exchanger
Because heat exchangers come in so many shapes, sizes, makes, and models, they are
categorized according to common characteristics. One common characteristic that can be
used to categorize them is the direction of flow the two fluids have relative to each other. The
three categories are parallel flow, counter flow and cross flow. Parallel flow, as illustrated in
below, exists when both the tube side fluid and the shell side fluid flow in the same direction.
In this case, the two fluids enter the heat exchanger from the same end with a large
temperature difference. As the fluids transfer heat, hotter to cooler, the temperatures of the
two fluids approach each other. Note that the hottest cold-fluid temperature is always less
than the coldest hot-fluid temperature.
Counter flow, as illustrated later, exists when the two fluids flow in opposite directions. Each
of the fluids enters the heat exchanger at opposite ends. Because the cooler fluid exits the
counter flow heat exchanger at the end where the hot fluid enters the heat exchanger, the
cooler fluid will approach the inlet temperature of the hot fluid. Counter flow heat exchangers
are the most efficient of the three types. In contrast to the parallel flow heat exchanger, the
counter flow heat exchanger can have the hottest cold-fluid temperature greater than the
coldest hot-fluid temperature.
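The efficiency ranking above can be made concrete with a log-mean temperature difference (LMTD) comparison. The terminal temperatures below are hypothetical, chosen only to show that the counter-flow arrangement gives the larger mean driving force for the same four temperatures.

```python
# Log-mean temperature difference for parallel- vs counter-flow exchangers.
import math

def lmtd(dt1, dt2):
    """Log-mean of the temperature differences at the two ends of the exchanger."""
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Hot stream 150 -> 90 °C, cold stream 30 -> 70 °C (illustrative values):
parallel = lmtd(150 - 30, 90 - 70)   # both fluids enter at the same end
counter = lmtd(150 - 70, 90 - 30)    # fluids flow in opposite directions
print(f"parallel {parallel:.1f} °C, counter {counter:.1f} °C")
```

For the same duty and area, the larger LMTD of the counter-flow arrangement means a smaller exchanger, which is why it is preferred where layout allows.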
Steam Reboilers
A reboiler is a special kind of heat exchanger used to put heat into a distillation column
Steam may also be used to evapourate (or vapourise) a liquid, in a type of shell and tube
heat exchanger known as a reboiler. These are used in the petroleum industry to vapourise a
fraction of the bottom product from a distillation column. These tend to be horizontal, with
vapourisation in the shell and condensation in the tubes
Condensers and sub-cooling
The steam condenser, shown below, is a major component of the steam cycle in power
generation facilities. It is a closed space into which the steam exits the turbine and is forced
to give up its latent heat of vapourization. It is a necessary component of the steam cycle for
two reasons. One, it converts the used steam back into water for return to the steam
generator or boiler as feedwater. This lowers the operational cost of the plant by allowing the
clean and treated condensate to be reused, and it is far easier to pump a liquid than steam.
Two, it increases the cycle's efficiency by allowing the cycle to operate with the largest
possible delta-T and delta-P between the source (boiler) and the heat sink (condenser).
Because condensation is taking place, the term latent heat of condensation is used instead
of latent heat of vapourization. The steam's latent heat of condensation is passed to the
water flowing through the tubes of the condenser.
After the steam condenses, the saturated liquid continues to transfer heat to the cooling
water as it falls to the bottom of the condenser, or hotwell. This is called subcooling, and a
certain amount is desirable. A few degrees subcooling prevents condensate pump cavitation.
The difference between the saturation temperature for the existing condenser vacuum and
the temperature of the condensate is termed condensate depression. This is expressed as a
number of degrees condensate depression or degrees subcooled. Excessive condensate
depression decreases the operating efficiency of the plant because the subcooled
condensate must be reheated in the boiler, which in turn requires more heat from the reactor,
fossil fuel, or other heat source.
Figure 27 Condenser
There are different condenser designs, but the most common, at least in the large power
generation facilities, is the straight-through, single-pass condenser illustrated above. This
condenser design provides cooling water flow through straight tubes from the inlet water box
on one end, to the outlet water box on the other end. The cooling water flows once through
the condenser and is termed a single pass. The separation between the water box areas and
the steam condensing area is accomplished by a tube sheet to which the cooling water tubes
are attached. The cooling water tubes are supported within the condenser by the tube
support sheets. Condensers normally have a series of baffles that redirect the steam to
minimize direct impingement on the cooling water tubes. The bottom area of the condenser
is the hotwell. This is where the condensate collects and the condensate pump takes its
suction. If non-condensable gasses are allowed to build up in the condenser, vacuum will
decrease and the saturation temperature at which the steam will condense increases.
Non-condensable gasses also blanket the tubes of the condenser, thus reducing the heat
transfer surface area of the condenser. This surface area can also be reduced if the
condensate level is allowed to rise over the lower tubes of the condenser. A reduction in the
heat transfer surface has the same effect as a reduction in cooling water flow. If the
condenser is operating near its design capacity, a reduction in the effective surface area
results in difficulty maintaining condenser vacuum.
The temperature and flow rate of the cooling water through the condenser control the
temperature of the condensate. This in turn controls the saturation pressure (vacuum) of the
condenser. To prevent the condensate level from rising to the lower tubes of the condenser,
a hotwell level control system may be employed. Varying the flow of the condensate pumps
is one method used to accomplish hotwell level control. A level sensing network controls the
condensate pump speed or pump discharge flow control valve position. Another method
employs an overflow system that spills water from the hotwell when a high level is reached.
Condenser vacuum should be maintained as close to 29 inches Hg as practical. This allows
maximum expansion of the steam, and therefore, the maximum work. If the condenser were
perfectly air-tight (no air or non-condensable gases present in the exhaust steam), it would
be necessary only to condense the steam and remove the condensate to create and
maintain a vacuum. The sudden reduction in steam volume, as it condenses, would maintain
the vacuum. Pumping the water from the condenser as fast as it is formed would maintain
the vacuum. It is, however, impossible to prevent the entrance of air and other
non-condensable gases into the condenser. In addition, some method must exist to initially
create a vacuum in the condenser. This necessitates the use of an air ejector or
vacuum pump to establish and help maintain condenser vacuum.
Air ejectors are essentially jet pumps or eductors, as illustrated in Figure 28 below. In
operation, the jet pump has two types of fluids. They are the high pressure fluid that flows
through the nozzle, and the fluid being pumped which flows around the nozzle into the throat
of the diffuser. The high velocity fluid enters the diffuser where its molecules strike other
molecules. These molecules are in turn carried along with the high velocity fluid out of the
diffuser creating a low pressure area around the mouth of the nozzle. This process is called
entrainment. The low pressure area will draw more fluid from around the nozzle into the
throat of the diffuser. As the fluid moves down the diffuser, the increasing area converts the
velocity back to pressure. Use of steam at a pressure between 200 psi and 300 psi as the
high pressure fluid enables a single stage air ejector to draw a vacuum of about 26 inches
Hg.
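The conversion of velocity back to pressure in the diffuser can be sketched from continuity and Bernoulli's theorem (covered earlier in this course). The sketch below assumes an ideal, loss-free diffuser and uses illustrative numbers only.

```python
# Ideal (loss-free) pressure recovery in a diffuser, from continuity and
# Bernoulli's equation. Numbers are illustrative, not design values.
def diffuser_outlet_pressure(p_in, v_in, area_ratio, rho=1000.0):
    """p_in [Pa], v_in [m/s], area_ratio = A_out/A_in, rho [kg/m^3]."""
    v_out = v_in / area_ratio                        # continuity: A1*v1 = A2*v2
    return p_in + 0.5 * rho * (v_in**2 - v_out**2)   # Bernoulli

# Water at 20 m/s entering a diffuser that doubles in flow area:
p_out = diffuser_outlet_pressure(100e3, 20.0, 2.0)
print(p_out)  # 250000.0 Pa - half the velocity, much of the dynamic head recovered
```

In a real ejector diffuser, friction and mixing losses reduce the recovery below this ideal figure.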
Figure 28 Change of section - change in pressure
Normally, air ejectors consist of two suction stages. The first stage suction is located on top
of the condenser, while the second stage suction comes from the diffuser of the first stage.
The exhaust steam from the second stage must be condensed. This is normally
accomplished by an air ejector condenser that is cooled by condensate. The air ejector
condenser also preheats the condensate returning to the boiler. Two-stage air ejectors are
capable of drawing vacuums to 29 inches Hg.
A vacuum pump may be any type of motor-driven air compressor. Its suction is attached to
the condenser, and it discharges to the atmosphere. A common type uses rotating vanes in
an elliptical housing. Single-stage, rotary-vane units are used for vacuums to 28 inches Hg.
Two-stage units can draw vacuums to 29.7 inches Hg. The vacuum pump has an advantage
over the air ejector in that it requires no source of steam for its operation, and it is normally
used as the initial source of vacuum for condenser start-up.
Introduction to energy recovery
Energy recovery.
Energy recovery is the process of converting waste streams, such as used oil, into usable
energy; for example, the oil may be burned to recover heat for process use, for building
heating, or in an incinerator.
At various stages in the refining process, useful energy carriers may be lost. The most
important opportunities are the recovery of combustible products, which would otherwise
have been flared, for useful applications, and the recovery of hydrogen from different flue
and process gas streams. The latter reduces the need for additional hydrogen makeup, an
energy-intensive and expensive process.
Hydrogen gas is an expensive energy input in the refinery process, and has lately been
associated with large fluctuations in price. The major technology developments in hydrogen
management
within the refinery are hydrogen process integration (or hydrogen cascading) and hydrogen
recovery technology (Zagoria and Huycke, 2003). Revamping and retrofitting existing
hydrogen networks can increase hydrogen capacity between 3% and 30% (Ratan and Vales,
2002).
Hydrogen integration
Hydrogen integration at refineries is a new and important application of pinch analysis. Most
hydrogen systems in
refineries feature limited integration and pure hydrogen flows are sent from the reformers to
the different processes in the refinery. As the use of hydrogen increases, however, its value
is increasingly appreciated. Using the composite-curve approach of pinch analysis, the
production and uses of hydrogen across a refinery can be made visible.
This allows us to identify the best matches between different hydrogen sources and uses
based on quality of the hydrogen streams. It allows the user to select the appropriate and
most cost-effective technology for hydrogen purification. A recent improvement of the
analysis technology also accounts for gas pressure, to reduce compression energy needs
(Hallale, 2001). The analysis method accounts also for costs of piping, besides the costs for
generation, fuel use and compression power needs. It can be used for new and retrofit
studies.
The BP refinery at Carson, in a project with the California Energy Commission, has executed
a Hydrogen Pinch analysis of the large refinery. Total potential savings of $4.5 million on
operating costs were identified, but the refinery decided to realize a more cost effective
package saving $3.9 million per year. As part of the plant-wide assessment of the Equilon
(Shell) refinery at Martinez, an analysis of the hydrogen network has been included (US
DOE-OIT, 2002). This has resulted in the identification of large energy savings. Further
development and application of the analysis method at Californian refineries, especially as
the need for hydrogen is increasing due to reduced future sulfur content of diesel and other
fuels, may result in reduced energy needs at all refineries with hydrogen needs (all, except
San Joaquin Refining in Bakersfield) (Khorram and Swaty, 2002). One refinery identified
savings of $6 million/year in hydrogen savings without capital projects (Zagoria and Huycke,
2003).
Hydrogen recovery
Hydrogen recovery is an important technology development area: the aims are to improve
the efficiency of recovery,
reduce the costs of hydrogen recovery and increase the purity of the resulting hydrogen flow.
Hydrogen can be recovered indirectly by routing low-purity hydrogen streams to the
hydrogen plant (Zagoria and Huycke, 2003), or it can be recovered from off gases by routing
them to the existing purifier of the hydrogen plant or by installing additional purifiers to treat
the off gases and vent gases. The cost savings of recovered hydrogen are around 50% of the costs
of hydrogen production (Zagoria and Huycke, 2003). Membranes are an attractive
technology for hydrogen recovery. If the content of recoverable products is higher than 2-5%
(or preferably 10%), recovery may make economic sense (Baker et al., 2000). New
membrane applications for the refinery and chemical industry are under development.
Membranes for hydrogen recovery from ammonia plants were first demonstrated about
20 years ago (Baker et al., 2000), and are used in various state-of-the-art plant designs.
Refinery off gas flows have a different composition, making different membranes necessary
for optimal recovery. Membrane plants have been demonstrated for recovery of hydrogen
from hydrocracker off gases. Various suppliers offer membrane technologies for hydrogen
recovery in the refining industry, including Air Liquide, Air Products and UOP. The hydrogen
content has to be at least 25% for economic recovery of the hydrogen, with a recovery yield
of 85-95% and a purity of 95%.
Membrane technology generally represents the lowest cost option for low product rates, but
not necessarily for high flow rates (Zagoria and Huycke, 2003). For high-flow rates PSA
technology is often the conventional technology of choice. Development of low-cost and
efficient membranes is an area of research interest to improve cost-effectiveness of
hydrogen recovery, and enable the recovery of hydrogen from gas streams with lower
concentrations.
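The screening figures quoted above (at least 25% hydrogen content for economic recovery, with an 85-95% recovery yield) lend themselves to a quick first-pass estimate of recoverable hydrogen. The sketch below applies those rules of thumb; the flow figures are invented for illustration.

```python
# First-pass screening of hydrogen recovery from an off-gas stream, using
# the rule-of-thumb figures quoted above (>= 25 mol% H2 for economic
# recovery; 85-95% recovery yield). Flow figures are illustrative.
def recoverable_hydrogen(offgas_flow, h2_fraction, recovery_yield=0.90):
    """Return recovered H2 flow, or 0.0 if below the 25% screening limit."""
    if h2_fraction < 0.25:
        return 0.0  # below the typical economic threshold for membranes
    return offgas_flow * h2_fraction * recovery_yield

# 10,000 Nm3/h of off gas at 40 mol% hydrogen:
print(recoverable_hydrogen(10_000, 0.40))  # 3600.0 Nm3/h of H2
print(recoverable_hydrogen(10_000, 0.10))  # 0.0 (uneconomic)
```

Such a screen only flags candidate streams; membrane versus PSA selection then depends on flow rate and required purity, as discussed above.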
Heat Recovery.
Heat is recovered and re-used throughout the refinery. Next to efficient integration of heat
flows throughout the refinery, the efficient operation of heat exchangers is a major area of
interest. In a complex refinery most processes occur under high temperature and pressure
conditions; the management and optimization of heat transfer among processes is therefore
key to increasing overall energy efficiency. Fouling, a deposit buildup in units and piping that
impedes heat transfer, requires the combustion of additional fuel. For example, the processing
of many heavy crude oils in the U.S. increases the likelihood of localized coke deposits in the
heating furnaces, thereby reducing furnace efficiency and creating potential equipment
failure. An estimate by the Office of Industrial Technology at the U.S. Department of Energy
noted that the cost penalty for fouling could be as much as $2 billion annually in material and
energy costs. The problem of fouling is expected to increase with the trend towards
processing heavier crudes.
Fouling depends on several process variables and on heat exchanger design, and may result
from a combination of different mechanisms (Bott, 2001). Several approaches to reducing
fouling have been investigated, including the use of sensors
to detect early fouling, physical and chemical methods to create high temperature coatings
(without equipment modification), the use of ultrasound, as well as the improved long term
design and operation of facilities. The U.S. Department of Energy initially funded preliminary
research into this area, but funding has been discontinued (Huangfu, 2000; Bott, 2000).
Initial analysis of the fouling effects in a 100,000 bbl/day crude distillation unit found an
additional heating load of 12.3 kBtu/barrel (13.0 MJ/barrel) processed (Panchal and Huangfu,
2000). Reducing this additional heating load could result in significant energy savings.
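The scale of that penalty is worth working through. The sketch below multiplies out the figures quoted above; the fuel price and on-stream days are assumed placeholders, not data from the study.

```python
# Scale of the fouling penalty quoted above: an extra 12.3 kBtu/barrel on
# a 100,000 bbl/day crude unit. Fuel price is an assumed placeholder.
extra_heat_btu_per_bbl = 12.3e3
throughput_bbl_per_day = 100_000

daily_penalty_mmbtu = extra_heat_btu_per_bbl * throughput_bbl_per_day / 1e6
annual_penalty_mmbtu = daily_penalty_mmbtu * 350  # ~350 operating days/yr

fuel_price_per_mmbtu = 4.0  # $/MMBtu, assumption for illustration
print(daily_penalty_mmbtu)                          # 1230.0 MMBtu/day
print(annual_penalty_mmbtu * fuel_price_per_mmbtu)  # ~$1.7 million per year
```

Even at a modest fuel price, a single unit's fouling penalty runs to millions of dollars per year, consistent with the industry-wide estimate quoted earlier.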
This technology is still in the conceptual and basic research stage and therefore it is difficult
to assess capital costs at this time. Argonne National Laboratory (ANL) has been the lead in
working with the refining industry in the area. Progress so far has included: a basic
understanding of fouling mechanisms developed (for example, the presence of iron sulfide in
crude oil and its link to fouling), the development of a threshold fouling model by ANL, the
testing of prototype fouling detection units, the development of a Heat Exchanger Design
Handbook (1999 Edition) incorporating ANL’s petroleum fouling threshold model, and the
preparation of a guideline document on Heat Exchanger Fouling in the Crude Oil Distillation
Unit (Panchal, 2000). Besides ANL, several other groups have worked in the area of fouling
reduction. Outside the U.S., groups in Europe and Canada have worked on fouling.
While the issue of fouling is now on the radar screen of plant managers (there is a biannual
Fouling Mitigation conference held by the American Institute of Chemical Engineers), a
stronger commitment by the refining industry would be needed to advance this technology to
the next stage of development. Some sources believe that the future development of this
area is expected to lie in Condition-Based Maintenance of heat-transfer equipment, based
on knowledge-based and monitoring-based mitigation of fouling and corrosion (Panchal,
2000; see also the section on process control systems).
Furthermore, developments in heat exchanger design and process intensification may also
contribute to reducing the problem of fouling.
An Introduction to Pinch Technology
While oil prices continue to climb, energy conservation remains the prime concern for many process
industries. The challenge every process engineer is faced with is to seek answers to questions related
to their process energy patterns. A few of the frequently asked questions are:
1. Are the existing processes as energy efficient as they should be?
2. How can new projects be evaluated with respect to their energy
requirements?
3. What changes can be made to increase the energy efficiency without
incurring any cost?
4. What investments can be made to improve energy efficiency?
5. What is the most appropriate utility mix for the process?
6. How can energy efficiency and other targets (reducing emissions, increasing plant
capacity, improving product quality, etc.) be combined into one coherent strategic
plan for the overall site?
A Simple Example of Process Integration by Pinch Analysis
Consider the following simple process [Figure 29], where the feed stream to a reactor is heated before
entering the reactor and the product stream is to be cooled. The heating and cooling are done using steam
(Heat Exchanger -1) and cooling water (Heat Exchanger-2), respectively. The Temperature (T) vs.
Enthalpy (H) plot for the feed and product streams depicts the hot (Steam) and cold (CW) utility loads
when there is no vertical overlap of the hot and cold stream profiles.
Development of the Pinch Technology Approach
When the process involves single hot and cold streams (as in above example) it is easy to design an
optimum heat recovery exchanger network intuitively by heuristic methods. In any industrial set-up the
number of streams is so large that the traditional design approach has been found to be limiting in the
design of a good network. With the development of pinch technology in the late 1980’s, not only
optimal network design was made possible, but also considerable process improvements could be
discovered. Both the traditional and pinch approaches are depicted below
Basic Concepts of Pinch Analysis
Most industrial processes involve transfer of heat either from one process stream to another process
stream (interchanging) or from a utility stream to a process stream. In the present energy crisis
scenario all over the world, the target in any industrial process design is to maximize the process-to-
process heat recovery and to minimize the utility (energy) requirements. To meet the goal of maximum
energy recovery or minimum energy requirement (MER) an appropriate heat exchanger network
(HEN) is required. The design of such a network is not an easy task considering the fact that most
processes involve a large number of process and utility streams. As explained in the previous section,
the traditional design approach has resulted in networks with high capital and utility costs. With the
advent of pinch analysis concepts, the network design has become very systematic and methodical.
A summary of the key concepts, their significance, and the nomenclature used in pinch
analysis is given below:
• Combined (Hot and Cold) Composite Curves: Used to predict targets for the minimum
energy (both hot and cold utility) required, the minimum network area required, and the
minimum number of exchanger units required.
• DTmin and Pinch Point: The DTmin value determines how closely the hot and cold
composite curves can be ‘pinched’ (or squeezed) without violating the Second Law of
Thermodynamics (none of the heat exchangers can have a temperature crossover).
• Grand Composite Curve: Used to select appropriate levels of utilities (maximize cheaper
utilities) to meet over all energy requirements.
• Energy and Capital Cost Targeting: Used to calculate total annual cost of utilities and capital
cost of heat exchanger network.
• Total Cost Targeting: Used to determine the optimum level of heat recovery or the optimum
DTmin value, by balancing energy and capital costs. Using this method, it is possible to obtain
an accurate estimate (within 10 - 15%) of overall heat recovery system costs without having to
design the system. The essence of the pinch approach is the speed of economic evaluation.
• Plus/Minus and Appropriate Placement Principles: The "Plus/Minus" Principle provides
guidance regarding how a process can be modified in order to reduce associated utility needs
and costs. The Appropriate Placement Principles provide insights for proper integration of key
equipments like distillation columns, evaporators, furnaces, heat engines, heat pumps, etc. in
order to reduce the utility requirements of the combined system.
• Total Site Analysis: This concept enables the analysis of the energy usage for an entire plant
site that consists of several processes served by a central utility system.
Figure 32 Steps of Pinch Analysis
The above equation simplifies to dH = Q, where Q represents the heat supply or demand associated
with the stream. It is given by the relationship: Q = CP x (TS - TT).
Enthalpy Change, dH = CP x (TS - TT)
** Here the specific heat values have been assumed to be temperature independent within the
operating range.
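The stream heat duty relation above can be sketched directly. The stream data used in the example are invented for illustration; CP is in kW/K and temperatures in °C, with CP assumed temperature independent as stated.

```python
# Heat duty of a process stream from the relation above: Q = CP * (TS - TT),
# with CP (kW/K) assumed temperature independent. Stream data are illustrative.
def stream_duty(cp, t_supply, t_target):
    """Positive Q: stream must be cooled (hot stream); negative Q: needs heating."""
    return cp * (t_supply - t_target)

print(stream_duty(2.0, 180.0, 40.0))   # hot stream:  280.0 kW to remove
print(stream_duty(3.0, 60.0, 100.0))   # cold stream: -120.0 kW (needs heating)
```

The sign convention (supply minus target) makes hot-stream duties positive and cold-stream duties negative, which is convenient later when netting duties per temperature interval.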
The stream data and their potential effect on the conclusions of a pinch analysis should be considered
during all steps of the analysis; any erroneous or incorrect data can lead to false conclusions. To
avoid mistakes, data extraction is based on certain qualified principles. The extracted data are
presented in the table below.
The value of DTmin is determined by the overall heat transfer coefficients (U) and the geometry of the
heat exchanger. In a network design, the type of heat exchanger to be used at the pinch will determine
the practical DTmin for the network. For example, an initial selection of the DTmin value for shell and
tube exchangers may be 3-5ºC (at best), while compact exchangers such as plate and frame often
allow an initial selection of 2-3ºC. The heat transfer equation, which relates Q, U, A and LMTD (Log
Mean Temperature Difference), is depicted in Figure 33.
Figure 33 Heat Transfer Equation
For a given value of heat transfer load (Q), if smaller values of DTmin are chosen, the area
requirements rise. If a higher value of DTmin is selected the heat recovery in the exchanger decreases
and demand for external utilities increases. Thus, the selection of DTmin value has implications
for both capital and energy costs. This concept will become clearer with the help of composite
curves and total cost targeting discussed later.
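The area consequence of squeezing the approach temperature can be seen from the heat transfer equation directly. The sketch below computes the counter-current LMTD and the required area A = Q / (U x LMTD); duty, U-value and terminal temperature differences are illustrative numbers.

```python
# Counter-current exchanger area from Q = U * A * LMTD. Shows why a smaller
# approach temperature (DTmin) drives the required area up. Values illustrative.
import math

def lmtd(dt_hot_end, dt_cold_end):
    """Log mean of the two terminal temperature differences."""
    if dt_hot_end == dt_cold_end:
        return dt_hot_end
    return (dt_hot_end - dt_cold_end) / math.log(dt_hot_end / dt_cold_end)

def area(q_kw, u_kw_m2k, dt1, dt2):
    return q_kw / (u_kw_m2k * lmtd(dt1, dt2))

# 500 kW duty, U = 0.5 kW/m2.K, terminal approaches of 30 K and 10 K:
print(round(area(500, 0.5, 30, 10), 1))   # 54.9 m2
# Squeeze the cold-end approach to 3 K and the area grows:
print(round(area(500, 0.5, 30, 3), 1))    # 85.3 m2
```

The same duty needs roughly 55% more surface when the close approach is cut from 10 K to 3 K, which is the capital-cost side of the DTmin trade-off described above.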
Just as for a single heat exchanger, the choice of DTmin (or approach temperature) is vital in the
design of heat exchanger networks. To begin the process, an initial DTmin value is chosen and the
pinch analysis is carried out. Typical DTmin values based on experience are available in the literature
for reference. A few values based on Linnhoff March's application experience are tabulated below for
shell and tube heat exchangers.
No.  Industrial Sector   Typical DTmin values
2    Petrochemical       10-20ºC
3    Chemical            10-20ºC
Figure 34 Temperature-Enthalpy Relations Used to Construct Composite Curves
For heat exchange to occur from the hot stream to the cold stream, the hot stream cooling curve must
lie above the cold stream-heating curve. Because of the 'kinked' nature of the composite curves
(Figure 35), they approach each other most closely at one point, defined as the minimum approach
temperature (DTmin). DTmin can be measured directly from the T-H profiles as the minimum
vertical difference between the hot and cold curves. This point of minimum temperature difference
represents a bottleneck in heat recovery and is commonly referred to as the "Pinch". Increasing
the DTmin value shifts the curves horizontally apart, resulting in lower process-to-process heat
exchange and higher utility requirements. At a particular DTmin value, the overlap shows
the maximum possible scope for heat recovery within the process. The hot end and cold end
overshoots indicate minimum hot utility requirement (QHmin) and minimum cold utility requirement
(QCmin), of the process for the chosen DTmin.
Thus, the energy requirement for a process is supplied via process to process heat exchange and/or
exchange with several utility levels (steam levels, refrigeration levels, hot oil circuit, furnace flue gas,
etc.).
Graphical constructions are not the most convenient means of determining energy needs. A numerical
approach called the "Problem Table Algorithm" (PTA) was developed by Linnhoff & Flower (1978) as a
means of determining the utility needs of a process and the location of the process pinch. The PTA
lends itself to hand calculations of the energy targets.
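The PTA described above can be sketched in a few lines: hot stream temperatures are shifted down by DTmin/2 and cold ones up by DTmin/2, net CP is summed per shifted-temperature interval, and the resulting heat surpluses are cascaded downwards. The four streams below are invented for illustration.

```python
# A minimal Problem Table Algorithm (PTA) sketch, as described above.
# Streams: (kind, T_supply, T_target, CP). All stream data are invented.
def problem_table(streams, dtmin):
    half = dtmin / 2.0
    shifted = []
    for kind, ts, tt, cp in streams:
        if kind == "hot":
            shifted.append((ts - half, tt - half, cp))   # hot shifted down
        else:
            shifted.append((ts + half, tt + half, -cp))  # cold shifted up
    temps = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade = [0.0]
    for t_hi, t_lo in zip(temps, temps[1:]):
        # net CP of all streams spanning this shifted-temperature interval
        net_cp = sum(cp for a, b, cp in shifted
                     if min(a, b) <= t_lo and max(a, b) >= t_hi)
        cascade.append(cascade[-1] + net_cp * (t_hi - t_lo))
    qh_min = -min(cascade)            # minimum hot utility target (QHmin)
    feasible = [h + qh_min for h in cascade]
    qc_min = feasible[-1]             # minimum cold utility target (QCmin)
    pinch_shifted_t = temps[feasible.index(0.0)]
    return qh_min, qc_min, pinch_shifted_t

streams = [("hot", 200, 80, 1.0), ("hot", 150, 50, 2.0),
           ("cold", 40, 180, 1.5), ("cold", 60, 130, 1.0)]
print(problem_table(streams, 10))  # (10.0, 50.0, 145.0)
```

For this invented data set the targets are QHmin = 10, QCmin = 50, with the pinch at a shifted temperature of 145 (hot streams at 150, cold streams at 140 for DTmin = 10).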
To summarize, the composite curves provide overall energy targets but do not clearly indicate how
much energy must be supplied by different utility levels. The utility mix is determined by the Grand
Composite Curve.
Figure 35 Combined Composite Curves
GRAND COMPOSITE CURVE (GCC): In selecting the utilities to be used, determining utility
temperatures, and deciding on utility requirements, the composite curves and PTA are not
particularly useful. A new tool, the Grand Composite Curve (GCC), was introduced in 1982
by Itoh, Shiroko and Umeda. The GCC (Figure 36) shows the variation of
heat supply and demand within the process. Using this diagram the designer can find which
utilities are to be used. The designer aims to maximize the use of the cheaper utility levels and
minimize the use of the expensive utility levels. Low-pressure steam and cooling water are
preferred instead of high-pressure steam and refrigeration, respectively.
The information required for the construction of the GCC comes directly from the Problem Table
Algorithm developed by Linnhoff & Flower (1978). The method involves shifting (along the temperature
[Y] axis) the hot composite curve down by ½ DTmin and the cold composite curve up by ½
DTmin. The vertical axis on the shifted composite curves shows the process interval temperature. In other
words, the curves are shifted by subtracting part of the allowable temperature approach from the hot
stream temperatures and adding the remaining part of the allowable temperature approach to the cold
stream temperatures. The result is a scale based upon process temperature having an allowance for
temperature approach (DTmin). The Grand Composite Curve is then constructed from the enthalpy
(horizontal) differences between the shifted composite curves at different temperatures. On the GCC,
the horizontal distance separating the curve from the vertical axis at the top of the temperature scale
shows the overall hot utility consumption of the process.
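In practice the GCC is simply the feasible heat cascade from the Problem Table Algorithm plotted against shifted temperature: each point pairs a shifted interval temperature with the cascaded heat flow at that level. The cascade values below are invented for illustration.

```python
# The GCC is the feasible heat cascade plotted against shifted temperature:
# each point is (shifted T, cascaded heat flow). The cascade below is an
# invented example; a zero heat flow marks the pinch.
cascade = [(195.0, 10.0), (185.0, 20.0), (145.0, 0.0),
           (135.0, 15.0), (75.0, 45.0), (65.0, 40.0), (45.0, 50.0)]

hot_utility = cascade[0][1]      # horizontal distance at the top of the GCC
cold_utility = cascade[-1][1]    # horizontal distance at the bottom
pinch = next(t for t, h in cascade if h == 0.0)

print(hot_utility, cold_utility, pinch)  # 10.0 50.0 145.0
```

Reading the curve this way makes the utility-placement argument above concrete: any utility supplied at a lower level than the top of the cascade only has to cover the heat flow at that level, which is how the split between HP and LP steam in Figure 36 is found.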
Figure 36 Grand Composite Curve
Figure 36 shows that it is not necessary to supply the hot utility at the top temperature level. The GCC
indicates that we can supply the hot utility over two temperature levels TH1 (HP steam) and TH2 (LP
steam). Recall that, when placing utilities in the GCC, intervals, and not actual utility temperatures,
should be used. The total minimum hot utility requirement remains the same: QHmin = H1 (HP steam)
+ H2 (LP steam). Similarly, QCmin = C1 (Refrigerant) +C2 (CW). The points TH2 and TC2 where the
H2 and C2 levels touch the grand composite curve are called the "Utility Pinches." The shaded green
pockets represent the process-to-process heat exchange.
In summary, the grand composite curve is one of the most basic tools used in pinch analysis for the
selection of the appropriate utility levels and for targeting of a given set of multiple utility levels. The
targeting involves setting appropriate loads for the various utility levels by maximizing the least
expensive utility loads and minimizing the loads on the most expensive utilities.
Pinch analysis enables targets for the overall heat transfer area and minimum number of units of a
heat exchanger network (HEN) to be predicted prior to detailed design. It is assumed that the area is
evenly distributed between the units; the actual distribution of area between the exchangers cannot
be predicted ahead of design.
• AREA TARGETING: The calculation of surface area for a single counter-current heat
exchanger requires the knowledge of the temperatures of streams in and out (dTLM i.e. Log
Mean Temperature Difference or LMTD), overall heat transfer coefficient (U-value), and total
heat transferred (Q). The area is given by the relation
Area = Q / [ U x dTLM ]
The composite curves can be divided into a set of adjoining enthalpy intervals such that within each
interval the hot and cold composite curves do not change slope. Here the heat exchange is assumed
to be "vertical" (pure counter-current heat exchange): the hot streams in any enthalpy interval
exchange heat with the cold streams at the temperature vertically below them. The total area of the
HEN (Amin) is given by the formula in the figure below, where i denotes the ith enthalpy interval,
j denotes the jth stream, and dTLM denotes the LMTD in the ith interval.
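Since the referenced figure is not reproduced here, it may help to note that this area target is usually written in the pinch literature in the following standard form (often called the "Bath formula"); the film heat transfer coefficients h_j for each stream are additional input data not listed in the text above:

```latex
A_{\min} \;=\; \sum_{i}^{\text{intervals}} \frac{1}{\Delta T_{LM,i}}
\left( \sum_{j}^{\text{streams}} \frac{q_{i,j}}{h_j} \right)
```

where q_{i,j} is the heat load of stream j falling in enthalpy interval i, h_j is the film heat transfer coefficient of stream j, and ΔT_LM,i is the LMTD of interval i, matching the index definitions given above.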
• HEN TOTAL CAPITAL COST TARGETING: The targets for the minimum surface area (Amin)
and the number of units (Nmin) can be combined together with the heat exchanger cost law to
determine the targets for HEN capital cost (CHEN). The capital cost is annualized using an
annualization factor that takes into account interest payments on borrowed capital. The
equation used for calculating the total capital cost and exchanger cost law is given below.
For the Exchanger Cost Equation shown above, typical values for a carbon steel shell and tube
exchanger would be a = 16,000, b = 3,200, and c = 0.7. The installed cost can be considered to be 3.5
times the purchased cost given by the Exchanger Cost Equation.
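Combining the targets as described, and assuming the area is spread evenly over the minimum number of units as stated earlier, the capital cost target can be sketched as below. The area and unit count in the example are invented; the cost-law constants are the typical carbon steel values quoted above.

```python
# HEN capital cost target from the exchanger cost law above, assuming the
# target area is distributed evenly over the minimum number of units.
# Cost law: purchased cost per unit = a + b * (area)^c.
def hen_capital_cost(a_min_m2, n_units, a=16_000, b=3_200, c=0.7,
                     install_factor=3.5):
    per_unit = a + b * (a_min_m2 / n_units) ** c
    return install_factor * n_units * per_unit

# 1,500 m2 of target area spread over 6 units (illustrative numbers):
print(round(hen_capital_cost(1_500, 6)))  # installed cost target, ~$3.5M
```

Because the cost law has exponent c < 1, spreading the same area over fewer, larger exchangers lowers the target, which is why the minimum number of units matters alongside the area target.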
• Estimation of Optimum DTmin Value by Energy-Capital Trade Off
To arrive at an optimum DTmin value, the total annual cost (the sum of total annual energy and capital
costs) is plotted against a range of DTmin values. Three key observations can be made from such a
plot:
An increase in DTmin values results in higher energy costs and lower capital costs.
A decrease in DTmin values results in lower energy costs and higher capital costs.
An optimum DTmin exists where the total annual cost of energy and capital costs is minimized.
Thus, by systematically varying the temperature approach we can determine the optimum heat
recovery level or the DTminOPTIMUM for the process.
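The sweep described above can be sketched with toy cost curves: annual energy cost rising with DTmin and annualized capital cost falling with it. Both cost models below are invented placeholders purely to show the shape of the trade-off, not refinery data.

```python
# Toy illustration of the energy-capital trade-off: annual energy cost rises
# with DTmin while annualized capital cost falls, giving a cost minimum.
# Both cost models are invented placeholders, not refinery data.
def total_annual_cost(dtmin, energy_k=5_000, capital_k=400_000):
    energy = energy_k * dtmin       # larger approach -> more utility use
    capital = capital_k / dtmin     # larger approach -> less exchanger area
    return energy + capital

costs = {dt: total_annual_cost(dt) for dt in range(5, 41, 5)}
best = min(costs, key=costs.get)
print(best)  # optimum near DTmin ~ 10 for these assumed cost curves
```

In a real study the two curves come from the energy and capital cost targets at each DTmin, but the picture is the same: the optimum sits where the curves trade off, not at the smallest feasible approach.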
Recognizing the significance of the pinch temperature allows energy targets to be realized through the
design of an appropriate heat recovery network.
So what is the significance of the pinch temperature?
The pinch divides the process into two separate systems each of which is in enthalpy balance with the
utility. The pinch point is unique for each process. Above the pinch, only the hot utility is required.
Below the pinch, only the cold utility is required. Hence, for an optimum design, no heat should be
transferred across the pinch. This is known as the key concept in Pinch Technology.
To summarize, Pinch Technology gives three rules that form the basis for practical network design:
No external heating below the Pinch.
No external cooling above the Pinch.
No heat transfer across the Pinch.
Violation of any of the above rules results in higher energy requirements than the minimum
requirements theoretically possible.
Plus/Minus Principle: The overall energy needs of a process can be further reduced by introducing
process changes (changes in the process heat and material balance). There are several parameters
that could be changed such as reactor conversions, distillation column operating pressures and reflux
ratios, feed vaporization pressures, or pump-around flow rates. The number of possible process
changes is nearly infinite. By applying the pinch rules as discussed above, it is possible to identify
changes in the appropriate process parameter that will have a favorable impact on energy
consumption. This is called the "Plus/Minus Principle."
Applying the pinch rules to the study of composite curves provides the following guidelines:
• An increase (+) in hot stream duty above the pinch, or
• a decrease (-) in cold stream duty above the pinch,
will result in a reduced hot utility target; and
• a decrease (-) in hot stream duty below the pinch, or
• an increase (+) in cold stream duty below the pinch,
will result in a reduced cold utility target.
These simple guidelines provide a definite reference for the adjustment of single heat duties such as
vaporization of a recycle, pump-around condensing duty, and others. Often it is possible to change
temperatures rather than the heat duties. The target should be to
• Shift hot streams from below the pinch to above and
• Shift cold streams from above the pinch to below.
The process changes that can help achieve such stream shifts essentially involve changes in following
operating parameters:
• reactor pressure/temperatures
• distillation column temperatures, reflux ratios, feed conditions, pump around conditions,
intermediate condensers
• evaporator pressures
• storage vessel temperatures
For example, if the pressure for a feed vaporizer is lowered, the vaporization duty can shift from above
to below the pinch. This leads to a reduction in both hot and cold utilities.
Appropriate Placement Principles: Apart from the changes in process parameters, proper
integration of key equipment in process with respect to the pinch point should also be considered. The
pinch concept of "Appropriate Placement" (integration of operations in such a way that there is
reduction in the utility requirement of the combined system) is used for this purpose. Appropriate
placement principles have been developed for distillation columns, evaporators, heat engines,
furnaces, and heat pumps. For example, a single-effect evaporator having equal vaporization and
condensation loads, should be placed such that both loads balance each other and the evaporator can
be operated without any utility costs. This means that appropriate placement of the evaporator is on
either side of the pinch and not across the pinch.
In addition to the above pinch rules and principles, a large number of factors must also be considered
during the design of heat recovery networks. The most important are operating cost, capital cost,
safety, operability, future requirements, and plant operating integrity. Operating costs are dependent
on hot and cold utility requirements as well as pumping and compressor costs. The capital cost of a
network is dependent on a number of factors including the number of heat exchangers, heat transfer
areas, materials of construction, piping, and the cost of supporting foundations and structures.
With a little practice, the above principles enable the designer to quickly screen 40-50 possible
modifications and choose the 3 or 4 that will lead to the best overall cost effects.
The essence of the pinch approach is to explore the options of modifying the core process design,
heat exchangers, and utility systems with the ultimate goal of reducing the energy and/or capital cost.
The design of a network is based on certain guidelines like the "CP Inequality Rule", "Stream
Splitting", "Driving Force Plot" and "Remaining Problem Analysis".
Having made all the possible matches, the two designs above and below the pinch are then brought
together and usually refined to further minimize the capital cost. After the network has been designed
according to the pinch rules, it can be further subjected to energy optimization. Optimizing the network
involves both topological and parametric changes of the initial design in order to minimize the total
cost.
Benefits and Applications of Pinch Technology
One of the main advantages of Pinch Technology over conventional design methods is the ability to
set energy and capital cost targets for an individual process or for an entire production site ahead of
design. Therefore, in advance of identifying any projects, we know the scope for energy savings and
investment requirements.
General Process Improvements
In addition to energy conservation studies, Pinch Technology enables process engineers to achieve
the following general process improvements:
Update or Modify Process Flow Diagrams (PFDs): Pinch quantifies the savings available by
changing the process itself. It shows where process changes reduce the overall energy target, not just
local energy consumption.
Conduct Process Simulation Studies: Pinch replaces the old energy studies with information that
can be easily updated using simulation. Such simulation studies can help avoid unnecessary capital
costs by identifying energy savings with a smaller investment before the projects are implemented.
Set Practical Targets: By taking into account practical constraints (difficult fluids, layout, safety, etc.),
theoretical targets are modified so that they can be realistically achieved. Comparing practical with
theoretical targets quantifies opportunities "lost" by constraints - a vital insight for long-term
development.
Debottlenecking: Pinch Analysis, when specifically applied to debottlenecking studies, can lead to
the following benefits compared to a conventional revamp:
• Reduction in capital costs
• Decrease in specific energy demand giving a more competitive production facility
For example, debottlenecking of distillation columns by Column Targeting can be used to identify less
expensive alternatives to column retraying or installation of a new column.
Determine Opportunities for Combined Heat and Power (CHP) Generation: A well-designed CHP
system significantly reduces power costs. Pinch shows the best type of CHP system that matches the
inherent thermodynamic opportunities on the site. Unnecessary investments and operating costs can
be avoided by sizing plants to supply energy that takes heat recovery into consideration. Heat
recovery should be optimized by Pinch Analysis before specifying CHP systems.
Decide what to do with low-grade waste heat: Pinch shows which waste heat streams can be
recovered and lends insight into the most effective means of recovery.
Industrial Applications
The application of Pinch Technology has resulted in significant improvements in the energy and capital
efficiency of industrial facilities worldwide. It has been successfully applied in many different industries
from petroleum and base chemicals to food and paper. Both continuous and batch processes have
been successfully analyzed on an individual unit and site-wide basis. Pinch technology has been
extensively used to capitalize on the mistakes of the past. It identifies the existence of built-in spare
heat transfer areas and presents the designer with opportunities for cheap retrofits. In case of the
design of new plants, Pinch Analysis has played a very important role and minimized capital costs.
A Case Study: When Pennzoil was adding a residual catalytic cracking (RCC) unit, the gas plant
associated with the RCC and an alkylation unit at its Atlas Refining facility in Shreveport, energy
efficiency was one of their major considerations in engineering the refinery expansion. Electric Power
Research Institute (EPRI) and Pennzoil's energy provider, SWEPCO, used pinch technology to carry
Page 90
out an optimization study of the new units and the utility systems that serve them rather than simply
incorporating standard process packages provided by licensors. The pinch study identified
opportunities for saving up to 23.7% of the process heating through improved heat integration. Net
savings for Pennzoil were estimated at $13.7 million over 10 years.
Page 91
Catalysts and Reaction Engineering
• Chemical reactions
• Reaction kinetics
• Introduction to catalysis
Page 92
Chemical Reactions
Modern separation involves piping oil through hot furnaces. The resulting liquids and vapours
are discharged into distillation towers, the tall, narrow columns that give refineries their
distinctive skylines.
Inside the towers, the liquids and vapours separate into components or fractions according
to weight and boiling point. The lightest fractions, including gasoline and liquid petroleum gas
(LPG), vapourize and rise to the top of the tower, where they condense back to liquids.
Medium weight liquids, including kerosene and diesel oil distillates, stay in the middle.
Heavier liquids, called gas oils, separate lower down, while the heaviest fractions with the
highest boiling points settle at the bottom. These tarlike fractions, called residuum, are
literally the "bottom of the barrel."
The fractions now are ready for piping to the next station or plant within the refinery. Some
components require relatively little additional processing to become asphalt base or jet fuel.
However, most molecules that are destined to become high-value products require much
more processing.
The most widely used conversion method is called cracking because it uses heat and
pressure to "crack" heavy hydrocarbon molecules into lighter ones. A cracking unit consists
of one or more tall, thick-walled, bullet-shaped reactors and a network of furnaces, heat
exchangers and other vessels.
Page 93
Fluid catalytic cracking, or "cat cracking," is the basic gasoline-making process. Using
intense heat (about 1,000 degrees Fahrenheit), low pressure and a powdered catalyst (a
substance that accelerates chemical reactions), the cat cracker can convert most relatively
heavy fractions into smaller gasoline molecules.
Hydrocracking applies the same principles but uses a different catalyst, slightly lower
temperatures, much greater pressure and hydrogen to obtain chemical reactions. Although
not all refineries employ hydrocracking, Chevron is an industry leader in using this
technology to cost-effectively convert medium- to heavyweight gas oils into high-value
streams. The company's patented hydrocracking process, which takes place in the
Isocracker unit, produces mostly gasoline and jet fuel.
Some refineries also have cokers, which use heat and moderate pressure to turn residuum
into lighter products and a hard, coal-like substance that is used as an industrial fuel. Cokers
are among the more peculiar-looking refinery structures. They resemble a series of giant
drums with metal derricks on top.
Cracking and coking are not the only forms of conversion. Other refinery processes, instead
of splitting molecules, rearrange them to add value. Alkylation, for example, makes gasoline
components by combining some of the gaseous byproducts of cracking. The process, which
essentially is cracking in reverse, takes place in a series of large, horizontal vessels and tall,
skinny towers that loom above other refinery structures.
Reforming uses heat, moderate pressure and catalysts to turn naphtha, a light, relatively
low-value fraction, into high-octane gasoline components. Chevron's patented reforming
process is called Rheniforming for the rhenium-platinum catalyst used.
To make gasoline, refinery technicians carefully combine a variety of streams from the
processing units. Among the variables that determine the blend are octane level, vapour
pressure ratings and special considerations, such as whether the gasoline will be used at
high altitudes. Technicians also add performance additives and dyes that distinguish the
various grades of fuel.
Refining has come a long way since the early oil-boiling days. By the time a gallon of
gasoline is pumped into a car's tank, it contains more than 200 hydrocarbons and additives.
All that changing of molecules pays off in a product that ensures smooth, high-performance
driving.
Page 94
Reaction Kinetics
A simple chemical reaction - the rearrangement of electrons and bonding partners - occurs
between two small molecules. From understanding the kinetics of the reaction, and the
equilibrium extent to which it can proceed, come applications: the network of reactions during
combustion, the chain reactions that form polymers, the multiple steps in the synthesis of a
complex pharmaceutical molecule, the specialized reactions of proteins and metabolism.
Chemical kinetics is the chemical engineer's tool for understanding chemical change.
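The simplest quantitative handle on chemical change is a first-order rate law with an Arrhenius temperature dependence. A short sketch follows; the rate parameters are assumed purely for illustration:

```python
import math

# First-order irreversible reaction A -> B in a constant-volume batch
# reactor: dCA/dt = -k*CA, with the rate constant from the Arrhenius law.
# The kinetic parameters below are illustrative assumptions.

R = 8.314     # gas constant, J/(mol K)
A0 = 1.0e5    # pre-exponential factor, 1/s
EA = 80.0e3   # activation energy, J/mol

def rate_constant(temp_k):
    """Arrhenius rate constant k = A0 * exp(-Ea/(R*T))."""
    return A0 * math.exp(-EA / (R * temp_k))

def conversion(temp_k, time_s):
    """Fractional conversion X = 1 - exp(-k*t) for first-order kinetics."""
    return 1.0 - math.exp(-rate_constant(temp_k) * time_s)

for temp in (600.0, 650.0, 700.0):
    print(f"T = {temp:.0f} K: k = {rate_constant(temp):.3e} 1/s, "
          f"X after 60 s = {conversion(temp, 60.0):.3f}")
```

The strong growth of conversion with temperature is the practical reason reactor temperature is such a powerful design variable.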
A catalyst influences the reaction rate. Catalysts are sought for increasing production,
improving the reaction conditions, and emphasizing a desired product among several
possibilities. The challenge is to design the catalyst, to increase its effectiveness and
stability, and to create methods to manufacture it.
The chemical reactor should produce a desired product reliably, safely, and economically. In
designing a reactor, the chemical engineer must consider how the chemical kinetics, often
modified by catalysis, interacts with the transport phenomena in flowing materials. New
microreactor designs are expanding the concept of what a reactor may do, how reactions
may be conducted, and what is required to scale a process from laboratory to production.
Page 95
Crude Distillation
Distillation is the first step in the processing of crude oil and it takes place in a tall steel tower
called a fractionation column. The inside of the column is divided at intervals by horizontal
trays. The column is kept very hot at the bottom (the column is insulated) but as different
hydrocarbons boil at different temperatures, the temperature gradually reduces towards the
top, so that each tray is a little cooler than the one below.
The crude needs to be heated up before entering the fractionation column and this is done at
first in a series of heat exchangers where heat is taken from other process streams which
require cooling before being sent to rundown. Heat is also exchanged against condensing
streams from the main column. Typically, the crude will be heated up in this way up to a
temperature of 200 - 280 °C before entering a furnace.
As the raw crude oil arriving contains quite a bit of water and salt, it is normally sent for salt
removal first, in a piece of equipment called a desalter. Upstream of the desalter, the crude is
mixed with a water stream, typically about 4 - 6% on feed. Intense mixing takes place over a
mixing valve and (optionally) a static mixer. The desalter, a large liquid-full vessel, uses an
electric field to separate the crude from the water droplets. It operates best at 120 - 150 °C,
hence it is conveniently placed somewhere in the middle of the preheat train.
Some of the salts contained in the crude oil, particularly magnesium chloride, are hydrolysable
at temperatures above 120 °C. Upon hydrolysis, the chlorides get converted into hydrochloric
acid, which will find its way to the distillation column's overhead where it will corrode the
overhead condensers. A good performing desalter can remove about 90% of the salt in raw
crude.
Page 96
Downstream of the desalter, the crude is further heated up with heat exchangers and starts
vapourising, which will increase the system pressure drop. At about 170 - 200 °C, the crude
will enter a 'pre-flash vessel', operating at about 2 - 5 barg, where the vapours are separated
from the remaining liquid. Vapours are sent directly to the fractionation column, and by doing
so the hydraulic load on the remainder of the crude preheat train and furnace is reduced
(smaller piping and pumps).
Just upstream of the preflash vessel, a small caustic stream is mixed with the crude, in order to
neutralise any hydrochloric acid formed by hydrolysis. The sodium chloride formed will leave
the fractionation column via the bottom residue stream. The dosing rate of caustic is adjusted
based on chloride measurements in the overhead vessel (typically 10 - 20 ppm).
At about 200 - 280 °C the crude enters the furnace where it is heated up further to about 330 -
370 °C. The furnace outlet stream is sent directly to the fractionation column. Here, it is
separated into a number of fractions, each having a particular boiling range.
At 350 °C and about 1 barg, most of the fractions in the crude oil vapourise and rise up the
column through perforations in the trays, losing heat as they rise. When each fraction reaches
the tray where the temperature is just below its own boiling point, it condenses and changes
back into liquid phase. A continuous liquid phase is flowing by gravity through 'downcomers'
from tray to tray downwards. In this way, the different fractions are gradually separated from
each other on the trays of the fractionation column. The heaviest fractions condense on the
lower trays and the lighter fractions condense on the trays higher up in the column. At
different elevations in the column, with special trays called draw-off trays, fractions can be
drawn off by gravity through pipes, for further processing in the refinery.
At the top of the column, vapours leave through a pipe and are routed to an overhead condenser,
typically cooled by air fin-fans. At the outlet of the overhead condensers, at a temperature of
about 40 °C, a mixture of gas and liquid naphtha exists, which falls into an overhead
accumulator. Gases are routed to a compressor for further recovery of LPG (C3/C4), while the
liquids (gasoline) are pumped to a hydrotreater unit for sulfur removal.
Page 97
The lightest side draw-off from the fractionating column is a fraction called kerosene, boiling
in the range 160 - 280 °C, which falls down through a pipe into a smaller column called a
'side-stripper'. The purpose of the side stripper is to remove very light hydrocarbons by using
steam injection or an external heater called a 'reboiler'. The stripping steam rate, or reboiler
duty, is controlled so as to meet the flashpoint specification of the product. As in the
atmospheric column, the side stripper has fractionating trays for providing contact between
vapour and liquid. The vapours produced from the top of the side stripper are routed back via
a pipe into the fractionating column.
The second and third (optional) side draw-offs from the main fractionating column are gasoil
fractions, boiling in the range 200 - 400 °C, which are ultimately used for blending the final
diesel product. As with the kerosene product, the gasoil fractions (light and heavy gasoil) are
first sent to a side stripper before being routed to further treating units.
At the bottom of the fractionation column a heavy, brown/black coloured fraction called
residue is drawn off. In order to strip all light hydrocarbons from this fraction properly, the
bottom section of the column is equipped with a set of stripping trays, which are operated by
injecting some stripping steam (1 - 3% on bottom product) into the bottom of the column.
Page 98
Catalytic Cracking
Introduction
As early as the 1930s it was found that when heavy oil fractions are heated over clay-type
materials, cracking reactions occur, which lead to significant yields of lighter hydrocarbons.
While the search was going on for suitable cracking catalysts based on natural clays, some
companies concentrated their efforts on the development of synthetic catalysts. This resulted
in the synthetic amorphous silica-alumina catalyst, which was commonly used until 1960,
when it was slightly modified by the incorporation of some crystalline material (zeolite catalyst).
When the success of the Houdry fixed bed process was announced in the late 1930s, the
companies that had developed the synthetic catalyst decided to try to develop a process
using finely powdered catalyst. Subsequent work finally led to the development of the
fluidised bed catalytic cracking (FCC) process, which has become the most important
catalytic cracking process.
Originally, the finely powdered catalyst was obtained by grinding the catalyst material, but
nowadays, it is produced by spray-drying a slurry of silica gel and aluminium hydroxide in a
stream of hot flue gases. Under the right conditions, the catalyst is obtained in the form of
small spheres with particles in the range of 1-50 microns.
When heavy oil fractions are passed in the gas phase through a bed of powdered catalyst at a
suitable velocity (0.1 - 0.7 m/s), the catalyst and the gas form a system that behaves like a
liquid, i.e. it can flow from one vessel to another under the influence of a hydrostatic pressure.
If the gas velocity is too low, the powder does not fluidise and it behaves like a solid. If the
velocity is too high, the powder will just be carried away with the gas. When the catalyst is
properly fluidised, it can be continuously transported from a reactor vessel, where the cracking
reactions take place and where it is fluidised by the hydrocarbon vapour, to a regenerator
vessel, where it is fluidised by the air and the products of combustion, and then back to the
reactor. In this way the process is truly continuous.
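The velocity window quoted above (0.1 - 0.7 m/s) separates three regimes of bed behaviour. A toy classifier using just those two thresholds illustrates the point; a real design would use proper fluidisation correlations for minimum fluidisation and terminal velocity:

```python
# Classify the fluidised bed regime from superficial gas velocity, using
# the illustrative 0.1 - 0.7 m/s window quoted in the text.

U_MIN = 0.1   # m/s: below this the powder behaves like a solid
U_MAX = 0.7   # m/s: above this the powder is carried away with the gas

def bed_regime(u_m_s):
    if u_m_s < U_MIN:
        return "fixed bed (behaves like a solid)"
    if u_m_s > U_MAX:
        return "entrained (carried away with the gas)"
    return "fluidised (flows like a liquid)"

for u in (0.05, 0.4, 1.2):
    print(f"{u:.2f} m/s -> {bed_regime(u)}")
```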
The first FCC unit went on stream in Standard Oil of New Jersey's refinery in Baton Rouge,
Louisiana in May 1942. Since that time, many companies have developed their own FCC
process and there are numerous varieties in unit configuration.
Page 99
Figure 40 Fluid catalytic cracking
Hot feed, together with some steam, is introduced at the bottom of the riser via special
distribution nozzles. Here it meets a stream of hot regenerated catalyst from the regenerator
flowing down the inclined regenerator standpipe. The oil is heated and vapourised by the hot
catalyst and the cracking reactions commence. The vapour, initially formed by vapourisation
and successively by cracking, carries the catalyst up the riser at 10-20 m/s in a dilute phase.
At the outlet of the riser the catalyst and hydrocarbons are quickly separated in a special
device. The catalyst (now partly deactivated by deposited coke) and the vapour then enter
the reactor. The vapour passes overhead via cyclone separators for removal of entrained
catalyst before it enters the fractionator and further downstream equipment for product
separation. The catalyst then descends into the stripper where entrained hydrocarbons are
removed by injection of steam, before it flows via the inclined stripper standpipe into the
fluidised catalyst bed in the regenerator.
Air is supplied to the regenerator by an air blower and distributed throughout the catalyst
bed. The coke deposited is burnt off and the regenerated catalyst passes down the
regenerator standpipe to the bottom of the riser, where it joins the fresh feed and the cycle
recommences.
The flue gas (the combustion products) leaving the regenerator catalyst bed entrains catalyst
particles. In particular, it entrains "fines", a fine dust formed by mechanical rubbing of catalyst
particles taking place in the catalyst bed. Before leaving the regenerator, the flue gas
therefore passes through cyclone separators where the bulk of this entrained catalyst is
collected and returned to the catalyst bed.
Before being disposed of via a stack, the flue gas is passed through a waste heat boiler,
where its remaining heat is recovered by steam generation.
In the version of the FCC process described here, the heat released by burning the coke in
the regenerator is just sufficient to supply the heat required for the riser to heat up, vapourise
and crack the hydrocarbon feed. The units where this balance occurs are called " heat
balanced" units. Some feeds cause excessive amounts of coke to be deposited on the
catalyst, i.e. much more than is required for burning in the regenerator to keep a "heat
balanced" unit. In such cases, heat must be removed from the regenerator, e.g. by passing
water through coils in the regenerator bed to generate steam. Some feeds cause so little
coke to be deposited on the catalyst that heat has to be supplied to the system. This is done
by preheating the hydrocarbon feed in a furnace before contacting it with the catalyst.
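The "heat balanced" idea reduces to comparing the heat released by coke combustion with the heat the riser must supply. The sketch below does that comparison; every number in it is a rough illustrative assumption, not design data:

```python
# Simplified FCC heat balance: coke burned in the regenerator vs. heat
# needed in the riser to heat, vaporise and crack the feed.
# All numbers below are illustrative assumptions.

feed_rate = 100.0            # kg/s hydrocarbon feed
coke_yield = 0.05            # kg coke per kg feed
coke_heating_value = 33.0e6  # J/kg coke burned to CO2 (approx.)

cp_feed = 2.6e3              # J/(kg K), liquid feed
delta_t = 350.0 - 200.0      # K, preheat outlet to riser temperature
latent_heat = 3.0e5          # J/kg of vaporisation
heat_of_cracking = 2.0e5     # J/kg (endothermic)

q_released = feed_rate * coke_yield * coke_heating_value
q_required = feed_rate * (cp_feed * delta_t + latent_heat + heat_of_cracking)

print(f"Released by coke burn: {q_released / 1e6:.0f} MW")
print(f"Required by riser:     {q_required / 1e6:.0f} MW")
if q_released > q_required:
    print("Surplus: heat must be removed, e.g. steam coils in the regenerator")
elif q_released < q_required:
    print("Deficit: feed must be preheated in a furnace")
```

With these assumed numbers the coke burn releases more heat than the riser needs, which is exactly the high-coking-feed case where steam coils are added to the regenerator bed.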
Main Characteristics
Page 100
• A special device in the bottom of the riser to enhance contacting of catalyst and
hydrocarbon feed.
• The cracking takes place during a short time (2-4 seconds) in a riser ("short-contact
time riser") at high temperatures (500 - 540 °C at riser outlet).
• The catalyst used is so active that a special device for quick separation of catalyst
and hydrocarbons at the outlet of the riser is required to avoid undesirable cracking
after the mixture has left the riser. Since no cracking in the reactor is required, the
reactor no longer functions as a reactor; it merely serves as a holding vessel for
cyclones.
• Regeneration takes place at 680 - 720 °C. With the use of special catalysts, all the
carbon monoxide (CO) in the flue gas is combusted to carbon dioxide (CO2) in the
regenerator.
• Modern FCC includes a power recovery system for driving the air blower.
Equipment in FCC
Before the introduction of residues, vacuum distillates were used as feedstock to load the
Catalytic Cracker fully. These days, even residues are used to load the cracker. The term
used for this type of configuration is Long Residue Catalytic Cracking Complex. The only
modifications or additions needed are a residue desalter and a bigger and more heat-resistant
reactor.
Conclusion
Page 101
The FCC Unit can be a real margin improver for many refineries. It is able to convert
residues into high-value products like LPG, butylene, propylene and Mogas, together with
gasoil. The FCC is also a starting point for chemical production (polypropylene). Many FCCs
have two modes: a Mogas mode and a Gasoil mode, and FCCs can be adapted to cater for
the two modes depending on favourable economic conditions. The only disadvantage of an
FCC is that the products produced need to be treated (sulfur removal) to be on specification.
Normally Residue FCCs act together with Residue Hydroconversion Processes and
Hydrocrackers in order to minimise the product quality give away and get a yield pattern that
better matches the market specifications. Via product blending, expensive treating steps can
be avoided and the units prepare excellent feedstock for eachother: desulfurised residue or
hydrowax is excellent FCC feed, while the FCC cycle oils are excellent Hydrocracker feed.
In the near future, many refiners will face the challenge of how to desulfurise cat-cracked
gasoline without destroying its octane value. Catalytic distillation appears to be one of the
most promising candidate processes for that purpose.
Page 102
Catalysis
Catalysts and initiators start or promote chemical reactions that are used to produce organic
chemicals, polymers and adhesives. A chemical catalyst is a substance that increases the
rate at which a chemical reaction occurs; however, the catalyst itself does not undergo
chemical change. An initiator is a chemical compound that helps start a chemical reaction
such as polymerization. Unlike a catalyst, an initiator is usually consumed in the reaction.
Substances such as organic peroxides are commonly used as initiators. According to some
estimates, more than half of all petrochemical processes use catalysts and initiators. In
heterogeneous catalysis, a chemical catalyst provides a surface on which reactants become
adsorbed temporarily, and where chemical bonds in the reactants are weakened, allowing
new bonds to be created. Because the bonds between the products and the catalyst are
weaker, the products are released from the chemical catalyst. Continuous process catalysts
(CPC) are used to process industrial chemicals such as solvents, plasticizers, monomers and
intermediates. Catalytic solutions include a variety of specialized catalyst products.
There are two basic types of catalysis: homogeneous catalysis, in which both the catalyst
and reactants are in the same phase (for example, liquid or gas), and heterogeneous
catalysis, in which the catalyst and reactants are in different phases (for example, solid
catalysts and gaseous reactants). Metal catalysts and initiators are made from precious
metals such as gold, iridium, osmium, palladium, platinum, rhodium, ruthenium and silver.
They are used as heterogeneous catalysts for reactions such as hydrogenation and
isomerization. Zeolites, minerals with a porous structure, can also be used as catalysts.
Synthetic zeolites are the most important catalysts in petrochemical refineries. The proper
selection of catalysts and initiators is an important consideration. For example, using rhodium
or platinum as catalysts can produce different products depending on whether methane or
ethane is used.
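The claim that a catalyst increases the rate without being consumed can be made quantitative through the Arrhenius law: lowering the activation energy multiplies the rate constant. A quick illustration, where both activation energies and the temperature are assumed values:

```python
import math

# Rate enhancement from a catalyst lowering the activation energy,
# via the Arrhenius law. Energies and temperature are illustrative
# assumptions, not data for any specific reaction.

R = 8.314                 # J/(mol K)
T = 500.0                 # K
EA_UNCATALYSED = 180.0e3  # J/mol
EA_CATALYSED = 120.0e3    # J/mol

# With the same pre-exponential factor, the rate ratio depends only on
# the difference in activation energies.
speedup = math.exp((EA_UNCATALYSED - EA_CATALYSED) / (R * T))
print(f"Rate enhancement at {T:.0f} K: about {speedup:.1e}x")
```

Even a modest reduction in activation energy yields a rate enhancement of many orders of magnitude, which is why catalyst development dominates petrochemical process improvement.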
ASTM International (formerly the American Society for Testing and Materials, ASTM)
maintains standards for catalysts such as ASTM D3766, standard terminology relating to
catalysts and initiators. Some catalysts and initiators must be handled as hazardous
materials. The National Fire Protection Association (NFPA) maintains NFPA 432, a standard
which covers the catalyst organic peroxide.
Page 103
Catalysis And Distillation
Page 104
Distillation and Other Separation Processes
• Distillation basics
• Phase behavior and vapour/liquid equilibria
• Gas/Liquid separation
• Trays: function, pressure drop, efficiency, flooding, operations, and damage
• Bubble and dew points: calculation and application
• Foam: formation, detection, cause
• Packed v. trayed columns
Page 105
Distillation basics
ATMOSPHERIC DISTILLATION
Page 106
Feed Preheat Exchanger Train
Since crude oil is elevated from atmospheric liquid temperature to over 700°F, recovery of
heat is of prime importance in crude distillation economics. The path of the crude being
preheated is typical: the crude is first used to absorb part of the overhead condensation load,
after which there is exchange with one or more of the liquid sidestreams withdrawn,
beginning with the top sidestream.
The crude desalter is placed within the feed preheat exchanger train. The point at which the
crude is desalted is carefully selected. Normally it is at a temperature of 250°F to 300°F and
is a function of the gravity of the crude, with lighter crudes (lighter than 40°API) being
desalted at 250°F and heavier crudes (below 30°API) being desalted at 300°F. Care must be
taken in the temperature and pressure balance of the heat exchanger train that water does
not vapourize.
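The gravity-based rule of thumb above can be written as a small helper. The two endpoints come from the text; the linear interpolation between 30°API and 40°API is an assumption for illustration:

```python
# Desalting temperature from crude gravity, per the rule of thumb above:
# lighter crudes (above 40 API) at 250 F, heavier crudes (below 30 API)
# at 300 F. The linear interpolation in between is an assumption; the
# text only gives the two endpoints.

def desalting_temp_f(api_gravity):
    if api_gravity >= 40.0:
        return 250.0
    if api_gravity <= 30.0:
        return 300.0
    return 300.0 - (api_gravity - 30.0) * 5.0  # 5 F per API degree

for api in (45.0, 35.0, 25.0):
    print(f"{api:.0f} API -> desalt at about {desalting_temp_f(api):.0f} F")
```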
Page 107
Figure 42 Desalting - single stage
Page 108
Figure 44 Crude unit furnace
Overflash
Page 109
The furnace is normally operated to produce overflash. Overflash is defined as vapourization
in excess of requirements for lifting all of the products taken overhead and withdrawn as
side-streams. The purpose of overflash is to generate internal reflux in the wash trays
between the flash zone and the bottom side-stream draw tray. Overflash vapours are
condensed and wash the trays to prevent carryover and coking. Overflash is generally 3% to
5% (volume) of gross vapour from the flash zone which is essentially overhead and side-
stream products. Overflash is also defined in terms of crude charge to tower and is 2% to 3%
(volume) on that basis.
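The two overflash definitions above are mutually consistent, as a quick check on an illustrative unit shows (both throughputs below are assumptions):

```python
# Cross-checking the two overflash definitions on an illustrative
# 100,000 bpd crude unit (all throughputs are assumptions).

crude_charge = 100_000.0   # bpd charged to the tower
gross_vapour = 65_000.0    # bpd lifted as overhead + sidestreams (assumed)

# 3% - 5% (volume) of gross vapour from the flash zone
on_vapour = (0.03 * gross_vapour, 0.05 * gross_vapour)
# 2% - 3% (volume) of crude charge
on_charge = (0.02 * crude_charge, 0.03 * crude_charge)

print(f"Overflash on gross vapour basis: {on_vapour[0]:,.0f} - {on_vapour[1]:,.0f} bpd")
print(f"Overflash on crude charge basis: {on_charge[0]:,.0f} - {on_charge[1]:,.0f} bpd")
```

The two ranges overlap, confirming that the gross-vapour and crude-charge bases describe the same physical quantity.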
Although not shown on the schematic, when processing high vapour pressure crudes, a flash
drum is placed in the train after the desalter and before the furnace inlet control valves.
Flashing off light gases and water lowers the feed vapour pressure to avoid flashing of the
crude before the control valves, which leads to maldistribution in the furnace.
Page 110
Atmospheric Crude Fractionator
Wash Section
The wash section consists of 3 to 4 trays above the flash zone and below the bottom gas oil
draw. The purpose of the wash section is to provide reflux to the vapours from the flash zone
to wash resins and materials that may contaminate the products. The reflux is the condensed
overflash vapour. Either sieve trays or grid are utilized.
Overhead System, Number of Trays and Pressure Profile in Tower
The overhead vapours from the tower are cooled and partially condensed by exchange with
cold feed followed by condensation with air fin or water condensers. The vapours, if any, are
directed from the overhead accumulator to the fuel system. Operating pressures have
increased from the 1970s to be high enough to reduce noncondensing vapours to a
minimum. The intent is to reduce compression on the overhead system. On the other hand,
high operating pressures decrease vapourization, increase flash zone temperatures and
furnace duty, and affect yields.
Pressures in the reflux drum may vary according to the design and be as low as 0.5 psig to
as high as 20 psig if the overhead vapour is totally condensed. This discussion will use a
reflux drum pressure of 5 psig as a basis.
The pressure drop across the overhead condensers is also variable but is on the order of 3 –
10 psid; this discussion will assume 5 psid.
Pressure drops across trays average 0.1 psid/tray to 0.2 psid/tray. This discussion will
assume a total of 32 trays above the flash zone including 4 wash trays, resulting in a 5 psid
pressure drop in the tower from the flash zone to the top of the column. Assumed are 6 trays
below the reflux to the naphtha draw, 6 trays below to the light distillate draw, 6 trays below
to the heavy distillate draw, and 10 trays below to the gas oil draw. There are 4 wash trays
below the gas oil draw. Trays may be either sieve trays or grid.
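Chaining the assumed pressure drops together gives the tower's pressure profile from the top tray down to the flash zone. A small sketch using exactly the values assumed above (5 psig drum, 5 psid condensers, 5 psid over 32 trays):

```python
# Pressure profile implied by the assumed values above: 5 psig reflux
# drum, 5 psid across the overhead condensers, and 5 psid over the
# 32 trays from the top of the tower down to the flash zone.

DRUM_P = 5.0              # psig
CONDENSER_DP = 5.0        # psid
DP_PER_TRAY = 5.0 / 32.0  # psid per tray (within the 0.1 - 0.2 range)

# Trays counted downward from the top of the column
locations = [
    ("tower top", 0),
    ("naphtha draw", 6),
    ("light distillate draw", 12),
    ("heavy distillate draw", 18),
    ("gas oil draw", 28),
    ("flash zone", 32),
]

top_pressure = DRUM_P + CONDENSER_DP  # 10 psig at the top tray
for name, trays_below_top in locations:
    p = top_pressure + trays_below_top * DP_PER_TRAY
    print(f"{name:22s} {p:5.1f} psig")
```

The flash zone comes out at 15 psig, i.e. drum pressure plus the condenser and tray pressure drops.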
Sidestreams
Page 111
Liquid product from the overhead is straight run gasoline, a part of which is returned to the
tower as reflux for the top section of the tower. There is a pumparound on the fractionator
where liquid is taken from a draw tray and cooled and returned to the next tray down as
subcooled reflux. This not only reduces the overhead condensing load but achieves uniform
tower loadings by providing overflow at lower points in the tower since successive product
withdrawal reduces liquid overflow.
The cut point for each sidestream fraction is the final boiling point of the stream being
withdrawn. However, the liquid has a lighter component tail that must be removed from the
sidestream. Traditionally, the liquid product sidestream was directed to a stripper that used
live steam for stripping. Emphasis on reducing steam stripping and sour water has led to the
replacement of live steam injection with reboiled strippers in some instances.
Page 112
Three-step fractionation permits a crude unit capacity expansion with production of LPG,
isopentane, straight run gasoline, and naphtha streams from the crude unit overhead. A
combined stream of straight run gasoline and naphtha will be taken overhead on the crude
tower. The liquids are then fractionated in a splitter to make naphtha for reformer feed as the
splitter bottoms. The splitter overhead goes to a stabilizer. The overhead from the stabilizer
is LPG, which is directed to the saturates gas plant. The stabilizer bottoms goes to a de-
isopentanizer that produces isopentane overhead and straight run gasoline bottoms.
Page 113
Phase behavior and vapour/liquid equilibrium
(2-45)
This equation allows the calculation of temperature composition diagrams and pressure
composition diagrams for two component mixtures. The Figure below displays a temperature
composition diagram constructed using the Patel-Teja equation of state and equation 2-45
for the ammonia-butane system at 20.7 bar (300 psi). The lower line represents the bubble
point temperature line.
For example, when a subcooled 50/50 liquid mixture of ammonia and butane is heated from
300K (point 1), the first bubble of vapour will form at just above 316 K (point 2) and the
mixture will be in vapour-liquid equilibrium. Upon further heating, the overall composition of
the mixture will remain 50/50, but the compositions of the liquid and vapour phases will
vary (see point 3). Finally, as the last drop of liquid is evaporated, the
50/50 mixture is now a saturated vapour (point 4). Further heating will superheat the vapour
(point 5). Point 4 lies on the dew point line, so named because this is where the first drop of
condensation would form if it were approached by cooling from point 5. Note that at a temperature
around 316 K, the equilibrium vapour and liquid phases are at the same composition (~0.82).
At this point, known as an azeotrope, the azeotropic mixture boils at a constant temperature
with constant vapour and liquid phase compositions (similar to a pure substance).
It is important to note that there may be two compositions required when making vapour
liquid equilibrium (VLE) calculations. At a saturated temperature and pressure in a mixture
there are two cubic equations for the compressibility; one for the liquid and one for the
vapour. The liquid compressibility cubic equation is obtained by evaluating equations 2-39
through 2-41 using liquid compositions. The vapour compressibility cubic equation is
obtained by evaluating equations 2-39 through 2-41 using vapour compositions. The liquid's compressibility is still the smallest positive real root, but of the liquid compressibility equation; the vapour's compressibility is the largest positive real root of the vapour compressibility equation.
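The root-selection rule can be sketched numerically. The cubic coefficients below are generic placeholders standing in for the values produced by equations 2-39 through 2-41 (not reproduced here), and the example roots are invented for illustration:

```python
import numpy as np

def compressibility_roots(c2, c1, c0):
    """Return (Z_liquid, Z_vapour) for the cubic Z**3 + c2*Z**2 + c1*Z + c0 = 0.

    As described in the text: the liquid compressibility is the smallest
    positive real root, the vapour compressibility the largest. The
    coefficients would come from the equation of state evaluated with the
    appropriate phase's composition.
    """
    roots = np.roots([1.0, c2, c1, c0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    positive = np.sort(real[real > 0.0])
    if positive.size == 0:
        raise ValueError("cubic has no positive real root")
    return positive[0], positive[-1]

# a cubic with (invented) roots 0.1, 0.2 and 0.9:
z_liq, z_vap = compressibility_roots(-1.2, 0.29, -0.018)
```

Here the liquid phase takes the root 0.1 and the vapour the root 0.9; the middle root has no physical meaning.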
Page 114
Figure 45 Temperature-Composition Diagram for Ammonia-Butane at 20.7 bar
(2-46)
(2-47)
Calculations produce three different cubic equations for the compressibility since there are
three sets of compositions. The compressibility of each liquid is represented by the smallest
positive real root of each liquid's compressibility cubic equation while the compressibility of
the vapour is the largest positive real root of the vapour's cubic compressibility equation.
Page 115
Figure 46 T-x-y Diagram for Ammonia-Butane at 20.7 bar
Page 116
Gas/Liquid separation
Recent Development In Liquid/Gas Separation Technology
Removing liquids and solids from a gas stream is very important in refining and gas
processing applications. Effective removal of these contaminants can prevent costly
problems and downtime with downstream equipment like compressors, turbines, and
burners. In addition, hydrocarbons and solid contaminants can induce foaming in an amine
contactor tower and can contribute to premature catalyst changeouts in catalytic processes.
In compressors that use oil to lubricate cylinders, the lube oil often gets into the discharge
gas causing contamination downstream. A thin film of hydrocarbon deposited on heat
exchangers will thicken and coke, decreasing heat transfer efficiency, increasing energy
consumption and creating a risk of hot spots and leaks.
Several technologies are available to remove liquids and solids from gases. This paper will
first provide selection criteria for the following gas/liquid separation technologies:
• gravity separators
• centrifugal separators
• filter vane separators
• mist eliminator pads
• liquid/gas coalescers
and then focus on the separation of fine aerosols from gases using liquid/gas coalescing
technology.
• Removal Mechanisms
• Liquid/Gas Separation Technologies
• Formation of Fine Aerosols
• Ratings/Sizing
• Design and Its Impact on Sizing
• Field Testing For Liquid/Gas Coalescers
• Test Procedure
• Field Test Results
• Conclusions
Removal Mechanisms
Before evaluating specific technologies, it is important to understand the mechanisms used
to remove liquids and solids from gases. These can be divided into four different categories.2
The first and easiest to understand is gravity settling, which occurs when the weight of the droplets or particles (i.e. the gravitational force) exceeds the drag created by the flowing gas.
A related and more efficient mechanism is centrifugal separation which occurs when the
centrifugal force exceeds the drag created by the flowing gas. The centrifugal force can be
several times greater than gravitational force.
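To put rough numbers on these first two mechanisms, the sketch below applies Stokes' law for the settling of a small droplet. The droplet size, phase densities and gas viscosity are illustrative assumptions, and the law itself holds only in the creeping-flow regime typical of fine droplets:

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a droplet of diameter d (m)
    in a fluid of density rho_f and viscosity mu, under acceleration g,
    from Stokes' law (valid for droplet Reynolds numbers below ~1)."""
    return d * d * (rho_p - rho_f) * g / (18.0 * mu)

# assumed example: 10-micron oil droplet in a pressurised gas
v_gravity = stokes_settling_velocity(10e-6, 850.0, 5.0, 1.2e-5)

# a centrifugal separator developing 50 g simply scales the driving force
v_centrifugal = stokes_settling_velocity(10e-6, 850.0, 5.0, 1.2e-5, g=50 * 9.81)
```

The droplet settles at only a few millimetres per second under gravity; the fifty-fold increase in driving force is why centrifugal separation can capture droplets that settle too slowly for a practical gravity vessel.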
Page 117
The third separation mechanism is called inertial impaction which occurs when a gas passes
through a network, such as fibers and impingement barriers. In this case, the gas stream
follows a tortuous path around these obstacles while the solid or liquid droplets tend to go in
straighter paths, impacting these obstacles. Once this occurs, the droplet or particle loses
velocity and/or coalesces, and eventually falls to the bottom of the vessel or remains trapped
in the fiber medium.
And finally, a fourth mechanism of separation occurs with very small aerosols (less than 0.1
µm). Called diffusional interception or Brownian Motion, this mechanism occurs when small
aerosols collide with gas molecules. These collisions cause the aerosols to deviate from the
fluid flow path around barriers increasing the likelihood of the aerosols striking a fiber surface
and being removed.3
Throughout this paper, reference to droplet and particle sizes will be in the unit micron. One
micron is 1/1000 of a millimeter or 39/1,000,000 of an inch. Figure 1 shows the size of various materials in microns.
Page 118
Mist Eliminator Pads
The separation mechanism for mist eliminator pads is inertial impaction. Typically, mist
eliminator pads, consisting of fibers or knitted meshes, can remove droplets down to 1-5
microns but the vessel containing them is relatively large because they must be operated at
low velocities to prevent liquid reentrainment.
Filter Vane Separators
Vane separators are simply a series of baffles or plates within a vessel. The mechanism controlling separation is again inertial impaction. Vane separators are sensitive to mass velocity for removal efficiency, but generally can operate at higher velocities than mist eliminator pads, mainly because more effective liquid drainage reduces liquid reentrainment. However, because of the relatively large paths between the plates constituting the tortuous network, vane separators can only remove relatively large droplets (10 microns and above). Often, vane separators are used to retrofit mist eliminator pad vessels when gas velocity exceeds design velocity.7
Liquid/Gas Coalescers
Liquid/gas coalescer cartridges combine features of both mist eliminator pads and vane
separators, but are usually not specified for removing bulk liquids. In bulk liquid systems, a
high efficiency coalescer is generally placed downstream of a knock-out drum or
impingement separator. Gas flows through a very fine pack of bound fibrous material with a
wrap on the outer surface to promote liquid drainage (See Figure 2 below). A coalescer
cartridge can trap droplets down to 0.1 micron. When properly designed and sized, drainage
of the coalesced droplets from the fibrous pack allows gas velocities much higher than in the
case of mist eliminator pads and vane separators with no liquid reentrainment or increase in
pressure drop across the assembly.
Page 119
Figure 49 Coalescer Cut-away View
Table 2 summarizes each of these technologies and provides guidelines for proper selection.
As you can see, for systems containing very fine aerosols, under 5 µm, a coalescer should
be selected. Removing very fine aerosols from gases results in major economic, reliability,
and maintenance benefits in compressor systems.
Page 120
Table 3 Types of Liquid/Gas Separators
Ratings/Sizing
Page 121
It is important to note that a coalescer is different from a filter in that it performs both filtration of fine solid particles and coalescence of liquid aerosols from a gas stream. The sizing and rating criteria for coalescers, as they pertain to liquids removal, are critical to the ultimate performance of the coalescer. An undersized coalescer will result in continuous liquid reentrainment and very low liquid separation efficiency, and will be vulnerable to any process changes. The critical nature of coalescer sizing is illustrated in Figure 4, which shows that coalescer performance can drop very rapidly once the coalescer is challenged by too much liquid (either because of high aerosol concentration in the gas stream or because of a high gas flowrate). This marks a dramatic departure from most other separation equipment, whose performance gradually diminishes as it is pushed past its rated maximum.
Page 122
Figure 52 Liquid Aerosol Separation Efficiency Test Schematic
The LASE test differs from the DOP test in the following ways:
1. It gives a more accurate and meaningful measure of efficiency. The DOP
efficiency essentially tells you what percent of 0.3 µm dioctylphthalate droplets
will be removed by a dry coalescer; the LASE test tells you what ppmw of
contaminants will be in the gas downstream of the coalescer. In other words,
what the LASE test tells you is how much contaminant your downstream
equipment will be exposed to.
2. The DOP test uses monodispersed (i.e. same-sized) droplets of DOP, a liquid not commonly encountered in gas processing or refinery gas streams; the LASE test uses a lube oil with droplet sizes ranging from 0.1-0.9 µm.
3. The LASE test more closely simulates process conditions, by being run on a
saturated cartridge and being performed under positive pressure.
Table 4 Comparison of the DOP and LASE
Page 123
Design and Its Impact on Sizing
The goal in coalescer design is to maximize efficiency while preventing liquid reentrainment. Reentrainment occurs when liquid droplets accumulated on a coalescer element are carried off by the exiting gas. This happens when the drag exerted by the exiting gas, moving at the annular velocity, exceeds the gravitational force on the draining droplets.
We earlier discussed the importance of correct coalescer sizing. In designing and sizing a
coalescer, the following parameters must be taken into account:
• Gas velocity through the media,
• Annular velocity of gas exiting the media,
• Solid and liquid aerosol concentration in the inlet gas, and
• Drainability of the coalescer
Each of these factors, with the exception of the inlet aerosol concentration, can be controlled.
At a constant gas flow rate, media velocity can be controlled by either changing the
coarseness of the medium’s pore structure or by increasing or decreasing the number of
cartridges used. The coarser the medium, however, the less efficient the coalescer will be at
removing liquid.
At a constant gas flow rate, the exiting velocity of the gas can be controlled by increasing or
decreasing the size of the vessel or the space between the cartridges.
Drainage can be improved by either selecting low surface energy coalescer materials or by
treating the coalescer medium with a chemical that lowers the surface energy of the medium
to a value lower than the surface tension of the liquid to be coalesced.13 Having a low
surface energy material prevents liquid from wetting the filter medium and accelerates
drainage of liquids down along the medium’s fibers. The liquid coalesced on the fibrous
material falls rapidly through the network of fibers without accumulating in the pores where it
would otherwise be pushed through by the gas and be reentrained. Figure 6 shows the effect
that a chemical treatment can have on a coalescer. It shows that the maximum flowrate of a
chemically treated cartridge is more than twice that of a similar cartridge that is not treated.
Page 124
Figure 53 Effect of Chemical Treatment on Coalescer Performance
One can conclude from these design parameters that a large housing with a large number of fine-pored cartridges would easily eliminate any liquid problems you may encounter in a gas stream. Obviously, the costs associated with such a vessel are very high. As vessel size and cartridge quantity are reduced, the probability of reentrainment and of poorer removal efficiency rises. In addition, as the assembly size decreases, the pressure drop increases, which can result in increased operating costs. So an optimization is required.
When evaluating a coalescer assembly, make sure that all of these parameters are taken
into consideration when the assembly is sized. A coalescer is best used in conjunction with a
knock-out drum or other impingement separator.
Page 125
Figure 54 Schematic of Pall LG Coalescer Test Stand
Test Procedure
Before going on-site for a field test, the plant is contacted to obtain system conditions
(pressure, temperature, gas flow rate, type of gas and if possible liquid concentration in the
gas stream). Based on this information, an orifice plate is selected to measure gas flow rates
in the range indicated. The orifice is also selected to minimize pressure drop so that gas condensation and hydrate formation are not induced.
After putting the side stream test kit on-line, the flow rate is adjusted to below the critical flow rate so that liquid is not reentrained. Once the coalescer cartridge is saturated, test membranes are inserted in the test jigs upstream and downstream of the coalescer housing, the sump is emptied of any liquid that may have accumulated during the cartridge saturation period, and the actual test begins.
At the end of the test, the volume of liquid accumulated in the sump is measured and collected
in a sample bottle for subsequent lab analysis. Test membranes are also collected to
determine the amount of solids suspended in the gas and for qualitative identification of the
solid contaminants. Liquid aerosol concentration is determined from the amount of liquid
coalesced and the quantity of gas sampled.
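The last step is a simple mass ratio. The sketch below makes the arithmetic explicit; the figures are invented for illustration, not field data:

```python
def aerosol_ppmw(liquid_volume_ml, liquid_density_kg_m3,
                 gas_volume_m3, gas_density_kg_m3):
    """Liquid aerosol concentration, in ppm by weight, from the volume
    of liquid collected in the sump and the metered quantity of gas
    sampled over the same period."""
    liquid_kg = liquid_volume_ml * 1e-6 * liquid_density_kg_m3
    gas_kg = gas_volume_m3 * gas_density_kg_m3
    return 1e6 * liquid_kg / gas_kg

# assumed example: 5 ml of lube oil (850 kg/m3) coalesced
# from 100 m3 of sampled gas at 0.8 kg/m3
c = aerosol_ppmw(5.0, 850.0, 100.0, 0.8)
```

This gives roughly 53 ppmw, the kind of number the LASE-style downstream comparison is built on.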
Page 126
Figure 55 Field Test Results of Gas Streams in Refineries and Gas Processing Plants
Conclusions
1. Selecting gas/liquid separation technologies requires not only knowledge of the process conditions, but a knowledge of the characteristics of the liquid contaminants. Selection should be made based on droplet size, concentration, and whether the liquid has waxing or fouling tendencies.
2. Through an analysis of field data, it was shown that due to the presence of very fine liquid droplets (below 1 micron) in most gas processes, high efficiency liquid/gas coalescers should be recommended whenever high recovery rates are required to protect downstream equipment or to recover valuable liquids.
3. The sizing and design of a coalescer is of critical importance. Once a coalescer is challenged with too much liquid, either because of excessive aerosol concentrations or large gas flow rates, its efficiency will decrease rapidly.
4. The Liquid Aerosol Separation Efficiency (LASE) test is a meaningful performance test of liquid/gas coalescers, as it allows coalescer cartridges to be tested under conditions closely resembling actual operating conditions (saturated element, realistic pressure drops, and gas properties such as density and viscosity).
5. A surface treatment of the coalescer medium improved liquid drainage in the fibrous materials and decreased by 50% the number of cartridges required to handle a given flow.
6. Field testing has demonstrated that significant amounts of liquids are present in gas streams in refineries and gas processing plants.
Page 127
Industrial uses of Fractional Distillation
Distillation is the most common form of separation technology in the chemical industry. In most chemical processes, distillation is run as a continuous, steady-state operation, because batch fractionation is usually less economical at scale. New feed is always being added to the distillation column and products are always being removed. Unless the process is disturbed by changes in feed, heat input, ambient temperature, or condensing, the amount of feed being added and the amount of product being removed are normally equal. This is known as continuous, steady-state fractional distillation.
The most widely used industrial applications of continuous, steady-state fractional distillation
are in petroleum refineries, petrochemical plants and natural gas processing plants.
Page 128
Trays: function, pressure drop, efficiency, flooding, operations, and
damage
Valve trays
In valve trays, perforations are covered by liftable caps. Vapour flow lifts the caps, creating a flow area for the passage of vapour. The lifting cap directs the vapour to flow horizontally into the liquid, thus providing better mixing than is possible in sieve trays.
Sieve trays
Sieve trays are simply metal plates with holes in them. Vapour passes straight
upward through the liquid on the plate. The arrangement, number and size of the
holes are design parameters.
Page 129
Because of their efficiency, wide operating range, ease of maintenance and cost factors,
sieve and valve trays have replaced the once highly thought of bubble cap trays in many
applications.
Each tray has 2 conduits, one on each side, called ‘downcomers’. Liquid falls through the
downcomers by gravity from one tray to the one below it. The flow across each plate is
shown in the above diagram on the right.
A weir on the tray ensures that there is always some liquid (holdup) on the tray and is designed such that the holdup is at a suitable height, e.g. such that the bubble caps are covered by liquid.
Being lighter, vapour flows up the column and is forced to pass through the liquid, via the
openings on each tray. The area allowed for the passage of vapour on each tray is called the
active tray area.
Page 130
The picture on the left is a photograph of a section of a pilot
scale column equipped with bubble capped trays. The tops of
the 4 bubble caps on the tray can just be seen. The downcomer in this case is a pipe, and is shown on the right. The
frothing of the liquid on the active tray area is due to both
passage of vapour from the tray below as well as boiling.
As the hotter vapour passes through the liquid on the tray above, it transfers heat to the
liquid. In doing so, some of the vapour condenses adding to the liquid on the tray. The
condensate, however, is richer in the less volatile components than is the vapour.
Additionally, because of the heat input from the vapour, the liquid on the tray boils,
generating more vapour. This vapour, which moves up to the next tray in the column, is
richer in the more volatile components. This continuous contacting between vapour and
liquid occurs on each tray in the column and brings about the separation between low boiling
point components and those with higher boiling points.
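The tray-to-tray enrichment described above can be illustrated with a constant relative volatility model. The value α = 2.5 and the total-reflux simplification (the vapour leaving one ideal tray becomes the liquid on the tray above) are assumptions made purely for illustration:

```python
def equilibrium_vapour(x, alpha):
    """Vapour mole fraction of the more volatile component in
    equilibrium with liquid mole fraction x, assuming a constant
    relative volatility alpha."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

# follow the vapour leaving three successive ideal trays at total reflux,
# starting from a liquid that is 30% light component
x = 0.30
history = []
for _ in range(3):
    x = equilibrium_vapour(x, 2.5)   # vapour off this tray = liquid on the next
    history.append(round(x, 3))
```

Each pass through an ideal tray enriches the vapour in the more volatile component, which is exactly the repeated vapour-liquid contacting the text describes.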
Tray Designs
A tray essentially acts as a mini-column, each accomplishing a fraction of the separation task. From this we can deduce that the more trays there are, the better the degree of separation, and that overall separation efficiency will depend significantly on the design of the tray. Trays are designed to maximise vapour-liquid contact by considering the vapour distribution on the tray. This is because better vapour-liquid contact means better separation at each tray, translating to better column performance. Fewer trays will then be required to achieve the same degree of separation. Attendant benefits include less energy usage and lower construction costs.
Figure 59 Liquid distributors - Gravity (left), Spray (right)(photos courtesy of Paul Phillips)
Packings
Page 131
There is a clear trend to improve separations by supplementing the use of trays with packings. Packings are passive devices designed to increase the interfacial area for vapour-liquid contact. The following pictures show 3 different types of packings.
These strangely shaped pieces are supposed to impart good vapour-liquid contact when a particular type is placed together in numbers, without causing excessive pressure drop across a packed section. This is important because a high pressure drop would mean that more energy is required to drive the vapour up the distillation column.
Page 132
Tower Capacity:
Factors, calculation, modification
In order to have stable operation in a distillation column, the vapour and liquid flows must be
managed. Requirements are:
Tray layout and column internal design is quite specialized, so final designs are usually done by specialists; however, it is common for preliminary designs to be done by ordinary process engineers. These notes are intended to give you an overview of how this can be done, so that it won't be a complete mystery when you have to do it for your design project.
Basically, in order to get a preliminary sizing for your column, you need to obtain values for
Typically, the liquid flow between trays is governed by a weir on each tray. The flow depends
on the length of the weir and how high the liquid level on the tray is above the weir. The
Francis weir equation is one example of how the flow off a tray may be modeled.
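The Francis weir equation itself is not reproduced here; a widely used form for a straight weir, with the weir length and liquid crest in inches and the flow in gal/min, is sketched below (the coefficient 3.0 belongs to this particular unit system and is an assumption, not a value from this manual):

```python
def weir_flow_gpm(weir_length_in, height_over_weir_in):
    """Liquid flow off a tray (gal/min) over a straight weir, from the
    Francis weir formula in inch / gal/min units: Q = 3.0 * L * h**1.5,
    where L is the weir length and h the liquid crest above the weir."""
    return 3.0 * weir_length_in * height_over_weir_in ** 1.5

# assumed example: a 30 in weir with 1 in of liquid above it
q = weir_flow_gpm(30.0, 1.0)
```

The flow rises steeply with crest height (a 3/2 power), which is why a modest weir is enough to hold a stable liquid level on the tray.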
Tray Efficiency
Page 133
Ideally, tray efficiencies are determined by measurements of the performance of actual trays
separating the materials of interest; however, this is usually not practical in the early phases
of a design. Consequently, some form of estimation is required. Estimates can be based on
theory or on data collected from other columns.
The O'Connell correlation is based on data collected from actual columns. It is based on
bubble cap trays and is conservative for sieve and valve trays. It correlates the overall
efficiency of the column with the product of the feed viscosity and the relative volatility of the
key component in the mixture. These properties should be determined at the arithmetic mean
of the column top and bottom temperatures. A fit of the data has been determined; this fit, or a similar data set, can be used to get preliminary estimates of efficiency numbers.
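As a sketch of how such a fit is used, the function below implements one published fit of the O'Connell data. The constants 0.503 and -0.226 are a common literature fit, assumed here because the manual's own fitted curve is not reproduced:

```python
def oconnell_efficiency(viscosity_cP, rel_volatility):
    """Overall column efficiency (as a fraction) from one published fit
    of O'Connell's data: Eo = 0.503 * (mu * alpha) ** -0.226, with the
    feed viscosity in cP and both properties evaluated at the arithmetic
    mean of the column top and bottom temperatures."""
    return 0.503 * (viscosity_cP * rel_volatility) ** -0.226

# assumed example: feed viscosity 0.3 cP, relative volatility 2.0
eff = oconnell_efficiency(0.3, 2.0)
```

For these assumed properties the overall efficiency comes out near 56%, i.e. roughly twice as many actual trays as ideal stages; being a bubble cap correlation, it is conservative for sieve and valve trays.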
Column Diameter
Column diameter is found based on the constraints imposed by flooding. The number of ideal
stages isn't needed to find the diameter -- only the vapour and liquid loads. You do need the
number of actual stages to get the column height.
Before beginning a diameter calculation, you want to know the vapour and liquid rates
throughout the column. You then do a diameter calculation for each point where the loading
might be an extreme: the top and bottom trays; above and below feeds, sidedraws, or heat
addition or removal; and any other places where you suspect peak loads.
Once you've calculated these diameters, you select one to use for the column, then check it
to make sure it will work. Some columns will have two sections with different diameters --
consider this possibility if you end up with regions where the estimated diameter varies by
20% or more, but realize it will be more expensive than a column that is the same all the way
up.
One issue that ought to be considered is the validity of your design numbers. If you are
following the "traditional" approach, you've probably designed your column for reflux rates in
the range of 1.1 to 1.2 times the minimum. This may not give you a column that can handle
"upsets" well, so you may want to design for a capacity slightly greater than that -- increasing
the flows by about 20% might be wise.
Flooding
Downcomer flooding occurs when liquid backs up on a tray because the downcomer area is too small. This is not usually a problem. More worrisome is entrainment flooding, caused by
too much liquid being carried up the column by the vapour stream.
A number of correlations and techniques exist for calculating the flooding velocity; from this,
the active area of the column is calculated so that the actual velocity can be kept to no more
than 80-85% of flood; values down to 60% are sometimes used.
A force balance can be made on droplets entrained by the vapour stream (which can lead to
entrainment flooding). This balance yields an expression relating the vapour and liquid
densities and a capacity factor (C, with velocity units) to the flooding velocity:
u_flood = C * ((ρL - ρV)/ρV)^0.5
Page 134
Capacity Factors
The capacity factor can be determined from theory (it depends on droplet diameter, drag
coefficient, etc.), but is usually obtained from correlations based on experimental data from
distillation tray tests. Depending on the correlation used, C may include the effects of surface
tension, tendency to foam, and other parameters.
A common correlation is one proposed by Fair in the late 50s - early 60s. The version for
sieve trays is available in a wide range of sources (including Figure 21.28 of MSH). The
correlation takes the form of a plot of a capacity factor (which must be corrected for surface
tension) vs. a functional group based on the liquid to vapour mass ratio:
FLV = (L/V) * (ρV/ρL)^0.5
Enter the plot from the bottom with this number, and then read the capacity factor from the
left. This capacity factor applies to nonfoaming systems and trays meeting certain hole and
weir size restrictions. It will need to be corrected for surface tension:
C = Csb * (σ/20)^0.2
where σ is the liquid surface tension in dyne/cm and Csb is the value read from the plot.
Other correlations for the capacity factor are also available. Several are based on more
recent information, and may well be more accurate than the Fair plot; however, they also
tend to be less broadly known and often require more a priori information on the system. You
should use a correlation that is acceptable for your problem.
Diameter
Once you have the capacity factor, you can readily solve for the flooding velocity:
u_flood = Csb * (σ/20)^0.2 * ((ρL - ρV)/ρV)^0.5
(this solution is for the Fair correlation, and adds the surface tension correction).
We know that flow=velocity*area, so we can calculate the flow area from the known vapour
flow rate and the desired velocity (a fraction of flood). This area needs to be increased to
account for the downcomer area which is unavailable for mass transfer. The resulting tray
area can then be used to calculate the column diameter. So, with everything lumped
together, we have:
D = [4 V / (π * (1 - Ad/At) * (fraction of flood) * u_flood)]^0.5
where V is the volumetric vapour flow rate.
Page 135
The only "new" term is the ratio of downcomer area to tray area. This should probably never
be less than 0.1, and probably seldom will be greater than 0.2.
Trays probably aren't a good idea for columns less than about 1.5 ft in diameter (you can't
work on them) -- these are normally packed. Packing is less desirable for large diameter
columns (over about 5 ft in diameter).
Pressure Drop
There is a pressure gradient through the column -- otherwise the vapour wouldn't flow. This
gradient is normally expressed in terms of a pressure drop per tray, usually on the order of
0.10 psi.
The best source of pressure drop information is to measure the actual drop between trays,
but this isn't always feasible at the beginning of a design. Detailed calculations are possible,
but these depend so much on the actual tray specifications that final values are usually
obtained from experts, but approximate methods can be used to get values to put in your
design basis.
There are two main components to the pressure drop: the "dry tray" drop caused by
restrictions to vapour flow imposed by the holes and slots in the trays and the head of the
liquid that the vapour must flow through.
The dry tray head loss can be related to an orifice flow equation:
h_dry = 0.186 * (ρV/ρL) * (u_hole/Co)^2
This equation determines the dry tray drop in inches of fluid (your text has a similar equation
in SI units). The constant 0.186 takes care of the units and is appropriate for sieve trays. The
orifice size coefficient Co depends on the tray configuration and will usually fall between 0.65
and 0.85. The hole velocity can be obtained by dividing the vapour flow rate by the total hole
area of the tray.
Liquid Losses
The liquid head pressure drop includes the effects of surface tension and of the frothing on
the tray. It is typically represented as the product of an aeration factor and the height of liquid
on the tray:
h_liquid = β * h_l
where h_l is the (unaerated) height of liquid on the tray.
Page 136
Correlations are available for the aeration factor (beta); a value of 0.6 is good for a wide
variety of situations.
The height of liquid on the tray is the sum of the weir height and the height of liquid over the
weir. The total height can be calculated directly from the volume of liquid on the tray and its
active area. Another approach is to back the height out of a version of the Francis weir
equation (which relates flow off a tray to liquid height and weir length). One version, for a
straight weir, in units of inches and gal/min is:
h_over_weir = 0.48 * (Q/Lw)^(2/3)
where Q is the liquid flow in gal/min and Lw the weir length in inches. Realize that these equations depend on the size and shape of the weir.
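Combining the dry tray drop and the aerated liquid head gives the total drop per tray. In the sketch below, the dry-tray term uses the orifice form with the 0.186 constant given above; the operating numbers (hole velocity, Co, weir heights, a 30 lb/ft3 liquid) are illustrative assumptions:

```python
def tray_pressure_drop_in(hole_velocity_ft_s, Co, rho_v, rho_l,
                          weir_height_in, height_over_weir_in, beta=0.6):
    """Total tray pressure drop in inches of liquid: the orifice-style
    dry tray drop plus the aerated liquid head, as described in the text.
    beta = 0.6 is the widely applicable aeration factor."""
    h_dry = 0.186 * (rho_v / rho_l) * (hole_velocity_ft_s / Co) ** 2
    h_liquid = beta * (weir_height_in + height_over_weir_in)
    return h_dry + h_liquid

# assumed example: 25 ft/s hole velocity, Co = 0.75, vapour 0.5 lb/ft3,
# liquid 30 lb/ft3, 2 in weir with a 1 in crest
h = tray_pressure_drop_in(25.0, 0.75, 0.5, 30.0, 2.0, 1.0)
psi = (h / 12.0) * 30.0 / 144.0   # inches of this liquid -> psi
```

For these assumed numbers the drop converts to roughly 0.09 psi per tray, close to the 0.10 psi rule of thumb quoted above.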
Column Height
The height of a trayed column is calculated by multiplying the number of (actual) stages by
the tray separation. Tray spacing can be determined as a cost optimum, but is usually set by
mechanical factors. The most common tray spacing is 24 inches -- it allows enough space to
work on the trays whenever the column is big enough around (>5 ft diameter) that workers
must crawl inside. Smaller diameter columns may be able to get by with 18 inch tray
spacings.
In addition to the space occupied by the trays, height is needed at the top and bottom of the
column. Space at the top -- typically an additional 5 to 10 ft -- is needed to allow for
disengaging space.
The bottom of the tower must be tall enough to serve as a liquid reservoir. Depending on
your boss's feelings about keeping inventory in the column, you will probably design the base
for about 5 minutes of holdup, so that the total material entering the base can be contained for at least 5 minutes before the liquid level reaches the bottom tray.
The total of height added to the top and bottom will usually amount to about 15% or so added
to that required by the trays.
You rarely will see a real tower that is more than about 175 ft. tall. Tall, skinny towers are not
a good idea, so watch the height/diameter ratio. You generally want to keep it less than 20 or
30. If your tower ends up exceeding these values, you probably want to look at a redesign,
maybe by reducing the tray spacing, or splitting the tower into two parts.
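The height bookkeeping above reduces to a few lines. The 40-tray case below is an invented example using the 24 in spacing and the ~15% top-and-bottom allowance discussed in the text:

```python
def column_height_ft(actual_trays, tray_spacing_in=24.0, allowance=0.15):
    """Trayed-column height: actual trays times tray spacing, plus about
    15% for disengaging space at the top and the liquid reservoir at the
    bottom, per the rules of thumb in the text."""
    tray_height = actual_trays * tray_spacing_in / 12.0
    return tray_height * (1.0 + allowance)

# assumed example: 40 actual trays on 24 in spacing
h = column_height_ft(40)
```

That gives 92 ft, comfortably under the ~175 ft practical limit; the remaining check is the height/diameter ratio, which should stay below about 20-30.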
Page 137
Absorption & Adsorption
The enriched absorption fluid is heated and passed into a stripping column, where the light
product vapours pass upward and are condensed for recovery as liquefied petroleum gas
(LPG). The unvapourised absorption fluid passes from the base of the stripping column and
is reused in the absorption tower.
Certain highly porous solid materials have the ability to select and adsorb specific types of
molecules, thus separating them from other materials. Silica gel is used in this way to
separate aromatics from other hydrocarbons, and activated charcoal is used to remove liquid
components from gases. Adsorption is thus somewhat analogous to the process of
absorption with an oil, although the principles are different. Layers of adsorbed material only
a few molecules thick are formed on the extensive interior surface of the adsorbent; the
interior surface may amount to several hectares per kilogram of material.
Adsorption is employed for about the same purpose as absorption; in the process just mentioned, natural gasoline may be separated from natural gas by adsorption on charcoal.
Adsorption is also used to remove undesirable colours from lubricating oils, usually
employing activated clay. The use of molecular sieves in separating close boiling
components deserves a special mention.
Molecular sieves are a special form of adsorbent. Such sieves are produced by the
dehydration of naturally occurring or synthetic zeolites (crystalline alkali-metal
aluminosilicates). The dehydration leaves intercrystalline cavities that have pore openings of
definite size, depending on the alkali metal of the zeolite. Under adsorptive conditions,
normal paraffin molecules can enter the crystalline lattice and be selectively retained,
whereas all other molecules are excluded. This principle is used commercially for the
removal of normal paraffins from gasoline fuels, thus improving their combustion properties.
The use of molecular sieves is also extensive in the manufacture of high-purity solvents.
Page 138
Introduction
Separation techniques concentrate contaminated solids through physical and chemical
means. These processes seek to detach contaminants from their medium (i.e., the soil,
sand, and/or binding material that contains them).
Description:
Gravity Separation
Gravity separation is a solid/liquid separation process, which relies on a density difference
between the phases. Equipment size and effectiveness of gravity separation depends on the
solids settling velocity, which is a function of the particle size, density difference, fluid
viscosity, and particle concentration (hindered settling). Gravity separation is also used for
removing immiscible oil phases, and for classification where particles of different sizes are
separated. It is often preceded by coagulation and flocculation to increase particle size,
thereby allowing removal of fine particles.
Magnetic Separation
Magnetic separation is used to extract slightly magnetic radioactive particles from host
materials such as water, soil, or air. All uranium and plutonium compounds are slightly
magnetic while most host materials are nonmagnetic. The process operates by passing
contaminated fluid or slurry through a magnetized volume. The magnetized volume contains
a magnetic matrix material such as steel wool that extracts the slightly magnetic
contamination particles from the slurry.
Sieving/Physical Separation
Sieving and physical separation processes use different size sieves and screens to
effectively concentrate contaminants into smaller volumes. Physical separation is based on
the fact that most organic and inorganic contaminants tend to bind, either chemically or
physically, to the fine (i.e., clay and silt) fraction of a soil. The clay and silt soil particles are, in
turn, physically bound to the coarser sand and gravel particles by compaction and adhesion.
Thus, separating the fine clay and silt particles from the coarser sand and gravel soil
particles effectively concentrates the contaminants into a smaller volume of soil that can
then be further treated or disposed of.
Module 5 – Process Control & Economics
Process Control Basics
Measured Variables
Definition: 1. The physical quantity, property, or condition which is to be measured;
common measured variables are temperature, pressure, rate of flow, thickness, speed, etc.
2. The part of the process that is monitored to determine the actual condition of the
controlled variable.
Why Control?
Chemical plants are intended to be operated under known and specified conditions. There
are several reasons why this is so:
Safety:
Formal safety and environmental constraints must not be violated.
Operability:
Certain conditions are required by chemistry and physics for the desired reactions or
other operations to take place. It must be possible for the plant to be arranged to
achieve them.
Economic:
Plants are expensive and intended to make money. Final products must meet market
requirements of purity, otherwise they will be unsaleable. Conversely the manufacture
of an excessively pure product will involve unnecessary cost.
A chemical plant might be thought of as a collection of tanks in which materials are heated,
cooled and reacted, and of pipes through which they flow. Such a system will not, in general,
naturally maintain itself in a state in which precisely the temperature required by a reaction
is achieved, pressures stay within the safe limits of all vessels, and flowrates are just
sufficient to achieve the economically optimum product composition.
Control Objectives
Control systems in chemical plants have, as noted, three functions.
• Safety.
• Operability, i.e. to ensure that particular flows and holdup are maintained at chosen
values within operating ranges.
• To control product quality, process energy consumption etc.
To a large extent these are quite separate objectives. Indeed, in the case of safety systems
separate equipment is generally used. The aims of control for operability are secondary to
those of strategic control for quality etc., which directly affect process profitability.
The majority of control loops in a plant control system are associated with operability.
Specific flow rates have to be set, levels in vessels maintained and chosen operating
temperatures for reactors and other equipment achieved.
Techniques of Control
Figure 63 Feedback control loop
Notice that this extremely simple idea has a number of very convenient properties. The
feedback control system seeks to bring the measured quantity to its required value or
setpoint. The control system does not need to know why the measured value is not
currently what is required, only that this is so. There are two possible causes of such a
disparity:
• The system has been disturbed. This is the common situation for a chemical plant
subject to all sorts of external upsets. However, the control system does not need to
know what the source of the disturbance was.
• The setpoint has been changed. In the absence of external disturbance, a change in
setpoint will introduce an error. The control system will act until the measured quantity
reaches its new setpoint.
A control system of this sort should also handle simultaneous changes in setpoint and
disturbances.
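The behaviour described above can be sketched numerically. The following is a minimal proportional-integral (PI) feedback loop acting on an assumed first-order process; the gains, time constant, setpoint and disturbance values are illustrative assumptions, not plant data:

```python
# Minimal discrete feedback (PI) loop on a first-order process: the controller
# drives the measured value to the setpoint whether the error arises from a
# disturbance or from a setpoint change. All numbers are illustrative.
def simulate(setpoint, disturbance, steps=200, dt=0.1):
    Kp, Ki = 2.0, 1.0            # assumed controller gains
    y, integral = 0.0, 0.0       # measured value, integral of error
    for k in range(steps):
        error = setpoint(k) - y
        integral += error * dt
        u = Kp * error + Ki * integral       # controller output
        # Assumed first-order process, time constant 1.0: dy/dt = -y + u + d
        y += dt * (-y + u + disturbance(k))
    return y

# Setpoint step to 5.0 with a constant disturbance of 2.0 acting on the process:
final = simulate(lambda k: 5.0, lambda k: 2.0)
print(round(final, 2))
```

Note that the controller never "knows" whether the error came from the disturbance or the setpoint change; the integral term simply acts until the error is removed, which is exactly the convenient property described above.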
One is that an error must occur and propagate through the process before corrective action
can be taken. The other is the occurrence of a large amount of lag (time delay) within the
process. These are discussed further below.
Figure 65 Time delay
Whether either occurrence matters is defined in economic terms. In both cases, the principal
concern is the existence of errors that have significant economic consequences for overall
process operation. In these cases, feedforward control can be used to deal with these
disadvantages, or inadequacies, of feedback control.
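The feedforward idea can be sketched very simply: instead of waiting for a disturbance to show up as an error in the measured value, the disturbance itself is measured and compensated at the controller output. The static model, gain and function name below are hypothetical assumptions for illustration only:

```python
# Feedforward sketch (assumed static disturbance model): the measured
# disturbance d is cancelled at the controller output before it can propagate
# through the process, rather than being corrected after the fact by feedback.
def feedforward_output(u_feedback, d_measured, gain_ff=1.0):
    # gain_ff is an assumed ratio of disturbance gain to process gain;
    # in practice it is identified from a plant model.
    return u_feedback - gain_ff * d_measured

# A measured disturbance of 2.0 is subtracted from the feedback output of 3.0:
u = feedforward_output(3.0, 2.0)
print(u)
```

In real plants, feedforward is almost always combined with feedback, since the disturbance model is never exact and feedback removes the residual error.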
Process Economics
Refinery Economics
The overall economics or viability of a refinery depends on the interaction of three key
elements: the choice of crude oil used (crude slates), the complexity of the refining
equipment (refinery configuration) and the desired type and quality of products produced
(product slate). Refinery utilization rates and environmental considerations also influence
refinery economics.
Using more expensive (lighter, sweeter) crude oil requires less refinery upgrading, but
supplies of light, sweet crude oil are decreasing and the price differential between it and
heavier, more sour crudes is increasing. Using cheaper, heavier crude oil means more
investment in upgrading processes. Costs and payback periods for refinery processing units
must be weighed against anticipated crude oil costs and the projected differential between
light and heavy crude oil prices.
Crude slates and refinery configurations must take into account the type of products that will
ultimately be needed in the marketplace. The quality specifications of the final products are
also increasingly important as environmental requirements become more stringent.
Crude Slate
Different types of crude oil yield a different mix of products depending on the crude oil's
natural qualities. Crude oil types are typically differentiated by their density (measured as
API gravity) and their sulphur content. Crude oil with a low API gravity is considered a heavy
crude oil and typically has a higher sulphur content and a larger yield of lower-valued
products. Therefore, the lower the API of a crude oil, the lower the value it has to a refiner as
it will either require more processing or yield a higher percentage of lower-valued by-
products such as heavy fuel oil, which usually sells for less than crude oil.
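API gravity is related to specific gravity (at 60 °F) by the standard formula API = 141.5/SG − 131.5, so a denser crude has a lower API. A quick sketch of the conversion; the example specific gravities are illustrative values, not crudes named in the text:

```python
# Standard relation between API gravity and specific gravity at 60 F:
#     API = 141.5 / SG - 131.5
# A lower API therefore means a denser (heavier) crude oil.
def api_gravity(sg):
    return 141.5 / sg - 131.5

print(round(api_gravity(0.85), 1))   # a light crude, about 35 API
print(round(api_gravity(0.97), 1))   # a heavy crude, about 14 API
```

Note that the scale is inverse: water (SG = 1.0) sits at 10 API, and anything denser than water has an API below 10.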
Crude oil with a high sulphur content is called a sour crude while sweet crude has a low
sulphur content. Sulphur is an undesirable characteristic of petroleum products, particularly
in transportation fuels. It can hinder the efficient operation of some emission control
technologies and, when burned in a combustion engine, is released into the atmosphere
where it can form sulphur dioxide. With increasingly restrictive sulphur limits on
transportation fuels, sweet crude oil sells at a premium. Sour crude oil requires more severe
processing to remove the sulphur. Refiners are generally willing to pay more for light, low
sulphur crude oil.
Most refineries in Western Canada and Ontario were designed to process the light sweet
crude oil that is produced in Western Canada. Unlike leading refineries in the U.S.,
Canadian refineries in these regions have been slower to reconfigure their operations to
process lower cost, less desirable crude oils, instead choosing to rely extensively on the
abundant, domestically-produced, light, sweet crudes. As long as these lighter crudes were
available, refining economics were insufficient to warrant new investment in heavy oil
conversion capacity.
However, with growing oil sands production and the declining production of conventional light
sweet crudes, refineries in Western Canada and Ontario have started to make the
investment required to process the increasing supply of heavier crudes. In 2003, Shell
Canada completed the conversion of their Scotford refinery to use bitumen feedstock. In the
fall of 2003, Consumers' Co-operative Refineries Ltd completed a 35,000 bbls/day
expansion of their refinery in Regina, Saskatchewan. This increased their heavy oil refining
capacity to approximately 85,000 bbls/day. Petro-Canada has also announced plans to do a
major refitting of their Edmonton refinery. Although this construction is not expected to
increase their capacity, it will allow them to upgrade and refine oil sands feedstock. The
$1.2 billion CDN project will significantly expand the existing coker at Edmonton, allowing for
approximately 53,000 bbls/day of bitumen upgrading. Similarly, Suncor is expected to carry out
a feedstock conversion at its Sarnia refinery to run more lower-value oil sands feedstock.
Much of this investment by the large integrated oil companies (companies that are involved
in both the production of crude oil and the manufacturing and distribution of petroleum
products) is associated with ensuring a market for their growing oil sands production.
In Western Canada and Ontario, almost 50% of the crude oil processed by refiners is
conventional light, sweet crude oil and another 25% is high quality synthetic crude oil.
Synthetic crude is a light crude oil that is derived by upgrading oil sands. Most of the
remaining crude oil processed by these refineries is heavy, sour crude. The crude slate is
expected to change significantly in the years ahead as refiners increase their capacity to
process heavy crude oil and lower quality synthetic crudes.
Refineries in Atlantic Canada and Quebec are dependent on imported crudes and tend to
process a more diverse crude slate than their counterparts in Western Canada and Ontario.
These refiners have the capacity to purchase crude oil produced almost anywhere in the
world and therefore have considerable flexibility in their crude buying decisions. Approximately
1/3 of crude processed in Eastern Canada and Quebec is conventional, light sweet crude
and another 1/3 is medium sulphur, heavy crude oil. The remaining 1/3 is a combination of
sour light, sour heavy and very heavy crude oil. The crude slate in Eastern Canada is
expected to remain much more static than that in Western Canada and Ontario, as these
refiners are not constrained by the quality or volume of domestic crude production.
Figure 3 illustrates the product yield for six typical types of crude oil processed in Canada. It
includes both light and heavy as well as sweet and sour crude oils. A very light condensate
(62 API) and a synthetic crude oil are also included. The chart compares the different output
when each crude type is processed in a simple distillation refinery. The output is broken
down into five main product groups: gasoline, propane and butane (C3/C4), Cat feed (a
partially processed material that requires further refining to make usable products), distillate
(which includes diesel oil and furnace oil) and residual fuel (the heaviest and lowest-valued
part of the product output, used to make heavy fuel oil and asphalt).
Refinery Configuration
A refiner's choice of crude oil will be influenced by the type of processing units at the
refinery. Refineries fall into three broad categories. The simplest is a topping plant, which
consists only of a distillation unit and probably a catalytic reformer to provide octane. Yields
from this plant would most closely reflect the natural yields from the crude processed.
Typically only condensates or light sweet crude would be processed at this type of facility
unless markets for heavy fuel oil (HFO) are readily and economically available. Asphalt
plants are topping refineries that run heavy crude oil because they are only interested in
producing asphalt.
The next level of refining is called a cracking refinery. This refinery takes the gas oil portion
from the crude distillation unit (a stream heavier than diesel fuel, but lighter than HFO) and
breaks it down further into gasoline and distillate components using catalysts, high
temperature and/or pressure.
The last level of refining is the coking refinery. This refinery processes residual fuel, the
heaviest material from the crude unit, thermally cracking it into lighter products in a coker
or a hydrocracker. The addition of a fluid catalytic cracking unit (FCCU) or a hydrocracker
significantly increases the yield of higher-valued products like gasoline and diesel oil from a
barrel of crude, allowing a refinery to process cheaper, heavier crude while producing an
equivalent or greater volume of high-valued products.
Hydrotreating is a process used to remove sulphur from finished products. As the
requirement to produce ultra low sulphur products increases, additional hydrotreating
capability is being added to refineries. Refineries that currently have large hydrotreating
capability have the ability to process crude oil with a higher sulphur content.
Figure 4 demonstrates that using the same crude input (heavy crude with a 27 API) yields a
very different range of petroleum products depending on the refining units and processes
used. In the case of the cracking refinery, the addition of other blending materials at various
stages of production is required but the resulting volumetric output is greater than the volume
of the crude oil input. Each refinery is unique due to its age, technology and modifications over
time, but generalizations are possible. The installation of additional conversion capability
increases the yield of clean products and reduces the yield of heavy fuel oil. However,
increased conversion capability would generally result in higher energy use and, therefore,
higher operating costs. These higher operating and capital costs must be weighed against
the lower cost of the heavier crude oil.
Canada has primarily cracking refineries. These refineries run a mix of light and heavy crude
oils to meet the product slate required by Canadian consumers. Historically, the abundance
of domestically produced light sweet crude oils and a higher demand for distillate products,
such as heating oil, than in some jurisdictions reduced the need for upgrading capacity in
Canada. However, in more recent years, the supply of light sweet crude has declined and
newer sources of crude oil tend to be heavier. Many of the Canadian refineries are now
being equipped with upgraders to handle the heavier grades of crude oil currently being
produced.
[1] Source: NRCan surveys
Product Slate
Refinery configuration is also influenced by the product demand in each region. Refineries
produce a wide range of products including: propane, butane, petrochemical feedstock,
gasolines (naphtha specialties, aviation gasoline, motor gasoline), distillates (jet fuels, diesel,
stove oil, kerosene, furnace oil), heavy fuel oil, lubricating oils, waxes, asphalt and still gas.
Nationally, gasoline accounts for about 40% of demand with distillate fuels representing
about one third of product sales and heavy fuel oil accounting for only eight percent of sales.
Total petroleum product demand is distributed almost equally across the regions, with
Atlantic/Quebec, Ontario and the West each accounting for about one third of total sales.
However, the mix of products varies quite significantly among the regions. [2]
In the Atlantic provinces, where furnace oil (light heating oil) is the primary source of home
heating, distillate fuels make up 40% of product demand, and heavy fuel oil, used to
generate electricity, accounts for another 24%. Gasoline sales account for less than 30% of
product demand.
In Quebec, where natural gas and hydroelectricity are prevalent, distillate fuel has a 34%
share of sales and gasoline is about 40%. Similarly, in Ontario, gasoline sales outpace
distillate sales and account for more than 45% of total product demand, with distillates at less
than 30%.
In Western Canada, agricultural use is one of the primary drivers behind distillate demand
and gasoline and distillate each account for about 40% of total petroleum product sales.
These regional differences in product demand have influenced the configurations of the
refineries in each area.
By comparison, in the U.S., the demand for gasoline is much larger than distillate demand
and, therefore, refiners configure their installations to maximize gasoline production.
Gasoline sales account for nearly 50% of demand while distillate sales account for less than
30% of product demand. In several Western European countries, most notably Germany
and France, policies exist that encourage the use of diesel engines creating a much stronger
distillate component. Gasoline accounts for less than 20% of petroleum product sales in
Europe.
The US refineries are configured to process a large percentage of heavy, high sulphur crude
and to produce large quantities of gasoline, and low amounts of heavy fuel oil. U.S. refiners
have invested in more complex refinery configurations, which allow them to use cheaper
feedstock and have a higher processing capability.
Canada's refineries do not have the high conversion capability of the US refineries, because,
on average, they process a lighter, sweeter crude slate. Canadian refineries also face a
higher distillate demand, as a percent of crude, than those found in the U.S. so gasoline
yields are not as high as those in the US, but are still significantly higher than European
yields.
The relationship between gasoline and distillate sales can also create challenges for refiners.
A refinery has a limited range of flexibility in setting the gasoline to distillate production ratio.
Beyond a certain point, distillate production can only be increased by also increasing
gasoline production. For this reason, Europe is a major gasoline exporter, primarily to the
U.S.
Mass-exchange processes, such as distillation, absorption, extraction, adsorption and drying,
are used in chemical technology for the separation of substances into their components. These
processes have common features.
At least three substances take part in a mass-exchange process: a distributive substance,
which forms the first phase; a second distributive substance, which forms the second phase;
and a distributed substance, which migrates from one phase to the other. The driving force of
the process is the difference between the current and equilibrium concentrations of the
substances. The correlation between these concentrations may be linear (for absorption or
extraction) or non-linear (for distillation); in either case the concentrations depend
strongly on the process parameters (temperature, pressure) and on the presence of various
additives. Industrial apparatus is designed for particular values of these parameters and
particular concentrations of the initial products. In reality, disturbances can distort the
material and thermal balances in the apparatus, cause pressures and temperatures to deviate
from their desired values and, finally, cause the composition (quality) of the final products
to deviate from that required. The objective of the control systems is therefore to stabilise
these process parameters in order to maintain the material and thermal balances by
suppressing the various disturbances.
The majority of mass-exchange processes occur in columns several metres in diameter and
several tens of metres in height. Time delays in these apparatus can vary from several
minutes to several tens of minutes, so single-loop control systems have large offsets and
long transient responses. Employing cascade control systems can improve the performance of
these processes. The lack of instrumentation for continuous measurement of the composition
of intermediate and final products makes accurate control difficult; in such cases, quality
is controlled indirectly, i.e. by controlling the boiling temperatures, densities or
viscosities of the mixtures.
Example 1.
Fig. 22.1 presents a schematic view of a simple control system, which comprises six single-
loop control systems. This control system stabilises the composition of the distillate product
and maintains the material and thermal balances in the distillation column.
The major controller, which stabilises the composition of the distillate, is the temperature
controller (pos. 5-2). It manipulates the flow rate of reflux (pos. 5-3). Temperature controller
(pos. 1-2) controls feed temperature by manipulating the flowrate of heat-transfer agent.
Level controllers (pos. 7-2) and (pos. 8-2) maintain the material balance of liquid phase in the
column, and pressure controller (pos. 4-2) maintains the material balance of the vapour phase.
The flowrate controller (pos. 3-3) stabilises the flowrate of heating vapour into the re-boiler.
If our task is to control the composition of the bottom product, then the flowrate of steam for
heating is manipulated by the control signal from the temperature controller (pos. 2-2), and
the flowrate of reflux is manipulated by the flowrate controller (pos. 6-3). Simultaneous
control of the compositions of the distillate and bottom products, or of the temperatures at
the top and at the bottom of the column, is usually avoided because these process variables
are interdependent; applying feedback control loops to both may reduce the stability of the
overall control system.
[Figure 22.1: Control of a distillation column with single-loop controllers. Streams shown:
feed mixture, heat-transfer agent, reflux, distillate, steam for heating, bottom product and
coolant in/out, together with the temperature (pos. 1, 2, 5), pressure (pos. 4), level
(pos. 7, 8) and flow (pos. 3, 6) loops described above.]
[Figure 22.2: Control of a distillation column with a feed/steam flow-ratio controller
(pos. 2-5) and a cascade correction signal (pos. 4-3) in the top-temperature loop. Streams
shown: feed mixture, heat-transfer agent, steam for heating, bottom product and coolant
in/out.]
Another significant disadvantage of using the temperature of the product to control its
composition is the following: the variations of this temperature due to changes in the
composition of the product are comparable with the variations caused by pressure changes in
the column, and may also be comparable with the accuracy of the temperature sensor. Suppose
variations in the composition of the product must not exceed ±1%, and the difference in
boiling temperatures of the components is 20 °C. The corresponding variations of temperature
are then ±0.2 °C. For a potentiometer with a temperature range from 0 to 150 °C and an
accuracy of 0.5%, the error in temperature measurement is 0.75 °C, larger than the signal to
be detected. One should take this into account when selecting a temperature sensing device.
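The arithmetic in this example can be checked directly (all values taken from the text above):

```python
# Checking the worked numbers: the composition signal versus the sensor error.
composition_tolerance = 0.01        # +/-1% permitted composition variation
boiling_point_spread = 20.0         # deg C between the two components
temp_signal = composition_tolerance * boiling_point_spread
# The composition change appears as a temperature change of only +/-0.2 deg C:
assert abs(temp_signal - 0.2) < 1e-9

sensor_span = 150.0                 # deg C, potentiometer range 0..150
sensor_accuracy = 0.005             # 0.5% of span
sensor_error = sensor_accuracy * sensor_span
# ...while the sensor's own error is 0.75 deg C, nearly four times larger:
assert abs(sensor_error - 0.75) < 1e-9
print(temp_signal, sensor_error)
```

A sensor with a narrower span or better accuracy class would be needed to resolve the ±0.2 °C signal.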
Example 2.
In Fig. 22.2 the controller (pos. 2-5) controls the flowrate ratio between the feed mixture
and the steam for heating in the re-boiler, thus reducing the energy consumed in separating
the mixture into its components. A cascade control system controls the temperature at the
top of the distillation column by introducing a correction signal (pos. 4-3) from the loop
measuring the temperature on a selected tray of the column.
These are only two simple examples, whereas in reality more complex control
systems are used.
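The master/slave structure used in these cascade schemes can be sketched as two nested controllers: the primary (master) controller's output becomes the setpoint of the secondary (slave) controller. The gains, temperatures and function names below are illustrative assumptions, not values from Fig. 22.1 or 22.2:

```python
# Cascade control sketch: the master controller sets the setpoint of the
# slave controller. Proportional-only controllers and all numbers are
# illustrative assumptions.
def master(temp_setpoint, temp_measured, kp=2.0):
    # Primary loop: column-top temperature error -> desired tray temperature
    return temp_setpoint + kp * (temp_setpoint - temp_measured)

def slave(inner_setpoint, inner_measured, kp=1.5):
    # Secondary loop: tray temperature error -> valve position correction
    return kp * (inner_setpoint - inner_measured)

inner_sp = master(95.0, 96.0)          # the top runs 1 C hot, so ask the
valve_correction = slave(inner_sp, 94.0)  # tray loop for a lower temperature
print(inner_sp, valve_correction)
```

The benefit of the cascade is that the fast inner loop rejects disturbances (e.g. steam pressure changes) before they reach the slow outer variable, which is why it outperforms a single loop when large time delays are present.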
22.2. Control of absorption columns
The major sources of disturbances are the flowrate, composition and temperature of the
gas stream entering the absorber and, sometimes, the temperature and composition of the
liquid absorbent stream. The major manipulated variables are the flowrate of the liquid
absorbent stream and the flowrate of the bottom product.
Controlling the pressure and level in the column maintains the material balance between the
gaseous and liquid phases. A control system with several single control loops (see
Fig. 22.3a) keeps the material and thermal balances by using the level controller (pos. 2-2)
and the pressure controller (pos. 1-2), and keeps the composition of the bottom product at
the desired value by using the composition controller (pos. 3-2). Introducing a control
signal from a flow ratio controller (pos. 3-5 in Fig. 22.3b) suppresses the effect of
variations in the gaseous mixture flowrate (the disturbance) and improves the performance of
the control system. The cascade control system of Fig. 22.3c uses the composition of the
gas-liquid mixture on a certain tray of the column as an auxiliary controlled variable. In
this case the composition controller (pos. 3-2) is the primary, or master, controller,
whereas the composition controller (pos. 3-5) is the secondary, or slave, controller.
[Diagrams: absorber control schemes (a), (b) and (c), with pressure (pos. 1-1/1-2), level
(pos. 2-1/2-2) and composition (pos. 3-1/3-2, 3-5) loops.]
Figure 22.3. Control of absorbers.
1 - gas mixture;
2 - liquid absorbent;
3 - bottom product;
4 - end gases.