
Fundamentals of Process Plant Equipment Control

25-28 June
Petroleum Training Centre

Ron Frend

FUNDAMENTALS AND HYDRAULICS 8

PFD - Process Flow Diagram 9


The Process Flow Diagram - PFD, a schematic illustration of the system 9

P&ID - Piping and Instrumentation Diagram 11

P&ID / PFD Symbols 13


General Instrument or Function Symbols - 1 13
Instrument or Function Symbols - 2 14
General Instrument or Function Symbols - 3 15
General Instrument or Function Symbols - 4 16

FIRST LAW OF THERMODYNAMICS 17


Thermodynamics 17
Example 1: 23

HYDRAULICS & FLUID FLOW 24

Pressure & Head 25

Bernoulli’s Theorem 33
How pressure and velocity interact 33

Liquid Flow 35
Flow Units 35
Restriction Flow Sensors 36

Two Phase & Multiphase Flow 38

Reynolds Number 41

SOME NOTES FOR THE METRIC PIPE FRICTION CHART SHOWN BELOW 43

FRICTION LOSS FOR METRIC PIPE, VALVES AND FITTINGS 44

PUMPS & COMPRESSORS 45

Centrifugal pump designs 45

Pump Affinity Laws 47

Performance Curves 49

Compressors and Expanders 52


CENTRIFUGAL COMPRESSORS 54

HEAT TRANSFER AND REACTION ENGINEERING 58

Thermal Conductivity 59

Conduction & Convection 61

Conduction: 61
Examples of conduction: 61
Convection: 61
Example of convection: 61

Insulation 62
Heat transfer coefficients and calculation 63
Heat exchangers, type and sizing 65
Steam Reboilers 69
Condensers and sub-cooling 70
Introduction to energy recovery 73

An Introduction to Pinch Technology 76


What is Pinch Technology? 76
Basic Concepts of Pinch Analysis 79
Steps of Pinch Analysis 79

Catalysts and Reaction Engineering 92


Chemical Reactions 93
Reaction Kinetics 95

Crude Distillation 96

Catalytic Cracking 99
Introduction 99
FCC Process Configuration 100
Main Characteristics 100
Equipment in FCC 101
Feedstock & Yield 101
Conclusion 101

Catalysis 103

CATALYSIS AND DISTILLATION 104

Distillation and Other Separation Processes 105

Distillation basics 106


ATMOSPHERIC DISTILLATION 106
Feeds and Products for Atmospheric Distillation 106
Feed Preheat Exchanger Train 107
Atmospheric Crude Fractionator 111
Trends and Variations in Atmospheric Unit Design 112
Phase behavior and vapour/liquid equilibrium 114
Gas/Liquid separation 117

Industrial uses of Fractional Distillation 128


Trays: function, pressure drop, efficiency, flooding, operations, and damage 129

Tower Capacity: 133


Equipment and Column Sizing 133
Pressure Drop 136
Column Height 137

Absorption & Adsorption 138


Separation > Absorption 138

Separation > Adsorption 138

Solid Liquid Separation 138


Introduction 139

MODULE 5 – PROCESS CONTROL & ECONOMICS 141

Process Control Basics 142


Measured Variables 142

Process Control Systems 142


Why Control? 142
Control Objectives 143
Techniques of Control 144

Process Economics 148


Refinery Economics 148
Crude Slate 148
Product Slate 151
22.1. Control of distillation columns. 153

FIGURE 1 FIRST LAW OF THERMODYNAMICS ......................................................................18
FIGURE 2 CONTROL VOLUME CONCEPTS...............................................................................19
FIGURE 3 OPEN SYSTEM CONTROL VOLUMES .....................................................................20
FIGURE 4 MULTIPLE CONTROL VOLUMES IN SAME SYSTEM..........................................21
FIGURE 5 STATIC HEAD................................................................................................................25
FIGURE 6 HEAD EXAMPLE ............................................................................................................27
FIGURE 7 HEAD EXAMPLE 2 ........................................................................................................29
FIGURE 8 SUCTION HEAD EXAMPLE ........................................................................................31
FIGURE 9 THREE DIFFERENT TYPES OF RESTRICTIONS COMMONLY ARE USED TO
CONVERT FLOW RATE TO A PRESSURE DIFFERENCE, P1 - P2.............................37
FIGURE 10 MULTIPHASE FLOW ..................................................................................................38
FIGURE 11 THREE DIFFERENT TYPES OF OBSTRUCTION FLOW METERS .................40
FIGURE 12 PIPE FRICTION HEAD LOSS NOMOGRAPH ......................................................43
FIGURE 13 FRICTION LOSS FOR FITTINGS...........................................................................44
FIGURE 14 CENTRIFUGAL PUMP PERFORMANCE CURVE .................................................49
FIGURE 15 PUMP CURVES SHOWING SPEED & DIAMETER.............................................50
FIGURE 16 COMPRESSOR TYPES...............................................................................................52
FIGURE 17 COMPRESSOR SELECTION NOMOGRAPH ........................................................53
FIGURE 18 CENTRIFUGAL COMPRESSOR FLOW RANGE ..................................................55
FIGURE 19 COMPRESSOR CURVES...........................................................................................55
FIGURE 20 TUBE & SHELL HEAT EXCHANGER ........................................................................66
FIGURE 21 PLATE TYPE HEAT EXCHANGER ...........................................................................67
FIGURE 22 PARALLEL FLOW HEAT EXCHANGER....................................................................67
FIGURE 23 COUNTER FLOW HEAT EXCHANGER ....................................................................68
FIGURE 24 CROSS FLOW HEAT EXCHANGER..........................................................................68
FIGURE 25 A STEAM REBOILER .................................................................................................69
FIGURE 26 REBOILER SCHEMATIC ...........................................................................................69
FIGURE 27 CONDENSER ...............................................................................................................70
FIGURE 28 CHANGE OF SECTION - CHANGE IN PRESSURE ...........................................72
FIGURE 29 A SIMPLE FLOW SCHEME WITH T-H PROFILE ...............................................77
FIGURE 30 IMPROVED FLOW SCHEME WITH T-H PROFILE.............................................77
FIGURE 31 GRAPHIC REPRESENTATION OF TRADITIONAL AND PINCH DESIGN
APPROACHES ...........................................................................................................................78
FIGURE 32 STEPS OF PINCH ANALYSIS..................................................................................80
FIGURE 33 HEAT TRANSFER EQUATION.................................................................................82
FIGURE 34 TEMPERATURE-ENTHALPY RELATIONS USED TO CONSTRUCT
COMPOSITE CURVES ............................................................................................................83
FIGURE 35 COMBINED COMPOSITE CURVES .......................................................................84
FIGURE 36 GRAND COMPOSITE CURVE..................................................................................85
FIGURE 37 HEN AREA MIN ESTIMATION FROM COMPOSITE CURVES........................86
FIGURE 38 ENERGY-CAPITAL COST TRADE OFF (OPTIMUM DTMIN) ..........................87
FIGURE 39 TYPICAL GRID DIAGRAM .......................................................................................89
FIGURE 40 FLUID CATALYTIC CRACKING ............................................................................100
FIGURE 41 EARLY BATCH FRACTIONATION ........................................................................106
FIGURE 42 DESALTING - SINGLE STAGE.............................................................................108
FIGURE 43 DESALTING - 2 STAGE .........................................................................................108
FIGURE 44 CRUDE UNIT FURNACE .........................................................................................109
FIGURE 45 TEMPERATURE-COMPOSITION DIAGRAM FOR AMMONIA-BUTANE AT
20.7 BAR..................................................................................................................................115
FIGURE 46 T-X-Y DIAGRAM FOR AMMONIA-BUTANE AT 20.7 BAR ...........................116
FIGURE 47 T-X-Y DIAGRAM FOR AMMONIA-BUTANE AT 4, 10, AND 20.7 BAR ....116
FIGURE 48 PARTICLE DIAMETERS OF TYPICAL CONTAMINANTS..............................118
FIGURE 49 COALESCER CUT-AWAY VIEW ...........................................................................120
FIGURE 50 AEROSOL SIZES ......................................................................................................121
FIGURE 51 COALESCER EFFICIENCY CHANGE VS. GAS FLOW RATE ........................122
FIGURE 52 LIQUID AEROSOL SEPARATION EFFICIENCY TEST SCHEMATIC ..........123

FIGURE 53 EFFECT OF CHEMICAL TREATMENT ON COALESCER PERFORMANCE.125
FIGURE 54 SCHEMATIC OF PALL LG COALESCER TEST STAND ..................................126
FIGURE 55 FIELD TEST RESULTS OF GAS STREAMS IN REFINERIES AND GAS
PROCESSING PLANTS.........................................................................................................127
FIGURE 56 TYPICAL DISTILLATION TOWERS IN OIL REFINERIES.............................128
FIGURE 57 VALVE TRAYS (PHOTOS COURTESY OF PAUL PHILLIPS).........................129
FIGURE 58 VAPOUR & LIQUID FLOW ACROSS COLUMN/TRAY .................................130
FIGURE 59 LIQUID DISTRIBUTORS - GRAVITY (LEFT), SPRAY (RIGHT)(PHOTOS
COURTESY OF PAUL PHILLIPS).......................................................................................131
FIGURE 60 TRAY PACKINGS ......................................................................................................132
FIGURE 61 STRUCTURED PACKING (PHOTO COURTESY OF PAUL PHILLIPS)........132
FIGURE 62 TYPICAL GRAVITY SEPARATION SYSTEM ......................................................139
FIGURE 63 FEEDBACK CONTROL LOOP ................................................................................145
FIGURE 64 LARGE MAGNITUDE DISTURBANCE.................................................................146
FIGURE 65 TIME DELAY ..............................................................................................................147
FIGURE 66 DISTILLATION COLUMN WITH SIX SINGLE-LOOP CONTROL SYSTEMS.
....................................................................................................................................................155
FIGURE 67 DISTILLATION COLUMN WITH SINGLE-LOOP AND CASCADE CONTROL
SYSTEMS .................................................................................................................................156

TABLES
TABLE 1 THERMAL CONDUCTIVITY PROPERTIES 60
TABLE 2 TYPICAL STREAM DATA 81
TABLE 3 TYPES OF LIQUID/GAS SEPARATORS 121
TABLE 4 COMPARISON OF THE DOP AND LASE 123

Consultant's Profile

• Ronald Frend B.Sc. M.Vib.Inst M.ThermographicInst.


o Shell Tankers (UK) Ltd
o 1970 – 1984
o Marine Engineer Certified Chief Engineer
• Petroleum Development (Oman)
o 1984 – 1989
o Rotating Equipment Specialist – Vibration Analysis
o Head of Maintenance Planning
o Head of Surface Maintenance (North Oman)
• Private Consultant
o 1989 – present
o Petro-Chemical,
o Manufacturing,
o Shipping,
o Process

Fundamentals And Hydraulics
Basics
• Process equipment and flow diagrams
• P&IDs
• Mass and energy balances

PFD - Process Flow Diagram

The Process Flow Diagram - PFD, a schematic illustration of the system

A Process Flow Diagram - PFD - (or System Flow Diagram - SFD) shows the
relationships between the major components in the system. A PFD also tabulates
process design values for the components in different operating modes, typically
minimum, normal and maximum. A PFD does not show minor components, piping
systems, piping ratings and designations.

A PFD should include:

• Process Piping
• Major equipment symbols, names and identification numbers
• Control valves and valves that affect operation of the system
• Interconnection with other systems
• Major bypass and recirculation lines
• System ratings and operational values as minimum, normal and maximum
flow, temperature and pressure
• Composition of fluids

This figure depicts a small and simplified PFD:

System Flow Diagrams should not include:

• pipe class
• pipe line numbers
• minor bypass lines
• isolation and shutoff valves
• maintenance vents and drains
• relief and safety valves
• code class information
• seismic class information

P&ID - Piping and Instrumentation Diagram
A Piping and Instrumentation Diagram - P&ID - is a schematic illustration of the
functional relationships between piping, instrumentation and system equipment
components.

A P&ID shows all piping, including the physical sequence of branches, reducers,
valves, equipment, instrumentation and control interlocks.

The P&ID is used to operate the process system.

A P&ID should include:

• Instrumentation and designations


• Mechanical equipment with names and numbers
• All valves and their identifications
• Process piping, sizes and identification
• Miscellaneous - vents, drains, special fittings, sampling lines, reducers,
increasers and swagers
• Permanent start-up and flush lines
• Flow directions
• Interconnection references
• Control inputs and outputs, interlocks
• Interfaces for class changes
• Seismic category
• Quality level
• Annunciation inputs
• Computer control system input
• Vendor and contractor interfaces
• Identification of components and subsystems delivered by others
• Intended physical sequence of the equipment

This figure depicts a very small and simplified P&ID:

A P&ID should not include:

• Instrument root valves


• control relays
• manual switches
• equipment rating or capacity
• primary instrument tubing and valves
• pressure, temperature and flow data
• elbows, tees and similar standard fittings
• extensive explanatory notes

P&ID / PFD Symbols

General Instrument or Function Symbols - 1

Instrument or Function Symbols - 2

General Instrument or Function Symbols - 3

General Instrument or Function Symbols - 4

FIRST LAW OF THERMODYNAMICS

Thermodynamics
The First Law of Thermodynamics states:
Energy can neither be created nor destroyed, only altered in form.
For any system, energy transfer is associated with mass and energy crossing the
control boundary, external work and/or heat crossing the boundary, and the change
of stored energy within the control volume. The mass flow of fluid is associated with
the kinetic, potential, internal, and "flow" energies that affect the overall energy
balance of the system. The exchange of external work and/or heat completes the
energy balance.
The First Law of Thermodynamics is referred to as the Conservation of Energy
principle, meaning that energy can neither be created nor destroyed, but rather
transformed into various forms as the fluid within the control volume is being studied.
The energy balance spoken of here is maintained within the system being studied.
The system is a region in space (control volume) through which the fluid passes. The
various energies associated with the fluid are then observed as they cross the
boundaries of the system and the balance is made.
A system may be one of three types: isolated, closed, or open. The open system,
the most general of the three, indicates that mass, heat, and external work are
allowed to cross the control boundary. The balance is expressed in words as: all
energies into the system are equal to all energies leaving the system plus the change
in storage of energies within the system.
Remember that energy in thermodynamic systems is composed of
• kinetic energy (KE),
• potential energy (PE),
• internal energy (U), and
• flow energy (Pν); as well as
• heat and work processes.

For most industrial plant applications that most frequently use cycles, there is no
change in storage (i.e. heat exchangers do not swell while in operation).
In equation form, the balance appears as indicated in the heat balance figure below:

Figure 1 First Law of Thermodynamics

Heat and/or work can be directed into or out of the control volume. But, for
convenience and as a standard convention, the net energy exchange is presented
here with the net heat exchange assumed to be into the system and the net work
assumed to be out of the system.
If no mass crosses the boundary, but work and/or heat do, then the system is
referred to as a "closed" system.
If mass, work and heat do not cross the boundary (that is, the only energy exchanges
taking place are within the system), then the system is referred to as an isolated
system. Isolated and closed systems are nothing more than specialized cases of the
open system. In this text, the open system approach to the First Law of
Thermodynamics will be emphasized because it is more general. Also, almost all
practical applications of the first law require an open system analysis.
An understanding of the control volume concept is essential in analyzing a
thermodynamic problem or constructing an energy balance. Two basic approaches
exist in studying Thermodynamics:
• the control mass approach and the
• control volume approach.
The former is referred to as the Lagrangian approach and the latter as the Eulerian
approach. In the control mass concept, a "clump" of fluid is studied with its
associated energies. The analyzer "rides" with the clump wherever it goes, keeping a
balance of all energies affecting the clump.

Figure 2 Control volume concepts

The control volume approach is one in which a fixed region in space is established
with specified control boundaries, as shown above. The energies that cross the
boundary of this control volume, including those with the mass crossing the
boundary, are then studied and the balance performed. The control volume approach
is usually used today in analyzing thermodynamic systems. It is more convenient
and requires much less work in keeping track of the energy balances.

Figure 3 Open system control volumes

Figure 4 Multiple Control Volumes in Same System
The forms of energy that may cross the control volume boundary include those
associated with the mass (m) crossing the boundary. Mass in motion has potential
(PE), kinetic (KE), and internal energy (U). In addition, since the flow is normally
supplied with some driving power (a pump for example), there is another form of
energy associated with the fluid caused by its pressure. This form of energy is
referred to as flow energy (Pν work). The thermodynamic terms thus representing
the various forms of energy crossing the control boundary with the mass are given as
m (u + Pν + ke + pe).
In open system analysis, the u and Pν terms occur so frequently that another
property, enthalpy, has been defined as h = u + Pν. This results in the above
expression being written as m (h + ke + pe). In addition to the mass and its energies,
externally applied work (W), usually designated as shaft work, is another form of
energy that may cross the system boundary.
In order to complete and satisfy the conservation of energy relationship, energy that
is caused by neither mass nor shaft work is classified as heat energy (Q). Then we
can describe the relationship in equation form as follows.
m(hin + pein + kein) + Q = m(hout + peout + keout) + W

where:
• m = mass flow rate
• h = specific enthalpy
• ke = specific kinetic energy
• pe = specific potential energy
• Q = net heat added to the system
• W = net work done by the system

Example 1 illustrates the use of the control volume concept while solving a first law
problem involving most of the energy terms mentioned previously.

Example 1:
Open System Control Volume
The enthalpies of steam entering and leaving a steam turbine are 1349
Btu/lbm and 1100 Btu/lbm, respectively.
The estimated heat loss is 5 Btu/lbm of steam.
The flow enters the turbine at 164 ft/sec at a point 6.5 ft above the discharge
and leaves the turbine at 262 ft/sec.
Determine the work done by the turbine.
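As a cross-check, the energy balance above can be evaluated directly; the sketch below is a minimal Python version of Example 1, assuming the usual English-unit conversion constants (gc = 32.17 lbm·ft/(lbf·s²), J = 778.16 ft·lbf/Btu), which are not stated in the text.

```python
# Open-system energy balance for Example 1 (English units).
# The conversion constants below are standard values, assumed rather than
# taken from the text.
G_C = 32.17    # lbm*ft/(lbf*s^2)
J = 778.16     # ft*lbf/Btu

def turbine_work(h_in, h_out, v_in, v_out, dz, q_loss):
    """Specific work out of the turbine, Btu per lbm of steam.

    h_in, h_out : enthalpies entering and leaving (Btu/lbm)
    v_in, v_out : velocities entering and leaving (ft/s)
    dz          : height of the inlet above the discharge (ft)
    q_loss      : heat lost to the surroundings (Btu/lbm)
    """
    delta_ke = (v_in**2 - v_out**2) / (2.0 * G_C * J)  # kinetic energy term, Btu/lbm
    delta_pe = dz / J                                  # potential energy term, Btu/lbm (g/gc ~ 1 lbf/lbm)
    return (h_in - h_out) + delta_ke + delta_pe - q_loss

w = turbine_work(h_in=1349, h_out=1100, v_in=164, v_out=262, dz=6.5, q_loss=5)
print(f"Turbine work = {w:.1f} Btu/lbm")   # roughly 243 Btu/lbm
```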

Hydraulics & Fluid Flow
• Pressure and head
• Bernoulli’s theorem and its field applications
• Flow of liquids
• Reynolds number and pressure drop in pipes
• Two-phase and multi-phase flow
• Pumps and compressors
• Mixing and mixers

Pressure & Head

It turns out that head is a very convenient term in the pumping business. Pressure is
not as convenient a term because the amount of pressure that the pump will deliver
depends upon the weight (specific gravity) of the liquid being pumped and the
specific gravity changes with the fluid temperature and concentration.

Each litre of liquid has weight, so we can easily calculate the kilograms per minute
being pumped. Head, or height, is measured in meters, so if we multiply these two
together we get kilogram-meters per minute, which converts directly to power at the
rate of approximately 6,120 kgf·m/min = 1 kilowatt.

If you are more comfortable with metric horsepower units, you should know that
735.5 watts makes one metric horsepower.

If you refer to the figure below you should get a clear picture of what is meant by
static head. Please note that we always measure from the centreline of the pump to
the highest liquid level.

Figure 5 Static Head

To calculate head accurately we must calculate the total head on both the suction
and discharge sides of the pump. In addition to the static head we will learn that there
is a head caused by resistance in the piping, fittings and valves called friction head,
and an additional head caused by any pressure that might be acting on the liquid in
the tanks, including atmospheric pressure. This head is called "surface pressure
head".

Once we know all of these heads it gets simple. We subtract the suction head from
the discharge head and the head remaining will be the amount of head that the pump
must be able to generate at its rated flow.

Here is how it looks in a formula:

System head = total discharge head - total suction head or H = hd - hs

The total discharge head is made from three separate heads:

hd = hsd + hpd + hfd

• hd = total discharge head


• hsd = discharge static head
• hpd = discharge surface pressure head
• hfd = discharge friction head

The total suction head also consists of three separate heads

hs = hss + hps - hfs

• hs = total suction head


• hss = suction static head
• hps = suction surface pressure head
• hfs = suction friction head

As we make these calculations you must be sure that all your calculations are made
in either "meters of liquid gauge" or "meters of liquid absolute". In case you have
forgotten "absolute" means that you have added atmospheric pressure (head) to the
gauge reading.

Normally head readings are made in gauge readings and we switch to the absolute
readings only when we want to calculate the net positive suction head available
(NPSHA) to find out if our pump is going to cavitate. We use the absolute term for
these calculations because we are often calculating a vacuum or using negative
numbers

We will begin by making some actual calculations. You will not have to look up the
friction numbers because I am going to give them to you, but you can find them in a
number of publications and these charts:

• Piping friction losses, metric,


• Valves and fittings losses, metric,

The next illustration (Figure 6) shows that the discharge head is still measured to
the liquid level, but you will note that it is now below the maximum height of the
piping.

Although the pump must deliver enough head to get up to the maximum piping height
it will not have to continue to deliver this head when the pump is running because of
the "siphon effect". There is of course a maximum siphon effect. It is derived from the
formula to convert pressure to head:

head (meters) = pressure (bar) x 10.2 / specific gravity

Since atmospheric pressure at sea level is one bar, we get a maximum siphon
distance of 10.2 meters.

Figure 6 Head example

We will begin with the total suction head calculation

• The suction head is negative because the liquid level in the suction tank is
below the centreline of the pump:
o hss = -2 meters
• The suction tank is open so the suction surface pressure equals atmospheric
pressure :
o hps = 0 meters gauge

In these examples you will not be calculating the suction friction head. When you
learn how, you will find that there are two ways to do it:

• You would look at the charts and add up the K factors for the various fittings
and valves in the piping. You would then multiply these K factors by the
velocity head that is shown for each of the pipe sizes and capacities. This
final number would be added to the friction loss in the piping for the total
friction head.
• Or, you can look at a chart that shows the equivalent length of pipe for each
of the fittings and add this number to the length of the piping in the system to
determine the total friction loss.

For this example, I will tell you the total friction head on the suction side of the pump
is:

hfs = 1.5 meters at rated flow

• The total suction head is going to be a gauge value because atmosphere was
given as 0,
o hs = hss + hps - hfs = - 2 + 0 - 1.5 =
- 3.5 meters of liquid gauge at rated flow

• The total discharge head calculation is similar
o The static discharge head is:
 hsd = 40 meters
o The discharge tank is also open to atmospheric pressure, so:
 hpd = 0 meters, gauge
o I will give you the discharge friction head as:
 hfd = 7 meters at rated flow
o The total discharge head is:
 hd = hsd + hpd + hfd = 40 + 0 +7 =
47 meters of liquid gauge at rated flow

The total system head calculation becomes:

Head = hd - hs
= 47 - (-3.5)

= 50.5 meters of liquid at rated flow

Our next example involves a few more calculations, but you should be able to handle
them without any trouble.

If we were pumping from a vented suction tank to an open tank at the end of the
discharge piping we would not have to consider vacuum and absolute pressures. In
this example we will be pumping from a vacuum receiver that is very similar to the
hotwell we find in many condenser applications

Again, to make the calculations you will need some pipe friction numbers that are
available from charts:

• Piping friction losses, metric,


• Valves and fittings losses, metric,

I will give you the friction numbers for the following examples.

Specifications:

• Transferring 300 m3/hr weak acid from the vacuum receiver to the storage
tank
• Specific Gravity = 0.98
• Viscosity = equal to water
• Piping = all 150 mm Schedule 40 steel pipe
• Discharge piping rises 15 meters vertically above the pump centreline and
then runs 135 meters horizontally. There is one 90° elbow in this line
• Suction piping has 1.5 meters of pipe, one gate valve, and one 90° elbow, all
of which are 150 mm in diameter.
• The minimum level in the vacuum receiver is 2 meters above the pump
centreline.
• The pressure on top of the liquid in the vacuum receiver is 500 mm of
mercury, vacuum.

Figure 7 Head example 2

To calculate suction surface pressure use the following formula:

Now that you have all of the necessary information we will begin by dividing the
system into two different sections using the pump as the dividing line.

Total suction head calculation

• The suction side of the system shows a minimum static head of 2 meters
above suction centreline. Therefore, the static suction head is
o hss = 2 meters
• Using the first conversion formula, the suction surface pressure is:
o hps = - 7.14 meters of liquid

• The suction friction head hfs equals the sum of all the friction losses in the
suction line. If you referenced the metric pipe friction loss table you would
learn that the friction loss in 150 mm. pipe at 300 m3/hr is 9 meters per 100
meters of pipe.

In 1.5 meters of pipe, friction loss = 1.5/100 x 9 = 0.14 meters

Fitting Equivalent length of straight pipe


150 mm normal bend elbow 3.4 meters
150 mm Gate valve 2.1 meters

In a real life pumping application there would be other valves and fittings that
experience friction losses. You might find:

• Check valves
• Foot valves
• Strainers
• Sudden enlargements
• Shut off valves
• Entrance and exit losses

The loss in the suction fittings (3.4 + 2.1 = 5.5 meters of equivalent pipe) becomes:

In 5.5 meters of equivalent pipe, friction loss = 5.5/100 x 9 = 0.50 meters


The total friction loss on the suction side is:
hfs = 0.14 + 0.50 = 0.64 meters at 300 m3/hr
The total suction head then becomes:
hs = hss + hps - hfs = 2 - 7.14 - 0.64
= - 5.78 meters gauge at 300 m3/hr

Now we will look at the total discharge head calculation

• Static discharge head = hsd = 15 meters


• Discharge surface pressure = hpd = 0 meters gauge
• Discharge friction head = hfd = sum of the following losses :

Friction loss in 150 mm pipe at 300 m3/hr, from the chart, is 9 meters per hundred
meters of pipe.

• In 150 meters of pipe the friction loss = 150/100 x 9 = 13.5 meters


• Friction loss in the 150 mm elbow = 3.4/100 x 9 = 0.31 meters

The discharge friction head is the sum of the above losses, that is:

hfd = 13.5 + .31 = 13.81 meters at 300 m3/hr


The total discharge head then becomes:
hd = hsd + hpd + hfd
= 15 + 0 + 13.81 = 28.81 meters at 300 m3/hr.
Total system head calculation:
H = hd - hs
= 28.81 - (-5.78)
= 34.59 meters at 300 m3/hr
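The calculation above can be reproduced in a few lines; the sketch below is a minimal Python version using the 9 m per 100 m friction figure, the fitting equivalent lengths and the surface pressure head quoted in the example (the small differences from the text are rounding).

```python
# Total system head for the vacuum-receiver example at 300 m3/hr.
FRICTION_PER_100M = 9.0  # metres of head lost per 100 m of 150 mm pipe (from the chart)

def friction_loss(pipe_m, equivalent_fittings_m=0.0):
    """Friction head for a run of pipe plus the equivalent length of its fittings."""
    return (pipe_m + equivalent_fittings_m) / 100.0 * FRICTION_PER_100M

# Suction side: 2 m static head, -7.14 m surface pressure head (500 mm Hg vacuum),
# 1.5 m of pipe plus one elbow (3.4 m) and one gate valve (2.1 m) equivalent length.
hss, hps = 2.0, -7.14
hfs = friction_loss(1.5, 3.4 + 2.1)
hs = hss + hps - hfs

# Discharge side: 15 m static head, open tank, 150 m of pipe plus one elbow.
hsd, hpd = 15.0, 0.0
hfd = friction_loss(150.0, 3.4)
hd = hsd + hpd + hfd

print(f"Total suction head   hs = {hs:.2f} m")      # about -5.8 m
print(f"Total discharge head hd = {hd:.2f} m")      # about 28.8 m
print(f"System head          H  = {hd - hs:.2f} m") # about 34.6 m
```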

Our next example will be the same as the one we just finished except that there is an
additional 3 meters of pipe and another 90° flanged elbow in the vertical leg.

The total suction head will be the same as in the previous example. Take a look at
the figure below

Figure 8 Suction head example

Nothing has changed on the suction side of the pump so the total suction head will
remain the same:

hs = - 5.78 meters at 300 m3/hr

Total discharge head calculation

• The static discharge head (hsd) will change from 15 meters to 12 meters
since the highest liquid surface in the discharge is now only 12 meters above
the pump centreline. This value is based on the assumption that the vertical
leg in the discharge tank is full of liquid and that as this liquid falls it will tend
to pull the liquid up and over the loop in the pipeline. This arrangement is
called a siphon leg.
• The discharge surface pressure is unchanged:
• hpd = 0 meters
• The additional 3 meters of pipe and the additional elbow will increase the
friction loss in the discharge pipe.

In 3 meters of pipe the friction loss = 3 / 100 x 9 = 0.27 meters

The friction loss in the additional elbow = 3.4 / 100 x 9 = 0.31 meters

The discharge friction head will then increase as follows:

hfd = 13.81 + 0.27 + 0.31 = 14.39 meters at 300 m3/hr.


The total discharge head becomes:
hd = hsd + hpd + hfd
= 12 + 0 + 14.39
= 26.39 meters at 300 m3/hr

Total system head calculation

Head = hd - hs
= 26.39 - (-5.78)
= 32.17 meters at 300 m3/hr.

Bernoulli’s Theorem
Bernoulli's theorem: in fluid dynamics, relation among the pressure, velocity, and
elevation in a moving fluid (liquid or gas), the compressibility and viscosity (internal
friction) of which are negligible and the flow of which is steady, or laminar. First
derived (1738) by the Swiss mathematician Daniel Bernoulli, the theorem states, in
effect, that the total mechanical energy of the flowing fluid, comprising the energy
associated with fluid pressure, the gravitational potential energy of elevation, and the
kinetic energy of fluid motion, remains constant.
Bernoulli's theorem is the principle of energy conservation for ideal fluids in steady, or
streamline, flow. Bernoulli's theorem implies, therefore, that if the fluid flows
horizontally so that no change in gravitational potential energy occurs, then a
decrease in fluid pressure is associated with an increase in fluid velocity. If the fluid is
flowing through a horizontal pipe of varying cross-sectional area, for example, the
fluid speeds up in constricted areas so that the pressure the fluid exerts is least
where the cross section is smallest. This phenomenon is sometimes called the
Venturi effect, after the Italian scientist G.B. Venturi (1746-1822), who first noted the
effects of constricted channels on fluid flow.
Bernoulli's theorem is the basis for many engineering applications, such as aircraft-
wing design. The air flowing over the upper curved surface of an aircraft wing moves
faster than the air beneath the wing, so that the pressure underneath is greater than
that on the top of the wing, causing lift.

How pressure and velocity interact


static pressure + dynamic pressure = total pressure = constant
static pressure + 1/2 x density x velocity² = total pressure = constant
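As a simple numerical illustration of the relation above, the sketch below applies it across a horizontal pipe constriction with friction ignored; the density and the two velocities are illustrative values, not taken from the text.

```python
# Bernoulli along a horizontal, frictionless streamline:
# static + 1/2 * rho * v^2 = constant, so the static pressure falls where the
# fluid speeds up (the Venturi effect).
RHO = 1000.0  # kg/m^3, roughly water

def downstream_static_pressure(p1, v1, v2, rho=RHO):
    """Static pressure after an area change, holding total pressure constant."""
    return p1 + 0.5 * rho * (v1**2 - v2**2)

p1 = 200_000.0      # Pa, upstream static pressure (illustrative)
v1, v2 = 2.0, 6.0   # m/s, the fluid speeds up in the constricted section

p2 = downstream_static_pressure(p1, v1, v2)
print(f"Static pressure falls from {p1/1000:.0f} kPa to {p2/1000:.0f} kPa")
# 200 kPa -> 184 kPa: faster flow, lower static pressure
```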

General Concept:
The Bernoulli effect is simply a result of the conservation of energy. The work done
on a fluid (a fluid is a liquid or a gas), the pressure times the volume, is equal to the
change in kinetic energy of the fluid.
General Facts:
Where there is slow flow in a fluid, you will find increased pressure.
Where there is increased flow in a fluid, you will find decreased pressure.
In a real flow, friction plays a large role - a lot of times you must have a large
pressure drop (decrease in pressure) just to overcome friction.
This is the case in your house. Most water pipes have small diameters (large friction),
hence the need for "water pressure" - it is the energy from that pressure drop that
goes to friction.

Example: the showerhead

A showerhead (if you have a fancy one) has a number of different operation modes.
If you go for the "massage" mode, you are moving a little water fast. For the "lite
shower," you are moving a lot of water slowly. It takes the same amount of energy to
move a little water fast as it does to move a lot of water slowly. This is the amount of
energy you have due to your "water pressure".

Example
When a liquid runs freely through a pipe of a constant area (B), to which three
ascension pipes (D,E,F) are connected, the static pressure will decrease along the
dashed line towards the outlet (Fig. 1). The pressure decreases as a result of friction
loss in the horizontal pipe.

Fig. 1

In (Fig.2) the area has been changed in two places, with a thinner pipe at section (G)
and a thicker pipe at section (H). The following occurs:

Section (G)

The resultant constriction causes the liquid to move at a higher speed, increasing the
dynamic pressure, with the result that the static pressure in pipe (D) falls below the
dashed line.

Section (H)

In section (H), which has a much larger area, the static pressure rises above the
dashed line, the speed of the liquid having decreased due to the larger area, with the
result that the dynamic pressure will be decreased.

Fig. 2

Liquid Flow
Flow Units
The units used to describe the flow measured can be of several types, depending on
how the specific process needs the information. The most common descriptions are
the following:
1. Volume flow rate: expressed as a volume delivered per unit time.
Typical units are gallons/min, m3/hr, ft3/hr.
2. Flow velocity: expressed as the distance the liquid travels in the
carrier per unit time. Typical units are m/min, ft/min. This is related to
the volume flow rate by

V = Q/A (5.35)

where V = flow velocity


• Q = volume flow rate
• A = cross-sectional area of flow carrier (pipe, and so on)
3. Mass or weight flow rate: expressed as mass or weight flowing per unit time.
Typical units are kg/hr, lb/hr. This is related to the volume flow rate by
F = ρQ (5.36)
where F = mass or weight flow rate
• ρ = mass density or weight density
• Q = volume flow rate
EXAMPLE
Water is pumped through a 1.5-in diameter pipe with a flow velocity of 2.5 ft per
second. Find the volume flow rate and weight flow rate. The weight density is 62.4
lb/ft3.
Solution
The flow velocity is given as 2.5 ft/s, so the volume flow rate can be found from
Equation (5.35), Q = VA. The area is given by
A = πd²/4
where the diameter d = (1.5 in)(1/12 ft/in) = 0.125 ft
so that
A = (3.14)(0.125)²/4 = 0.0123 ft².
Then, the volume flow rate is
• Q = (2.5 ft/s)(0.0123 ft²)(60 s/min)
• Q = 1.8 ft3/min
The weight flow rate is found from Equation (5.36):
F = (62.4 lb/ft3)(1.8 ft3/min)
F = 112 lb/min
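The same example can be worked in a few lines of Python; this is a minimal sketch of Equations (5.35) and (5.36) with the numbers above.

```python
# Volume and weight flow rate for water in a 1.5 in pipe at 2.5 ft/s.
import math

d = 1.5 / 12.0                  # pipe diameter, ft
area = math.pi * d**2 / 4.0     # flow area, ft^2 (about 0.0123 ft^2)

velocity = 2.5                  # flow velocity, ft/s
q = velocity * area * 60.0      # Q = V*A, converted to ft^3/min
f = 62.4 * q                    # F = rho*Q, lb/min (weight density 62.4 lb/ft^3)

print(f"Q = {q:.2f} ft3/min")   # about 1.84 ft3/min
print(f"F = {f:.0f} lb/min")    # about 115 lb/min (the text rounds Q to 1.8, giving 112)
```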

Restriction Flow Sensors
One of the most common methods of measuring the flow of liquids in pipes is by
introducing a restriction in the pipe and measuring the pressure drop that results
across the restriction. When such a restriction is placed in the pipe, the velocity of the
fluid through the restriction increases, and the pressure in the restriction decreases.
We find that there is a relationship between the pressure drop and the rate of flow
such that as the flow increases, the pressure drop across the restriction increases. In
particular, one can find an equation of the form

Q = K √Δp (5.37)

where Q = volume flow rate


K = a constant for the pipe and liquid type
Δp = drop in pressure across the restriction
The constant K depends on many factors, including the type of liquid, size of pipe,
velocity of flow, temperature, and so on. The type of restriction employed also will
change the value of the constant used in this equation. The flow rate is linearly
dependent not on the pressure drop, but on the square root. Thus, if the pressure
drop in a pipe increased by a factor of 2 when the flow rate was increased, the flow
rate will have increased only by a factor of 1.4 (the square root of 2). Certain standard
types of restrictions are employed in exploiting the pressure-drop method of
measuring flow.
Figure 9 shows the three most common methods. It is interesting to note that
having converted flow information to pressure, we now employ one of the methods of
measuring pressure, often by conversion to displacement, which is measured by a
displacement sensor before finally getting a signal that will be used in the process-
control loop. The most common method of measuring the pressure drop is to use a
differential pressure sensor. These are often described by the name DP cell.

Figure 9 Three different types of restrictions commonly are used to convert flow rate to a
pressure difference, p1 - p2
EXAMPLE
Flow is to be controlled from 20 to 150 gal/min. The flow is measured using an orifice
plate system such as that shown in Figure 9 (c). The orifice plate is described by
Equation (5.37) with K = 119.5 (gal/min)/psi^1/2. A bellows measures the pressure with
an LVDT so that the output is 1.8 V/psi. Find the range of voltages that result from
the given flow range.
Solution
From Equation (5.37), we find the pressures that result from the given flow:
Δp = (Q/K)²
For 20 gal/min
Δp = (20/119.5)² = 0.0280 psi
and for 150 gal/min
Δp = (150/119.5)² = 1.5756 psi
Because there are 1.8 V/psi, the voltage range is easily found.
For 20 gal/min, V = 0.0280(1.8) = 0.0504 V
For 150 gal/min, V = 1.5756(1.8) = 2.836 V
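A minimal Python sketch of the same calculation, using the K value and the 1.8 V/psi LVDT sensitivity given in the example:

```python
# Orifice-plate flow measurement: Q = K * sqrt(dp), so dp = (Q/K)^2.
K = 119.5            # (gal/min) per psi^0.5, from the example
VOLTS_PER_PSI = 1.8  # LVDT output sensitivity

def dp_from_flow(q_gpm):
    """Differential pressure (psi) across the orifice at a flow of q_gpm gal/min."""
    return (q_gpm / K) ** 2

for q in (20.0, 150.0):
    dp = dp_from_flow(q)
    print(f"{q:5.0f} gal/min -> {dp:.4f} psi -> {dp * VOLTS_PER_PSI:.4f} V")
# 20 gal/min  -> 0.0280 psi -> 0.0504 V
# 150 gal/min -> 1.5756 psi -> 2.8361 V
```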

Two Phase & Multiphase Flow
Multiphase flow means that a mixture of two or more substances (gases, liquids or
solid particles) flow together without being dissolved in each other. Such flows are
ubiquitous - e.g. in the atmosphere, in the food industry, in cooling systems, in the
process industry and in petroleum reservoirs.

Figure 10 Multiphase flow


Multiphase flow is simultaneous flow of gas, liquid and/or solid phases in different
combinations. Multiphase flows, particularly two-phase flows, are probably the most
common flow cases in nature, examples being the flow of blood in our body and drift
of clouds in the sky. The bubbles rising in a glass of soda provide another good
example of multiphase flow. Multiphase flows are also important in most industrial
applications, such as energy conversion, paper manufacturing, food manufacturing,
bio-technological and medical applications.
In industry, controlling multiphase flow is essential for efficiency, quality and
profitability and often has a decisive effect on environmental aspects.
In oil and gas production, multiphase flow often occurs in wells and pipelines
because the wells produce gas and oil simultaneously. This is called two-phase flow.
In addition to gas and oil, water is also often produced at the same time. This is
called three-phase flow. In the North Sea, the oil companies previously often built
large production platforms standing on the sea floor, equipped with process facilities
separating gas, oil and water. The gas was sent to market in one pipeline, while the
oil was shipped directly or sent to shore in another pipeline. Today this is usually too
expensive. Instead the operators often choose subsea developments where the
untreated well stream is sent directly from a subsea template to an existing platform
or to shore in one multiphase pipeline. In cases where a subsea development with
multiphase transport is feasible, billions may be saved by dispensing with costly
platforms.
Multiphase transport implies many new challenges. Under unfavourable conditions,
oil and water may be expelled from the pipeline in large batches (slug flow) which
can disturb the receiving facilities. Oil and water may form thick emulsions which give
high pressure losses and reduced production. Wax and hydrates (an ice-like
substance) may precipitate and block the pipe. Unfavourable water chemistry may
lead to fatal corrosion attacks piercing the pipe. Before commissioning a field, it is
important to be able to predict possible production problems and to predict flow
patterns and pressure losses as accurately as possible so that pipelines and process
plants may be designed optimally.

Obstruction Flow Sensor
Another type of flow sensor operates by the effect of flow on an obstruction placed in
the flow stream. In a rotameter, the obstruction is a float that rises in a vertical
tapered column. The lifting force and thus the distance to which the float rises in the
column is proportional to the flow rate. The lifting force is produced by the differential
pressure that exists across the float, because it is a restriction in the flow. This type
of sensor is used for both liquids and gases. A moving vane flow meter has a vane
target immersed in the flow region, which is rotated out of the flow as the flow velocity
increases. The angle of the vane is a measure of the flow rate. If the rotating vane
shaft is attached to an angle-measuring sensor, the flow rate can be measured for
use in a process-control application. A turbine type of flow meter is composed of a
freely spinning turbine blade assembly in the flow path. The rate of rotation of the
turbine is proportional to the flow rate. If the turbine is attached to a tachometer, a
convenient electrical signal can be produced. In all of these methods of flow
measurement, it is necessary to introduce a substantial obstruction into the flow path to
measure the flow. For this reason, these devices are used only when an obstruction
does not cause any unwanted reaction on the flow system.

Magnetic Flow Meter


It can be shown that if charged particles move across a magnetic field, a potential is
established across the flow, perpendicular to the magnetic field. Thus, if the flowing
liquid is also a conductor (even if not necessarily a good conductor) of electricity, the
flow can be measured by allowing the liquid to flow through a magnetic field and
measuring the transverse potential produced. The pipe section in which this
measurement is made must be insulated and a nonconductor itself, or the potential
produced will be cancelled by currents in the pipe. This type of sensor produces an
electrical signal directly and is convenient for process-control applications involving
conducting fluid flow.

Figure 11 Three different types of obstruction flow meters

Reynolds Number

Flow performance can be affected by a dimensionless quantity called the Reynolds
Number. It is defined as the ratio of the liquid's inertial forces to its viscous (drag) forces.

Laminar and turbulent flow are the most common flow regimes in liquid flow
measurement operations, but there is also transitional flow.

If we want to calculate the Reynolds number, we can use the following equation:

R = (3160 x Q x Gt) / (D x µ)

where:

R = Reynolds number
Q = liquid's flow rate, gpm
Gt = liquid's specific gravity
D = inside pipe diameter, in.
µ = liquid's viscosity, cp

• When the Reynolds number is less than 2000, flow will be described as
laminar
• When the Reynolds number is greater than 4000, flow will be described as
turbulent
• When the Reynolds number is in the range of 2000 to 4000 the flow is
considered transitional.

Viscosity can be a major factor that affects the value of the Reynolds number.

For example, you may find that highly viscous hydraulic oils exhibit laminar flow in most
conditions, while low-viscosity liquids such as water are usually turbulent.
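A minimal sketch of the formula and regime limits above; the two sets of input values are only illustrative, not taken from the text.

```python
# Reynolds number in the units used above: Q in gpm, D in inches, viscosity in cP.
def reynolds_number(q_gpm, specific_gravity, pipe_id_in, viscosity_cp):
    return 3160.0 * q_gpm * specific_gravity / (pipe_id_in * viscosity_cp)

def flow_regime(re):
    if re < 2000:
        return "laminar"
    if re > 4000:
        return "turbulent"
    return "transitional"

# Illustrative comparison: water versus a viscous hydraulic oil in a 4 in line.
water = reynolds_number(100, 1.0, 4.0, 1.0)     # ~79,000 -> turbulent
oil = reynolds_number(100, 0.87, 4.0, 200.0)    # ~340    -> laminar
print(flow_regime(water), flow_regime(oil))
```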

SOME NOTES FOR THE METRIC PIPE FRICTION CHART SHOWN
BELOW
• The chart is calculated for fresh water at 15°C.
• Use actual bores rather than nominal pipe size.
• For stainless steel pipe multiply the numbers by 1.1.
• For steel pipe multiply the numbers by 1.3
• For cast iron pipe multiply the numbers by 1.7
• The losses are calculated for a fluid viscosity similar to fresh water.

Figure 12 Pipe friction head loss nomograph
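A small sketch of how the material correction factors listed above would be applied to a head-loss value read from the chart; the 9 m per 100 m chart reading used here is only an illustration.

```python
# Material correction factors for the metric pipe friction chart (fresh water, 15 C).
MATERIAL_FACTOR = {
    "stainless steel": 1.1,
    "steel": 1.3,
    "cast iron": 1.7,
}

def corrected_friction_loss(chart_loss_m_per_100m, material):
    """Chart head loss multiplied by the factor for the pipe material."""
    return chart_loss_m_per_100m * MATERIAL_FACTOR[material]

print(corrected_friction_loss(9.0, "steel"))      # 9 m/100 m becomes 11.7 m/100 m for steel
print(corrected_friction_loss(9.0, "cast iron"))  # and 15.3 m/100 m for cast iron
```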

FRICTION LOSS FOR METRIC PIPE, VALVES AND FITTINGS

Figure 13 Friction Loss for fittings

Pumps & Compressors
Centrifugal pump designs
The overwhelming majority of contractor pumps use centrifugal force to move water.
Centrifugal force is defined as the action that causes something, in this case water,
to move away from its center of rotation.
All centrifugal pumps use an impeller and volute to create the partial vacuum and
discharge pressure necessary to move water through the casing. The impeller and
volute form the heart of the pump and help determine its flow, pressure and solid
handling capability.
An impeller is a rotating disk with a set of vanes coupled to the engine/motor shaft
that produces centrifugal force within the pump casing. A volute is the stationary
housing (in which the impeller rotates) that collects, discharges and recirculates
water entering the pump. A diffuser is used on high pressure pumps and is similar to
a volute but more compact in design. Many types of material can be used in their
manufacture but cast iron is most commonly used for construction applications.
In order for a centrifugal pump, or self-priming pump, to attain its initial prime, the
casing must first be manually primed or filled with water. Afterwards, unless it is run
dry or drained, a sufficient amount of water should remain in the pump to ensure
quick priming the next time it is needed.

As the impeller churns the water (see figure above), it purges air from the casing
creating an area of low pressure, or partial vacuum, at the eye (centre) of the
impeller. The weight of the atmosphere on the external body of water pushes water
rapidly through the hose and pump casing toward the eye of the impeller.
Centrifugal force created by the rotating impeller pushes water away from the eye,
where pressure is lowest, to the vane tips where the pressure is highest. The velocity
of the rotating vanes pressurizes the water forced through the volute and discharges
it from the pump.
Water passing through the pump brings with it solids and other abrasive material that
will gradually wear down the impeller or volute. This wear can increase the distance
between the impeller and the volute resulting in decreased flows, decreased heads
and longer priming times. Periodic inspection and maintenance is necessary to keep
pumps running like new.

Another key component of the pump is its mechanical seal. This spring loaded
component consists of two faces, one stationary and another rotating, and is located
on the engine shaft between the impeller and the rear casing (see figure below). It is
designed to prevent water from seeping into and damaging the engine. Pumps
designed for work in harsh environments require a seal that is more abrasion
resistant than pumps designed for regular household use.

Typically seals are cooled by water as it passes through the pump. If the pump is dry
or has insufficient water for priming, the mechanical seal could be damaged. Oil-
lubricated and occasionally grease-lubricated seals are available on some pumps that
provide positive lubrication in the event that the pump is run without water. The seal
is a common wear part that should be periodically inspected.

Pump Affinity Laws
There are occasions when you might want to permanently change the amount of fluid
you are pumping, or change the discharge head of a centrifugal pump. There are four
ways you could do this:
• Regulate the discharge of the pump.
• Change the speed of the pump.
• Change the diameter of the impeller.
• Buy a new pump
Of the four methods the middle two are the only sensible ones. In the following
paragraphs we will learn what happens when we change either the pump speed or
impeller diameter and as you would guess other characteristics of the pump are
going to change along with speed or diameter.
To determine what is going to happen you begin by taking the new speed or impeller
diameter and divide it by the old speed or impeller diameter. Since changing either
one will have approximately the same effect, I will be referring to only the speed in
this part of the discussion.
As an example: increasing the speed from 1750 rpm to 3500 rpm gives a ratio of
3500/1750 = 2, while halving the speed from 3000 rpm to 1500 rpm gives a ratio of 0,5.

The capacity, or amount of fluid you are pumping, varies directly with this number.
Example: 100 Gallons per minute x 2 = 200 Gallons per minute
Or in metric, 50 Cubic meters per hour x 0,5 = 25 Cubic meters per hour
The head varies by the square of the number.
Example: a 50 foot head x 4 (2²) = 200 foot head
Or in metric, a 20 meter head x 0,25 (0,5²) = 5 meter head
The horsepower required changes by the cube of the number.
Example: a 9 horsepower motor was required to drive the pump at 1750 rpm. How
much is required now that you are going to 3500 rpm?
We would get: 9 x 8 (2³) = 72 horsepower is now required.
Likewise if a 12 kilowatt motor were required at 3000 rpm and you decreased
the speed to 1500 rpm, the new kilowatts required would be: 12 x 0,125 (0,5³) =
1,5 kilowatts required for the lower rpm.
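The affinity laws above can be written as one small helper; this sketch reproduces the worked numbers, where the ratio argument is the new speed (or impeller diameter) divided by the old.

```python
# Affinity laws: capacity scales with the ratio, head with its square,
# power with its cube.
def scale_pump(capacity, head, power, ratio):
    return capacity * ratio, head * ratio**2, power * ratio**3

# Doubling speed (1750 rpm -> 3500 rpm): 100 gpm, 50 ft, 9 hp
print(scale_pump(100, 50, 9, 2.0))   # (200.0, 200.0, 72.0)

# Halving speed (3000 rpm -> 1500 rpm): 50 m3/hr, 20 m, 12 kW
print(scale_pump(50, 20, 12, 0.5))   # (25.0, 5.0, 1.5)
```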

The following relationships are not exact, but they give you an idea of how
speed and impeller diameter affect other pump functions.
The net positive suction head required by the pump manufacturer (NPSHR) varies by
the square of the number.
Example: a 3 meter NPSHR x 4 (2²) = 12 meter NPSHR
Or: 10 foot NPSHR x 0.25 (0.5²) = 2.5 foot NPSHR
The amount of shaft run out (deflection) varies by the square of the number.
As an example: if you put a dial indicator on the shaft and noticed that the total run
out at 1750 rpm was 0.005 inches, then at 3500 rpm the run out would be 0.005" x 4
(2²), or 0.020 inches.
Likewise if you had 0,07 mm run out at 2900 rpm and you slowed that shaft down to
1450 rpm, the run out would decrease to 0,07 mm x 0,25 (0,5²) or 0,018 mm.
The amount of friction loss in the piping varies by about 90% of the square of the
number. Fittings and accessories vary by almost the square of the number.
As an example: if the system head loss was calculated or measured at 65 meters at
1450 rpm, the loss at 2900 rpm would be: 65 meters x 4 (2²) = 260 x 0.9 = 234
meters.
If you had a 195 foot loss at 3500 rpm, the loss at 1750 rpm would be: 195 x 0.25
(0.5²) = 48.75 x 0.9 = 43.87 feet of head loss.
The wear rate of the components varies by the cube also.
Example: at 1750 rpm the impeller material is wearing at the rate of 0.020 inches
per month. At 3500 rpm the rate would increase to 0.020" x 8 (2³) or 0.160 inches
per month. Likewise a halving of speed would decrease the wear rate by a factor of
eight.
We started this section by stating that a change in impeller speed or a change in
impeller diameter has approximately the same effect. This is true only if you
decrease the impeller diameter by a maximum of 10%. As you cut down the impeller
diameter the housing is not coming down in size, so the affinity laws do not remain
accurate beyond this 10% maximum reduction.
The affinity laws remain accurate for speed changes and this is important to
remember when we convert from jam packing to a balanced mechanical seal. We
sometimes experience an increase in motor speed rather than a drop in amperage
during these conversions and the affinity laws will help you to predict the final
outcome of the change.

Performance Curves

Figure 14 Centrifugal Pump Performance Curve


Please look at the above illustration. You will note that we have plotted the head of
the pump against its capacity. The head of a pump is read in feet or meters. The
capacity units will be either gallons per minute, liters per minute, or cubic meters per
hour.
According to the above illustration this pump will pump a 40 capacity to about a 110
head, or a 70 capacity to approximately a 85 head (you can substitute either metric
or imperial units as you see fit)
The maximum head of this pump is 115 units. This is called the maximum shutoff
head of the pump. Also note that the best efficiency point (BEP) of this impeller is
between 80% and 85% of the shutoff head. This 80% to 85% is typical of centrifugal
pumps, but if you want to know the exact best efficiency point you must refer to the
manufacturer's pump curve.
Ideally a pump would run at its best efficiency point all of the time, but we seldom hit
ideal conditions. As you move away from the BEP the shaft will deflect and the pump
will experience some vibration. You will have to check with your pump manufacturer
to see how far you can safely deviate from the BEP (a maximum of 10% either side is
typical).
Now look at the following illustration:

Figure 15 Pump curves showing speed & diameter
Note that we have added some additional curves to the original illustration. These
curves show what happens when you change the diameter of the impeller.
Impeller diameter is measured in either inches or millimeters. If we wanted to pump
at the best efficiency point with an 11.5 impeller we would have to pump a capacity of
50 to a 75 head.
The bottom half of the illustration shows the power consumption at various capacities
and impeller diameters. We have labeled the power consumption in horsepower; in
the metric system it would be in kilowatts.
Each of the lines represents an impeller diameter. The top line would be for the 13
impeller, the second for the 12.5, and so on. If we were pumping a capacity of 70 with a 13
impeller it would take about 35 horsepower. A capacity of 60 with the 12 impeller
would take about 20 horsepower.
Most pump curves also show the percent efficiency at the best efficiency point. The
number varies with impeller design; values from 60% to 80% are normal.
When you look at an actual pump curve you should have no trouble reading the
various heads and corresponding capacities for the different size impellers. You will
note, however, that the curve will usually show an additional piece of information:
NPSHR, which stands for the net positive suction head required to prevent the
pump from cavitating.
Depending upon the pump curve you might find a 10 foot (3.0 meter) NPSH required
head at a capacity of 480 Gallons per minute (110 cubic meters per hour) if you were
using a 13 inch (330 mm.) diameter impeller.

You should keep in mind that the manufacturer assumed you were pumping 20°C
(68°F) fresh water and the NPSH required was tested using this assumption. If
you are pumping water at a different temperature or if you are pumping a different
fluid, you must take the vapour pressure of that product into account. The rule is
that the net positive suction head available minus the vapour pressure of the product
you are pumping (converted to head) must be equal to or greater than the net
positive suction head required by the manufacturer.

Suppose we wanted to pump some liquid butane at 32°F (0°C) with this pump. If we
look at the curve for butane on a vapour pressure chart, we will see that butane at
32°F needs at least 15 psi (1.0 bar) to stay in a liquid state. To convert this pressure
to head we use the standard formula:
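The conversion formula itself appeared as a figure in the original; a commonly used
field-unit form is sketched below. The specific gravity of roughly 0.6 for liquid butane
is an assumed, approximate value for illustration only.

    def pressure_to_head(pressure_psi, specific_gravity):
        """Convert a pressure in psi to head in feet of the pumped liquid:
        head (ft) = 2.31 x pressure (psi) / specific gravity
        """
        return 2.31 * pressure_psi / specific_gravity

    # Liquid butane near 0 degC has a specific gravity of roughly 0.6, so
    # 15 psi of vapour pressure corresponds to about 58 ft (about 17.6 m) of butane.
    print(round(pressure_to_head(15, 0.6), 1))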

In other words, butane at this temperature would not vapourize as long as the above
absolute head was available at the suction side of the pump.

Compressors and Expanders
Depending on the application, compressors are manufactured as positive displacement, dynamic,
or thermal types. Positive displacement types fall into two basic categories:
reciprocating and rotary.

Figure 16 Compressor Types


The reciprocating compressor consists of one or more cylinders, each with a piston or
plunger that moves back and forth, displacing a positive volume with each stroke.
The diaphragm compressor uses a hydraulically pulsed flexible diaphragm to displace
the gas.
Rotary compressors cover lobe-type, screw-type, vane-type, and liquid-ring types, each having
a casing with one or more rotating elements that either mesh with each other, such as lobes or
screws, or displace a fixed volume with each rotation.
The dynamic types include radial-flow (centrifugal), axial-flow and mixed-flow
machines. They are rotary continuous-flow compressors in which the rotating element
(impeller or bladed rotor) accelerates the gas as it passes through the element,
converting the velocity head into static pressure, partially in the rotating element and
partially in stationary diffusers or blades.
Ejectors are "thermal" compressors that use a high velocity gas or steam jet to entrain
the inflowing gas then convert the velocity of the mixture to pressure in a diffuser.
The figure below covers the normal range of operation for commercially available
compressor types and summarizes the differences between reciprocating and centrifugal
compressors.

Figure 17 Compressor selection nomograph

CENTRIFUGAL COMPRESSORS
This section is intended to supply information sufficiently accurate to determine
whether a centrifugal compressor should be considered for a specific job. The
secondary objective is to present information for evaluating compressor performance.
The compressor selection nomograph gives an approximate idea of the flow range that
a centrifugal compressor will handle. A multi-wheel (multistage) centrifugal compressor
is normally considered for inlet volumes between 500 and 200,000 inlet acfm. A single-
wheel (single stage) compressor would normally have application between 100 and
150,000 inlet acfm. A multi-wheel compressor can be thought of as a series of single
wheel compressors contained in a single casing.
Historically, centrifugal compressors operated at speeds no higher than about 3,000 rpm,
the limiting factors being impeller stress considerations as well as a velocity limitation of 0.8
to 0.85 Mach number at the impeller tip and eye. Recent advances in machine design
have resulted in production of some units running at speeds in excess of 40,000 rpm.
Centrifugal compressors are usually driven by electric motors, steam or gas turbines
(with or without speed-increasing gears), or turbo-expanders.
There is an overlap of centrifugal and reciprocating compressors on the low end of the
flow range. At the higher end of the flow range an overlap with the axial compressor
exists. The extent of this overlap depends on a number of things. Before a technical decision
could be reached as to the type of compressor that would be installed, the service,
operational requirements and economics would have to be considered.

Figure 18 Centrifugal Compressor flow range
The operating characteristics must be determined before an evaluation of
compressor suitability for the application can be made. The figure below gives a rough
comparison of the characteristics of the axial, centrifugal, and reciprocating compressor.
The centrifugal compressor approximates the constant head - variable volume machine,
while the reciprocating is a constant volume-variable head machine. The axial compressor,
which is a low head, high flow machine, falls somewhere in between. A compressor is
a part of the system, and its performance is dictated by the system resistance. The
desired system capability or objective must be determined before a compressor can be
selected.
With variable speed, the centrifugal compressor can deliver constant capacity at
variable pressure, variable capacity at constant pressure, or a combination variable
capacity and variable pressure.

Figure 19 Compressor Curves

Basically the performance of the centrifugal compressor, at speeds other than design, is such
that the capacity will vary directly as the speed, the head developed as the square of
the speed, and the required horsepower as the cube of the speed. As the speed
deviates from the design speed, the error of these rules, known as the affinity laws or fan
laws, increases. The fan laws only apply to single stages, or to multi-stages with very low
compression ratios or very low Mach numbers.
Fan Laws:
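The fan-law relationships themselves appeared as a figure in the original; in their usual
form (with N = speed, Q = inlet capacity, H = head and P = power, and subscripts 1 and 2
denoting the two operating speeds) they are:

    Q2 / Q1 = N2 / N1
    H2 / H1 = (N2 / N1)^2
    P2 / P1 = (N2 / N1)^3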

By varying speed, the centrifugal compressor will meet any load and pressure condition
demanded by the process system within the operating limits of the compressor and the driver. It
normally accomplishes this as efficiently as possible, since only the head required by the
process is developed by the compressor. This compares to the essentially constant head
developed by the constant speed compressor.

Heat Transfer And Reaction Engineering
Heat Transfer
• Thermal conductivity
• Conduction and convection
• Insulation
• Heat transfer coefficients and calculation
• Heat exchangers, type and sizing
• Steam reboilers
• Condensers and sub-cooling
• Introduction to energy recovery

Thermal Conductivity

In physics, thermal conductivity, k, is the intensive property of a material that indicates its
ability to conduct heat.

It is defined as the quantity of heat, Q, transmitted in time t through a thickness L, in a
direction normal to a surface of area A, due to a temperature difference ∆T, under steady
state conditions and when the heat transfer is dependent only on the temperature gradient.

thermal conductivity = heat flow rate × distance / (area × temperature difference)
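In symbols, using the quantities defined above (this restatement is ours, not taken from
the original text):

    k = (Q / t) × L / (A × ∆T),   i.e.   Q / t = k × A × ∆T / L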

Examples
In metals, thermal conductivity approximately tracks electrical conductivity, as the freely
moving valence electrons transfer not only electric current but also heat. However, this
correlation does not apply to some materials, as shown in the table below, where highly
electrically conductive silver is less thermally conductive than diamond, which is
an electrical insulator.

Thermal conductivity is not a simple property, and depends intimately on structure and
temperature. For instance, pure, crystalline substances also exhibit highly variable thermal
conductivities along different crystal axes. One particularly notable example is sapphire, for
which the CRC Handbook reports a thermal conductivity perpendicular to the c-axis of 2.6
W·m⁻¹·K⁻¹ at 373 K, and 6000 W·m⁻¹·K⁻¹ at 35 K for an angle of 36 degrees to the c-axis.

Air and other gases are generally good insulators, in the absence of convection. Therefore,
many insulating materials function simply by having a large number of gas-filled pockets
which prevent large-scale convection. Examples of these include polystyrene (styrofoam)
and silica aerogel.

Thermal conductivity is clearly an important quantity for construction and related fields.
However, materials used in such trades are rarely subjected to chemical purity standards.
Several construction materials' k values are listed below. These should be considered
approximate due to the uncertainties related to material definitions.

The following table is meant as a small sample of data to illustrate the thermal conductivity of
various types of substances. For more complete listings of measured k-values, see the
references.

Table 1 Thermal conductivity properties

Material          Thermal conductivity (W·m⁻¹·K⁻¹)   Temperature (K)   Notes
Diamond           1,000                              273               type I diamond
Silver            429                                300               highest electrical conductivity of any metal
Iron, pure        80.2                               300
Stainless steel   14                                 273
Limestone         1.3
Ice               2.2                                273
Soil              0.2–1.1
Oak               0.16                               298
Rubber (92%)      0.16                               303
Polystyrene       0.033                              98–298
Nitrogen          0.026                              300
Air (100 kPa)     0.0262                             300
Silica aerogel    0.003                              98–298

For general scientific use, thermal conductance is the quantity of heat that passes in unit
time through a plate of particular area and thickness when its opposite faces differ in
temperature by one degree. For a plate of thermal conductivity k, area A and thickness L this
is kA/L, measured in W·K⁻¹. This matches the relationship between electrical conductivity
(A·m⁻¹·V⁻¹) and electrical conductance (A·V⁻¹).

There is also a measure known as heat transfer coefficient: the quantity of heat that passes
in unit time through unit area of a plate of particular thickness when its opposite faces differ
in temperature by one degree. The reciprocal is thermal insulance. In summary:

• thermal conductance = kA/L, measured in W·K⁻¹
   o thermal resistance = L/kA, measured in K·W⁻¹
• heat transfer coefficient = k/L, measured in W·K⁻¹·m⁻²
   o thermal insulance = L/k, measured in K·m²·W⁻¹.

The heat transfer coefficient is also known as thermal admittance, but this term has other
meanings.
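As an illustrative sketch (not part of the original text), the relations above can be
collected into a small calculation, here for a hypothetical 50 mm polystyrene panel
using the k value from the table:

    def plate_heat_properties(k, area, thickness):
        """Derived quantities for a flat plate, from the relations above.
        k: thermal conductivity, W/(m.K); area: m^2; thickness: m
        """
        return {
            "conductance_W_per_K": k * area / thickness,
            "resistance_K_per_W": thickness / (k * area),
            "heat_transfer_coeff_W_per_m2K": k / thickness,
            "insulance_m2K_per_W": thickness / k,
        }

    # Example: 50 mm of polystyrene (k ~ 0.033 W/(m.K)) over 1 m^2 of wall
    print(plate_heat_properties(k=0.033, area=1.0, thickness=0.05))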

Conduction & Convection

Conduction:

In metals, the dominant method of conduction is through the movement of free electrons.
This method of conduction does not operate in most non-metals because there are no free
electrons (graphite is an exception). When a metal is heated, the atoms closest to the heat
source vibrate more rapidly. Free electrons collide with these atoms and gain more
kinetic energy (movement energy). The electrons therefore move around faster and
collide with other free electrons, which then gain more kinetic energy. Kinetic energy is
therefore transferred between the electrons and through the metal from the point closest
to the heat source towards points further away. The electrons all travel very short
distances but are very fast moving, so conduction of heat in metals happens very quickly.

In metals and in insulators, there is conduction of heat due to the vibration of the atoms.
As atoms closest to the heat source absorb heat/thermal energy, they make their
neighbouring atoms vibrate more rapidly which then in turn make their neighbouring
atoms vibrate more.

Examples of conduction:

The wire gauzes used on tripods are metal therefore they are good heat conductors.
Gauzes on cookers are also metal so that heat is conducted quickly and food is cooked
fast.

Poor thermal conductors (insulators) are used for saucepan handles so that they don't
heat up and can still be handled.

Metals are used for the containers which heat liquids e.g. pans, kettles on hobs

Air is a poor conductor therefore materials that trap air are used for insulation in lofts and
hot water cylinders.

Convection:

Cool particles gain kinetic energy when they are heated by the source, and the fluid expands
as it heats up. The heated fluid becomes less dense than the surrounding cold fluid, so it
rises and displaces the cooler fluid. Cooler particles are denser, so they fall and
move towards the heat source to take the place of the warm particles. They then heat up
and rise while other particles cool down and fall.

Example of convection:

Convection is used in fridges to cool the contents. Heat is carried away, which is why the
back of a fridge is always warm.

Land & sea breezes are due to convection.

Atmospheric winds.

Hot water systems.

Insulation

Insulation is any material used to reduce or “slow down” or “resist” the flow of energy. There
are several different types of insulators:

• Thermal insulators reduce the flow of heat.


• Electrical insulators reduce the flow of electricity.
• Acoustical insulators reduce the flow of sound.

A material may insulate well in more than one way. Some materials, such as diamond, are
superb insulators in one way (electrical) but extremely poor insulators in another way
(thermal). A purified synthetic diamond conducts heat even better than copper and has the
highest thermal conductivity of any known solid at room temperature; it is therefore the worst
thermal insulator among solids at room temperature.

Heat is the internal kinetic, vibrational energy that all materials contain (except at absolute
zero). Heat spontaneously flows from a high temperature region to a low temperature region,
and the greatest heat flow occurs through the path of least resistance.

The proximity of a high temperature region to a low temperature region constitutes a
temperature gradient. Thermal insulation maintains a temperature gradient by reducing the
flow of heat across it.

Insulation exists in most large appliances, for example, in ovens, refrigerators, freezers, and
water heaters. In some cases, the insulation serves to prevent heat loss to the environment.
In other cases, it serves to prevent heat gain from the environment.

Heat transfer coefficients and calculation
The heat transfer coefficient is a lumped, empirically determined factor used in heat
transfer calculations. It is often calculated from the Nusselt number (a dimensionless
number). Below is an example where it is used to find the heat lost from a hot
tube to the surrounding area.

where
• Q = power input or heat lost
• h = overall heat transfer coefficient
• A = outside surface area of tubing
• ∆T = difference in temperature between tubing surface and surrounding area
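The equation this list refers to was shown as a figure in the original; it is the familiar
form

    Q = h × A × ∆T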
There are different heat transfer relations for different fluids, flow regimes, and
thermodynamic conditions. A common example, pertinent to many power
plant efficiency and thermal hydraulic calculations, is the Dittus-Boelter heat transfer
correlation, valid for water in a circular pipe with Reynolds numbers between 10 000 and
120 000 and Prandtl numbers between 0.7 and 120. An example is shown below where it is used
to calculate the heat transfer from a tubing wall to water.

where
• h = heat transfer coefficient from the Dittus-Boelter correlation
• kw = thermal conductivity of water
• Nu = Nusselt number
• Pr = Prandtl number = Cp·µ / kw
• Re = Reynolds number = ṁ·DH / (µ·A)
• DH = hydraulic diameter
• ṁ = mass flow rate
• µ = water viscosity
• Cp = heat capacity at constant pressure
• A = cross-sectional area of flow
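A minimal sketch of this calculation is given below (our own illustration; it uses the
common Dittus-Boelter form Nu = 0.023·Re^0.8·Pr^n with n = 0.4 for heating and 0.3 for
cooling, and the function name and input values are assumptions for the example only):

    def dittus_boelter_h(m_dot, d_h, area, mu, cp, k_w, heating=True):
        """Heat transfer coefficient for turbulent water flow in a pipe:
        Nu = 0.023 * Re**0.8 * Pr**n (n = 0.4 heating, 0.3 cooling); h = Nu * k_w / d_h
        """
        re = m_dot * d_h / (mu * area)      # Reynolds number
        pr = cp * mu / k_w                  # Prandtl number
        n = 0.4 if heating else 0.3
        nu = 0.023 * re ** 0.8 * pr ** n    # Nusselt number
        return nu * k_w / d_h               # W/(m^2.K)

    # Water at about 20 degC in a 25 mm bore tube at 0.5 kg/s (illustrative values)
    h = dittus_boelter_h(m_dot=0.5, d_h=0.025, area=4.9e-4,
                         mu=1.0e-3, cp=4180.0, k_w=0.6)
    print(round(h))   # roughly 4000 W/(m^2.K)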
The heat transfer coefficient has SI units of watts per square metre kelvin (W·m⁻²·K⁻¹). Often it
can be estimated by dividing the thermal conductivity by a length scale. Heat transfer
coefficients in series add inversely; their reciprocals behave like thermal resistances, which
add directly. Shown below is an addition of heat transfer coefficients where one is estimated
as a thermal conductivity divided by a length scale.

where
• Q = power input
• h = heat transfer coefficient
• t = tubing thickness

• k = thermal conductivity of metal tube
• A = cross-sectional area of flow
• ∆T = difference in temperature between outer wall of tubing and sample water.
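The combined equation that this list describes also appeared as a figure in the original;
with the quantities defined above it reads

    Q = A × ∆T / (1/h + t/k)

i.e. the convective film (1/h) and the tube wall (t/k) act as thermal resistances in series.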

Heat exchangers, type and sizing
A heat exchanger is a component that allows the transfer of heat from one fluid (liquid or
gas) to another fluid. Reasons for heat transfer include the following:
1. To heat a cooler fluid by means of a hotter fluid
2. To reduce the temperature of a hot fluid by means of a cooler fluid
3. To boil a liquid by means of a hotter fluid
4. To condense a gaseous fluid by means of a cooler fluid
5. To boil a liquid while condensing a hotter gaseous fluid
Regardless of the function the heat exchanger fulfills, in order to transfer heat the fluids
involved must be at different temperatures and they must come into thermal contact. Heat can
flow only from the hotter to the cooler fluid.
In a heat exchanger there is no direct contact between the two fluids. The heat is
transferred from the hot fluid to the metal isolating the two fluids and then to the cooler fluid.

Types of Heat Exchanger Construction


Although heat exchangers come in every shape and size imaginable, the construction of
most heat exchangers falls into one of two categories: tube and shell, or plate. As in all
mechanical devices, each type has its advantages and disadvantages.

Tube and Shell


The most basic and the most common type of heat exchanger construction is the tube and
shell, as shown below. This type of heat exchanger consists of a set of tubes in a
container called a shell. The fluid flowing inside the tubes is called the tube side fluid and the
fluid flowing on the outside of the tubes is the shell side fluid. At the ends of the tubes, the
tube side fluid is separated from the shell side fluid by the tube sheet(s).
The tubes are rolled and press-fitted or welded into the tube sheet to provide a leak tight
seal. In systems where the two fluids are at vastly different pressures, the higher pressure
fluid is typically directed through the tubes and the lower pressure fluid is circulated on the
shell side. This is due to economy, because the heat exchanger tubes can be made to
withstand higher pressures than the shell of the heat exchanger for a much lower cost. The
support plates shown also act as baffles to direct the flow of fluid within the shell back and
forth across the tubes.

Figure 20 Tube & Shell Heat Exchanger

Plate
A plate type heat exchanger, as illustrated below, consists of plates instead of tubes to
separate the hot and cold fluids. The hot and cold fluids alternate between each of the plates.
Baffles direct the flow of fluid between plates.
Because each of the plates has a very large surface area, the plates provide each of the
fluids with an extremely large heat transfer area. Therefore a plate type heat exchanger, as
compared to a similarly sized tube and shell heat exchanger, is capable of transferring much
more heat. This is due to the larger area the plates provide over tubes.
Due to the high heat transfer efficiency of the plates, plate type heat exchangers are usually
very small when compared to a tube and shell type heat exchanger with the same heat
transfer capacity. Plate type heat exchangers are not widely used because of the inability to
reliably seal the large gaskets between each of the plates. Because of this problem, plate
type heat exchangers have only been used in small, low pressure applications such as on oil
coolers for engines. However, new improvements in gasket design and overall heat
exchanger design have allowed some large scale applications of the plate type heat
exchanger. As older facilities are upgraded or newly designed facilities are built, large plate
type heat exchangers are replacing tube and shell heat exchangers and becoming more
common.

Figure 21 Plate Type Heat Exchanger
Because heat exchangers come in so many shapes, sizes, makes, and models, they are
categorized according to common characteristics. One common characteristic that can be
used to categorize them is the direction of flow the two fluids have relative to each other. The
three categories are parallel flow, counter flow and cross flow. Parallel flow, as illustrated in
below, exists when both the tube side fluid and the shell side fluid flow in the same direction.
In this case, the two fluids enter the heat exchanger from the same end with a large
temperature difference. As the fluids transfer heat, hotter to cooler, the temperatures of the
two fluids approach each other. Note that the hottest cold-fluid temperature is always less
than the coldest hot-fluid temperature.

Figure 22 Parallel Flow Heat Exchanger

Counter flow, as illustrated later, exists when the two fluids flow in opposite directions. Each
of the fluids enters the heat exchanger at opposite ends. Because the cooler fluid exits the
counter flow heat exchanger at the end where the hot fluid enters the heat exchanger, the
cooler fluid will approach the inlet temperature of the hot fluid. Counter flow heat exchangers
are the most efficient of the three types. In contrast to the parallel flow heat exchanger, the
counter flow heat exchanger can have the hottest cold-fluid temperature greater than the
coldest hot-fluid temperature.

Figure 23 Counter Flow Heat Exchanger


Cross flow, as illustrated below, exists when one fluid flows perpendicular to the second fluid;
that is, one fluid flows through the tubes and the second fluid passes around the tubes at a 90°
angle. Cross flow heat exchangers are usually found in applications where one of the fluids
changes state (2-phase flow). An example is a steam system's condenser, in which the
steam exiting the turbine enters the condenser shell side, and the cool water flowing in the
tubes absorbs the heat from the steam, condensing it into water. Large volumes of vapour
may be condensed using this type of heat exchanger flow.

Figure 24 Cross Flow Heat Exchanger

Steam Reboilers
A reboiler is a special kind of heat exchanger used to put heat into a distillation column.
Steam may be used to evaporate (or vapourise) a liquid in a type of shell and tube
heat exchanger known as a reboiler. These are used in the petroleum industry to vapourise a
fraction of the bottom product from a distillation column. They tend to be horizontal, with
vapourisation in the shell and condensation in the tubes.

Figure 25 A Steam reboiler

Figure 26 Reboiler schematic

Condensers and sub-cooling
The steam condenser, shown below, is a major component of the steam cycle in power
generation facilities. It is a closed space into which the steam exits the turbine and is forced
to give up its latent heat of vapourization. It is a necessary component of the steam cycle for
two reasons. One, it converts the used steam back into water for return to the steam
generator or boiler as feedwater. This lowers the operational cost of the plant by allowing the
clean and treated condensate to be reused, and it is far easier to pump a liquid than steam.
Two, it increases the cycle's efficiency by allowing the cycle to operate with the largest
possible delta-T and delta-P between the source (boiler) and the heat sink (condenser).
Because condensation is taking place, the term latent heat of condensation is used instead
of latent heat of vapourization. The steam's latent heat of condensation is passed to the
water flowing through the tubes of the condenser.
After the steam condenses, the saturated liquid continues to transfer heat to the cooling
water as it falls to the bottom of the condenser, or hotwell. This is called subcooling, and a
certain amount is desirable. A few degrees subcooling prevents condensate pump cavitation.
The difference between the saturation temperature for the existing condenser vacuum and
the temperature of the condensate is termed condensate depression. This is expressed as a
number of degrees condensate depression or degrees subcooled. Excessive condensate
depression decreases the operating efficiency of the plant because the subcooled
condensate must be reheated in the boiler, which in turn requires more heat from the reactor,
fossil fuel, or other heat source.

Figure 27 Condenser

There are different condenser designs, but the most common, at least in the large power
generation facilities, is the straight-through, single-pass condenser illustrated above. This
condenser design provides cooling water flow through straight tubes from the inlet water box
on one end, to the outlet water box on the other end. The cooling water flows once through
the condenser and is termed a single pass. The separation between the water box areas and
the steam condensing area is accomplished by a tube sheet to which the cooling water tubes
are attached. The cooling water tubes are supported within the condenser by the tube
support sheets. Condensers normally have a series of baffles that redirect the steam to
minimize direct impingement on the cooling water tubes. The bottom area of the condenser
is the hotwell. This is where the condensate collects and the condensate pump takes its
suction. If non-condensable gasses are allowed to build up in the condenser, vacuum will
decrease and the saturation temperature at which the steam will condense increases.
Non-condensable gasses also blanket the tubes of the condenser, thus reducing the heat
transfer surface area of the condenser. This surface area can also be reduced if the
condensate level is allowed to rise over the lower tubes of the condenser. A reduction in the
heat transfer surface has the same effect as a reduction in cooling water flow. If the
condenser is operating near its design capacity, a reduction in the effective surface area
results in difficulty maintaining condenser vacuum.
The temperature and flow rate of the cooling water through the condenser controls the
temperature of the condensate. This in turn controls the saturation pressure (vacuum) of the
condenser. To prevent the condensate level from rising to the lower tubes of the condenser,
a hotwell level control system may be employed. Varying the flow of the condensate pumps
is one method used to accomplish hotwell level control. A level sensing network controls the
condensate pump speed or pump discharge flow control valve position. Another method
employs an overflow system that spills water from the hotwell when a high level is reached.
Condenser vacuum should be maintained as close to 29 inches Hg as practical. This allows
maximum expansion of the steam, and therefore, the maximum work. If the condenser were
perfectly air-tight (no air or non-condensable gasses present in the exhaust steam), it would
be necessary only to condense the steam and remove the condensate to create and
maintain a vacuum. The sudden reduction in steam volume, as it condenses, would maintain
the vacuum. Pumping the water from the condenser as fast as it is formed would maintain
the vacuum. It is, however, impossible to prevent the entrance of air and other non-
condensable gasses into the condenser. In addition, some method must exist to initially
cause a vacuum to exist in the condenser. This necessitates the use of an air ejector or
vacuum pump to establish and help maintain condenser vacuum.
Air ejectors are essentially jet pumps or eductors, as illustrated below. In
operation, the jet pump has two types of fluids. They are the high pressure fluid that flows
through the nozzle, and the fluid being pumped which flows around the nozzle into the throat
of the diffuser. The high velocity fluid enters the diffuser where its molecules strike other
molecules. These molecules are in turn carried along with the high velocity fluid out of the
diffuser creating a low pressure area around the mouth of the nozzle. This process is called
entrainment. The low pressure area will draw more fluid from around the nozzle into the
throat of the diffuser. As the fluid moves down the diffuser, the increasing area converts the
velocity back to pressure. Use of steam at a pressure between 200 psi and 300 psi as the
high pressure fluid enables a single stage air ejector to draw a vacuum of about 26 inches
Hg.

Figure 28 Change of section - change in pressure
Normally, air ejectors consist of two suction stages. The first stage suction is located on top
of the condenser, while the second stage suction comes from the diffuser of the first stage.
The exhaust steam from the second stage must be condensed. This is normally
accomplished by an air ejector condenser that is cooled by condensate. The air ejector
condenser also preheats the condensate returning to the boiler. Two-stage air ejectors are
capable of drawing vacuums to 29 inches Hg.
A vacuum pump may be any type of motor-driven air compressor. Its suction is attached to
the condenser, and it discharges to the atmosphere. A common type uses rotating vanes in
an elliptical housing. Single-stage, rotary-vane units are used for vacuums to 28 inches Hg.
Two stage units can draw vacuums to 29.7 inches Hg. The vacuum pump has an advantage
over the air ejector in that it requires no source of steam for its operation. They are normally
used as the initial source of vacuum for condenser start-up.

Introduction to energy recovery

Energy recovery is the process of converting a waste stream such as used oil into usable
energy, for example by burning it to recover energy, heat a building, or fuel an incinerator.
At various stages in the refining process, useful energy carriers may be lost. The most
important (energy) sources are the recovery of combustible products for useful applications,
which would have been flared otherwise, as well as the recovery of hydrogen from different
flue and process gas streams. The latter will reduce the need for additional hydrogen
makeup; an energy-intensive and expensive process.

Flare gas recovery


(or zero flaring) is a strategy evolving from the need to improve environmental performance.
Generally, conventional flaring practice has been to operate at some flow greater than the
manufacturer’s minimum flow rate to avoid damage to the flare (Miles, 2001). Typically, flared
gas consists of background flaring (including planned intermittent and planned continuous
flaring) and upset-blowdown flaring. In offshore flaring, background flaring can be as much
as 50% of all flared gases. In refineries, background flaring will generally be less than 50%,
depending on practices in the individual refinery. Recent discussions on emissions from
flaring at the California Bay Area refineries have highlighted the issue from an
environmental perspective (Ezerksy, 2002). The report highlighted the higher emissions
compared to previous assumptions of the Air Quality District, due to larger volumes of flared
gases. The report also demonstrated the differences among various refineries, and plants
within the refineries.
Reduction of flaring will not only result in reduced air pollutant emissions, but also in
increased energy efficiency, since recovered gas replaces purchased fuel, as well as less
negative publicity around flaring.
Reduction of flaring can be achieved by improved recovery systems, including installing
recovery compressors. New compressors and liquid-seals have been installed, and the two
flare gas recovery systems have reduced flaring to near-zero levels (Fisher and Brennan,
2002). A plantwide assessment of the Equilon refinery in Martinez (now fully owned by Shell)
highlighted the potential for flare gas recovery. The refinery will install new recovery
compressors to reduce flaring. No specific costs were available for the flare gas recovery
project, as it is part of a large package of measures for the refinery. The overall project has
projected annual savings of $52 million and a payback period of 2 years (US DOE-OIT,
2002).
However, emissions can be further reduced by improved process control equipment and new
flaring technology. Development of gas-recovery systems, development of new ignition
systems with low-pilot-gas consumption or elimination of pilots altogether with the use of new
ballistic ignition systems can reduce the amount of flared gas considerably. Development
and demonstration of new ignition systems without a pilot may result in increased energy
efficiency and reduced emissions.

Hydrogen Management and Recovery.


Hydrogen is used in the refinery in processes such as hydrocrackers and desulphurisation
using hydrotreaters. The production of hydrogen is an energy-intensive process using natural
gas-fueled reformers. However, these processes and other processes generate gases that
may contain a certain amount of hydrogen not used in the processes, or generated as a
by-product of distillation or conversion processes. In addition, different processes have varying
quality (purity) demands for the hydrogen feed. Reducing the need for hydrogen make-up will
reduce energy use in the reformer and reduce the need for purchased natural gas. Natural

gas is an expensive energy input in the refinery process, and lately associated with large
fluctuation in prices. The major technology developments in the hydrogen management
within the refinery are hydrogen process integration (or hydrogen cascading) and hydrogen
recovery technology (Zagoria and Huycke, 2003). Revamping and retrofitting existing
hydrogen networks can increase hydrogen capacity between 3% and 30% (Ratan and Vales,
2002).

Hydrogen integration
at refineries is a new and important application of pinch analysis. Most hydrogen systems in
refineries feature limited integration and pure hydrogen flows are sent from the reformers to
the different processes in the refinery. But as the use of hydrogen increases, its value is
more and more appreciated. Using the approach of composition curves used in
pinch analysis the production and uses of hydrogen of a refinery can be made visible.
This allows us to identify the best matches between different hydrogen sources and uses
based on quality of the hydrogen streams. It allows the user to select the appropriate and
most cost-effective technology for hydrogen purification. A recent improvement of the
analysis technology also accounts for gas pressure, to reduce compression energy needs
(Hallale, 2001). The analysis method accounts also for costs of piping, besides the costs for
generation, fuel use and compression power needs. It can be used for new and retrofit
studies.
The BP refinery at Carson, in a project with the California Energy Commission, has executed
a Hydrogen Pinch analysis of the large refinery. Total potential savings of $4.5 million on
operating costs were identified, but the refinery decided to realize a more cost effective
package saving $3.9 million per year. As part of the plant-wide assessment of the Equilon
(Shell) refinery at Martinez, an analysis of the hydrogen network has been included (US
DOE-OIT, 2002). This has resulted in the identification of large energy savings. Further
development and application of the analysis method at Californian refineries, especially as
the need for hydrogen is increasing due to reduced future sulfur content of diesel and other
fuels, may result in reduced energy needs at all refineries with hydrogen needs (all, except
San Joaquin Refining in Bakersfield) (Khorram and Swaty, 2002). One refinery identified
savings of $6 million/year in hydrogen savings without capital projects (Zagoria and Huycke,
2003).

Hydrogen recovery
is an important technology development area to improve the efficiency of hydrogen recovery,
reduce the costs of hydrogen recovery and increase the purity of the resulting hydrogen flow.
Hydrogen can be recovered indirectly by routing low-purity hydrogen streams to the
hydrogen plant (Zagoria and Huycke, 2003) or can be recovered from off gases by routing it
to the existing purifier of the hydrogen plant or by installing additional purifiers to treat the off
gases and vent gases. The cost savings of recovered hydrogen are around 50% of the costs
of hydrogen production (Zagoria and Huycke, 2003). Membranes are an attractive
technology for hydrogen recovery. If the content of recoverable products is higher than 2-5%
(or preferably 10%), recovery may make economic sense (Baker et al., 2000). New
membrane applications for the refinery and chemical industry are under development.
Membranes for hydrogen recovery from ammonia plants have first been demonstrated about
20 years ago (Baker et al., 2000), and are used in various state-of-the-art plant designs.
Refinery off gas flows have a different composition, making different membranes necessary
for optimal recovery. Membrane plants have been demonstrated for recovery of hydrogen
from hydrocracker off gases. Various suppliers offer membrane technologies for hydrogen
recovery in the refining industry, including Air Liquide, Air Products and UOP. The hydrogen
content has to be at least 25% for economic recovery of the hydrogen, with a recovery yield
of 85-95% and a purity of 95%.

Membrane technology generally represents the lowest cost option for low product rates, but
not necessarily for high flow rates (Zagoria and Huycke, 2003). For high-flow rates PSA
technology is often the conventional technology of choice. Development of low-cost and
efficient membranes is an area of research interest to improve cost-effectiveness of
hydrogen recovery, and enable the recovery of hydrogen from gas streams with lower
concentrations.

Heat Recovery.
Heat is recovered and re-used throughout the refinery. Next to efficient integration of heat
flows throughout the refinery, the efficient operation of heat exchangers is a major area of
interest. In a complex refinery most processes occur under high temperature and pressure
conditions; the management and optimization of heat transfer among processes is therefore
key to increasing overall energy efficiency. Fouling, a deposit buildup in units and piping that
impedes heat transfer, requires the combustion of additional fuel. For example, the processing
of many heavy crude oils in the U.S. increases the likelihood of localized coke deposits in the
heating furnaces, thereby reducing furnace efficiency and creating potential equipment
failure. An estimate by the Office of Industrial Technology at the U.S. Department of Energy
noted that the cost penalty for fouling could be as much as $2 billion annually in material and
energy costs. The problem of fouling is expected to increase with the trend towards
processing heavier crudes.
Fouling is the effect of several process variables and heat exchanger design. Fouling may
follow the combination of different mechanisms (Bott, 2001). Several methods of
investigation have been underway to attempt to reduce fouling including the use of sensors
to detect early fouling, physical and chemical methods to create high temperature coatings
(without equipment modification), the use of ultrasound, as well as the improved long term
design and operation of facilities. The U.S. Department of Energy initially funded preliminary
research into this area, but funding has been discontinued (Huangfu, 2000; Bott, 2000).
Initial analysis of fouling effects on a 100,000 bbl/day crude distillation unit found an
additional heating load of 12.3 kBtu/barrel (13.0 MJ/barrel) processed (Panchal and Huangfu,
2000). Reducing this additional heating load could result in significant energy savings.
This technology is still in the conceptual and basic research stage and therefore it is difficult
to assess capital costs at this time. Argonne National Laboratory (ANL) has been the lead in
working with the refining industry in the area. Progress so far has included: a basic
understanding of fouling mechanisms developed (for example, the presence of iron sulfide in
crude oil and its link to fouling), the development of a threshold fouling model by ANL, the
testing of prototype fouling detection units, the development of a Heat Exchanger Design
Handbook (1999 Edition) incorporating ANL’s petroleum fouling threshold model, and the
preparation of a guideline document on Heat Exchanger Fouling in the Crude Oil Distillation
Unit (Panchal, 2000). Besides ANL, several other groups have worked in the area of fouling
reduction. Outside the U.S., groups in Europe and Canada have worked on fouling.
While the issue of fouling is now on the radar screen of plant managers (there is a biannual
Fouling Mitigation conference held by the American Institute for Chemical Engineers), a
stronger commitment by the refining industry would be needed to advance this technology to
the next stage of development. Some sources believe that the future development of this
area is expected to be in the area of Condition-Based Maintenance of Heat-Transfer
Equipment that will be based on Knowledge-Based and Monitoring –Based Mitigation of
Fouling/Corrosion (Panchal, 2000, see also section on process control systems).
Furthermore, developments in heat exchanger design and process intensification may also
contribute to reducing the problem of fouling.

An Introduction to Pinch Technology
While oil prices continue to climb, energy conservation remains the prime concern for many process
industries. The challenge every process engineer is faced with is to seek answers to questions related
to their process energy patterns. A few of the frequently asked questions are:
1. Are the existing processes as energy efficient as they should be?
2. How can new projects be evaluated with respect to their energy
requirements?
3. What changes can be made to increase the energy efficiency without
incurring any cost?
4. What investments can be made to improve energy efficiency?
5. What is the most appropriate utility mix for the process?
6. How can energy efficiency and other targets, such as reducing emissions,
increasing plant capacity and improving product quality, be combined into one
coherent strategic plan for the overall site?

What is Pinch Technology?

Meaning of the term "Pinch Technology"


The term "Pinch Technology" was introduced by Linnhoff and Vredeveld to represent a new set of
thermodynamically based methods that guarantee minimum energy levels in design of heat exchanger
networks. Over the last two decades it has emerged as an unconventional development in process
design and energy conservation. The term ‘Pinch Analysis’ is often used to represent the application
of the tools and algorithms of Pinch Technology for studying industrial processes. Developments of
rigorous software programs like PinchExpress™, SuperTarget™ and Aspen Pinch™ have proved to be
very useful in pinch analysis of complex industrial processes with speed and efficiency.

Basis of Pinch Analysis


Pinch technology presents a simple methodology for systematically analysing chemical processes and
the surrounding utility systems with the help of the First and Second Laws of Thermodynamics. The
First Law of Thermodynamics provides the energy equation for calculating the enthalpy changes (dH)
in the streams passing through a heat exchanger. The Second Law determines the direction of heat
flow. That is, heat energy may only flow in the direction of hot to cold. This prohibits ‘temperature
crossovers’ of the hot and cold stream profiles through the exchanger unit. In a heat exchanger unit
a hot stream cannot be cooled below the cold stream supply temperature, nor can a cold stream be
heated above the supply temperature of the hot stream. In practice the hot stream
can only be cooled to a temperature defined by the ‘temperature approach’ of the heat exchanger. The
temperature approach is the minimum allowable temperature difference (DTmin) in the stream
temperature profiles, for the heat exchanger unit. The temperature level at which DTmin is observed in
the process is referred to as "pinch point" or "pinch condition". The pinch defines the minimum
driving force allowed in the exchanger unit.

Objectives of Pinch Analysis


Pinch Analysis is used to identify energy cost and heat exchanger network (HEN) capital cost targets
for a process, and to recognize the pinch point. The procedure first predicts, ahead of design, the
minimum requirements of external energy, network area, and the number of units for a given process
at the pinch point. Next a heat exchanger network design that satisfies these targets is synthesized.
Finally the network is optimized by comparing energy cost and the capital cost of the network so that
the total annual cost is minimized. Thus, the prime objective of pinch analysis is to achieve
financial savings by better process heat integration (maximizing process-to-process heat
recovery and reducing the external utility loads). The concept of process heat integration is
illustrated in the example discussed below.

A Simple Example of Process Integration by Pinch Analysis
Consider the following simple process [Figure 29] where the feed stream is heated before entering
a reactor and the product stream is to be cooled. The heating and cooling are done by use of steam
(Heat Exchanger -1) and cooling water (Heat Exchanger-2), respectively. The Temperature (T) vs.
Enthalpy (H) plot for the feed and product streams depicts the hot (Steam) and cold (CW) utility loads
when there is no vertical overlap of the hot and cold stream profiles.

Figure 29 A Simple Flow Scheme with T-H profile


An alternative, improved scheme is shown below where the addition of a new ‘Heat Exchanger–3’
recovers product heat (X) to preheat the feed. The steam and cooling water requirements also get
reduced by the same amount (X). The amount of heat recovered (X) depends on the ‘minimum
approach temperature’ allowed for the new exchanger. The minimum temperature approach between
the two curves on the vertical axis is DTmin and the point where this occurs is defined as the "pinch".
From the T-H plot, the X amount corresponds to a DTmin value of 20°C. Increasing the DTmin value
leads to higher utility requirements and lower area requirements.

Figure 30 Improved Flow Scheme with T-H profile

Development of the Pinch Technology Approach
When the process involves single hot and cold streams (as in above example) it is easy to design an
optimum heat recovery exchanger network intuitively by heuristic methods. In any industrial set-up the
number of streams is so large that the traditional design approach has been found to be limiting in the
design of a good network. With the development of pinch technology in the late 1980s, not only
was optimal network design made possible, but considerable process improvements could also be
discovered. Both the traditional and pinch approaches are depicted below.

Figure 31 Graphic Representation of Traditional and Pinch Design Approaches

Traditional Design Approach:


First, the core of the process is designed with fixed flow rates and temperatures yielding the heat and
mass balance for the process. Then the design of a heat recovery system is completed. Next, the
remaining duties are satisfied by the use of the utility system. Each of these exercises is performed
independently of the others.

Pinch Technology Approach:


Process integration using pinch technology offers a novel approach to generate targets for minimum
energy consumption before heat recovery network design. Heat recovery and utility system constraints
are then considered in the design of the core process. Interactions between the heat recovery and
utility systems are also considered. The pinch design can reveal opportunities to modify the core
process to improve heat integration. The pinch approach is unique because it treats all processes with
multiple streams as a single, integrated system. This method helps to optimize the heat transfer
equipment during the design of the equipment.

Areas of Applications of Pinch Technology


Pinch originated in the petrochemical sector and is now being applied to solve a wide range of
problems in mainstream chemical engineering. Wherever heating and cooling of process materials
takes places there is a potential opportunity. Thus initial applications of the technology were found in
projects relating to energy saving in industries as diverse as iron and steel, food and drink, textiles,
paper and cardboard, cement, base chemicals, oil, and petrochemicals.
Early emphasis on energy conservation led to the misconception that conservation is the main area of
application for pinch technology. The technology, when applied with imagination, can affect reactor
design, separator design, and the overall process optimization in any plant. It has been applied to
processing problems that go far beyond energy conservation. It has been employed to solve problems
as diverse as improving effluent quality, reducing emissions, increasing product yield, debottlenecking,
increasing throughput, and improving the flexibility and safety of the processes.

Basic Concepts of Pinch Analysis
Most industrial processes involve transfer of heat either from one process stream to another process
stream (interchanging) or from a utility stream to a process stream. In the present energy crisis
scenario all over the world, the target in any industrial process design is to maximize the process-to-
process heat recovery and to minimize the utility (energy) requirements. To meet the goal of maximum
energy recovery or minimum energy requirement (MER) an appropriate heat exchanger network
(HEN) is required. The design of such a network is not an easy task considering the fact that most
processes involve a large number of process and utility streams. As explained in the previous section,
the traditional design approach has resulted in networks with high capital and utility costs. With the
advent of pinch analysis concepts, the network design has become very systematic and methodical.
A summary of the key concepts, their significance, and the nomenclature used in pinch
analysis is given below:
• Combined (Hot and Cold) Composite Curves: Used to predict targets for:
   o minimum energy (both hot and cold utility) required,
   o minimum network area required, and
   o minimum number of exchanger units required.
• DTmin and Pinch Point: The DTmin value determines how closely the hot and cold
  composite curves can be ‘pinched’ (or squeezed) without violating the Second Law of
  Thermodynamics (none of the heat exchangers can have a temperature crossover).
• Grand Composite Curve: Used to select appropriate levels of utilities (maximize cheaper
  utilities) to meet overall energy requirements.
• Energy and Capital Cost Targeting: Used to calculate the total annual cost of utilities and the
  capital cost of the heat exchanger network.
• Total Cost Targeting: Used to determine the optimum level of heat recovery or the optimum
  DTmin value, by balancing energy and capital costs. Using this method, it is possible to obtain
  an accurate estimate (within 10 - 15%) of overall heat recovery system costs without having to
  design the system. The essence of the pinch approach is the speed of economic evaluation.
• Plus/Minus and Appropriate Placement Principles: The "Plus/Minus" Principle provides
  guidance regarding how a process can be modified in order to reduce associated utility needs
  and costs. The Appropriate Placement Principles provide insights for proper integration of key
  equipment such as distillation columns, evaporators, furnaces, heat engines, heat pumps, etc. in
  order to reduce the utility requirements of the combined system.
• Total Site Analysis: This concept enables the analysis of the energy usage for an entire plant
  site that consists of several processes served by a central utility system.

Steps of Pinch Analysis


In any Pinch Analysis problem, whether a new project or a retrofit situation, a well-defined stepwise
procedure is followed. It should be noted that these steps are not necessarily performed on a once-
through basis, independent of one another. Additional activities such as re-simulation and data
modification occur as the analysis proceeds and some iteration between the various steps is always
required.

Figure 32 Steps of Pinch Analysis

Identification of the Hot, Cold and Utility Streams in the Process


‘Hot Streams’ are those that must be cooled or are available to be cooled. e.g. product
cooling before storage
‘Cold Streams’ are those that must be heated e.g. feed preheat before a reactor.
‘Utility Streams’ are used to heat or cool process streams, when heat exchange between
process streams is not practical or economic. A number of different hot utilities (steam,
hot water, flue gas, etc.) and cold utilities (cooling water, air, refrigerant, etc.) are used
in industry.
The identification of streams needs to be done with care as sometimes, despite undergoing changes
in temperature, the stream is not available for heat exchange. For example, when a gas stream is
compressed the stream temperature rises because of the conversion of mechanical energy into heat
and not by any fluid to fluid heat exchange. Hence such a stream may not be available to take part in
any heat exchange. In the context of pinch analysis, this stream may or may not be considered to be a
process stream.

Thermal Data Extraction for Process & Utility Streams


For each hot, cold and utility stream identified, the following thermal data is extracted from the process
material and heat balance flow sheet:
• Supply temperature (TS, °C): the temperature at which the stream is available.
• Target temperature (TT, °C): the temperature the stream must be taken to.
• Heat capacity flow rate (CP, kW/°C): the product of flow rate (m) in kg/sec and specific heat
  (Cp, kJ/kg·°C): CP = m x Cp
• Enthalpy Change (dH) associated with a stream passing through the exchanger is given by
  the First Law of Thermodynamics:
  First Law energy equation: dH = Q ± W
  In a heat exchanger, no mechanical work is being performed: W = 0 (zero)

The above equation simplifies to dH = Q, where Q represents the heat supply or demand associated
with the stream. It is given by the relationship Q = CP x (TS - TT).
Enthalpy Change, dH = CP x (TS - TT)
** Here the specific heat values have been assumed to be temperature independent within the
operating range.
The stream data and their potential effect on the conclusions of a pinch analysis should be considered
during all steps of the analysis. Any erroneous or incorrect data can lead to false conclusions. In order
to avoid mistakes, the data extraction is based on certain qualified principles. The data extracted is
presented in the table below.

Table 2 Typical Stream Data

Stream No.   Stream Name   Supply Temp. (°C)   Target Temp. (°C)   Heat Cap. Flow (kW/°C)   Enthalpy Change (kW)
1            FEED          60                  205                 20                       2900
2            REAC.OUT      270                 160                 18                       1980
3            PRODUCT       220                 70                  35                       5250
4            RECYCLE       160                 210                 50                       2500
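As a quick check on the relationship dH = CP x (TS - TT), the short sketch below
(our own illustration, using the stream data from the table above) reproduces the
enthalpy-change column:

    # (name, supply T in degC, target T in degC, CP in kW/degC)
    streams = [
        ("FEED",     60, 205, 20),
        ("REAC.OUT", 270, 160, 18),
        ("PRODUCT",  220,  70, 35),
        ("RECYCLE",  160, 210, 50),
    ]

    for name, t_supply, t_target, cp in streams:
        dh = cp * abs(t_target - t_supply)   # enthalpy change, kW
        kind = "cold (needs heating)" if t_target > t_supply else "hot (needs cooling)"
        print(f"{name:9s} {kind:22s} dH = {dh} kW")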

Selection of Initial DTmin value


The design of any heat transfer equipment must always adhere to the Second Law of
Thermodynamics that prohibits any temperature crossover between the hot and the cold stream i.e. a
minimum heat transfer driving force must always be allowed for a feasible heat transfer design. Thus
the temperature of the hot and cold streams at any point in the exchanger must always have a
minimum temperature difference (DTmin). This DTmin value represents the bottleneck in the heat
recovery.
In mathematical terms, at any point in the exchanger:

Hot stream temperature (TH) - cold stream temperature (TC) >= DTmin

The value of DTmin is determined by the overall heat transfer coefficient (U) and the geometry of the
heat exchanger. In a network design, the type of heat exchanger to be used at the pinch will determine
the practical DTmin for the network. For example, an initial selection of the DTmin value for shell and
tube exchangers may be 3-5 °C (at best), while compact exchangers such as plate and frame often allow
an initial selection of 2-3 °C. The heat transfer equation, which relates Q, U, A and LMTD (Log Mean
Temperature Difference), is depicted in Figure 33.
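The relation behind Figure 33 is Q = U x A x LMTD. A minimal sketch of the exchanger area implied by
a given duty and approach temperatures follows; the duty, U-value and terminal approaches used below
are purely illustrative, not values taken from the text:

```python
from math import log

def lmtd(dt1, dt2):
    """Log-mean temperature difference for terminal approaches dt1, dt2 (degC)."""
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / log(dt1 / dt2)

def required_area(q_kw, u_kw_m2k, dt1, dt2):
    """Area from Q = U * A * LMTD."""
    return q_kw / (u_kw_m2k * lmtd(dt1, dt2))

# Illustrative numbers: 1980 kW duty, U = 0.5 kW/m2.K,
# hot-end approach 30 degC, cold-end approach 10 degC.
print(f"LMTD = {lmtd(30, 10):.1f} degC")
print(f"A    = {required_area(1980, 0.5, 30, 10):.0f} m2")
```

Shrinking the smaller approach towards DTmin drives the LMTD down and the required area up, which is
exactly the capital/energy trade-off discussed below.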

Figure 33 Heat Transfer Equation
For a given value of heat transfer load (Q), if smaller values of DTmin are chosen, the area
requirements rise. If a higher value of DTmin is selected the heat recovery in the exchanger decreases
and demand for external utilities increases. Thus, the selection of DTmin value has implications
for both capital and energy costs. This concept will become clearer with the help of composite
curves and total cost targeting discussed later.
Just as for a single heat exchanger, the choice of DTmin (or approach temperature) is vital in the
design of heat exchanger networks. To begin the process an initial DTmin value is chosen and pinch
analysis is carried out. Typical DTmin values based on experience are available in the literature for
reference. A few values based on Linnhoff March's application experience are tabulated below for shell
and tube heat exchangers.

No   Industrial Sector            Experience DTmin Values

1    Oil Refining                 20-40 °C
2    Petrochemical                10-20 °C
3    Chemical                     10-20 °C
4    Low Temperature Processes    3-5 °C

Construction of Composite Curves and Grand Composite Curve


COMPOSITE CURVES: Temperature - Enthalpy (T - H) plots known as ‘Composite curves’ have
been used for many years to set energy targets ahead of design. Composite curves consist of
temperature (T) – enthalpy (H) profiles of heat availability in the process (the hot composite
curve) and heat demands in the process (the cold composite curve) together in a graphical
representation.
In general any stream with a constant heat capacity (CP) value is represented on a T - H diagram by a
straight line running from stream supply temperature to stream target temperature. When there are a
number of hot and cold streams, the construction of hot and cold composite curves simply involves the
addition of the enthalpy changes of the streams in the respective temperature intervals. An example of
hot composite curve construction is shown in Figure 34. A complete hot or cold composite
curve consists of a series of connected straight lines; each change in slope represents a change in
the overall stream heat capacity flow rate (CP).
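As a sketch of this construction (not part of the original text), the hot composite curve for the two hot
streams of Table 2 can be assembled by summing the CPs of the streams present in each temperature
interval. A minimal Python example:

```python
# Build a hot composite curve by summing CP over temperature intervals.
# Hot-stream data (TS, TT, CP) taken from Table 2.
hot_streams = [(270, 160, 18), (220, 70, 35)]   # REAC.OUT, PRODUCT

temps = sorted({t for ts, tt, _ in hot_streams for t in (ts, tt)})   # [70, 160, 220, 270]

curve = [(temps[0], 0.0)]                        # (temperature, cumulative enthalpy)
h = 0.0
for t_lo, t_hi in zip(temps, temps[1:]):
    # streams spanning this interval contribute their CP to the combined slope
    cp_sum = sum(cp for ts, tt, cp in hot_streams if tt <= t_lo and ts >= t_hi)
    h += cp_sum * (t_hi - t_lo)
    curve.append((t_hi, h))

print(curve)   # piecewise-linear T-H points of the hot composite curve
```

The cold composite curve is built in exactly the same way from the cold streams.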

Figure 34 Temperature-Enthalpy Relations Used to Construct Composite Curves
For heat exchange to occur from the hot stream to the cold stream, the hot stream cooling curve must
lie above the cold stream heating curve. Because of the 'kinked' nature of the composite curves
(Figure 35), they approach each other most closely at one point defined by the minimum approach
temperature (DTmin). DTmin can be measured directly from the T-H profiles as the minimum
vertical difference between the hot and cold curves. This point of minimum temperature difference
represents a bottleneck in heat recovery and is commonly referred to as the "Pinch". Increasing
the DTmin value shifts the curves horizontally apart, resulting in lower process-to-
process heat exchange and higher utility requirements. At a particular DTmin value, the overlap shows
the maximum possible scope for heat recovery within the process. The hot end and cold end
overshoots indicate minimum hot utility requirement (QHmin) and minimum cold utility requirement
(QCmin), of the process for the chosen DTmin.
Thus, the energy requirement for a process is supplied via process to process heat exchange and/or
exchange with several utility levels (steam levels, refrigeration levels, hot oil circuit, furnace flue gas,
etc.).
Graphical constructions are not the most convenient means of determining energy needs. A numerical
approach called the "Problem Table Algorithm" (PTA) was developed by Linnhoff & Flower (1978) as a
means of determining the utility needs of a process and the location of the process pinch. The PTA
lends itself to hand calculations of the energy targets.
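A minimal Python sketch of the Problem Table Algorithm, applied to the Table 2 streams with an
illustrative DTmin of 20 °C (the function and variable names are my own, not from the source text):

```python
def problem_table(streams, dt_min):
    """Minimal Problem Table Algorithm sketch.
    streams: list of (TS, TT, CP); hot streams have TS > TT, cold streams TS < TT.
    Returns (QHmin, QCmin, pinch shifted temperature)."""
    shift = dt_min / 2.0
    shifted = []
    for ts, tt, cp in streams:
        if ts > tt:                       # hot stream: shift down by DTmin/2
            shifted.append((ts - shift, tt - shift, cp, "hot"))
        else:                             # cold stream: shift up by DTmin/2
            shifted.append((ts + shift, tt + shift, cp, "cold"))

    temps = sorted({t for s in shifted for t in s[:2]}, reverse=True)

    # Heat surplus/deficit in each shifted temperature interval
    interval_dh = []
    for t_hi, t_lo in zip(temps, temps[1:]):
        cp_hot = sum(cp for a, b, cp, kind in shifted
                     if kind == "hot" and a >= t_hi and b <= t_lo)
        cp_cold = sum(cp for a, b, cp, kind in shifted
                      if kind == "cold" and b >= t_hi and a <= t_lo)
        interval_dh.append((cp_hot - cp_cold) * (t_hi - t_lo))

    # Cascade with zero hot utility, then lift by the largest deficit
    cascade, heat = [0.0], 0.0
    for dh in interval_dh:
        heat += dh
        cascade.append(heat)
    qh_min = -min(cascade)
    qc_min = cascade[-1] + qh_min
    pinch_index = cascade.index(min(cascade))
    return qh_min, qc_min, temps[pinch_index]

# Streams of Table 2 with DTmin = 20 degC (illustrative choice)
streams = [(270, 160, 18), (220, 70, 35), (60, 205, 20), (160, 210, 50)]
print(problem_table(streams, 20))
# -> roughly (380.0, 2210.0, 170.0): QHmin and QCmin in kW, pinch at a shifted 170 degC
```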
To summarize, the composite curves provide overall energy targets but do not clearly indicate how
much energy must be supplied by different utility levels. The utility mix is determined by the Grand
Composite Curve.

Figure 35 Combined Composite Curves
GRAND COMPOSITE CURVE (GCC): In selecting the utilities to be used, determining utility
temperatures, and deciding on utility requirements, the composite curves and PTA are not
particularly useful. A further tool, the Grand Composite Curve (GCC), was
introduced in 1982 by Itoh, Shiroko and Umeda. The GCC (Figure 36) shows the variation of
heat supply and demand within the process. Using this diagram the designer can decide which
utilities should be used. The designer aims to maximize the use of the cheaper utility levels and
minimize the use of the expensive utility levels: low-pressure steam and cooling water are
preferred to high-pressure steam and refrigeration, respectively.
The information required for the construction of the GCC comes directly from the Problem Table
Algorithm developed by Linnhoff & Flower (1978). The method involves shifting (along the temperature
[Y] axis) of the hot composite curve down by ½ DTmin and that of cold composite curve up by ½
DTmin. The vertical axis on the shifted composite curves shows process interval temperature. In other
words, the curves are shifted by subtracting part of the allowable temperature approach from the hot
stream temperatures and adding the remaining part of the allowable temperature approach to the cold
stream temperatures. The result is a scale based upon process temperature having an allowance for
temperature approach (DTmin). The Grand Composite Curve is then constructed from the enthalpy
(horizontal) differences between the shifted composite curves at different temperatures. On the GCC,
the horizontal distance separating the curve from the vertical axis at the top of the temperature scale
shows the overall hot utility consumption of the process.

Figure 36 Grand Composite Curve
Figure 36 shows that it is not necessary to supply the hot utility at the top temperature level. The GCC
indicates that we can supply the hot utility over two temperature levels, TH1 (HP steam) and TH2 (LP
steam). Recall that when placing utilities on the GCC, interval temperatures, and not actual utility temperatures,
should be used. The total minimum hot utility requirement remains the same: QHmin = H1 (HP steam)
+ H2 (LP steam). Similarly, QCmin = C1 (Refrigerant) +C2 (CW). The points TH2 and TC2 where the
H2 and C2 levels touch the grand composite curve are called the "Utility Pinches." The shaded green
pockets represent the process-to-process heat exchange.
In summary, the grand composite curve is one of the most basic tools used in pinch analysis for the
selection of the appropriate utility levels and for targeting of a given set of multiple utility levels. The
targeting involves setting appropriate loads for the various utility levels by maximizing the least
expensive utility loads and minimizing the loads on the most expensive utilities.

Estimation of Minimum Energy Cost Targets


Once the DTmin is chosen, minimum hot and cold utility requirements can be evaluated from the
composite curves. The GCC provides information regarding the utility levels selected to meet QHmin
and QCmin requirements.
If the unit cost of each utility is known, the total energy cost target is simply the sum, over all utility
levels, of each utility duty multiplied by its unit cost:
Total Energy Cost = Σ [QU x CU], where QU is the duty of utility level U and CU its unit cost.
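A minimal sketch of that calculation follows; the utility duties and unit costs shown are purely
illustrative assumptions, not figures from the text:

```python
# Annual energy cost target = sum of (utility duty x unit cost).
# Duties in kW, unit costs in $/kW.yr -- illustrative values only.
utilities = {
    "HP steam":      (250, 160),
    "LP steam":      (130, 110),
    "Cooling water": (2210, 10),
}

total_cost = sum(duty * unit_cost for duty, unit_cost in utilities.values())
print(f"Total annual energy cost target: ${total_cost:,.0f}/yr")
```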

Estimation of Heat Exchanger Network ( HEN ) Capital Cost Targets


The capital cost of a heat exchanger network is dependent upon three factors:
1. the number of exchangers,
2. the overall network area,
3. the distribution of area between the exchangers.
Pinch analysis enables targets for the overall heat transfer area and minimum number of units of a
heat exchanger network (HEN) to be predicted prior to detailed design. The area distribution between
exchangers cannot be predicted ahead of design, so the area is assumed to be evenly distributed between the units.
• AREA TARGETING: The calculation of surface area for a single counter-current heat
exchanger requires the knowledge of the temperatures of streams in and out (dTLM i.e. Log
Mean Temperature Difference or LMTD), overall heat transfer coefficient (U-value), and total
heat transferred (Q). The area is given by the relation
Area = Q / [ U x dTLM ]
The composite curves can be divided into a set of adjoining enthalpy intervals such that within each
interval the hot and cold composite curves do not change slope. Here the heat exchange is assumed
to be "vertical" (pure counter-current heat exchange): the hot streams in any enthalpy interval exchange
heat with the cold streams at the temperature vertically below them. The total area of the
HEN (Amin) is given by the formula in the figure below, where i denotes the ith enthalpy interval, j
denotes the jth stream, and dTLM denotes the LMTD of the ith interval.

Figure 37 HEN AREA min Estimation from Composite Curves
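The area target of Figure 37 is commonly written (under the vertical, counter-current heat exchange
assumption, allowing for individual film coefficients hj) as:

Amin = Σ (over enthalpy intervals i) { 1/dTLM,i x Σ (over streams j) [ qi,j / hj ] }

where qi,j is the heat duty of stream j in interval i and hj is its film heat transfer coefficient. This is
the standard form from the pinch literature rather than a transcription of the figure itself.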


The actual HEN total area required is generally within 10% of the area target as calculated above.
With the inclusion of temperature correction factors, area targeting can be extended to non-counter-current
heat exchange as well.
NUMBER OF UNITS TARGETING: The minimum number of heat exchanger units (Nmin)
required for MER (minimum energy requirement, or maximum energy recovery) can
be evaluated prior to HEN design by using a simplified form of Euler's graph theorem. In
designing for the minimum energy requirement (MER), no heat transfer is allowed across the
pinch, so a realistic target for the minimum number of units (NminMER) is the sum of the
targets evaluated above and below the pinch separately:
NminMER = [Nh + Nc + Nu - 1]AP + [Nh + Nc + Nu - 1]BP
Where :

Nh = Number of hot streams

Nc=Number of cold streams

Nu = Number of utility streams

AP / BP : Above / Below Pinch
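As a hypothetical illustration (the stream counts are assumed, not taken from the text): with 2 hot
streams, 1 cold stream and 1 hot utility above the pinch, and 1 hot stream, 2 cold streams and 1 cold
utility below it,

NminMER = [2 + 1 + 1 - 1]AP + [1 + 2 + 1 - 1]BP = 3 + 3 = 6 units.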

• HEN TOTAL CAPITAL COST TARGETING: The targets for the minimum surface area (Amin)
and the number of units (Nmin) can be combined with the heat exchanger cost law to
determine the target for HEN capital cost (CHEN). The capital cost is annualized using an
annualization factor that takes into account interest payments on borrowed capital. The
exchanger cost law takes the form: Exchanger Cost = a + b x (Area)^c.

For the Exchanger Cost Equation shown above, typical values for a carbon steel shell and tube
exchanger would be a = 16,000, b = 3,200, and c = 0.7. The installed cost can be considered to be 3.5
times the purchased cost given by the Exchanger Cost Equation.
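A minimal sketch of the capital cost target using that cost law follows. The installation factor of 3.5 and
the a, b, c coefficients are the values quoted above; the Amin, Nmin and annualization figures are
illustrative assumptions only:

```python
def hen_capital_cost_target(a_min, n_min, a=16000, b=3200, c=0.7,
                            install_factor=3.5, annualization=0.3):
    """Annualized HEN capital cost target, assuming area evenly spread over Nmin units."""
    area_per_unit = a_min / n_min
    purchased = n_min * (a + b * area_per_unit ** c)   # exchanger cost law
    return annualization * install_factor * purchased  # $/yr

# e.g. Amin = 1,500 m2 spread over Nmin = 6 units
print(f"{hen_capital_cost_target(1500.0, 6):,.0f} $/yr")
```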
• Estimation of Optimum DTmin Value by Energy-Capital Trade Off
To arrive at an optimum DTmin value, the total annual cost (the sum of total annual energy and capital
cost) is plotted against varying DTmin values (Figure 38). Three key observations can be made from
Figure 38:
An increase in DTmin values results in higher energy costs and lower capital costs.
A decrease in DTmin values results in lower energy costs and higher capital costs.
An optimum DTmin exists where the total annual cost of energy and capital is minimized.
Thus, by systematically varying the temperature approach we can determine the optimum heat
recovery level, or DTminOPTIMUM, for the process.
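A minimal sketch of that trade-off scan, assuming hypothetical helper functions
annual_energy_cost(dtmin) and annual_capital_cost(dtmin) built from the targeting steps above:

```python
def optimum_dtmin(dtmin_values, annual_energy_cost, annual_capital_cost):
    """Return the DTmin value with the lowest total annual cost."""
    return min(dtmin_values,
               key=lambda dt: annual_energy_cost(dt) + annual_capital_cost(dt))

# e.g. scan 5 to 40 degC in 5 degC steps:
# best = optimum_dtmin(range(5, 45, 5), annual_energy_cost, annual_capital_cost)
```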

Figure 38 Energy-Capital Cost Trade Off (Optimum DTmin)

Estimation of Practical Targets for HEN Design


The heat exchanger network designed on the basis of the estimated optimum DTmin value is not
always the most appropriate design. A very small DTmin value, perhaps 8 °C, can lead to a very
complicated network design with a large total area due to low driving forces. The designer, in practice,
selects a higher value (15 °C) and calculates the marginal increases in utility duties and area
requirements. If the marginal cost increase is small, the higher value of DTmin is selected as the
practical pinch point for the HEN design.

Recognizing the significance of the pinch temperature allows energy targets to be realized by the design
of an appropriate heat recovery network.
So what is the significance of the pinch temperature?
The pinch divides the process into two separate systems each of which is in enthalpy balance with the
utility. The pinch point is unique for each process. Above the pinch, only the hot utility is required.
Below the pinch, only the cold utility is required. Hence, for an optimum design, no heat should be
transferred across the pinch. This is known as the key concept in Pinch Technology.
To summarize, Pinch Technology gives three rules that form the basis for practical network design:
No external heating below the Pinch.
No external cooling above the Pinch.
No heat transfer across the Pinch.
Violation of any of the above rules results in higher energy requirements than the minimum
requirements theoretically possible.
Plus/Minus Principle: The overall energy needs of a process can be further reduced by introducing
process changes (changes in the process heat and material balance). There are several parameters
that could be changed such as reactor conversions, distillation column operating pressures and reflux
ratios, feed vaporization pressures, or pump-around flow rates. The number of possible process
changes is nearly infinite. By applying the pinch rules as discussed above, it is possible to identify
changes in the appropriate process parameter that will have a favorable impact on energy
consumption. This is called the "Plus/Minus Principle."
Applying the pinch rules to the study of the composite curves provides the following guidelines. Any
• Increase (+) in hot stream duty above the pinch, or
• Decrease (-) in cold stream duty above the pinch
will result in a reduced hot utility target, and any
• Decrease (-) in hot stream duty below the pinch, or
• Increase (+) in cold stream duty below the pinch
will result in a reduced cold utility target.
These simple guidelines provide a definite reference for the adjustment of single heat duties such as
vaporization of a recycle, pump-around condensing duty, and others. Often it is possible to change
temperatures rather than the heat duties. The target should be to
• Shift hot streams from below the pinch to above and
• Shift cold streams from above the pinch to below.
The process changes that can help achieve such stream shifts essentially involve changes in following
operating parameters:
• reactor pressure/temperatures
• distillation column temperatures, reflux ratios, feed conditions, pump around conditions,
intermediate condensers
• evaporator pressures
• storage vessel temperatures
For example, if the pressure of a feed vaporizer is lowered, the vaporization duty can shift from above to
below the pinch. This leads to a reduction in both hot and cold utilities.
Appropriate Placement Principles: Apart from changes in process parameters, proper
integration of key equipment in the process with respect to the pinch point should also be considered. The
pinch concept of "Appropriate Placement" (integration of operations in such a way that there is
reduction in the utility requirement of the combined system) is used for this purpose. Appropriate
placement principles have been developed for distillation columns, evaporators, heat engines,

furnaces, and heat pumps. For example, a single-effect evaporator having equal vaporization and
condensation loads, should be placed such that both loads balance each other and the evaporator can
be operated without any utility costs. This means that appropriate placement of the evaporator is on
either side of the pinch and not across the pinch.
In addition to the above pinch rules and principles, a large number of factors must also be considered
during the design of heat recovery networks. The most important are operating cost, capital cost,
safety, operability, future requirements, and plant operating integrity. Operating costs are dependent
on hot and cold utility requirements as well as pumping and compressor costs. The capital cost of a
network is dependent on a number of factors including the number of heat exchangers, heat transfer
areas, materials of construction, piping, and the cost of supporting foundations and structures.
With a little practice, the above principles enable the designer to quickly screen 40-50 possible
modifications and choose the 3 or 4 that will give the best overall cost effect.
The essence of the pinch approach is to explore the options of modifying the core process design,
heat exchangers, and utility systems with the ultimate goal of reducing the energy and/or capital cost.

Design of the Heat Exchanger Network


The design of a new HEN is best executed using the "Pinch Design Method (PDM)". The systematic
application of the PDM allows the design of a good network that achieves the energy targets within
practical limits. The method incorporates two fundamentally important features: (1) it recognizes that
the pinch region is the most constrained part of the problem (consequently it starts the design at the
pinch and develops by moving away) and (2) it allows the designer to choose between match options.
In effect, the design of the network examines which "hot" streams can be matched to "cold" streams via
heat recovery. This can be achieved by employing "tick off" heuristics to identify the heat loads on the
pinch exchangers. Every match brings one stream to its target temperature. As the pinch divides the
heat exchange system into two thermally independent regions, HENs for the above-pinch and below-pinch
regions are designed separately. When the heat recovery is maximized, the remaining thermal needs
must be supplied by the external utilities.
The graphical method of representing flow streams and heat recovery matches is called a 'grid
diagram' (Figure 39).

Figure 39 Typical Grid Diagram


All the cold (blue lines) and hot (red line) streams are represented by horizontal lines. The entrance
and exit temperatures are shown at either end. The vertical line in the middle represents the pinch
temperature. The circles represent heat exchangers. Unconnected circles represent exchangers using
utility heating and cooling.

The design of a network is based on certain guidelines like the "CP Inequality Rule", "Stream
Splitting", "Driving Force Plot" and "Remaining Problem Analysis".
Having made all the possible matches, the two designs above and below the pinch are then brought
together and usually refined to further minimize the capital cost. After the network has been designed
according to the pinch rules, it can be further subjected to energy optimization. Optimizing the network
involves both topological and parametric changes of the initial design in order to minimize the total
cost.
Benefits and Applications of Pinch Technology
One of the main advantages of Pinch Technology over conventional design methods is the ability to
set energy and capital cost targets for an individual process or for an entire production site ahead of
design. Therefore, in advance of identifying any projects, we know the scope for energy savings and
investment requirements.
General Process Improvements
In addition to energy conservation studies, Pinch Technology enables process engineers to achieve
the following general process improvements:
Update or Modify Process Flow Diagrams (PFDs): Pinch quantifies the savings available by
changing the process itself. It shows where process changes reduce the overall energy target, not just
local energy consumption.
Conduct Process Simulation Studies: Pinch replaces the old energy studies with information that
can be easily updated using simulation. Such simulation studies can help avoid unnecessary capital
costs by identifying energy savings with a smaller investment before the projects are implemented.
Set Practical Targets: By taking into account practical constraints (difficult fluids, layout, safety, etc.),
theoretical targets are modified so that they can be realistically achieved. Comparing practical with
theoretical targets quantifies opportunities "lost" by constraints - a vital insight for long-term
development.
Debottlenecking: Pinch Analysis, when specifically applied to debottlenecking studies, can lead to
the following benefits compared to a conventional revamp:
• Reduction in capital costs
• Decrease in specific energy demand giving a more competitive production facility
For example, debottlenecking of distillation columns by Column Targeting can be used to identify less
expensive alternatives to column retraying or installation of a new column.
Determine Opportunities for Combined Heat and Power (CHP) Generation: A well-designed CHP
system significantly reduces power costs. Pinch shows the best type of CHP system that matches the
inherent thermodynamic opportunities on the site. Unnecessary investments and operating costs can
be avoided by sizing plants to supply energy that takes heat recovery into consideration. Heat
recovery should be optimized by Pinch Analysis before specifying CHP systems.
Decide what to do with low-grade waste heat: Pinch shows which waste heat streams can be
recovered and lends insight into the most effective means of recovery.
Industrial Applications
The application of Pinch Technology has resulted in significant improvements in the energy and capital
efficiency of industrial facilities worldwide. It has been successfully applied in many different industries
from petroleum and base chemicals to food and paper. Both continuous and batch processes have
been successfully analyzed on an individual unit and site-wide basis. Pinch technology has been
extensively used to capitalize on the mistakes of the past. It identifies the existence of built-in spare
heat transfer areas and presents the designer with opportunities for cheap retrofits. In the design of
new plants, Pinch Analysis has played a very important role in minimizing capital costs.
A Case Study: When Pennzoil was adding a residual catalytic cracking (RCC) unit, the gas plant
associated with the RCC, and an alkylation unit at its Atlas Refining facility in Shreveport, energy
efficiency was one of the major considerations in engineering the refinery expansion. The Electric Power
Research Institute (EPRI) and Pennzoil's energy provider, SWEPCO, used pinch technology to carry

out an optimization study of the new units and the utility systems that serve them rather than simply
incorporating standard process packages provided by licensors. The pinch study identified
opportunities for saving up to 23.7% of the process heating through improved heat integration. Net
savings for Pennzoil were estimated at $13.7 million over 10 years.

Catalysts and Reaction Engineering
• Chemical reactions
• Reaction kinetics
• Introduction to catalysis

Chemical Reactions

Separation: heavy on the bottom, light on the top

Modern separation involves piping oil through hot furnaces. The resulting liquids and vapours
are discharged into distillation towers, the tall, narrow columns that give refineries their
distinctive skylines.

Inside the towers, the liquids and vapours separate into components or fractions according
to weight and boiling point. The lightest fractions, including gasoline and liquid petroleum gas
(LPG), vapourize and rise to the top of the tower, where they condense back to liquids.
Medium weight liquids, including kerosene and diesel oil distillates, stay in the middle.
Heavier liquids, called gas oils, separate lower down, while the heaviest fractions with the
highest boiling points settle at the bottom. These tarlike fractions, called residuum, are
literally the "bottom of the barrel."

The fractions now are ready for piping to the next station or plant within the refinery. Some
components require relatively little additional processing to become asphalt base or jet fuel.
However, most molecules that are destined to become high-value products require much
more processing.

Conversion: cracking and rearranging molecules to add value


This is where refining's fanciest footwork takes place--where fractions from the distillation
towers are transformed into streams (intermediate components) that eventually become
finished products. This also is where a refinery makes money, because only through
conversion can most low-value fractions become gasoline.

The most widely used conversion method is called cracking because it uses heat and
pressure to "crack" heavy hydrocarbon molecules into lighter ones. A cracking unit consists
of one or more tall, thick-walled, bullet-shaped reactors and a network of furnaces, heat
exchangers and other vessels.

Fluid catalytic cracking, or "cat cracking," is the basic gasoline-making process. Using
intense heat (about 1,000 degrees Fahrenheit), low pressure and a powdered catalyst (a
substance that accelerates chemical reactions), the cat cracker can convert most relatively
heavy fractions into smaller gasoline molecules.

Hydrocracking applies the same principles but uses a different catalyst, slightly lower
temperatures, much greater pressure and hydrogen to obtain chemical reactions. Although
not all refineries employ hydrocracking, Chevron is an industry leader in using this
technology to cost-effectively convert medium- to heavyweight gas oils into high-value
streams. The company's patented hydrocracking process, which takes place in the
Isocracker unit, produces mostly gasoline and jet fuel.

Some refineries also have cokers, which use heat and moderate pressure to turn residuum
into lighter products and a hard, coallike substance that is used as an industrial fuel. Cokers
are among the more peculiar-looking refinery structures. They resemble a series of giant
drums with metal derricks on top.

Cracking and coking are not the only forms of conversion. Other refinery processes, instead
of splitting molecules, rearrange them to add value. Alkylation, for example, makes gasoline
components by combining some of the gaseous byproducts of cracking. The process, which
essentially is cracking in reverse, takes place in a series of large, horizontal vessels and tall,
skinny towers that loom above other refinery structures.

Reforming uses heat, moderate pressure and catalysts to turn naphtha, a light, relatively
low-value fraction, into high-octane gasoline components. Chevron's patented reforming
process is called Rheniforming, for the rhenium-platinum catalyst used.

Treatment: the finishing touch


Back when the first refineries used to boil crude oil to get kerosene, they didn't have to worry
about customer specifications or government standards. Today, however, a major portion of
refining involves blending, purifying, fine-tuning and otherwise improving products to meet
these requirements.

To make gasoline, refinery technicians carefully combine a variety of streams from the
processing units. Among the variables that determine the blend are octane level, vapour
pressure ratings and special considerations, such as whether the gasoline will be used at
high altitudes. Technicians also add performance additives and dyes that distinguish the
various grades of fuel.

Refining has come a long way since the oil-boiling days of early refining. By the time a gallon of
gasoline is pumped into a car's tank, it contains more than 200 hydrocarbons and additives.
All that changing of molecules pays off in a product that ensures smooth, high-performance
driving.

Reaction Kinetics
A simple chemical reaction - the rearrangement of electrons and bonding partners - occurs
between two small molecules. From understanding the kinetics of the reaction, and the
equilibrium extent to which it can proceed, come applications: the network of reactions during
combustion, the chain reactions that form polymers, the multiple steps in the synthesis of a
complex pharmaceutical molecule, the specialized reactions of proteins and metabolism.
Chemical kinetics is the chemical engineer's tool for understanding chemical change.
A catalyst influences the reaction rate. Catalysts are sought for increasing production,
improving the reaction conditions, and emphasizing a desired product among several
possibilities. The challenge is to design the catalyst, to increase its effectiveness and
stability, and to create methods to manufacture it.
The chemical reactor should produce a desired product reliably, safely, and economically. In
designing a reactor, the chemical engineer must consider how the chemical kinetics, often
modified by catalysis, interacts with the transport phenomena in flowing materials. New
microreactor designs are expanding the concept of what a reactor may do, how reactions
may be conducted, and what is required to scale a process from laboratory to production.
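As a simple illustration of how kinetics feed into reactor sizing (an assumed first-order reaction; the
rate constant and conversion used below are illustrative, not from the text), the batch time to reach a
given conversion follows from integrating the rate law:

```python
from math import log

def first_order_batch_time(k, conversion):
    """Time for an isothermal, constant-volume batch reactor to reach fractional
    conversion X with a first-order reaction: t = ln(1/(1-X)) / k."""
    return log(1.0 / (1.0 - conversion)) / k

# e.g. k = 0.15 1/min, 90% conversion
print(f"t = {first_order_batch_time(0.15, 0.90):.1f} min")
```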

Crude Distillation

Distillation is the first step in the processing of crude oil and it takes place in a tall steel tower
called a fractionation column. The inside of the column is divided at intervals by horizontal
trays. The column is kept very hot at the bottom (the column is insulated) but as different
hydrocarbons boil at different temperatures, the temperature gradually reduces towards the
top, so that each tray is a little cooler than the one below.

The crude needs to be heated before entering the fractionation column, and this is done first
in a series of heat exchangers where heat is taken from other process streams which
require cooling before being sent to rundown. Heat is also exchanged against condensing
streams from the main column. Typically, the crude will be heated in this way up to a
temperature of 200 - 280 °C before entering a furnace.
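As a rough illustration of the heat recovered in such a preheat train (the flow rate and specific heat
below are assumed values, not figures from the text), the duty follows from Q = m x Cp x dT:

```python
def preheat_duty_mw(mass_flow_kg_s, cp_kj_kg_k, t_in_c, t_out_c):
    """Sensible-heat duty Q = m * Cp * dT, returned in MW."""
    return mass_flow_kg_s * cp_kj_kg_k * (t_out_c - t_in_c) / 1000.0

# e.g. 100 kg/s of crude (Cp ~ 2.3 kJ/kg.K) heated from 25 to 240 degC
print(f"Preheat duty ~ {preheat_duty_mw(100, 2.3, 25, 240):.0f} MW")
```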

As the raw crude oil arriving contains quite a bit of water and salt, it is normally sent for salt
removal first, in a piece of equipment called a desalter. Upstream of the desalter, the crude is
mixed with a water stream, typically about 4 - 6% on feed. Intense mixing takes place over a
mixing valve and (optionally) a static mixer. The desalter, a large liquid-full vessel, uses an
electric field to separate the crude from the water droplets. It operates best at 120 - 150 °C,
hence it is conveniently placed somewhere in the middle of the preheat train.

Some of the salts contained in the crude oil, particularly magnesium chloride, are hydrolysable
at temperatures above 120 °C. Upon hydrolysis, the chlorides are converted into hydrochloric
acid, which will find its way to the distillation column's overhead, where it will corrode the
overhead condensers. A well-performing desalter can remove about 90% of the salt in raw
crude.

Downstream of the desalter, the crude is further heated in heat exchangers and starts
vapourising, which increases the system pressure drop. At about 170 - 200 °C, the crude
enters a 'pre-flash vessel', operating at about 2 - 5 barg, where the vapours are separated
from the remaining liquid. The vapours are sent directly to the fractionation column, and by doing
so, the hydraulic load on the remainder of the crude preheat train and furnace is reduced
(smaller piping and pumps).

Just upstream of the pre-flash vessel, a small caustic stream is mixed with the crude in order to
neutralise any hydrochloric acid formed by hydrolysis. The sodium chloride formed will leave
the fractionation column via the bottom residue stream. The dosing rate of caustic is adjusted
based on chloride measurements in the overhead vessel (typically 10 - 20 ppm).

At about 200 - 280 °C the crude enters the furnace, where it is heated further to about 330 -
370 °C. The furnace outlet stream is sent directly to the fractionation column. Here, it is
separated into a number of fractions, each having a particular boiling range.

At 350 °C and about 1 barg, most of the fractions in the crude oil vapourise and rise up the
column through perforations in the trays, losing heat as they rise. When each fraction reaches
the tray where the temperature is just below its own boiling point, it condenses and changes
back into the liquid phase. A continuous liquid phase flows by gravity through 'downcomers'
from tray to tray downwards. In this way, the different fractions are gradually separated from
each other on the trays of the fractionation column. The heaviest fractions condense on the
lower trays and the lighter fractions condense on the trays higher up in the column. At
different elevations in the column, on special trays called draw-off trays, fractions can be
drawn off by gravity through pipes for further processing in the refinery.

At the top of the column, vapours leave through a pipe and are routed to an overhead condenser,
typically cooled by air fin-fans. At the outlet of the overhead condensers, at a temperature of about
40 °C, a mixture of gas and liquid naphtha exists, which falls into an overhead
accumulator. Gases are routed to a compressor for further recovery of LPG (C3/C4), while the
liquids (gasoline) are pumped to a hydrotreater unit for sulfur removal.

A fractionation column needs a flow of condensing liquid downwards in order to provide a
driving force for separation between light and heavy fractions. At the top of the column this
liquid flow is provided by pumping a stream back from the overhead accumulator into the
column. Unfortunately, a lot of the heat provided by the furnace to vapourise hydrocarbons is
lost against ambient air in the overhead fin-fan coolers. A clever way of recovering this heat
of the condensing hydrocarbons is via the circulating refluxes of the column. In a
circulating reflux, a hot side draw-off from the column is pumped through a series of heat
exchangers (against crude, for instance), where the stream is cooled down. The cool stream is
sent back into the column at a higher elevation, where it is brought into contact with hotter
rising vapours. This provides an internal condensing mechanism inside the column, in a
similar way to the top reflux which is sent back from the overhead accumulator. The
main objective of a circulating reflux is therefore to recover heat from condensing vapours. A
fractionating column will have several (typically three) such refluxes, each providing
sufficient liquid flow down the corresponding section of the column. An additional advantage
of circulating refluxes is that they reduce the vapour load going upwards in the
column. This provides the opportunity to have a smaller column diameter for the top sections of
the tower. Such a reduction in diameter is called a 'swage'.

The lightest side draw-off from the fractionating column is a fraction called kerosene, boiling
in the range 160 - 280 °C, which falls through a pipe into a smaller column called a 'side
stripper'. The purpose of the side stripper is to remove very light hydrocarbons by using steam
injection or an external heater called a 'reboiler'. The stripping steam rate, or reboiler duty, is
controlled so as to meet the flashpoint specification of the product. As in the
atmospheric column, the side stripper has fractionating trays for providing contact between
vapour and liquid. The vapours produced from the top of the side stripper are routed back via
a pipe into the fractionating column.

The second and third (optional) side draw-offs from the main fractionating column are gasoil
fractions, boiling in the range 200 - 400 °C, which are ultimately used for blending the final
diesel product. As with the kerosene product, the gasoil fractions (light and heavy
gasoil) are first sent to a side stripper before being routed to further treating units.

At the bottom of the fractionation column a heavy, brown/black coloured fraction called
residue is drawn off. In order to strip all light hydrocarbons from this fraction properly, the
bottom section of the column is equipped with a set of stripping trays, which are operated by
injecting some stripping steam (1 - 3% on bottom product) into the bottom of the column.

Catalytic Cracking

Introduction

As early as the 1930s it was found that when heavy oil fractions are heated over clay-type
materials, cracking reactions occur which lead to significant yields of lighter hydrocarbons.
While the search was going on for suitable cracking catalysts based on natural clays, some
companies concentrated their efforts on the development of synthetic catalysts. This resulted
in the synthetic amorphous silica-alumina catalyst, which was commonly used until 1960,
when it was slightly modified by incorporation of some crystalline material (zeolite catalyst).
When the success of the Houdry fixed bed process was announced in the late 1930s, the
companies that had developed the synthetic catalyst decided to try to develop a process
using finely powdered catalyst. Subsequent work finally led to the development of the
fluidised bed catalytic cracking (FCC) process, which has become the most important
catalytic cracking process.

Originally, the finely powdered catalyst was obtained by grinding the catalyst material, but
nowadays, it is produced by spray-drying a slurry of silica gel and aluminium hydroxide in a
stream of hot flue gases. Under the right conditions, the catalyst is obtained in the form of
small spheres with particles in the range of 1-50 microns.

When heavy oil fractions are passed in the gas phase through a bed of powdered catalyst at a
suitable velocity (0.1-0.7 m/s), the catalyst and the gas form a system that behaves like a liquid,
i.e. it can flow from one vessel to another under the influence of a hydrostatic pressure. If the
gas velocity is too low, the powder does not fluidise and it behaves like a solid. If the velocity is
too high, the powder will just be carried away with the gas. When the catalyst is properly
fluidised, it can be continuously transported from a reactor vessel, where the cracking
reactions take place and where it is fluidised by the hydrocarbon vapour, to a regenerator
vessel, where it is fluidised by the air and the products of combustion, and then back to the
reactor. In this way the process is truly continuous.

The first FCC unit went on stream in Standard Oil of New Jersey's refinery in Baton Rouge,
Louisiana in May 1942. Since that time, many companies have developed their own FCC
process and there are numerous variations in unit configuration.

Figure 40 Fluid catalytic cracking

FCC Process Configuration

Hot feed, together with some steam, is introduced at the bottom of the riser via special
distribution nozzles. Here it meets a stream of hot regenerated catalyst from the regenerator
flowing down the inclined regenerator standpipe. The oil is heated and vapourised by the hot
catalyst and the cracking reactions commence. The vapour, initially formed by vapourisation
and successively by cracking, carries the catalyst up the riser at 10-20 m/s in a dilute phase.
At the outlet of the riser the catalyst and hydrocarbons are quickly separated in a special
device. The catalyst (now partly deactivated by deposited coke) and the vapour then enter
the reactor. The vapour passes overhead via cyclone separator for removal of entrained
catalyst before it enters the fractionator and further downstream equipment for product
separation. The catalyst then descends into the stripper where entrained hydrocarbons are
removed by injection of steam, before it flows via the inclined stripper standpipe into the
fluidised catalyst bed in the regenerator.

Air is supplied to the regenerator by an air blower and distributed throughout the catalyst
bed. The coke deposited is burnt off and the regenerated catalyst passes down the
regenerator standpipe to the bottom of the riser, where it joins the fresh feed and the cycle
recommences.

The flue gas (the combustion products) leaving the regenerator catalyst bed entrains catalyst
particles. In particular, it entrains "fines", a fine dust formed by mechanical rubbing of catalyst
particles taking place in the catalyst bed. Before leaving the regenerator, the flue gas
therefore passes through cyclone separators where the bulk of this entrained catalyst is
collected and returned to the catalyst bed.

Normally a modern FCC is driven by an expansion turbine to minimise energy consumption. In
this expansion turbine, the current of flue gas at a pressure of about 2 barg drives a wheel by
striking impellers fitted on this wheel. The power is then transferred to the air blower via a
common shaft. This system is usually referred to as a "power recovery system". To reduce
the wear caused by the impact of catalyst particles on the impellers (erosion), the flue gas
must be virtually free of catalyst particles. The flue gas is therefore passed through a vessel
containing a whole battery of small, highly efficient cyclone separators, where the remaining
catalyst fines are collected for disposal.

Before being disposed of via a stack, the flue gas is passed through a waste heat boiler,
where its remaining heat is recovered by steam generation.

In the version of the FCC process described here, the heat released by burning the coke in
the regenerator is just sufficient to supply the heat required for the riser to heat up, vapourise
and crack the hydrocarbon feed. The units where this balance occurs are called " heat
balanced" units. Some feeds caused excessive amounts of coke to be deposited on the
catalyst, i.e. much more than is required for burning in the regenerator and to have a "heat
balanced" unit. In such cases, heat must be removed from the regenerator, e.g. by passing
water through coils in the regenerator bed to generate steam. Some feeds cause so little
coke to be deposited on the catalyst that heat has to be supplied to the system. This is done
by preheating the hydrocarbon feed in a furnace before contacting it with the catalyst.

Main Characteristics

• A special device in the bottom of the riser to enhance contacting of catalyst and
hydrocarbon feed.
• The cracking takes place during a short time (2-4 seconds) in a riser ("short-contact-
time riser") at high temperatures (500-540 °C at riser outlet).
• The catalyst used is so active that a special device for quick separation of catalyst
and hydrocarbons at the outlet of the riser is required to avoid undesirable cracking
after the mixture has left the riser. Since no cracking in the reactor is required, the
reactor no longer functions as a reactor; it merely serves as a holding vessel for the
cyclones.
• Regeneration takes place at 680-720 °C. With the use of special catalysts, all the
carbon monoxide (CO) in the flue gas is combusted to carbon dioxide (CO2) in the
regenerator.
• Modern FCC includes a power recovery system for driving the air blower.

Equipment in FCC

• Large storage vessels for catalyst (fresh and equilibrium)


• Regenerator
• Reactor
• Main Fractionator
• Product work-up section (several distillation columns in series)
• Product treating facilities

Feedstock & Yield

Before the introduction of residues, vacuum distillates were used as feedstock to load the
catalytic cracker fully. These days, even residues are used to load the cracker. The term
used for this type of configuration is the Long Residue Catalytic Cracking Complex. The only
modifications or additions needed are a residue desalter and a bigger, more heat-resistant
reactor.

The yield pattern of an FCC unit is typically as follows:

Product          % wt on fresh feed

C3 & C4          15
Gasoline         40-50
Heavy Gas Oil    10
Coke             5

Conclusion

The FCC unit can be a real margin improver for many refineries. It is able to convert
residues into high-value products like LPG, butylene, propylene and mogas, together with
gasoil. The FCC is also a starting point for chemical production (polypropylene). Many FCCs have two
modes, a mogas mode and a gasoil mode, and FCCs can be adapted to cater for the two
modes depending on favourable economic conditions. The only disadvantage of an FCC is
that the products produced need to be treated (sulfur removal) to be on specification.
Normally residue FCCs act together with residue hydroconversion processes and
hydrocrackers in order to minimise product quality give-away and get a yield pattern that
better matches the market specifications. Via product blending, expensive treating steps can
be avoided and the units prepare excellent feedstock for each other: desulfurised residue or
hydrowax is excellent FCC feed, while the FCC cycle oils are excellent hydrocracker feed.

In the near future, many refiners will face the challenge of how to desulfurise cat-cracked
gasoline without destroying its octane value. Catalytic distillation appears to be one of the
most promising candidate processes for that purpose.

Catalysis
Catalysts and initiators start or promote chemical reactions that are used to produce organic
chemicals, polymers and adhesives. A chemical catalyst is a substance that increases the
rate at which a chemical reaction occurs; however, the catalyst itself does not undergo
chemical change. An initiator is a chemical compound that helps start a chemical reaction
such as polymerization. Unlike a catalyst, an initiator is usually consumed in the reaction.
Substances such as organic peroxides are commonly used as initiators. According to some
estimates, more than half of all petrochemical processes use catalysts and initiators. In
heterogeneous catalysis, a chemical catalyst provides a surface on which reactants become
adsorbed temporarily, and where chemical bonds in the reactants are weakened, allowing
new bonds to be created. Because the bonds between the products and the catalyst are
weaker, the products are released from the chemical catalyst. Continuous process catalysts
(CPC) are used to process industrial chemicals such as solvents, plasticizers, monomers and
intermediates. Catalytic solutions include a variety of specialized catalyst products.
There are two basic types of catalysis: homogeneous catalysis, in which both the catalyst
and reactants are in the same phase (for example, liquid or gas), and heterogeneous
catalysis, in which the catalyst and reactants are in different phases (for example, solid
catalysts and gaseous reactants). Metal catalysts and initiators are made from precious
metals such as gold, iridium, osmium, palladium, platinum, rhodium, ruthenium and silver.
They are used as heterogeneous catalysts for reactions such as hydrogenation and
isomerization. Zeolites, minerals with a porous structure, can also be used as catalysts.
Synthetic zeolites are the most important catalysts in petrochemical refineries. The proper
selection of catalysts and initiators is an important consideration. For example, using rhodium
or platinum as the catalyst can produce different products depending on whether methane or
ethane is used.
ASTM International (formerly the American Society for Testing and Materials, ASTM)
maintains standards for catalysts such as ASTM D3766, standard terminology relating to
catalysts and initiators. Some catalysts and initiators must be handled as hazardous
materials. The National Fire Protection Association (NFPA) maintains NFPA 432, a standard
which covers the catalyst organic peroxide.

Catalysis And Distillation

Distillation and Other Separation Processes
• Distillation basics
• Phase behavior and vapour/liquid equilibria
• Gas/Liquid separation
• Trays: function, pressure drop, efficiency, flooding, operations, and damage
• Bubble and dew points: calculation and application
• Foam: formation, detection, cause
• Packed v. trayed columns

Distillation basics

ATMOSPHERIC DISTILLATION

The Essence of Atmospheric Distillation


The purpose of atmospheric distillation is to recover light materials and fractionate them into sharp
light fractions. This is accomplished by distilling at atmospheric pressure with steam
stripping for improved cuts. Atmospheric distillation is historically the oldest refining process
and is the first step in crude oil processing.

The Development of Crude Distillation


Distillation technology was sufficiently advanced by the 1500s that feed preheat, reflux, reboiling, and
temperature regulation were all utilized. But it was the idea of Samuel Kier, a
pharmacist from Tarentum, Pennsylvania, that oil sludge from his father's salt wells could be
therapeutic. A suggestion from J.G. Booth, a Philadelphia chemist, led to Kier using
distillation for purification of his “Rock Oil” cure around 1846. A simple batch still to improve
the color was devised by a whiskey distiller using first a one-barrel and then a five-barrel iron
kettle. A second stage was added to remove hydrogen sulfide. Initially Kier sold the distilled
oil as bottled medicine but soon Kier was distilling petroleum for illumination purposes.
Kerosene was born. He called it Kier's Carbon Oil.
The typical refinery was a vertical pot, many taken from coal tar distillation facilities that were
displaced by entry of kerosene as an illuminant. The overhead from the pot was condensed
in a water box with coils submerged in water that was pumped in and flowed out by gravity to
a cooling pond. It was deemed a radical improvement when the still was laid on its side and
made of thinner metal, thereby increasing heat transfer.

Figure 41 Early Batch Fractionation

Feeds and Products for Atmospheric Distillation


The feeds and products are illustrated in the Refinery Schematic and on the figure of the
atmospheric tower.

Feed Preheat Exchanger Train
Since crude oil is heated from ambient temperature to over 700°F, recovery of
heat is of prime importance in crude distillation economics. The preheat path of the crude
is typical: the crude is first used to absorb part of the overhead condensation load,
after which there is exchange with one or more of the liquid sidestreams withdrawn,
beginning with the top sidestream.
The crude desalting is placed within the feed preheat exchanger train. The point at which the
crude is desalted is carefully selected. Normally it is at a temperature of 250°F to 300°F and
is a function of the gravity of the crude, with lighter crudes (lighter than 40°API) being
desalted at 250°F and those heavier than 30°API being desalted at 300°F. Care must be
taken in the temperature and pressure balance of the heat exchanger train that water does
not vapourize.

Crude Electrostatic Desalting


Nearly all crudes contain "salts", the concentration of which is expressed as pounds of
sodium chloride per thousand barrels of crude. Other chlorides such as magnesium chloride
are also present. The salt is present in the emulsified water in the crude. Salt water pumped
to the surface with the crude is settled in facilities in the field and is treated with heat and
chemicals to break oil water emulsions. Nonetheless crude arrives at the battery limits of the
crude unit with emulsified salt water.
Salt can lead to deposits on heat exchangers and drastically reduce heat transfer. Formation
of hydrogen chloride by hydrolysis can lead to corrosion. In addition to salt the water contains
metals in various compounds that can deposit on various catalysts in the refinery. Water
washing is employed to remove the dissolved salt and salt crystals plus the dissolved metals
and dirt from the oil. The untreated oil is mixed with fresh wash water, demulsifiers are added
and the streams are mixed and heated and subjected to additional mixing, followed by
settling.
The oil, salt water, demulsifier and wash water mixture is separated in an electrostatic
settling drum which uses a high voltage electric field across the drum to promote coalescing
of water into droplets which collect in a water layer at the bottom of the settler. The quantity
of emulsified water in the crude is variable but the added wash water can be as much as
10% of the crude oil charge. About 90% of the water can be recovered. Frequently that is
not sufficient, and a second stage of desalting is employed, raising the water recovery to
99%.
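That figure is consistent with treating the stages as roughly independent: if each stage removes about
90% of the water, a sketch of the overall recovery is:

```python
stage_removal = 0.90                       # assumed per-stage water removal
two_stage = 1 - (1 - stage_removal) ** 2   # fraction removed after two stages
print(f"{two_stage:.0%}")                  # ~99%
```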

Figure 42 Desalting - single stage

Figure 43 Desalting - 2 stage


The wash water also washes out sediment such as fine clay, rust and other solids.
Incomplete settling of sediments in production tanks causes more solids to be moved with
the product for removal at the refinery. Normal efficient water washing will remove over half
of these suspended solids. It should be noted that crude desalting may be difficult to operate
because of the variability of emulsions. In addition, the wash water now must be treated for
benzene recovery. After desalting, the crude is exchanged against hotter sidestream liquids drawn from further
down the tower.

Crude Unit Furnace and Overflash


Generalizations about the furnace vary according to the crude and the refiner’s experience.
The following are rough guidelines.

Figure 44 Crude unit furnace

Crude Furnace Duty


After further heat exchange in the train, to raise the temperature to about 550°F, the crude oil
is directed to the furnace, whose tubular configuration led to the term pipe still. Crude unit
furnaces can be fired with oil, refinery fuel gas or natural gas. They can be a box or cabin
furnace, usually with horizontal tubes, or a cylindrical vertical-tube furnace. Heat flux in these
furnaces is not excessively high, at about 10,000 Btu/hr-ft2, and coking is ordinarily not a problem if
the desalter unit is operating properly.
Since the atmospheric tower does not have a reboiler, the heat content of the furnace
supports the total vapour rate to the column plus additional duty called overflash. Heat
removal in the tower is accomplished by condensation of vapour with liquid cooled in
pumparounds. Depending on the crude slate, perhaps half or more of the crude is flashed.
Heater Outlet and Transfer Line

Heater outlet temperature is limited to approximately 750°F
by thermal cracking of the feedstock, which impairs distillate product smoke points and color.
Depending on the crude, this temperature may range from 700°F to 800°F, but cracking of
paraffinic and naphthenic crudes occurs at approximately 650°F - 700°F.
The outlet from the furnace is directed to the flash zone in the fractionation tower via a
transfer line. The pressure drop in the transfer line between the furnace outlet and inlet to the
tower is assumed to be 5 psid and the temperature loss roughly corresponds to 5°F - 10°F.

Overflash

The furnace is normally operated to produce overflash. Overflash is defined as vapourization
in excess of requirements for lifting all of the products taken overhead and withdrawn as
side-streams. The purpose of overflash is to generate internal reflux in the wash trays
between the flash zone and the bottom side-stream draw tray. Overflash vapours are
condensed and wash the trays to prevent carryover and coking. Overflash is generally 3% to
5% (volume) of gross vapour from the flash zone which is essentially overhead and side-
stream products. Overflash is also defined in terms of crude charge to tower and is 2% to 3%
(volume) on that basis.
Although not shown on the schematic, when processing high vapour pressure crudes, a flash
drum is placed in the train after the desalter and before the furnace inlet control valves.
Flashing off light gases and water lowers the feed vapour pressure to avoid flashing of the
crude before the control valves, which leads to maldistribution in the furnace.

Atmospheric Crude Fractionator

Flash Zone and Stripping Section


The flash zone pressure is set as low as possible to maximize vapourization, minimize flash
zone temperature, and reduce furnace duty while optimizing compression on the tower
overhead vapour stream. Flash zone pressure is determined by overhead condensation
pressure plus pressure drop in the tower. If the reflux drum operates at 5 psig, the pressure
drop across the overhead cooling system is 5 psid, and the pressure drop through the tower
is 5 psid, the flash zone will operate at 15 psig.
The temperature in the flash zone is a function of the onset of crude cracking, the pressure of
the flash zone, and the amount of stripping steam. With a maximum furnace outlet
temperature limit of 750°F, and transfer line temperature loss of about 10°F, the maximum
bulk temperature in the flash zone is about 740°F.
There is a 4 tray stripping section below the flash zone. Stripping steam is added to lower the
partial pressure of the hydrocarbon and increase vapourization at lower temperatures.
Traditionally, the amount of stripping steam has been 10 lb of steam per barrel of
atmospheric resid. However, stripping steam must be recovered as sour water and designs
to minimize stripping steam use 5 lb of steam per barrel of atmospheric resid.

Wash Section
The wash section consists of 3 to 4 trays above the flash zone and below the bottom gas oil
draw. The purpose of the wash section is to provide reflux to the vapours from the flash zone
to wash resins and materials that may contaminate the products. The reflux is the condensed
overflash vapour. Either sieve trays or grid are utilized.

Overhead System, Number of Trays and Pressure Profile in Tower

The overhead vapours from the tower are cooled and partially condensed by exchange with
cold feed followed by condensation with air fin or water condensers. The vapours if any are
directed from the overhead accumulator to the fuel system. Operating pressures have increased since the 1970s to be high enough to reduce noncondensable vapours to a minimum; the intent is to reduce the compression required on the overhead system. On the other hand,
high operating pressures decrease vapourization, increase flash zone temperatures and
furnace duty, and affect yields.
Pressures in the reflux drum may vary according to the design and be as low as 0.5 psig to
as high as 20 psig if the overhead vapour is totally condensed. This discussion will use a
reflux drum pressure of 5 psig as a basis.
The pressure drop across the overhead condensers is also variable but is on the order of 3 –
10 psid; this discussion will assume 5 psid.
Pressure drops across trays average 0.1 psid/tray to 0.2 psid/tray. This discussion will
assume a total of 32 trays above the flash zone including 4 wash trays, resulting in a 5 psid
pressure drop in the tower from the flash zone to the top of the column. Assumed are 6 trays
below the reflux to the naphtha draw, 6 trays below to the light distillate draw, 6 trays below
to the heavy distillate draw, and 10 trays below to the gas oil draw. There are 4 wash trays
below the gas oil draw. Trays may be either sieve trays or grid.
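
The pressure-profile arithmetic above (reflux drum pressure plus the condenser and tray pressure drops) is easy to generalise. The short Python sketch below reproduces it; the drum pressure, condenser drop, tray count and per-tray drop are the illustrative figures used in this discussion, not design data.

# Sketch of the flash-zone pressure estimate described above.  The inputs are
# the illustrative figures used in this discussion, not design data.

def flash_zone_pressure(drum_psig, condenser_dp_psid, n_trays, dp_per_tray_psid):
    """Flash-zone pressure (psig) = reflux drum pressure plus the pressure
    drops across the overhead condensing system and the trays above the
    flash zone."""
    tower_dp = n_trays * dp_per_tray_psid
    return drum_psig + condenser_dp_psid + tower_dp

# 32 trays at roughly 0.16 psid per tray gives the ~5 psid tower drop assumed here
p_flash = flash_zone_pressure(drum_psig=5.0, condenser_dp_psid=5.0,
                              n_trays=32, dp_per_tray_psid=5.0 / 32)
print("Estimated flash zone pressure: %.1f psig" % p_flash)   # about 15 psig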

Sidestreams

Liquid product from the overhead is straight run gasoline, a part of which is returned to the
tower as reflux for the top section of the tower. There is a pumparound on the fractionator
where liquid is taken from a draw tray and cooled and returned to the next tray down as
subcooled reflux. This not only reduces the overhead condensing load but achieves uniform
tower loadings by providing overflow at lower points in the tower since successive product
withdrawal reduces liquid overflow.
The cut point for each sidestream fraction is the final boiling point of the stream being
withdrawn. However, the liquid has a lighter component tail that must be removed from the
sidestream. Traditionally, the liquid product sidestream was directed to a stripper that used
live steam for stripping. Emphasis on reducing steam stripping and sour water has led to the
replacement of live steam injection with reboiled strippers in some instances.

Trends and Variations in Atmospheric Unit Design


Briefly summarized are several approaches to crude unit design that represent current
thinking about stream separation. Each offers distillation combinations to provide additional
separations, additional capacity or energy conservation:
• A combined gasoline and naphtha stream is desulfurized in one hydrotreating unit
before being routed to a splitter to separate the streams for further processing.
• Benzene is removed from the naphtha stream by a superfractionator after the
naphtha hydrotreater and before the catalytic reformer. Benzene goes overhead and
the naphtha bottoms is routed to the reformer.
• Three-step fractionation permits a crude unit capacity expansion with production of
LPG, isopentane, straight run and naphtha streams from the crude unit overhead.
• The Technip Progressive Distillation process minimizes energy consumption.
• Shell's Bulk CDU Integrated Processing provides for full integration of units in the
refinery.

Combined Gasoline and Naphtha Stream


One trend is the desulfurization of straight run gasoline and naphtha in one hydrotreater unit,
followed by a precise fractionation of the desulfurized materials at a cut point of about 180°F.
The overhead gasoline containing benzene and cyclohexane goes to the isomerization unit
where the benzene is hydrogenated to cyclohexane, cycloparaffins are isomerized to olefins,
and normal paraffins are isomerized to iso-paraffins for octane improvement. This is usually
performed in a two-reactor system. The splitter bottoms naphtha, which contains no benzene
or cyclohexane, goes to reforming. This is desirable if there is no facility for extraction or
hydrogenation of benzene to eliminate benzene from gasoline. Additionally, both gasoline
and naphtha are also desulfurized and the atmospheric tower is simplified. However, a larger
hydrotreater plus splitter are required.

Removal of Aromatics Before Reforming


To reduce the amount of aromatics to the reformer and ultimately to the gasoline stock,
benzene is removed from the naphtha stream by a superfractionator after the naphtha
hydrotreater and before the catalytic reformer.
Benzene goes overhead and the naphtha bottoms is routed to the reformer.

Three Step Fractionation

Three-step fractionation permits a crude unit capacity expansion with production of LPG,
isopentane, straight run gasoline, and naphtha streams from the crude unit overhead. A
combined stream of straight run gasoline and naphtha will be taken overhead on the crude
tower. The liquids are then fractionated in a splitter to make naphtha for reformer feed as the
splitter bottoms. The splitter overhead goes to a stabilizer. The overhead from the stabilizer
is LPG, which is directed to the saturates gas plant. The stabilizer bottoms goes to a de-
isopentanizer that produces isopentane overhead and straight run gasoline bottoms.

Progressive Distillation for Energy Reduction


In this arrangement, crude is first desalted then prefractionated in a reboiled preflash tower
with the bottoms being fractionated in a second preflash tower employing both a reboiler and
steam. The two preflash overheads are then processed in a gas plant and precision
fractionators to produce LPG, Light Naphtha, Medium Naphtha, and Heavy Naphtha. The
bottoms from the second preflash goes to a conventional heater and atmospheric column
arrangement that produces four distillate streams. Atmospheric tower bottoms go to
conventional vacuum distillation for production of gas oils and vacuum resid. This is a grass
roots design offered by Technip for minimization of energy consumption.

The Shell Bulk Crude Distillation (CDU) Process


This process combines and integrates crude distillation with hydrodesulfurization and high
vacuum separation. It also incorporates catalytic cracking, hydrocracking and visbreaking
with the separation processes. The basic concept is first to separate naphtha / gasoline
overhead and long resid bottoms from the distillate midfractions in a crude distillation. The
bulk middle distillate fraction is desulfurized and cut into distillate streams in a
superfractionator. Overhead naphtha / gasoline from the crude unit and streams from various
other plants are gathered and processed in light hydrocarbon and light oil fractionators to
make LPG, light gasoline and several naphtha fractions. Bottoms resid from the crude tower
is processed in a high vacuum unit that is integrated with cracking processes. The flow sheet
is quite complex and varies with each design.

Phase behavior and vapour/liquid equilibrium

Single Phase, Non-Saturated Equilibrium States


For the simplest cases, a two component mixture existing as either a superheated vapour or
a subcooled liquid, Gibbs's phase rule shows that now three independent properties must be
known to establish the intensive state of the system; i.e. temperature, pressure, and
concentration. Using the Patel-Teja equation of state modified for a mixture there are as
many as three numerical solutions at a given temperature, pressure, and composition, but
only the solutions using the compressibility corresponding to the vapour (largest positive real
Z) or liquid (smallest positive real Z) apply. Equations 2-8 through 2-22 and 2-39 through 2-
42 provide an equal number of equations and unknowns.

Vapour-Liquid Equilibrium and Single Phase, Saturated Equilibrium States

When two mixture phases are in equilibrium, Gibbs's phase rule requires only two
independent, intensive properties to define the state. For example, if a mixture exists as a
saturated liquid and vapour in equilibrium at a fixed temperature and pressure, Gibbs's phase
rule finds that the compositions in each phase are defined by the two independent properties
of temperature and pressure. However, mathematically, the Patel-Teja equation of state has
solutions at many vapour and liquid compositions for a given temperature and pressure.
Equating the fugacity for each component in the liquid to its respective fugacity in the vapour
provides the two additional equations that are needed to solve for the vapour phase
composition and the liquid phase composition.

f_i(vapour) = f_i(liquid)   for each component i                                      (2-45)
This equation allows the calculation of temperature composition diagrams and pressure
composition diagrams for two component mixtures. The Figure below displays a temperature
composition diagram constructed using the Patel-Teja equation of state and equation 2-45
for the ammonia-butane system at 20.7 bar (300 psi). The lower line represents the bubble
point temperature line.
For example, when a subcooled 50/50 liquid mixture of ammonia and butane is heated from
300K (point 1), the first bubble of vapour will form at just above 316 K (point 2) and the
mixture will be in vapour-liquid equilibrium. Upon further heating, the overall composition of
both the vapour and the mixture will remain 50/50, but the compositions of the liquid and
vapour phases will vary (see point 3). Finally, as the last drop of liquid is evapourated, the
50/50 mixture is now a saturated vapour (point 4). Further heating will superheat the vapour
(point 5). Point 4 lies on the dew point line so named because this is where the first drop of
condensation would form if it was approached via cooling from point 5. Note at a temperature
around 316 K, the equilibrium vapour and liquid phases are at the same composition (~0.82).
At this point, known as an azeotrope, the azeotropic mixture boils at a constant temperature
with constant vapour and liquid phase compositions (similar to a pure substance).
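
To make the bubble point and dew point ideas above concrete, the following minimal Python sketch finds a bubble-point temperature and vapour composition for an ideal binary. It deliberately uses Raoult's law with textbook Antoine constants for benzene-toluene rather than the Patel-Teja equation used in the text, because the ammonia-butane system is strongly non-ideal (it forms the azeotrope described above) and an ideal-solution model would not reproduce that figure; the structure of the calculation, however, is the same.

# Bubble-point sketch for an ideal binary using Raoult's law.  Antoine
# constants are standard textbook values (log10 P[mmHg] = A - B/(T[C] + C));
# an equation-of-state route such as Patel-Teja would replace psat_mmhg and
# the residual with a fugacity match, but the solution loop is unchanged.
ANTOINE = {"benzene": (6.90565, 1211.033, 220.790),
           "toluene": (6.95464, 1344.800, 219.482)}

def psat_mmhg(component, t_c):
    a, b, c = ANTOINE[component]
    return 10.0 ** (a - b / (t_c + c))

def bubble_point(x, p_mmhg, t_lo=0.0, t_hi=200.0):
    """Bubble-point temperature (deg C) and vapour composition for a liquid
    of mole fractions x = {component: x_i} at total pressure p_mmhg.
    Solves sum(x_i * Psat_i(T)) = P by bisection."""
    def residual(t):
        return sum(xi * psat_mmhg(name, t) for name, xi in x.items()) - p_mmhg
    for _ in range(60):
        t_mid = 0.5 * (t_lo + t_hi)
        if residual(t_mid) > 0.0:
            t_hi = t_mid
        else:
            t_lo = t_mid
    t_bub = 0.5 * (t_lo + t_hi)
    y = {name: xi * psat_mmhg(name, t_bub) / p_mmhg for name, xi in x.items()}
    return t_bub, y

t_bub, y = bubble_point({"benzene": 0.5, "toluene": 0.5}, p_mmhg=760.0)
print("Bubble point %.1f C, vapour composition %s" % (t_bub, y))
# Roughly 92 C with a benzene-enriched vapour (about 0.71 benzene).
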
It is important to note that there may be two compositions required when making vapour
liquid equilibrium (VLE) calculations. At a saturated temperature and pressure in a mixture
there are two cubic equations for the compressibility; one for the liquid and one for the
vapour. The liquid compressibility cubic equation is obtained by evaluating equations 2-39
through 2-41 using liquid compositions. The vapour compressibility cubic equation is
obtained by evaluating equations 2-39 through 2-41 using vapour compositions. The liquid's
compressibility is still the smallest positive real root but of the liquid compressibility equation.
The vapour's compressibility is the largest positive real root of the vapour compressibility
equation.
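
The root-selection rule just described is simple to apply numerically. The Python sketch below is an illustration rather than the course's own routine: it takes the coefficients of a generic cubic compressibility equation Z^3 + c2·Z^2 + c1·Z + c0 = 0 (which would come from equations 2-39 through 2-41 for a real mixture; the placeholder coefficients here are not from any real fluid) and returns the smallest and largest positive real roots as the liquid and vapour compressibilities.

import numpy as np

def select_phase_roots(c2, c1, c0, tol=1e-10):
    """Given a cubic compressibility equation Z**3 + c2*Z**2 + c1*Z + c0 = 0
    (coefficients from an equation of state such as Patel-Teja, not derived
    here), return (Z_liquid, Z_vapour): the smallest and largest positive
    real roots respectively."""
    roots = np.roots([1.0, c2, c1, c0])
    real_pos = sorted(r.real for r in roots
                      if abs(r.imag) < tol and r.real > 0.0)
    if not real_pos:
        raise ValueError("no positive real compressibility root")
    return real_pos[0], real_pos[-1]

# Illustrative coefficients only; a real case evaluates them from the EOS.
z_l, z_v = select_phase_roots(c2=-1.0, c1=0.2, c0=-0.005)
print("Z_liquid = %.4f, Z_vapour = %.4f" % (z_l, z_v))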

Figure 45 Temperature-Composition Diagram for Ammonia-Butane at 20.7 bar

Liquid-Liquid Equilibrium States


As shown above, a two component mixture in a single, sub-cooled liquid phase needs three
independent, intensive properties to define its equilibrium state. For example, in the
ammonia-water mixture at 4 bar and 260 K, an equilibrium state exists at all compositions.
For mixtures displaying only one liquid phase, the Patel-Teja equation of state has only one
solution for a given set of three independent, intensive properties. However, some mixtures
display liquid-liquid equilibrium (LLE) for which two liquid phases are in equilibrium with each
other. In such mixtures, the phase rule requires only two independent, intensive properties.
In other words, at a given temperature and pressure, the compositions of the two liquid
phases are actually dependent properties. For example, in the ammonia-butane system at
5.29 bar and 273.15 K, a two liquid phase equilibrium state exists in which the composition of
butane is 0.8526 mole fraction in one of the liquid phases and 0.106 mole fraction in the
other (Wilding, 1996). The equilibrium condition for liquid-liquid equilibrium is:

f_i(liquid 1) = f_i(liquid 2)   for each component i                                  (2-46)

Vapour-Liquid-Liquid Equilibrium States


When a mixture displaying liquid-liquid equilibrium is heated to its bubble point, a third phase,
vapour, comes into equilibrium with the two liquid phases. This point is known as the three
phase flash point. According to Gibbs's phase rule, this state is dictated by only one
independent, intensive property. The equilibrium condition for the three phase vapour-liquid-
liquid (VLLE) equilibrium state is:

f_i(vapour) = f_i(liquid 1) = f_i(liquid 2)   for each component i                    (2-47)
Calculations produce three different cubic equations for the compressibility since there are
three sets of compositions. The compressibility of each liquid is represented by the smallest
positive real root of each liquid's compressibility cubic equation while the compressibility of
the vapour is the largest positive real root of the vapour's cubic compressibility equation.

Figure 46 T-x-y Diagram for Ammonia-Butane at 20.7 bar

Figure 47 T-x-y Diagram for Ammonia-Butane at 4, 10, and 20.7 bar

Gas/Liquid separation
Recent Developments in Liquid/Gas Separation Technology
Removing liquids and solids from a gas stream is very important in refining and gas
processing applications. Effective removal of these contaminants can prevent costly
problems and downtime with downstream equipment like compressors, turbines, and
burners. In addition, hydrocarbons and solid contaminants can induce foaming in an amine
contactor tower and can contribute to premature catalyst changeouts in catalytic processes.
In compressors that use oil to lubricate cylinders, the lube oil often gets into the discharge
gas causing contamination downstream. A thin film of hydrocarbon deposited on heat
exchangers will thicken and coke, decreasing heat transfer efficiency, increasing energy
consumption and creating a risk of hot spots and leaks.
Several technologies are available to remove liquids and solids from gases. This section will first provide selection criteria for the following gas/liquid separation technologies:
• gravity separators
• centrifugal separators
• filter vane separators
• mist eliminator pads
• liquid/gas coalescers
and then focus on the separation of fine aerosols from gases using liquid/gas coalescing
technology.
• Removal Mechanisms
• Liquid/Gas Separation Technologies
• Formation of Fine Aerosols
• Ratings/Sizing
• Design and Its Impact on Sizing
• Field Testing For Liquid/Gas Coalescers
• Test Procedure
• Field Test Results
• Conclusions

Removal Mechanisms
Before evaluating specific technologies, it is important to understand the mechanisms used
to remove liquids and solids from gases. These can be divided into four different categories.2
The first and easiest to understand is gravity settling, which occurs when the weight of the droplets or particles (i.e. the gravitational force) exceeds the drag created by the flowing gas.
A related and more efficient mechanism is centrifugal separation which occurs when the
centrifugal force exceeds the drag created by the flowing gas. The centrifugal force can be
several times greater than gravitational force.

The third separation mechanism is called inertial impaction which occurs when a gas passes
through a network, such as fibers and impingement barriers. In this case, the gas stream
follows a tortuous path around these obstacles while the solid or liquid droplets tend to go in
straighter paths, impacting these obstacles. Once this occurs, the droplet or particle loses
velocity and/or coalesces, and eventually falls to the bottom of the vessel or remains trapped
in the fiber medium.
And finally, a fourth mechanism of separation occurs with very small aerosols (less than 0.1
µm). Called diffusional interception or Brownian Motion, this mechanism occurs when small
aerosols collide with gas molecules. These collisions cause the aerosols to deviate from the
fluid flow path around barriers increasing the likelihood of the aerosols striking a fiber surface
and being removed.3
Throughout this section, reference to droplet and particle sizes will be in the unit micron. One micron is 1/1000 of a millimeter or 39/1,000,000 of an inch. Figure 48 shows the sizes of various materials in microns.

Figure 48 Particle Diameters of Typical Contaminants

Liquid/Gas Separation Technologies


Gravity Separators
In a gravity separator or knock-out drum, gravitational forces control separation. The lower
the gas velocity and the larger the vessel size, the more efficient the liquid/gas separation.
Because of the large vessel size required to achieve settling, gravity separators are rarely
designed to remove droplets smaller than 300 microns.4 A knock-out drum is typically used
for bulk separation or as a first stage scrubber. A knock-out drum is also useful when vessel
internals are required to be kept to a minimum as in a relief system or in fouling service.5
Gravity separators are not recommended as the sole source of removal if high separation efficiency is required.
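
As a rough illustration of why gravity separators become large, the Python sketch below sizes a vertical knock-out drum from a Souders-Brown type allowable velocity, v_max = K·sqrt((rho_L - rho_V)/rho_V). The K factor and the example stream data are placeholders; real designs take K from vendor or company standards and add allowances for inlet devices and liquid surge.

import math

def max_gas_velocity(rho_liquid, rho_vapour, k_factor):
    """Souders-Brown style allowable superficial vapour velocity.  k_factor
    is empirical and service-specific (same length unit per second)."""
    return k_factor * math.sqrt((rho_liquid - rho_vapour) / rho_vapour)

def drum_diameter(gas_flow_ft3_s, rho_liquid, rho_vapour, k_factor_ft_s=0.25):
    """Minimum vertical knock-out drum diameter (ft) keeping the gas below
    the allowable velocity.  The default K of 0.25 ft/s is a placeholder."""
    v_max = max_gas_velocity(rho_liquid, rho_vapour, k_factor_ft_s)
    area = gas_flow_ft3_s / v_max
    return math.sqrt(4.0 * area / math.pi)

# Example: 30 ft3/s of gas at 1.0 lb/ft3 disengaging from a 45 lb/ft3 liquid
print("Drum diameter ~ %.1f ft" % drum_diameter(30.0, 45.0, 1.0))
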
Centrifugal Separators
In centrifugal or cyclone separators, centrifugal forces can act on an aerosol at a force
several times greater than gravity. Generally, cyclonic separators are used for removing
aerosols greater than 100 µm in diameter and a properly sized cyclone can have a
reasonable removal efficiency of aerosols as low as 10 µm. A cyclone’s removal efficiency is
very low on mist particles smaller than 10µm.6 Both cyclones and knock-out drums are
recommended for waxy or coking materials.
Mist Eliminators

The separation mechanism for mist eliminator pads is inertial impaction. Typically, mist
eliminator pads, consisting of fibers or knitted meshes, can remove droplets down to 1-5
microns but the vessel containing them is relatively large because they must be operated at
low velocities to prevent liquid reentrainment.
Filter Vane Separators
Vane separators are simply a series of baffles or plates within a vessel. The mechanism
controlling separation again is inertial impaction. Vane separators are sensitive to mass
velocity for removal efficiency, but generally can operate at higher velocities than mist
eliminators, mainly because a more effective liquid drainage reduces liquid reentrainment.
However, because of the relatively large paths between the plates constituting the tortuous
network, vane separator can only remove relatively large droplet sizes (10 microns and
above). Often, vane separators are used to retrofit mist eliminator pad vessels when gas
velocity exceeds design velocity.7
Liquid/Gas Coalescers
Liquid/gas coalescer cartridges combine features of both mist eliminator pads and vane
separators, but are usually not specified for removing bulk liquids. In bulk liquid systems, a
high efficiency coalescer is generally placed downstream of a knock-out drum or
impingement separator. Gas flows through a very fine pack of bound fibrous material with a
wrap on the outer surface to promote liquid drainage (see Figure 49 below). A coalescer
cartridge can trap droplets down to 0.1 micron. When properly designed and sized, drainage
of the coalesced droplets from the fibrous pack allows gas velocities much higher than in the
case of mist eliminator pads and vane separators with no liquid reentrainment or increase in
pressure drop across the assembly.

Figure 49 Coalescer Cut-away View
Table 3 summarizes each of these technologies and provides guidelines for proper selection.
As you can see, for systems containing very fine aerosols, under 5 µm, a coalescer should
be selected. Removing very fine aerosols from gases results in major economic, reliability,
and maintenance benefits in compressor systems.

Table 3 Types of Liquid/Gas Separators

Technology                        Droplet Size Removed
Gravity Separator                 Down to 300 µm
Centrifugal Separator             Down to 8-10 µm
Mist Eliminator Pad               Down to 10 µm
Vane Separator                    Down to 10 µm
High Efficiency L/G Coalescer     Down to 0.1 µm

Formation of Fine Aerosols


There are several different ways that very fine liquid aerosols can get into a gas stream:
• Condensation from a saturated vapour,
• Atomization (spray effect through a flow restriction), and
• Liquid reentrainment.
Recent studies on aerosol size distribution in a natural gas stream have identified that
significant quantities of droplets below 5 microns are the norm whenever choke valves and
other restrictions are present9 or when vapours are at their dew points.10 The
measurements shown in Figure 50 were performed to determine the concentration of liquid aerosols in a natural gas stream sampled downstream of vane separators (combination of
gravity separator and horizontal filter barrier and equivalent to a mist eliminator pad). Results
show that in many cases, large quantities of aerosols can go through this type of separator
because the droplets are too small to be trapped by these separation devices. As a result, a
liquid/gas coalescer should be the technology of choice whenever high recovery rates are
required to protect downstream equipment or to recover valuable liquid products.

Figure 50 Aerosol Sizes

Ratings/Sizing

It is important to note that a coalescer is different from a filter in that it performs both filtration
of fine solid particles and coalescence of liquid aerosols from a gas stream. The sizing and
rating criteria for coalescers, as it pertains to liquids removal, is very critical to the ultimate
performance of the coalescer. An undersized coalescer will result in continuous liquid
reentrainment, very low liquid separation efficiency and will be vulnerable to any process
changes. The critical nature of coalescer sizing is illustrated in Figure 4 which shows that
coalescer performance can drop very rapidly once the coalescer is challenged by too much
liquid (either because of high aerosol concentration in the gas stream or because of a high
gas flowrate). This marks a dramatic departure from most other separation equipment whose
performance gradually diminishes as it is pushed passed its rated maximum.

Figure 51 Coalescer Efficiency Change vs. Gas Flow Rate


The traditional means of coalescer performance validation is the DOP (dioctyl phthalate) test.11
In this test, a monodispersed aerosol of 0.3 µm diameter is continuously generated by a
condensation of DOP vapour under controlled conditions. When aerosol generation is
stabilized (constant particle size and aerosol concentration), the concentration of DOP is
measured upstream and downstream of the coalescer by a light scattering photometer.
Results are expressed as a percent of DOP penetration at the flow rate used.
Some major drawbacks of the DOP test include:
1. The test is performed on a dry or unsaturated cartridge. A dry cartridge, in
essence, acts like a sponge, absorbing any liquid which goes through it. What
the DOP test does not measure is the coalescer’s ability to retain liquids when
liquids saturate the coalescer medium and could be re-entrained downstream.
2. This leads to a second drawback; the pressure drop measured across the
assembly is underestimated when compared with actual pressure drops
across a saturated element. The saturated DP is approximately 2-4 times
greater than the clean DP.
3. The test is performed under a partial vacuum where gas properties (density
and viscosity) are very different from those prevailing at actual operating
pressure. DOP test conditions tend to overstate the efficiency of the coalescer
element.
In order to avoid shortcomings of the DOP test, Pall has developed the Liquid Aerosol
Separation Efficiency (LASE) test. This test was developed solely for the purpose of
measuring coalescer performance in a compressed gas stream under conditions more
similar to those found in a refinery or a gas processing plant. The system used for this test is
schematically represented in Figure 5.


Figure 52 Liquid Aerosol Separation Efficiency Test Schematic

The LASE test differs from the DOP test in the following ways:
1. It gives a more accurate and meaningful measure of efficiency. The DOP
efficiency essentially tells you what percent of 0.3 µm dioctylphthalate droplets
will be removed by a dry coalescer; the LASE test tells you what ppmw of
contaminants will be in the gas downstream of the coalescer. In other words,
what the LASE test tells you is how much contaminant your downstream
equipment will be exposed to.
2. The DOP uses monodispersed (i.e. same-sized) droplets of DOP, a liquid not commonly encountered in gas processing or refinery gas streams; the LASE
test uses a lube oil which has droplet sizes that range from 0.1-0.9 µm.
3. The LASE test more closely simulates process conditions, by being run on a
saturated cartridge and being performed under positive pressure.
Table 4 Comparison of the DOP and LASE tests

Design and Its Impact on Sizing
The goal for improving coalescer design is to maximize efficiency while preventing liquid
reentrainment. Reentrainment occurs when liquid droplets accumulated on a coalescer
element are carried off by the exiting gas. This occurs when the velocity of the exiting gas, known as the annular velocity, creates a drag that exceeds the gravitational force on the draining droplet.
We earlier discussed the importance of correct coalescer sizing. In designing and sizing a
coalescer, the following parameters must be taken into account:
• Gas velocity through the media,
• Annular velocity of gas exiting the media,
• Solid and liquid aerosol concentration in the inlet gas, and
• Drainability of the coalescer
Each of these factors, with the exception of the inlet aerosol concentration, can be controlled.
At a constant gas flow rate, media velocity can be controlled by either changing the
coarseness of the medium’s pore structure or by increasing or decreasing the number of
cartridges used. The coarser the medium, however, the less efficient the coalescer will be at
removing liquid.
At a constant gas flow rate, the exiting velocity of the gas can be controlled by increasing or
decreasing the size of the vessel or the space between the cartridges.
Drainage can be improved by either selecting low surface energy coalescer materials or by
treating the coalescer medium with a chemical that lowers the surface energy of the medium
to a value lower than the surface tension of the liquid to be coalesced.13 Having a low
surface energy material prevents liquid from wetting the filter medium and accelerates
drainage of liquids down along the medium’s fibers. The liquid coalesced on the fibrous
material falls rapidly through the network of fibers without accumulating in the pores where it
would otherwise be pushed through by the gas and be reentrained. Figure 53 shows the effect that a chemical treatment can have on a coalescer. It shows that the maximum flowrate of a
chemically treated cartridge is more than twice that of a similar cartridge that is not treated.

Figure 53 Effect of Chemical Treatment on Coalescer Performance
One can conclude from these design parameters that a large housing with a large number of
cartridges that have very fine pores would easily eliminate any liquid problems you may
encounter in a gas stream. Obviously, the costs associated with such a vessel are very high. As vessel size and cartridge quantity are reduced, the probability of reentrainment and of poorer removal efficiency rises. In addition, as the assembly size decreases, the pressure drop
increases which can result in increased operating costs. So, an optimization is required.
When evaluating a coalescer assembly, make sure that all of these parameters are taken
into consideration when the assembly is sized. A coalescer is best used in conjunction with a
knock-out drum or other impingement separator.

Field Testing For Liquid/Gas Coalescers


Field testing a gas stream where liquids need to be removed can provide the following
information:
1. the amount of liquid in the gas,
2. the ability to efficiently coalesce liquids, and
3. the amount of solid particulate matter present.
As a result, accurate sampling becomes critical. It is very important to measure accurately
gas flow rates through a test coalescer cartridge to determine the amount and the nature of
the liquid present in the gas.
For that purpose, a complete test kit has been designed to perform side stream liquid/gas
coalescer testing. This test kit is shown in Figure 54. It includes: (1) a coalescer housing for
one cartridge connected to an independent sump by a small ball valve; (2) an orifice
flowmeter downstream of the coalescer housing that includes flanges, orifice plate and
differential pressure gauge; (3) a needle valve to regulate the flow of gas through the
coalescer housing; (4) two sample ports, upstream and downstream of the coalescer
housing, to which two of the gas test kits can be hooked up simultaneously to analyze
influent and effluent gas quality; and (5) two long flexible stainless steel hoses connecting the
test kit to the main gas line and the discharge line.

Figure 54 Schematic of Pall LG Coalescer Test Stand

Test Procedure
Before going on-site for a field test, the plant is contacted to obtain system conditions
(pressure, temperature, gas flow rate, type of gas and if possible liquid concentration in the
gas stream). Based on this information, an orifice plate is selected to measure gas flow rates
in the range indicated. The orifice is also selected to minimize pressure drop so that gas
condensation and hydrate formation is not induced.
After putting the side stream test kit on-line, the flow rate is adjusted below the critical flow rate so that coalesced liquid is not reentrained. Once the coalescer cartridge is saturated, test membranes are inserted in the test jigs upstream and downstream of the coalescer housing, the sump is emptied of any liquid that may have accumulated during the cartridge saturation period, and the actual test begins.
At the end of the test, the volume of liquid accumulated in the sump is measured and collected
in a sample bottle for subsequent lab analysis. Test membranes are also collected to
determine the amount of solids suspended in the gas and for qualitative identification of the
solid contaminants. Liquid aerosol concentration is determined from the amount of liquid
coalesced and the quantity of gas sampled.
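
That last step is a simple mass ratio. The following sketch converts the collected liquid volume and the metered gas into a ppmw aerosol concentration; the liquid and gas density figures in the example are placeholders.

def aerosol_ppmw(liquid_ml, liquid_density_g_ml, gas_volume_m3, gas_density_kg_m3):
    """Liquid aerosol concentration in ppm by weight from the liquid collected
    in the coalescer sump and the quantity of gas sampled."""
    liquid_kg = liquid_ml * liquid_density_g_ml / 1000.0
    gas_kg = gas_volume_m3 * gas_density_kg_m3
    return 1.0e6 * liquid_kg / gas_kg

# e.g. 12 ml of oil (0.87 g/ml) collected while metering 1500 m3 of gas at an
# actual density of 8 kg/m3 (placeholder figures) -> about 0.9 ppmw
print("%.2f ppmw" % aerosol_ppmw(12.0, 0.87, 1500.0, 8.0))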

Field Test Results


The results of field tests on 49 gas streams (natural gas, carbon dioxide, hydrogen and fuel
gas) in both gas processing plants and refineries show that significant quantities of liquid are
present in most gas streams. Figure 55 summarizes the results of these tests. Of the 49 streams
tested, over 85% (43 out of 49 tests) had liquids concentration greater than 1 ppmw. This
concentration of liquid can result in significant rotating equipment problems and can
contribute to poor process operations in an amine contacting unit.

Figure 55 Field Test Results of Gas Streams in Refineries and Gas Processing Plants

Conclusions
1. Selecting gas/liquid separation technologies requires not only knowledge of the process conditions, but a knowledge of the characteristics of the liquid contaminants. Selection should be made based on droplet size, concentration, and whether the liquid has waxing or fouling tendencies.
2. Through an analysis of field data, it was shown that due to the presence of very fine liquid droplets (below 1 micron) in most gas processes, high efficiency liquid/gas coalescers should be recommended whenever high recovery rates are required to protect downstream equipment or to recover valuable liquids.
3. The sizing and design of a coalescer is of critical importance. Once a coalescer is challenged with too much liquid, either because of excessive aerosol concentrations or large gas flow rates, its efficiency will decrease rapidly.
4. The Liquid Aerosol Separation Efficiency (LASE) test is a meaningful performance test of liquid/gas coalescers, as it allows coalescer cartridges to be tested under conditions closely resembling actual operating conditions (saturated element, realistic pressure drop, and realistic gas density and viscosity).
5. A surface treatment of the coalescer medium improved liquid drainage in the fibrous materials and decreased by 50% the number of cartridges required to handle a given flow.
6. Field testing has demonstrated that significant amounts of liquids are present in gas streams in refineries and gas processing plants.

Industrial uses of Fractional Distillation
Distillation is the most common form of separation technology in the chemical industry. In
most chemical processes, distillation is operated as a continuous, steady-state process because batch fractionation is usually less economical. New feed is always being added to the distillation column
and products are always being removed. Unless the process is disturbed due to changes in
feed, heat, ambient temperature, or condensing, the amount of feed being added and the
amount of product being removed are normally equal. This is known as continuous, steady-
state fractional distillation.
The most widely used industrial applications of continuous, steady-state fractional distillation
are in petroleum refineries, petrochemical plants and natural gas processing plants.

Figure 56 Typical distillation towers in oil refineries


Industrial distillation is typically performed in large, vertical cylindrical columns known as
"distillation towers" or "distillation columns" with diameters ranging from about 65 centimeters
to 6 meters and heights ranging from about 6 meters to 60 meters or more. The distillation
towers have liquid outlets at intervals up the column which allow for the withdrawal of
different fractions or products having different boiling points or boiling ranges. The "lightest"
products (those with the lowest boiling point) exit from the top of the columns and the
"heaviest" products (those with the highest boiling point) exit from the bottom of the column.
Large-scale industrial towers also use reflux to achieve more complete separation of
products.
Fractional distillation is also used in air separation, producing liquid oxygen, liquid nitrogen,
and high purity argon. Distillation of chlorosilanes also enables the production of high-purity
silicon for use as a semiconductor.
In industrial uses, sometimes a packing material is used in the column instead of trays,
especially when low pressure drops across the column are required, as when operating
under vacuum. This packing material can either be random dumped packing (1-3" wide) or
structured sheet metal. Typical manufacturers include Koch, Sulzer and others.
Liquids tend to wet the surface of the packing and the vapours pass across this wetted
surface, where mass transfer takes place. Unlike conventional tray distillation in which every
tray represents a separate point of vapour liquid equilibrium, the vapour liquid equilibrium
curve in a packed column is continuous. However, when modeling packed columns it is
useful to compute a number of "theoretical stages" to denote the separation efficiency of the
packed column with respect to more traditional trays. Differently shaped packings have
different surface areas and void space between packings. Both of these factors affect
packing performance.

Trays: function, pressure drop, efficiency, flooding, operations, and
damage

Trays and Plates

The terms "trays" and "plates" are used interchangeably.


There are many types of tray designs, but the most common ones are:
Bubble cap trays
A bubble cap tray has a riser or chimney fitted over each hole,
and a cap that covers the riser. The cap is mounted so that
there is a space between riser and cap to allow the passage
of vapour. Vapour rises through the chimney and is directed
downward by the cap, discharging through slots in the cap, and finally bubbling through the liquid on the tray.
Valve trays

In valve trays, perforations are covered by liftable caps. Vapour flow lifts the caps, creating a flow area for the passage of vapour. The lifting cap directs the
vapour to flow horizontally into the liquid, thus providing better mixing than is possible
in sieve trays.

Figure 57 Valve trays (photos courtesy of Paul Phillips)

Sieve trays
Sieve trays are simply metal plates with holes in them. Vapour passes straight
upward through the liquid on the plate. The arrangement, number and size of the
holes are design parameters.

Because of their efficiency, wide operating range, ease of maintenance and cost factors, sieve and valve trays have replaced the once highly regarded bubble cap trays in many applications.

Liquid and Vapour Flows in a Tray Column


The next few figures show the direction of vapour and liquid flow
across a tray, and across a column.

Figure 58 Vapour & Liquid Flow across Column/Tray

Each tray has 2 conduits, one on each side, called ‘downcomers’. Liquid falls through the
downcomers by gravity from one tray to the one below it. The flow across each plate is
shown in the above diagram on the right.

A weir on the tray ensures that there is always some liquid (holdup) on the tray and is
designed such that the holdup is at a suitable height, e.g. such that the bubble caps are
covered by liquid.

Being lighter, vapour flows up the column and is forced to pass through the liquid, via the
openings on each tray. The area allowed for the passage of vapour on each tray is called the
active tray area.

The picture on the left is a photograph of a section of a pilot
scale column equipped with bubble capped trays. The tops of
the 4 bubble caps on the tray can just be seen. The down-
comer in this case is a pipe, and is shown on the right. The
frothing of the liquid on the active tray area is due to both
passage of vapour from the tray below as well as boiling.

As the hotter vapour passes through the liquid on the tray above, it transfers heat to the
liquid. In doing so, some of the vapour condenses adding to the liquid on the tray. The
condensate, however, is richer in the less volatile components than is the vapour.
Additionally, because of the heat input from the vapour, the liquid on the tray boils,
generating more vapour. This vapour, which moves up to the next tray in the column, is
richer in the more volatile components. This continuous contacting between vapour and
liquid occurs on each tray in the column and brings about the separation between low boiling
point components and those with higher boiling points.

Tray Designs
A tray essentially acts as a mini-column, each accomplishing a fraction of the separation task. From this we can deduce that the more trays there are, the better the degree of separation, and that overall separation efficiency will depend significantly on the design of the tray. Trays are designed to maximise vapour-liquid contact by considering the liquid distribution and the vapour distribution on the tray. This is because better vapour-liquid contact means better separation at each tray, translating to better column performance. Fewer trays will be required to achieve the same degree of separation. Attendant benefits include less energy usage and lower construction costs.

Figure 59 Liquid distributors - Gravity (left), Spray (right)(photos courtesy of Paul Phillips)

Packings

There is a clear trend to improve separations by supplementing the use of trays with packings. Packings are passive devices that are designed to increase the interfacial area
for vapour-liquid contact. The following pictures show 3 different types of packings.

Figure 60 Tray Packings

These strangely shaped pieces are supposed to impart good vapour-liquid contact when a
particular type is placed together in numbers, without causing excessive pressure-drop
across a packed section. This is important because a high pressure drop would mean that
more energy is required to drive the vapour up the distillation column.

Figure 61 Structured packing (photo courtesy of Paul Phillips)

Packings versus Trays


A tray column that is facing throughput problems may be de-bottlenecked by replacing a
section of trays with packings. This is because:

• packings provide extra inter-facial area for liquid-vapour contact

• efficiency of separation is increased for the same column height

• packed columns are shorter than trayed columns


Packed columns are called continuous-contact columns while trayed columns are called
staged-contact columns because of the manner in which vapour and liquid are contacted.

Tower Capacity:
Factors, calculation, modification

Equipment and Column Sizing

In order to have stable operation in a distillation column, the vapour and liquid flows must be managed. Requirements are:

• vapour should flow only through the open regions of the tray between the downcomers
• liquid should flow only through the downcomers
• liquid should not weep through tray perforations
• liquid should not be carried up the column entrained in the vapour
• vapour should not be carried down the column in the liquid
• vapour should not bubble up through the downcomers

These requirements can be met if the column is properly sized and the tray layouts correctly determined.

Tray layout and column internal design is quite specialized, so final designs are
usually done by specialists; however, it is common for preliminary designs to be
done by ordinarily superhuman process engineers. These notes are intended to give
you an overview of how this can be done, so that it won't be a complete mystery
when you have to do it for your design project.

Basically, in order to get a preliminary sizing for your column, you need to obtain values for:

• the tray efficiency
• the column diameter
• the pressure drop
• the column height

Tray Construction & Hydraulics


Three main types of trays are to be discussed:

• Bubble Cap Trays
• Sieve Trays
• Valve Trays

Typically, the liquid flow between trays is governed by a weir on each tray. The flow depends
on the length of the weir and how high the liquid level on the tray is above the weir. The
Francis weir equation is one example of how the flow off a tray may be modeled.

Tray Efficiency

Ideally, tray efficiencies are determined by measurements of the performance of actual trays
separating the materials of interest; however, this is usually not practical in the early phases
of a design. Consequently, some form of estimation is required. Estimates can be based on
theory or on data collected from other columns.

The O'Connell correlation is based on data collected from actual columns. It is based on
bubble cap trays and is conservative for sieve and valve trays. It correlates the overall
efficiency of the column with the product of the feed viscosity and the relative volatility of the
key component in the mixture. These properties should be determined at the arithmetic mean
of the column top and bottom temperatures. A fit of the data has been determined:

This, or a similar data set, can be used to get preliminary estimates of efficiency numbers.
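
Since the fitted curve itself is not reproduced here, the Python sketch below uses one widely quoted fit of the O'Connell data, Eo = 0.492·(mu·alpha)^-0.245 with the viscosity in cP; treat it as a stand-in of the same form as the course's own fit rather than as that fit, and the example numbers as placeholders.

import math

def oconnell_efficiency(viscosity_cp, relative_volatility):
    """Overall column efficiency (fraction) from a commonly quoted fit of the
    O'Connell data: Eo = 0.492 * (mu * alpha)**-0.245, viscosity in cP, both
    properties taken at the mean of the top and bottom temperatures."""
    return 0.492 * (viscosity_cp * relative_volatility) ** -0.245

eo = oconnell_efficiency(viscosity_cp=0.3, relative_volatility=2.0)
n_actual = math.ceil(20 / eo)          # actual trays for 20 ideal stages
print("Overall efficiency ~ %.0f%%, %d actual trays for 20 ideal stages" % (100 * eo, n_actual))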

Column Diameter
Column diameter is found based on the constraints imposed by flooding. The number of ideal
stages isn't needed to find the diameter -- only the vapour and liquid loads. You do need the
number of actual stages to get the column height.

Before beginning a diameter calculation, you want to know the vapour and liquid rates
throughout the column. You then do a diameter calculation for each point where the loading
might be an extreme: the top and bottom trays; above and below feeds, sidedraws, or heat
addition or removal; and any other places where you suspect peak loads.

Once you've calculated these diameters, you select one to use for the column, then check it
to make sure it will work. Some columns will have two sections with different diameters --
consider this possibility if you end up with regions where the estimated diameter varies by
20% or more, but realize it will be more expensive than a column that is the same all the way
up.

One issue that ought to be considered is the validity of your design numbers. If you are
following the "traditional" approach, you've probably designed your column for reflux rates in
the range of 1.1 to 1.2 times the minimum. This may not give you a column that can handle
"upsets" well, so you may want to design for a capacity slightly greater than that -- increasing
the flows by about 20% might be wise.

Flooding

Downcomer flooding occurs when liquid backs up on a tray because the downcomer area is too small. This is not usually a problem. More worrisome is entrainment flooding, caused by
too much liquid being carried up the column by the vapour stream.

A number of correlations and techniques exist for calculating the flooding velocity; from this,
the active area of the column is calculated so that the actual velocity can be kept to no more
than 80-85% of flood; values down to 60% are sometimes used.

A force balance can be made on droplets entrained by the vapour stream (which can lead to
entrainment flooding). This balance yields an expression relating the vapour and liquid
densities and a capacity factor (C, with velocity units) to the flooding velocity:

Capacity Factors

The capacity factor can be determined from theory (it depends on droplet diameter, drag
coefficient, etc.), but is usually obtained from correlations based on experimental data from
distillation tray tests. Depending on the correlation used, C may include the effects of surface
tension, tendency to foam, and other parameters.

A common correlation is one proposed by Fair in the late 50s - early 60s. The version for
sieve trays is available in a wide range of sources (including Figure 21.28 of MSH). The
correlation takes the form of a plot of a capacity factor (which must be corrected for surface
tension) vs. a functional group based on the liquid to vapour mass ratio:

Enter the plot from the bottom with this number, and then read the capacity factor from the
left. This capacity factor applies to nonfoaming systems and trays meeting certain hole and
weir size restrictions. It will need to be corrected for surface tension:

where the surface tension is in dynes/cm.

Other correlations for the capacity factor are also available. Several are based on more
recent information, and may well be more accurate than the Fair plot; however, they also
tend to be less broadly known and often require more a priori information on the system. You
should use a correlation that is acceptable for your problem.

Diameter

Once you have the capacity factor, you can readily solve for the flooding velocity:

(this solution is for the Fair correlation, and adds the surface tension correction).

We know that flow=velocity*area, so we can calculate the flow area from the known vapour
flow rate and the desired velocity (a fraction of flood). This area needs to be increased to
account for the downcomer area which is unavailable for mass transfer. The resulting tray
area can then be used to calculate the column diameter. So, with everything lumped
together, we have:

The only "new" term is the ratio of downcomer area to tray area. This should probably never
be less than 0.1, and probably seldom will be greater than 0.2.
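
Pulling the pieces above together, the Python sketch below goes from a capacity factor read off a Fair-type chart to a preliminary diameter: apply the surface tension correction C = Csb·(sigma/20)^0.2, compute the flooding velocity u_f = C·sqrt((rho_L - rho_V)/rho_V), design at a fraction of flood, add the downcomer allowance and convert area to diameter. The chart lookup itself is not reproduced, so Csb and the stream data are placeholder inputs.

import math

def column_diameter(vapour_mass_flow_lb_s, rho_v, rho_l, csb_ft_s,
                    surface_tension_dyn_cm=20.0, frac_flood=0.8,
                    downcomer_frac=0.1):
    """Preliminary column diameter (ft) from a Fair-type flooding calculation.
    csb_ft_s is the capacity factor read from the flooding chart; all the
    numerical inputs below are placeholders."""
    c = csb_ft_s * (surface_tension_dyn_cm / 20.0) ** 0.2   # surface tension correction
    u_flood = c * math.sqrt((rho_l - rho_v) / rho_v)        # flooding velocity, ft/s
    u_design = frac_flood * u_flood                         # run at a fraction of flood
    net_area = (vapour_mass_flow_lb_s / rho_v) / u_design   # ft2 available to vapour
    tray_area = net_area / (1.0 - downcomer_frac)           # add back the downcomer area
    return math.sqrt(4.0 * tray_area / math.pi)

# 25 lb/s of vapour at 0.17 lb/ft3 against a 42 lb/ft3 liquid, Csb = 0.28 ft/s
print("Diameter ~ %.1f ft" % column_diameter(25.0, 0.17, 42.0, 0.28))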

Trays probably aren't a good idea for columns less than about 1.5 ft in diameter (you can't
work on them) -- these are normally packed. Packing is less desirable for large diameter
columns (over about 5 ft in diameter).

Pressure Drop

There is a pressure gradient through the column -- otherwise the vapour wouldn't flow. This
gradient is normally expressed in terms of a pressure drop per tray, usually on the order of
0.10 psi.

The best source of pressure drop information is to measure the actual drop between trays,
but this isn't always feasible at the beginning of a design. Detailed calculations are possible,
but these depend so much on the actual tray specifications that final values are usually obtained from experts; however, approximate methods can be used to get values to put in your design basis.

There are two main components to the pressure drop: the "dry tray" drop caused by
restrictions to vapour flow imposed by the holes and slots in the trays and the head of the
liquid that the vapour must flow through.

Dry Tray Losses

The dry tray head loss can be related to an orifice flow equation:

This equation determines the dry tray drop in inches of fluid (your text has a similar equation
in SI units). The constant 0.186 takes care of the units and is appropriate for sieve trays. The
orifice size coefficient Co depends on the tray configuration and will usually fall between 0.65
and 0.85. The hole velocity can be obtained by dividing the vapour flow rate by the total hole
area of the tray.
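
A minimal sketch of that dry-drop calculation, using the orifice form implied above (h_d = 0.186·(rho_V/rho_L)·(u_h/Co)^2); the hole velocity, densities and Co in the example are placeholder values.

def dry_tray_drop_in(hole_velocity_ft_s, rho_v, rho_l, c_o=0.75):
    """Dry tray head loss in inches of liquid for a sieve tray:
    h_d = 0.186 * (rho_v / rho_l) * (u_h / Co)**2.
    Co (0.65-0.85) depends on the tray geometry; 0.75 is a mid-range guess."""
    return 0.186 * (rho_v / rho_l) * (hole_velocity_ft_s / c_o) ** 2

# Vapour at 0.17 lb/ft3 through the holes at 35 ft/s, liquid at 42 lb/ft3
print("Dry tray drop ~ %.1f in of liquid" % dry_tray_drop_in(35.0, 0.17, 42.0))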

Liquid Losses

The liquid head pressure drop includes the effects of surface tension and of the frothing on
the tray. It is typically represented as the product of an aeration factor and the height of liquid
on the tray:

Correlations are available for the aeration factor (beta); a value of 0.6 is good for a wide
variety of situations.

The height of liquid on the tray is the sum of the weir height and the height of liquid over the
weir. The total height can be calculated directly from the volume of liquid on the tray and its
active area. Another approach is to back the height out of a version of the Francis weir
equation (which relates flow off a tray to liquid height and weir length). One version, for a
straight weir, in units of inches and gal/min is:

Realize that these equations depend on the size and shape of the weir.
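
The sketch below combines the aeration factor with a common form of the Francis weir equation for a straight weir (h_ow in inches = 0.48·(q/L_w)^(2/3) with q in gal/min and L_w in inches); the flows and weir dimensions in the example are placeholders, and a different weir shape would need a different constant.

def height_over_weir_in(liquid_gpm, weir_length_in):
    """Clear liquid height over a straight weir (inches) by the Francis weir
    equation with flow in gal/min and weir length in inches."""
    return 0.48 * (liquid_gpm / weir_length_in) ** (2.0 / 3.0)

def liquid_head_drop_in(liquid_gpm, weir_length_in, weir_height_in, beta=0.6):
    """Pressure drop through the aerated liquid, in inches of liquid:
    beta * (weir height + height over weir), with beta ~ 0.6 as above."""
    return beta * (weir_height_in + height_over_weir_in(liquid_gpm, weir_length_in))

# 250 gal/min over a 60 in long, 2 in high weir -> about 1.9 in of liquid
print("Liquid head drop ~ %.1f in" % liquid_head_drop_in(250.0, 60.0, 2.0))

Adding the roughly 1.6 in dry drop from the previous sketch gives about 3.5 in of liquid per tray, or on the order of 0.09 psi for a 42 lb/ft³ liquid, which is consistent with the 0.10 psi per tray rule of thumb quoted earlier.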

Column Height

The height of a trayed column is calculated by multiplying the number of (actual) stages by
the tray separation. Tray spacing can be determined as a cost optimum, but is usually set by
mechanical factors. The most common tray spacing is 24 inches -- it allows enough space to
work on the trays whenever the column is big enough around (>5 ft diameter) that workers
must crawl inside. Smaller diameter columns may be able to get by with 18 inch tray
spacings.

In addition to the space occupied by the trays, height is needed at the top and bottom of the
column. Space at the top -- typically an additional 5 to 10 ft -- is needed to allow for
disengaging space.

The bottom of the tower must be tall enough to serve as a liquid reservoir. Depending on
your boss's feelings about keeping inventory in the column, you will probably design the base
for about 5 minutes of holdup, so that the total material entering the base can be contained
for at least 5 minutes before reaching the bottom tray.

The total of height added to the top and bottom will usually amount to about 15% or so added
to that required by the trays.
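
A rough height estimate following the rules of thumb above (actual trays times spacing, plus a top disengaging allowance and a base sized for a few minutes of liquid holdup); every number in the example is a placeholder.

import math

def column_height_ft(n_ideal, overall_efficiency, tray_spacing_ft=2.0,
                     top_allowance_ft=8.0, base_holdup_min=5.0,
                     bottoms_flow_ft3_min=50.0, diameter_ft=8.0):
    """Rough trayed-column height: actual trays * spacing, plus top
    disengaging space, plus a base section holding base_holdup_min minutes
    of the liquid flow entering the base."""
    n_actual = math.ceil(n_ideal / overall_efficiency)
    tray_section = n_actual * tray_spacing_ft
    base_height = (base_holdup_min * bottoms_flow_ft3_min) / (math.pi * diameter_ft ** 2 / 4.0)
    return tray_section + top_allowance_ft + base_height

# 20 ideal stages at 56% overall efficiency in an 8 ft column -> roughly 85 ft
print("Estimated height ~ %.0f ft" % column_height_ft(20, 0.56))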

You rarely will see a real tower that is more than about 175 ft. tall. Tall, skinny towers are not
a good idea, so watch the height/diameter ratio. You generally want to keep it less than 20 or
30. If your tower ends up exceeding these values, you probably want to look at a redesign,
maybe by reducing the tray spacing, or splitting the tower into two parts.

Absorption & Adsorption

Separation > Absorption

Absorption processes are employed to recover valuable light components such as propane/propylene and butane/butylene from the vapours that leave the top of crude-oil or
process-unit fractionating columns within the refinery. These volatile gases are bubbled
through an absorption fluid, such as kerosene or heavy naphtha, in equipment resembling a
fractionating column. The light products dissolve in the oil while the dry gases—such as
hydrogen, methane, ethane, and ethylene—pass through undissolved. Absorption is more
effective under pressures of about 7 to 11 kilograms per square centimetre (100 to 150
pounds per square inch) than it is at atmospheric pressure.

The enriched absorption fluid is heated and passed into a stripping column, where the light
product vapours pass upward and are condensed for recovery as liquefied petroleum gas
(LPG). The unvapourised absorption fluid passes from the base of the stripping column and
is reused in the absorption tower.

Absorption is generally used to separate a higher-boiling constituent from other components of a system of vapours and gases. The absorption medium is usually an oil in the range of
gas oil. Absorption is widely employed in the recovery of natural gasoline from well gas and
of vapours given off by storage tanks. Absorption also obtains light hydrocarbons from many
refining processes (catalytic cracking, hydrocracking, coking etc.). The solvent oil may be
heavy gasoline, kerosenes, or even heavier oils. The absorbed products are recovered by fractionation or steam stripping.

Separation > Adsorption

Certain highly porous solid materials have the ability to select and adsorb specific types of
molecules, thus separating them from other materials. Silica gel is used in this way to
separate aromatics from other hydrocarbons, and activated charcoal is used to remove liquid
components from gases. Adsorption is thus somewhat analogous to the process of
absorption with an oil, although the principles are different. Layers of adsorbed material only
a few molecules thick are formed on the extensive interior surface of the adsorbent; the
interior surface may amount to several hectares per kilogram of material.

Adsorption is employed for about the same purpose as absorption; in the process just mentioned, natural gasoline may be separated from natural gas by adsorption on charcoal.
Adsorption is also used to remove undesirable colours from lubricating oils, usually
employing activated clay. The use of molecular sieves in separating close boiling
components deserves a special mention.

Molecular sieves are a special form of adsorbent. Such sieves are produced by the
dehydration of naturally occurring or synthetic zeolites (crystalline alkali-metal
aluminosilicates). The dehydration leaves intercrystalline cavities that have pore openings of
definite size, depending on the alkali metal of the zeolite. Under adsorptive conditions,
normal paraffin molecules can enter the crystalline lattice and be selectively retained,
whereas all other molecules are excluded. This principle is used commercially for the
removal of normal paraffins from gasoline fuels, thus improving their combustion properties.
The use of molecular sieves is also extensive in the manufacture of high-purity solvents.

Solid Liquid Separation

Introduction
Separation techniques concentrate contaminated solids through physical and chemical
means. These processes seek to detach contaminants from their medium (i.e., the soil,
sand, and/or binding material that contains them).
Description:

Figure 62 Typical Gravity Separation System


The separation processes are used for removing contaminated concentrates from soils, to
leave relatively uncontaminated fractions that can then be regarded as treated soil. Ex situ
separation can be performed by many processes. Gravity separation and sieving/physical
separation are two well-developed processes that have long been primary methods for
treating municipal wastewaters. Magnetic separation, on the other hand, is a much newer
separation process that is still being tested.

Gravity Separation
Gravity separation is a solid/liquid separation process, which relies on a density difference
between the phases. Equipment size and effectiveness of gravity separation depends on the
solids settling velocity, which is a function of the particle size, density difference, fluid viscosity, and particle concentration (hindered settling). Gravity separation is also used for
removing immiscible oil phases, and for classification where particles of different sizes are
separated. It is often preceded by coagulation and flocculation to increase particle size,
thereby allowing removal of fine particles.
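
For the fine particles that gravity separation targets, the settling-velocity dependence described above is captured by Stokes' law; the Python sketch below is a minimal free-settling estimate and ignores hindered settling and the corrections needed for coarser particles. The particle and fluid properties in the example are placeholders.

def stokes_settling_velocity(diameter_m, rho_particle, rho_fluid, viscosity_pa_s,
                             g=9.81):
    """Terminal settling velocity (m/s) of a small sphere by Stokes' law:
    v = g * d**2 * (rho_p - rho_f) / (18 * mu).  Valid only at low particle
    Reynolds number and for free (unhindered) settling."""
    return g * diameter_m ** 2 * (rho_particle - rho_fluid) / (18.0 * viscosity_pa_s)

# 50 micron silt (2650 kg/m3) settling in water (1000 kg/m3, 0.001 Pa.s)
v = stokes_settling_velocity(50e-6, 2650.0, 1000.0, 0.001)
print("Settling velocity ~ %.1f mm/s" % (v * 1000.0))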

Magnetic Separation
Magnetic separation is used to extract slightly magnetic radioactive particles from host
materials such as water, soil, or air. All uranium and plutonium compounds are slightly
magnetic while most host materials are nonmagnetic. The process operates by passing
contaminated fluid or slurry through a magnetized volume. The magnetized volume contains
a magnetic matrix material such as steel wool that extracts the slightly magnetic
contamination particles from the slurry.

Sieving/Physical Separation
Sieving and physical separation processes use different size sieves and screens to
effectively concentrate contaminants into smaller volumes. Physical separation is based on
the fact that most organic and inorganic contaminants tend to bind, either chemically or
physically, to the fine (i.e., clay and silt) fraction of a soil. The clay and silt soil particles are, in
turn, physically bound to the coarser sand and gravel particles by compaction and adhesion.
Thus, separating the fine clay and silt particles from the coarser sand and gravel soil
particles would effectively concentrate the contaminants into a smaller volume of soil that
could then be further treated or disposed of.

Module 5 – Process Control & Economics

Process Control Basics

Measured Variables
Definition: 1. The physical quantity, property, or condition which is to be measured. Common
measured variables are temperature, pressure, rate of flow, thickness, speed, etc. 2. The
part of the process that is monitored to determine the actual condition of the controlled
variable.

Process Control Systems


This part of the course starts with an outline and overview of basic control concepts.
Questions which process engineers routinely have to answer about process control include
the following:
• I have this process. What should I control?
• Where on the process do I put my control loops?
• As I proceed with the design of a process, what aspects of control should I consider
at which stages?
Most books with the words `process control' in the title do little to answer these questions.
Classical linear control theory, which forms the basis of most books on control, is much
concerned with how to design controllers and is less helpful on how to design complete
control systems. Other problems with this classical approach, for most process engineers
wishing to design control systems for real chemical processes, are the restriction of most of
its methods to idealised process models, and the extensive use of rather specialised
mathematics.
Satisfactory answers to questions such as the above frequently require little conventional
mathematics. What they do require, however, is a good understanding of what a process is
intended to do and how it works.
In this course we will approach process control from the standpoint of a chemical or process
engineer, and address these questions and others like them. We will consider the process
and its control system in the language of process engineering. We will use mathematics, as
such, only when necessary, and the language of classical control engineering only when it is
unavoidable, or will add very significantly to the process engineer's understanding.

Why Control?
Chemical plants are intended to be operated under known and specified conditions. There
are several reasons why this is so:
Safety:
Formal safety and environmental constraints must not be violated.
Operability:
Certain conditions are required by chemistry and physics for the desired reactions or
other operations to take place. It must be possible for the plant to be arranged to
achieve them.
Economic:
Plants are expensive and are intended to make money. Final products must meet market
requirements of purity; otherwise they will be unsaleable. Conversely, the manufacture
of an excessively pure product involves unnecessary cost.
A chemical plant might be thought of as a collection of tanks in which materials are heated,
cooled and reacted, and of pipes through which they flow. Such a system will not, in general,
naturally maintain itself in a state in which precisely the temperature required by a reaction
is achieved, pressures in excess of the safe limits of the vessels are avoided, or flowrates
just sufficient to achieve the economically optimum product composition arise of their own accord.

Control Objectives
Control systems in chemical plants have, as noted, three functions.
• Safety.
• Operability, i.e. to ensure that particular flows and holdup are maintained at chosen
values within operating ranges.
• To control product quality, process energy consumption etc.
To a large extent these are quite separate objectives. Indeed, in the case of safety systems
separate equipment is generally used. The aims of control for operability are secondary to
those of strategic control for quality etc., which directly affect process profitability.

Control for Safety


Concern for safety is paramount in designing a chemical plant and its control systems.
Ideally a process design should be `intrinsically safe', that is, plant and equipment should be
such that any deviation, such as an increase in reactor pressure, will itself change
operating conditions so that it is rapidly removed, for example by a fall in reaction rate. For
many perturbations this type of responsive, passive safety system will not be possible and
active systems will be required.
These active safety systems must be robust and of high integrity. Current processes achieve
this through simplicity. The ultimate safety system is in most cases the mechanical relief
valve which simply vents the plant to atmosphere, possibly through a flare or scrubber.
We will not discuss control for safety explicitly in this course. Generally speaking, a complete
and separate system is provided to handle emergency control action. The need for this, and
its design requirements, are established in hazard and operability or hazop studies. These
are typically carried out on the complete process with its `normal' control systems in place.
A number of safety issues will be addressed in the course of developing the design of the
control systems for normal operation, but it must be emphasised that our treatment of this
vital issue will be relatively restricted.

Control for Operability


The operator of a process quite simply has to
• know what it is doing
• be able to make it do what he or she wants, rather than to follow its natural
inclinations.
The issue of making a plant behave in this way is called operability.

The majority of control loops in a plant control system are associated with operability.
Specific flow rates have to be set, levels in vessels maintained and chosen operating
temperatures for reactors and other equipment achieved.

Control for Profitability


There is no point in building a plant which is totally safe and can be made to take up any
(safe) conditions of flow, temperature etc., if the conditions under which it is operated do not
produce the correct amount of product, to the correct specification, and so allow its operators
to make a profit.
The top level of process control, which we will refer to as the strategic control level, is thus
concerned with achieving the appropriate values principally of:
• Production rate,
• Product quality, and
• Energy economy.

Techniques of Control

Basic Concepts of Feedback Control


The task of maintaining these required conditions falls to one or, more usually, several
process control systems with which the plant will be equipped. The practical aspects of
these will be discussed more fully in the following module. The underlying principle of most
process control, however, is already understood by anyone who has grasped the operation
of the domestic hot water thermostat:
• The quantity whose value is to be maintained or regulated, e.g. the temperature of the
water in a cistern, is measured.
• Comparison of the measured and required values provides an error, e.g. `too hot' or
`too cold'.
• On the basis of the error, a control algorithm decides what to do.
Such an algorithm might be:
If the temperature is too high then turn the heater off. If it is too low then turn the
heater on.
• The adjustment chosen by the control algorithm is applied to some adjustable
variable, such as the power input to the water heater.
This summarises the basic operation of a feedback control system such as one would
expect to find carrying out nearly all control operations on chemical plants, and indeed in
most other circumstances where control is required. The diagram below shows a feedback control
loop.

Figure 63 Feedback control loop
Notice that this extremely simple idea has a number of very convenient properties. The
feedback control system seeks to bring the measured quantity to its required value or
setpoint. The control system does not need to know why the measured value is not
currently what is required, only that this is so. There are two possible causes of such a
disparity:
• The system has been disturbed. This is the common situation for a chemical plant
subject to all sorts of external upsets. However, the control system does not need to
know what the source of the disturbance was.
• The setpoint has been changed. In the absence of external disturbance, a change in
setpoint will introduce an error. The control system will act until the measured quantity
reaches its new setpoint.
A control system of this sort should also handle simultaneous changes in setpoint and
disturbances.
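The thermostat algorithm described above can be written down in a few lines. The sketch below is not from the original text: it assumes a very simple first-order tank model and arbitrary parameter values, but it shows the four feedback steps (measure, compare, decide, adjust) bringing the temperature to its setpoint and recovering from a disturbance without knowing its cause.

# Minimal on-off (thermostat-style) feedback control of a water heater.
# The tank model and all numerical values are illustrative assumptions.
setpoint = 60.0       # required water temperature, degC
temperature = 20.0    # initial measured temperature, degC
ambient = 20.0        # temperature of the surroundings, degC

dt = 10.0             # time step, s
heat_rate = 0.05      # heating effect with the heater on, degC/s
loss_coeff = 0.0005   # heat-loss coefficient to the surroundings, 1/s

for step in range(1000):
    # 1. Measure, 2. compare with the required value to form the error
    error = setpoint - temperature

    # 3. Control algorithm: too cold -> heater on, too hot -> heater off
    heater_on = error > 0.0

    # 4. Apply the adjustment to the process (simple energy balance on the tank)
    heating = heat_rate if heater_on else 0.0
    temperature += dt * (heating - loss_coeff * (temperature - ambient))

    # An external disturbance: a slug of cold water drawn off part-way through
    if step == 500:
        temperature -= 15.0

print(f"Final temperature: {temperature:.1f} degC (setpoint {setpoint:.1f} degC)")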

Advantages of Feedback Control


Not only does the feedback control system require no knowledge of the source or nature of
disturbances, but it requires minimal detailed information about how the process itself works.
Feedback control action is entirely empirical: so long as an adjustment is made in the
correct `sense', e.g. more heat means increasing temperature and vice versa, the
control system should remove the effect of an external disturbance.
As we will see, it helps to know more than this, but the minimum information required to
make a feedback control system work is whether the adjustment makes the measurement go
up or down.

Disadvantages of Feedback Control


The main disadvantage of feedback control is that the disturbance has already entered and
upset the process: only after the process output deviates from the setpoint does the controller
take corrective action. Although most processes can tolerate some fluctuation of the controlled
variable within a certain range, there are two process conditions which can make the overall
effectiveness of feedback control quite unsatisfactory. One of these is the occurrence of
disturbances of a magnitude large enough to seriously affect or even damage
the process. The other is the occurrence of a large amount of lag (time delay) within the
process. These are discussed further below.

Large Magnitude Disturbance


An example of a large-magnitude disturbance that is strong enough to seriously affect the
process is the temperature control of a catalytic reactor in which a strongly exothermic
reaction takes place. Because the heat of reaction is very high, the reactant gas mixture is
diluted with an inert gas to carry away most of the reaction heat, while the reactor
temperature is maintained by feedback control of the coolant flowrate through internal coils.
If a large disturbance occurs, such as a sudden large increase in the reactant concentration
of the feed entering the reactor, the rise in temperature can be so large and so rapid that
the catalyst is burnt out before the control system senses the change and takes any action.
A diagram of this situation is shown below.

Figure 64 Large magnitude disturbance

Large Time Delay


A simple example of a large time delay is the distillation column outlined in the figure
below. If we use feedback control to regulate the purity of the top product, then when the feed
composition changes (a disturbance) the control system is unaware of it and takes no action
until the effect of the disturbance has travelled to the sensor position at the top. By the
time the controller takes corrective action, the whole column may be far from its design
conditions.

Figure 65 Time delay
The importance of either occurrence is judged in economic terms. In either case,
the principal concern is the existence of errors that have significant economic consequences
for the overall process operation. In such cases, feedforward control can be used to overcome
these disadvantages or inadequacies of feedback control.
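As a simple illustration of the feedforward idea (this sketch is not from the original text; the steady-state heater model and all numbers are assumptions), the measured disturbance, here the inlet temperature of a heated stream, is used to set the manipulated variable before the outlet is upset.

# Sketch of static feedforward compensation for a heated stream.
# Assumed steady-state model: outlet_temp = inlet_temp + process_gain * heater_power.
# The measured inlet temperature (the disturbance) sets the heater power directly,
# instead of waiting for the outlet temperature to deviate from its setpoint.
process_gain = 0.5   # degC of outlet temperature rise per unit of heater power (assumed)
setpoint = 80.0      # required outlet temperature, degC

def feedforward_power(inlet_temp_measured):
    # Invert the steady-state model to find the power needed for this inlet temperature
    return (setpoint - inlet_temp_measured) / process_gain

for inlet in [20.0, 15.0, 10.0, 25.0]:        # measured disturbance values, degC
    power = feedforward_power(inlet)
    outlet = inlet + process_gain * power     # process response if the model is perfect
    print(f"inlet {inlet:4.1f} degC -> power {power:6.1f} -> outlet {outlet:4.1f} degC")

# With a perfect model the outlet never deviates from the setpoint. In practice the
# model is never exact, so a feedback (trim) loop is normally kept alongside feedforward.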

Process Economics

Refinery Economics
The overall economics or viability of a refinery depends on the interaction of three key
elements: the choice of crude oil used (crude slates), the complexity of the refining
equipment (refinery configuration) and the desired type and quality of products produced
(product slate). Refinery utilization rates and environmental considerations also influence
refinery economics.
Using more expensive (lighter, sweeter) crude oil requires less refinery upgrading, but
supplies of light, sweet crude oil are decreasing and the price differential between light,
sweet crudes and heavier, more sour crudes is widening. Using cheaper, heavier crude oil means more investment in
upgrading processes. Costs and payback periods for refinery processing units must be
weighed against anticipated crude oil costs and the projected differential between light and
heavy crude oil prices.
Crude slates and refinery configurations must take into account the type of products that will
ultimately be needed in the marketplace. The quality specifications of the final products are
also increasingly important as environmental requirements become more stringent.

Crude Slate
Different types of crude oil yield a different mix of products depending on the crude oil's
natural qualities. Crude oil types are typically differentiated by their density (measured as
API gravity) and their sulphur content. Crude oil with a low API gravity is considered a heavy
crude oil and typically has a higher sulphur content and a larger yield of lower-valued
products. Therefore, the lower the API of a crude oil, the lower the value it has to a refiner as
it will either require more processing or yield a higher percentage of lower-valued by-
products such as heavy fuel oil, which usually sells for less than crude oil.
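API gravity is an inverse measure of density, and the conversion to specific gravity below is the standard defining formula. The light/medium/heavy cut-off values used in the sketch are indicative only and vary between organisations; the crude types in the example list are illustrative.

# API gravity to specific gravity (relative to water at 60 degF) and a rough classification.
# The conversion formula is standard; the cut-off values are indicative only.
def api_to_sg(api_gravity):
    return 141.5 / (api_gravity + 131.5)

def classify(api_gravity):
    if api_gravity >= 31.1:
        return "light crude"
    elif api_gravity >= 22.3:
        return "medium crude"
    return "heavy crude"

for api in [62.0, 40.0, 27.0, 12.0]:   # e.g. condensate, light, heavy, bitumen-like
    print(f"API {api:5.1f} -> SG {api_to_sg(api):.3f} -> {classify(api)}")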
Crude oil with a high sulphur content is called a sour crude while sweet crude has a low
sulphur content. Sulphur is an undesirable characteristic of petroleum products, particularly
in transportation fuels. It can hinder the efficient operation of some emission control
technologies and, when burned in a combustion engine, is released into the atmosphere
where it can form sulphur dioxide. With increasingly restrictive sulphur limits on
transportation fuels, sweet crude oil sells at a premium. Sour crude oil requires more severe
processing to remove the sulphur. Refiners are generally willing to pay more for light, low
sulphur crude oil.
Most refineries in Western Canada and Ontario were designed to process the light sweet
crude oil that is produced in Western Canada. Unlike leading refineries in the U.S.,
Canadian refineries in these regions have been slower to reconfigure their operations to
process lower cost, less desirable crude oils, instead choosing to rely extensively on the
abundant, domestically-produced, light, sweet crudes. As long as these lighter crudes were
available, refining economics were insufficient to warrant new investment in heavy oil
conversion capacity.
However, with growing oil sands production and the declining production of conventional light
sweet crudes, refineries in Western Canada and Ontario have started to make the
investment required to process the increasing supply of heavier crudes. In 2003, Shell
Canada completed the conversion of their Scotford refinery to use bitumen feedstock. In the
fall of 2003, Consumer's Co-operative Refineries Ltd completed a 35,000 bbls/day
expansion of their refinery in Regina, Saskatchewan. This increased their heavy oil refining
capacity to approximately 85,000 bbls/day. Petro-Canada has also announced plans to do a
major refitting of their Edmonton refinery. Although this construction is not expected to
increase their capacity, it will allow them to upgrade and refine oil sands feedstock. The
$1.2 billion CDN project will significantly expand the existing coker at Edmonton allowing for
approximately 53,000 bbls/day of bitumen upgrading. Similarly, Suncor is expected to do a
feedstock conversion at its Sarnia refinery to run more lower value oil sands feedstock.
Much of this investment by the large integrated oil companies (companies that are involved
in both the production of crude oil and the manufacturing and distribution of petroleum
products) is associated with ensuring a market for their growing oil sands production.
In Western Canada and Ontario, almost 50% of the crude oil processed by refiners is
conventional light, sweet crude oil and another 25% is high quality synthetic crude oil.
Synthetic crude is a light crude oil that is derived by upgrading oil sands. Most of the
remaining crude oil processed by these refineries is heavy, sour crude. The crude slate is
expected to change significantly in the years ahead as refiners increase their capacity to
process heavy crude oil and lower quality synthetic crudes.
Refineries in Atlantic Canada and Quebec are dependent on imported crudes and tend to
process a more diverse crude slate than their counterparts in Western Canada and Ontario.
These refiners have the capacity to purchase crude oil produced almost anywhere in the
world and therefore have considerable flexibility in their crude-buying decisions. Approximately
1/3 of crude processed in Eastern Canada and Quebec is conventional, light sweet crude
and another 1/3 is medium sulphur, heavy crude oil. The remaining 1/3 is a combination of
sour light, sour heavy and very heavy crude oil. The crude slate in Eastern Canada is
expected to remain much more static than that in Western Canada and Ontario, as these
refiners are not constrained by the quality or volume of domestic crude production.
Figure 3 illustrates the product yield for six typical types of crude oil processed in Canada. It
includes both light and heavy as well as sweet and sour crude oils. A very light condensate
(62 API) and a synthetic crude oil are also included. The chart compares the different output
when each crude type is processed in a simple distillation refinery. The output is broken
down into five main product groups: gasoline, propane and butane (C3/C4), Cat feed (a
partially processed material that requires further refining to make usable products), distillate
(which includes diesel oil and furnace oil) and residual fuel (the heaviest and lowest-valued
part of the product output, used to make heavy fuel oil and asphalt).

Refinery Configuration
A refiner's choice of crude oil will be influenced by the type of processing units at the
refinery. Refineries fall into three broad categories. The simplest is a topping plant, which
consists only of a distillation unit and probably a catalytic reformer to provide octane. Yields
from this plant would most closely reflect the natural yields from the crude processed.
Typically only condensates or light sweet crude would be processed at this type of facility
unless markets for heavy fuel oil (HFO) are readily and economically available. Asphalt
plants are topping refineries that run heavy crude oil because they are only interested in
producing asphalt.
The next level of refining is called a cracking refinery. This refinery takes the gas oil portion
from the crude distillation unit (a stream heavier than diesel fuel, but lighter than HFO) and
breaks it down further into gasoline and distillate components using catalysts, high
temperature and/or pressure.
The last level of refining is the coking refinery. This refinery processes residual fuel, the
heaviest material from the crude unit, and thermally cracks it into lighter products in a coker
or a hydrocracker. The addition of a fluid catalytic cracking unit (FCCU) or a hydrocracker
significantly increases the yield of higher-valued products like gasoline and diesel oil from a
barrel of crude, allowing a refinery to process cheaper, heavier crude while producing an
equivalent or greater volume of high-valued products.
Hydrotreating is a process used to remove sulphur from finished products. As the
requirement to produce ultra low sulphur products increases, additional hydrotreating
capability is being added to refineries. Refineries that currently have large hydrotreating
capability have the ability to process crude oil with a higher sulphur content.
Figure 4 demonstrates that using the same crude input (heavy crude with a 27 API) yields a
very different range of petroleum products depending on the refining units and processes
used. In the case of the cracking refinery, the addition of other blending materials at various
stages of production is required but the resulting volumetric output is greater than the volume
of the crude oil input. Each refinery is unique due to its age, technology and modifications over
time, but generalizations are possible. The installation of additional conversion capability
increases the yield of clean products and reduces the yield of heavy fuel oil. However,
increased conversion capability would generally result in higher energy use and, therefore,
higher operating costs. These higher operating and capital costs must be weighed against
the lower cost of the heavier crude oil.
Canada has primarily cracking refineries. These refineries run a mix of light and heavy crude
oils to meet the product slate required by Canadian consumers. Historically, the abundance
of domestically produced light sweet crude oils and a higher demand for distillate products,
such as heating oil, than in some jurisdictions reduced the need for upgrading capacity in
Canada. However, in more recent years, the supply of light sweet crude has declined and
newer sources of crude oil tend to be heavier. Many of the Canadian refineries are now
being equipped with upgraders to handle the heavier grades of crude oil currently being
produced.

[1] Source: NRCan surveys

Product Slate
Refinery configuration is also influenced by the product demand in each region. Refineries
produce a wide range of products including: propane, butane, petrochemical feedstock,
gasolines (naphtha specialties, aviation gasoline, motor gasoline), distillates (jet fuels, diesel,
stove oil, kerosene, furnace oil), heavy fuel oil, lubricating oils, waxes, asphalt and still gas.
Nationally, gasoline accounts for about 40% of demand with distillate fuels representing
about one third of product sales and heavy fuel oil accounting for only eight percent of sales.
Total petroleum product demand is distributed almost equally across the regions, with
Atlantic/Quebec, Ontario and the West each accounting for about one third of total sales.
However, the mix of products varies quite significantly among the regions. [2]
In the Atlantic provinces, where furnace oil (light heating oil) is the primary source of home
heating, distillate fuels make up 40% of product demand, and heavy fuel oil, used to
generate electricity, accounts for another 24%. Gasoline sales account for less than 30% of
product demand.
In Quebec, where natural gas and hydroelectricity are prevalent, distillate fuel has a 34%
share of sales and gasoline is about 40%. Similarly, in Ontario, gasoline sales outpace
distillate sales and account for more than 45% of total product demand, with distillates at less
than 30%.
In Western Canada, agricultural use is one of the primary drivers behind distillate demand
and gasoline and distillate each account for about 40% of total petroleum product sales.
These regional differences in product demand have influenced the configurations of the
refineries in each area.
By comparison, in the U.S., the demand for gasoline is much larger than distillate demand
and, therefore, refiners configure their installations to maximize gasoline production.

Gasoline sales account for nearly 50% of demand while distillate sales account for less than
30% of product demand. In several Western European countries, most notably Germany
and France, policies exist that encourage the use of diesel engines creating a much stronger
distillate component. Gasoline accounts for less than 20% of petroleum product sales in
Europe.
The US refineries are configured to process a large percentage of heavy, high sulphur crude
and to produce large quantities of gasoline, and low amounts of heavy fuel oil. U.S. refiners
have invested in more complex refinery configurations, which allow them to use cheaper
feedstock and have a higher processing capability.
Canada's refineries do not have the high conversion capability of the US refineries, because,
on average, they process a lighter, sweeter crude slate. Canadian refineries also face a
higher distillate demand, as a percent of crude, than those found in the U.S. so gasoline
yields are not as high as those in the US, but are still significantly higher than European
yields.
The relationship between gasoline and distillate sales can also create challenges for refiners.
A refinery has a limited range of flexibility in setting the gasoline to distillate production ratio.
Beyond a certain point, distillate production can only be increased by also increasing
gasoline production. For this reason, Europe is a major gasoline exporter, primarily to the
U.S.

Mass-exchange processes, such as distillation, absorption, extraction, adsorption and drying
are used in chemical technology for separation of substances into their components. These
processes have common features.
At least three substances take part in a mass-exchange process: a distributing substance that
forms the first phase, a second distributing substance that forms the second phase, and a
distributed substance that migrates from one phase to the other. The driving force of the
process is the difference between the current and equilibrium concentrations of the distributed
substance. The relationship between these concentrations may be linear (as in absorption or
extraction) or non-linear (as in distillation), and it depends strongly on the process
parameters (temperature, pressure) and on the presence of various additives. Industrial
apparatus is designed for particular values of these parameters and particular compositions of
the feed streams. In practice, disturbances can distort the material and thermal balances in
the apparatus, cause pressures and temperatures to deviate from their desired values and,
finally, cause the composition (quality) of the final products to deviate from specification.
The objective of the control system is therefore to stabilise these process parameters,
maintaining the material and thermal balances by suppressing the various disturbances.
The majority of mass-exchange processes take place in columns several metres in diameter and
several tens of metres high. Time delays in such equipment can vary from a few minutes to
several tens of minutes, so single-loop control systems exhibit large offsets and long
transients. Employing cascade control systems can improve the performance of these processes.
A lack of instrumentation for continuous measurement of the composition of intermediate and
final products makes accurate control difficult; in such cases product quality is controlled
indirectly, i.e. by controlling the boiling temperatures, densities or viscosities of the mixtures.
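A minimal sketch of the cascade arrangement mentioned above is given below. It is not taken from the original text: the loop structure is generic, and the controller gains, signal values and variable names are illustrative assumptions. A slow primary (master) controller, for example on a tray temperature, adjusts the setpoint of a fast secondary (slave) flow controller, so that disturbances in the inner loop are corrected before they upset the column.

# Generic cascade control skeleton: a slow primary (master) loop adjusts the
# setpoint of a fast secondary (slave) loop. All gains and signals are illustrative.
class PIController:
    def __init__(self, kc, ti, dt):
        self.kc, self.ti, self.dt = kc, ti, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kc * (error + self.integral / self.ti)

dt = 1.0
primary = PIController(kc=0.5, ti=200.0, dt=dt)    # e.g. tray temperature (slow loop)
secondary = PIController(kc=2.0, ti=10.0, dt=dt)   # e.g. reflux flow (fast loop)

temperature_setpoint = 95.0   # degC, required tray temperature (illustrative)
tray_temperature = 97.0       # degC, measured primary variable (illustrative)
reflux_flow = 50.0            # t/h, measured secondary variable (illustrative)
nominal_flow_setpoint = 50.0  # t/h, base value trimmed by the primary controller

# One execution cycle of the cascade: the primary output trims the secondary setpoint,
# and the secondary controller output is sent to the valve.
flow_setpoint = nominal_flow_setpoint + primary.update(temperature_setpoint, tray_temperature)
valve_signal_change = secondary.update(flow_setpoint, reflux_flow)

print(f"flow setpoint {flow_setpoint:.2f} t/h, valve output change {valve_signal_change:.2f}")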

22.1. Control of distillation columns.


These columns are used to separate homogeneous liquid mixtures into their components or
groups of components.
Let us consider the possible disturbances, manipulated variables and output variables. Since
the feed comes to the distillation column from upstream process units, variations in feed
flowrate, composition and temperature are the major disturbances. The enthalpies of the
heating vapour (steam) and of the coolant, and heat losses, are further possible disturbances.
Usually only the feed temperature is stabilised (controlled), whereas the feed flowrate is
merely measured.
The flowrates of heating vapour, heat-transfer agent, coolant, distillate product, bottoms
product and reflux are the manipulated variables.
The compositions of the distillate and bottom products, the liquid level in the column, the
level of distillate in the tank, and the pressure in the column are the output variables.

Example 1.
Figure 66 presents a schematic view of a simple control scheme comprising six single-loop
control systems. This scheme stabilises the composition of the distillate product and
maintains the material and thermal balances in the distillation column.
The major controller, which stabilises the composition of the distillate, is the temperature
controller (pos. 5-2). It manipulates the flow rate of reflux (pos. 5-3). Temperature controller
(pos. 1-2) controls feed temperature by manipulating the flowrate of heat-transfer agent.
Level controllers (pos. 7-2) and (pos. 8-2) maintain the material balance of liquid phase in the
column, and pressure controller (pos. 4-2) maintains the material balance of vapor phase.
The flowrate controller (pos. 3-3) stabilises the flowrate of heating vapour into the re-boiler.
If our task is to control the composition of the bottom product, then the flowrate of steam for
heating is manipulated by the control signal from the temperature controller (pos. 2-2), and
the flowrate of reflux is manipulated by the flowrate controller (pos. 6-3). Simultaneous
control of the compositions of the distillate and bottom products, or of the temperatures at
the top and at the bottom of the column, is not usually attempted because these process
variables are interdependent; applying two such interacting feedback loops may reduce the
stability of the control system.

This control system has several disadvantages:


• stabilisation of the heating vapour flowrate without regard to the other process parameters
causes excessive steam consumption, because the setpoint supplied to this controller is
deliberately set slightly high to allow for possible variations in the enthalpy of the steam,
subcooling of the reflux, etc.;
• since the effect of disturbances such as the flowrate or temperature of the feed is not
suppressed, significant deviations in the composition of the final products from their desired
values can result. This happens because the temperature controller at the top of the column
only detects a deviation in the temperature (composition) of the product after the composition
of the liquid mixture has changed along the height of the column.

Figure 66 Distillation column with six single-loop control systems.

Figure 67 Distillation column with single-loop and cascade control systems

Another significant disadvantage of using the temperature of the product to control its
composition is that the variations of this temperature caused by changes in product
composition are comparable with the variations caused by pressure changes in the column, and
may also be comparable with the accuracy of the temperature sensor. Suppose the allowed
variation in product composition is ±1% and the difference in boiling temperatures of the
components is 20 °C; the corresponding temperature variation is then only ±0.2 °C. For a
potentiometer with a temperature range from 0 to 150 °C and an accuracy of 0.5%, the error in
the temperature measurement is 0.75 °C. This should be taken into account when selecting a
temperature sensing device.
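The figures quoted above follow directly from the stated spans; the short check below uses only the values given in the text.

# Check of the temperature-measurement example given above.
composition_tolerance = 0.01        # allowed composition variation, +/- 1% (as a fraction)
boiling_point_difference = 20.0     # degC between the two components

# Temperature change corresponding to the allowed composition change
delta_t_composition = composition_tolerance * boiling_point_difference
print(f"Temperature variation to be detected: +/- {delta_t_composition:.1f} degC")   # 0.2 degC

instrument_span = 150.0             # degC (range 0 to 150 degC)
instrument_accuracy = 0.005         # 0.5% of span

measurement_error = instrument_accuracy * instrument_span
print(f"Instrument error: +/- {measurement_error:.2f} degC")                          # 0.75 degC

# The instrument error (0.75 degC) exceeds the signal to be detected (0.2 degC),
# so this sensor cannot reliably infer composition from temperature alone.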

Example 2.
In Figure 67 the ratio controller (pos. 2-5) controls the flowrate ratio between the feed
mixture and the steam supplied to the reboiler, thus reducing the energy consumed in
separating the mixture into its components. A cascade control system is used to control the
temperature at the top of the distillation column by introducing a correction signal
(pos. 4-3) from a loop measuring the temperature on a selected tray of the column.
These are only two simple examples; in practice more complex control systems are used.
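A brief sketch of the ratio-control action described in Example 2 is given below. It is not from the original text: the ratio value, flow figures and names are illustrative assumptions. The measured feed flowrate is multiplied by the desired steam-to-feed ratio to generate the setpoint of the steam flow controller on the reboiler.

# Ratio control sketch: the steam flow setpoint follows the measured feed flowrate.
# The ratio value and the flow figures are illustrative assumptions.
steam_to_feed_ratio = 0.15     # tonnes of steam per tonne of feed (example value)

def steam_flow_setpoint(feed_flow_measured):
    # Setpoint passed to the steam flow controller on the reboiler
    return steam_to_feed_ratio * feed_flow_measured

for feed_flow in [100.0, 120.0, 90.0]:   # t/h, feed flowrate changes (the disturbance)
    print(f"feed {feed_flow:6.1f} t/h -> steam setpoint {steam_flow_setpoint(feed_flow):5.1f} t/h")

# Because the steam demand tracks the feed automatically, less excess steam has to be
# carried as a safety margin, which is the energy saving referred to in the text.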
22.2. Control of absorption columns

Absorption columns (or absorbers) are used as intermediate units in chemical processes. The
objective of absorption processes is to maximise the degree of absorption or to minimise the
consumption of energy for the separation of the mixture.

The major sources of disturbances are the flowrate, composition and temperature of the gas
stream entering the absorber and, sometimes, the temperature and composition of the liquid
absorbent stream. The major manipulated variables are the flowrate of the liquid absorbent
stream and the flowrate of the bottom product.

Controlling the pressure and the level in the column maintains the material balance between
the gaseous and liquid phases. A control system with several single control loops (see
Figure 22.3a) maintains the material and thermal balances by using the level controller
(pos. 2-2) and the pressure controller (pos. 1-2), and keeps the composition of the bottoms
product at the desired value by using the composition controller (pos. 3-2). Introducing a
control signal from a flow ratio controller (pos. 3-5 in Figure 22.3b) suppresses the effect
of variations in the gas mixture flowrate (the disturbance) and improves the performance of
this control system. The cascade control system of Figure 22.3c uses the composition of the
gas-liquid mixture on a selected tray of the column as an auxiliary controlled variable; in
this case the composition controller (pos. 3-2) is the primary, or master, controller,
whereas the composition controller (pos. 3-5) is the secondary, or slave, controller.
Figure 22.3. Control of absorbers: (a) single-loop control; (b) with flow ratio controller; (c) cascade control.

1 - gas mixture;
2 - liquid absorbent;
3 - bottom product;
4 - end gases.

