I hereby agree that this project paper may be kept at the Universiti Malaysia Pahang Library
under the following terms of use:
1. The project paper is the property of the author, unless it was written as a joint
project funded by UMP, in which case it is the property of UMP.
2. Copies, in paper or microform, may be made only with the written
permission of the author.
3. The Universiti Malaysia Pahang Library is permitted to make copies for
academic purposes.
4. The project paper may be published only with the permission of the author. Royalty
payments shall be at a rate to be agreed upon.
5. *I permit the Library to make copies of this project paper as exchange material
between institutions of higher learning.
6. **Please tick (√)
√ UNRESTRICTED
Certified by
_______________________ _________________________
(AUTHOR'S SIGNATURE) (SUPERVISOR'S SIGNATURE)
Signature : ....................................................
Supervisor's Name : En. Noor Asma Fazli bin Abdul Samad
Date : 18 April 2008
APRIL 2008
DEDICATION
All praised and thanks are due to Allah Almighty and peace and blessing be upon
His Messenger
ACKNOWLEDGEMENT
I would like to express my deepest gratitude to the following persons, whose
unlimited kindness, help, and guidance enabled me to complete this research
project in time as a requirement for my degree.
To Mr. Muhammad Bin Awang and Mdm. Azni Binti Che Ngah, my beloved
parents: my utmost gratitude towards both of you will never fade. As the people who
brought me into the world and taught me about it, I shall repay your kindness by
becoming a successful and meaningful human being. To all my family members,
thank you for your understanding and for caring so much for me.
To Mr. Noor Asma Fazli Bin Abdul Samad, my supervisor, Miss Noralisa Binti
Harun and Miss Sureena Binti Abdullah, my research panel, and Mdm. Zailinshah
Binti Yusoff, my thesis writing panel: thank you for helping me in many ways during
the progress of this research project. Without their generosity in sparing their
precious time to guide and help me, the aim of the project might not have been fulfilled.
ABSTRACT
ABSTRAK
TABLE OF CONTENTS
DECLARATION ii
DEDICATION iii
ACKNOWLEDGEMENT iv
ABSTRACT v
ABSTRAK vi
TABLE OF CONTENTS vii
LIST OF TABLES x
LIST OF FIGURES xi
LIST OF APPENDICES xiii
1 INTRODUCTION
1.1 Introduction 1
1.2 Problem Statement 3
1.3 Objectives and Scope Research 4
1.4 Summary 5
2 LITERATURE REVIEW
2.1 Introduction 6
2.2 Principle of Safety 7
2.3 Principle of Fault 8
2.4 Fault Detection 10
2.5 Neural Network 17
2.5.1 Background of Neural Network 17
3 PLANT SIMULATION
3.1 Introduction 27
3.2 Process Description 27
3.3 Modeling the Vinyl Acetate Process 28
3.4 Steady State Data and Dynamic Simulation 35
3.5 VAC Plant MATLAB Program 36
3.6 Simulation Data Validation 40
3.7 Summary 41
4 METHODOLOGY
4.1 Phases in research 43
4.2 Fault detection scheme 45
4.3 Summary 46
REFERENCES 68
APPENDICES 73
LIST OF TABLES
LIST OF FIGURES
LIST OF APPENDICES
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
Research on fault detection systems has gained increasing interest lately, not
only because of cost savings but, more importantly, because such systems serve as a
safety mechanism. The disasters in Bhopal and Chernobyl are good examples of why
an advanced controller can play a vital role in preventing such incidents from
happening in the first place. The implementation of an advanced controller for fault
detection helps to reduce the probability of accidents and losses resulting from
human or mechanical error. The emergence of artificial intelligence (AI) also plays a
role in the development of control systems. The cognitive approach of AI focuses on
imitating the rational thinking of humans (Lee, 2006). AI systems such as fuzzy
logic, neural networks, and genetic programming have been integrated with
conventional control systems to produce intelligent controller systems. In this case,
the intelligent controller helps the operator handle various abnormal conditions or
faults more reliably, efficiently, and quickly.
Such a system acts as a process predictor and classifies the causes of faults. The
development of both models utilizes the nonlinear mapping capability of neural
networks.
As we head towards the future, advances in knowledge and technology have
contributed to improvements in the reliability, safety, and efficiency of fault
detection and diagnosis systems. Such systems are very important, as they prevent
accidents, failures, and disasters and save many lives. Today, safety and health have
become a main agenda in developing and managing technical processes.
Consequently, the development of neural networks in various fields, especially in
fault detection, has shown great progress. The neural network has the potential to be
developed further and applied in chemical plants such as the Vinyl Acetate Process
Plant as a process control mechanism. In this work, MATLAB 7.0 is used to model
and simulate the neural network for monitoring and supervising the Vinyl Acetate
process route. MATLAB is a high-performance language for technical computing
that has been used widely in the engineering field to solve many mathematical and
technical problems. Thus, this research focuses on fault detection in the Vinyl
Acetate Process Plant using a neural network, and emphasizes how, and how far, a
neural network can contribute to overcoming failures and faults in the Vinyl Acetate
Process Plant.
The main aim of this research is to develop a fault detection system using a
neural network. Using the Vinyl Acetate Plant as the case study, the implementation
of the neural network will help the controller detect faults more efficiently. The work
covers the following scope: the difference between the actual plant signal and the
estimated normal plant signal is termed the residual. This process is carried out in
MATLAB.
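The residual idea in this scope can be sketched in a few lines. The sketch below is illustrative only: the signal values, threshold, and function names are stand-ins, and it is written in Python rather than the MATLAB code actually used in this work.

```python
import numpy as np

def residual(actual, estimated):
    """Residual: difference between the measured plant signal and the
    estimate of the normal (fault-free) signal."""
    return np.asarray(actual) - np.asarray(estimated)

def detect_fault(res, threshold):
    """Flag a fault whenever the residual magnitude exceeds a threshold."""
    return np.abs(res) > threshold

# Illustrative signals: a step fault appears after sample 5
actual    = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.6, 1.7, 1.8])
estimated = np.ones(9)          # prediction of normal behaviour
flags = detect_fault(residual(actual, estimated), threshold=0.5)
print(flags.tolist())
```

In practice, the "estimated" signal would come from the trained neural network model of the plant, and the threshold would be tuned against noise levels.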
1.4 Summary
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction
In the area of plant-wide control at the supervisory level, the process fault
detection and diagnosis system plays a key role. Fault detection usually includes
fault diagnosis and fault correction. Fault diagnosis is the identification of the root
causes of a process upset, while fault correction is the provision of recommended
corrective actions to restore the process to its normal operating condition. In this
regard, appropriate real-time actions must be taken in present-day chemical and
petrochemical manufacturing plants. In most of these industries, technical personnel
are responsible for monitoring process status, detecting abnormal events, diagnosing
the source causes, and administering proper intervention to bring the process back to
normal operation. Nevertheless, the complexity of supervision tasks has increased
considerably due to the high level of development in process design and control. A
decision support system is needed to assist process operators in understanding and
assessing process status and responding quickly to abnormal events, thereby
enabling processing plants to maintain operational integrity and improve product
quality at a reduced cost (Ruiz et al., 2000). However, a very important control task
in managing process plants still remains largely a manual activity performed by
human operators: the task of responding to abnormal events in a process. This
involves the timely detection of an abnormal event, diagnosing its causal origins,
and then taking appropriate supervisory control decisions and actions to bring the
process back to a normal, safe operating state. This entire activity has come to be
called Abnormal Event Management (AEM), a key component of supervisory
control.
2.2 Principle of Safety

The safety aspect is one of the most important aspects of operating a plant,
more so than profit or the process route. Safety is defined as the ability of a system
not to cause danger to persons, equipment, or the environment (Isermann and Ballé,
1997). According to the American Institute of Chemical Engineers (AIChE) Code of
Professional Ethics, one of its fundamental principles is to use knowledge and skill
to enhance human welfare. Thus, the use of an advanced control system in the plant
is one good effort towards achieving the highest level of safety. In the Layer of
Protection Analysis (LOPA) model, the first step in analyzing and assessing the risk
of a fault concerns the process design and control system of the plant (Crowl and
Louvar, 2002).
Figure 2.1 Causes of losses in a typical chemical plant (%): mechanical failure (44),
operator error (22), unknown (12), process upset (11), natural hazards (5),
design (5), and sabotage (1).
From Figure 2.1, there are seven main causes of losses in a typical chemical
plant. By far the largest cause of losses is mechanical failure, which is usually due to
an improper control system or poor maintenance. The worst damage that can be
caused by improper action or a lack of safety awareness is fatal casualties. The
accident in Bhopal in 1984, which killed nearly 2,000 people and injured more than
20,000, and the catastrophe in Seveso in 1976 could have been prevented if the
plants involved had properly applied fundamental engineering safety principles, for
instance a fault detection system.
Figure 2.2 Time-dependency of faults: abrupt (a), incipient (b), and intermittent (c)
(Isermann, 1997).
With regard to process models, faults can be further classified. According to
Figure 2.3, additive faults influence a variable Y by the addition of a fault f, whereas
multiplicative faults influence it by the product of another variable U with f.
Additive faults appear, for example, as sensor offsets, whereas multiplicative faults
are parameter changes within a process (Isermann, 2005).
Figure 2.3 Basic models of faults: (a) Additive fault (b) Multiplicative faults
(Isermann, 2005).
ii. Reaction speed: the ability of the technique to detect faults with a reasonably
small delay after their occurrence.
iii. Robustness: the ability of the technique to operate in the presence of noise,
disturbances, and modeling errors, with few false alarms.
In general, one has to deal with three classes of failures or malfunctions as described
below (Hamid, 2004):
A large variety of techniques for fault detection have been proposed in the
literature (Choudhury et al., 2006; Thornhill and Horch, 2006; Xia and Howell,
2005). Due to the broad scope of the process fault diagnosis problem and the
difficulties of solving it in real time, various computer-aided approaches have been
developed over the years (Hamid, 2004). They cover a wide variety of techniques,
from the early attempts using fault trees and digraphs to analytical approaches and,
in more recent studies, knowledge-based systems and neural networks. From a
modeling perspective, there are methods that require accurate process models, semi-
quantitative models, or qualitative models. On the other hand, there are methods that
do not assume any form of model information and rely only on process history.
These techniques can be classified as model-based methods and historical-data-based
methods (Detroja et al., 2007):
The automatic control system can distinguish a fault from various parameters
by using a supervisory function to take appropriate action to maintain the process
and avoid losses. The three main elements of automatic supervision can be classified
as (Isermann, 2005):
iii. Supervision with fault diagnosis: based on the measured variables, the
current features are calculated, symptoms are generated via change
detection, a fault diagnosis is performed, and a decision on counteraction is
made.
Figure 2.4 The general scheme of process model-based fault-detection and diagnosis
(Isermann, 1997).
These methods are based on dependencies between measured signals that can be
expressed by mathematical process models. Figure 2.4 shows the basic structure of
model-based fault detection.
In this research, three common fault detection methods are reviewed, based
on Hamid (2004): state estimation approaches, statistical process control approaches,
and knowledge-based approaches.
2.5 Neural Network

2.5.1 Background of Neural Network

Since the early days of artificial intelligence (AI), researchers have focused
on modeling the function of the human brain. In the mid-1940s, Warren McCulloch
and Walter Pitts proposed the first artificial neural network (ANN, or neural network
for short) model (McCulloch and Pitts, 1943). These neurons were presented as
models of biological neurons and as conceptual components for circuits that could
perform computational tasks (Abdi et al., 1999). Further exploration of neural
networks in the late 1980s gave significant results in solving vital AI problems. The
architecture of an ANN emulates the functionality of the human nervous system,
which consists of an extremely large number (over 10^11) of nerve cells, or neurons,
that operate together to process data.
Figure 2.8 Schematic model of a neural network: input signals x1, …, xn are
multiplied by weights w1, …, wn, summed, and passed through a transfer function to
give the output Y = F(Σ xi wi) (Seborg and Edgar, 2004).
Based on Figure 2.8, the neuron computes the weighted sum of the input
signals and compares the result with a threshold value. If the net input is less than
the threshold θ, the neuron output is -1; if the net input is greater than or equal to the
threshold, the neuron becomes activated and its output attains a value of +1
(Negnevitsky, 2001). In other words, the neuron uses the following transfer
(activation) function:
X = Σ(i=1 to n) xi wi (2.1)

Y = +1 if X ≥ θ
Y = -1 if X < θ (2.2)
where X in Equation 2.1 is the net weighted input to the neuron, xi is the value of
input i, wi is the weight of input i, n is the number of neuron inputs, and Y in
Equation 2.2 is the neuron output. Each neuron's inputs are collected from other
neurons, summed, and compared with a threshold level, and an appropriate output is
determined. The output signal is thus computed as the sum of the input signals
transformed by the transfer function. The learning process of a neural network is
achieved by adjusting the weights in accordance with a predefined learning
algorithm, usually of the form ΔWij = ασXj, where α is the learning rate and σ is the
momentum rate.
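Equations 2.1 and 2.2 describe a simple threshold neuron, which can be sketched as follows; the function name, weights, and threshold value are illustrative, and Python is used here in place of MATLAB.

```python
import numpy as np

def threshold_neuron(x, w, theta):
    """Sign-threshold neuron of Eqs. 2.1-2.2: weighted sum of inputs,
    then output +1 if X >= theta, else -1."""
    X = np.dot(x, w)                 # Eq. 2.1: X = sum(x_i * w_i)
    return 1 if X >= theta else -1   # Eq. 2.2

# Example with two inputs, illustrative weights, and threshold 0.5
print(threshold_neuron([1.0, 0.0], [0.3, 0.8], theta=0.5))  # X = 0.3 -> -1
print(threshold_neuron([1.0, 1.0], [0.3, 0.8], theta=0.5))  # X = 1.1 -> +1
```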
Neural networks can generally be separated into two groups, according to
Lee (2006):
ii. Unsupervised neural networks: networks that do not need supervised
learning and training strategies, including all kinds of self-organizing,
self-clustering, and learning networks such as SOM and ART (Adaptive
Resonance Theory).
ANN System Architecture
In a feed-forward network, data flows from the input to the output units. The
data processing can extend over multiple layers of units, but no feedback
connections (connections extending from outputs of units to inputs of units in the
same layer or previous layers) are present. Recurrent networks, in contrast, do
contain feedback connections, and unlike feed-forward networks, the dynamical
properties of the network are important. In some cases, the activation values of the
units undergo a relaxation process such that the network settles into a stable state in
which the activations no longer change. In other applications, the changes in the
activation values of the output neurons are themselves significant, such that the
dynamical behavior constitutes the output of the network (Hampshire and
Pearlmutter, 1990).
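The distinction between the two architectures can be sketched as follows. All array sizes, weight values, and the tanh activation are illustrative assumptions, and Python stands in for the MATLAB environment used in this work.

```python
import numpy as np

def feedforward(x, W1, W2):
    """Feed-forward network: data flows once from input to output;
    there are no feedback connections."""
    hidden = np.tanh(W1 @ x)
    return np.tanh(W2 @ hidden)

def recurrent_relax(x, W, steps=100):
    """Recurrent network: the state is fed back into itself repeatedly
    and, for weak enough feedback, relaxes to a stable fixed point."""
    s = np.zeros_like(x)
    for _ in range(steps):
        s = np.tanh(W @ s + x)   # feedback connection
    return s

x  = np.array([0.5, -0.2])
W1 = np.array([[0.1, 0.4], [0.3, -0.2], [0.0, 0.5]])  # 2 inputs -> 3 hidden
W2 = np.array([[0.2, -0.1, 0.3]])                     # 3 hidden -> 1 output
W  = 0.2 * np.eye(2)                                  # weak self-feedback
y  = feedforward(x, W1, W2)
s  = recurrent_relax(x, W)
```

After relaxation, `s` no longer changes between iterations, which is the stable state described above.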
One of the oldest and most widely known principles of biological learning
mechanisms was described by Hebb (1949) and is sometimes called "Hebbian
learning." Hebb's principle is as follows: "When an axon of cell A is near enough to
excite a cell B and repeatedly or persistently takes part in firing it, some growth
process or metabolic change takes place in one or both cells such that A's efficiency,
as one of the cells firing B, is increased."
Δwij = c xi xj (2.3)
where c is a small constant, wij denotes the strength of the connection from the jth
node to the ith node, and xi and xj are the activation levels of these nodes. Many
modifications of this rule have been developed and are widely used in artificial
neural network models.
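The Hebbian rule of Equation 2.3 can be sketched as a weight update; the learning rate, input pattern, and number of updates below are purely illustrative.

```python
import numpy as np

def hebbian_update(W, x, c=0.01):
    """Hebbian rule of Eq. 2.3: dW_ij = c * x_i * x_j, so the connection
    between two co-active nodes is strengthened."""
    x = np.asarray(x, dtype=float)
    return W + c * np.outer(x, x)

W = np.zeros((3, 3))
for _ in range(10):                      # repeated co-activation of nodes 0 and 1
    W = hebbian_update(W, [1.0, 1.0, 0.0], c=0.1)
print(W[0, 1])   # strengthened towards 1.0 after 10 updates
print(W[0, 2])   # node 2 was never active, so this weight stays at 0.0
```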
caused the adaptation). This leads to the development of neural networks in which
each node specializes to be the winner for a set of similar patterns.
Living things learn based on the feedback their actions produce from the
surroundings and the environment. Positive feedback reinforces the creature's
behavior in response to the presented input. In the context of a neural network, if
increasing a particular weight leads to diminished performance or a larger error, then
that weight is decreased as the network is trained to perform better.
In most networks, the amount of change made at every step is very small, to
ensure that the network does not stray too far from its partially evolved state and can
withstand some mistakes made by the teacher or performance-evaluation feedback
mechanism. If the incremental change is infinitesimal, however, the neural network
will require excessively long training times. Some training methods explicitly vary
the rate at which the network is modified.
Although an ANN may seem to be a perfect and flawless system, it also has
some limitations that can affect the results and the performance of any mechanism
that implements it. The advantage of neural networks lies in their ability to represent
both linear and non-linear relationships and to learn these relationships directly from
the data being modeled. Generally, an ANN has several advantages, as described by
Baughman and Liu (1995):
ii. Learning ability of ANN. An ANN is able to adjust its parameters in order
to adapt itself to changes in the surrounding systems by using an
error-correction training algorithm.
iii. Extensive knowledge indexing. ANN is also able to store a large amount
of information and access it easily when needed. Knowledge is kept in the
network through the connection between nodes and the weights of every
connection.
iv. Imitation of the human learning process. The network can be trained
iteratively, and by tuning the strengths of the parameters based on
observed results, it can develop its own knowledge base and determine
cause-and-effect relations after repeated training and adjustments.
v. Potential for on-line use. Once trained, an ANN can yield results from a
given input relatively quickly, which is a desired feature for on-line use.
In contrast, some of the limitations of ANN are also summarized by Baughman and
Liu (1995):
i. Long training time. Training an ANN can take a long time, especially for
large networks.
ii. Requires a large amount of data. An ANN needs a large amount of
input-output data for good generalization. Therefore, if only a small amount
of input-output data is available, an ANN may not be suitable for modeling
the system.
iii. No guarantee of optimal results or reliability. Although the network contains
parameters that can be tuned by the training algorithm, there is no guarantee
that the resulting model is perfect for the system. The tuned model may be
accurate in one region but inaccurate in another.
iv. Difficulty in selecting good sets of input variables. Selection of input
variables is difficult because too many or wrongly selected input variables
will cause overfitting and poor generalization, while too few or
inappropriate input variables will lead to poor mapping of the system.
2.6 Summary
This chapter introduced the concept of a fault and its principles. A fault
can be summarized as a deviation from the normal condition, and several
types of fault were discussed. The safety aspect was also covered, to justify
the importance of the fault detection system for the safety and reliability of
the process. Fault detection is essentially a fault monitoring process, and
faults can be classed according to their situation. Fault detection has been
widely studied over the past years with many different approaches. In this
research, fault detection is implemented using a neural network because of
some of its unique abilities; the neural network was therefore reviewed in
this chapter in terms of its architecture, training, and limitations.
CHAPTER 3
PLANT SIMULATION
3.1 Introduction
In recent years, studies on plant-wide design, control, and optimization have
been carried out on plant simulations to generate better control systems and
optimize the process. One of the most popular plant simulations is the Tennessee
Eastman challenge process, proposed by Downs and Vogel (1993). Later, an
additional model of a large and industrially relevant system, a vinyl acetate
monomer (VAC) manufacturing process, was published by Luyben and Tyreus
(1998). The VAC process contains several standard unit operations that are typical
of many chemical plants. Both gas and liquid recycle streams are present, as well as
process-to-process heat integration. Luyben and Tyreus presented a plant-wide
control test problem based on the VAC process, and this research focuses on the
VAC plant simulation as its case study.
In the VAC process, there are 10 basic unit operations: a vaporizer, a
catalytic plug flow reactor, a feed-effluent heat exchanger (FEHE), a separator, a gas
compressor, an absorber, a carbon dioxide (CO2) removal system, a gas removal
system, a tank for the liquid recycle stream, and an azeotropic distillation column
with a decanter. Figure 3.1 shows the process flowsheet with the locations of the
manipulated variables; the stream numbers are the same as those given by Luyben
and Tyreus. In total, the VAC MATLAB model includes 246 states, 26 manipulated
variables, and 43 measurements. There are seven chemical components in the VAC
process. Ethylene (C2H4), pure oxygen (O2), and acetic acid (HAc) are converted
into the vinyl acetate (VAc) product, with water (H2O) and carbon dioxide (CO2) as
by-products. An inert, ethane (C2H6), enters with the fresh C2H4 feed stream. The
following reactions take place:

C2H4 + HAc + 1/2 O2 → VAc + H2O
C2H4 + 3 O2 → 2 CO2 + 2 H2O
A discussion of the thermodynamics and physical property data is given in the
original publication. For each unit, the state and manipulated variables are identified
below:
b) The Vaporizer
The vaporizer is implemented as a well-mixed system with seven components. It
has a gas input stream (F1), which is a mixture of the C2H4 feed stream and the
absorber vapor effluent stream. It also has a liquid input stream (F2), which
comes from the HAc tank. There are 8 state variables in the vaporizer, including
the liquid level, the mole fractions of O2, CO2, C2H4, VAc, H2O, and HAc
components in the liquid, and the liquid temperature. The liquid level is defined
as the ratio of the liquid holdup volume to the total working volume. Since the
dynamics of the vapor phase are ignored, total mass, component, and energy
balances are used to calculate the liquid-phase dynamics:
M_L^VAP dx_L,i^VAP/dt = F_1^VAP (x_1,i^VAP − x_L,i^VAP) + F_2^VAP (x_2,i^VAP − x_L,i^VAP) − F_V^VAP (y_V,i^VAP − x_L,i^VAP) (3.4)

Cp_L^VAP M_L^VAP dT_L^VAP/dt = F_1^VAP (h_1^VAP − h_L^VAP) + F_2^VAP (h_2^VAP − h_L^VAP) − F_V^VAP (H_V^VAP − h_L^VAP) + Q^VAP (3.5)
Vapor-liquid equilibrium (VLE) is assumed in the vaporizer, so the vaporizer
pressure and the vapor compositions are determined by a bubble-point calculation.
Two manipulated variables (Q^VAP and F_V^VAP) are available in the vaporizer. In
the base operation, the liquid holdup, V_L^VAP, is 2.8 m3, which is 70% of the
working level volume. The vaporizer is followed by a heater, and the heater duty is a
manipulated variable. In the base operation, the heater exit temperature is specified
to be 150 °C.
ii. It is assumed that mass and heat transfer between the fluid and the catalyst
are very fast, so the concentrations and temperatures in the two phases are
always equal.
iii. The pressure drop is assumed to be linear along the length of a tube and
time-independent. Equation 3.6 is used to calculate the pressure drop in
each section; the overall pressure drop is taken directly from the TMODS
model instead of being calculated by Ergun's method.
iv. As stated earlier, the shell temperature is assumed uniform and is used as a
manipulated variable in the MATLAB model; thus, the steam drum
dynamics are not modeled. Material and energy balances on the reactor,
based on the tubular reactor dynamic model developed by Reyes and
Luyben, are given by Equations 3.7 and 3.8:
ε ∂C_i,j/∂t = −∂(C_i,j V_i)/∂z + φ_i ρ_b (θ_1,j r_1,i + θ_2,j r_2,i) (3.7)

(ε Σ_{k=1..7} C_i,k Cp_i,k + ρ_b Cp_b) ∂T_i/∂t = −∂((ε Σ_{k=1..7} C_i,k Cp_i,k) T_i)/∂z − φ_i ρ_b (r_1,i E_1 + r_2,i E_2) − Q_i^RCT (3.8)
where index i represents the section number and index j represents component j;
φ_i is the catalyst activity in section i, given by Equation 8 of the reference; θ_1,j
and θ_2,j are the stoichiometric coefficients for component j in the two reactions;
r_1,i and r_2,i are the reaction rates in section i, given by Equations 3.3 and 3.4 of
the reference; and E_1 and E_2 are the heats of reaction. Q_i^RCT is the external
heat flux per unit volume in section i, calculated as Q_i^RCT = UA (T_i − T_S),
where T_S is the shell temperature. In the MATLAB model, the molar
concentrations of components O2, CO2, C2H4, VAc, H2O, and HAc and the tube
temperature in each section of the reactor are state variables; therefore, a total of 70
state variables are present in the reactor. The molar concentration of component
C2H6 can be calculated from the ideal gas law. Only one manipulated variable, T_S,
is available in the reactor. In the base operation, the reactor exit temperature is
159.17 °C.
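The convection term of Equation 3.7 can be handled numerically by splitting the tube into sections and applying an upwind method-of-lines scheme. The sketch below omits the reaction and heat-flux terms, and the section count, velocity, and voidage values are illustrative only.

```python
import numpy as np

# Method-of-lines sketch for the convection term of Eq. 3.7:
# eps * dC/dt = -v * dC/dz, discretised with upwind differences.
n, dz, v, eps = 10, 0.1, 0.5, 0.8    # sections, section length, velocity, voidage
C = np.zeros(n)                      # molar concentration in each section
C_in = 1.0                           # inlet concentration
dt = 0.01                            # time step (stable: v*dt/(eps*dz) < 1)

for _ in range(2000):
    Cu = np.concatenate(([C_in], C[:-1]))    # upwind (inlet-side) values
    C += dt * (-(v * (C - Cu) / dz) / eps)
print(C.round(3))
```

After enough residence times, every section approaches the inlet concentration, which is the expected steady state when no reaction term is present.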
capture temperature dynamics. The inverse of the total thermal resistance, UA, is
calculated by Equation 3.9, which shows that the effective UA is a function of
the mass flow rates of the two streams:
where F_1^FEHE is the mass flowrate of the cold stream and F_2^FEHE is the mass
flowrate of the hot stream. There is one manipulated variable, the bypass ratio, and
no state variables in the FEHE. In the base operation, the FEHE hot effluent
temperature is 134 °C.
d) Separator
In the MATLAB model, the separator is modeled as a partial condenser. At each
point in time, a steady state equilibrium-flash calculation is carried out to obtain
the flow rates and properties of the vapor and liquid streams immediately after the
pressure letdown valve on the separator feed stream. The pressure letdown valve
is not shown in the process flowsheet in Figure 3.1. A standard algorithm is used
to solve the isothermal flash problem, assuming that the flash temperature and
pressure are known. In reality, the flash temperature cannot be easily obtained.
The amount of the stream that condenses is a function of the heat removed, but the
heat removed is a function of the flash temperature, which, in turn, is determined
by the amount of the stream that condenses.
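This circular dependency (condensed amount → heat removed → flash temperature → condensed amount) can be resolved by fixed-point iteration. The sketch below uses linear stand-in relations with made-up coefficients, not the real thermodynamics of the VAC model.

```python
# Fixed-point sketch of the circular flash calculation: the condensed
# fraction depends on the heat removed, which depends on the flash
# temperature, which depends on the condensed fraction. The linear
# relations below are illustrative stand-ins for real thermodynamics.
def condensed_fraction(T):           # more condenses at lower temperature
    return max(0.0, min(1.0, (60.0 - T) / 40.0))

def flash_temperature(Q):            # more heat removed -> colder flash
    return 55.0 - 0.5 * Q

def heat_removed(frac):              # latent heat of the condensed stream
    return 30.0 * frac

T = 50.0                             # initial guess (deg C)
for _ in range(100):
    T = flash_temperature(heat_removed(condensed_fraction(T)))
print(round(T, 2))   # converges to the self-consistent flash temperature
```

Because the composed map is a contraction here, the iteration converges; the MATLAB model sidesteps this loop entirely with the approximation described next.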
In the MATLAB model, the flash temperature is approximated by adding 5 °C to
the cooling jacket temperature, and the jacket is assumed well mixed so that the
the cooling jacket temperature, and the jacket is assumed well mixed so that the
jacket temperature is uniform. Then the vapor and liquid streams are split into the
vapor and liquid phases respectively. It is assumed that there is no driving force
for material and heat transfer between the two phases. In the vapor phase, it is
assumed that the vapor volume, which represents the total gas loop volume, is a
constant. A mass balance is used to model the vapor pressure dynamics. In the
MATLAB model, the separator vapor exit stream flowrate is fixed. In the liquid
phase, a total energy balance determines the temperature dynamics. There are 16
state variables in the separator, including the liquid level, vapor phase pressure,
33
mole fractions of components O2, CO2, C2H4, VAc, H2O, and HAc, and
temperatures in both phases. The ideal gas law is applied to the vapor phase. In
the separator, three manipulated variables are available, the liquid exit stream
flowrate, the vapor exit stream flowrate, and the cooling jacket temperature. In
the base operation, the liquid holdup is 4 m3, which is 50% of the working level
volume. The separator pressure is 84.25 psia, and the separator liquid-phase
temperature is 40 °C.
f) Compressor
In the MATLAB model, the pressure increase across the compressor is calculated
by Equations 3.10 and 3.11:

P_OUT^COM = P_IN^COM + ΔP (3.10)
ΔP = γ ρ^COM (3.11)

where γ is the compressor coefficient and ρ^COM is the compressor inlet stream
density.
The exit temperature is calculated by assuming an isentropic compression. The
compressor is followed by a cooler, and the cooler duty is a manipulated variable. In
the base operation, the cooler exit temperature is 80 °C.
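Equations 3.10 and 3.11 amount to a one-line calculation; the numbers below (coefficient and inlet density) are illustrative, not values from the VAC model.

```python
def compressor_outlet_pressure(P_in, gamma, rho):
    """Eqs. 3.10-3.11: P_out = P_in + dP, with dP = gamma * rho."""
    return P_in + gamma * rho

# Illustrative numbers only: dP = 0.5 * 88.0 = 44.0
print(compressor_outlet_pressure(84.25, 0.5, 88.0))  # -> 128.25
```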
g) Absorber
In the MATLAB model, the gas absorber is divided into two parts. The top part
contains six theoretical stages. Its inlet liquid stream is from the HAc tank, and its
inlet vapor stream is from the top of the bottom part of the absorber. The bottom
part contains two theoretical stages. Its inlet liquid stream is a combination of the
liquid stream from the top part and a circulation stream. Its inlet vapor stream is
from the compressor. It is assumed that the absorber pressure, which is specified
at 128 psia in the base operation, is uniform in the two parts of the absorber. On
each stage, the liquid and vapor phases are not in equilibrium, and a rate-based
model is implemented to capture the liquid phase dynamics. The vapor phase
dynamics are ignored. On each stage, the mass transferred from the vapor phase to
the liquid phase is given by Equation 3.12:
where Q_j is the heat transferred between the two phases on stage j (kcal/min),
Q_MT,j is a constant heat-transfer coefficient, T_V,j is the temperature of the vapor
inlet stream, and T_L,j is the temperature of the liquid phase. During stage-to-stage
calculations, total mass, component, and energy balances around the vapor phase are
used to calculate the vapor exit stream flowrate, composition, and temperature. Total
mass, component, and energy balances around the liquid phase, similar to Equations
3.3 to 3.5, are used to model the absorber dynamics. In the energy balance, the
enthalpy of the material transferred between
the decanter are always the same. There are a total of 69 state variables in the
distillation column and six manipulated variables: reflux flowrate, reboiler duty,
condenser duty, organic product flowrate, aqueous product flowrate, and bottoms
flowrate. In the base operation, the bottoms liquid holdup is 2.33 m3, which is 50%
of the working level volume. The organic and aqueous liquid holdups are 0.85 m3
each, which is 50% of their working level volumes. In the base operation, the
decanter temperature is 45.85 °C.
k) HAc Tank
The HAc tank is only used to mix the liquid recycle stream and the fresh HAc
feed stream. There are a total of 4 state variables in the tank: the liquid holdup, the
mole fractions of VAc and HAc in the liquid, and the liquid temperature. The
flowrates of all the streams connected to the tank are manipulated variables. In the
earlier publications on the VAC process an HAc tank was also used, but it was not
shown in the process flowsheets given in those publications.
The steady-state data for the VAC process are obtained after a control
structure similar to that developed by Luyben et al. in 1999 is implemented. The
control system used is shown in Appendix A4, and its major loops are the same as
those used by Luyben et al. However, there are some small differences due to
differences in the control structure and how the loops were tuned (see the discussion
below), and to the simplifications used in the dynamic model presented here and
discussed above. The initial values of all the state variables and manipulated
variables come from the TMODS results, and the MATLAB model converges to a
steady state (the base operation) that is very close to the TMODS results. The
steady-state values of the manipulated variables are given in Appendix A1, the
control structure and controller parameters in Appendix A2, and the steady-state
values of the measurements in Appendix A3. Four set-point disturbances are used to
illustrate the dynamic behavior of the MATLAB model with the control structure
implemented.
The user needs to include the time lags and time delays in the code that is
used to control the process. The m-file test_VAcPlant(t, ID) gives details on how to
control the VAC process with a multiloop SISO architecture. In this routine the
transmitter lags are assumed to be 3 seconds, and the two analyzers on the gas
recycle and column bottoms also have a 10 minute time delay. An Euler integration
approach with a 1/3 s time step is used to calculate the dynamic responses. A 1 s
sampling time is used for the controllers and transmitters, except for the controllers
involving the analyzers with the 10 minute time delay; these controllers have a
10 minute sampling time. The model also incorporates eight different disturbances
so that their effects on the process can be studied. The disturbance criteria are listed
in Table 3.1. Figures 3.2 to 3.6 show the simulation run in MATLAB with zero
disturbance (normal conditions) over 100 minutes. At that time, some variables have
not yet reached their set points, but all variables eventually reach steady state,
depending on the controller action and the behavior of each parameter. The full lists
of controlled variables and manipulated variables in the VAC plant are given in the
Appendix.
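The integration and sampling scheme described above can be sketched in outline as follows. This is a minimal Python illustration, not the actual VAC model: `process_deriv`, the gain, and the tuning values are hypothetical stand-ins, and the delay is shortened so the sketch runs quickly.

```python
# Sketch: explicit Euler integration at a fixed step, with a controller
# sampled once per second acting on a delayed measurement, mimicking the
# 1/3 s integration step / 1 s sampling / analyzer-delay arrangement.
from collections import deque

DT = 1.0 / 3.0          # Euler integration step, seconds

def process_deriv(x, u):
    # Hypothetical first-order stand-in for the real VAC state equations
    return -0.1 * x + u

def simulate(t_end=200.0, setpoint=1.0, kc=0.05, delay_s=6.0):
    """Euler-integrate the process while sampling a P controller once per
    second on a measurement that arrives roughly delay_s seconds late."""
    x, u = 0.0, 0.0
    buf = deque([0.0] * int(delay_s), maxlen=int(delay_s))  # delay line
    t, next_sample = 0.0, 0.0
    while t < t_end:
        if t >= next_sample:              # 1 s controller/transmitter sampling
            buf.append(x)                 # newest transmitter reading
            delayed_meas = buf[0]         # reading from ~delay_s ago
            u = kc * (setpoint - delayed_meas)
            next_sample += 1.0
        x += DT * process_deriv(x, u)     # explicit Euler step
        t += DT
    return x
```

With these stand-in numbers the loop settles near the proportional-only steady state x = kc/(0.1 + kc), illustrating how the delayed sampled measurement still yields a stable response when the gain is modest.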
Table 3.1 Disturbance criteria

ID  Disturbance Criteria
0   no disturbance
1   setpoint of the reactor outlet temperature decreases 8 degC (from 159 to 151)
2   setpoint of the reactor outlet temperature increases 6 degC (from 159 to 165)
3   setpoint of the H2O composition in the column bottom increases 9% (from 9% to 18%)
4   the vaporizer liquid inlet flowrate increases 0.44 kmol/min (from 2.2 to 2.64)
5   HAc fresh feed stream lost for 5 minutes
6   O2 fresh feed stream lost for 5 minutes
7   C2H6 composition changes from 0.001 to 0.003 in the C2H4 fresh feed stream
8   column feed lost for 5 minutes
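Selecting a disturbance scenario by ID amounts to a simple dispatch on the scenario number. A hypothetical Python sketch mirroring Table 3.1 (the key names are invented for illustration; only the magnitudes come from the table):

```python
# Hypothetical sketch of a disturbance dispatch table for the scenarios
# of Table 3.1; magnitudes match the table, key names are illustrative.
DISTURBANCES = {
    0: ("none", None),
    1: ("reactor_outlet_T_sp", -8.0),       # 159 -> 151 degC
    2: ("reactor_outlet_T_sp", +6.0),       # 159 -> 165 degC
    3: ("bottom_H2O_sp", +0.09),            # 9% -> 18%
    4: ("vaporizer_liquid_inlet_F", +0.44), # 2.2 -> 2.64 kmol/min
    5: ("HAc_fresh_feed_lost_min", 5),
    6: ("O2_fresh_feed_lost_min", 5),
    7: ("C2H6_feed_mole_frac", 0.003),      # from 0.001
    8: ("column_feed_lost_min", 5),
}

def apply_disturbance(setpoints, dist_id):
    """Return a copy of the setpoint dict with the chosen disturbance applied."""
    name, value = DISTURBANCES[dist_id]
    sp = dict(setpoints)
    if name == "none":
        return sp
    if name.endswith("_sp") or name.endswith("_F"):
        sp[name] = sp.get(name, 0.0) + value  # step change on a setpoint/flow
    else:
        sp[name] = value                      # feed-loss / composition scenarios
    return sp
```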
[Figures 3.2 to 3.6: MATLAB simulation results at normal conditions (zero disturbance) over 100 minutes. The panels plot the controlled variables (%O2, Pres, HAc-L, Vap-L, Vap-P, Pre-T, RCT-T, Sep-L, Sep-T, Sep-V, Com-T, Abs-L, Cir-F, Cir-T, Scr-F, Scr-T, %C2H6, %CO2, FEHE-T, %H2O, Col-T, Org-L, Col-L, Aqu-L) and the manipulated variables (F-C2H4, F-O2, Q-Vap, F-HAc, Q-Heat, F-Vap, ShellT, F-SepL, T-Sep, F-SepV, Q-Comp, F-AbsL, F-Circ, Q-Circ, Q-Scru, F-Scru, F-CO2, Purge, bypass, Reflux, Q-Rebo, F-Orga, F-Aque, F-Bot) against time in minutes.]
The VAC plant simulation data had to be validated against the actual process to
ensure their reliability, accuracy, and relevance. The simulation is compared with
the actual plant on several variables, as shown in Table 3.2. The results agree
closely, showing the VAC plant simulation to be a reliable and relevant
representation of the process.
3.6 Summary
The VAC plant simulation is a well established plant simulation, similar to the
Tennessee Eastman plant simulation, and it provides robust and reliable simulation
results. The VAC process comprises 10 basic unit operations, and the VAC
MATLAB model includes 246 states, 26 manipulated variables, and 43
measurements. There are seven chemical components in the VAC process. With the
current results, the plant is a strong candidate for the implementation of a fault
detection system since, based on the Science Direct database, no other researchers
have developed a fault detection system for the VAC plant so far.
Table 3.2 The comparison of the VAC plant simulation with actual plant process on selected stream.
CHAPTER 4

METHODOLOGY
The objective of this research is to develop a fault detection system based on neural
networks for the Vinyl Acetate process plant. To achieve this objective, this
chapter elaborates the methodology of the research, which has been divided into
seven phases:
i. Plant simulation
a. Plant simulation using MATLAB software running on the
computer.
b. Comparison of simulation data with the actual plant data, to
validate the model of the plant developed within the MATLAB
environment.
c. Data analysis on key variables obtained via simulation of the
plant model.
For the fault detection scheme, two types of networks are needed: first, a
predictor, to predict the behavior of the reactor temperature; second, a classifier, to
classify the type of fault. In this study, the focus is on manipulated variables (MV)
of the column. Three MVs have been chosen for study: the Column Reflux
Flowrate set point, the Column Condenser Duty, and the Organic Product Flowrate.
An Elman network is used for the predictor, while a multilayer feedforward neural
network is used for the classifier. These two networks are trained using the
Levenberg-Marquardt learning algorithm. The input to the classifier is the residual
signal from the predictor. The outputs of the classifier are set between the values of
0 and 1. In this study, the classifier is designed in such a way that the faults are
monitored and an alarm signal is generated when the classifier's output reaches the
output index threshold value. The threshold value is an assigned value for the
residual reactor pressure; when the residual reactor pressure goes beyond the
assigned value, it indicates that the actual reactor pressure has deviated from its
normal operating condition.
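The residual-thresholding logic described above can be sketched as follows. This is a minimal Python illustration; the threshold value and the signal arrays are hypothetical inputs, not values from the thesis.

```python
# Minimal sketch of residual-based fault flagging: the residual is the
# actual measurement minus the predictor's estimate, and an alarm is
# raised whenever its magnitude exceeds an assigned threshold.
def fault_alarm(actual, predicted, threshold):
    """Return a list of (time_index, residual) samples that breach the threshold."""
    alarms = []
    for k, (y, y_hat) in enumerate(zip(actual, predicted)):
        r = y - y_hat                  # residual from the predictor network
        if abs(r) > threshold:         # output index threshold check
            alarms.append((k, r))
    return alarms
```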
4.3 Summary
There are six phases to completing this research. The primary aspect
is the project conception and literature review. From there, the information
is evaluated and used to develop a fault detection system using the most
suitable approach. In this case, a neural network is chosen as the best
method and the Vinyl Acetate plant is selected as the case study. The next
stage is to develop the neural networks as predictor and classifier. Both
undergo training and validation. After they have shown good results, the
system is implemented on the VAC plant. All the results and findings are
documented and discussed in this thesis.
CHAPTER 5
5.1 Introduction
recurrent connection. The delay in this connection stores values from the previous
time step, to be applied in the current time step. Because the network can store
information for future reference, it is able to learn temporal patterns as well as
spatial patterns. The Elman network can be trained to respond to, and to generate,
both kinds of patterns. This ability is very important for simulating a process that
behaves like the actual process.
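A minimal sketch of one Elman time step, assuming a tanh (tansig) hidden layer whose previous activations are fed back as context; the weight matrices here are illustrative placeholders, not the trained VAC networks.

```python
# One Elman time step: the hidden state is computed from the current
# input and from the context units, i.e. the hidden activations stored
# by the delayed recurrent connection at the previous time step.
import math

def elman_step(x, h_prev, W_in, W_ctx, b):
    """Return the new hidden state h(t) from input x(t) and context h(t-1)."""
    h = []
    for i in range(len(b)):
        s = b[i]
        s += sum(W_in[i][j] * x[j] for j in range(len(x)))        # input drive
        s += sum(W_ctx[i][j] * h_prev[j] for j in range(len(h_prev)))  # context
        h.append(math.tanh(s))        # tansig activation, as in MATLAB's newelm
    return h
```

Calling the step repeatedly on a sequence shows the temporal memory: the same input produces a different hidden state once the context is non-zero.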
Figure 5.1 The neural network based fault detection scheme (Hamid, 2004)
The development of the neural network model follows the standard procedure of
system identification. Generally, this standard procedure involves several steps to
make sure that the model is properly developed:
All of these variables are located at the column. Selection of the inputs is made
based on the effect of the input variables on the selected outputs. To simplify the
model structure of the neural network, only input variables that have a significant
impact on the process outputs are selected; the others are neglected.
All the variables in the VAC plant can be obtained from the references. The
graphs of all the variables over 100 minutes of VAC plant simulation are shown
here for the column-feed-lost disturbance (ID 8, lost for 5 minutes).
[Figure panels: the controlled and manipulated variables of the VAC plant plotted against time in minutes over 100 minutes of simulation with the column feed lost for 5 minutes (disturbance ID 8).]
Figures 5.1 to 5.3 show the simulation results for the controlled variables (CV).
The list of all CVs for the VAC process is attached in Appendix A.2. A CV is a
process variable that is controlled (Seborg et al., 2004). The desired value of a CV
is referred to as its set point (SP). Therefore, a parameter designated as a CV has a
certain value, or set point, to be maintained; failure to maintain a CV can cause
many problems in terms of safety and cost. Meanwhile, Figures 5.4 to 5.6 concern
the manipulated variables (MV). The list of all MVs for the VAC process is also
attached in Appendix A.2. An MV is a variable that can be adjusted in order to keep
the set point at its normal value.
The artificial neural network structure used in this study is the Multi-Input
Single-Output (MISO) network. The process estimator developed in this research is
an Elman network, constructed to estimate the normal, fault-free process condition.
Thus, actual process outputs cannot be used as inputs because they are affected
directly by process faults. The network should be independent of the actual process
outputs to enable the generation of residuals as a measure of the actual process's
departure from normal operating conditions.
Three MISO networks are required, one to model each selected process
output. The schematic diagram of the MISO network is shown in Figure 5.10. Here,
y1(t) is the network output, which is either the Column Reflux, the Column
Condenser Duty, or the Column Organic Exit, while un(t) and un(t-1) are the
process inputs. In this study, the Elman neural network is used, with the Levenberg-
Marquardt algorithm for training and validation. The Levenberg-Marquardt (LM)
method, a hybrid of the Gauss-Newton nonlinear regression method and the
gradient steepest descent method, is recommended in most optimization packages
such as MATLAB (Chen, 2004). Before using any neural network, the simulation
data have to be scaled and prepared according to the neural architecture. Scaling of
the training data is needed to prevent data of larger magnitude from overriding the
smaller and impeding the learning process. Data quality and preparation affect the
results of the neural network. To find the best number of hidden nodes, the
estimator (predictor) is simulated using 4 to 20 hidden nodes, and the number
giving the lowest training Mean Square Error (MSE) is chosen. The predictor is
then run with the selected parameters to obtain the estimated process behavior of
the VAC plant.
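The 0–1 scaling and the hidden-node search can be sketched as follows (Python for illustration; `train_mse` stands in for a full Elman training run, which the thesis performs in MATLAB):

```python
def minmax_scale(column):
    """Scale a data series to [0, 1], mirroring the dscale routine:
    (x - min) / (max - min). Returns the scaled series plus the extremes
    needed to undo the scaling later."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column], lo, hi

def best_hidden_nodes(train_mse, candidates=range(4, 21)):
    """Pick the hidden-node count (4 to 20) with the lowest training MSE.
    train_mse is a stand-in callable: node count -> training MSE."""
    return min(candidates, key=train_mse)
```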
After the training and validation process, the best hidden neuron counts are 5 for
Predictor 1, 12 for Predictor 2, and 6 for Predictor 3. These optimum hidden neuron
counts are used in the development of the classifier.
After the output variables have been selected for testing, the neural networks
have to be trained and validated before being implemented on the VAC simulation
data. A neural network can be trained in two different styles. In incremental
training, the weights and biases of the network are updated each time an input is
presented to the network. In batch training, the weights and biases are only updated
after all the inputs are presented. Training is important for finding the weights and
biases that produce estimates similar to the actual plant. Meanwhile, validation is
the process of verifying the neural network on unseen data, to test the reliability and
robustness of the created network.
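The difference between the two training styles is only in when the weight update is applied; a sketch for a single linear weight (the learning rate and data are illustrative, not the thesis's networks):

```python
def incremental_train(w, samples, lr=0.1):
    """Incremental style: update the weight after every (input, target) pair."""
    for x, t in samples:
        w += lr * (t - w * x) * x       # per-sample gradient step
    return w

def batch_train(w, samples, lr=0.1):
    """Batch style: accumulate the gradient over all pairs, then update once."""
    grad = sum((t - w * x) * x for x, t in samples)
    w += lr * grad                      # single update after the full pass
    return w
```

The two styles visit the same data but generally land on different weights after one pass, since incremental training lets earlier updates influence later gradients.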
From the training and validation graphs above, the estimated process for the
Column Reflux is not very accurate, but still reasonable. As for the Column
Condenser Duty and Column Organic Exit, both variables achieve 100% similarity
with the actual process. This is a clear indication that the neural network is working
as expected.
5.6 Summary
CHAPTER 6

6.1 Introduction
i. Sensitivity Analysis
A sensitivity analysis is conducted to determine the degree of sensor bias
that will cause violation of the operation limits. The analysis is done on a
trial and error basis, with various sensor biases simulated at the steady state
base case condition. A relatively small sensor bias will cause the process to
shift to another steady state condition; but if the bias is relatively large, the
process will not be able to absorb this disturbance and will eventually go
out of control and violate the operation limits. In this research, however,
the sensitivity analysis is not implemented, as it falls outside the scope of
the work.
Based on a general overview of the fault classifier results, the neural network has
managed to achieve the target. Each graph shows the residual versus time for one
parameter; that is, the neural network has calculated the error, or residual, between
the actual process and the estimate and plotted it against time for better
visualization. The threshold limit value is the boundary beyond which the process
can be classified as in danger; it can be set based on the average safe margin of the
process limits. When the process value exceeds the threshold limit, the neural
network sends a signal to the controller, or raises a notification, so that the residual
can be brought back to the safe condition. The classifier thus acts as a monitoring
system alongside the controller, observing any abnormality or fault that might
happen at any time.
Based on Figure 6.1, the Column Reflux graph is stable and does not rise above
the threshold limit value; therefore, no fault is flagged. Meanwhile, for the Column
Condenser Duty in Figure 6.2, there are certain periods when the value passes the
threshold limit, from the 30th to the 50th minute and from the 85th to the 90th
minute. In this circumstance, the neural network detected the first deviation at the
30th minute and reacted to reduce the error; after 10 minutes, the process returned
to normal. The same thing happened at the 85th minute, and the network responded
by generating a signal to the involved controllers or equipment to restore the
normal condition. For the last parameter, the graph of the Column Organic Product
shows many small deviations. This is because the controller for this parameter is a
Proportional (P) controller instead of a Proportional-Integral (PI) controller; a PI
controller provides better control and holds variables at their set points better than a
P controller (Seborg et al., 2004). Although there is a very fast and large deviation
at the 17th minute, the process returns to the safe condition in a short time as well.
The high amplitude of the deviation makes the signal very strong, which gives a
fast counteraction.
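The offset behaviour that separates a P loop from a PI loop can be reproduced on a simple first-order process; the gains and process model below are illustrative, not the VAC column.

```python
def closed_loop_final(ki, kc=1.0, sp=1.0, dt=0.01, t_end=200.0):
    """Simulate a first-order process dy/dt = -y + u under P control
    (ki = 0) or PI control (ki > 0) and return the final value."""
    y, integ, t = 0.0, 0.0, 0.0
    while t < t_end:
        e = sp - y
        integ += ki * e * dt            # integral action (zero when ki = 0)
        u = kc * e + integ
        y += dt * (-y + u)              # explicit Euler on the process
        t += dt
    return y

# P-only control settles with a steady-state offset, y = kc/(1 + kc) * sp;
# adding integral action drives the error all the way to zero.
```

This mirrors the observation above: the P-controlled Organic Product loop tolerates a persistent residual, while the PI loops return the variable to its set point.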
6.4 Summary
The results for the fault classifier are very promising. The developed neural
network can detect faults as expected. According to the parameter results, the
Column Reflux parameter shows the most optimistic performance. Since the VAC
plant mainly uses PI and P controller systems, the implementation of the fault
detection system would provide large benefits.
CHAPTER 7
7.1 Overview
Over the years, safety and cost have been the main reasons for the continuing
study of better control and system management in the process industries. In
the past, the control community showed how regulatory control could be automated
using computers, thereby removing it from the hands of human operators. This
has led to great progress in product quality and consistency, process safety and
process efficiency. The current challenge is the automation of AEM using intelligent
control systems, thereby providing human operators with assistance in this most
pressing area of need. People in the process industries view this as the next major
milestone in control systems research and application.
There are many methods and approaches to handling AEM with intelligent
control systems, and one of them is implementing a neural network as a fault
detection system. A fault detection system is one of the main elements of safety
measurement in a chemical plant; it is striking that such a small system can make
such a big difference to the safety, reliability, and cost effectiveness of the process.
Neural networks have the ability to process information with characteristics such as
nonlinearity, high parallelism, and fault tolerance, as well as the capability to
generalize and handle imprecise information (Basheer and Hajmeer, 2000). Yet
there are requirements for a neural network to give its best performance, such as the
need for a large supply of good data, a well-designed neural architecture, and a
suitable training algorithm.
7.2 Conclusion
a. The data was successfully generated from the Vinyl Acetate Plant
process simulation
a. The data quality and quantity can be improved to provide better results
when using the neural network
b. Provide the neural network with out-of-control or unseen data during
training and validation, for a more reliable and sustainable neural
network
c. Implement fault detection on all the VAC plant equipment for the
plant-wide control system
d. Develop fault detection as well as diagnosis using neural networks to
improve the fault detection system's ability.
REFERENCE
Abdi, H., Valentin, D., & Edelman, B., Neural Networks, Thousand Oaks (CA):
Sage. (1999).
Ahmad, A., & Hamid, M. K. A. (2001), Neural Networks for Process Monitoring,
Control and Fault Detection: Application to Tennessee Eastman Plant,
Malaysian Science and Technology Congress, Melaka.
Baughman, D., and Liu, Y. (1995). Neural Networks in Bioprocessing and Chemical
Engineering, Academic Press, San Diego, CA.
Choudhury, M. A. A. S., Shah, S. L., Thornhill, N. F., & Shook, D. S., (2006),
Automatic detection and quantification of stiction in control valves, Control
Engineering Practice, 14(12), 1395–1412.
Chen, J., & Patton, R. J., (1999), Robust model-based fault diagnosis for dynamic
systems, Boston: Kluwer.
Domínguez, E., & Muñoz, J., A neural model for the p-median problem, Computers
& Operations Research, 35 (2008) 404–416.
Downs, J. J., & Vogel, E. F. (1993). A plant-wide industrial process control problem.
Computers and Chemical Engineering, 17(3), 245–255
Frank, P. M., (1990), Fault diagnosis in dynamic systems using analytical and
knowledge-based redundancy. Automatica, 26, 459–474.
Gertler, J. J., (1998), Fault detection and diagnosis in engineering systems, New York:
Marcel Dekker.
Himmelblau, D.M. (1978). Fault detection and diagnosis in chemical and petrochemical
processes. Amsterdam: Elsevier Press
Detroja, K. P., Gudi, R. D., & Patwardhan, S. C., (2007), Plant-wide detection and
diagnosis using correspondence analysis, Control Engineering Practice,
doi:10.1016/j.conengprac.2007.02.007.
Lennox, B., Montague, G. A., Frith, A. M., Gent, C., & Bevan, V., Industrial
Application of Neural Networks-an investigation, Journal of Process Control
11 (2001) 497-507.
Luyben, M. & Tyreus, B. An Industrial Design/Control Study for the Vinyl Acetate
Monomer Process, Computers Chem. Engng, 1998, 22, 867.
Luyben, W., Tyreus, B., and Luyben, M., Plantwide Process Control, McGraw Hill,
New York, Chapter 11, 1999.
Mohd. Kamaruddin Bin Abd. Hamid, Multiple faults detection using artificial neural
network, Master. Thesis. Universiti Teknologi Malaysia; 2004.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in
nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133.
Patton, R. J., Frank, P. M., & Clark, P. N., (2000), Issues of fault diagnosis for
dynamic systems. Berlin: Springer.
Ruiz, D., Nougue´s, J. M., Puigjaner, L. (2001), Fault diagnosis support system for
complex chemical plants, Computers and Chemical Engineering, 25 151–160
Seborg, D. E., Edgar, T. F., & Mellichamp, D. A., Process Dynamics and Control,
Second edition. Hoboken, NJ: John Wiley & Sons Inc. 2004.
Thornhill, N. F., & Horch, A., (2006), Advances and new directions in plant-wide
disturbance detection and diagnosis, Control engineering practice,
doi:10.1016/j.conengprac.2006.10.011
Xia, C., & Howell, J., (2005), Isolating multiple sources of plant-wide oscillations
via independent component analysis, Control Engineering Practice, 13(8),
1027–1035.
Yu, D.L., Shields, D.N., and Daley, S. (1996). A Hybrid Fault Diagnosis Approach
using Neural Networks. Neural Computing and Application, 3(4): 21-26.
Zhou, Y., Hahn, J, and Mannan, S. M., Fault detection and classification in chemical
processes based on neural networks with feature extraction, ISA Transactions
42(2003), 651–664
Appendix A1
Appendix A2
LOOP | Controlled Variable | Manipulated Variable | C.V. Value | Type | KC | TR (min)
1 | %O2 in the Reactor Inlet | O2 fresh feed sp | 7.5% (0 – 20) | PI | 10 | 10
2 | Gas Recycle Stream Pressure | C2H4 fresh feed valve | 128 psia (0 – 200) | PI | 0.3 | 20
3 | HAc Tank Level | HAc fresh feed valve | 50% (0 – 100) | P | 2 | –
4 | Vaporizer Level | Vaporizer Heater Valve | 70% (0 – 100) | PI | 0.1 | 30
5 | Vaporizer Pressure | Vaporizer Vapor Exit Valve | 128 psia (0 – 200) | PI | 5 | 10
6 | Heater Exit Temp. | Reactor Preheater Valve | 150 °C (120 – 170) | PI | 1 | 5
7 | Reactor Exit Temp. | Steam Drum Pressure sp | 159.17 °C (0 – 200) | PI | 3 | 10
8 | Separator Level | Separator Liquid Exit Valve | 50% (0 – 100) | P | 5 | –
9 | Separator Temp. | Separator Coolant Valve | 40 °C (0 – 80) | PI | 5 | 20
10 | Separator Vapor Flowrate | Separator Vapor Exit Valve | Fixed | – | – | –
11 | Compressor Exit Temp. | Compressor Heater Valve | 80 °C (70 – 90) | PI | 1 | 5
12 | Absorber Level | Absorber Liquid Exit Valve | 50% (0 – 100) | P | 5 | –
13 | Absorber Scrub Flowrate | HAc Tank Exit Valve 2 | Fixed | – | – | –
14 | Circulation Stream Temp. | Absorber Scrub Heater Valve | 25 °C (10 – 40) | PI | 1 | 5
15 | Absorber Circulation Flowrate | Absorber Circulation Valve | Fixed | – | – | –
16 | Scrub Stream Temp. | Circulation Cooler Valve | 25 °C (10 – 40) | PI | 1 | 5
17 | %CO2 in the Gas Recycle | CO2 Purge Flowrate sp | 0.764% (0 – 50%) | P | 1 | –
18 | %C2H6 in the Gas Recycle | Purge Flowrate sp | 25% (0 – 100%) | P | 1 | –
19 | FEHE Hot Exit Temp. | Bypass Valve | 134 °C (0 – 200) | PI | 5 | 10
20 | %H2O in the Column Bottom | Column Reflux Flowrate sp | 9.344% (0 – 20) | PI | 0.5 | 60
21 | 5th Tray Temperature | Reboiler Steam Valve | 110 °C (0 – 120) | PI | 20 | 30
22 | Decanter Temperature | Column Condenser Duty | 45.845 °C (40 – 50) | PI | 1 | 5
23 | Decanter Organic Level | Organic Product Flowrate | 50% (0 – 100) | P | 1 | –
24 | Decanter Aqueous Level | Aqueous Product Flowrate | 50% (0 – 100) | P | 1 | –
25 | Column Bottom Level | Column Bottom Flowrate | 50% (0 – 100) | P | 1 | –
26 | Liquid Recycle Flow | HAc Tank Exit Valve 1 | Fixed | – | – | –
Appendix A3
Appendix A4
Appendix A5
Data
a(i,j) O2 CO2 C2H4 C2H6 VAc H2O HAc
O2 0 0 0 0 0 0 0
CO2 0 0 0 0 0 0 0
C2H4 0 0 0 0 0 0 0
C2H6 0 0 0 0 0 0 0
VAc 0 0 0 0 0 1384.6 -136.1
H2O 0 0 0 0 2266.4 0 670.7
HAc 0 0 0 0 726.7 230.6 0
Vi O2 64.178
CO2 37.400
C2H4 49.347
C2H6 52.866
VAc 101.564
H2O 18.01
HAc 61.445
Appendix A6
Equipment Data
Appendix B1
function [datas,p,min,max]=dscale
%DSCALE
%--------------------------------------------------------------------
% This subfunction scales data to values between 0 and 1
%
% datas = scaled data
% p     = number of data points
% min   = actual data at their minimum
% max   = actual data at their maximum
load data_vac8.mat;
input=u_history;
[r,m]=size(input);
refl=input(:,1);     % Reflux flowrate
cond=input(:,2);     % Condenser flowrate
pump=input(:,3);     % Pumparound return flowrate
toptemp=input(:,4);  % Top stage temperature
dist=input(:,5);     % Distillate flowrate
bott=input(:,6);     % Bottom flowrate
feed=input(:,7);     % Feed flowrate
toppres=input(:,20); % Top stage pressure
bottemp=input(:,22); % Bottom stage temperature
C8=input(:,23);      % C8 flowrate
% Copy the selected series into rows of dataq, reversing the time order
j=r;
for i=1:r
    dataq(1,i)=refl(j);
    dataq(2,i)=cond(j);
    dataq(3,i)=pump(j);
    dataq(4,i)=toptemp(j);
    dataq(5,i)=dist(j);
    dataq(6,i)=bott(j);
    dataq(7,i)=feed(j);
    dataq(8,i)=toppres(j);
    dataq(9,i)=bottemp(j);
    dataq(10,i)=C8(j);
    j=j-1;
end
[n,p]=size(dataq);
for i=1:n
max(i)=dataq(i,1);
min(i)=dataq(i,1);
for j=1:p
if dataq(i,j)>max(i)
max(i)=dataq(i,j);
end
if dataq(i,j)<min(i)
min(i)=dataq(i,j);
end
end
datas(i,:)=(dataq(i,:)-min(i))/(max(i)-min(i));
% datad(i,1:p-1)=datas(i,2:p); % 1 delayed term
% datad1(i,1:p-2)=datas(i,3:p); % 2 delayed term
end
Appendix B2
function [input,output,X,min,max]=dprep
%DPREP
%--------------------------------------------------------------------
% This subfunction creates the training data set
%
% input,output = training data
[datas,p,min,max]=dscale;
X=p;
% Training data
input(1,1:X)=datas(1,1:X); % Reflux flowrate
input(2,1:X)=datas(2,1:X); % Condenser flowrate
input(3,1:X)=datas(3,1:X); % Pumparound return flowrate
input(4,1:X)=datas(4,1:X); % Top stage temperature
input(5,1:X)=datas(5,1:X); % Distillate flowrate
input(6,1:X)=datas(6,1:X); % Bottom flowrate
input(7,1:X)=datas(7,1:X); % Feed flowrate
output(1,1:X)=datas(8,1:X); % Top stage pressure
output(2,1:X)=datas(9,1:X); % Bottom stage temperature
output(3,1:X)=datas(10,1:X); % C8 flowrate
% % Cross-validation data
% input(1,1:X)=datas(1,1:X); % Reflux flowrate
% input(2,1:X)=datas(2,1:X); % Condenser flowrate
% input(3,1:X)=datas(3,1:X); % Pumparound return flowrate
% input(4,1:X)=datas(4,1:X); % Top stage temperature
% input(5,1:X)=datas(5,1:X); % Distillate flowrate
% input(6,1:X)=datas(6,1:X); % Bottom flowrate
% input(7,1:X)=datas(7,1:X); % Feed flowrate
% output(1,1:X)=datas(8,1:X); % Top stage pressure
% output(2,1:X)=datas(9,1:X); % Bottom stage temperature
% output(3,1:X)=datas(10,1:X); % C8 flowrate
####################################################################
function [Tinput,Toutput,X,min,max]=dprepT
%DPREPT
%--------------------------------------------------------------------
% This subfunction creates the testing data set
%
% Tinput,Toutput = testing data
[datas,p,min,max]=dscale;
X=p;
% Cross-validation data
Tinput(1,1:X)=datas(1,1:X); % Reflux flowrate
Tinput(2,1:X)=datas(2,1:X); % Condenser flowrate
Tinput(3,1:X)=datas(3,1:X); % Pumparound return flowrate
Tinput(4,1:X)=datas(4,1:X); % Top stage temperature
####################################################################
function [Vinput,Voutput,X,min,max]=dprepV
%DPREPV
%--------------------------------------------------------------------
% This subfunction creates the cross-validation data set
%
% Vinput,Voutput = cross-validation data
[datas,p,min,max]=dscale;
X=p;
% Cross-validation data
Vinput(1,1:X)=datas(1,1:X); % Reflux flowrate
Vinput(2,1:X)=datas(2,1:X); % Condenser flowrate
Vinput(3,1:X)=datas(3,1:X); % Pumparound return flowrate
Vinput(4,1:X)=datas(4,1:X); % Top stage temperature
Vinput(5,1:X)=datas(5,1:X); % Distillate flowrate
Vinput(6,1:X)=datas(6,1:X); % Bottom flowrate
Vinput(7,1:X)=datas(7,1:X); % Feed flowrate
Voutput(1,1:X)=datas(8,1:X); % Top stage pressure
Voutput(2,1:X)=datas(9,1:X); % Bottom stage temperature
Voutput(3,1:X)=datas(10,1:X); % C8 flowrate
Appendix B3
% clc;
% clear;
% [datas,p,min,max]=dscale
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
[Tinput,Toutput,X,min,max]=dprepT;
ptr=input; ttr=input(1,:);    % Training target: Column Reflux flowrate
v.P=Vinput; v.T=Vinput(1,:);  % Validation
t.P=Tinput; t.T=Tinput(1,:);  % Testing
S1=5; % Number of nodes
net1=newelm(minmax(input),[S1 1],{'tansig' 'purelin'},'trainlm');
net1.trainparam.epochs=500; % Max epoch number
net1.trainParam.goal=1e-8;
net1.trainParam.max_fail=10;
net1.trainParam.show=50;
net1=init(net1);
[net1,tr]=train(net1,ptr,ttr,[],[],v,t);
an1=sim(net1,input);
error=an1-input(1,:);
trainmse=sumsqr(error)/X;
Van1=sim(net1,Vinput);
valmse=sumsqr(Van1-Vinput(1,:))/X;
Tan1=sim(net1,Tinput);
testmse=sumsqr(Tan1-Tinput(1,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e, TestMSE=%e\n',trainmse,valmse,testmse);
time=1:X;
figure(1)
subplot(2,1,1),plot(time,an1,'r',time,input(1,:),'b');
ylabel('Molar Flowrate, Kmol/min');
title('Column Reflux Predictor (Training)')
subplot(2,1,2),plot(time,Van1,'r',time,Vinput(1,:),'b');
ylabel('Molar Flowrate, Kmol/min');
title('Column Reflux Predictor (Validation)')
legend('Predicted','Actual',4)
save net1.mat
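The training, validation, and testing errors reported by the script above are all mean squared errors: the sum of squared residuals divided by the number of samples X, exactly as in trainmse=sumsqr(error)/X. A small Python sketch of the same computation (function name hypothetical):

```python
def mse(predicted, actual):
    # Mean squared error: sum of squared residuals over the sample count,
    # matching trainmse = sumsqr(error)/X in the MATLAB listing.
    assert len(predicted) == len(actual)
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

# Residuals are 0, 0, and 2, so the MSE is 4/3.
assert abs(mse([1.0, 2.0, 4.0], [1.0, 2.0, 2.0]) - 4.0 / 3.0) < 1e-12
```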
Appendix B4
% clc;
% clear;
% [datas,p,min,max]=dscale
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
[Tinput,Toutput,X,min,max]=dprepT;
ptr=input; ttr=input(2,:); % Training
v.P=Vinput; v.T=Vinput(2,:); % Validation
t.P=Tinput; t.T=Tinput(2,:); % Testing
S1=12; % Number of nodes
net2=newelm(minmax(input),[S1 1],{'tansig' 'purelin'},'trainlm');
net2.trainParam.epochs=500; % Max epoch number
net2.trainParam.goal=1e-8;
net2.trainParam.max_fail=10;
net2.trainParam.show=5;
net2=init(net2);
[net2,tr]=train(net2,ptr,ttr,[],[],v,t);
an2=sim(net2,input);
error=an2-input(2,:);
trainmse=sumsqr(error)/X;
Van2=sim(net2,Vinput);
valmse=sumsqr(Van2-Vinput(2,:))/X;
Tan2=sim(net2,Tinput);
testmse=sumsqr(Tan2-Tinput(2,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e, TestMSE=%e\n',trainmse,valmse,testmse);
time=1:X;
figure(1)
subplot(2,1,1),plot(time,an2,'r',time,input(2,:),'b');
ylabel('Duty Rate, Kcal/min');
title('Column Condenser Duty (Training)')
subplot(2,1,2),plot(time,Van2,'r',time,Vinput(2,:),'b');
ylabel('Duty Rate, Kcal/min');
title('Column Condenser Duty (Validation)')
legend('Predicted','Actual',4)
save net2.mat
Appendix B5
% clc;
% clear;
% [datas,p,min,max]=dscale
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
[Tinput,Toutput,X,min,max]=dprepT;
ptr=input; ttr=input(3,:); % Training
v.P=Vinput; v.T=Vinput(3,:); % Validation
t.P=Tinput; t.T=Tinput(3,:); % Testing
S1=6; % Number of nodes
net3=newelm(minmax(input),[S1 1],{'tansig' 'purelin'},'trainlm');
net3.trainParam.epochs=500; % Max epoch number
net3.trainParam.goal=1e-8;
net3.trainParam.max_fail=10;
net3.trainParam.show=5;
net3=init(net3);
[net3,tr]=train(net3,ptr,ttr,[],[],v,t);
an3=sim(net3,input);
error=an3-input(3,:);
trainmse=sumsqr(error)/X;
Van3=sim(net3,Vinput);
valmse=sumsqr(Van3-Vinput(3,:))/X;
Tan3=sim(net3,Tinput);
testmse=sumsqr(Tan3-Tinput(3,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e, TestMSE=%e\n',trainmse,valmse,testmse);
time=1:X;
figure(1)
subplot(2,1,1),plot(time,an3,'r',time,input(3,:),'b');
ylabel('Molar Flowrate, Kmol/min');
title('Column Organic Exit (Training)')
subplot(2,1,2),plot(time,Van3,'r',time,Vinput(3,:),'b');
ylabel('Molar Flowrate, Kmol/min');
title('Column Organic Exit (Validation)')
legend('Predicted','Actual',4)
save net3.mat
Appendix B6
clc;
clear;
load net1.mat;
load net2.mat;
load net3.mat;
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
an1=sim(net1,input);
error1=output(1,:)-an1;
an2=sim(net2,input);
error2=output(2,:)-an2;
an3=sim(net3,input);
error3=output(3,:)-an3;
data(1,:)=(error1*(max(8)-min(8))+min(8));
data(2,:)=(error2*(max(9)-min(9))+min(9));
data(3,:)=(error3*(max(10)-min(10))+min(10));
Van1=sim(net1,Vinput);
Verror1=Voutput(1,:)-Van1;
Van2=sim(net2,Vinput);
Verror2=Voutput(2,:)-Van2;
Van3=sim(net3,Vinput);
Verror3=Voutput(3,:)-Van3;
Vdata(1,:)=(Verror1*(max(8)-min(8))+min(8));
Vdata(2,:)=(Verror2*(max(9)-min(9))+min(9));
Vdata(3,:)=(Verror3*(max(10)-min(10))+min(10));
[n,p]=size(data);
min(1)=0.6679; max(1)=0.6567;
min(2)=-6930.4172; max(2)=981.7492;
min(3)=-14.455; max(3)=0.9978;
for i=1:n
data(i,:)=(data(i,:)-min(i))/(max(i)-min(i));
Vdata(i,:)=(Vdata(i,:)-min(i))/(max(i)-min(i));
end
ptr=data; ttr=data(3,:); % Training
v.P=Vdata; v.T=Vdata(3,:); % Validation
S1=5; % Number of nodes
net7=newff(minmax(data),[S1 1],{'tansig' 'purelin'},'trainlm');
net7.trainParam.epochs=500; % Max epoch number
net7.trainParam.goal=1e-8;
net7.trainParam.max_fail=10;
net7.trainParam.show=5;
net7=init(net7);
[net7,tr]=train(net7,ptr,ttr,[],[],v);
Fan1=sim(net7,data);
Ferror1=Fan1-data(3,:);
trainmse1=sumsqr(Ferror1)/X;
Fan2=sim(net7,Vdata);
valmse1=sumsqr(Fan2-Vdata(3,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e\n',trainmse1,valmse1);
time=1:X;
upper(1,1:X)=1;
lower(1,1:X)=0.9;
figure(1)
subplot(2,1,1),plot(time,Fan1,'r',time,upper,'k',time,lower,'k');
axis([0 100 0.88 1.05])
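Appendices B6 to B8 first de-scale the prediction errors with error*(max-min)+min and then re-normalize them into [0, 1] inside the for loop. A minimal Python sketch of that min-max scaling and its inverse (function names hypothetical):

```python
def scale(x, lo, hi):
    # Min-max normalization, as in (data(i,:)-min(i))/(max(i)-min(i)):
    # maps values in [lo, hi] onto [0, 1].
    return [(v - lo) / (hi - lo) for v in x]

def descale(x, lo, hi):
    # Inverse mapping, as in error*(max(k)-min(k))+min(k).
    return [v * (hi - lo) + lo for v in x]

vals = [2.0, 5.0, 8.0]
scaled = scale(vals, 2.0, 8.0)
assert scaled == [0.0, 0.5, 1.0]
assert descale(scaled, 2.0, 8.0) == vals  # round trip recovers the data
```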
Appendix B7
clc;
clear;
load net1.mat;
load net2.mat;
load net3.mat;
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
an1=sim(net1,input);
error1=output(1,:)-an1;
an2=sim(net2,input);
error2=output(2,:)-an2;
an3=sim(net3,input);
error3=output(3,:)-an3;
data(1,:)=(error1*(max(8)-min(8))+min(8));
data(2,:)=(error2*(max(9)-min(9))+min(9));
data(3,:)=(error3*(max(10)-min(10))+min(10));
Van1=sim(net1,Vinput);
Verror1=Voutput(1,:)-Van1;
Van2=sim(net2,Vinput);
Verror2=Voutput(2,:)-Van2;
Van3=sim(net3,Vinput);
Verror3=Voutput(3,:)-Van3;
Vdata(1,:)=(Verror1*(max(8)-min(8))+min(8));
Vdata(2,:)=(Verror2*(max(9)-min(9))+min(9));
Vdata(3,:)=(Verror3*(max(10)-min(10))+min(10));
[n,p]=size(data);
min(1)=0.6679; max(1)=0.6567;
min(2)=-6930.4172; max(2)=981.7492;
min(3)=-14.455; max(3)=0.9978;
for i=1:n
data(i,:)=(data(i,:)-min(i))/(max(i)-min(i));
Vdata(i,:)=(Vdata(i,:)-min(i))/(max(i)-min(i));
end
ptr=data; ttr=data(2,:); % Training
v.P=Vdata; v.T=Vdata(2,:); % Validation
S1=5; % Number of nodes
net7=newff(minmax(data),[S1 1],{'tansig' 'purelin'},'trainlm');
net7.trainParam.epochs=500; % Max epoch number
net7.trainParam.goal=1e-8;
net7.trainParam.max_fail=10;
net7.trainParam.show=5;
net7=init(net7);
[net7,tr]=train(net7,ptr,ttr,[],[],v);
Fan1=sim(net7,data);
Ferror1=Fan1-data(2,:);
trainmse1=sumsqr(Ferror1)/X;
Fan2=sim(net7,Vdata);
valmse1=sumsqr(Fan2-Vdata(2,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e\n',trainmse1,valmse1);
time=1:X;
upper(1,1:X)=7;
lower(1,1:X)=1;
figure(1)
subplot(2,1,1),plot(time,Fan1,'r',time,upper,'k',time,lower,'k');
Appendix B8
clc;
clear;
load net1.mat;
load net2.mat;
load net3.mat;
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
an1=sim(net1,input);
error1=output(1,:)-an1;
an2=sim(net2,input);
error2=output(2,:)-an2;
an3=sim(net3,input);
error3=output(3,:)-an3;
data(1,:)=(error1*(max(8)-min(8))+min(8));
data(2,:)=(error2*(max(9)-min(9))+min(9));
data(3,:)=(error3*(max(10)-min(10))+min(10));
Van1=sim(net1,Vinput);
Verror1=Voutput(1,:)-Van1;
Van2=sim(net2,Vinput);
Verror2=Voutput(2,:)-Van2;
Van3=sim(net3,Vinput);
Verror3=Voutput(3,:)-Van3;
Vdata(1,:)=(Verror1*(max(8)-min(8))+min(8));
Vdata(2,:)=(Verror2*(max(9)-min(9))+min(9));
Vdata(3,:)=(Verror3*(max(10)-min(10))+min(10));
[n,p]=size(data);
min(1)=0.6679; max(1)=0.6567;
min(2)=-6930.4172; max(2)=981.7492;
min(3)=-14.455; max(3)=0.9978;
for i=1:n
data(i,:)=(data(i,:)-min(i))/(max(i)-min(i));
Vdata(i,:)=(Vdata(i,:)-min(i))/(max(i)-min(i));
end
ptr=data; ttr=data(1,:); % Training
v.P=Vdata; v.T=Vdata(1,:); % Validation
S1=50; % Number of nodes
net7=newff(minmax(data),[S1 1],{'tansig' 'purelin'},'trainlm');
net7.trainParam.epochs=500; % Max epoch number
net7.trainParam.goal=1e-8;
net7.trainParam.max_fail=10;
net7.trainParam.show=5;
net7=init(net7);
[net7,tr]=train(net7,ptr,ttr,[],[],v);
Fan1=sim(net7,data);
Ferror1=Fan1-data(1,:);
trainmse1=sumsqr(Ferror1)/X;
Fan2=sim(net7,Vdata);
valmse1=sumsqr(Fan2-Vdata(1,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e\n',trainmse1,valmse1);
time=1:X;
upper(1,1:X)=-280;
figure(1)
subplot(2,1,1),plot(time,Fan1,'r',time,upper,'k');