
FAULT DETECTION USING NEURAL NETWORK

DINIE BIN MUHAMMAD

UNIVERSITI MALAYSIA PAHANG


UNIVERSITI MALAYSIA PAHANG

BORANG PENGESAHAN STATUS TESIS


JUDUL : FAULT DETECTION USING NEURAL NETWORK

SESI PENGAJIAN: 2007/2008

Saya DINIE BIN MUHAMMAD


(HURUF BESAR)

mengaku membenarkan kertas projek ini disimpan di Perpustakaan Universiti Malaysia Pahang
dengan syarat-syarat kegunaan seperti berikut:

1. Hakmilik kertas projek adalah di bawah nama penulis melainkan penulisan sebagai
projek bersama dan dibiayai oleh UMP, hakmiliknya adalah kepunyaan UMP.
2. Naskah salinan di dalam bentuk kertas atau mikro hanya boleh dibuat dengan
kebenaran bertulis daripada penulis.
3. Perpustakaan Universiti Malaysia Pahang dibenarkan membuat salinan untuk tujuan
pengajian mereka.
4. Kertas projek hanya boleh diterbitkan dengan kebenaran penulis. Bayaran royalti
adalah mengikut kadar yang dipersetujui kelak.
5. *Saya membenarkan Perpustakaan membuat salinan kertas projek ini sebagai bahan
pertukaran di antara institusi pengajian tinggi.
6. **Sila tandakan (✓)

SULIT (Mengandungi maklumat yang berdarjah keselamatan atau


kepentingan Malaysia seperti yang termaktub di dalam AKTA
RAHSIA RASMI 1972)

TERHAD (Mengandungi maklumat TERHAD yang telah ditentukan oleh


organisasi/badan di mana penyelidikan dijalankan)

✓ TIDAK TERHAD

Disahkan oleh

_______________________ _________________________
(TANDATANGAN PENULIS) (TANDATANGAN PENYELIA)

Alamat Tetap: 184-E, Taman Bijaya Sura,
23000 Dungun, Terengganu

Nama Penyelia: EN. NOOR ASMA FAZLI BIN ABDUL SAMAD

Tarikh: 18 OKTOBER 2006 Tarikh: 18 APRIL 2008

CATATAN: * Potong yang tidak berkenaan.


** Jika tesis ini SULIT atau TERHAD, sila lampirkan surat daripada pihak
berkuasa/organisasi berkenaan dengan menyatakan sekali sebab dan tempoh tesis ini
perlu dikelaskan sebagai SULIT atau TERHAD.
• Tesis dimaksudkan sebagai tesis bagi Ijazah Doktor Falsafah dan Sarjana secara
penyelidikan, atau disertasi bagi pengajian secara kerja kursus dan penyelidikan, atau
Laporan Projek Sarjana Muda (PSM).
“Saya akui bahawa saya telah membaca karya ini dan pada pandangan
saya karya ini adalah memadai dari segi skop dan kualiti untuk tujuan
penganugerahan Ijazah Sarjana Muda Kejuruteraan Kimia”

Tandatangan : ....................................................
Nama Penyelia : En. Noor Asma Fazli bin Abdul Samad
Tarikh : 18 April 2008

FAULT DETECTION USING NEURAL NETWORK

DINIE BIN MUHAMMAD

A thesis submitted in fulfillment of the requirements for the award of the
degree of Bachelor of Chemical Engineering

Faculty of Chemical & Natural Resources Engineering

Universiti Malaysia Pahang

APRIL 2008

I declare that this thesis entitled “FAULT DETECTION USING NEURAL
NETWORK” is the result of my own research except as cited in the references. The
thesis has not been accepted for any degree and is not concurrently submitted in
candidature for any other degree.

Signature : ....................................................

Name of Candidate : Dinie Bin Muhammad

Date : 18 April 2008



DEDICATION

In the Name of Allah, Most Gracious and Most Merciful

All praise and thanks are due to Allah Almighty, and peace and blessings be upon
His Messenger.

ACKNOWLEDGEMENT

As a vicegerent of His creation, I humbly express my highest gratitude to Allah
S.W.T. for giving me the strength and spirit to complete this final year project with
pride and dignity.

I would like to express my deepest gratitude to the following persons, whose
unlimited kindness, help and guidance enabled me to complete this research project
in time as a requirement for receiving my degree.

To Mr. Muhammad Bin Awang and Mdm. Azni Binti Che Ngah, my beloved
parents, my utmost gratitude towards both of you will never fade. As the people who
brought me into the world and taught me about it, your kindness I shall repay by
becoming a successful and meaningful human being. To all my family members,
thank you for understanding and caring so much for me.

Mr. Noor Asma Fazli Bin Abdul Samad, my supervisor, Miss Noralisa Binti
Harun and Miss Sureena Binti Abdullah, my research panel, and Mdm. Zailinshah
Binti Yusoff, my thesis writing panel, thank you for helping me in many ways during
the progress of this research project. Without your generosity in sparing your
precious time to guide and help me, the aim of this project might not have been
fulfilled.

My highest appreciation to all my teammates and fellow friends who have
helped and inspired me a lot. I am very grateful; no amount of gratitude could repay
their kindness in always being there for me. May Allah bless us all. Amen.

ABSTRACT

This thesis concerns the application of the Artificial Neural Network (ANN)
for fault detection in a chemical process plant. At present, the processes in chemical
plants are becoming more complex and harder to control. Therefore, a system that
can help to supervise and control the process in the plant is needed in order to
achieve higher performance and profitability. The emergence of Artificial Neural
Network applications has helped to solve problems in various fields, and such
systems can reliably be adapted to the chemical plant. This thesis focuses on the
application of the Artificial Neural Network as a fault detection scheme, in terms of
a process estimator and a fault classifier, in a chemical plant. Fault detection is
popular at present as a mechanism for the early detection of malfunctions and
abnormal process or equipment behaviour in the plant. By implementing such a
system, both the production and the safety level of the plant can be raised. For this
thesis, the Vinyl Acetate Plant has been chosen as the case study to provide the
necessary data and information for the research. The Vinyl Acetate process provides
a dependable source of data and an appropriate test bed for alternative control and
optimization strategies for continuous chemical processes.

ABSTRAK

Tesis ini adalah berkenaan aplikasi Rangkaian Saraf Buatan (Artificial


Neural Network) sebagai pengesan kesilapan pada kilang pemprosesan kimia. Pada
masa kini, proses dan pembangunan dalam kilang kimia telah menjadi semakin
kompleks dan susah untuk dikawal. Oleh itu, satu sistem yang dapat menyelia dan
mengawal proses di kilang perlu diadakan untuk mencapai prestasi dan keuntungan
yang lebih baik. Peningkatan penggunaan Rangkaian Saraf Buatan telah membantu
dalam menyelesaikan masalah di pelbagai lapangan di zaman ini telah memberi
kesan yang positif yang sistem tersebut yang turut dapat diaplikasikan di dalam
kilang kimia. Selain itu, tesis ini akan lebih memfokuskan pengaplikasian
Rangkaian Saraf Buatan sebagai pengesan kesilapan mekanisme dalam konteks
peramal proses dan pengelasan kesilapan di dalam kilang kimia. Pengesan kesilapan
adalah sangat popular pada masa sekarang sebagai alat untuk mengesan
ketidakfungsian dan proses yang tidak normal atau kerosakan peralatan di dalam
kilang. Dengan pengaplikasian sistem ini, tahap produktiviti dan keselamatan di
kilang akan bertambah. Dalam tesis ini, kilang pemprosesan Vinyl Acetate telah
dijadikan sebagai rujukan untuk mendapatkan maklumat dan data yang diperlukan
untuk menjalankan kajian. Proses Vinyl Acetate akan memberikan sumber data yang
tepat dan merupakan tempat yang sesuai untuk mengadakan kajian berkenaan
pengawalan dan strategi pengoptimuman untuk proses kimia yang berterusan.

TABLE OF CONTENTS

CHAPTER TITLE PAGE

DECLARATION ii
DEDICATION iii
ACKNOWLEDGEMENT iv
ABSTRACT v
ABSTRAK vi
TABLE OF CONTENTS vii
LIST OF TABLES x
LIST OF FIGURES xi
LIST OF APPENDICES xiii

1 INTRODUCTION
1.1 Introduction 1
1.2 Problem Statement 3
1.3 Objective and Scope of Research 4
1.4 Summary 5

2 LITERATURE REVIEW
2.1 Introduction 6
2.2 Principle of Safety 7
2.3 Principle of Fault 8
2.4 Fault Detection 10
2.5 Neural Network 17
2.5.1 Background of Neural Network 17
2.5.2 Neural Network Architecture Element 19
2.5.3 Neural Network Classification 21
2.5.4 Neural Learning 22
2.5.4.1 Correlation Learning 23
2.5.4.2 Competitive Learning 23
2.5.4.3 Feedback-Based Weight Adaptation 24
2.5.5 Advantages and Limitations of ANN 24
2.6 Summary 26

3 PLANT SIMULATION
3.1 Introduction 27
3.2 Process Description 27
3.3 Modeling the Vinyl Acetate Process 28
3.4 Steady State Data and Dynamic Simulation 35
3.5 VAC Plant MATLAB Program 36
3.6 Simulation Data Validation 40
3.7 Summary 41

4 METHODOLOGY
4.1 Phases in research 43
4.2 Fault detection scheme 45
4.3 Summary 46

5 PROCESS ESTIMATION FOR FAULT DETECTION
5.1 Introduction 47
5.2 Design Process Estimator 47
5.3 Selection of input and output variable 49
5.4 Selection of Model Structure 53
5.5 Selection of Training and Validation 56
5.6 Summary 58

6 NEURAL NETWORK FAULT CLASSIFIER


6.1 Introduction 59
6.2 Fault Classifier 59
6.3 Fault Classifier Result 61
6.4 Summary 64

7 CONCLUSION AND RECOMMENDATION


7.1 Overview 65
7.2 Conclusion 66
7.3 Recommendation for future work 67

REFERENCES 68
APPENDICES 73

LIST OF TABLES

TABLE NO. TITLE PAGE

3.1 Disturbance available in VAC plant simulation 37
3.2 The comparison of the VAC plant simulation with actual plant process on selected stream 42
5.1 The selected variable 49
5.2 Optimization for Predictor 1 54
5.3 Optimization for Predictor 2 55
5.4 Optimization for Predictor 3 55

LIST OF FIGURES

FIGURE NO. TITLE PAGE

1.1 The proposed Model-Based Fault Detection 3


2.1 Causes of losses in the largest hydrocarbon-chemical plant accidents 7
2.2 Time-dependency of faults 9
2.3 Basic models of faults 9
2.4 General scheme of process model-based fault detection and diagnosis 14
2.5 Process configuration for model-based fault detection 15
2.6 A human neuron design 18
2.7 Feedforward Neural Network Model 19
2.8 Schematic Model of neural network 20
2.9 Classification of Neural Network 21
3.1 PFD of Vinyl Acetate Process Plant 28
3.2 Control Variable: CV 1 – CV 8 in VAC Plant 37
3.3 Control Variable: CV 9 – CV 16 in VAC Plant 38
3.4 Control Variable: CV 17 – CV 24 in VAC Plant 38
3.5 Manipulated Variable: MV 1 – MV 8 in VAC Plant 39
3.6 Manipulated Variable: MV 9 – MV 16 in VAC Plant 39
3.7 Manipulated Variable: MV 17 – MV 24 in VAC Plant 40

5.1 The Neural Network based fault detection scheme 48


5.2 Control Variable from CV 1 to CV 8 50
5.3 Control Variable from CV 9 to CV 16 50
5.4 Control Variable from CV 17 to CV 24 51
5.5 Manipulated Variable from MV 1 to MV 8 51
5.6 Manipulated Variable from MV 9 to MV 16 52
5.7 Manipulated Variable from MV 17 to MV 24 52
5.8 Elman Network 53
5.9 Column Reflux Training and Validation 56
5.10 Column Condenser Duty Training and Validation 57
5.11 Column Organic Exit Training and Validation 57
6.1 Fault Classifier graph on the Column Reflux 61
6.2 Fault Classifier graph on the Column Condenser Duty 62
6.3 Fault Classifier graph on the Column Organic Product 62

LIST OF APPENDICES

APPENDIX TITLE PAGE

A1 Steady State Values of Manipulated Variables 73


A2 Control Structure and Controller Parameters 74
A3 Measurements at Steady State 76
A4 Control System in VAC Plant 78
A5 Wilson parameters and molar volumes 79
A6 Equipment Data 80
B1 Main Program for Data Scaling 82
B2 Main Program for Data Preparation 83
B3 Main Program for Predictor 1 85
B4 Main Program for Predictor 2 86
B5 Main Program for Predictor 3 87
B6 Main Program for Classifier 1 88
B7 Main Program for Classifier 2 90
B8 Main Program for Classifier 3 92
CHAPTER 1

INTRODUCTION

1.1 Introduction

Nowadays, control system technology has had a tremendous impact on
engineering applications. Control systems have been applied in various fields such as
sensor data analysis, fault detection, nonlinear process identification, pattern
recognition, process modelling and plant-wide control (Hussain, 1998). The purpose
of plant-wide control is to develop coordinated control of several important variables
of a multi-unit process and of the overall plant process system, and to sustain product
quality. Conventionally, the control problem in the plant is divided into smaller parts
according to the unit operations involved. To realise this approach, engineers set up
an appropriate inventory of items, equipment and manpower. These units consume
space in the plant, including the material compartments, which increases the capital
cost of the plant and adds to the safety and environmental hazards. Therefore, the
implementation of advanced control systems in industry is necessary in terms of cost
reduction and effective control (Lyman and Georgakis, 1995).

Research on fault detection systems has gained more interest lately, not only
because of cost savings but, more importantly, because such systems serve as a safety
mechanism. The disasters in Bhopal and Chernobyl are good examples of why an
advanced controller can play a vital role in preventing an incident from happening in
the first place. The implementation of an advanced controller in the form of fault
detection will help to reduce the probability of accidents and losses resulting from
human or mechanical error. The emergence of artificial intelligence (AI) also plays a
role in the development of control systems. The cognitive approach of AI focuses on
imitating the rational thinking of humans (Lee, 2006). AI systems such as fuzzy
logic, neural networks and genetic programming have been integrated with
conventional control systems to produce intelligent controller systems. In this case,
the intelligent controller helps the operator to handle various abnormal conditions or
faults more reliably, efficiently and quickly.

Fault detection is essentially a pattern recognition problem, in which a
functional mapping from the measurement space to a fault space is calculated. A wide
variety of techniques have been proposed to detect and diagnose faults. Generally,
there are three different options available for approaching a fault diagnosis problem:
state estimation methods, statistical process control methods, and knowledge-based
methods. A neural network, a type of knowledge-based system, possesses many
desirable properties for chemical process fault diagnosis. These properties include its
abilities to learn from examples, extract salient features from data, reason in the
presence of novel, imprecise or incomplete information, tolerate noisy and random
data, and degrade gracefully in performance when encountering data beyond its
range of training (Venkatasubramanian and Chan, 1989). Reviewing the development
of neural network fault detection and diagnosis systems, the general trend in research
is to increase the robustness of the system to unmodelled patterns, realise fast and
reliable diagnosis in dynamic processes, and dynamically filter noisy data used for
detection (Hamid, 2004).

In this research, a model-based fault detection system proposed by Ahmad
and Leong (2001) will be further developed. Figure 1.1 displays the overall structure
of the system. Here, a model-based fault detection system consisting of a process
predictor and a fault classifier is proposed. The process predictor is used to predict
the normal, fault-free operating condition of a column in the Vinyl Acetate Plant. The
deviation of the actual condition from the output of this predictor, termed the
residual, is then fed to the classifier, which interprets the residual signal from the
process predictor and classifies the cause of the fault. The development of both
models utilizes the nonlinear mapping capability of neural networks.

Figure 1.1 The proposed Model-Based Fault Detection

The hierarchical approach is advantageous because it lessens the chances of
misidentifying a normal operating trend that is due to the manipulation of the feed
conditions. In practice, there is always the possibility that the manipulation of the
feeds will produce process conditions that coincidentally match a fault pattern, and
the classifier will tend to misinterpret the situation. The use of residuals provides
some protection against this.
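The predictor/classifier structure of Figure 1.1 can be summarised in a few lines of
MATLAB. The sketch below is only an illustration of the residual idea: the two
anonymous functions are hypothetical stand-ins for the trained neural network
process predictor and fault classifier developed later in this thesis, and the data are
made-up numbers.

% Sketch of the model-based fault detection loop in Figure 1.1.
% The two anonymous functions below are placeholders, NOT the trained models.
predict_normal = @(u) 2*u;                    % stand-in for the NN process predictor
classify_fault = @(r) double(abs(r) > 0.1);   % stand-in classifier: 0 = normal, 1 = fault

u = [0.5 0.5 0.5 0.6];                        % example plant inputs
y = [1.0 1.0 1.3 1.2];                        % example plant measurements

for k = 1:length(y)
    y_hat    = predict_normal(u(k));          % predicted fault-free output
    residual = y(k) - y_hat;                  % residual fed to the classifier
    fault_id = classify_fault(residual);      % fault indication
    fprintf('k=%d  residual=%+.2f  fault=%d\n', k, residual, fault_id);
end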

In this thesis, the implementation of a neural network for fault detection in
the Vinyl Acetate plant is proposed. A fault diagnosis system can provide fault
information to the operating and scheduling levels, which can improve product
quality, facilitate active scheduling, and reduce the risk of accidents (Ricker, 1995).
Therefore, the implementation of the neural network will help engineers to design a
more effective control and monitoring system in the plant to achieve zero lost time.

1.2 Problem Statement

Advances in knowledge and technology are contributing to improvements in
the reliability, safety and efficiency of fault detection and diagnosis systems. Such
systems are very important as they prevent accidents, failures and disasters and save
many lives. Today, safety and health have become a main agenda in developing and
managing technical processes. Consequently, the development of neural networks in
various fields, especially in fault detection, has shown great progress. The neural
network has the potential to be developed further and applied in a chemical plant
such as the Vinyl Acetate Process Plant as a process control mechanism.
Furthermore, MATLAB 7.0 is used to model and simulate the neural network for
monitoring and supervising the Vinyl Acetate process route. MATLAB is a
high-performance language for technical computing that has been used widely in the
engineering field to solve many mathematical and technical problems. Thus, this
research focuses on fault detection in the Vinyl Acetate Process Plant using a neural
network, and emphasises how, and how far, the neural network can contribute to
overcoming failures and fault problems in the Vinyl Acetate Process Plant.

1.3 Objective and Scope of Research

The main aim of this research is to develop a fault detection system using a
neural network. Using the Vinyl Acetate Plant as the case study, the implementation
of the neural network will help the controller to detect faults more efficiently. The
work covered the following scope:

i. Simulation of case study: Vinyl Acetate Plant


Simulation of the plant was carried out in MATLAB 7.0 based on the work of
Luyben and Tyreus (1998). The simulation is carried out in order to generate the
base data for the neural network.

ii. Development of process predictor.


The process predictor is used to predict the normal behavior of the process. The
development of the process predictor involves selecting a suitable artificial
neural network (ANN) architecture to differentiate the abnormal behavior of the
process from the normal condition. The difference between the actual plant
signal and the estimated normal plant signal is termed the residual. This process
is done in MATLAB.

iii. Development of fault classifier.


The fault classifier is a decision-making system used to detect process faults.
Residual signals generated from the process predictor serve as the input to the
classifier. Structure selection and the training method are criteria that must be
taken into consideration. The fault classifier is developed in MATLAB.

iv. Implementation of the proposed fault detection strategy.


The proposed model-based fault detection strategy is implemented to detect
faults in the column in the Vinyl Acetate plant.

1.4 Summary

Following this introductory chapter, Chapter 2 elaborates some of the
fundamental theory and applications of safety, fault detection and neural networks.
The architecture, learning, advantages and limitations of the neural network are also
discussed. Chapter 3 covers the Vinyl Acetate Process plant simulation; the process,
equipment and conditions of the simulated plant are also covered in that chapter.
Chapter 4 presents the methodology and planning of this research. Chapter 5
commences with a description of the neural network as a process estimator, with its
architecture specification and training process, followed by the results of the
estimator training and validation. Chapter 6 discusses the analysis of the neural
network's reliability as a fault classifier and its results. Finally, Chapter 7
summarizes the thesis and concludes all the findings; in addition, some
recommendations for future work are provided.
CHAPTER 2

LITERATURE REVIEW

2.1 Introduction

In the area of plant-wide control at the supervisory level, the process fault
detection system plays a key role. Fault detection usually includes the fault diagnosis
and fault correction systems. Fault diagnosis is the identification of the root causes of
a process upset, while fault correction is the provision of recommended corrective
actions to restore the process to its normal operating condition. In this regard,
appropriate real-time actions must be taken in present-day chemical and
petrochemical manufacturing plants. The technical personnel in most of these
industries are responsible for monitoring the process status, detecting abnormal
events, diagnosing the source causes and administering the proper intervention to
bring the process back to normal operation. Nevertheless, the complexity of the
supervision tasks has increased considerably due to the high level of development in
process design and control. A decision support system is needed to assist process
operators in understanding and assessing the process status and responding quickly
to abnormal events, thereby enabling processing plants to maintain operational
integrity and improve product quality at a reduced cost (Ruiz et al., 2000). However,
a very important control task in managing process plants still remains largely a
manual activity performed by human operators: the task of responding to abnormal
events in a process. This involves the timely detection of an abnormal event,
diagnosing its causal origins and then taking appropriate supervisory control
decisions and actions to bring the process back to a normal, safe operating state. This
entire activity has come to be called Abnormal Event Management (AEM), a key
component of supervisory control.

2.2 Principle of Safety

The safety aspect is one of the most important aspects of operating a plant,
even more so than profit and the process route. Safety is defined as the ability of a
system not to cause danger to persons, equipment or the environment (Isermann and
Ballé, 1997). According to the American Institute of Chemical Engineers (AIChE)
Code of Professional Ethics, one of its fundamental principles is to use knowledge
and skills to enhance human welfare. Thus, the use of an advanced control system in
the plant is a good effort towards achieving the highest level of safety. In the Layer
of Protection Analysis (LOPA) model, the first priority in analyzing and assessing
the risk of faults lies in the process design and control system of the plant (Crowl and
Louvar, 2002).

[Bar chart: Accidents (%) by cause of loss — Mechanical 44%, Operator Error 22%,
Unknown 12%, Process Upset 11%, Natural Hazards 5%, Design 5%, Sabotage 1%]

Figure 2.1 Causes of losses in the largest hydrocarbon-chemical plant accidents
(Crowl and Louvar, 2002)

From Figure 2.1 above, there are seven main causes of losses in a typical
chemical plant. By far the largest cause of losses is mechanical failure, which is
usually due to an improper control system or maintenance service. The worst damage
that can be caused by improper action or a lack of safety awareness is fatal casualties.
The accident in Bhopal in 1984, which killed nearly 2,000 people and injured more
than 20,000, and the catastrophe in Seveso in 1976 could have been prevented if the
plants involved had implemented the proper application of fundamental engineering
safety principles, for instance a fault detection system.

Moreover, given these circumstances, chemical plants have to set up
monitoring and supervision systems to identify faults or hazards effectively, in order
to prevent losses in terms of profit, human resources and product quality. Therefore,
risk assessment and fault detection in the chemical plant are significant studies that
must be given serious consideration.

2.3 Principle of Fault

A fault is defined as an unpermitted deviation of at least one characteristic
property of a variable from an acceptable behavior (Isermann and Ballé, 1997).
Meanwhile, Himmelblau (1978) defines a fault as a process abnormality or symptom,
such as a high temperature in a reactor or low product quality. In general, faults are
deviations from the normal operating behavior of the plant that are not due to
disturbance changes or set-point changes in the process, and which may cause
performance deterioration, malfunctions or breakdowns in the monitored plant or in
its instrumentation. Therefore, a fault is a state that may lead to a malfunction or
failure of the system. The time dependency of faults can be distinguished, as shown
in Figure 2.2, as abrupt faults such as overheating and overpressure, incipient faults
such as continuing overflow, and intermittent faults such as faults in a gear or valve.

Figure 2.2 Time-dependency of faults: abrupt (a), incipient (b), and intermittent (c)
(Isermann, 1997).

With regard to the process models, faults can be further classified. According to
Figure 2.3, additive faults influence a variable Y by the addition of the fault f, and
multiplicative faults by the product of another variable U with f. Additive faults
appear, for example, as offsets of sensors, whereas multiplicative faults are parameter
changes within a process (Isermann, 2005).

Figure 2.3 Basic models of faults: (a) Additive fault (b) Multiplicative faults
(Isermann, 2005).

According to Gertler (1998), faults can be categorized as follows:

i. Additive process faults


These are unknown inputs acting on the plant, which are normally zero. They
cause a change in the plant outputs independent of the known inputs. Such faults
are best exemplified by plant leaks and loads.

ii. Multiplicative process faults


These are gradual or abrupt changes in some plant parameters. They cause
changes in the plant outputs, which also depend on the magnitude of the
known inputs. Such faults can be best described as the deterioration of plant
equipment, such as surface contamination, clogging, or the partial or total
loss of power.

iii. Sensor faults


These are differences between the measured and actual values of individual
plant variables. These faults are usually considered additive (independent of
the measured magnitude), though some sensor faults (such as sticking or
complete failure) may be better characterized as multiplicative.

iv. Actuator faults


These are differences between the input command of an actuator and its actual
output. Actuator faults are usually handled as additive, though some kinds
(such as sticking or complete failure) may be described as multiplicative.

2.4 Fault Detection

Currently, the advanced development of reliability, safety and intelligence in
process control systems has taken safety measures in technical processes, especially
chemical plants, to a new level. Given the need for better and safer plants as well as
cost savings, an advanced control system is an attractive solution. The traditional
approach to control is usually limited to the output variables and cannot give a
deeper insight into the problems, nor does it provide fault diagnosis. The new,
advanced application of control systems analyzes the inputs and outputs by applying
a dynamic process model, which gives a new view of the fault from a different
perspective. Furthermore, advanced instrumentation can measure and evaluate
hundreds of variables in just a few seconds to produce signatures relating to the
status of the process.
Fault detection is a monitoring process to determine the occurrence of an
abnormal event in a process, whereas fault diagnosis identifies its reason or source.
The detection performance is characterized by a number of important and
quantifiable benchmarks, namely (Harun, 2005):

i. Fault sensitivity: the ability of the technique to detect faults of reasonably


small size.

ii. Reaction speed: the ability of the technique to detect faults with reasonably
small delay after their occurrences.

iii. Robustness: the ability of the technique to operate in the presence of noise,
disturbances and modeling errors, with few false alarms.

In general, one has to deal with three classes of failures or malfunctions as described
below (Hamid, 2004):

i. Gross parameter changes in a model


In any modeling, there are processes occurring below the selected level of
detail of the model. These processes which are not modeled are typically
lumped as parameters and these include interactions across the system
boundary. Parameter failures arise when there is a disturbance entering the
process from the environment through one or more exogenous (independent)
variables. An example of such a malfunction is a change in the concentration
of the reactant from its normal or steady-state value in a reactor feed. Here,
the concentration is an exogenous variable, that is, a variable whose dynamics are not
modeled together with those of the process. Another example is the change in the heat
transfer coefficient due to fouling in a heat exchanger.

ii. Structural changes


Structural changes refer to changes in the process itself. They occur due to
hard failures in equipment. Structural malfunctions result in a change in the
information flow between various variables. To handle such a failure in a
diagnostic system would require removing the appropriate model equations and
restructuring the other equations in order to describe the current situation of the
process. An example of a structural failure would be the failure of a controller.
Other examples include a stuck valve and a broken or leaking pipe.

iii. Malfunctioning sensors and actuators


Gross errors usually occur with actuators and sensors. These could be due to
a fixed failure, a constant bias (positive or negative) or an out-of-range
failure. Some of the instruments provide feedback signals which are essential
for the control of the plant. A failure in one of the instruments could cause the
plant state variables to deviate beyond acceptable limits unless the failure is
detected promptly and corrective actions are accomplished in time. It is the
purpose of diagnosis to quickly detect any instrument fault which could
seriously degrade the performance of the control system.

A large variety of techniques for fault detection have been proposed in the
literature (Choudhury et al., 2006; Thornhill and Horch, 2006; Xia and Howell,
2005). Due to the broad scope of the process fault diagnosis problem and the
difficulties in its real-time solution, various computer-aided approaches have been
developed over the years (Hamid, 2004). They cover a wide variety of techniques,
such as the early attempts using fault trees and digraphs, analytical approaches, and
knowledge-based systems and neural networks in more recent studies. From a
modeling perspective, there are methods that require accurate process models, semi-
quantitative models, or qualitative models. On the other hand, there are methods that
do not assume any form of model information and rely only on process history
information. These techniques can be classified as model-based methods and
historical data-based methods (Detroja et al., 2007):

i. Model-based methods: detect and isolate signals indicating abnormal
(fault) operation for large-scale systems.

ii. Historical data-based methods: attempt to synthesize the utmost
information from archived databases and require minimal first-principles
knowledge of the plant.

The automatic control system can distinguish a fault from various parameters
by using supervisory functions to take appropriate action to maintain the process and
avoid any losses. There are three main elements in automatic control, which can be
classified as follows (Isermann, 2005):

i. Monitoring: all related variables are supervised with regard to tolerances,
and alarms are given as an indication to the operators.

ii. Automatic protection: during an abnormal, nonlinear or dangerous process
state, the monitoring system initiates an appropriate countermeasure.

iii. Supervision with fault diagnosis: based on the measured variables, the
current features are calculated, symptoms are generated via change
detection, a fault diagnosis is performed and decisions on counteractions
are made.

The advantage of classical limit-value fault detection is its simplicity and
reliability in handling process monitoring and automatic protection. However, a fault
can only be detected if there is a large change in the feature, such as after either a
large sudden fault or a long-lasting, gradually increasing fault. Besides that, the
system does not provide information for fault diagnosis. A new, advanced method
emphasising supervision and fault detection is needed to satisfy the following
requirements:

i. Early detection of small faults with abrupt or incipient time behavior

ii. Diagnosis of faults in the actuator, process components or sensors



iii. Detection of faults in closed loops

iv. Supervision of processes in transient states

A general survey of supervision, fault-detection and diagnosis methods is
given in Isermann (1997). In the following, model-based fault-detection methods are
considered, which allow a deep insight into the process behavior.

Figure 2.4 The general scheme of process model-based fault-detection and diagnosis
(Isermann, 1997).

Different approaches for fault detection using mathematical models have
been developed in the last 20 years (Chen and Patton, 1999; Frank, 1990; Gertler,
1998; Himmelblau, 1978; Isermann, 1997; Patton et al., 2000). The task consists of
the detection of faults in the processes, actuators and sensors by using the
dependencies between different measurable variables. These dependencies are
expressed by mathematical process models. Figure 2.4 shows the basic structure of
model-based fault detection.

Based on the measured input signals U and output signals Y, the detection
methods generate residuals r, parameter estimates Θ̂ or state estimates x̂, which are
called features. By comparison with the normal features (nominal values), changes in
the features are detected, leading to analytical symptoms s. For the application of
model-based fault-detection methods, the process configurations according to Figure
2.5 have to be implemented. Considering the inherent dependencies used for fault
detection, and the possibilities for distinguishing between different faults, the
situation improves greatly from case (a) to (b), (c) or (d) with the availability of more
measurements.
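As a minimal illustration of how residuals and symptoms are generated, the MATLAB
fragment below compares a measured feature against a nominal value; the nominal
value, tolerance and measurements are arbitrary numbers chosen only for this sketch.

% Residual generation and symptom detection against a nominal feature value.
y_nominal = 1.00;                          % nominal (fault-free) feature value
tol       = 0.05;                          % allowed tolerance band (arbitrary)
y_meas    = [0.99 1.01 1.02 1.12 1.15];    % measured feature over time (example)

r       = y_meas - y_nominal;              % residuals r
symptom = abs(r) > tol;                    % analytical symptoms s (logical flags)
disp([r; symptom]);                        % residuals and detected symptoms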

Figure 2.5 Process configuration for model-based fault-detection based on Isermann


in 1997: (a) SISO (single-input single-output) (b) SISO with intermediate
measurements (c) SIMO (single-input multi-output) (d) MIMO (multi-input multi-
output).

In this research, three common fault detection methods are reviewed, based
on Hamid (2004): state estimation approaches, statistical process control approaches
and knowledge-based approaches.

i. State Estimation Approaches


The approaches using fundamental models are usually based on state
estimation methods, which combine a fundamental model of the process with
on-line measurements to provide on-line, recursive estimates of the
underlying theoretical states of the process. The advantage of the model-
based method is that it can estimate immeasurable parameters, if they are
observable. However, they require an exact process model, which is not
always available or economical to obtain in an industrial environment. In
practice, the heavy computation load involved in this approach can also be a
problem.

ii. Statistical Process Control Approach


The idea of statistical process control to monitor batch processes with
empirical models was developed from multi-way principal component analysis
(MPCA) of existing batch data. In the batch-monitoring problem, the data take the
form of three-way arrays. MPCA provides a tool for investigating very large
databases of batch data and obtaining valuable knowledge about the process.
However, MPCA and MPLS (multi-way partial least squares) models are not
cause-and-effect models, but rather only models of the correlation structure of the
process variables under routine operating conditions. They cannot be used to predict
the effects that independent changes in some of the measurement variables will have
on the quality of the final product. Moreover, their capability to model any nonlinear
relationship is still limited by the assumed basis functions used in the regression.

iii. Knowledge-Based Approaches


Knowledge-based approaches use expert systems or artificial intelligence
methods to process data. In rule-based expert systems, the process model is
represented by a set of qualitative and quantitative governing rules, based on
the knowledge about the process variable from operators and engineers.
These behavioral and causal descriptions are arranged in a hierarchical
structure, and diagnostic rules for each node in the hierarchy are generated
from those descriptions. Fault diagnosis using rule-based expert systems
needs an extensive database of rules, and the accuracy of the diagnosis depends on
the rules. The weakness of this method is that the data have to be rich and detailed,
which can be time-consuming. Also, the inability of the system to learn or
dynamically improve its performance, and its unpredictability outside its domain of
expertise, are obvious problems when large industrial plants are considered.

One of the knowledge-based approaches is the neural network. Neural
networks are non-linear statistical data modeling tools. They can be used to model
complex relationships between inputs and outputs or to find patterns in data. The
ANN is attractive due to its information processing characteristics such as
nonlinearity, high parallelism and fault tolerance, as well as its capability to
generalize and handle imprecise information (Basheer and Hajmeer, 2000). These
characteristics have made the ANN suitable for solving a variety of problems. In the
fault detection case, the neural network serves as a pattern recognizer that identifies
the process fault by reasoning based on generalizing a set of data. With parallel
computation and the ability to adapt to changes, the neural network is a good choice
for a fault detection system.

2.5 Neural Network

2.5.1 Background of Neural Network

Since the early days, Artificial Intelligence (AI) researchers have been
focusing their studies on modeling the function of the human brain. In the mid-1940s,
Warren McCulloch and Walter Pitts proposed the first artificial neural network
(ANN, or neural network for short) model (McCulloch & Pitts, 1943). These neurons
were presented as models of biological neurons and as conceptual components for
circuits that could perform computational tasks (Abdi et al., 1999). Afterwards,
further exploration of neural networks in the late 1980s gave significant results in
solving vital AI problems. The main architecture of the ANN emulates the
functionality of the human nervous system. The human nervous system consists of
an extremely large number (over 10^11) of nerve cells, or neurons, which operate to
process data in the human mind.

Generally, a biological neural network is composed of a group or groups of
physically connected or functionally associated neurons. A single neuron can be
connected to many other neurons, and the total number of neurons and connections in
a network can be extremely large. Tree-like networks of nerve fibers called dendrites
are connected to the cell body, or soma, where the cell nucleus is located. Extending
from the cell body is a single long fiber called the axon, which eventually branches
into strands and substrands that are connected to other neurons through contact
points known as synapses. Transmission of a signal from one neuron to another at a
synapse is a complex chemical process in which a specific transmitter substance is
released from the sending side of the junction. The effect is to adjust the electrical
potential inside the body of the receiving cell. If the potential reaches a threshold, a
pulse is generated down the axon, known as “firing” (Lee, 2006), as shown in Figure
2.6.

Figure 2.6 Human neuron design



2.5.2 Neural Network Architecture Element

In resemblance to human biological neurons, a neural network consists of a
number of simple node elements which are connected together to form a network in
either a single layer or multiple layers. This can be interpreted as a computational
model in which the synapses act as weights that alter the effect of the associated
input signal. These weights are the unknown parameters and are estimated based on
the process input or output data to be modeled. A positive weight is called excitatory
and a negative weight is called inhibitory. The network usually works in several
layers. The essential layers are the first layer, called the input layer, and the last
layer, called the output layer. The intermediate layers are called hidden layers. The
information that needs to be analyzed is supplied to the first layer for processing.
Subsequently, this processing is continued by the second layer and so on until the
last layer. Each unit receives information from the preceding layer and converts it
into the output of the unit. This output signal can be used straight away or can be
further processed by a proper method.

Figure 2.7 Feedforward Neural Network Model (Chen, 2005)



[Diagram: input signals x1 … xn with weights w1 … wn feeding a transfer function
Y = F(Σ xi wi) that produces the output signal Y]

Figure 2.8 Schematic model of a neural network (Seborg and Edgar, 2004)

Based on the figure above, the neuron computes the weighted sum of the
input signals and compares the result with a threshold value. If the net input is less
than the threshold, θ, the neuron output is −1, but if the net input is greater than or
equal to the threshold, the neuron becomes activated and its output attains a value of
+1 (Negnevitsky, 2001). In other words, the neuron uses the following transfer
function, or activation function:
$$X = \sum_{i=1}^{n} x_i w_i \qquad (2.1)$$

$$Y = \begin{cases} +1, & X \ge \theta \\ -1, & X < \theta \end{cases} \qquad (2.2)$$

where X in equation (2.1) is the net weighted input to the neuron, x_i is the value of
input i, w_i is the weight of input i, n is the number of neuron inputs, and Y in
equation (2.2) is the output of the neuron. The inputs of each neuron are collected
from other neurons, summed, and compared with a standard level, from which an
appropriate output is determined. The output signal is thus computed as the sum of
the input signals, transformed by the transfer function. The learning process of a
neural network is achieved by adjusting the weights in accordance with a predefined
learning algorithm, usually of the form ΔW_{ij} = ασX_j, where α is the learning
rate and σ is the momentum rate.
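As an illustration of equations (2.1) and (2.2), the short MATLAB fragment below
evaluates a single threshold neuron; the inputs, weights and threshold are arbitrary
example values, not taken from this work.

% Single threshold neuron following equations (2.1) and (2.2).
x     = [0.5; -1.2; 0.8];    % input signals x_i (example values)
w     = [0.4;  0.3; -0.6];   % connection weights w_i (example values)
theta = 0.1;                 % threshold

X = sum(x .* w);             % net weighted input, equation (2.1)
if X >= theta
    Y = +1;                  % neuron fires, equation (2.2)
else
    Y = -1;                  % neuron stays inactive
end
fprintf('Net input X = %.3f, output Y = %+d\n', X, Y);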

2.5.3 Neural Network Classification

Basically, neural networks can be separated into two groups, according to
Lee (2006):

i. Supervised neural networks: neural networks operating with supervised
learning and training strategies, which covers the majority of ANNs, such as the
Hopfield Network, FFBPN (Feedforward Backpropagation Network), RBF
(Radial Basis Function) network, etc.

ii. Unsupervised neural networks: neural networks that do not need any
supervised learning and training strategies, including all kinds of self-organizing,
self-clustering and learning networks such as SOM, ART (Adaptive Resonance
Theory), and so on.

[Figure: classification of ANN system architectures —
Single-Layer ANNs: Adaline, Perceptron, Hopfield Network, LVQ;
Multi-Layer ANNs: Madaline, FFBPN, RBFN, Neocognitron;
Recurrent ANNs: BAM, ART, Hopfield Network, Boltzmann Machine]

Figure 2.9 Classification of Neural Networks (Patterson, 1996)



In a feed-forward network, the data flow from the input to the output units.
The data processing can extend over multiple layers of units, but no feedback
connections (connections extending from the outputs of units to the inputs of units in
the same layer or previous layers) are present. Recurrent networks, in contrast,
contain feedback connections. Unlike in feed-forward networks, the dynamical
properties of the network are important. In some cases, the activation values of the
units undergo a relaxation process such that the network settles to a stable state in
which these activations no longer change. In other applications, the changes of the
activation values of the output neurons are significant, such that the dynamical
behavior constitutes the output of the network (Hampshire and Pearlmutter, 1990).
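To make the feed-forward data flow concrete, the following MATLAB sketch performs
one forward pass through a small network with a single hidden layer; the weights are
random numbers rather than trained values, and the layer sizes are arbitrary.

% One forward pass through a 3-input, 4-hidden-unit, 1-output feedforward network.
W1 = randn(4, 3);  b1 = randn(4, 1);   % input -> hidden weights and biases (random)
W2 = randn(1, 4);  b2 = randn(1, 1);   % hidden -> output weights and bias (random)

x = [0.2; -0.5; 0.9];                  % example input vector
h = tanh(W1 * x + b1);                 % hidden layer activations
y = W2 * h + b2;                       % linear output unit (no feedback connections)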

2.5.4 Neural Learning

In a neural network, learning refers to the method of modifying the weights of
the connections between the nodes of a specific network. The training session of the
neural network uses the error in the output values to update the weights connecting
the layers, until the accuracy is within the tolerance level. The training time for a
feed-forward neural network using one of the variations of backpropagation can be
substantial. For a simple two-input, two-output system with 50 training samples,
100,000 training iterations are common (Zhou et al., 2003). For large-scale systems,
the memory and computation time required for training a neural network can exceed
hardware limits. This has been a bottleneck in developing fault diagnosis algorithms
for industrial applications. Like other data-driven methods, the performance of neural
networks is determined by the available data. It is highly possible that neural
networks will generate unpredictable outputs when presented with an input outside
the range of the training data. This suggests that the neural networks need to be
retrained when there is a slight change in the normal operating conditions, e.g., a
molecular weight specification change in a polymerization reactor. This is not a big
problem if the neural networks are trained offline and then used online in fault
diagnosis systems.
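The iterative weight-update idea described above can be illustrated with a deliberately
simplified training loop; the sketch below uses a single linear neuron and the delta
rule rather than full backpropagation, and the training data and learning rate are
made up for the example.

% Simplified iterative training loop (delta rule on a single linear neuron).
X = [0 0; 0 1; 1 0; 1 1];        % training inputs (example data)
t = [0; 1; 1; 2];                % training targets (example data)
w = zeros(2, 1);  b = 0;         % initial weights and bias
alpha = 0.05;                    % learning rate (arbitrary choice)

for iter = 1:1000                        % many small corrections, as described above
    for n = 1:size(X, 1)
        y   = X(n, :) * w + b;           % network output for sample n
        err = t(n) - y;                  % output error
        w   = w + alpha * err * X(n, :)';% weight update proportional to the error
        b   = b + alpha * err;
    end
end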

2.5.4.1 Correlation Learning

One of the oldest and most widely known principles of biological learning
mechanisms was described by Hebb (1949), and is sometimes called “Hebbian
Learning.” Hebb’s principle is as follows: “When an axon of cell A is near enough to
excite a cell B and repeatedly or persistently takes part in firing it, some growth
process or metabolic change takes place in one or both cells such that A’s efficiency,
as one of the cells firing B, is increased.”

For artificial neural networks, this implies a gradual increase in the strength of
the connections among nodes that have similar outputs when presented with the same
input. The strength of the connection between neurons eventually comes to represent
the correlation between their outputs. The simplest form of this weight modification
rule for a neural network can be stated as:

$$\Delta w_{i,j} = c\, x_i x_j \qquad (2.3)$$

where c is some small constant, w_{i,j} denotes the strength of the connection from
the jth node to the ith node, and x_i and x_j are the activation levels of these nodes.
Many modifications of this rule have been developed and are widely used in artificial
neural network models.
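The sketch below applies the Hebbian update of equation (2.3) in MATLAB; the
constant c and the activation levels are arbitrary example values.

% Hebbian (correlation) weight update, equation (2.3): dw_ij = c * x_i * x_j.
c = 0.01;                        % small learning constant (arbitrary)
x = [0.8; 0.3; -0.5];            % node activation levels (example)
W = zeros(3, 3);                 % connection strengths between the three nodes

for step = 1:100
    W = W + c * (x * x');        % strengthen connections between co-active nodes
end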

2.5.4.2 Competitive Learning

Another principle for neural computation is that when an input pattern is
presented to the network, different nodes compete to be winners with a high level of
activity. The competitive process involves self-excitation and mutual inhibition
among nodes, until a single winner emerges. The connections between the input
nodes and the winner node are then modified, increasing the likelihood that the same
winner continues to win in future competitions (for input patterns similar to the ones
that caused the adaptation). This leads to the development of neural networks in
which each node specializes to be the winner for a set of similar patterns.

Competition may be viewed as the consequence of resources being limited,
drawing on the analogy of an ecological system. A competitive mechanism can be
viewed as a way of ensuring a selective neural response to various input stimuli.
Resource conservation is also achieved by allowing connection strengths to decay
with time.

2.5.4.3 Feedback-Based Weight Adaptation

Living things learn based on the feedback from their actions on their
surroundings and environment. Positive feedback reinforces the creature’s behavior
in response to the presented input. In the context of a neural network, if increasing a
particular weight leads to diminished performance or a larger error, then that weight
is decreased as the network is trained to perform better.

The amount of change made at every step is very small in most networks, to
ensure that a network does not stray too far from its partially evolved state and so
that the network can withstand some mistakes made by the teacher or the
feedback/performance-evaluation mechanism. If the incremental change is
infinitesimal, however, the neural network will require excessively long training
times. Some training methods therefore vary the rate at which the network is
modified.

2.5.5 Advantages and Limitations of ANN

Although the ANN may seem to be a perfect and flawless system, it also has
some limitations that can affect the results and performance of any mechanism that
implements it. The advantage of neural networks lies in their ability to represent
both linear and non-linear relationships and in their ability to learn these
relationships directly from the data being modeled. Generally, the ANN has several
advantages, as described by Baughman and Liu (1995):

i. Distribution of information over a field of nodes. This feature allows


greater flexibility and robustness of ANN because a slight error or
failure in certain sections of the network will not affect the entire system.

ii. Learning ability of ANN. ANN is able to adjust its parameters in order to
adapt itself to changes in the surrounding systems by using an error-
correction training algorithm.

iii. Extensive knowledge indexing. ANN is also able to store a large amount
of information and access it easily when needed. Knowledge is kept in the
network through the connection between nodes and the weights of every
connection.

iv. Imitation of the human learning process. The network can be trained
iteratively by tuning the strengths of the parameters based on
observed results. The network can develop its own knowledge base and
determine cause and effect relations after repeated training and
adjustments.

v. Potential for on-line use. Once trained, ANN can yield results from a
given input relatively quickly, which is a desired feature for the on-line
use.

In contrast, some of the limitations of ANN are also summarized by Baughman and
Liu (1995):

i. Long training time. Training an ANN can take a long time, especially
for large networks.

ii. Requires a large amount of data. The ANN needs a large amount of input-output data
for a better generalization. Therefore, if there is only a small amount of input-
output data available, ANN may not be suitable for modeling the system.

iii. No guarantee of optimal results or reliability. Although the network contains
parameters that can be tuned by the training algorithm, there is no guarantee
that the resulting model is perfect for the system. The tuned model may be
accurate in one region but inaccurate in another.

iv. Difficulty in selecting a good set of input variables. Selection of input
variables is difficult because too many or wrongly selected input variables will
cause overfitting and poor generalization, while too few or inappropriate input
variables will lead to poor mapping of the system.

2.6 Summary

This chapter has introduced the fault and its principles. A fault can be
summarized as a deviation from the normal condition, and several types of fault
were discussed. The safety aspect is also covered in this chapter to justify the
importance of the fault detection system for the safety and reliability of the
process. Furthermore, fault detection is a fault monitoring process, and there are
several classes of fault according to the situation. Fault detection has been widely
studied over the past years, with many different approaches. In this research, fault
detection is implemented using the neural network due to some of its unique
abilities. The neural network has been reviewed in this chapter in terms of its
architecture and training, as well as its limitations.
CHAPTER 3

PLANT SIMULATION

3.1 Introduction

In recent years, studies on plant-wide design, control and optimization have
been carried out on plant simulations to generate better control systems and optimize
the process. One of the most popular plant simulations is the Tennessee Eastman
challenge process, which was proposed by Downs and Vogel (1993). Later, an
additional model of a large and industrially relevant system, a Vinyl Acetate
monomer (VAC) manufacturing process, was published by Luyben and Tyreus
(1998). The VAC process contains several standard unit operations that are typical of
many chemical plants. Both gas and liquid recycle streams are present, as well as
process-to-process heat integration. Luyben and Tyreus presented a plant-wide
control test problem based on the VAC process. This research focuses on the VAC
plant simulation as its case study.

3.2 Process Description

In the VAC process, there are 10 basic unit operations, which include a
vaporizer, a catalytic plug flow reactor, a feed-effluent heat exchanger (FEHE), a
separator, a gas compressor, an absorber, a carbon dioxide (CO2) removal system, a
gas removal system, a tank for the liquid recycle stream, and an azeotropic
distillation column with a decanter. Figure 3.1 shows the process flow sheet with
locations of the manipulated variables. The numbers on the streams are the same as
those given by Luyben and Tyreus. In total, the VAC MATLAB model includes 246
states, 26 manipulated variables, and 43 measurements. There are seven chemical
components in the VAC process. Ethylene (C2H4), pure oxygen (O2), and acetic acid
(HAc) are converted into the vinyl acetate (VAc) product, and water (H2O) and
carbon dioxide (CO2) are by-products. An inert, ethane (C2H6), enters with the fresh
C2H4 feed stream. The following reactions take place:

C2H4 + CH3COOH + ½O2 → CH2=CHOCOCH3 + H2O   (3.1)

C2H4 + 3O2 → 2CO2 + 2H2O   (3.2)

Figure 3.1 PFD of Vinyl Acetate Process Plant

3.3 Modeling the Vinyl Acetate Process

This section discusses the design assumptions, equipment data, and modeling
formulations for each unit operation. The design details differ slightly from those in
Luyben and Tyreus, and the reasons for these differences are explained. The
simulation model used for each major unit is discussed in detail after a brief
discussion of the thermodynamics and physical property data. For each unit, the state
and manipulated variables are identified:

a) Thermodynamics and Physical Property Data


In the MATLAB model, the vapor-liquid equilibrium (VLE) calculations are
performed assuming an ideal vapor phase and a standard Wilson liquid activity
coefficient model. The Wilson parameters and molar volumes are listed in
Appendix A1, and they are obtained directly from the TMODS model, which is a
proprietary DuPont in-house simulation environment. The molar volumes are
different from those given in Luyben and Tyreus, who only gave some of these
values. The pure component physical property data are the same as those given in
Appendix A6, except that the molecular weights are calculated to three decimal
places rather than two decimal places. The reason for this change is that if the
molecular weights given in the reference are used, a slight generation of moles
results from the round-off of the molecular weights and the overall material balance
is not satisfied. The component vapor pressures are calculated using the Antoine
equation.
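As a minimal illustration of an Antoine-type vapour pressure calculation in MATLAB,
the fragment below uses the form ln P = A + B/(T + C); both this form and the
coefficient values are assumptions made for the sketch and are not the coefficients
used in the VAC model.

% Antoine-type vapour pressure calculation (illustrative coefficients only).
A = 9.39;  B = -2788.5;  C = -52.4;   % hypothetical Antoine coefficients
T = 150 + 273.15;                     % temperature, K (heater exit of 150 degC)
Psat = exp(A + B / (T + C));          % vapour pressure from ln P = A + B/(T + C)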

b) The Vaporizer
The vaporizer is implemented as a well-mixed system with seven components. It
has a gas input stream (F1), which is a mixture of the C2H4 feed stream and the
absorber vapor effluent stream. It also has a liquid input stream (F2), which
comes from the HAc tank. There are 8 state variables in the vaporizer, including
the liquid level, the mole fractions of O2, CO2, C2H4, VAc, H2O, and HAc
components in the liquid, and the liquid temperature. The liquid level is defined
by the ratio of the liquid holdup volume to the total working volume. Since the dynamics of the vapor phase are ignored, total mass, component, and energy balances are used to calculate the liquid-phase dynamics:

ρ_L^VAP (dV_L^VAP/dt) = F_1^VAP MW_1^VAP + F_2^VAP MW_2^VAP − F_V^VAP MW_V^VAP      (3.3)

M_L^VAP (dx_{L,i}^VAP/dt) = F_1^VAP (x_{1,i}^VAP − x_{L,i}^VAP) + F_2^VAP (x_{2,i}^VAP − x_{L,i}^VAP) − F_V^VAP (y_{V,i}^VAP − x_{L,i}^VAP)      (3.4)

Cp_L^VAP M_L^VAP (dT_L^VAP/dt) = F_1^VAP (h_1^VAP − h_L^VAP) + F_2^VAP (h_2^VAP − h_L^VAP) − F_V^VAP (H_V^VAP − h_L^VAP) + Q^VAP      (3.5)

Vapor-liquid equilibrium (VLE) is assumed in the vaporizer, and as a result, the vaporizer pressure and the vapor compositions are determined by a bubble-point calculation. Two manipulated variables (Q^VAP and F_V^VAP) are available in the vaporizer. In the base operation, the liquid holdup, V_L^VAP, is 2.8 m3, which is 70% of the working level volume. The vaporizer is followed by a heater, and the heater duty is a manipulated variable. In the base operation, the heater exit temperature is specified to be 150 °C.
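Equations 3.3 to 3.5 can be written directly as the right-hand sides of a small set of ordinary differential equations. The sketch below is only an illustration of those balances under the stated assumptions; the structure fields used to pass the stream data are hypothetical and do not reproduce the actual VAC model interface.

function dxdt = vaporizer_rhs(x, p)
% VAPORIZER_RHS  Liquid-phase balances of Eqs. (3.3)-(3.5), illustrative only.
%   x : [V_L; x_L(1:6); T_L] - level volume, six liquid mole fractions, temperature
%   p : struct with fields F1, F2, FV (molar flows), MW1, MW2, MWV (stream
%       molecular weights), x1, x2, yV (6x1 compositions), h1, h2, HV, hL
%       (enthalpies), CpL, rhoL, ML (liquid heat capacity, density, holdup), Q
dVdt  = (p.F1*p.MW1 + p.F2*p.MW2 - p.FV*p.MWV) / p.rhoL;           % Eq. (3.3)
dxidt = (p.F1*(p.x1 - x(2:7)) + p.F2*(p.x2 - x(2:7)) ...
         - p.FV*(p.yV - x(2:7))) / p.ML;                           % Eq. (3.4)
dTdt  = (p.F1*(p.h1 - p.hL) + p.F2*(p.h2 - p.hL) ...
         - p.FV*(p.HV - p.hL) + p.Q) / (p.CpL*p.ML);               % Eq. (3.5)
dxdt = [dVdt; dxidt; dTdt];
end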

c) Catalytic Plug Flow Reactor


The reactor is implemented as a distributed system with ten sections in the axial
direction. The two irreversible exothermic reactions represented by Equations 3.1 and 3.2 take place here. In the MATLAB model, the following assumptions are made for the purpose of model simplification:

i. Plug flow is assumed so that there are no radial gradients in velocity,


concentration, or temperature. Diffusion occurring in the axial direction is
considered negligible compared to the bulk flow. Potential and kinetic energy
and work are considered negligible in the energy balance calculation.

ii. It is assumed that the mass and heat transfer between the fluid and catalyst
are very fast and therefore the concentrations and temperatures in the two
phases are always equal.

iii. Pressure drop is assumed linear along the length of a tube, and it is time-
independent. Equation 3.6 is used to calculate the pressure drop in each
section:

∆P/∆Z = f ρ_1^RCT (ν_1^RCT)^2      (3.6)

where ∆P/∆Z is the pressure drop per unit length (psia/m), f is a constant friction factor, ρ_1^RCT is the mass density of the feed stream (kg/m3), and ν_1^RCT is the volumetric flowrate of the feed stream (m3/min). The value of f is taken directly from the TMODS model instead of being calculated with the Ergun equation.

iv. As stated earlier, the shell temperature is assumed uniform, and it is used as a
manipulated variable in the MATLAB model. Thus, the steam drum
dynamics are not modeled. Material and energy balances on the reactor,
which are based on a tubular reactor dynamic model developed by Reyes and Luyben, are given by Equations 3.7 and 3.8:

ε (∂C_{i,j}/∂t) = −∂(C_{i,j} V_i)/∂z + φ_i ρ_b (θ_{1,j} r_{1,i} + θ_{2,j} r_{2,i})      (3.7)

(ε Σ_{k=1..7} C_{i,k} Cp_{i,k} + ρ_b Cp_b) (∂T_i/∂t) = −∂[(ε Σ_{k=1..7} C_{i,k} Cp_{i,k}) T_i V_i]/∂z − φ_i ρ_b (r_{1,i} E_1 + r_{2,i} E_2) − Q_i^RCT      (3.8)

where index i represents the section number and index j represents the component, φ_i is the catalyst activity in section i, given by Equation 8 in the reference, θ_{1,j}, θ_{2,j} are the stoichiometric coefficients of component j in the two reactions, r_{1,i}, r_{2,i} are the reaction rates in section i, given in the reference, and E_1, E_2 are the heats of reaction. Q_i^RCT is the external heat flux per unit volume in section i, and it is calculated by Q_i^RCT = UA (T_i − T_S), where T_S is the shell temperature. In the MATLAB model, the molar concentrations of components O2, CO2, C2H4, VAc, H2O, and HAc and the tube temperature in each section of the reactor are state variables; in total, 70 state variables are present in the reactor. The molar concentration of C2H6 can be calculated from the ideal gas law. Only one manipulated variable, T_S, is available in the reactor. In the base operation, the reactor exit temperature is equal to 159.17 °C.
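To make the discretization concrete, the sketch below evaluates the right-hand side of Equation 3.7 for one component in one axial section, using a simple upwind difference for the convective term, and computes the external heat flux Q_i^RCT = UA(T_i − T_S) of the energy balance. It is a minimal, hypothetical sketch: the argument list and any example values are assumptions and not the actual reactor code.

function [dCdt, Q_ext] = reactor_section_rhs(C, C_up, v, dz, phi, rho_b, ...
                                             theta1, theta2, r1, r2, eps_bed, UA, T, Ts)
% REACTOR_SECTION_RHS  Component balance of Eq. (3.7) for one axial section.
%   C, C_up : molar concentration in this section and in the upstream section
%   v, dz   : gas velocity and section length (upwind convective difference)
%   r1, r2  : reaction rates of reactions (3.1) and (3.2) in this section
conv  = -(C*v - C_up*v) / dz;                   % -d(C*v)/dz, upwind approximation
gen   = phi * rho_b * (theta1*r1 + theta2*r2);  % catalytic generation/consumption
dCdt  = (conv + gen) / eps_bed;                 % Eq. (3.7) rearranged for dC/dt
Q_ext = UA * (T - Ts);                          % external heat flux to the shell
end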

d) Feed Effluent Heat Exchanger (FEHE)


In the MATLAB model, the NTU-Effectiveness method is used to calculate the
steady state exchanger exit temperatures and the exact FEHE dynamics are not
modeled. A small time constant is added to the exit temperature sensors to
capture temperature dynamics. The inverse of the total thermal resistance, UA, is
calculated by Equation 3.9, which shows that the effective UA is a function of
the mass flow rates of the two streams:

UA = UA_0 [ (F_1^FEHE / F_C_REF)^0.8 + (F_2^FEHE / F_H_REF)^0.8 ] / 2      (3.9)

where F_1^FEHE is the mass flowrate of the cold stream and F_2^FEHE is the mass flowrate of the hot stream. There is one manipulated variable, the bypass ratio, and no state variable in the FEHE. In the base operation, the FEHE hot effluent temperature is equal to 134 °C.
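Equation 3.9 can be coded as a one-line helper. The sketch below is illustrative; only the reference values quoted in the usage comment come from Appendix A6, and the function name itself is an assumption.

function UA = fehe_ua(F1, F2, UA0, FC_REF, FH_REF)
% FEHE_UA  Effective UA of the feed-effluent heat exchanger, Eq. (3.9).
%   F1, F2 - mass flowrates of the cold and hot streams (kg/min)
UA = UA0 * ((F1/FC_REF)^0.8 + (F2/FH_REF)^0.8) / 2;
end
% With the Appendix A6 reference data, fehe_ua(498.95, 589.67, 113.35, 498.95, 589.67)
% returns the reference value UA0 = 113.35 kcal/(min*°C) at the base flowrates.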

e) Separator
In the MATLAB model, the separator is modeled as a partial condenser. At each
point in time, a steady state equilibrium-flash calculation is carried out to obtain
the flow rates and properties of the vapor and liquid streams immediately after the
pressure letdown valve on the separator feed stream. The pressure letdown valve
is not shown in the process flowsheet in Figure 3.1. A standard algorithm is used
to solve the isothermal flash problem, assuming that the flash temperature and
pressure are known. In reality, the flash temperature cannot be easily obtained.
The amount of the stream that condenses is a function of the heat removed, but the
heat removed is a function of the flash temperature, which, in turn, is determined
by the amount of the stream that condenses.

In the MATLAB model, the flash temperature is approximated by adding 5 °C to
the cooling jacket temperature, and the jacket is assumed well mixed so that the
jacket temperature is uniform. Then the vapor and liquid streams are split into the
vapor and liquid phases respectively. It is assumed that there is no driving force
for material and heat transfer between the two phases. In the vapor phase, it is
assumed that the vapor volume, which represents the total gas loop volume, is a
constant. A mass balance is used to model the vapor pressure dynamics. In the
MATLAB model, the separator vapor exit stream flowrate is fixed. In the liquid
phase, a total energy balance determines the temperature dynamics. There are 16
state variables in the separator, including the liquid level, vapor phase pressure,
mole fractions of components O2, CO2, C2H4, VAc, H2O, and HAc, and
temperatures in both phases. The ideal gas law is applied to the vapor phase. In
the separator, three manipulated variables are available, the liquid exit stream
flowrate, the vapor exit stream flowrate, and the cooling jacket temperature. In
the base operation, the liquid holdup is 4 m3, which is 50% of the working level volume. The separator pressure is 84.25 psia, and the separator liquid phase temperature is 40 °C.
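The equilibrium-flash step described above can be illustrated by a standard Rachford-Rice solution of the isothermal flash at a known temperature and pressure. This is a generic sketch and not the algorithm used in the VAC MATLAB code; the K-values are assumed to come from the VLE relations discussed earlier.

function [beta, x, y] = isothermal_flash(z, K)
% ISOTHERMAL_FLASH  Rachford-Rice flash at fixed T and P (illustrative sketch).
%   z - feed mole fractions, K - equilibrium ratios y_i/x_i at the flash T and P
rr   = @(b) sum(z .* (K - 1) ./ (1 + b .* (K - 1)));  % Rachford-Rice function
beta = fzero(rr, 0.5);                                % vapor fraction
beta = min(max(beta, 0), 1);                          % keep within physical bounds
x = z ./ (1 + beta .* (K - 1));                       % liquid composition
y = K .* x;                                           % vapor composition
end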

f) Compressor
In the MATLAB model, the pressure increase across the compressor is calculated by Equations 3.10 and 3.11:

P_OUT^COM = P_IN^COM + ∆P      (3.10)

∆P = γ ρ^COM      (3.11)

where γ is the compressor coefficient and ρ^COM is the compressor inlet stream density. The exit temperature is calculated by assuming an isentropic compression. The compressor is followed by a cooler, and the cooler duty is a manipulated variable. In the base operation, the cooler exit temperature is 80 °C.

g) Absorber
In the MATLAB model, the gas absorber is divided into two parts. The top part
contains six theoretical stages. Its inlet liquid stream is from the HAc tank, and its
inlet vapor stream is from the top of the bottom part of the absorber. The bottom
part contains two theoretical stages. Its inlet liquid stream is a combination of the
liquid stream from the top part and a circulation stream. Its inlet vapor stream is
from the compressor. It is assumed that the absorber pressure, which is specified
at 128 psia in the base operation, is uniform in the two parts of the absorber. On
each stage, the liquid and vapor phases are not in equilibrium, and a rate-based
model is implemented to capture the liquid phase dynamics. The vapor phase
dynamics are ignored. On each stage, the mass transferred from the vapor phase to
the liquid phase is given by Equation 3.12:
N_i = min{ N_MT (y_i − y_{INT,i}), 0.5 F_{V,i} y_i }      (3.12)

where N_i is the molar flowrate of component i (kmol/min), N_MT is a constant mass transfer coefficient, y_i is the mole fraction of component i in the vapor inlet stream, y_{INT,i} is the mole fraction of component i at the gas-liquid interface, which is obtained from an equilibrium calculation using the liquid phase compositions and temperature, and F_{V,i} is the mole flowrate of component i in the inlet vapor stream. To avoid a large mass-transfer rate between the two phases, it is assumed that the largest amount of component i transferred between the two phases is half of the amount of component i in the inlet vapor stream. The heat transferred from the vapor phase to the liquid phase is given by:

Q_j = Q_{MT,j} (T_{V,j} − T_{L,j})      (3.13)

where Q_j is the heat transferred between the two phases on stage j (kcal/min), Q_{MT,j} is a constant heat transfer coefficient, T_{V,j} is the temperature of the vapor inlet stream, and T_{L,j} is the temperature of the liquid phase. During stage-to-stage calculations, total mass, component, and energy balances around the vapor phase are used to calculate the vapor exit stream flowrate, composition, and temperature. Total mass, component, and energy balances around the liquid phase, similar to Equations 3.3 to 3.5, are used to model the absorber dynamics. In the energy balance, the enthalpy of the material transferred between the two phases is accounted for.
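The stage-wise transfer rates of Equations 3.12 and 3.13 can be evaluated with a short helper such as the one below. The function is an illustrative sketch; the coefficients correspond to the constants N_MT and Q_MT listed in Appendix A6, but the signature and any example values are assumptions.

function [Ni, Qj] = absorber_transfer(yi, yint, Fvi, TV, TL, Nmt, Qmt)
% ABSORBER_TRANSFER  Rate-based interphase transfer on one absorber stage.
%   Eq. (3.12): component transfer, capped at half the inlet vapor amount
%   Eq. (3.13): interphase heat transfer
Ni = min(Nmt .* (yi - yint), 0.5 .* Fvi .* yi);   % kmol/min for each component
Qj = Qmt * (TV - TL);                             % kcal/min on this stage
end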

j) Azeotropic Distillation Tower


The distillation column contains 20 theoretical stages, whose liquid holdup can
vary. It is assumed that the column is homogeneous, and only one liquid phase is
present. To reduce the system stiffness, the pressure profile in the column is
assumed known. A bubble-point calculation is used to determine temperature and
compositions on each stage, and then the energy balance is used to solve for the
vapor flowrate from stage to stage. The decanter is modeled in the same way as
discussed in Section 5.5 of the reference. Since the Wilson model cannot be used in the decanter because of the liquid-liquid equilibrium, the equilibrium partition coefficients, β, used in the decanter are assumed constant and independent of temperature. It is also assumed that the temperatures of the two liquid phases in the decanter are always the same. In total, there are 69 state variables in the distillation
column. There are six manipulated variables, reflux flowrate, reboiler duty,
condenser duty, organic product flowrate, aqueous product flowrate and bottom
flowrate. In the base operation, the bottom liquid holdup is 2.33 m3, which is 50%
of the working level volume. The organic liquid holdup and the aqueous liquid

holdup are 0.85 m3, which are 50% of their working level volumes. In the base
operation, the decanter temperature is 45.85 oC.

k) HAc Tank
The HAc tank is only used to mix the liquid recycle stream and the fresh HAc
feed stream. In total, there are 4 state variables in the tank: the liquid holdup, the mole fractions of VAc and HAc in the liquid, and the liquid temperature. The flowrates of all the streams connected to the tank are manipulated variables. An HAc tank was also used in the earlier publications on the VAC process, but it was not shown in the process flowsheets given there.

3.4 Steady State Data and Dynamic Simulation

The steady state data for the VAC process are obtained after a control structure similar to that developed by Luyben et al. in 1999 is implemented. The control system used is shown in Appendix A4; its major loops are the same as those used by Luyben et al., but there are some small differences due to differences in the control structure, in how the loops were tuned (see the discussion below), and in the simplifications used in the dynamic model discussed above. The initial values of all the state variables and manipulated variables come from the TMODS results, and the MATLAB model converges to a steady state (the base operation) that is very close to the TMODS results. The steady state values of the manipulated variables are given in Appendix A1. The control structure and controller parameters are given in Appendix A2. Steady state values for the measurements are listed in Appendix A3. Four set point disturbances are used to illustrate the dynamic behavior of the MATLAB model with the control structure implemented.
3.5 VAC Plant MATLAB Program

The VAC plant simulation is implemented in MATLAB. The program was created by Chen and David in 2002. The model equations for the Vinyl Acetate monomer process have been coded in MATLAB and then translated into the C programming language. The C-coded files have been written in such a way that they can be compiled into “MEX functions”. As a result, the C-coded model becomes available from within the MATLAB environment. The purpose of providing an interface between the C-coded model and MATLAB is to obtain a very high execution speed and, at the same time, take advantage of the excellent graphing, data analysis, and advanced control functionalities available in the numerous MATLAB toolboxes. It is important to note that the analyzer/transmitter time lags and time delays are not implemented in the VAModel subroutine discussed above.

The user needs to include the time lags and time delays in the code that is used to control the process. The m-file test_VAcPlant(t, ID) gives details on how to control the VAC process with a multiloop SISO architecture. In this routine the transmitter lags are assumed to be 3 seconds, and the two analyzers on the gas recycle and column bottoms also have a 10-minute time delay. An Euler integration approach with a 1/3 s time step is used to calculate the dynamic responses. A 1 s sampling time is used for the controllers and transmitters, except for the controllers that are involved with the analyzers that have a 10-minute time delay; these controllers have a 10-minute sampling time. The model is also integrated with eight different disturbance parameters in order to study the effect of disturbances. The disturbance criteria are listed in Table 3.1. Figures 3.2 to 3.6 show a 100-minute simulation run in MATLAB with zero disturbance (normal condition). Some variables have not yet reached their set points within this window, but eventually all variables reach steady state, depending on the controller action and the behavior of each parameter. The full lists of the controlled variables and manipulated variables in the VAC plant are given in the Appendix.
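The paragraph above describes how the compiled model is driven from MATLAB. The sketch below shows one possible Euler-integration driver of the kind described; the function name vacplant_mex, its input/output signature, and the placeholder initial values are hypothetical assumptions, since the calling convention of the actual VAModel/MEX routine is not reproduced here.

% Illustrative Euler-integration driver for the compiled plant model (sketch only).
dt    = (1/3)/60;          % integration step: 1/3 s expressed in minutes
t_end = 100;               % simulate 100 minutes
ID    = 0;                 % disturbance ID from Table 3.1 (0 = no disturbance)
x     = zeros(246,1);      % placeholder state vector; real values come from TMODS
u     = zeros(26,1);       % placeholder MVs; base-case values are in Appendix A1
for t = 0:dt:t_end
    % controller code would update u here (1 s sampling; 10 min for analyzer loops)
    dxdt = vacplant_mex(t, x, u, ID);   % hypothetical call to the MEX model
    x    = x + dt*dxdt;                 % explicit Euler step
end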
37

Table 3.1 Disturbance available in VAC plant simulation

ID Disturbance Criteria
0 no disturbance
1 setpoint of the reactor outlet temperature decreases 8 degC (from 159 to 151)
2 setpoint of the reactor outlet temperature increases 6 degC (from 159 to 165)
3 setpoint of the H2O composition in the column bottom increases 9% (from 9% to 18%)
4 the vaporizer liquid inlet flowrate increases 0.44 kmol/min (from 2.2 to 2.64)
5 HAc fresh feed stream lost for 5 minutes
6 O2 fresh feed stream lost for 5 minutes
7 C2H6 composition changes from 0.001 to 0.003 in the C2H4 fresh feed stream
8 column feed lost for 5 minutes

Figure 3.2 Control Variable: CV 1 – CV 8 in VAC Plant


Figure 3.3 Control Variable: CV 9 – CV 16 in VAC Plant

Figure 3.3 Control Variable: CV 17 – CV 24 in VAC Plant


Figure 3.4 Manipulated Variable: MV 1 – MV 8 in VAC Plant

Figure 3.5 Manipulated Variable: MV 9 – MV 16 in VAC Plant


Figure 3.6 Manipulated Variable: MV 17 – MV 24 in VAC Plant

Simulation Data Validation

The VAC plant simulation data had to be validated against the actual process to ensure its reliability, accuracy, and relevance. The VAC simulation is compared with the actual plant for several variables, as shown in Table 3.2. The agreement is very close, and the VAC plant simulation is therefore considered a reliable and relevant plant simulation.
3.6 Summary

The VAC plant simulation is a well established plant simulation similar to the Tennessee Eastman plant simulation, and it provides robust and reliable simulation results. The VAC process contains 10 basic unit operations, and the VAC MATLAB model includes 246 states, 26 manipulated variables, and 43 measurements. There are seven chemical components in the VAC process. With the current results, the plant is a suitable choice for the implementation of a fault detection system, since, based on the Science Direct database, no other researchers have so far applied a fault detection system to the VAC plant.
Table 3.2 The comparison of the VAC plant simulation with actual plant process on selected stream.

Reactor Out Absorber Vapor Out Organic Product Aqueous Product


Plant Simulation Plant Simulation Plant Simulation Plant Simulation
O2(mol
fract) 0.049 0.049 0.058 0.058

CO2 0.011 0.011 0.014 0.014

C2H4 0.551 0.551 0.658 0.658

C2H6 0.221 0.221 0.263 0.263


VAc 0.043 0.043 0.002 0.002 0.95 0.95 0.002 0.002

H2O 0.055 0.055 0.001 0.001 0.05 0.05 0.998 0.998

HAc 0.07 0.07 0.004 0.004 370 370 370 370

Reactor Feed Temperature(oC) 148.5 148.5

Absorber Feed Temperature(oC) 80 80


Reactor Feed Pressure(psia) 128 128
CHAPTER 4

METHODOLOGY

4.1 Phases in research

The objective of this research is to develop a fault detection system based on a neural network for the Vinyl Acetate process plant. In order to achieve this objective, this chapter is devoted to elaborating the methodology of the research, which has been divided into six phases:

Phase1: Project conception, software familiarization and literature review

i. Preliminary discussions and brainstorming – a general briefing by the supervisor regarding the project. This involved discussion of the case study background, the basics of neural networks and genetic programming, the general structure of the thesis, and an introduction to the fault detection mechanism.

ii. Software familiarization – general reading about and testing of the software to be used, with the appropriate methods applied to the system in the project.

iii. Literature review – a study related to the project is conducted by referring to journals, books, and related material. The information and knowledge gathered become the foundation of the thesis.
Phase 2: Data collection

i. Plant simulation
a. Plant simulation involving MATLAB software runs on the
computer.
b. Comparison of simulation data with the actual plant data. This is
to validate the model of the plant developed within MATLAB
environment.
c. Data analysis on key variables is obtained via simulation of the
plant model.

ii. Data collection from the plant control system.
a. Analysis of data gathered from the distributed control system
(DCS) records to obtain the overall dynamic behavior of the
process.

Phase 3: Development and implementation of Neural Network (NN) scheme as fault


detection for Vinyl Acetate process

i. The variables and measurements on the equipment that will be
monitored for faults are selected in the Vinyl Acetate
process.

ii. Data on the variables based on the simulation of Vinyl Acetate


process is applied as the input.

iii. Neural network scheme for Vinyl Acetate process will be


developed and implemented:
- Develop Estimator (Elman backpropagation network)
- Develop Classifier (Feedforward backpropagation network)

iv. Neural network training and validation.


v. Dynamic responses on the test-bed will be executed to produce
the desired outputs.

vi. All tasks will be conducted within MATLAB software.

Phase 4: Model Testing and Validation


The neural network was tested on different sets of data from different disturbances in the Vinyl Acetate process simulation.

Phase 5: Performance evaluation for the proposed scheme


Recommendations to further improve the performance of the proposed schemes will
be suggested.

Phase 6: Thesis writing


Completing thesis writing with guidance from the supervisors.

4.2 Fault detection scheme

For the fault detection scheme, two types of network are needed: a predictor, to predict the fault-free behavior of the selected process variables, and a classifier, to classify the type of fault. In this study, the focus is on manipulated variables (MV) of the column. Three MVs have been chosen for study: the Column Reflux Flowrate set point, the Column Condenser Duty, and the Organic Product Flowrate. An Elman network is used for the predictor, while a multilayer feedforward neural network is used for the classifier. These two networks are trained using the Levenberg-Marquardt learning algorithm. The input to the classifier is the residual signal from the predictor, and the outputs of the classifier are set between the values of 0 and 1. In this study, the classifier is designed in such a way that the faults are monitored and an alarm signal is generated when the classifier's output reaches the output index threshold value. The threshold value is an assigned limit on the residual; when the residual goes beyond this value, it indicates that the monitored variable has deviated from its normal operating condition.
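As a simple illustration of the residual-and-threshold logic described above, the sketch below compares a measured signal with the estimator prediction and raises an alarm whenever the residual exceeds a threshold. The function name, the variables, and the threshold value are hypothetical; this is not the thesis code itself.

function alarm = fault_alarm(y_meas, y_hat, threshold)
% FAULT_ALARM  Flag samples whose residual exceeds the threshold (sketch).
%   y_meas    - measured (actual) variable from the simulation
%   y_hat     - fault-free estimate from the predictor network
%   threshold - assigned limit on the scaled residual (e.g. 0.05, hypothetical)
residual = y_meas - y_hat;              % close to zero in normal operation
alarm    = abs(residual) > threshold;   % logical alarm index for each sample
end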
4.3 Summary

There are six phases in completing this research. The primary aspect is the project conception and literature review. From here, the information is evaluated and used to develop a fault detection system using the most suitable approach. In this case, a neural network is chosen as the method and the Vinyl Acetate plant is selected as the case study. The next stage is to develop the neural networks as predictor and classifier; both undergo training and validation. After they have shown good results, the system is implemented on the VAC plant. All the results and findings are documented and discussed in the thesis.
CHAPTER 5

PROCESS ESTIMATION FOR FAULT DETECTION

5.1 Introduction

The model-based technique proposed by Ahmad and Leong (2001) is further investigated in this study. The proposed fault detection and diagnosis scheme is implemented for several manipulated variables at the column of the Vinyl Acetate plant (VAC) shown in Figure 3.1. The scheme is hierarchical in structure and involves two types of model, an estimator and a classifier, both based on neural networks. The first is the process estimator, which estimates the normal or fault-free process behavior. The estimated output is then compared to the actual measurement, and the differences between the two, termed residuals, are sent to the second model, which serves as a classifier. The role of the classifier is to classify the residuals into a number of distinguishable patterns corresponding to different faulty situations. When a fault occurs, the residuals take finite values; a value of zero is expected when the plant is in normal operating condition.

5.2 Design of Process Estimator

The purpose of the process estimator is to predict the fault-free or normal condition of the process based on data from the plant simulation. The process estimator is built from an Elman network, which is a type of recurrent neural network. A recurrent network is used because the Elman network differs from conventional two-layer feedforward networks in that its first layer has a recurrent connection. The delay in this connection stores values from the previous time step, to be applied in the current time step. Because the network can store information for future reference, it is able to learn temporal as well as spatial patterns; the Elman network can be trained to respond to, and to generate, both kinds of patterns. This ability is very important for simulating behavior that is close to the actual process.

Figure 5.1 The Neural Network based fault detection scheme (Hamid, 2004)

The development of the neural network model follows the standard procedure of system identification. Generally, this standard procedure involves several steps to make sure that the model is properly developed:

i. Selection of input and output variable

ii. Selection of training and validation

iii. Selection of model structure


iv. Parameter estimation

v. Validating the resulting model

5.3 Selection of input and output variable

The selection of proper inputs and outputs is very important for the process estimator, so that the resulting generalization of the model is comparable with the actual process. In this study, the focus is on three output manipulated variables (MV) in the VAC plant. The variables are:

Table 5.1 The selected variable


MV    Description    Steady state    Unit
20 Column Reflux 4.9849 Kmol/min
22 Column Condenser Duty 60367 Kcal/min
23 Column Organic Exit 0.829 Kmol/min

All of these variables are located at the column. The selection of the inputs is made based on the effect of the input variables on the selected outputs. In order to simplify the model structure of the neural network, only input variables that have a significant impact on the process outputs are selected; the insignificant ones are neglected.

All the variables in the VAC plant can be obtained from the references. The VAC plant simulation results for all variables over 100 minutes, using the column feed lost disturbance (ID 8) for 5 minutes, are shown in the following figures.
Figure 5.2 Control Variable from CV 1 to CV 8

Figure 5.3 Control Variable from CV 9 to CV 16


Figure 5.4 Control Variable from CV 17 to CV 24

Figure 5.5 Manipulated Variable from MV 1 to MV 8


Figure 5.6 Manipulated Variable from MV 9 to MV 16

Figure 5.7 Manipulated Variable from MV 17 to MV 24

Figures 5.2 to 5.4 show the simulated responses of the controlled variables (CV). The list of all CVs for the VAC process is attached in Appendix A.2. CVs are the process variables that are controlled (Seborg et al., 2004), and the desired value of a CV is referred to as its set point (SP). Therefore, each parameter designated as a CV has a certain value, or set point, to be maintained; failure to maintain a CV can bring many problems in terms of safety and cost. Meanwhile, Figures 5.5 to 5.7 show the manipulated variables (MV). The list of all MVs for the VAC process is also attached in Appendix A.2. An MV is a variable that can be adjusted in order to keep the CV at its set point.

5.4 Selection of Model Structure

The artificial neural network structure used in this study is the Multi-Input Single-Output (MISO) network. The process estimator developed in this research is an Elman network, constructed to estimate the normal or fault-free process condition. Thus, actual process outputs cannot be used as inputs because they are affected directly by process faults; the network should be independent of the actual process outputs to enable the generation of residuals as a measure of the actual process departure from normal operating conditions.

Figure 5.8 Elman Network (Chen, 2004)

Three MISO networks are required, one for each selected process output. The schematic diagram of the MISO network is shown in Figure 5.10. Here, y1(t) is the network output, which is either the Column Reflux, the Column Condenser Duty, or the Column Organic Exit, while un(t) and un(t-1) are the process inputs. In this study, an Elman neural network is used, and the Levenberg-Marquardt algorithm is used for training and validation. The Levenberg-Marquardt (LM) method, which is a hybrid of the Gauss-Newton nonlinear regression method and the gradient steepest descent method, is recommended in most optimization packages such as MATLAB (Chen, 2004). Before any neural network is used, the simulation data have to be scaled and prepared according to the neural architecture. Scaling of the training data is needed to prevent data of larger magnitude from dominating the smaller values and impeding the learning process; data quality and preparation can affect the performance of the neural network. In order to find the best number of hidden nodes, the estimator (predictor) is simulated using 4 to 20 hidden nodes, and the number giving the lowest training Mean Square Error (MSE) is chosen for the estimator. The predictor is then run using the selected parameters to obtain the estimated process behavior of the VAC plant.

Table 5.2 Optimization for Predictor 1


Number of Hidden Nodes    TrainMSE    ValMSE
5 0.05144961 0.05144961
6 0.051604 0.051604
7 0.05164484 0.05164484
8 0.05181865 0.05181865
9 0.05178851 0.05178851
10 0.05177512 0.05177512
11 0.05181307 0.05181307
12 0.05170676 0.05170676
13 0.05178579 0.05178579
14 0.05178099 0.05178099
15 0.05180883 0.05180883
16 0.05181234 0.05181234
17 0.05179709 0.05179709
18 0.05180694 0.05180694
19 0.05177472 0.05177472
20 0.05180714 0.05180714
Table 5.3 Optimization for Predictor 2


Number of Hidden Nodes    TrainMSE    ValMSE
5 9.93057E-09 9.93057E-09
6 9.96513E-09 9.96513E-09
7 9.95108E-09 9.95108E-09
8 9.1371E-09 9.1371E-09
9 6.78365E-09 6.78365E-09
10 5.31317E-09 5.31317E-09
11 7.95535E-09 7.95535E-09
12 4.3876E-09 4.3876E-09
13 7.72217E-09 7.72217E-09
14 8.53923E-09 8.53923E-09
15 6.25844E-09 6.25844E-09
16 8.97656E-09 8.97656E-09
17 5.49157E-09 5.49157E-09
18 9.16284E-09 9.16284E-09
19 5.9356E-09 5.9356E-09
20 6.16571E-09 6.16571E-09

Table 5.4 Optimization for Predictor 3


Number of Hidden Nodes    TrainMSE    ValMSE
5 2.18217E-09 2.18217E-09
6 1.72855E-09 1.72855E-09
7 9.40844E-09 9.40844E-09
8 9.17998E-09 9.17998E-09
9 8.83618E-09 8.83618E-09
10 3.52288E-09 3.52288E-09
11 7.64483E-09 7.64483E-09
12 9.4637E-09 9.4637E-09
13 8.41106E-09 8.41106E-09
14 6.83433E-09 6.83433E-09
15 6.4302E-09 6.4302E-09
16 9.7801E-09 9.7801E-09
17 7.325E-09 7.325E-09
18 5.48838E-09 5.48838E-09
19 9.88087E-09 9.88087E-09
20 9.53641E-09 9.53641E-09

After the training and validation process, the best number of hidden neurons is 5 for Predictor 1, 12 for Predictor 2, and 6 for Predictor 3. These optimum hidden-neuron counts are used in the development of the classifier.
5.5 Selection of training and validation

After the output variables had been selected, the neural network had to be trained and validated before being applied to the VAC simulation data. A neural network can be trained in two different styles. In incremental training, the weights and biases of the network are updated each time an input is presented to the network, while in batch training the weights and biases are only updated after all the inputs have been presented. Training is important to obtain weights and biases that give estimates similar to the actual plant. Meanwhile, validation is the process of verifying the neural network using unseen data to test the reliability and robustness of the created network.

Figure 5.9 Column Reflux Training and Validation


Figure 5.10 Column Condenser Duty Training and Validation

Figure 5.11 Column Organic Exit Training and Validation


From the training and validation graphs above, the estimated process for the Column Reflux is not very accurate but is still within a reasonable range. As for the Column Condenser Duty and the Column Organic Exit, both variables achieve 100% similarity with the actual process. This is a clear indication that the neural network is working as expected.

5.6 Summary

From the results, the performance of the neural network as predictor is significant. Using an Elman network based on a Multi-Input Single-Output structure with the Levenberg-Marquardt training algorithm, all three tested parameters show positive results. The training and validation for the Column Condenser Duty and the Column Organic Exit achieved 100% accuracy between the simulated and the actual process, while the estimated process for the Column Reflux is not very accurate but still within a reasonable range. This shows that the development of the estimator is successful.
CHAPTER 6

NEURAL NETWORK FAULT CLASSIFIER

6.1 Introduction

The next stage of the neural network-based fault detection approach is decision making using another neural network model as a classifier. The residual vector has different structures for different faults. This feature can be used to detect these faults, and has been used in the neural network methods of Yu et al. (1996) and Ahmad and Leong (1997) using a Multilayer Feedforward (MLFF) neural network. In this work, the neural network classifier is simulated by a MLFF network, and three types of fault are studied based on the variables selected earlier. The classifier measures the deviation, or residual, between the actual and the predicted values. A sensor fault is then simulated as if the sensor encountered a sudden mechanical failure that caused a sustained bias in its measurement; the degree of bias is used as a measure of the severity of the sensor fault.

6.2 Fault Classifier

A Multilayer Feedforward neural network is used to develop the classifier. The design considerations involved in the neural network fault classifier are stated as follows:
i. Sensitivity Analysis
A sensitivity analysis is conducted to determine the degree of sensor bias that will cause violation of the operating limits. The analysis is done on a trial and error basis, with various sensor biases simulated at the steady state base case condition. A relatively small sensor bias will cause the process to move to another steady state condition, but if the bias is relatively large, the process will not be able to absorb the disturbance and will eventually go out of control and violate the operating limits. However, in this research the sensitivity analysis is not implemented, because of the simplicity of this study and because it is outside its scope.

ii. Preparation of training data and cross-validation data


The input data for the artificial neural network (ANN) classifier are the residual signals generated by the ANN predictor. The fault data are fed to the ANN predictor, and the responses of the process obtained are shown in Figure 6.1. Prior to presentation to the neural network classifier, the input data are scaled between 0 and 1. Scaling is necessary in order to bring the variables to the same order of magnitude and to avoid numerical instability problems.

iii. Network training and cross-validation


The Multilayer Feedforward neural network is trained with the Levenberg-Marquardt training algorithm. The training uses the data from the plant simulation, and the validation process happens simultaneously using another set of data, following the settings of the main Multilayer Feedforward neural network program in MATLAB. The training and validation run for up to 500 epochs.
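A classifier of the kind described above can be set up with the legacy Neural Network Toolbox as sketched below. The residual matrix R, the 0/1 target matrix T, the hidden-layer size, and the training/validation split are hypothetical assumptions; only the use of a feedforward network trained with Levenberg-Marquardt for up to 500 epochs follows the text.

% Multilayer feedforward fault classifier (illustrative sketch).
% R : scaled residual inputs (features x samples), values in [0, 1]
% T : target fault indicators (faults x samples), values in {0, 1}
net = newff(minmax(R), [10 size(T,1)], {'tansig','logsig'}, 'trainlm');
net.trainParam.epochs = 500;                      % up to 500 epochs, as in the text
trnInd = 1:2:size(R,2); valInd = 2:2:size(R,2);   % assumed data split
net  = train(net, R(:,trnInd), T(:,trnInd));      % Levenberg-Marquardt training
Yval = sim(net, R(:,valInd));                     % classifier outputs between 0 and 1
alarm = Yval > 0.5;                               % hypothetical output index threshold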
6.3 Fault Classifier Result

At the end of this research, the performance of the Multilayer Feedforward neural network as the fault classifier can be assessed from the following figures:

Figure 6.1 Fault Classifier graph on the Column Reflux


Figure 6.2 Fault Classifier graph on the Column Condenser Duty

Figure 6.3 Fault Classifier graph on the Column Organic Product


Based on a general overview of the fault classifier results, the neural network has managed to achieve its target. Each graph shows the residual versus time for one parameter; that is, the neural network calculates the error, or residual, between the predicted and actual process and plots the fault residual against time for better visualization. The threshold limit value is the boundary beyond which the deviation is classified as dangerous, and it can be set based on the average safe margin of the process limits. When the residual exceeds the threshold limit, the neural network sends a signal to the controller or raises a notification so that the residual is brought back to the safe condition. The classifier thus acts as a monitoring system alongside the controllers, observing any abnormality or fault that might happen at any time.

Based on Figure 6.1, the Column Reflux graph is stable and does not exceed the threshold limit value; therefore, no fault is flagged. Meanwhile, for the Column Condenser Duty in Figure 6.2, there are periods when the value passes the threshold limit, from about the 30th to the 50th minute and from the 85th to the 90th minute. In this case, the neural network detects the first deviation at the 30th minute and reacts in order to reduce the error; after about 10 minutes the process returns to normal. The same thing happens at the 85th minute, and the network responds by generating a signal to the relevant controller or equipment to restore the normal condition. For the last parameter, the Column Organic Product graph shows many small deviations. This is because the controller for this parameter is a Proportional (P) controller instead of a Proportional-Integral (PI) controller; a PI controller provides better control and maintenance of variables than a P controller (Seborg et al., 2004). Although there is a very fast and large deviation around the 17th minute, the process returns to the safe condition within a short time; the high amplitude of the deviation makes the signal very strong and so gives a fast counter-action.
6.4 Summary

The results for the fault classifier are very promising, and the developed neural network can detect the faults as expected. Among the parameters, the Column Reflux shows the most favorable performance. Since the VAC plant mainly uses PI and P controllers, the implementation of this fault detection scheme would provide substantial benefits.
CHAPTER 7

CONCLUSIONS AND RECOMMENDATIONS

7.1 Overview

Over the years, safety and cost have been the main reasons for the continuing study of better control and system management in the process industries. In the past, the control community showed how regulatory control could be automated using computers, thereby removing it from the hands of human operators. This has led to great progress in product quality and consistency, process safety, and process efficiency. The current challenge is the automation of AEM using intelligent control systems, thereby providing human operators with assistance in this most pressing area of need. People in the process industries view this as the next major milestone in control systems research and application.

There are many methods and approaches for handling AEM using intelligent control systems, and one of them is the implementation of a neural network as a fault detection system. A fault detection system is one of the main elements of safety measures in a chemical plant, and such a seemingly small system can make a large difference to the safety, reliability, and cost effectiveness of the process. Neural networks have the ability to process information characteristics such as nonlinearity and high parallelism, offer fault tolerance, and are capable of generalizing and handling imprecise information (Basheer and Hajmeer, 2000). Yet there are limitations on using neural networks to best effect, such as the need for a large supply of good data, a suitable network architecture, and a suitable training algorithm.

Nevertheless, the emergence of hybrid intelligent systems could improve the neural network if it is integrated with other intelligent techniques such as expert systems, statistical methods, fuzzy logic, wavelet transforms, and Genetic Algorithms (GA). Hybrid intelligent systems are expected to be the next-generation technology for plant-wide control and optimization.

7.2 Conclusion

At the end of this research, the implementation of the neural network for fault detection achieved the following objectives:

a. The data were successfully generated from the Vinyl Acetate plant process simulation.

b. The dynamic responses of both neural network models were implemented and achieved the desired outputs.

c. The development of the neural network as a process estimator using an Elman network and as a classifier using a Multilayer Feedforward neural network showed reliable and promising results.

d. The implementation of the neural network provided reliable predictions, and fault detection for the Vinyl Acetate plant was successfully developed.
7.3 Recommendations for Future Work

On the other hand, although the development of the neural network can be considered successful, there are still areas and aspects that can be improved in future work:

a. The quality and quantity of the data used with the neural network can be improved to provide better results.

b. The neural network can be provided with out-of-control or unseen data during training and validation to make it more reliable and robust.

c. Fault detection can be implemented on all the VAC plant equipment for a plant-wide control system.

d. Fault detection and diagnosis using neural networks can be developed to improve the capability of the fault detection system.
REFERENCE

Abdi, H., Valentin, D., & Edelman, B., Neural Networks, Thousand Oaks (CA):
Sage. (1999).

Ahmad, A., & A. Hamid., M. K. (2001), Neural Networks for Process Monitoring,
Control and Fault Detection: Application to Tennessee Eastman Plant,
Malaysian Science and Technology Congress, Melaka,

Ahmad, A., & Leong, W. H. (2001), Model-based Fault Detection Using


Hierarchical Artificial Neural Network. Regional Symposium on Chemical
Engineering, Bandung, 29-31 Oct.

Ahmad, A. & A. Hamid, M. K. (2002), Detection of Multiple Sensor Faults in a


Palm Oil Fractionation Plant using Artificial Neural Network, Regional
Symposium on Chemical Engineering (RSCE 2002) in conjunction with 16th
Symposium of Malaysian Chemical Engineers (SOMChE 2002), Petaling
Jaya.

Ahmad, A. & A. Hamid, M. K. (2003), Pipeline Leak Detection System in a Palm


Oil Fractionation Plant Using Artificial Neural Network, International
Conference on Chemical and Bioprocess Engineering (ICCBPE 2003), Kota
Kinabalu.

Ahmad, A. & A. Hamid, M. K. (2002), Detection of Sensor Failure in a Palm Oil


Fractionation Plant Using Artificial Neural Network, International
Conference on Artificial Intelligence Applications in Engineering and
Technology (ICAIET 2002), Kota Kinabalu, 17-18 June.
Basheer, I.A. and Hajmeer, M. (2000), Artificial Neural Networks: Fundamentals,


Computing, Design, and Application, Journal of Microbiological Methods,
43: 3–31.

Baughman, D., and Liu, Y. (1995). Neural Networks in Bioprocessing and Chemical
Engineering, Academic Press, San Diego, CA.

Choudhury, M. A. A. S., Shah, S. L., Thornhill, N. F., & Shook, D. S., (2006),
Automatic detection and quantification of stiction in control valves, Control
Engineering Practice, 14(12), 1395–1412.

Chen, J., & Patton, R. J., (1999), Robust model-based fault diagnosis for dynamic
systems, Boston: Kluwer.

Chen W. S., Application of Artificial Neural Network -Genetic Algorithm in


Inferential Estimation and Control of a Distillation Column, Master. Thesis.
Universiti Teknologi Malaysia; 2005.

Crowl, D. A. & Louvar, J. F., Chemical Process Safety: Fundamental with


Applications, Second edition, Upper Saddle River, NJ: Prentice Hall. 2002

Domínguez, E., Muñoz, J., A neural model for the p-median problem, Computers &
Operations Research 35 (2008) 404 – 416

Downs, J. J., & Vogel, E. F. (1993). A plant-wide industrial process control problem.
Computers and Chemical Engineering, 17(3), 245–255

Frank, P. M., (1990), Fault diagnosis in dynamic systems using analytical and
knowledge-based redundancy. Automatica, 26, 459–474.

Gertler, J. J., (1998), Fault detection and diagnosis in engineering systems, New York:
Marcel Dekker.

Hebb, D. O., The Organization of Behavior, Wiley, New York, 1949


Himmelblau, D.M. (1978). Fault detection and diagnosis in chemical and petrochemical
processes. Amsterdam: Elsevier Press

Isermann, R., (1997). Supervision, Fault-detection and Fault-diagnosis Methods - an


introduction. Control Eng. Practice. Vol. 5, No. 5, pp. 639-652.

Isermann, R., (2005), Model-based fault-detection and diagnosis – status and


applications, Annual Reviews in Control 29, 71–85

J.B. Hampshire, II and B.A. Pearlmutter, Equivalence proofs for multi-layer


perceptron classifiers and the Bayesian discriminant function. In: S.
Touretzky, G. Elman, T. Sejnowski and J. Hinton, Editors, Proceedings of the
1990 Connectionists Models Summer School, Morgan Kaufmann, San Mateo,
CA (1990).

K.P. Detroja, R.D. Gudib, & S.C. Patwardhan, (2007), Plant-wide detection and
diagnosis using correspondence analysis. Control Engineering Practice
doi:10.1016/j.conengprac.2007.02.007.

Lee, R. S. T., Fuzzy-Neuro Approach to Agent Applications, Springer-Verlag Berlin


Heidelberg (2006).

Lennox, B., Montague, G. A., Frith, A. M., Gent, C., & Bevan, V., Industrial
Application of Neural Networks-an investigation, Journal of Process Control
11 (2001) 497-507.

Luyben, M. & Tyreus, B. An Industrial Design/Control Study for the Vinyl Acetate
Monomer Process, Computers Chem. Engng, 1998, 22, 867.

Luyben, W., Tyreus, B., and Luyben, M., Plantwide Process Control, McGraw Hill,
New York, Chapter 11, 1999.

Mohd. Kamaruddin Bin Abd. Hamid, Multiple faults detection using artificial neural
network, Master. Thesis. Universiti Teknologi Malaysia; 2004.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in
nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133.

Michael Negnevitsky, Artificial Intelligence: A Guide to Intelligent Systems,


Edinburgh Gate, Harlow, Pearson Education Limited, 2002

Noorlisa Harun, Fault detection and diagnosis via improved multivariate statistical


process control, Master. Thesis. Universiti Teknologi Malaysia; 2005.

Patton, R. J., Frank, P. M., & Clark, P. N., (2000), Issues of fault diagnosis for
dynamic systems. Berlin: Springer.

Patterson D. W. (1996), Artificial neural Networks: Theory and Application, Prentice


Hall.

Ricker, N. L., (1995), Optimal Steady-State Operation of the Tennessee Eastman


Challenge Process, Computers Chem. Engng Vol.19. No. 9. pp. 949-959

Ruiz, D., Nougue´s, J. M., Puigjaner, L. (2001), Fault diagnosis support system for
complex chemical plants, Computers and Chemical Engineering, 25 151–160

Smagt, P. V. D. & Krose, B. J. A., An introduction to Neural Network, 8th edition,


The University of Amsterdam Publication, 1996.

Seborg, D. E., Edgar, T. F., & Mellichamp, D. A., Process Dynamics and Control.
Second edition. Hoboken, NJ: John Wiley & Sons Inc. 2004.

Thornhill, N. F., & Horch, A., (2006), Advances and new directions in plant-wide
disturbance detection and diagnosis, Control engineering practice,
doi:10.1016/j.conengprac.2006.10.011

Venkatasubramanian, V., and Chan, K. (1989). A Neural Network Methodology for


Process Fault Diagnosis. AIChE Journal, 35(12): 1993-2002.
Xia, C., & Howell, J., (2005), Isolating multiple sources of plant-wide oscillations
via independent component analysis, Control Engineering Practice, 13(8),
1027–1035.

Yu, D.L., Shields, D.N., and Daley, S. (1996). A Hybrid Fault Diagnosis Approach
using Neural Networks. Neural Computing and Application, 3(4): 21-26.

Zhou, Y., Hahn, J, and Mannan, S. M., Fault detection and classification in chemical
processes based on neural networks with feature extraction, ISA Transactions
42(2003), 651–664
Appendix A1

Steady State Values of Manipulated Variables

MV Description Steady State Range Unit


1 Fresh O2 Feed 0.52343 0 – 2.268 Kmol/min
2 Fresh C2H4 Feed 0.83522 0 – 7.56 Kmol/min
3 Fresh HAc Feed 0.79003 0 – 4.536 Kmol/min
4 Vaporizer Steam Duty 21877 0 – 1433400 Kcal/min
5 Vaporizer Vapor Exit 18.728 0 – 50 Kmol/min
6 Vaporizer Heater Duty 9008.54 0 – 15000 Kcal/min
7 Reactor Shell Temp. 135.02 110 – 150 oC
8 Separator Liquid Exit 2.7544 0 – 4.536 Kmol/min
9 Separator Jacket Temp. 36.001 0 – 80 oC
10 Separator Vapor Exit 16.1026 0 – 30 Kmol/min
11 Compressor Heater Duty 27192 0 – 50000 Kcal/min
12 Absorber Liquid Exit 1.2137 0 – 4.536 Kmol/min
13 Absorber Circulation Flow 15.1198 0 – 50 Kmol/min
14 Circulation Cooler Duty 10730 0 – 30000 Kcal/min
15 Absorber Scrub Flow 0.756 0 – 7.560 Kmol/min
16 Scrub Cooler Duty 2018.43 0 – 5000 Kcal/min
17 CO2 Removal Inlet 6.5531 0 – 22.68 Kmol/min
18 Purge 0.003157 0 – 0.02268 Kmol/min
19 FEHE Bypass Ratio 0.31303 0–1
20 Column Reflux 4.9849 0 – 7.56 Kmol/min
21 Column Reboiler Duty 67179 0 – 100000 Kcal/min
22 Column Condenser Duty 60367 0 – 150000 Kcal/min
23 Column Organic Exit 0.829 0 – 2.4 Kmol/min
24 Column Aqueous Exit 0.8361 0 – 2.4 Kmol/min
25 Column Bottom Exit 2.1584 0 – 4.536 Kmol/min
26 Vaporizer Liquid Inlet 2.1924 0 – 4.536 Kmol/min
Appendix A2

Control Structure and Controller Parameters

LOOP    Controlled Variable    Manipulated Variable    C.V. Value    Type    KC    TR (min)
1 %O2 in the Reactor O2 fresh feed 7.5% (0 – 20) PI 10 10
Inlet sp
2 Gas Recycle Stream C2H4 fresh 128 psia (0 – PI 0.3 20
Pressure feed valve 200)
3 HAc Tank Level HAc fresh feed 50% (0 – 100) P 2
valve
4 Vaporizer Level Vaporizer 70% (0 – 100) PI 0.1 30
Heater Valve
5 Vaporizer Pressure Vaporizer 128 psia (0 – PI 5 10
Vapor Exit 200)
Valve
6 Heater Exit Temp. Reactor 150 oC (120 – PI 1 5
Preheater Valve 170)
7 Reactor Exit Temp. Steam Drum 159.17 oC (0 – PI 3 10
Pressure sp 200)
8 Separator Level Separator 50% (0 – 100) P 5
Liquid Exit
Valve
9 Separator Temp. Separator 40 oC (0 – 80) PI 5 20
Coolant Valve
10 Separator Vapor Separator Fixed
Flowrate Vapor Exit
Valve
11 Compressor Exit Temp. Compressor 80 oC (70 – 90) PI 1 5
Heater Valve
12 Absorber level Absorber 50% (0 – 100) P 5
Liquid Exit
Valve
13 Absorber Scrub HAc Tank Exit Fixed
Flowrate Valve 2
14 Circulation Stream Absorber Scrub 25 oC (10 – 40) PI 1 5
Temp. Heater Valve
15 Absorber Circulation Absorber Fixed
Flowrate Circulation
Valve
16 Scrub Stream Temp. Circulation 25 oC (10 – 40) PI 1 5
Cooler Valve
Appendix A2 (continued)

Control Structure and Controller Parameters

LOOP    Controlled Variable    Manipulated Variable    C.V. Value    Type    KC    TR (min)
%CO2 in the Gas CO2 Purge 0.764% (0 –
17 P 1
Recycle Flowrate sp 50%)
%C2H6 in the Gas Purge Flowrate
18 25% (0 – 100%) P 1
Recycle sp
134 oC (0 –
19 FEHE Hot Exit Temp. Bypass Valve PI 5 10
200)
%H2O in the Column Column Reflux
20 9.344% (0 - 20) PI 0.5 60
Bottom Flowrate sp
Reboiler Steam 110 oC (0 –
21 5th tray Temperature PI 20 30
Valve 120)
Column 45.845 oC (40
22 Decanter Temperature PI 1 5
Condenser Duty – 50)
Decanter Organic Organic Product
23 50% (0 – 100) P 1
Level Flowrate
Aqueous
Decanter Aqueous
24 Product 50% (0 – 100) P 1
Level
Flowrate
Column Bottom
25 Column Bottom Level 50% (0 – 100) P 1
Flowrate
HAc Tank Exit
26 Liquid Recycle Flow Fixed
Valve 1
Appendix A3

Measurements at Steady State

Measurement Description Value Unit


1 Vaporizer Pressure 128 Psia
2 Vaporizer Level 0.7
3 Vaporizer Temperature 119.145 oC
4 Heater Exit Temperature 150 oC
5 Reactor Exit Temperature 159.17 oC
6 Reactor Exit Flowrate 18.857 Kmol/min
7 FEHE Cold Exit Temperature 97.1 oC
8 FEHE Hot Exit Temperature 134 oC
9 Separator Level 0.5
10 Separator Temperature 40 oC
11 Compressor Exit Temperature 80 oC
12 Absorber Pressure 128 Psia
13 Absorber Level 0.5
14 Circulation Cooler Exit Temperature 25 oC
15 Scrub Cooler Exit Temperature 25 oC
16 Gas Recycle Flowrate 16.5359 Kmol/min
17 Organic Product Flowrate 0.829 Kmol/min
18 Decanter Level (Organic) 0.5
19 Decanter Level (Aqueous) 0.5
20 Decanter Temperature 45.845 oC
21 Column Bottom Level 0.5
22 5th Tray Temperature 110 oC
23 HAc Tank Level 0.5
24 Organic Product Composition (VAc) 0.949786 mol fraction
25 Organic Product Composition (H2O) 0.049862 mol fraction
26 Organic Product Composition (HAc) 0.000352 mol fraction
27 Column Bottom Composition (VAc) 0.000010 mol fraction
28 Column Bottom Composition (H2O) 0.093440 mol fraction
29 Column Bottom Composition (HAc) 0.906550 mol fraction
Appendix A3 (Continued)

Measurements at Steady State

Measurement Description Value Unit


30 Gas Recycle Composition (O2) 0.055664 mol fraction
31 Gas Recycle Composition (CO2) 0.007304 mol fraction
32 Gas Recycle Composition (C2H4) 0.681208 mol fraction
33 Gas Recycle Composition (C2H6) 0.249191 mol fraction
34 Gas Recycle Composition (VAc) 0.001597 mol fraction
35 Gas Recycle Composition (H2O) 0.000894 mol fraction
36 Gas Recycle Composition (HAc) 0.004142 mol fraction
37 Reactor Feed Composition (O2) 0.075 mol fraction
38 Reactor Feed Composition (CO2) 0.006273 mol fraction
39 Reactor Feed Composition (C2H4) 0.58511 mol fraction
40 Reactor Feed Composition (C2H6) 0.214038 mol fraction
41 Reactor Feed Composition (VAc) 0.001373 mol fraction
42 Reactor Feed Composition (H2O) 0.008558 mol fraction
43 Reactor Feed Composition (HAc) 0.109648 mol fraction
Appendix A4

Control System in VAc Plant


Appendix A5

Wilson parameters and molar volumes

Data
a(i,j) O2 CO2 C2H4 C2H6 VAc H2O HAc
O2 0 0 0 0 0 0 0
CO2 0 0 0 0 0 0 0
C2H4 0 0 0 0 0 0 0
C2H6 0 0 0 0 0 0 0
VAc 0 0 0 0 0 1384.6 -136.1
H2O 0 0 0 0 2266.4 0 670.7
HAc 0 0 0 0 726.7 230.6 0
Vi O2 64.178
CO2 37.400
C2H4 49.347
C2H6 52.866
VAc 101.564
H2O 18.01
HAc 61.445
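
The a(i,j) and Vi values above are Wilson interaction parameters and liquid molar volumes. A minimal MATLAB sketch of how such parameters are conventionally combined into liquid-phase activity coefficients is given below; it assumes the a(i,j) are in cal/mol and T is in Kelvin, and the function name wilson_gamma is illustrative rather than part of the thesis code.

function gamma = wilson_gamma(x,a,V,T)
% x - vector of liquid mole fractions
% a - matrix of Wilson parameters a(i,j) from the table above
% V - vector of molar volumes Vi from the table above
% T - temperature in K
R = 1.987;                                    % gas constant, cal/(mol*K)
n = length(x);
Lambda = zeros(n,n);
for i = 1:n
    for j = 1:n
        Lambda(i,j) = (V(j)/V(i))*exp(-a(i,j)/(R*T));   % Wilson Lambda(i,j)
    end
end
gamma = zeros(size(x));
for i = 1:n
    S = Lambda(i,:)*x(:);                     % sum over j of x(j)*Lambda(i,j)
    corr = 0;
    for k = 1:n
        corr = corr + x(k)*Lambda(k,i)/(Lambda(k,:)*x(:));
    end
    gamma(i) = exp(1 - log(S) - corr);        % activity coefficient of component i
end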
Appendix A6

Equipment Data

Equipment Data Variable Value


Vaporizer Total Volume 17 m3
Working Level Volume 4 m3
Reactor Catalyst Density ρb 385 Kg/m3
Catalyst Heat Capacity Cpb 0.23 kcal/(kg* oC)
Catalyst Porosity ε 0.8
UA per section UA_rct 269.84 kcal/(min* oC *m3)
Tube Number N 622
Tube Length LTube 10 m
Tube Diameter d 0.0371 m
Friction Factor F 0.000795 psia*(min)2/(kg*m3)
Heat of Reaction E1, E2 -42100, -316000 kcal/kmol
FEHE Reference UA UA0 113.35 kcal/min/ oC
Reference Mass Flowrate of Cold Stream FC_REF 498.95 kg/min
Reference Mass Flowrate of Hot Stream FH_REF 589.67 kg/min
Separator Vapor Volume 170 m3
Working Level Volume 8 m3
UA UA_sep 9075.18 kcal/(min* oC)
Compressor Compressor Coefficient γ 15000
Absorber Working Level Volume 8.5 m3
Bottom Section 2 theoretical stages
Top Section 6 theoretical stages
Tray Holdup 13.61 kmol
Hydraulic time τ_abs 0.1 min
Material transfer coefficients for both sections Nmt 27.22 kmol/min
Heat transfer coefficients for the bottom section Qmt_bot 100.8 kcal/( oC *min)
Appendix A6

Equipment Data (Continued)

Equipment Data Variable Value


Absorber Heat transfer coefficients for the top section Qmt_top 50.4 kcal/( oC *min)
Column Theoretical Stage Number 20
Feed Stage 15 from bottom
Tray Holdup 2.3 kmol
Hydraulic time τ_col 0.1 min
Top Pressure 18 psia
Bottom Pressure 29.4 psia
Reboiler Pressure 30 psia
Base Working Level Volume 5.66 m3
Decanter Equilibrium Partition Coefficient β_VAc 395, β_H2O 0.05, β_HAc 1
Organic Working Level Volume 1.7 m3
Aqueous Working Level Volume 1.7 m3
HAc Tank Working Level Volume 2.83 m3
Appendix B1

Main Program for Data Scaling

function [datas,p,min,max]=dscale
%DSCALE
%--------------------------------------------------------------------
% This subfunction scales data to value between 0 and 1
%
% datas = scaled data
% data = actual data before scaling
% min = actual data at their minimum
% max = actual data at their maximum
load data_vac8.mat;
input=u_history;
[r,m]=size(input);
refl=input(:,1); % Reflux flowrate
cond=input(:,2); % Condenser flowrate
pump=input(:,3); % Pumparound return flowrate
toptemp=input(:,4); % Top Stage Temperature
dist=input(:,5); % Distillate flowrate
bott=input(:,6); % Bottom flowrate
feed=input(:,7); % Feed flowrate
toppres=input(:,20); % Top stage pressure
bottemp=input(:,22); % Bottom stage temperature
C8=input(:,23); % C8 flowrate
j=r;
for i=1:r
j=r; % take rows of input from last to first (series reversed into dataq columns)
dataq(1,i)=refl(j);
dataq(2,i)=cond(j);
dataq(3,i)=pump(j);
dataq(4,i)=toptemp(j);
dataq(5,i)=dist(j);
dataq(6,i)=bott(j);
dataq(7,i)=feed(j);
dataq(8,i)=toppres(j);
dataq(9,i)=bottemp(j);
dataq(10,i)=C8(j);
r=r-1;
end
[n,p]=size(dataq);
for i=1:n
max(i)=dataq(i,1);
min(i)=dataq(i,1);
for j=1:p
if dataq(i,j)>max(i)
max(i)=dataq(i,j);
end
if dataq(i,j)<min(i)
min(i)=dataq(i,j);
end
end
datas(i,:)=(dataq(i,:)-min(i))/(max(i)-min(i));
% datad(i,1:p-1)=datas(i,2:p); % 1 delayed term
% datad1(i,1:p-2)=datas(i,3:p); % 2 delayed term
end
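
Because dscale also returns the per-variable minima and maxima, the scaling can be inverted whenever results are needed back in engineering units, as is done when the prediction errors are de-scaled in the classifier programs. A minimal sketch of such an inverse is shown below; the function name dunscale and its argument names are illustrative and not part of the thesis code.

function data = dunscale(datas,minv,maxv)
%DUNSCALE
% datas - scaled data, one variable per row, values between 0 and 1
% minv  - vector of actual minima returned by dscale
% maxv  - vector of actual maxima returned by dscale
n = size(datas,1);
data = zeros(size(datas));
for i = 1:n
    data(i,:) = datas(i,:)*(maxv(i)-minv(i)) + minv(i);  % invert (x-min)/(max-min)
end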
Appendix B2

Main Program for Data Preparation

function [input,output,X,min,max]=dprep
%DPREP
%--------------------------------------------------------------------
% This subfunction creates data for training
%
% input,output = training data
[datas,p,min,max]=dscale;
X=p;
% Training data
input(1,1:X)=datas(1,1:X); % Reflux flowrate
input(2,1:X)=datas(2,1:X); % Condenser flowrate
input(3,1:X)=datas(3,1:X); % Pumparound return flowrate
input(4,1:X)=datas(4,1:X); % Top stage temperature
input(5,1:X)=datas(5,1:X); % Distillate flowrate
input(6,1:X)=datas(6,1:X); % Bottom flowrate
input(7,1:X)=datas(7,1:X); % Feed flowrate
output(1,1:X)=datas(8,1:X); % Top stage pressure
output(2,1:X)=datas(9,1:X); % Bottom stage temperature
output(3,1:X)=datas(10,1:X); % C8 flowrate

% % Cross-validation data
% input(1,1:X)=datas(1,1:X); % Reflux flowrate
% input(2,1:X)=datas(2,1:X); % Condenser flowrate
% input(3,1:X)=datas(3,1:X); % Pumparound return flowrate
% input(4,1:X)=datas(4,1:X); % Top stage temperature
% input(5,1:X)=datas(5,1:X); % Distillate flowrate
% input(6,1:X)=datas(6,1:X); % Bottom flowrate
% input(7,1:X)=datas(7,1:X); % Feed flowrate
% output(1,1:X)=datas(8,1:X); % Top stage pressure
% output(2,1:X)=datas(9,1:X); % Bottom stage temperature
% output(3,1:X)=datas(10,1:X); % C8 flowrate

####################################################################

function [Tinput,Toutput,X,min,max]=dprepT
%DPREPT
%--------------------------------------------------------------------
% This subfunction creates data for testing
%
% Tinput,Toutput = testing data
[datas,p,min,max]=dscale;
X=p;

% Testing data
Tinput(1,1:X)=datas(1,1:X); % Reflux flowrate
Tinput(2,1:X)=datas(2,1:X); % Condenser flowrate
Tinput(3,1:X)=datas(3,1:X); % Pumparound return flowrate
Tinput(4,1:X)=datas(4,1:X); % Top stage temperature
Tinput(5,1:X)=datas(5,1:X); % Distillate flowrate
Tinput(6,1:X)=datas(6,1:X); % Bottom flowrate
Tinput(7,1:X)=datas(7,1:X); % Feed flowrate
Toutput(1,1:X)=datas(8,1:X); % Top stage pressure
Toutput(2,1:X)=datas(9,1:X); % Bottom stage temperature
Toutput(3,1:X)=datas(10,1:X); % C8 flowrate

####################################################################

function [Vinput,Voutput,X,min,max]=dprepV
%DPREPV
%--------------------------------------------------------------------
% This subfunction creates data for cross-validation
%
% Vinput,Voutput = cross-validation data
[datas,p,min,max]=dscale;
X=p;

% Cross-validation data
Vinput(1,1:X)=datas(1,1:X); % Reflux flowrate
Vinput(2,1:X)=datas(2,1:X); % Condenser flowrate
Vinput(3,1:X)=datas(3,1:X); % Pumparound return flowrate
Vinput(4,1:X)=datas(4,1:X); % Top stage temperature
Vinput(5,1:X)=datas(5,1:X); % Distillate flowrate
Vinput(6,1:X)=datas(6,1:X); % Bottom flowrate
Vinput(7,1:X)=datas(7,1:X); % Feed flowrate
Voutput(1,1:X)=datas(8,1:X); % Top stage pressure
Voutput(2,1:X)=datas(9,1:X); % Bottom stage temperature
Voutput(3,1:X)=datas(10,1:X); % C8 flowrate
Appendix B3

Main Program for Predictor 1

% clc;
% clear;
% [datas,p,min,max]=dscale
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
[Tinput,Toutput,X,min,max]=dprepT;
ptr=input; ttr=output(1,:); % Training
v.P=Vinput; v.T=Voutput(1,:); % Validation
t.P=Tinput; t.T=Toutput(1,:); % Testing
S1=5; % Number of nodes
net1=newelm(minmax(input),[S1 1],{'tansig' 'purelin'},'trainlm');
net1.trainparam.epochs=500; % Max epoch number
net1.trainParam.goal=1e-8;
net1.trainParam.max_fail=10;
net1.trainParam.show=50;
net1=init(net1);
[net1,tr]=train(net1,ptr,ttr,[],[],v,t);
an1=sim(net1,input);
error=an1-input(1,:);
trainmse=sumsqr(error)/X;
Van1=sim(net1,Vinput);
valmse=sumsqr(Van1-Vinput(1,:))/X;
Tan1=sim(net1,Tinput);
testmse=sumsqr(Tan1-Tinput(1,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e, TestMSE=%e\n',trainmse,valmse,testmse);
time=1:X;
figure(1)
subplot(2,1,1),plot(time,an1,'r',time,input(1,:),'b');
ylabel('Molar Flowrate, Kmol/min');
title('Column Reflux Predictor (Training)')
subplot(2,1,2),plot(time,Van1,'r',time,Vinput(1,:),'b');
ylabel('Molar Flowrate, Kmol/min');
title('Column Reflux Predictor (Validation)')
legend ('Predicted','Actual',4)

save net1.mat
Appendix B4

Main Program for Predictor 2

% clc;
% clear;
% [datas,p,min,max]=dscale
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
[Tinput,Toutput,X,min,max]=dprepT;
ptr=input; ttr=input(2,:); % Training
v.P=Vinput; v.T=Vinput(2,:); % Validation
t.P=Tinput; t.T=Tinput(2,:); % Testing
S1=12; % Number of nodes
net2=newelm(minmax(input),[S1 1],{'tansig' 'purelin'},'trainlm');
net2.trainparam.epochs=500; % Max epoch number
net2.trainParam.goal=1e-8;
net2.trainParam.max_fail=10;
net2.trainParam.show=5;
net2=init(net2);
[net2,tr]=train(net2,ptr,ttr,[],[],v,t);
an2=sim(net2,input);
error=an2-input(2,:);
trainmse=sumsqr(error)/X;
Van2=sim(net2,Vinput);
valmse=sumsqr(Van2-Vinput(2,:))/X;
Tan2=sim(net2,Tinput);
testmse=sumsqr(Tan2-Tinput(2,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e, TestMSE=%e\n',trainmse,valmse,testmse);
time=1:X;
figure(1)
subplot(2,1,1),plot(time,an2,'r',time,input(2,:),'b');
ylabel('Duty Rate, Kcal/min');
title('Column Condenser Duty (Training)')
subplot(2,1,2),plot(time,Van2,'r',time,Vinput(2,:),'b');
ylabel('Duty Rate, Kcal/min');
title('Column Condenser Duty (Validation)')
legend ('Predicted','Actual',4)

save net2.mat
Appendix B5

Main Program for Predictor 3

% clc;
% clear;
% [datas,p,min,max]=dscale
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
[Tinput,Toutput,X,min,max]=dprepT;
ptr=input; ttr=input(3,:); % Training
v.P=Vinput; v.T=Vinput(3,:); % Validation
t.P=Tinput; t.T=Tinput(3,:); % Testing
S1=6; % Number of nodes
net3=newelm(minmax(input),[S1 1],{'tansig' 'purelin'},'trainlm');
net3.trainparam.epochs=500; % Max epoch number
net3.trainParam.goal=1e-8;
net3.trainParam.max_fail=10;
net3.trainParam.show=5;
net3=init(net3);
[net3,tr]=train(net3,ptr,ttr,[],[],v,t);
an3=sim(net3,input);
error=an3-input(3,:);
trainmse=sumsqr(error)/X;
Van3=sim(net3,Vinput);
valmse=sumsqr(Van3-Vinput(3,:))/X;
Tan3=sim(net3,Tinput);
testmse=sumsqr(Tan3-Tinput(3,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e, TestMSE=%e\n',trainmse,valmse,testmse);
time=1:X;
figure(1)
subplot(2,1,1),plot(time,an3,'r',time,input(3,:),'b');
ylabel('Molar Flowrate, Kmol/min');
title('Column Organic Exit (Training)')
subplot(2,1,2),plot(time,Van3,'r',time,Vinput(3,:),'b');
ylabel('Molar Flowrate, Kmol/min');
title('Column Organic Exit (Validation)')
legend ('Predicted','Actual',4)

save net3.mat
Appendix B6

Main Program for Classifier 1

clc;
clear;
load net1.mat;
load net2.mat;
load net3.mat;
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
an1=sim(net1,input);
error1=output(1,:)-an1;
an2=sim(net2,input);
error2=output(2,:)-an2;
an3=sim(net3,input);
error3=output(3,:)-an3;
data(1,:)=(error1*(max(8)-min(8))+min(8));
data(2,:)=(error2*(max(9)-min(9))+min(9));
data(3,:)=(error3*(max(10)-min(10))+min(10));
Van1=sim(net1,Vinput);
Verror1=Voutput(1,:)-Van1;
Van2=sim(net2,Vinput);
Verror2=Voutput(2,:)-Van2;
Van3=sim(net3,Vinput);
Verror3=Voutput(3,:)-Van3;
Vdata(1,:)=(Verror1*(max(8)-min(8))+min(8));
Vdata(2,:)=(Verror2*(max(9)-min(9))+min(9));
Vdata(3,:)=(Verror3*(max(10)-min(10))+min(10));
[n,p]=size(data);
min(1)=0.6679; max(1)=0.6567;
min(2)=-6930.4172; max(2)=981.7492;
min(3)=-14.455; max(3)=0.9978;
for i=1:n
data(i,:)=(data(i,:)-min(i))/(max(i)-min(i));
Vdata(i,:)=(Vdata(i,:)-min(i))/(max(i)-min(i));
end
ptr=data; ttr=data(3,:); % Training
v.P=Vdata; v.T=Vdata(3,:); % Validation
S1=5; % Number of nodes
net7=newff(minmax(data),[S1 1],{'tansig' 'purelin'},'trainlm');
net7.trainparam.epochs=500; % Max epoch number
net7.trainParam.goal=1e-8;
net7.trainParam.max_fail=10;
net7.trainParam.show=5;
net7=init(net7);
[net7,tr]=train(net7,ptr,ttr,[],[],v);
Fan1=sim(net7,data);
Ferror1=Fan1-data(3,:);
trainmse1=sumsqr(Ferror1)/X;
Fan2=sim(net7,Vdata);
valmse1=sumsqr(Fan2-Vdata(3,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e\n',trainmse1,valmse1);
time=1:X;
upper(1,1:X)=1;
lower(1,1:X)=0.9;
figure(1)
subplot(2,1,1),plot(time,Fan1,'r',time,upper,'k',time,lower,'k');
axis([0 100 0.88 1.05])
% ylim ([0 1]);
ylabel('Column Reflux');
xlabel('Minute')
title('Classifier (Training)')
subplot(2,1,2),plot(time,Fan2,'r',time,Vdata(3,:),'b',time,upper,'k',time,lower,'k');
ylabel('Column Reflux');
xlabel('Minute')
title('Classifier (Validation)')
axis([0 100 0.88 1.05])
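
The upper and lower lines plotted above mark the band treated as normal for this residual. As an illustrative sketch only, the classifier output for the validation data (Fan2) could be reduced to a binary fault flag by testing it against the same limits; the rule below is an assumption based on the plotted thresholds and is not a procedure stated in the thesis.

upper_lim = 1;                                     % upper limit plotted above
lower_lim = 0.9;                                   % lower limit plotted above
fault = (Fan2 > upper_lim) | (Fan2 < lower_lim);   % 1 = outside the normal band
fault_times = time(fault);                         % sampling instants flagged as faulty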
Appendix B7

Main Program for Classifier 2

clc;
clear;
load net1.mat;
load net2.mat;
load net3.mat;
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
an1=sim(net1,input);
error1=output(1,:)-an1;
an2=sim(net2,input);
error2=output(2,:)-an2;
an3=sim(net3,input);
error3=output(3,:)-an3;
data(1,:)=(error1*(max(8)-min(8))+min(8));
data(2,:)=(error2*(max(9)-min(9))+min(9));
data(3,:)=(error3*(max(10)-min(10))+min(10));
Van1=sim(net1,Vinput);
Verror1=Voutput(1,:)-Van1;
Van2=sim(net2,Vinput);
Verror2=Voutput(2,:)-Van2;
Van3=sim(net3,Vinput);
Verror3=Voutput(3,:)-Van3;
Vdata(1,:)=(Verror1*(max(8)-min(8))+min(8));
Vdata(2,:)=(Verror2*(max(9)-min(9))+min(9));
Vdata(3,:)=(Verror3*(max(10)-min(10))+min(10));
[n,p]=size(data);
min(1)=0.6679; max(1)=0.6567;
min(2)=-6930.4172; max(2)=981.7492;
min(3)=-14.455; max(3)=0.9978;
for i=1:n
data(i,:)=(data(i,:)-min(i))/(max(i)-min(i));
Vdata(i,:)=(Vdata(i,:)-min(i))/(max(i)-min(i));
end
ptr=data; ttr=data(2,:); % Training
v.P=Vdata; v.T=Vdata(2,:); % Validation
S1=5; % Number of nodes
net7=newff(minmax(data),[S1 1],{'tansig' 'purelin'},'trainlm');
net7.trainparam.epochs=500; % Max epoch number
net7.trainParam.goal=1e-8;
net7.trainParam.max_fail=10;
net7.trainParam.show=5;
net7=init(net7);
[net7,tr]=train(net7,ptr,ttr,[],[],v);
Fan1=sim(net7,data);
Ferror1=Fan1-data(2,:);
trainmse1=sumsqr(Ferror1)/X;
Fan2=sim(net7,Vdata);
valmse1=sumsqr(Fan2-Vdata(2,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e\n',trainmse1,valmse1);
time=1:X;
upper(1,1:X)=7;
lower(1,1:X)=1;
figure(1)
subplot(2,1,1),plot(time,Fan1,'r',time,upper,'k',time,lower,'k');
axis([0 100 0 8])
% ylim ([0 1]);
ylabel('Column Condenser Duty');
xlabel('Minute')
title('Classifier (Training)')
subplot(2,1,2),plot(time,Fan2,'r',time,Vdata(2,:),'b',time,upper,'k',time,lower,'k');
axis([0 100 0 8])
ylabel('Column Condenser Duty');
xlabel('Minute')
title('Classifier (Validation)')
Appendix B8

Main Program for Classifier 3

clc;
clear;
load net1.mat;
load net2.mat;
load net3.mat;
[input,output,X,min,max]=dprep;
[Vinput,Voutput,X,min,max]=dprepV;
an1=sim(net1,input);
error1=output(1,:)-an1;
an2=sim(net2,input);
error2=output(2,:)-an2;
an3=sim(net3,input);
error3=output(3,:)-an3;
data(1,:)=(error1*(max(8)-min(8))+min(8));
data(2,:)=(error2*(max(9)-min(9))+min(9));
data(3,:)=(error3*(max(10)-min(10))+min(10));
Van1=sim(net1,Vinput);
Verror1=Voutput(1,:)-Van1;
Van2=sim(net2,Vinput);
Verror2=Voutput(2,:)-Van2;
Van3=sim(net3,Vinput);
Verror3=Voutput(3,:)-Van3;
Vdata(1,:)=(Verror1*(max(8)-min(8))+min(8));
Vdata(2,:)=(Verror2*(max(9)-min(9))+min(9));
Vdata(3,:)=(Verror3*(max(10)-min(10))+min(10));
[n,p]=size(data);
min(1)=0.6679; max(1)=0.6567;
min(2)=-6930.4172; max(2)=981.7492;
min(3)=-14.455; max(3)=0.9978;
for i=1:n
data(i,:)=(data(i,:)-min(i))/(max(i)-min(i));
Vdata(i,:)=(Vdata(i,:)-min(i))/(max(i)-min(i));
end
ptr=data; ttr=data(1,:); % Training
v.P=Vdata; v.T=Vdata(1,:); % Validation
S1=50; % Number of nodes
net7=newff(minmax(data),[S1 1],{'tansig' 'purelin'},'trainlm');
net7.trainparam.epochs=500; % Max epoch number
net7.trainParam.goal=1e-8;
net7.trainParam.max_fail=10;
net7.trainParam.show=5;
net7=init(net7);
[net7,tr]=train(net7,ptr,ttr,[],[],v);
Fan1=sim(net7,data);
Ferror1=Fan1-data(1,:);
trainmse1=sumsqr(Ferror1)/X;
Fan2=sim(net7,Vdata);
valmse1=sumsqr(Fan2-Vdata(1,:))/X;
fprintf('TrainMSE=%e, ValMSE=%e\n',trainmse1,valmse1);
time=1:X;
upper(1,1:X)=-280;

figure(1)
subplot(2,1,1),plot(time,Fan1,'r',time,upper,'k');
% axis([0 120 0.7 1.5])
% ylim ([0 1]);
ylabel('Column Organic Exit');
title('Classifier (Training)')
subplot(2,1,2),plot(time,Fan2,'r',time,Vdata(1,:),'b',time,upper,'k');
ylabel('Column Organic Exit');
title('Classifier (Validation)')
