MULTIVARIATE STATISTICAL PROCESS MONITORING
AND CONTROL
A Thesis Submitted in Partial Fulfillment for the Award of the Degree
Of
MASTER OF TECHNOLOGY
In
CHEMICAL ENGINEERING
By
SESHU KUMAR DAMARLA
Under the guidance of
Dr. Madhusree Kundu
Chemical Engineering Department
National Institute of Technology
Rourkela 769008
May 2011
Department of Chemical Engineering
National Institute of Technology
Rourkela 769008 (ORISSA)
CERTIFICATE
This is to certify that the thesis entitled “Multivariate Statistical
Process Monitoring and Control”, being submitted by Sri Seshu
Kumar Damarla for the award of M.Tech. degree is a record of
bonafide research carried out by him at the Chemical Engineering
Department, National Institute of Technology, Rourkela, under my
guidance and supervision. The work documented in this thesis has not
been submitted to any other University or Institute for the award of
any other degree or diploma.
Dr. Madhusree Kundu
Department of Chemical Engineering
National Institute of Technology
Rourkela 769008
ACKNOWLEDGEMENT
I would like to express my sincere gratitude to my father, who has struggled every minute of his life to lift me higher and to show me the path on which I am travelling now. Every success of mine is dedicated to him.
The second person I love and respect is Prof. Madhusree Kundu, who has raised me from a modest level to where I stand today. She has encouraged and supported me in every way, and her innovative thoughts and valuable suggestions made this dissertation possible. She inspires me through her attitude and dedication towards research. I shall always remain obliged to her, and I shall never forget her. I would love to work with Prof. Madhusree Kundu wherever I may be.
I would like to express my indebtedness to Prof. Palash Kundu, who gave wonderful ideas and showed an amazing path for the work. Without his help we could not have proceeded further. He is the hero behind the screen.
I express my gratitude and indebtedness to Professor S. K. Agarwal and Professor K. C.
Biswal, Head, Chemical Engineering Department, for providing me with the necessary
computer laboratory and departmental facilities.
I express my gratitude and indebtedness to Prof. Basudeb Munshi, Prof. Santanu Paria, Prof.
S. Khanam, Prof. A. Sahoo, Prof. A. Kumar, Prof. S. Mishra of the Department of Chemical
Engineering, for their valuable suggestions and instructions at various stages of the work.
Today I hold a respectable position because of my two brothers, Pavan and Prabhas. They have supported and motivated me at every moment of my life.
Throughout these two years of the journey I have enjoyed a wonderful friendship with Pradeep, Kasi, Satish garu, Jeevan garu and Tarangini akka.
I shall always remain responsible to my sisters and their families, who sacrificed their education and desires for my brother and me.
Finally, I express my humble regards to my mom, who always thinks of us and prays to God for us. She sacrifices everything for us, and her tireless endeavours keep me well.
SESHU KUMAR DAMARLA
NATIONAL INSTITUTE OF TECHNOLOGY
ROURKELA 769008 (ORISSA)
CONTENTS
Certificate
Acknowledgement
Abstract
List of Figures
List of Tables
Chapter 1 INTRODUCTION TO MULTIVARIATE
STATISTICAL PROCESS MONITORING AND
CONTROL
1.1 GENERAL BACKGROUND
1.2 DATA BASED MODELS
1.2.1 Chemometric Techniques
1.3 STATISTICAL PROCESS MONITORING
1.4 MULTIVARIATE STATISTICAL AND NEURO
STATISTICAL CONTROLLER
1.5 PLANTWIDE CONTROL
1.6 OBJECTIVE
1.7 ORGANIZATION OF THE THESIS
REFERENCES
Chapter 2 CHEMOMETRIC TECHNIQUES AND PREVIOUS
WORK
2.1 INTRODUCTION
2.2 THEORETICAL POSTULATIONS ON PCA
2.3 SIMILARITY
2.3.1 PCA Based Similarity
2.3.2 Distance Based Similarity
2.3.3 Combined Similarity Factor
2.4 THEORETICAL POSTULATION ON PLS
2.4.1 Linear PLS
2.4.2 Dynamic PLS
2.5 PREVIOUS WORK
REFERENCES
Chapter 3 MULTIVARIATE STATISTICAL PROCESS
MONITORING
3.1 INTRODUCTION
3.2 CLUSTERING TIME SERIES DATA: APPLICATION IN
PROCESS MONITORING
3.2.1 K-means Clustering Using Similarity Factors
3.2.2 Selection of Optimum Number of Clusters
3.2.3 Clustering Performance Evaluation
3.3 MOVING WINDOW BASED PATTERN MATCHING
3.4 DRUM BOILER PROCESS
3.4.1 A Brief Introduction to the Drum Boiler System
3.4.2 Modeling
3.4.2.1 Proposed Second Order Boiler Model
3.4.2.2 Distribution of Steam in Risers and Drum
3.4.2.2.1 Distribution of Steam and Water in Risers
3.4.2.2.2 Lumped Parameter Model
3.4.2.2.3 Circulation Flow
3.4.2.3 Distribution of Steam in the Drum
3.4.2.4 Drum Level
3.4.2.5 The Model
3.4.2.6 State Variable Model
3.4.2.6.1 Derivation of State Equations
3.4.2.6.2 Equilibrium Values
3.4.2.6.3 Parameters
3.5 BIOREACTOR PROCESS
3.5.1 MODELING
3.6 CONTINUOUS STIRRED TANK REACTOR WITH
COOLING JACKET
3.7 TENNESSEE EASTMAN CHALLENGE PROCESS
3.7.1 Generation of Database
3.8 RESULTS & DISCUSSIONS
3.8.1 Drum Boiler Process
3.8.2 Bioreactor Process
3.8.3 Jacketed CSTR
3.8.4 Tennessee Eastman Process
3.9 CONCLUSION
REFERENCES
Chapter 4 APPLICATION OF MULTIVARIATE
STATISTICAL AND NEURAL PROCESS
CONTROL STRATEGIES
4.1 INTRODUCTION
4.2 DEVELOPMENT OF PLS AND NNPLS CONTROLLER
4.2.1 PLS Controller
4.2.2 Development of Neural Network PLS controller
4.3 IDENTIFICATION & CONTROL OF 2 × 2
DISTILLATION PROCESS
4.3.1 PLS Controller
4.3.2 NNPLS Controller
4.4 IDENTIFICATION & CONTROL OF 3 × 3
DISTILLATION PROCESS
4.5 IDENTIFICATION & CONTROL OF 4 × 4
DISTILLATION PROCESS
4.6 PLANT WIDE CONTROL
4.7 CONCLUSIONS
REFERENCES
Chapter 5 CONCLUSIONS AND RECOMMENDATIONS
FOR FUTURE WORK
5.1 CONCLUSIONS
5.2 RECOMMENDATIONS FOR FUTURE WORK
ABSTRACT
The application of statistical methods to the monitoring and control of industrially significant processes is generally known as statistical process control (SPC). Since most modern industrial processes are multivariate in nature, multivariate statistical process control (MVSPC) has supplanted univariate SPC techniques. MVSPC techniques are significant not only for scholastic pursuit; they have also been addressing industrial problems in the recent past.
Monitoring and controlling a chemical process is a challenging task because of its multivariate, highly correlated and nonlinear nature. The present work is based on the successful application of chemometric techniques in implementing machine learning algorithms. Two such chemometric techniques, principal component analysis (PCA) and partial least squares (PLS), were extensively adapted in this work for process identification, monitoring and control. PCA, an unsupervised technique, can extract the essential features from a data set by reducing its dimensionality without compromising any valuable information. PLS finds latent variables in the measured data by capturing the largest variance in the data and achieving the maximum correlation between the predictor and response variables, even when extended to time series data. In the present work, new methodologies based on clustering of time series data and on moving window based pattern matching have been proposed for detecting faulty conditions as well as for differentiating among various normal operating conditions of a biochemical reactor, a drum boiler, a continuous stirred tank reactor with cooling jacket and the prestigious Tennessee Eastman challenge process. Both techniques delivered encouraging efficiencies in their performance.
The physics of data based model identification through PLS and NNPLS, and their advantages over other time series models such as ARX, ARMAX and ARMA, were addressed in the present dissertation. For multivariable processes, the PLS based controllers offered the opportunity to be designed as a series of decoupled SISO controllers. For controlling nonlinear complex processes, neural network based PLS (NNPLS) controllers were proposed. A neural network, a supervised category of data based modeling technique, was used for identification of process
dynamics. Neural nets trained with the inverse dynamics of the process, or direct inverse neural networks (DINNs), acted as controllers. Latent variable based DINNs embedded in the PLS framework are termed NNPLS controllers. 2 × 2, 3 × 3 and 4 × 4 distillation processes were taken up to implement the proposed control strategy, followed by the evaluation of their closed loop performances.
The subject of plant wide control deals with the inter unit interactions in a plant through the proper selection of manipulated and measured variables and of suitable control strategies. Model based direct synthesis and DINN controllers were incorporated for controlling brix concentrations in a multiple effect evaporation process plant, and their performances were compared in both servo and regulator modes.
LIST OF FIGURES
Figure 2.1 Standard Linear PLS algorithms
Figure 2.2 Schematic of PLS based Dynamics
Figure 3.1 Moving Window based Pattern Matching
Figure 3.2 Drum Boiler Process
Figure 3.3 Tennessee Eastman Plant
Figure 3.4 (a) Response of reactor pressure at FMD13
Figure 3.4 (b) Response of stripper level at FMD13
Figure 3.5 (a) Response of reactor pressure at FMD14
Figure 3.5 (b) Response of reactor level at FMD14
Figure 3.6 (a) Response of reactor pressure at FMD31
Figure 3.6 (b) Response of stripper level at FMD31
Figure 3.7 (a) Response of stripper level at FMD32
Figure 3.7 (b) Response of reactor pressure at FMD32
Figure 3.8 Response of Drumboiler process for a faulty operating
condition
Figure 3.9 Response for the Jacketed CSTR process under faulty
operating condition
Figure 4.1 Schematic of PLS based Control
Figure 4.2 (a) NNPLS Servo mode of control
Figure 4.2 (b) NNPLS Regulatory mode of control System
Figure 4.3 Comparison between actual and ARX based PLS predicted dynamics for output 1 (top product composition) in distillation process
Figure 4.4 Comparison between actual and ARX based PLS predicted dynamics for output 2 (bottom product composition) in distillation process
Figure 4.5 Comparison of the closed loop performances of ARX based and FIR PLS controllers for a set point change in top product composition from 0.99 to 0.996
Figure 4.6 Comparison of the closed loop performances of ARX based and FIR PLS controllers for a set point change in bottom product composition from 0.01 to 0.005
Figure 4.7 Comparison between actual and NN identified outputs of a
2 × 2 Distillation process using projected variables in
latent space
Figure 4.8 Closed loop response of top and bottom product
composition using NNPLS control in servo mode
Figure 4.9 Closed loop response of top and bottom product
composition using NNPLS control in regulatory mode
Figure 4.10 Comparison between actual and NNPLS identified outputs
of a 3 × 3 Distillation process using projected variables
in latent space
Figure 4.11 Closed loop response of the three outputs of a 3 × 3
distillation process using NNPLS control in servo mode
Figure 4.12 Closed loop response of the three outputs of a 3 × 3
distillation process using NNPLS control in regulator
mode
Figure 4.13 Comparison between actual and NNPLS identified outputs
of a 4 × 4 Distillation process
Figure 4.14 Closed loop response of the four outputs of a 4 × 4
distillation process using NNPLS control in servo mode
Figure 4.15 Closed loop response of the four outputs of a 4 × 4
distillation process using NNPLS control in regulator
mode
Figure 4.16 Juice concentration plant
Figure 4.17 (a) Comparison between NN and model based control in servo mode for maintaining brix from the 3rd effect of a sugar evaporation process plant using plant wide control strategy
Figure 4.17 (b) Comparison between NN and model based control in servo mode for maintaining brix from the 5th effect of a sugar evaporation process plant using plant wide control strategy
Figure 4.18 (a) Comparison between NN and model based control in regulator mode for maintaining brix (at its base value) from the 3rd effect of a sugar evaporation process plant using plant wide control strategy
Figure 4.18 (b) Comparison between NN and model based control in regulator mode for maintaining brix (at its base value) from the 5th effect of a sugar evaporation process plant using plant wide control strategy
LIST OF TABLES
Table 3.1 Values of the Drumboiler model parameters
Table 3.2 Bioreactor process Parameters
Table 3.3 CSTR parameters used in CSTR simulation
Table 3.4 Heat and material balance data for TE process
Table 3.5 Component physical properties (at 100 °C) in TE process
Table 3.6 Process manipulated variables in TE process
Table 3.7 Process disturbances in TE process
Table 3.8 Continuous process measurements in TE process
Table 3.9 Sampled process measurements in TE process
Table 3.10 Various operating conditions for Drumboiler process
Table 3.11 Database corresponding to various operating conditions for
Bioreactor process
Table 3.12 Generated database for CSTR process at various operating
modes
Table 3.13 Operating conditions for TE process
Table 3.14 Constraints in TE process
Table 3.15 Combined similarity factor based clustering performance for
Drumboiler process
Table 3.16 Similarity factors in dataset wise moving window based pattern
matching implementation for Drumboiler process
Table 3.17 Moving window based pattern matching performance for
Drumboiler process
Table 3.18 Combined similarity factor based clustering performance for
Bioreactor process
Table 3.19 Similarity factors in dataset wise moving window based pattern
matching implementation for Bioreactor process
Table 3.20 Moving window based pattern matching performance for
Bioreactor process
Table 3.21 Combined similarity factor based clustering performance for
Jacketed CSTR process
Table 3.22 Similarity factors in dataset wise moving window based pattern
matching implementation for Jacketed CSTR process
Table 3.23 Moving window based pattern matching performance in
Jacketed CSTR process
Table 3.24 Combined similarity factors in dataset wise moving window
based pattern matching implementation for TE process
Table 3.25 Pattern matching performance for TE process
Table 4.1 Designed networks and their performances in (2 × 2, 3 × 3 & 4 × 4) Distillation processes
Table 4.2 Steady state values for variables of Evaporation plant
INTRODUCTION TO MULTIVARIATE STATISTICAL
PROCESS MONITORING AND CONTROL
1.1 GENERAL BACKGROUND
The application of statistical methods to the monitoring and control of industrially significant processes falls within a field generally known as statistical process control (SPC) or statistical quality control (SQC). The most widely used and popular SPC techniques are univariate, that is, they observe and analyze a single variable at a time. Industrial quality problems, however, are multivariate in nature, since they involve measurements of a number of characteristics rather than a single one. As a result, univariate SPC methods provide little information about the mutual interactions of the variables. Conventional SPC charts such as the Shewhart chart and the CUSUM chart have been widely used for monitoring univariate processes, but they do not function well for multivariable processes with highly correlated variables. Most of the limitations of univariate SPC can be addressed through the application of multivariate statistical process control (MVSPC) techniques, which consider all the variables of interest simultaneously and can extract information on the behavior of each variable or characteristic relative to the others. MVSPC research has high value in both theory and practical application and is certainly conducive to process monitoring, fault detection and process identification and control.
1.2 DATA BASED MODELS
Data based modeling is one of the most recently added crafts in process identification, monitoring and control. Black box models are data dependent, and their parameters are determined from experimental results (wet labs); hence these models are called data based models or experimental models. Unlike white box models derived from first principles, black box (data based, or empirical) models do not describe the mechanistic phenomena of the process; they are based on input-output data and describe only the overall behavior of the process. Data based models are especially appropriate for problems that are data rich but hypothesis and/or information poor. A sufficient number of quality data points is required to propose a good model. Quality data means noise free data, free of outliers, and is ensured by data preconditioning.
The phases in the Data based modeling are:
• System analysis
• Data collection
• Data conditioning
• Key variable analysis
• Model structure design
• Model identification
• Model evaluation
In this era of data explosion, rational as well as potent conclusions can be drawn from the data, and in this regard we owe a profound debt to multivariate statistics. Several data driven multivariate statistical techniques, such as Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA), Partial Least Squares (PLS), Principal Component Regression (PCR) and Canonical Variate and Subspace State Space modeling, have been proposed for these purposes.
Data based models can be divided into two major categories:
• Unsupervised models: These are models which try to extract the different features present in the data without any prior knowledge of the patterns present in the data. Examples are Principal Component Analysis (PCA), hierarchical clustering techniques (dendrograms) and non-hierarchical clustering techniques (K-means).
• Supervised models: These are models which learn the patterns in the data under the guidance of a supervisor who trains them with inputs along with their corresponding outputs. Examples include Artificial Neural Networks (ANN), Partial Least Squares (PLS) and autoregressive models.
Efficient data mining, and hence efficient data based modeling, will enable the coming era to exploit the huge databases available, in newer dimensions and perspectives and with possibilities never expected before.
1.2.1 Chemometric Techniques
Chemometrics is the application of mathematical and statistical methods for handling, interpreting, and predicting chemical data. The present work is based on the successful application and implementation of chemometric techniques such as PCA and PLS in process identification, monitoring and control. PCA is a multivariate statistical technique that can extract the essential features from a data set by reducing its dimensionality without compromising any valuable information. PLS finds latent variables in the measured data by capturing the largest variance in the data and achieving the maximum correlation between the predictor (X) and response (Y) variables. First proposed by Wold [1], PLS has been successfully applied in diverse fields including process monitoring and the identification of process dynamics and control, and it deals with noisy and highly correlated data, quite often with only a limited number of observations available. A tutorial description of the PLS model, along with some examples, was provided by Geladi and Kowalski (1986) [2].
1.3 STATISTICAL PROCESS MONITORING
Process safety and product quality are goals that people pursue unremittingly. Monitoring a chemical process is preferably a data based endeavor because of its multivariate, highly correlated and nonlinear nature. Data collection has become a mature technology over the years, and the analysis of process historical databases has become an active area of research [3-5]. In order to ensure product quality and process safety, extraction of meaningful information from the process database seems a natural and logical choice. Statistical performance monitoring detects process faults, abnormal situations and hidden dangers in the process, followed by diagnosis of the fault. The diagnosis of abnormal plant operation can be greatly facilitated if periods of similar plant performance can be located in the historical database. New methodologies based on clustering of time series data and on moving window based pattern matching can be used for detecting faulty conditions as well as for differentiating among various normal operating conditions.
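A minimal sketch of the moving window idea (illustrative only; the similarity measure shown is the standard PCA similarity factor, and the data, window length and step size are invented): a historical record is scanned with a sliding window, and each window is compared with a current snapshot through the angles between their principal subspaces.

```python
import numpy as np

def pca_subspace(window, k=2):
    # Top-k principal directions of a (samples x variables) data window
    w = window - window.mean(axis=0)
    _, _, vt = np.linalg.svd(w, full_matrices=False)
    return vt[:k].T                     # columns = principal directions

def similarity_factor(w1, w2, k=2):
    # PCA similarity factor: mean squared cosine between the two
    # k-dimensional principal subspaces (1 = identical orientation)
    u1, u2 = pca_subspace(w1, k), pca_subspace(w2, k)
    return np.trace(u1.T @ u2 @ u2.T @ u1) / k

rng = np.random.default_rng(1)
history = rng.normal(size=(500, 4))     # hypothetical historical record
snapshot = history[100:150]             # pattern we are searching for
scores = [similarity_factor(snapshot, history[i:i + 50])
          for i in range(0, 450, 10)]   # window moved in steps of 10
print(int(np.argmax(scores)) * 10)      # start of the best-matching window
```

Since the snapshot was cut from the record at sample 100, the similarity factor peaks at exactly that window position.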
1.4 MULTIVARIATE STATISTICAL AND NEURO STATISTICAL
CONTROLLER
Over the last decade, the design and development of data based model control has gathered momentum, and this trend deserves an explanation. Identification and control of a chemical process is a challenging task because of its multivariate, highly correlated and nonlinear nature. Very often a large number of process variables are to be measured, giving rise to a high dimensional database characterizing the process of interest. To extract meaningful information from such a database, meticulous preprocessing of the data is mandatory. Alternatively, a high dimensional data set may be seen through a smaller window by projecting the data along a few selected dimensions of maximum variability. Principal component decomposition of the data into latent subspaces provides the opportunity to compress the data without losing meaningful information. Originally, static process data were used for the aforesaid PCA decomposition.
Stationary time series data consisting of process inputs and outputs may be correlated by models such as ARX (autoregressive with exogenous inputs), ARMA (autoregressive moving average), ARMAX (autoregressive moving average with exogenous inputs) and ARIMA (autoregressive integrated moving average). This kind of identification of process dynamics requires a priori selection of the model order and handles a large number of parameters, and hence encourages empiricism. Prior to applying those models for time series identification, firstly any nonstationarity has to be removed from the data, and secondly the cross correlation coefficients have to be determined in order to detect the maximum effect on specific outputs of inputs with lags of 0, 1, 2, 3, and so on. The partial least squares technique with an inner PCA core offers an alternative for identification of process dynamics. PLS reduces the dimensionality of the measured data, finds latent variables in it by capturing the largest variance in the data, and achieves the maximum correlation between the predictor (X) and response (Y) variables.
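The ARX route mentioned above amounts to ordinary least squares on lagged input-output data. A hedged sketch (the process coefficients, model orders and noise level below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulate a SISO ARX(2,2) process:
#   y[t] = 1.2 y[t-1] - 0.5 y[t-2] + 0.4 u[t-1] + 0.1 u[t-2] + noise
N = 400
u = rng.normal(size=N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = (1.2 * y[t-1] - 0.5 * y[t-2]
            + 0.4 * u[t-1] + 0.1 * u[t-2] + 0.01 * rng.normal())

# ARX identification: stack the lagged outputs/inputs as regressors
# and solve the least squares problem for the model parameters
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 2))   # estimates of the four ARX coefficients
```

The recovered parameter vector is close to the true coefficients, but note that the model order (two output lags, two input lags) had to be fixed in advance, which is exactly the empiricism the text refers to.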
In PLS based process dynamics, the inner relationship between the input (t) and output (u) scores, and hence the process dynamics in the latent subspace, could not be well identified by linear or quadratic relationships. In view of this, neural identification of process dynamics in the latent subspace, or NNPLS based process identification, deserves attention.
Any physical model based on first principles is designed on the basis of certain heuristics. The generality of such a model is at a low ebb when the process is nonlinear and complex, and a transfer function developed from such a model results in poor controller performance irrespective of the controller chosen. To get out of this situation, data based models are a logical choice. Discrete time domain (z) transfer functions developed using ARX, ARMA and ARMAX models are less desirable because of their extreme dependence on a large number of model parameters. In view of this, data driven PLS and NNPLS identified processes (transfer functions) are a logical alternative for designing multivariable controllers.
Development of PLS and NNPLS controllers and evaluation of their performance (in both servo and regulator modes) for some selected benchmark processes is one of the scopes of the present work. Discrete input-output time series data (X-Y) were used for this purpose. By combining PLS with an inner ARX model structure, nonlinear dynamic processes could be modeled, apart from using neural networks, which logically built up the framework for PLS based process controllers. The inverse dynamics of the latent variable based process was identified as a direct inverse neural controller (DINN) using the historical database of latent variables. The disturbance process in the lower subspace was also identified using a neural network.
For multivariable processes, partial least squares (PLS) controllers offer the opportunity to be designed as a series of SISO controllers (Qin and McAvoy (1992, 1993)) [6-7]. Because of the diagonal structure of the dynamic part of the PLS model, input-output pairings are automatic. A series of SISO controllers, designed on the basis of the dynamic models identified in the latent subspaces and embedded in the PLS framework, is used to control the process. To date there is no reference on NNPLS controllers in the open literature, although PLS and NNPLS based process identification and PLS controllers are well documented.
1.5 PLANTWIDE CONTROL
Most industrial processes have multiple units, which interact with each other. The subject of plant wide control deals with these inter unit interactions through the proper selection of manipulated and measured variables and of suitable control strategies. The
goal of any process design is to minimize capital costs while operating with optimum
utilization of material and energy. Recycle streams in process plants have been a source of
major challenge for control system design.
SIMULINK can be used as the platform for plant wide process simulation. Apart from classical controllers, artificial neural network (ANN) based controllers can be used to control the plant wide process of interest. Over the last decade, focused R&D activity on plant wide control has been taken up by several researchers [8-11].
1.6 OBJECTIVE
In the context of the aforesaid discussion, it is worth mentioning that the application of multivariate statistical process monitoring and control deserves extensive research before its widespread use and commercialization. For very complex and nonlinear processes, data based process monitoring, identification and control seem unequivocally superior to physical model based approaches. Expert systems or knowledge based systems can be developed by integrating process knowledge derived from the process database. In view of this, the objectives of the present dissertation are as follows:
• Multivariable process monitoring using MVSPC techniques
• Process identification in latent subspaces and multivariate statistical process
control for benchmark multivariable processes
• Plant wide process control.
1.7 ORGANIZATION OF THE THESIS
The first chapter renders an overview of multivariate processes, chemometric techniques and their use in process monitoring, identification of process dynamics and control, as well as plant wide process control; it also presents the objectives of the present work along with the thesis outline. The second chapter emphasizes the different chemometric techniques used, with mention of significant previous contributions in this field. In chapter three, statistical process monitoring algorithms with case studies are presented. Applications of statistical and neuro statistical process identification and control and of plant wide control are the excerpts of the fourth chapter. On an ending note, the fifth chapter concludes with recommendations for future research initiatives.
REFERENCES
1. Wold, H., "Estimation of principal components and related models by iterative least squares." In Multivariate Analysis, Krishnaiah, P. R., Ed., Academic Press, New York, 1966: 391-420.
2. Geladi, P., Kowalski, B. R., "Partial least-squares regression: A tutorial." Anal. Chim. Acta 1986, 185: 1-17.
3. Agrawal, R., Stolorz, P., Piatetsky-Shapiro, G., Eds., Proceedings of the 4th International Conference on Knowledge Discovery and Data Mining; AAAI Press: Menlo Park, CA, 1998.
4. Apté, C., "Data mining: an industrial research perspective." IEEE Comput. Sci. Eng. 1997, 4(2): 6-9.
5. Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., "The KDD process for extracting useful knowledge from volumes of data." Commun. ACM 1996, 39: 27-34.
6. Qin, S. J., McAvoy, T. J., "Nonlinear PLS modeling using neural networks." Comput. Chem. Eng. 1992, 16(4): 379-391.
7. Qin, S. J., "A statistical perspective of neural networks for process modelling and control." In Proceedings of the 1993 International Symposium on Intelligent Control, Chicago, IL, 1993: 559-604.
8. Luyben, W. L., Tyreus, B. D., Luyben, M. L., Plantwide Process Control. McGraw-Hill, New York, 1999.
9. Stephanopoulos, G., Chemical Process Control. Prentice Hall, Englewood Cliffs, NJ, 1984.
10. Douglas, J. M., Conceptual Design of Chemical Processes. McGraw-Hill, New York, 1988.
11. Downs, J. J., Vogel, E. F., "A plant-wide industrial process control problem." Comput. Chem. Eng. 1993, 17: 245-255.
CHEMOMETRIC TECHNIQUES AND PREVIOUS WORK
2.1 INTRODUCTION
Chemometrics is the science of relating measurements made on a chemical system to the state of the system via the application of mathematical or statistical methods. The data based approaches of supervised learning, unsupervised learning and multivariate statistical techniques fall under this category. Data based modeling is one of the most recently added crafts in process identification, monitoring and control. PCA belongs to the unsupervised and PLS to the supervised category of chemometric models, and these have been the mathematical foundation of the present work. A parsimonious presentation of the statistical rationale is included in this chapter. Chemometric techniques offer new approaches to process monitoring and control; hence it is appropriate to describe the state of the art of those advancements.
2.2 THEORETICAL POSTULATION ON PCA
PCA is an MVSPC technique used for data compression without loss of valuable information. Principal components (PCs) form a transformed set of coordinates orthogonal to each other, the first PC being the direction of largest variation in the data set. The projection of the original data onto the PCs produces the score data, or transformed data, as a linear combination of those fewer mutually orthogonal dimensions. The PCA technique is applied to the autoscaled data matrix to determine the principal eigenvectors, the associated eigenvalues, and the scores (the transformed data along the principal components). The drawbacks are that the new latent variables often have no physical meaning and that the user has little control over the possible loss of information.
Generally, PCA is a mathematical transform used to find correlations and explain variance in a data set. The goal is to map each raw data vector $x$ onto a vector $z$, where $x$ can be represented as a linear combination of a set of $m$ orthonormal vectors $v_i$:

$$x = \sum_{i=1}^{m} z_i v_i \qquad (2.1)$$
where the coefficients ˴
ì
can be found from the equation ˴
ì
= ˯
ì
1
˲. This corresponds to a
rotation of the coordinate system from the original ˲ to a new set of coordinates given by z.
To reduce the dimensions of the data set, only a subset (k <m) of the basic feature vectors are
preserved. The remaining coefficients are replaced by constants b
ì
and each vector x is then
approximated as
$$\tilde{x} = \sum_{i=1}^{k} z_i u_i + \sum_{i=k+1}^{m} b_i u_i \qquad (2.2)$$
The basis vectors $u_i$ are called principal components and are equal to the eigenvectors of
the covariance matrix of the data set. The coefficients $b_i$ and the principal components should
be chosen such that the best approximation of the original vector is obtained on average.
However, the reduction of dimensionality from $m$ to $k$ causes an approximation error. The
sum of squares of the errors over the whole data set is minimized if we select the vectors
$u_i$ that correspond to the largest eigenvalues of the covariance matrix. As a result of the PCA
transformation, the original data set is represented in fewer dimensions (typically 2-3) and the
measurements can be plotted in the same coordinate system. These plots show the relation
between different observations or experiments. Groupings of data points in those biplots
suggest common properties, which can be used for classification.
Considering the following matrix:

$$X = \begin{bmatrix} x_{11} & \cdots & x_{1m} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{nm} \end{bmatrix} \qquad (2.3)$$

where each row in $X$ represents one measurement and the number of columns $m$ is equal to
the length of the measurement sequence, i.e. the number of features. Following the steps
described above, the covariance matrix $C = \mathrm{cov}(X)$ and its eigenvalues $\lambda$ were
calculated. Its eigenvectors $u_i$ form
an orthonormal basis $U = [u_1, u_2, u_3, \ldots, u_m]$; that is, $U^T U = I$. The original data set
can be represented in the new basis using the relation $Z = XU$. After this transformation, a
new data matrix of reduced dimension can be constructed with the help of the eigenvalues of
the matrix $C$. This is done by selecting the highest values of $\lambda$, since they correspond to
the principal components with the highest significance. The number of PCs to be included
should be high enough to ensure good separation between the classes. Principal components
with low contribution (low values of $\lambda$) should be neglected. Let the first $k$ PCs be
selected as new features, neglecting the remaining $(m-k)$ principal components. In this way,
a new data matrix $D$ of dimension $n \times k$ was obtained:
$$D = \begin{bmatrix} z_{11} & \cdots & z_{1k} \\ \vdots & \ddots & \vdots \\ z_{n1} & \cdots & z_{nk} \end{bmatrix} \qquad (2.4)$$
The PCA score data sets are grouped into a number of classes following the rule of the
nearest neighborhood clustering algorithm. In the present work, PCA based similarity was
used for process monitoring purposes.
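As a concrete illustration, the eigendecomposition route of equations (2.1)-(2.4) can be sketched in a few lines of numpy. The function name `pca_scores` and the synthetic five-variable data below are illustrative assumptions, not part of the thesis work itself:

```python
import numpy as np

def pca_scores(X, k):
    """Sketch of equations (2.1)-(2.4): autoscale X (n x m), take the
    eigenvectors of its covariance matrix as the basis U, and keep the k
    components with the largest eigenvalues; D = X U_k is the score matrix."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # autoscaling
    C = np.cov(Xs, rowvar=False)                        # covariance matrix C = cov(X)
    lam, U = np.linalg.eigh(C)                          # eigenvalues and eigenvectors
    order = np.argsort(lam)[::-1]                       # sort by descending eigenvalue
    lam, U = lam[order], U[:, order]
    D = Xs @ U[:, :k]                                   # n x k score matrix D, eq. (2.4)
    return D, U[:, :k], lam

# usage: five highly correlated variables compress to two latent dimensions
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = np.hstack([t + 0.1 * rng.normal(size=(100, 1)) for _ in range(5)])
D, Uk, lam = pca_scores(X, 2)
```

Here the first eigenvalue carries almost all of the variance, so the 100 x 5 data collapse to a 100 x 2 score matrix with very little loss of information.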
2.3 SIMILARITY
Similarity measures among datasets pertaining to various operating conditions of a
process provide the opportunity to cluster them into various groups, and hence to classify
them. Fault detection and diagnosis (FDD) relies comfortably on this principle, which is
conducive to safe plant operation. Apart from the Euclidean and Mahalanobis distances,
some new criteria have recently been proposed to determine the similarity or dissimilarity
among process historical databases.
2.3.1 PCA Based Similarity
The PCA similarity factor was developed by choosing the largest $k$ principal components of
each multivariate time series dataset that describe at least 95% of the variance in each
dataset. These principal components are the eigenvectors of the covariance matrix. The PCA
similarity factor between two datasets is defined by equation (2.5):

$$S_{PCA} = \frac{1}{k} \sum_{i=1}^{k} \sum_{j=1}^{k} \cos^2 \theta_{ij} \qquad (2.5)$$
where $k$ is the number of selected principal components in both datasets and $\theta_{ij}$ is the
angle between the $i$th principal component of $X_1$ and the $j$th principal component of $X_2$.
Even when the first two principal components explain 95% of the variance in the datasets,
$S_{PCA}$ may not capture the degree of similarity between two datasets because it weights all
PCs equally. Obviously, $S_{PCA}$ has to be modified to weight each PC by its explained
variance. The modified $S_{PCA}^{\lambda}$ is defined as
$$S_{PCA}^{\lambda} = \frac{\sum_{i=1}^{k} \sum_{j=1}^{k} \lambda_i^{(1)} \lambda_j^{(2)} \cos^2 \theta_{ij}}{\sum_{i=1}^{k} \lambda_i^{(1)} \lambda_i^{(2)}} \qquad (2.6)$$

where $\lambda_i^{(1)}$ and $\lambda_i^{(2)}$ are the eigenvalues of the first and second datasets, respectively.
2.3.2 Distance Based Similarity
In addition to the above similarity measure, a distance similarity factor can be used to
cluster multivariate time series data. The distance similarity factor compares two datasets
that may have similar spatial orientation. It finds its worth when the process variables
pertaining to different operating conditions have similar principal components.
The distance similarity factor is defined as

$$S_{dist} = \frac{2}{\sqrt{2\pi}} \int_{\Phi}^{\infty} e^{-z^2/2}\, dz = 2 \left[ 1 - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\Phi} e^{-z^2/2}\, dz \right] \qquad (2.7)$$

where $\Phi = \sqrt{(\bar{x}_2 - \bar{x}_1)\, \Sigma_1^{*-1}\, (\bar{x}_2 - \bar{x}_1)^T}$, $\bar{x}_1$ and $\bar{x}_2$ are the sample mean
row vectors, $\Sigma_1$ is the covariance matrix for dataset $X_1$ and $\Sigma_1^{*-1}$ is the pseudo-inverse
of $\Sigma_1$. Dataset $X_1$ is assumed to be the reference dataset. In equation (2.7), a one-sided
Gaussian distribution is used because $\Phi \geq 0$. The error function can be calculated using any
software package or standard error function tables. The integration in equation (2.7)
normalizes $S_{dist}$ between zero and one.
2.3.3 Combined Similarity Factor

The combined similarity factor (SF) combines $S_{PCA}^{\lambda}$ and $S_{dist}$ using a weighted average
of the two quantities and is used for clustering multivariate time series data. The combined
similarity factor is defined as

$$SF = \alpha_1 S_{PCA}^{\lambda} + \alpha_2 S_{dist} \qquad (2.8)$$

The selection of $\alpha_1$ and $\alpha_2$ is up to the user, provided that their sum equals one. In this
work, the values of $\alpha_1$ and $\alpha_2$ were selected as 0.67 and 0.33, respectively.
2.4 THEORETICAL POSTULATION ON PLS
Projection to latent structures, or partial least squares (PLS), is a multivariable
statistical regression method based on projecting the information in a high dimensional data
space down onto a low dimensional one defined by a few latent variables. It selects the latent
variables so as to capture the variation in the predictor data X that is most predictive of the
response data Y. PLS has already been successfully applied in diverse fields, including
process monitoring and quality control and the identification of process dynamics and
control, and it deals with noisy and highly correlated data, quite often with only a limited
number of observations available. When dealing with nonlinear systems, this approach
assumes that the underlying nonlinear relationship between the predictor data (X) and the
response data (Y) can be approximated by quadratic PLS (QPLS) or neural network based
PLS (NNPLS) while retaining the outer mapping framework of the linear PLS algorithm.
The X and Y matrices were autoscaled before they were processed by the PLS algorithm. A
PLS model consists of outer relations (the X and Y data are expressed in terms of their
respective scores) and an inner relation that links the X data to the Y data in the latent
subspace.
2.4.1 Linear PLS
The X and Y matrices are scaled in the following way before they are processed by the PLS
algorithm:

$$\bar{X} = X S_X^{-1} \quad \text{and} \quad \bar{Y} = Y S_Y^{-1} \qquad (2.9)$$

where

$$S_X = \begin{bmatrix} s_{x1} & 0 \\ 0 & s_{x2} \end{bmatrix} \quad \text{and} \quad S_Y = \begin{bmatrix} s_{y1} & 0 \\ 0 & s_{y2} \end{bmatrix}$$

are the scaling matrices.
The outer relationships for the input matrix and the output matrix with predictor variables can
be written as

$$X = t_1 p_1^T + t_2 p_2^T + \cdots + t_n p_n^T + E = T P^T + E \qquad (2.10)$$

$$Y = u_1 q_1^T + u_2 q_2^T + \cdots + u_n q_n^T + F = U Q^T + F \qquad (2.11)$$

where $T$ and $U$ represent the matrices of scores of the X and Y data, while $P$ and $Q$ represent
the loading matrices for X and Y. If all the components of X and Y are described, the errors
$E$ and $F$ become zero. The inner model that relates X to Y is the relation between the scores
$T$ and $U$:

$$U = T B \qquad (2.12)$$

where $B$ is the regression matrix. The response Y can now be expressed as

$$Y = T B Q^T + F \qquad (2.13)$$
To determine the dominant direction of projection of the X and Y data, the maximization of
the covariance between X and Y is used as a criterion. The first set of loading vectors $p_1$ and
$q_1$ represents the dominant direction obtained by maximization of the covariance between X
and Y. Projection of the X data on $p_1$ and of the Y data on $q_1$ results in the first set of score
vectors $t_1$ and $u_1$, hence the establishment of the outer relation. The matrices X and Y can
now be related through their respective scores, which is called the inner model, representing a
linear regression between $t_1$ and $u_1$: $\hat{u}_1 = b_1 t_1$. The calculation of the first two
dimensions is shown in Fig. 2.1.
The residuals calculated at this stage are given by the following equations:

$$E_1 = X - t_1 p_1^T \qquad (2.14)$$

$$F_1 = Y - u_1 q_1^T = Y - b_1 t_1 q_1^T \qquad (2.15)$$
The procedure for determining the scores and loading vectors is continued using the newly
computed residuals until they are small enough or the required number of PLS dimensions is
exceeded. In practice, the number of PLS dimensions is determined by the percentage of
variance explained and by cross validation. For a given residual tolerance, the number of
principal components can be much smaller than the original variable dimension. To obtain
the $T$ and $P$ matrices iteratively, the deduction of $t_1 p_1^T$ from X keeps continuing until the
given tolerance is satisfied. This is the so-called NIPALS (nonlinear iterative partial least
squares) algorithm. The $U$ and $Q$ matrices are also found iteratively using NIPALS. The
irrelevant directions originating from noise and redundancy are left in $E$ and $F$. The Y and X
matrices are related following equation (2.13). With the application of PLS, multivariate
regression problems are translated into a series of univariate regression problems.
Fig. 2.1 Standard linear PLS algorithm.
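The NIPALS loop described above can be sketched in numpy as follows. This is a minimal illustrative implementation (the function name and starting choices are assumptions), not the code used in the present work:

```python
import numpy as np

def nipals_pls(X, Y, n_comp, tol=1e-10, max_iter=500):
    """Minimal NIPALS sketch (equations 2.10-2.15): extract one latent
    dimension at a time, fit the inner model u_hat = b t, then deflate."""
    X, Y = X.astype(float).copy(), Y.astype(float).copy()
    T, P, U, Q, B = [], [], [], [], []
    for _ in range(n_comp):
        u = Y[:, [0]]                        # start from a column of Y
        for _ in range(max_iter):
            w = X.T @ u / (u.T @ u)          # X weight vector
            w /= np.linalg.norm(w)
            t = X @ w                        # X scores t
            q = Y.T @ t / (t.T @ t)          # Y loadings q
            q /= np.linalg.norm(q)
            u_new = Y @ q                    # Y scores u
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        p = X.T @ t / (t.T @ t)              # X loadings p
        b = float(u.T @ t / (t.T @ t))       # inner model coefficient b
        X -= t @ p.T                         # deflation E_1 = X - t1 p1^T, eq. (2.14)
        Y -= b * t @ q.T                     # deflation F_1 = Y - b1 t1 q1^T, eq. (2.15)
        T.append(t); P.append(p); U.append(u); Q.append(q); B.append(b)
    return np.hstack(T), np.hstack(P), np.hstack(U), np.hstack(Q), np.array(B)
```

With as many components as the rank of X, the reconstruction $T\,\mathrm{diag}(b)\,Q^T$ recovers a linear Y exactly, which is a convenient sanity check on the implementation.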
2.4.2 Dynamic PLS
For the incorporation of a linear dynamic relationship of time series data in the PLS
framework, the decomposition of the X block is given by equation (2.10); the dynamic
analogue of equation (2.11) is as follows:

$$Y = G_1(t_1)\, q_1^T + G_2(t_2)\, q_2^T + \cdots + G_n(t_n)\, q_n^T + F = Y_1^{exp} + Y_2^{exp} + \cdots + Y_n^{exp} + F \qquad (2.16)$$
where the $G_i$ denote the linear dynamic models identified in each latent subspace by ARX as
well as FIR models, and $G_i(t_i)\, q_i^T$ is a measure of the Y space explained by the $i$th PLS
dimension in the latent subspaces; $Y_1^{exp}$ is the explained Y data in the first dimension. $G$ is
the diagonal matrix comprising the dynamic elements identified at each of the $n$ latent
subspaces. Equation
(2.17) represents the linear second order ARX structure, which relates the input and output
signals of a time series:

$$y(k) + a_1 y(k-1) + a_2 y(k-2) = b_1 x(k-1) + b_2 x(k-2) \qquad (2.17)$$
where $y(k)$ is the output at the $k$th instant and $x(k)$ is the input. The parameters of the ARX
based inner dynamic models relating the scores $T$ and $U$ were estimated by least squares
regression. The ARX based input matrix used in the regression analysis is as follows:

$$X_{ARX} = \{U_{k-1},\; U_{k-2},\; T_{k-1},\; T_{k-2}\} \qquad (2.18)$$

The Finite Impulse Response (FIR) model was also tested for inner model development.
The FIR based input matrix is as follows:

$$X_{FIR} = \{T_{k-1},\; T_{k-2},\; T_{k-3},\; T_{k-4}\} \qquad (2.19)$$
$T$ and $U$ represent the matrices of scores of X and Y, respectively. The identified process
transfer function in the discrete time domain is

$$G_p(z) = \frac{Y(z)}{X(z)} = \frac{b_1 z + b_2}{z^2 + a_1 z + a_2} \qquad (2.20)$$
Post-compensation of the $U$ matrix (the PLS inner dynamic model output) with the loading
matrix $Q$ provided the PLS predicted output Y. The input matrix $T$ to the PLS inner dynamic
model was generated by post-compensating the original X matrix with the loading matrix $P$.
Figure 2.2 represents the PLS based identification of process dynamics. Prior to dynamic
modeling, the order of the model should be selected, which is difficult to choose.
Autocorrelation of the signals renders a good indication of the order, which determines how
many past input and past output values are taken in the input matrix for the FIR and ARX
models. The model parameters for both the ARX and FIR models are estimated by the linear
least squares technique.
Fig. 2.2 Schematic of PLS based dynamics
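The least squares estimation of the second order ARX inner model of equations (2.17)-(2.18) can be sketched as below; `fit_arx_inner` is an illustrative name, and the score sequences are assumed to be one latent dimension's $t$ and $u$:

```python
import numpy as np

def fit_arx_inner(t, u):
    """Fit u(k) + a1*u(k-1) + a2*u(k-2) = b1*t(k-1) + b2*t(k-2) (eq. 2.17)
    by linear least squares; returns [a1, a2, b1, b2]."""
    # regressor matrix of past outputs and past inputs, cf. eq. (2.18)
    R = np.column_stack([-u[1:-1], -u[:-2], t[1:-1], t[:-2]])
    theta, *_ = np.linalg.lstsq(R, u[2:], rcond=None)
    return theta
```

On noise-free data generated by a stable second order ARX model, the regression recovers the true parameters essentially exactly.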
2.5 PREVIOUS WORK
Data mining of process historical databases has received attention in the computer
science literature; however, problems involving time series identification and control have
been addressed only recently. It is essential to extract the required information regarding
abnormality from the data in order to detect and diagnose abnormality in the process.
Traditional clustering techniques were extensively used for grouping similar objects having
analogous characteristics into the same cluster. The information acquired from the clusters is
helpful in decision making, fault detection and process monitoring. Sudjianto & Wasserman
(1996) and Trouve & Yu (2000) successfully applied principal component analysis to
extract features from large datasets [1-2]. Kano et al. (2000) implemented various PCA
based statistical process monitoring methods using simulated data obtained from the
Tennessee Eastman process [3]. Wise and Gallagher (1996) reviewed some of the
chemometric techniques and their application to chemical process monitoring and dynamic
process modeling [4]. Numerous clustering techniques are available for clustering
univariate time series data [5-7]. Very few researchers have reported the clustering of
dynamic and high dimensional multivariate time series data based on similarity measures.
Wang and McGreavy (1998) used clustering methods to classify abnormal behavior of a
refinery fluid catalytic cracking process [8]. Ng and Huang (1999) also used a clustering
approach to classify different types of stars based on their light curves [9]. Johannesmeyer
and Seborg (1999) developed an efficient technique for locating similar records in the
historical database using PCA similarity factors [10]. Huang et al. (2000) used PCA to cluster
multivariate time series data by splitting large clusters into small clusters based on the
percentage of variance explained by principal component analysis. This method is restrictive
if the number of principal components is not known a priori and because predetermined
principal components are inadequate for some operating conditions. Such feature extraction
serves as dimensionality reduction of the datasets [11]. Singhal & Seborg (2005) used PCA
and Mahalanobis distance similarity measures for locating similar operating conditions in
large multivariate databases [12]. In juxtaposition, Kavitha & Punithavalli (2010) claimed that
all traditional clustering or unsupervised algorithms are inappropriate for real time data [13].
The diagnosis of abnormal plant operation can be greatly facilitated if periods of
similar plant performance can be located in the historical database. Singhal and Seborg
(2002) proposed a novel methodology based on PCA and distance similarity for pattern
matching problems. For a given time period of interest in a multivariate time series (the
template data), a similar pattern can be located in the historical database using the proposed
pattern matching algorithm [14].
Development of a multivariate statistical controller is one of the objectives of the
present dissertation. Partial least squares is one of the important data based multivariable
statistical process control (MVSPC) techniques. It finds the latent variables (through
principal component decomposition, PCA) from the measured data by capturing the largest
variance in the data and achieves the maximum correlation between the predictor (X)
variables and the response (Y) variables. When dealing with nonlinear systems, the
underlying nonlinear relationship between the predictor variables (X) and the response
variables (Y) can be approximated by quadratic PLS (QPLS) or splines. Sometimes these
may not function well when the nonlinearities cannot be described by a quadratic
relationship. Qin and McAvoy (1992; 1993) suggested a new approach that replaces the
inner model by a neural network model, followed by focused R&D activities taken up by
several other researchers such as Wilson et al. (1997), Holcomb & Morari (1992), Malthouse
et al. (1997), Zhao et al. (2006) and Lee et al. (2006) [15-21]. This NNPLS approach
employs a neural network as the inner model while keeping the outer mapping framework of
the linear PLS algorithm. Kaspar and Ray (1993) demonstrated their approach for
identification and control problems using linear PLS models [22]. Lakshminarayanan et al.
(1997) proposed the ARX/Hammerstein model as a modified PLS inner relation and used it
successfully in identifying dynamic models and proposing PLS based feedforward and
feedback controllers [23].
Over the last five years or so, the design and development of data based monitoring
and control has been a focused area of applied research. It is appropriate here to mention
some of the efforts addressing diversified fields of application. Akamatsu et al. (2000)
proposed data based control of an industrial tubular reactor [24]. Rosen (2001) applied a
chemometric approach to process monitoring and control of a wastewater treatment
plant [25]. Cerrillo and MacGregor (2005) used latent variable models (LVM) for controlling
batch processes [26]. Magda and Ordonez (2008) implemented multivariate statistical process
control and case based reasoning for situation assessment of sequencing batch reactors [27].
Kano et al. (2008) implemented dynamic partial least squares regression in developing an
inferential control system for distillation compositions [28]. AlGhazzawi and Lennox (2009)
developed an MPC condition monitoring tool based on multivariate statistical process control
(MSPC) techniques [29]. Laurí et al. (2010) implemented data driven latent variable model
based predictive control (LVMPC) for a 2×2 distillation process, claiming their proposition
outperformed the conventional data driven MPC in terms of reliable multistep-ahead
predictions with reduced computational complexity and reference tracking [30]. They have
also chronicled how the latent variable models (LVM) were implemented for various
purposes in recent years.
Multiloop (decentralized) conventional control systems (especially PID controllers)
with decouplers are often used to control interacting multiple input multiple output processes
because of their ease of understanding, apart from MPC (model predictive) and IMC based
multivariable controllers. In this context, Damarla and Kundu (2010) decoupled a
multivariable bioreactor process and used the inverse dynamics of the decoupled process to
create a series of neural network based SISO (NNSISO) controllers, which were tuned
independently without influencing the performance of the other loops [31]. For multivariable
processes, partial least squares (PLS) controllers offer the opportunity to be designed as a
series of SISO controllers instead of using multivariable controllers. To date there is no
reference on NNPLS controllers in the open literature, though PLS controllers are well
documented. In view of this, the present work aims at the development of NNPLS
controllers.
Over the last decade, focused R&D activity on plant wide control has been taken up by
several researchers [32-35]. The present study aims at one incremental step in this regard by
incorporating neural controllers beside the classical controllers for plant wide process control.
REFERENCES
1. Sudjianto, A., Wasserman, G. S., “A nonlinear extension of principal component
analysis for clustering and spatial differentiation.” IIE Trans. 1996, 28: 1023–1028.
2. Trouve, A., Yu, Y., “Unsupervised clustering trees by nonlinear principal component
analysis.” In Proc. 5th Intl. Conf. on Pat. Rec. and Image Analysis: New Info. Tech.
Samara, Russia, 2000: 110–114.
3. Kano, M., Nagao, K., Hasebe, S., Hashimoto, I., Ohno, H., Strauss, R., Bakshi, B.,
“Comparison of statistical process monitoring methods: Application to the Eastman
challenge problem.” Comput. Chem. Engr. 2000, 24: 175-181.
4. Wise, B. M., Gallagher, N. B., “The process chemometrics approach to process
monitoring and fault detection.” J. Process Contr. 1996, 6: 329–348.
5. Agrawal, R., Gehrke, J., Gunopulos, D., Raghavan, P., “Automatic subspace
clustering of high dimensional data for data mining applications.” In Proc. ACM
SIGMOD Intl. Conf. on Management of Data. Seattle, WA, 1998: 94–105.
6. Anderberg, M. R., “Cluster analysis for applications.” New York: Academic Press,
1973.
7. Allgood, G. O., Upadhyaya, B. R., “A model-based high-frequency matched filter
arcing diagnostic system based on principal component analysis.” In Proc. of the
SPIE—The Intl. Soc. for Optical Engr. Orlando, FL, 2000, 4055: 430–440.
8. Wang, X. Z., McGreavy, C., “Automatic classification for mining process operational
data.” Ind. Eng. Chem. Res. 1998, 37: 2215–2222.
9. Ng, M. K., Huang, Z., “Data mining massive time-series astronomical data:
challenges and solutions”. Inform.Software Tech. 1999, 41: 545–556.
10. Johannesmeyer, M. C., Seborg, D. E., “Abnormal situation analysis using pattern
recognition techniques.” AIChE Annual Meeting, Dallas, TX, 1999.
11. Huang, Y., McAvoy, T. J., Gertler, J., “Fault isolation in nonlinear systems with
structured partial principal component analysis and clustering analysis.” Can. J.
Chem. Eng. 2000, 78: 569–577.
12. Singhal, A., Seborg, D. E., “Clustering multivariate time series data.” J. Chemometr.
2005, 19: 427-438.
13. Kavitha, V., Punithavalli, M., “Clustering time series data stream: A literature
survey.” IJCSIS. 2010, 8: 289-294.
14. Singhal, A., Seborg, D. E., “Pattern matching in historical batch data using PCA.”
IEEE Contr. Syst. Mag. 2002, 22: 53-63.
15. Qin, S. J., McAvoy, T. J., “Nonlinear PLS modeling using neural network.” Comput.
Chem. Engr. 1992, 16(4): 379-391.
16. Qin, S. J., “A statistical perspective of neural networks for process modelling and
control.” In Proceedings of the 1993 International Symposium on Intelligent Control,
Chicago, IL, 1993: 559-604.
17. Wilson, D. J. H., Irwin, G. W., Lightbody, G., “Nonlinear PLS using radial basis
functions.” Trans. Inst. Meas. Control. 1997, 19(4): 211-220.
18. Holcomb, T. R., Morari, M., “PLS/neural networks.” Comput. Chem. Engr. 1992,
16(4): 393-411.
19. Malthouse, E. C., Tamhane, A. C., Mah, R. S. H., “Nonlinear partial least squares”.
Comput. Chem. Engr. 1997, 21 (8): 875-890.
20. Zhao, S. J., Zhang, J., Xu, Y. M., Xiong, Z. H., “Nonlinear projection to latent
structures method and its applications.” Ind. Eng. Chem. Res. 2006, 45: 3843-3852.
21. Lee, D. S., Lee, M. W., Woo, S. H., Kim, Y., Park, J. M., “Nonlinear dynamic partial
least squares modeling of a full-scale biological wastewater treatment plant.” Process
Biochem. 2006, 41: 2050-2057.
22. Kaspar, M. H., Ray, W. H., “Dynamic modeling for process control.” Chem. Eng. Sci.
1993, 48 (20): 3447-3467.
23. Lakshminarayanan, S., Sirish, L., Nandakumar, K., “Modeling and control of
multivariable processes: The dynamic projection to latent structures approach.”
AIChE J. 1997, 43: 2307-2323.
24. Akamatsu, K., Lakshminarayanan, S., Manako, H., Takada, H., Satou, T., Shah, S. L.,
“Data-based control of an industrial tubular reactor.” Control Eng. Pract. 2000, 8 (7):
783-790.
25. Rosen, C., “A chemometric approach to process monitoring and control with
applications to wastewater treatment operation.” Ph. D Thesis, Department of
Industrial Electrical Engineering and Automation, Lund University, Sweden, 2001.
26. Cerrillo, F., MacGregor, J., “Latent variable MPC for trajectory tracking in batch
processes.” J. Process Contr. 2005, 15: 651–663.
27. Magda, L., Ordonez, R., “Multivariate statistical process control and case based
reasoning for situation assessment of sequencing batch reactors.” Doctoral Thesis,
Girona, Spain, 2008.
28. Kano, M., Nakagawa, Y., “Data-based process monitoring, process control, and
quality improvement: recent developments and applications in steel industry.”
Comput. Chem. Engr. 2008, 32: 12-24.
29. AlGhazzawi, A., Lennox, B., “Model predictive control monitoring using multivariate
statistics.” J. Process Contr. 2009, 19: 314–327.
30. Laurí, D., Rossiter, J. A., Sanchis, J., Martínez, M., “Data-driven latent-variable
model-based predictive control for continuous processes.” J. Process Contr. 2010, 20
(10): 1207-1219.
31. Damarla, S. K., Kundu, M., “Design of multivariable controllers using a classical
approach.” IJCEA. 2010, 1 (2): 165-172.
32. Luyben, W. L., Tyreus, B. D., Luyben, M. L., “Plantwide process control.”
McGraw-Hill, New York, 1999.
33. Stephanopoulos, G., “Chemical process control.” Prentice Hall, Englewood Cliffs,
NJ., 1984.
34. Douglas, J. M., “Conceptual design of chemical processes.” McGraw Hill, New York,
1988.
35. Downs, J. J., Vogel, E. F., “A plant-wide industrial process control problem.”
Comput. Chem. Engr. 1993, 17: 245-255.
MULTIVARIATE STATISTICAL PROCESS MONITORING
3.1 INTRODUCTION
Quality and safety are the two important aspects of any production process. With the
invention of multisensor arrays, mature data capture technology, and advancements in data
collection, compression and storage, data driven process monitoring, including product
quality monitoring and fault detection and diagnosis, is getting due attention and widespread
acceptance. In view of this, an MVSPC method based on clustering time series data and a
moving window based pattern matching technique was adopted for process monitoring.
Biochemical reactor, drum boiler, continuous stirred tank with cooling jacket and Tennessee
Eastman challenge processes were taken up to implement the proposed monitoring
techniques. Instead of using first-hand plant data, the above processes were modeled using
first principles and perturbed at industrially relevant operating conditions, including faulty
ones, to create vector time series databases.
3.2 CLUSTERING TIME SERIES DATA: APPLICATION IN PROCESS
MONITORING
Clustering time series data depends on measures of similarity. The similarity factor
depends on PCA, especially on the angles between the principal components in the latent
subspaces. PCA was successfully applied by Kourti & MacGregor (1996) and Martin
& Morris (1996) to cluster multivariate time series data [1-2]. A modified K-means clustering
algorithm using similarity measures as a convergence criterion has been used for clustering
datasets pertaining to different operating conditions, including faulty ones. Two indices,
cluster purity and efficiency, were determined as performance indices of the proposed
method. Different kinds of similarity, including PCA similarity, weighted PCA similarity and
distance based similarity, have already been discussed in Chapter 2.
Monitoring time series data pertaining to various operating conditions depends upon
successful discrimination followed by classification. Discrimination is concerned with
separating distinct sets of objects (or observations) on a one-time basis in order to investigate
observed differences when causal relationships are not well understood.
objective of classification is to allocate new objects (observations) to predefined groups based
on a few well defined rules evolved out of discrimination analysis of allied group of
observations. Discrimination and classification among the time series data can be done using
multivariate statistics. In these procedures, an underlying probability model must be assumed
in order to calculate the posterior probability upon which the classification decision is made.
One major limitation of the statistical methods is that they work well only when the
underlying assumptions are satisfied.
3.2.1 K-means Clustering Using Similarity Factors

The clustering technique is more primitive in that no a priori assumptions are made
regarding the group structures. Grouping of the data can be made on the basis of similarities
or distances (dissimilarities). The number of clusters K can be pre-specified or can be
determined iteratively as part of the clustering procedure. In general, K-means clustering
proceeds in three steps, which are as follows:
1. Partitioning of the items into K initial clusters.
2. Assigning an item to the cluster whose centroid is nearest (the distance is usually
Euclidean), then recalculating the centroid for the cluster receiving the new item and
for the cluster losing that item.
3. Repeating step 2 until no more reassignments take place, or stable cluster tags are
available for all the items.
K-means clustering has the specific advantage of not requiring the distance matrix required
in hierarchical clustering, hence ensuring faster computation. The time series data
pertaining to various operating conditions were discriminated and classified using the
following similarity based K-means algorithm.
Given: $Q$ datasets $\{x_1, x_2, \ldots, x_q, \ldots, x_Q\}$ to be clustered into $K$ clusters.

1. Let the $j$th dataset in the $i$th cluster be defined as $x_i^j$. Computation of the aggregate
dataset $X_i$ $(i = 1, 2, \ldots, K)$ for each of the $K$ clusters as

$$X_i = \left[ (x_i^1)^T \;\; \cdots \;\; (x_i^j)^T \;\; \cdots \;\; (x_i^{Q_i})^T \right]^T \qquad (3.1)$$

where $Q_i$ is the number of datasets in the $i$th cluster. Note that $\sum_{i=1}^{K} Q_i = Q$.

2. Calculation of the dissimilarity between dataset $x_q$ and each of the $K$ aggregate datasets
$X_i$, $i = 1, 2, \ldots, K$, as

$$d_{i,q} = 1 - SF_{i,q} \qquad (3.2)$$

where $SF_{i,q}$ is the similarity between the $q$th dataset and the $i$th cluster described by
equation (2.6). Let the aggregate dataset $X_i$ in equation (3.2) be the reference dataset.
Dataset $x_q$ is assigned to the cluster to which it is least dissimilar. Repetition of the
aforesaid steps for all $Q$ datasets.
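The algorithm above can be sketched as follows. The function name, the empty-cluster reseeding rule and the user-supplied `similarity` callable (any similarity factor in [0, 1], e.g. that of equation 2.6 or 2.8) are illustrative assumptions:

```python
import numpy as np

def similarity_kmeans(datasets, K, similarity, n_iter=20, seed=0):
    """Similarity-factor K-means sketch: each dataset joins the cluster whose
    aggregate dataset it is least dissimilar to, d_iq = 1 - SF_iq (eq. 3.2)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=len(datasets))     # step 1: initial partition
    for _ in range(n_iter):
        aggregates = []
        for i in range(K):
            members = [d for d, l in zip(datasets, labels) if l == i]
            # aggregate dataset X_i stacks the cluster's members row-wise (eq. 3.1);
            # an empty cluster is reseeded with a randomly chosen dataset
            aggregates.append(np.vstack(members) if members
                              else datasets[rng.integers(len(datasets))])
        new = np.array([min(range(K),
                            key=lambda i: 1.0 - similarity(aggregates[i], d))
                        for d in datasets])             # step 2: reassignment
        if np.array_equal(new, labels):                 # step 3: stable cluster tags
            break
        labels = new
    return labels
```

Any measure that returns values near one for similar datasets can be plugged in; the sketch below uses a simple mean-distance similarity purely for illustration.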
3.2.2 Selection of the Optimum Number of Clusters

Selection of the number of clusters is crucial in the K-means clustering algorithm.
Rissanen (1978) proposed model building and model order selection to calculate the optimum
number of clusters based on model complexity [3]; a large number of resulting clusters
indicates high model complexity, and the method penalizes a more complex model more than
a less complex one. Several methods have been developed to identify the optimum number of
clusters in multivariate time series data, such as the Akaike Information Criterion (AIC) and
the Schwarz Information Criterion (SIC). However, preliminary results obtained using these
methods were not promising. Later, Smyth (1996) introduced cross validation of the model to
identify the optimum number of clusters in the data. In this method, the data is split into two
or more parts; one part is used for clustering whereas the other part is used for cross
validation [4]. Singhal & Seborg (2005) proposed new methodologies for finding the
optimum number of clusters [5]. In the present work, an efficiently designed modified
K-means clustering algorithm ensures the evolution of an optimum number of clusters.
3.2.3 Clustering Performance Evaluation
Krzanowski (1979)developed PCA similarity factor by choosing largest k principal
components of each multivariate time series dataset that describe at least 95 % of variance in
the each dataset[6].Some key definitions were introduced by Singhal &. Seborg (2005) to
evaluate the performance of the clusters obtained using similarity factors [5].
Assume that the number of operating conditions is N_op and that the number of datasets for operating condition j in the database is N_DBj. Cluster purity characterizes each cluster in terms of how many datasets of a particular operating condition are present in the i-th cluster. It is defined as

P_i = (max_j N_ij / N_pi) × 100 %   (3.3)

where N_ij is the number of datasets of operating condition j in the i-th cluster and N_pi is the total number of datasets in the i-th cluster.

Cluster efficiency measures the extent to which an operating condition is distributed over different clusters; it penalizes large values of K when operating condition j is spread over several clusters. It is defined as

η_j = (max_i N_ij / N_DBj) × 100 %   (3.4)

where N_DBj is the number of datasets for operating condition j in the database. The operating condition with the largest number of datasets in a cluster can be considered the dominant operating condition of that cluster.
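Eqs. (3.3) and (3.4) can be computed directly from a cluster-by-condition count table; a short sketch (function names and toy counts are hypothetical):

```python
# counts[i][j] = number of datasets of operating condition j in cluster i
def cluster_purity(counts, i):
    """P_i = 100 * max_j N_ij / N_pi, Eq. (3.3)."""
    return 100.0 * max(counts[i]) / sum(counts[i])

def cluster_efficiency(counts, j, n_dbj):
    """eta_j = 100 * max_i N_ij / N_DBj, Eq. (3.4)."""
    return 100.0 * max(row[j] for row in counts) / n_dbj

counts = [[4, 1],   # cluster 1: 4 datasets of condition 1, 1 of condition 2
          [0, 3]]   # cluster 2: 3 datasets of condition 2
print(cluster_purity(counts, 0))         # 80.0
print(cluster_efficiency(counts, 1, 4))  # 75.0
```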
3.3 MOVING WINDOW BASED PATTERN MATCHING
In this approach, the snapshot or template data, with unknown start and end times of the operating condition, moves (in a sample-wise manner) through the historical data, and the similarity between them is characterized by a combined distance- and PCA-based similarity factor [7-9]. The snapshot data can also be advanced dataset-wise. In order to compare the snapshot data to the historical data, the relevant historical data are divided into data windows of the same size as the snapshot data. The historical datasets are then organized by placing windows side by side along the time axis, which results in equal-length, non-overlapping segments of data. The historical data windows with the largest values of the similarity factors are collected in a candidate pool and are called records, to be analyzed by the process engineer. In the present work, the historical data window moved one observation at a time, with each old observation being replaced by a new one. Pool accuracy, pattern matching efficiency and pattern matching algorithm efficiency are important metrics that quantify the performance of the proposed pattern matching algorithm. The proposed pattern matching technique, as depicted in Fig. 3.1, consists of three steps:
1. Specification of the snapshot (variables and time period)
2. Comparison between the snapshot and periods of the historical data using a moving window
3. Collection of the historical windows with the largest values of the similarity factors

Fig. 3.1 Moving window based pattern matching
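The three steps above can be sketched in code. This is a minimal illustration only: it moves the window one observation at a time as described, but uses a simple mean-distance similarity factor in place of the thesis's combined distance- and PCA-based factor, and all names are hypothetical:

```python
import math

def dist_similarity(win, snap):
    """Similarity in (0, 1]: equals 1 when the column means of the two
    windows coincide (a stand-in for the combined similarity factor)."""
    n_vars = len(snap[0])
    d2 = 0.0
    for j in range(n_vars):
        mw = sum(row[j] for row in win) / len(win)
        ms = sum(row[j] for row in snap) / len(snap)
        d2 += (mw - ms) ** 2
    return math.exp(-math.sqrt(d2))

def moving_window_match(history, snapshot, top=3):
    """Slide a snapshot-sized window over the history one observation at
    a time; return the best-scoring (similarity, start index) pairs."""
    m = len(snapshot)
    scores = [(dist_similarity(history[i:i + m], snapshot), i)
              for i in range(len(history) - m + 1)]
    scores.sort(reverse=True)
    return scores[:top]     # the candidate pool of records
```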
N_P: the size of the candidate pool, i.e. the number of historical data windows that have been labeled "similar" to the snapshot data by a pattern matching technique. The data windows collected in the candidate pool are called records.

N_1: the number of records in the candidate pool that are correctly identified, i.e. actually similar to the current snapshot data (similarity of 1.0).

N_2: the number of records in the candidate pool that are incorrectly identified.

N_P = N_1 + N_2

N_DB: the total number of historical data windows that are actually similar to the current snapshot. In general, N_P ≠ N_DB.

Pool accuracy: P = (N_1 / N_P) × 100 %

Pattern matching efficiency: H = [1 − (N_P − N_1)/N_DB] × 100 %

Pattern matching algorithm efficiency: ξ = (N_P / N_DB) × 100 %
A large value of pool accuracy is important when a small number of specific previous situations must be detected from a small pool of records without evaluating incorrectly identified records. A large value of pattern matching efficiency is required when all of the specific previous situations must be detected from a large pool of records. The proposed method is completely data-driven and unsupervised; no process models or training data are required. The user need specify only the relevant measured variables.
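The metrics above reduce to three one-line formulas; a sketch (function names are illustrative):

```python
def pool_accuracy(n1, n_pool):
    """P = 100 * N1 / N_P."""
    return 100.0 * n1 / n_pool

def matching_efficiency(n1, n_pool, n_db):
    """H = 100 * (1 - (N_P - N1) / N_DB)."""
    return 100.0 * (1.0 - (n_pool - n1) / n_db)

def algorithm_efficiency(n_pool, n_db):
    """xi = 100 * N_P / N_DB."""
    return 100.0 * n_pool / n_db

# E.g. a pool of 5 records with 4 correct, when 5 truly similar windows
# exist in the database: P = 80 %, H = 80 %, xi = 100 %.
```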
3.4 DRUM BOILER PROCESS
The drum boiler is a crucial benchmark process for modeling and control system design. Its modeling and control aspects were addressed by Pellegrinetti & Bentsman (1996), Astrom & Bell (2000), Tan et al. (2002) and Jawahar & Pappa (2005) [10-13]. In the present work, the drum-boiler process has been monitored using similarity measures. The drum-boiler model is derived from first principles and is characterized by a few physical parameters; the parameter values used in the model are those of an 80 MW unit of a Swedish power plant. Sixteen datasets belonging to four operating conditions, including an abnormal operating condition, were generated by perturbing the process in order to evaluate the performance of the proposed techniques.
3.4.1 A Brief Introduction to the Drum-Boiler System
The utility boilers in thermal and nuclear power plants are water-tube drum boilers. This type of boiler usually comprises two separate systems. One of them is the steam-water system, also called the water side of the boiler; in this system, preheated water from the economizer is fed into the steam drum and then flows through the downcomers into the mud drum.
The diagram in Fig. 3.2 shows the drum, which holds water at or near saturation conditions; the denser water flows through the downcomer into the lower header by force of gravity. After being heated further, it returns to the drum through the riser. Between the lower and upper headers are stretches of tubes that constitute the water walls and receive radiant heat from the furnace. Water walls permit the use of high furnace temperatures and combustion rates. Part of the water in these tubes and risers evaporates, so the fluid in the riser is a mixture of water and steam. The difference in density between the water in the riser and in the downcomer provides the motive force that sets up the circulation of water in the boiler system.
The boiler is a reservoir of energy, and the amount of energy stored in each part is a complicated function of temperature and pressure. The model can now be developed in detail, expressing stored energy, input power and output power as functions of the control and state variables. Global mass and energy balances capture much of the behavior of the system.
Fig. 3.2 Drum-boiler process
3.4.2 Modeling
Assumptions:
• The fundamental modeling simplification is that the two phases of the water inside the
system are everywhere in saturated thermodynamic state.
• There is an instantaneous and uniform thermal equilibrium between water and metal
everywhere.
• The energy stored in the steam and water is released or absorbed very rapidly when
pressure changes. This is the key for understanding boiler dynamics. The rapid release of
energy ensures that different parts of the boiler change their temperature in the same way.
• Steady state metal temperature is close to saturation temperature and the temperature
differences are small dynamically.
Global mass and energy balance equations:
The inputs to the system are chosen to be:
• Heat flow rate to the risers, Q
• Feedwater mass flow rate, q_f
• Steam flow rate, q_s
The outputs from the system are chosen to be:
• Drum level, l
• Drum pressure, P
A key feature of the drum boiler is the efficient energy and mass transfer between all parts that are in contact with steam. The mechanisms responsible for the heat transfer are boiling and condensation.

Global mass balance:

d[ρ_s V_st + ρ_w V_wt]/dt = q_f − q_s   (3.5)

Global energy balance:

d[ρ_s h_s V_st + ρ_w h_w V_wt − P V_t + m_t C_p t_m]/dt = Q + q_f h_f − q_s h_s   (3.6)

Total volume of drum, downcomers and risers:

V_t = V_st + V_wt   (3.7)
where ρ_s and ρ_w represent the densities of steam and water, respectively; h_s and h_w the enthalpies of steam and water per unit mass; V_st and V_wt the total steam and water volumes in the system; V_t the total volume of drum, downcomers and risers; m_t the total metal mass; C_p the specific heat of the metal; and t_m the metal temperature.
A simple drum-boiler model is obtained by combining equations (3.5), (3.6) and (3.7) with the saturated steam tables. Mathematically, the model is a differential-algebraic system.
3.4.2.1 Proposed Second Order Boiler Model
To gain a better insight into the key physical mechanisms that affect the dynamic behavior of the system, the state-variable approach is adopted. Drum pressure P is chosen as a key state variable, since it is the most globally uniform variable in the system and is also easily measurable. The variables ρ_s, ρ_w, h_s and h_w are expressed as functions of steam pressure using the steam table. The second state variable is chosen to be the total volume of water in the system, V_wt. Using equation (3.7), V_st is eliminated from equations (3.5) and (3.6).
The state equations then take the following form:

e_11 dV_wt/dt + e_12 dP/dt = q_f − q_s
e_21 dV_wt/dt + e_22 dP/dt = Q + q_f h_f − q_s h_s   (3.8)

where

e_11 = ρ_w − ρ_s
e_12 = V_st ∂ρ_s/∂P + V_wt ∂ρ_w/∂P
e_21 = ρ_w h_w − ρ_s h_s
e_22 = V_st (h_s ∂ρ_s/∂P + ρ_s ∂h_s/∂P) + V_wt (h_w ∂ρ_w/∂P + ρ_w ∂h_w/∂P) − V_t + m_t C_p ∂t_s/∂P   (3.9)
Equations (3.8) and (3.9) constitute a state model of the second-order boiler system. This simplistic model captures the gross behavior of the boiler quite well; in particular, it describes the response of the drum pressure to changes in input power, feedwater and steam flow rates reasonably well. The model has a serious deficiency, however: it does not capture the behavior of the drum water level, because the distribution of steam and water is not taken into account. Drum-level control is difficult due to shrink and swell effects. The drum level may be defined as the level at which the water stands in the boiler; the steam space is the space above the water level.
3.4.2.2 Distribution of Steam in Risers and Drum
The behavior of the drum level can best be described by taking into account the distribution of water and steam in the system. This distribution is considered separately for the riser section and the drum.
3.4.2.2.1 Distribution of steam and water in risers
The steam-water distribution varies along the risers. In the riser section, water exists in two phases, namely the liquid phase (water) and the vapor phase (steam). The mass fraction, or dryness fraction, of a liquid-vapor mixture must be defined prior to further discussion. In a liquid-vapor mixture, the quality x is defined as

x = m_v / (m_v + m_l)

where m_v and m_l are the masses of vapor and liquid in the mixture, respectively. The value of x varies between 0 and 1. In order to determine the average density of the steam-water mixture in the riser, it is necessary to define the void fraction. The void fraction α of a two-phase mixture is a volumetric quantity, given as α = (volume of vapor)/(volume of vapor + volume of liquid).
α and x are related by:

α = 1 / (1 + ((1 − x)/x) e)   or   x = 1 / (1 + ((1 − α)/α)(1/e))   (3.10)

where e = (v_f / v_g) S; v_f and v_g are the specific volumes of saturated liquid and vapor, respectively, and S is the slip ratio of the two-phase mixture. The two phases of the mixture do not travel at the same speed; instead there is a slip between them, which causes the vapor to move faster than the liquid. S is a dimensionless number greater than 1, defined as the ratio of the average vapor velocity to the average liquid velocity at any cross-section of the riser. The slip ratio is neglected in the present work, as it does not have a major influence on the fit to experimental data.
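Eq. (3.10) and its inverse can be written down directly; a minimal sketch (the density values in the example are rough numbers near 8.5 MPa, assumed for illustration, and slip = 1 reproduces the no-slip assumption made above):

```python
def void_fraction(x, rho_s, rho_w, slip=1.0):
    """Void fraction alpha from steam quality x, Eq. (3.10)."""
    e = (rho_s / rho_w) * slip          # e = (v_f / v_g) * S
    return 1.0 / (1.0 + ((1.0 - x) / x) * e)

def steam_quality(alpha, rho_s, rho_w, slip=1.0):
    """Inverse relation of Eq. (3.10)."""
    e = (rho_s / rho_w) * slip
    return 1.0 / (1.0 + ((1.0 - alpha) / alpha) / e)
```

Because the vapor is far less dense than the liquid, a small mass fraction of steam already occupies a large volume fraction.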
The behavior of two-phase flow is complicated and is typically modeled by partial differential equations. To keep a finite-dimensional model, it is assumed that the shape of the distribution is known; the assumed shape is based on solving the partial differential equations in the steady state. There is a linear distribution of the steam mass fraction along the risers:

α_m(ξ) = α_r ξ,   0 ≤ ξ ≤ 1   (3.11)

where ξ is a normalized length coordinate along the risers and α_r is the steam mass fraction at the riser outlet. The volume and mass fractions of steam are related through

α_v = f(α_m)   (3.12)

where

f(α_m) = ρ_w α_m / (ρ_s + (ρ_w − ρ_s) α_m)   (3.13)
For modeling the drum level, the total amount of steam in the drum must be obtained. The governing quantity is the average steam volume fraction in the risers, obtained by integrating equation (3.13) over the limits 0 to 1:

ᾱ_v = ∫₀¹ α_v(ξ) dξ = (1/α_r) ∫₀^α_r f(ξ) dξ
    = (ρ_w/(ρ_w − ρ_s)) [1 − (ρ_s/((ρ_w − ρ_s) α_r)) ln(1 + ((ρ_w − ρ_s)/ρ_s) α_r)]   (3.14)
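The closed form in Eq. (3.14) can be checked against a direct numerical average of f over the riser; a sketch (density values are rough assumed numbers):

```python
import math

def avg_steam_volume_fraction(alpha_r, rho_s, rho_w):
    """Average riser void fraction, Eq. (3.14)."""
    d = rho_w - rho_s
    return (rho_w / d) * (1.0 - (rho_s / (d * alpha_r))
                          * math.log(1.0 + d * alpha_r / rho_s))
```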
The partial derivatives of ᾱ_v with respect to drum pressure and steam mass fraction are:

∂ᾱ_v/∂P = (1/(ρ_w − ρ_s)²)(ρ_w ∂ρ_s/∂P − ρ_s ∂ρ_w/∂P) [1 + (ρ_w/ρ_s)(1/(1 + η)) − ((ρ_s + ρ_w)/(η ρ_s)) ln(1 + η)]

∂ᾱ_v/∂α_r = (ρ_w/(ρ_s η)) [(1/η) ln(1 + η) − 1/(1 + η)]   (3.15)

where η = α_r (ρ_w − ρ_s)/ρ_s.

The transfer of mass and energy between steam and water by condensation and evaporation is a key element of the modeling. When modeling the phases separately, this transfer must be accounted for explicitly; hence the joint balance equations for water and steam are written for the riser section.
3.4.2.2.2 Lumped parameter model
The global mass balance for the riser section is:

d[ρ_s ᾱ_v V_r + ρ_w (1 − ᾱ_v) V_r]/dt = q_dc − q_r   (3.16)

where q_r is the total mass flow rate out of the risers and q_dc is the total mass flow rate into the risers (through the downcomers). The global energy balance for the riser section is:

d[ρ_s h_s ᾱ_v V_r + ρ_w h_w (1 − ᾱ_v) V_r − P V_r + m_r C_p t_s]/dt = Q + q_dc h_w − (α_r h_c + h_w) q_r   (3.17)

where h_c = h_s − h_w is the condensation enthalpy and m_r the riser metal mass.
3.4.2.2.3 Circulation flow
In a natural-circulation boiler, the flow is driven by the density gradients in the risers and downcomers. The flow through the downcomer, q_dc, can be obtained from a momentum balance consisting of three terms: an inertia term, a driving force (which in this case is the density difference) and a friction force for the flow through the pipes:

(a) Inertia force: (L_r + L_dc) dq_dc/dt
(b) Driving force: (ρ_w − ρ_s) ᾱ_v V_r g
(c) Friction force: k q_dc² / (2 ρ_w A_dc)

Combining the three terms, the momentum balance is written as:

(L_r + L_dc) dq_dc/dt = (ρ_w − ρ_s) ᾱ_v V_r g − k q_dc² / (2 ρ_w A_dc)   (3.18)

Equation (3.18) is a first-order system with time constant

τ = (L_r + L_dc) A_dc ρ_w / (k q_dc)   (3.19)

The steady-state relation for the system is:

(1/2) k q_dc² = ρ_w A_dc (ρ_w − ρ_s) ᾱ_v V_r g   (3.20)
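Solving Eq. (3.20) for the circulation flow gives q_dc explicitly; a sketch (the numeric values in the check are assumed, order-of-magnitude inputs, not the thesis's Table 3.1 values):

```python
import math

def downcomer_flow(rho_s, rho_w, A_dc, V_r, alpha_v, k, g=9.81):
    """Steady-state circulation flow q_dc from Eq. (3.20)."""
    return math.sqrt(2.0 * rho_w * A_dc * (rho_w - rho_s)
                     * alpha_v * V_r * g / k)
```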
3.4.2.3 Distribution of Steam in the Drum
The physical phenomena in the drum are complicated: steam enters from many riser tubes, feedwater enters through a complex arrangement, water leaves through the downcomer tubes and steam leaves through the steam valve. The geometry and flow patterns are complex. The basic mechanisms that occur in the drum are the separation of water and steam, and condensation.
The mass balance for the steam under the liquid level is:

d(ρ_s V_sd)/dt = α_r q_r − q_sd − q_cd   (3.21)

where q_cd is the condensation flow, given by

q_cd = ((h_w − h_f)/h_c) q_f + (1/h_c) [ρ_s V_sd dh_s/dt + ρ_w V_wd dh_w/dt − (V_sd + V_wd) dP/dt + m_d C_p dt_s/dt]   (3.22)
The flow q_sd is driven by the density difference between water and steam and by the momentum of the flow entering the drum. The expression for q_sd is an empirical model that fits the experimental data well:

q_sd = (ρ_s/T_d)(V_sd − V_sd⁰) + α_r q_dc + α_r β (q_dc − q_r)   (3.23)

where V_sd⁰ is the volume of steam in the drum in the hypothetical situation when there is no condensation of steam in the drum, T_d is the residence time of the steam in the drum, and β is an empirical parameter.
3.4.2.4 Drum Level
Having accounted for the distribution of steam below the drum level, the drum level itself is now modeled. It is composed of two terms: the total amount of water in the drum, and the displacement due to changes of the steam-water ratio in the risers. The drum level l, measured from its normal operating level, is given by:

l = (V_wd + V_sd)/A_d = l_w + l_s   (3.24)

where l_w = V_wd/A_d and l_s = V_sd/A_d, and

V_wd = V_wt − V_dc − (1 − ᾱ_v) V_r

Here V_wd is the volume of water in the drum, l_w is the level variation caused by changes in the amount of water in the drum, l_s is the level variation caused by the steam in the drum, and A_d is the wet surface of the drum at the operating level.
3.4.2.5 The Model
The model is given by the following equations:

d[ρ_s V_st + ρ_w V_wt]/dt = q_f − q_s

d[ρ_s V_st h_s + ρ_w V_wt h_w − P V_t + m_t C_p t_m]/dt = Q + q_f h_f − q_s h_s

d[ρ_s ᾱ_v V_r + ρ_w (1 − ᾱ_v) V_r]/dt = q_dc − q_r

d[ρ_s h_s ᾱ_v V_r + ρ_w h_w (1 − ᾱ_v) V_r − P V_r + m_r C_p t_s]/dt = Q + q_dc h_w − (α_r h_c + h_w) q_r

d(ρ_s V_sd)/dt = α_r q_r − q_sd − q_cd

q_cd = ((h_w − h_f)/h_c) q_f + (1/h_c) [ρ_s V_sd dh_s/dt + ρ_w V_wd dh_w/dt − (V_sd + V_wd) dP/dt + m_d C_p dt_s/dt]

l = (V_sd + V_wd)/A_d

V_t = V_st + V_wt   (3.25)

As can be seen, the model is a differential-algebraic system. Since most available simulation software requires state equations, the state model is also derived.
3.4.2.6 State Variable Model
Prior to generation of the database, a linear model is required to design the controllers; this was accomplished using the state-space methodology. The state variables can be selected in many different ways; a convenient choice is variables with a good physical interpretation that describe the storage of mass, energy and momentum. The variables used in this procedure are as follows:
I. State variables:
a. Drum pressure P
b. Total water volume of the system V_wt
c. Steam mass fraction α_r
d. Volume of steam in the drum V_sd
II. Manipulated inputs:
a. Heat flow rate to the risers Q
b. Feedwater flow rate to the drum q_f
c. Steam flow rate from the drum q_s
III. Measured outputs:
a. Total water volume of the system V_wt
b. Drum pressure P
3.4.2.6.1 Derivation of state equations
The pressure and water dynamics are obtained from the global mass and energy balance equations (3.5) and (3.6); combining these equations, the state-variable form given by the set of equations (3.8) is obtained. The riser dynamics, given by the mass and energy balance equations (3.16) and (3.17), are further simplified by eliminating q_r: equation (3.16) is multiplied by (h_w + α_r h_c) and combined with equation (3.17).
The resulting expression is:

h_c (1 − α_r) d(ρ_s ᾱ_v V_r)/dt − α_r h_c d[ρ_w (1 − ᾱ_v) V_r]/dt + ρ_s ᾱ_v V_r dh_s/dt + ρ_w (1 − ᾱ_v) V_r dh_w/dt − V_r dP/dt + m_r C_p dt_s/dt = Q − α_r h_c q_dc   (3.26)
If the state variables P and α_r are known, the riser flow rate q_r can be computed from equation (3.16):

q_r = q_dc − V_r (∂[(1 − ᾱ_v) ρ_w + ᾱ_v ρ_s]/∂P) dP/dt + V_r (ρ_w − ρ_s)(∂ᾱ_v/∂α_r) dα_r/dt   (3.27)
The drum dynamics are captured by the mass balance equation (3.21). Substituting the expressions for q_r, q_sd and q_cd into equation (3.21) gives, after simplification:

ρ_s dV_sd/dt + V_sd dρ_s/dt + (1/h_c) [ρ_s V_sd dh_s/dt + ρ_w V_wd dh_w/dt − (V_sd + V_wd) dP/dt + m_d C_p dt_s/dt] + α_r (1 + β) V_r d[(1 − ᾱ_v) ρ_w + ᾱ_v ρ_s]/dt = (ρ_s/T_d)(V_sd⁰ − V_sd) + ((h_f − h_w)/h_c) q_f   (3.28)
The state equations are written as:

e_11 dV_wt/dt + e_12 dP/dt = q_f − q_s

e_21 dV_wt/dt + e_22 dP/dt = Q + q_f h_f − q_s h_s

e_32 dP/dt + e_33 dα_r/dt = Q − α_r h_c q_dc

e_42 dP/dt + e_43 dα_r/dt + e_44 dV_sd/dt = (ρ_s/T_d)(V_sd⁰ − V_sd) + ((h_f − h_w)/h_c) q_f   (3.29)
where

e_11 = ρ_w − ρ_s

e_12 = V_st ∂ρ_s/∂P + V_wt ∂ρ_w/∂P

e_21 = ρ_w h_w − ρ_s h_s

e_22 = V_st (h_s ∂ρ_s/∂P + ρ_s ∂h_s/∂P) + V_wt (h_w ∂ρ_w/∂P + ρ_w ∂h_w/∂P) − V_t + m_t C_p ∂t_s/∂P

e_32 = (ρ_w ∂h_w/∂P − α_r h_c ∂ρ_w/∂P)(1 − ᾱ_v) V_r + ((1 − α_r) h_c ∂ρ_s/∂P + ρ_s ∂h_s/∂P) ᾱ_v V_r + (ρ_s + (ρ_w − ρ_s) α_r) h_c V_r ∂ᾱ_v/∂P − V_r + m_r C_p ∂t_s/∂P

e_33 = ((1 − α_r) ρ_s + α_r ρ_w) h_c V_r ∂ᾱ_v/∂α_r

e_42 = V_sd ∂ρ_s/∂P + (1/h_c) [ρ_s V_sd ∂h_s/∂P + ρ_w V_wd ∂h_w/∂P − V_sd − V_wd + m_d C_p ∂t_s/∂P] + α_r (1 + β) V_r [ᾱ_v ∂ρ_s/∂P + (1 − ᾱ_v) ∂ρ_w/∂P + (ρ_s − ρ_w) ∂ᾱ_v/∂P]

e_43 = α_r (1 + β)(ρ_s − ρ_w) V_r ∂ᾱ_v/∂α_r

e_44 = ρ_s
Note that the state-space model obtained has an interesting lower-triangular structure, in which the state variables can be grouped as (((V_wt, P), α_r), V_sd); the variables inside each parenthesis can be computed independently. The model is thus a nest of second-, third- and fourth-order models: the second-order model describes the drum pressure and the total water volume in the system; the third-order model also captures the steam dynamics in the risers; and the fourth-order model additionally describes the accumulation of steam below the water surface in the drum.
3.4.2.6.2 Equilibrium values
The steady-state solution of the state model (3.29) is given by:

q_f = q_s

Q = q_s h_s − q_f h_f

Q = q_dc α_r h_c

V_sd = V_sd⁰ − (T_d (h_w − h_f)/(ρ_s h_c)) q_f   (3.30)

where q_dc is given by

q_dc = sqrt( 2 ρ_w A_dc (ρ_w − ρ_s) g ᾱ_v V_r / k )

A convenient way to find the initial values is to first specify the steam flow rate q_s and the steam pressure P. The feedwater flow rate q_f and the input power Q are given by the first two equations of (3.30), and the steam volume in the drum by the last equation of (3.30). The steam quality α_r is obtained by solving the nonlinear equations:

Q = α_r h_c sqrt( 2 ρ_w A_dc (ρ_w − ρ_s) g ᾱ_v V_r / k )

ᾱ_v = (ρ_w/(ρ_w − ρ_s)) [1 − (ρ_s/((ρ_w − ρ_s) α_r)) ln(1 + ((ρ_w − ρ_s)/ρ_s) α_r)]   (3.31)
The equilibrium values of the state variables are:
1. Drum pressure P = 8.5 MPa
2. Total water volume of the system V_wt = 57.5 m³
3. Steam mass fraction α_r = 0.051
4. Volume of steam in the drum V_sd = 4.8 m³
The equilibrium values of the input variables are:
1. Heat flow rate to the risers Q = 80.40437506 MW
2. Feedwater flow rate to the drum q_f = 32.00147798 kg/s
3. Steam flow rate from the drum q_s = 32.00147798 kg/s
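The nonlinear system (3.31) has a single unknown α_r once the steam-table properties are fixed, so a bracketing root search suffices. The sketch below uses rough placeholder property and geometry values (the real numbers come from the steam tables and Table 3.1), so the resulting α_r is only of the same order as the reported equilibrium:

```python
import math

# Placeholder saturated-water properties near 8.5 MPa and assumed geometry
rho_w, rho_s = 722.0, 44.0        # kg/m^3
h_c = 1.0e6                       # J/kg, condensation enthalpy h_s - h_w
A_dc, V_r, k, g = 0.38, 37.0, 25.0, 9.81
Q = 80.4e6                        # W, input heat flow

def alpha_v_bar(a_r):
    """Average steam volume fraction in the risers, Eq. (3.14)."""
    d = rho_w - rho_s
    return (rho_w / d) * (1.0 - (rho_s / (d * a_r))
                          * math.log(1.0 + d * a_r / rho_s))

def residual(a_r):
    """alpha_r*h_c*q_dc(alpha_r) - Q: zero at the equilibrium quality."""
    q_dc = math.sqrt(2.0 * rho_w * A_dc * (rho_w - rho_s) * g
                     * alpha_v_bar(a_r) * V_r / k)
    return a_r * h_c * q_dc - Q

# Bisection on (0, 1); the residual changes sign over this bracket.
lo, hi = 1e-4, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
alpha_r = 0.5 * (lo + hi)
```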
3.4.2.6.3 Parameters
An interesting feature of the model is that it requires only a few parameters to characterize
the system. The parameter values that are considered are those from a Swedish plant. The
plant is an 80 MW unit. The model is characterized by the parameters given in Table 3.1.
3.5 BIOREACTOR PROCESS
Bioreactor control has been an active area of research for over a decade. For optimization of cell-mass growth and product formation, continuous operation of bioreactors is desirable rather than the traditional fed-batch operation. Several researchers, such as Edwards et al. (1972), Agrawal & Lim (1984) and Menawat & Balachander (1991), have studied the continuous bioreactor problem [14-16]. Kaushiaram et al. (2010) designed neural controllers for various configurations of the continuous bioreactor process [17].
3.5.1 Modeling
A (2×2) bioreactor process was taken up. The primary aim of a continuous bioreactor is to avoid the washout condition, which ceases the reaction; this may be achieved by controlling either the cell mass (X, g/L) or the substrate concentration (S, g/L) at the various operating points of the bioprocess. In order to maintain the reaction rate and product quality, both may be controlled, with the dilution rate (D = F/V, h⁻¹) and the feed substrate concentration (S_f, g/L) as manipulated variables; thus two degrees of freedom are available for control. Parameters such as the specific growth rate (µ), yield constant (Y) and saturation rate constants (k_m, k_1) of the kinetic models are either inadequately determined or vary over time during process operation, and hence are considered as disturbances of the process. The study is based on a single-biomass, single-substrate process. The following are the model equations based on first principles.
dx_1/dt = (µ − D) x_1   (3.32)

dx_2/dt = D (x_2f − x_2) − µ x_1 / Y   (3.33)

The reaction rate is given by

r_1 = µ x_1   (3.34)

where x_2f is the substrate concentration in the feed, and x_1 and x_2 are the biomass and substrate concentrations, respectively. The specific growth rate µ is a function of the substrate concentration, given by the substrate-inhibition growth rate expression:

µ = µ_max x_2 / (k_m + x_2 + k_1 x_2²)   (3.35)

The relation between the rate of generation of cells and the rate of consumption of nutrients is defined by the yield:

Y = r_1 / r_2   (3.36)

The dilution rate is D = F/V, and it is assumed that there is no biomass in the feed, i.e. x_1f = 0.
The inputs are the dilution rate and the feed substrate concentration, and the outputs are the concentrations of substrate and biomass (all values in deviation form). The values of the steady-state dilution rate (D_s), the feed substrate concentration (x_2fs), the steady-state values of the states at the stable and unstable operating points, and the various parameters are presented in Table 3.2. When both concentrations (biomass and substrate) are large, the process approaches an unstable equilibrium; under substrate-limiting conditions, the process is at a stable equilibrium. Direct synthesis controllers were designed to control the biomass and substrate concentrations in both the stable and unstable situations.
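An open-loop simulation of Eqs. (3.32)-(3.35) is straightforward; the sketch below uses typical literature values for the substrate-inhibition kinetics, assumed for illustration (the thesis takes its values from Table 3.2), and settles onto the substrate-limiting stable steady state discussed above:

```python
# Assumed, typical substrate-inhibition parameters (not Table 3.2 values)
mu_max, k_m, k_1 = 0.53, 0.12, 0.4545    # h^-1, g/L, L/g
Y, D, x2f = 0.4, 0.3, 4.0                # yield, dilution rate (h^-1), feed (g/L)

def derivs(x1, x2):
    mu = mu_max * x2 / (k_m + x2 + k_1 * x2 * x2)        # Eq. (3.35)
    return (mu - D) * x1, D * (x2f - x2) - mu * x1 / Y   # Eqs. (3.32)-(3.33)

# Explicit-Euler integration from a state near the substrate-limiting
# (stable) operating point
x1, x2, dt = 1.4, 0.3, 0.001
for _ in range(30000):          # 30 h of simulated time
    d1, d2 = derivs(x1, x2)
    x1 += dt * d1
    x2 += dt * d2
```

For these assumed parameters the stable equilibrium is x_1 = Y(x_2f − x_2) with µ(x_2) = D, i.e. x_1 ≈ 1.530 g/L and x_2 ≈ 0.175 g/L.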
3.6 CONTINUOUS STIRRED TANK REACTOR WITH COOLING
JACKET
A non-isothermal continuous stirred tank reactor with cooling-jacket dynamics was considered in order to generate a historical database covering distinct operating conditions, including stable and unstable operating points and faulty conditions. An exothermic first-order reaction A → B was used. An inlet stream is fed continuously to the reactor and an exit stream is removed continuously, so that the exit stream has the same temperature as the reactor contents. The CSTR is surrounded by a cooling jacket with inlet and outlet streams, assumed perfectly mixed and at a temperature lower than the reactor temperature. A dynamic model (equations 3.37-3.39) represents the overall mass, component and energy balances under the assumptions of perfect mixing and constant volume. The cooling-jacket temperature is assumed to be directly manipulated, so no energy balance is required for the cooling jacket. The parameters used for simulation of the CSTR process are presented in Table 3.3.

ρ dV/dt = F_in ρ_in − F ρ   (3.37)

V dC_A/dt = F_in C_AF − F C_A − rV   (3.38)

V ρ C_p dT/dt = F ρ C_p (T_f − T) + (−∆H) rV   (3.39)

where r = k_o exp(−E_a/RT) C_A and −∆H is the heat of reaction.
Direct synthesis controllers were designed to control the concentration and temperature, using the relation between the manipulated inputs and the controlled outputs obtained by linearizing the nonlinear model via the state-space method.
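Eqs. (3.38)-(3.39) can be simulated directly at constant volume (F_in = F); the sketch below uses representative parameter values assumed for illustration (the thesis takes its values from Table 3.3):

```python
import math

# Assumed, representative parameters (not Table 3.3 values)
F_over_V = 1.0                  # min^-1
CAf, Tf = 1.0, 350.0            # mol/L, K
k0, E_over_R = 7.2e10, 8750.0   # min^-1, K
dH_rhoCp = 50.0                 # (-dH)/(rho*Cp), K.L/mol (assumed)

def derivs(CA, T):
    r = k0 * math.exp(-E_over_R / T) * CA            # Arrhenius rate
    dCA = F_over_V * (CAf - CA) - r                  # Eq. (3.38) / V
    dT = F_over_V * (Tf - T) + dH_rhoCp * r          # Eq. (3.39) / (V rho Cp)
    return dCA, dT

# Explicit-Euler integration to the steady state
CA, T, dt = 1.0, 350.0, 0.001
for _ in range(50000):          # 50 min of simulated time
    d1, d2 = derivs(CA, T)
    CA += dt * d1
    T += dt * d2
```

At steady state the temperature rise above the feed equals the heat released per unit of converted reactant, T − T_f = (−∆H)/(ρC_p) · (C_AF − C_A), which for these assumed numbers gives a hot, high-conversion operating point.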
3.7 TENNESSEE EASTMAN CHALLENGE PROCESS
The Tennessee Eastman Challenge process (TE process) is a benchmark industrial process. Downs & Vogel (1993) [18] first identified the suitability of the TE process, a nonlinear dynamic process, as an active research area, with small modifications in component kinetics and process operating conditions. The TE process has wide applications such as fault detection, development of different control strategies and validation of the performance of supervisory controllers. It has been studied by several researchers, including Vinson (1995), Luyben (1996), Lyman & Georgakis (1994), Zheng (1998), Molina et al. (2010) and Zerkaoui et al. (2010) [19-24]. Ricker (1995; 1996) addressed the TE process to assess plant-wide control strategies with conventional and unconventional controllers; he studied the TE process to find the optimal steady states, which provide six modes of operation, and designed a decentralized control system using proportional-integral as well as model predictive controllers to maintain the process at the desired values according to product demand [25-26]. The basic process model has been adapted from his personal web page and studied to evaluate the proposed pattern matching techniques.
The TE process comprises an exothermic two-phase reactor, a condenser, a phase separator and a stripper column for the conversion of the reactants A, C, D and E to the products G and H. Figure 3.3 illustrates the complete flow diagram of the TE process. The gaseous reactants are fed to the reactor, which is equipped with an internal cooling-water supply to remove the heat of reaction; a nonvolatile catalyst is used. The products, along with unreacted feed, are sent to the condenser, which condenses all vapors into liquid; from there, the liquid is fed to the separator, and the inerts and byproduct are removed through the purge. The uncondensed vapors are recycled to the reactor. The bottom product of the separator is sent to the stripper, where the products are withdrawn. The heat and material balance data and the physical properties of the components are presented in Tables 3.4 and 3.5.
The nonlinear TE process model was obtained from the modeling equations of all the unit operations involved in the process. There are 52 state variables, of which 41 can be controlled using 12 manipulated variables. The process can be operated in six modes, and mode 1 was considered as the base case. The steady-state values of the process measurements, manipulated variables and disturbance variables are listed in Tables 3.6-3.9 (for the base case).
3.8 GENERATION OF DATABASE
For the drum-boiler process, the direct synthesis controller design procedure was used for the closed-loop control system. Both open-loop and closed-loop simulations were performed in order to generate 16 datasets. The four different operating conditions for the drum-boiler process are listed in Table 3.10. The drum-boiler process was simulated for 1000 seconds with a sampling time of 2 seconds. Four distinct operating conditions were created by giving impulse, step and sinusoidal changes in the manipulated inputs in open loop as well as closed loop.
For the bioreactor process, open-loop and closed-loop operation was considered in order to generate the database, including faults, using distinct operating conditions at the stable and unstable operating points (the faulty operating condition was generated by varying the controller tuning parameter λ). The bioreactor was simulated for one hour, with data sampled at an interval of 6 seconds, for the different operating conditions. The total database contains 26 datasets spanning 4 different operating conditions, each dataset containing 600 observations of the two outputs, as presented in Table 3.11.
For the jacketed CSTR process, open-loop and closed-loop simulations were likewise performed in order to generate the database over distinct operating conditions; the various operating conditions and their parameters are presented in Table 3.12. The CSTR was simulated for one hour under the different operating conditions. The total database contains 56 datasets spanning 3 operating conditions, each dataset containing 600 observations of the two outputs.
For the TE process, the distinct operating modes are presented in Table 3.13. In this work, operating modes 1 and 3 were considered in order to generate the historical database. Thirteen datasets were created, pertaining to 13 operating conditions including faulty ones that could cause the plant to shut down. The actual values of the controlled variables in operating conditions 10 to 13, which lead to plant shutdown as specified by Downs & Vogel, are given in Table 3.14. Figures 3.4 to 3.7 illustrate the qualitative time responses of a few process measurements corresponding to the faulty operating conditions, obtained from closed-loop simulations of the decentralized TE process with proportional controllers operated in modes 1 and 3. The plant was run for 72 hours with a sampling time of 36 seconds for operating conditions FNM1 to FMD2 and FNM3 to FM32; in the event of faulty situations, however, the process operation terminated completely within one hour.
3.9 RESULTS & DISCUSSION
3.9.1 Drum Boiler Process
Sixteen datasets were generated by perturbing the process with a pseudo-random binary signal (PRBS) generator. The signal-to-noise ratio was maintained at 10.0. A negative step change in the feed water flow rate induced a continuous decrease in total water volume and a continuous increase in drum pressure, which represents an abnormal scenario. Figure 3.8 shows the response of the abnormal process. The combined similarity measures were capable of detecting the faulty operating condition as cluster 3, and the datasets pertaining to the various other operating conditions were also identified correctly. The clustering performance is presented in Table 3.15.
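The combined similarity measure used here pairs the PCA similarity factor of Krzanowski [6] with the distance similarity factor of Singhal and Seborg [9]. A minimal sketch, assuming an equal 0.5 weighting and k retained principal components (both are illustrative choices, not values taken from this work):

```python
import math
import numpy as np

def pca_similarity(X1, X2, k=2):
    # principal directions = right singular vectors of the mean-centred data
    V1 = np.linalg.svd(X1 - X1.mean(axis=0), full_matrices=False)[2][:k]
    V2 = np.linalg.svd(X2 - X2.mean(axis=0), full_matrices=False)[2][:k]
    M = V1 @ V2.T                    # k x k matrix of cosines between subspaces
    return np.trace(M @ M.T) / k     # average squared cosine of the angles

def distance_similarity(X1, X2):
    diff = X1.mean(axis=0) - X2.mean(axis=0)
    phi = math.sqrt(diff @ np.linalg.pinv(np.cov(X1, rowvar=False)) @ diff)
    # map the Mahalanobis distance to (0, 1] through the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(phi / math.sqrt(2.0))))

def combined_similarity(X1, X2, k=2, w=0.5):
    return w * pca_similarity(X1, X2, k) + (1.0 - w) * distance_similarity(X1, X2)
```

Two identical datasets score 1 on both factors; datasets with the same correlation structure but shifted means keep a high PCA factor while the distance factor drops.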
The proposed moving window based pattern matching technique perfectly identified all the snapshot data, including normal and faulty data, matching each with the corresponding process data present in the historical database. The technique was tested in a sample-wise as well as a dataset-wise manner. The similarity factors obtained in dataset-wise pattern matching are shown in Table 3.16. The overall performance of the moving window based pattern matching technique is the same for both cases and is shown in Table 3.17. The computation time required in the dataset-wise manner was less than that in the sample-wise manner. Pattern matching accuracy, pattern matching efficiency and algorithm efficiency were each determined to be 100 %.
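The moving-window search itself can be sketched as follows; `sim` stands for any dataset similarity measure (e.g., the combined similarity factor above), the default step reproduces the dataset-wise variant while `step=1` gives the sample-wise variant, and returning only the best window per historical dataset is an illustrative simplification of the candidate-pool bookkeeping:

```python
import numpy as np

def moving_window_match(snapshot, history, sim, step=None):
    """Slide a window the size of the snapshot through every historical
    dataset and keep the most similar window from each; the historical
    dataset holding the overall best window is the match."""
    w = len(snapshot)
    step = step or w   # dataset-wise jumps by default; step=1 is sample-wise
    scores = []
    for ref in history:
        best = max(sim(snapshot, ref[i:i + w])
                   for i in range(0, len(ref) - w + 1, step))
        scores.append(best)
    return int(np.argmax(scores)), scores
```

The sample-wise variant visits every window position and is therefore slower, matching the computation-time observation above.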
3.8.2 Bioreactor process
Table 3.11 presents the 26 datasets, which were generated for various operating conditions by varying the parameters k_m, k_1 and λ. λ is the tuning parameter of the direct synthesis controller, k_m is a parameter of both the Monod and substrate-inhibition kinetics, and k_1 is a parameter of the substrate-inhibition kinetics only. Four optimum clusters were obtained using the similarity-based modified K-means algorithm. The faulty operating condition was well captured by cluster 4. The derived cluster purity and efficiency were both 100 %, as presented in Table 3.18.
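A sketch of a similarity-based K-means of the kind referred to above, under the assumption that each cluster centre is a representative (medoid) dataset and dissimilarity is one minus similarity; the medoid update is an illustrative simplification, not necessarily the thesis's exact algorithm:

```python
import numpy as np

def kmeans_by_similarity(datasets, k, sim, n_iter=20, seed=0):
    """Cluster whole datasets using only a pairwise similarity function:
    assignment maximises similarity to a centre, and each centre is updated
    to the member dataset most similar to the rest of its cluster."""
    rng = np.random.default_rng(seed)
    centres = [int(i) for i in rng.choice(len(datasets), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: each dataset joins its most similar centre
        labels = [max(range(k), key=lambda j: sim(d, datasets[centres[j]]))
                  for d in datasets]
        # update step: new centre = member most similar to its whole cluster
        new_centres = []
        for j in range(k):
            members = [i for i, lab in enumerate(labels) if lab == j]
            if not members:                 # empty cluster: keep the old centre
                new_centres.append(centres[j])
                continue
            new_centres.append(max(
                members,
                key=lambda i: sum(sim(datasets[i], datasets[m]) for m in members)))
        if new_centres == centres:          # converged
            break
        centres = new_centres
    return labels, centres
```

Cluster purity and efficiency can then be computed by cross-tabulating the labels against the known operating conditions, as in Tables 3.15, 3.18 and 3.21.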
Pattern matching was done using the moving window in a sample-wise as well as a dataset-wise manner. The 26 datasets pertaining to the four operating conditions were taken as the historical database, and 4 snapshot datasets were considered. The similarity factors obtained for dataset-wise pattern matching are shown in Table 3.19. The overall performance of the moving window based pattern matching technique is the same for both cases, as shown in Table 3.20. Pool accuracy, pattern matching efficiency and algorithm efficiency were each determined to be 100 %.
3.8.3 Jacketed CSTR Process
By varying C_AF, T and λ, 56 datasets pertaining to three operating conditions were created. The K-means clustering algorithm using combined similarity factors was repeated 56 times, and the entire database was grouped into 3 clusters. Table 3.21 presents the clustering performance; the resulting clustering efficiency and purity are 100 % each. Figure 3.9 presents the response of the CSTR process under the faulty operating condition.
Pattern matching was done using the moving window in a sample-wise as well as a dataset-wise manner; 3 snapshot datasets were considered. The similarity factors obtained for dataset-wise pattern matching are shown in Table 3.22. The overall performance of the moving window based pattern matching technique is the same for both cases, as shown in Table 3.23. Pool accuracy, pattern matching efficiency and algorithm efficiency were each determined to be 100 %.
3.8.4 Tennessee Eastman Process
The proposed similarity based pattern matching technique was successfully applied to the TE process with its 13 operating conditions. The combined similarity factors used for matching were greater than 0.96. The similarity factors obtained for dataset-wise pattern matching are shown in Table 3.24, where S represents the snapshot dataset and R the designed historic dataset. The overall performance of the moving window based pattern matching technique is the same for both cases, as shown in Table 3.25.
3.9 CONCLUSION
A sample-wise moving window based pattern matching technique was developed with a view to process monitoring. The proposed approach successfully located arbitrarily chosen operating conditions of the current period of interest in the historical databases of a Drumboiler, a continuous bioreactor, a jacketed CSTR and the Tennessee Eastman process. The PCA- and distance-based combined similarity factors provided an effective way of pattern matching in the multivariate time series databases of the processes considered. With the other proposed technique, the time series data pertaining to various operating conditions, including abnormal ones, were discriminated/classified efficiently using a similarity-based modified K-means clustering algorithm. Efficient modeling and simulation of the processes taken up were the key factors behind the generation of the design databases required for the successful implementation of the proposed monitoring techniques. The present work demanded an extensive modeling and simulation effort for the benchmark processes taken up, including the prestigious Tennessee Eastman challenge process and an industrial-scale Drumboiler process. The developed machine learning algorithms, with their encouraging performance, are of considerable significance in the current perspective of process monitoring.
Table 3.1 Values of the Drumboiler model parameters
S. No.  Parameter name                                Notation  Value with units
1       Riser volume                                  V_r       37 m^3
2       Downcomer volume                              V_dc      11 m^3
3       Total volume                                  V_t       88 m^3
4       Drum area at normal operating level           A_d       20 m^2
5       Downcomer flow area                           A_dc      0.355 m^2
6       Riser metal mass                              m_r       160e3 kg
7       Total metal mass                              m_t       300e3 kg
8       Drum metal mass                               m_d       100e3 kg
9       Friction coefficient in downcomer-riser loop  k         25
10      Empirical q_sd coefficient                    β         0.3
11      Residence time of steam in drum               τ_d       12 sec
12      Bubble volume coefficient                     V_sd0     10 m^3
13      Acceleration due to gravity                   g         9.81 m/sec^2
14      Drum volume                                   V_d       40 m^3
Table 3.2 Bioreactor process parameters
Parameter                             Value
μ_max                                 0.53 h^-1
k_m                                   0.12 g/L
k_1                                   0.4545 L/g
Y                                     0.4
x_2fs                                 4.0 g/L
D_s                                   0.3 h^-1
x_1s (at stable operating point)      1.5302 g/L
x_2s (at stable operating point)      0.1746 g/L
x_1s (at unstable operating point)    0.995103 g/L
x_2s (at unstable operating point)    1.512243 g/L
Table 3.3 CSTR parameters used in CSTR simulation
Parameter                      Value
F/V, hr^-1                     1
k_0, hr^-1                     9703*3600
(-ΔH), kcal/kgmol              5960
E, kcal/kgmol                  11843
ρC_p, kcal/(m^3 °C)            500
T_f, °C                        25
C_AF, kgmol/m^3                10
UA/V, kcal/(m^3 °C hr)         150
T_j, °C                        25
Table 3.4 Heat and material balance data for TE process.

Process stream data
Stream            No.  Molar flow  Mass flow  Temp.  Mole fractions
                       (kgmol/h)   (kg/h)     (°C)   A        B        C        D        E        F        G        H
A feed             1     11.2         22.4    45     0.99990  0.00010  0        0        0        0        0        0
D feed             2    114.2       3664.0    45     0        0.00010  0        0.99990  0        0        0        0
E feed             3     98.0       4509.3    45     0        0        0        0        0.99990  0.00010  0        0
C feed             4    417.5       6419.4    45     0.48500  0.00500  0.51000  0        0        0        0        0
Strp. ovhd.        5    465.7       8979.6    65.7   0.43263  0.00440  0.45264  0.00116  0.07256  0.00885  0.01964  0.00808
Reactor feed       6   1890.8      48015.4    86.1   0.32188  0.08893  0.26383  0.06882  0.18776  0.01657  0.03561  0.01659
Reactor product    7   1476.0      48015.4   120.4   0.27164  0.11393  0.19763  0.01075  0.17722  0.02159  0.12302  0.08423
Recycle            8   1201.5      30840     102.9   0.32958  0.13823  0.23978  0.01257  0.18579  0.02263  0.04844  0.02299
Purge              9     15.1        386.5    80.1   0.32958  0.13823  0.23978  0.01257  0.18579  0.02263  0.04844  0.02299
Separator liquid  10    259.5      16788.9    80.1   0        0        0        0.00222  0.13704  0.01669  0.47269  0.37136
Product           11    211.3      14288.6    65.7   0.00479  0.00009  0.01008  0.00018  0.00836  0.00099  0.53724  0.43828

Unit operation data
Unit       Temperature (°C)  Pressure (kPa gauge)  Heat duty (kW)  Liquid volume (m^3)
Reactor    120.4             2705                  6468.7          16.55
Separator   80.1             2633.7                -                4.88
Condenser   -                -                     2140.6          -
Stripper    65.7             3102.2                1430             4.43

Utilities
Reactor cooling water flow (m^3/h)     93.37
Condenser cooling water flow (m^3/h)   49.37
Stripper steam flow (kg/h)             230.31
Table 3.5 Component physical properties (at 100 °C) in TE process
Component  Molecular  Liquid density  Liquid heat capacity  Vapour heat capacity  Heat of vaporization
           weight     (kg/m^3)        (kJ kg^-1 °C^-1)      (kJ kg^-1 °C^-1)      (kJ/kg)
A           2          -               -                    14.6                   -
B          25.4        -               -                     2.04                  -
C          28          -               -                     1.05                  -
D          32         299              7.66                  1.85                 202
E          46         365              4.17                  1.87                 372
F          48         328              4.45                  2.02                 372
G          62         612              2.55                  0.712                523
H          76         617              2.45                  0.628                486

Vapor pressure (Antoine equation):
P = exp(A + B/(T + C)), P = pressure (Pa), T = temperature (°C)
Component  Constant A  Constant B  Constant C
D          20.81       -1444.0     259
E          21.24       -2114.0     266
F          21.24       -2144.0     266
G          21.32       -2748.0     233
H          22.10       -3318.0     250
# Vapor pressure parameters are not listed for components A, B and C because they are effectively noncondensible.
Table 3.6 Process manipulated variables in TE process
Variable name                             Variable number  Base case value (%)  Units
D feed flow (stream 2)                    XMV (1)          63.053               kg/h
E feed flow (stream 3)                    XMV (2)          53.980               kg/h
A feed flow (stream 1)                    XMV (3)          24.644               kscmh
A and C feed flow (stream 4)              XMV (4)          61.302               kscmh
Compressor recycle valve                  XMV (5)          22.210               %
Purge valve (stream 9)                    XMV (6)          40.064               %
Separator pot liquid flow (stream 10)     XMV (7)          38.100               m^3/h
Stripper liquid product flow (stream 11)  XMV (8)          46.534               m^3/h
Stripper steam valve                      XMV (9)          47.446               %
Reactor cooling water flow                XMV (10)         41.106               m^3/h
Condenser cooling water flow              XMV (11)         18.114               m^3/h
Agitator speed                            XMV (12)         50.000               rpm
Table 3.7 Process disturbances in TE process
Variable number  Process variable                                           Type
IDV (1)          A/C feed ratio, B composition constant (stream 4)          Step
IDV (2)          B composition, A/C ratio constant (stream 4)               Step
IDV (3)          D feed temperature (stream 2)                              Step
IDV (4)          Reactor cooling water inlet temperature                    Step
IDV (5)          Condenser cooling water inlet temperature                  Step
IDV (6)          A feed loss (stream 1)                                     Step
IDV (7)          C header pressure loss - reduced availability (stream 4)   Step
IDV (8)          A, B, C feed composition (stream 4)                        Random variation
IDV (9)          D feed temperature (stream 2)                              Random variation
IDV (10)         C feed temperature (stream 4)                              Random variation
IDV (11)         Reactor cooling water inlet temperature                    Random variation
IDV (12)         Condenser cooling water inlet temperature                  Random variation
IDV (13)         Reaction kinetics                                          Slow drift
IDV (14)         Reactor cooling water valve                                Sticking
IDV (15)         Condenser cooling water valve                              Sticking
IDV (16)         Unknown                                                    Unknown
IDV (17)         Unknown                                                    Unknown
IDV (18)         Unknown                                                    Unknown
IDV (19)         Unknown                                                    Unknown
IDV (20)         Unknown                                                    Unknown
Table 3.8 Continuous process measurements in TE process
Variable name                               Variable number  Base case value  Units
A feed (stream 1)                           XMEAS (1)        0.25052          kscmh
D feed (stream 2)                           XMEAS (2)        3664.0           kg/h
E feed (stream 3)                           XMEAS (3)        4509.3           kg/h
A and C feed (stream 4)                     XMEAS (4)        9.3477           kscmh
Recycle flow (stream 8)                     XMEAS (5)        26.902           kscmh
Reactor feed rate (stream 6)                XMEAS (6)        42.339           kscmh
Reactor pressure                            XMEAS (7)        2705.0           kPa gauge
Reactor level                               XMEAS (8)        75.000           %
Reactor temperature                         XMEAS (9)        120.40           °C
Purge rate (stream 9)                       XMEAS (10)       0.33712          kscmh
Product separator temperature               XMEAS (11)       80.109           °C
Product separator level                     XMEAS (12)       50.000           %
Product separator pressure                  XMEAS (13)       2633.7           kPa gauge
Product separator underflow (stream 10)     XMEAS (14)       25.160           m^3/h
Stripper level                              XMEAS (15)       50.000           %
Stripper pressure                           XMEAS (16)       3102.2           kPa gauge
Stripper underflow (stream 11)              XMEAS (17)       22.949           m^3/h
Stripper temperature                        XMEAS (18)       65.731           °C
Stripper steam flow                         XMEAS (19)       230.31           kg/h
Compressor work                             XMEAS (20)       341.43           kW
Reactor cooling water outlet temperature    XMEAS (21)       94.599           °C
Separator cooling water outlet temperature  XMEAS (22)       77.297           °C
Table 3.9 Sampled process measurements in TE process

Reactor feed analysis (stream 6): sampling frequency = 0.1 h, dead time = 0.1 h
Component  Variable number  Base case value  Units
A          XMEAS (23)       32.188           mol%
B          XMEAS (24)        8.8933          mol%
C          XMEAS (25)       26.383           mol%
D          XMEAS (26)        6.8820          mol%
E          XMEAS (27)       18.776           mol%
F          XMEAS (28)        1.6567          mol%

Purge gas analysis (stream 9): sampling frequency = 0.1 h, dead time = 0.1 h
Component  Variable number  Base case value  Units
A          XMEAS (29)       32.958           mol%
B          XMEAS (30)       13.823           mol%
C          XMEAS (31)       23.978           mol%
D          XMEAS (32)        1.2565          mol%
E          XMEAS (33)       18.579           mol%
F          XMEAS (34)        2.2633          mol%
G          XMEAS (35)        4.8436          mol%
H          XMEAS (36)        2.2986          mol%

Product analysis (stream 11): sampling frequency = 0.25 h, dead time = 0.25 h
Component  Variable number  Base case value  Units
D          XMEAS (37)        0.01787         mol%
E          XMEAS (38)        0.83570         mol%
F          XMEAS (39)        0.09858         mol%
G          XMEAS (40)       53.724           mol%
H          XMEAS (41)       43.828           mol%

# The analyzer sampling frequency is how often the analyzer takes a sample of the stream. The dead time is the time gap between sample collection and the completion of sample analysis. For an analyzer with a sampling frequency of 0.1 h and a dead time of 0.1 h, a new measurement is available every 0.1 h and the measurement is 0.1 h old.
Table 3.10 Various operating conditions for Drumboiler process.
Operating condition                                 Parameter range                          No. of datasets
1. Impulse change in q_f and q_s (open loop)        4 ≤ q_f ≤ 6; 2 ≤ q_s ≤ 3                 2
2. Simultaneous step changes in three
   manipulated inputs                               4 ≤ q_f ≤ 10; 2 ≤ q_s ≤ 5; 15 ≤ Q ≤ 25   7
3. Negative change in feed water flow (fault)       4 ≤ q_f ≤ 6                              4
4. Simultaneous sinusoidal change in three inputs   High, medium and low frequencies         3
Table 3.11 Database corresponding to various operating conditions for BioChemical Reactor process
Op. Cond.  Parameter range            No. of datasets
1          0.04545 ≤ k_1 ≤ 0.4545     10
2          0.015 ≤ k_m ≤ 0.12          8
3          0.2208 ≤ λ ≤ 1.1043         5
4          2.9446 ≤ λ ≤ 8.83838        3
Table 3.12 Generated database for CSTR process at various operating modes
Operating  Description                                        Parameter range                               No. of
Cond.                                                                                                       datasets
1          Stable operating point                             1. C_AF = 0.5; 295 ≤ T ≤ 415                  24
           (C_A = 2.359, T = 368.1)                           2. C_AF = 9.5; 295 ≤ T ≤ 415
2          Unstable operating point                           1. C_AF = 0.5; 295 ≤ T ≤ 415                  22
           (C_A = 5.518, T = 339.1)                           2. C_AF = 9.5; 295 ≤ T ≤ 415
3          Tuning parameters of controllers                   1. λ = {-5, -10, -15, -20, -25, -30}          10
                                                                 (faulty operating cond.)
                                                              2. λ = {5, 10, 15, 20}
Table 3.13 Operating conditions for TE process
Operating condition  ID      Description                                                  Number of datasets
Mode 1                                                                                    8
                     FNM1    Normal operation
                     FM11    15% decrease in production set point
                     FM12    15% decrease in reactor pressure
                     FM13    10% decrease in %G set point
                     FMD11   Step change in A/C feed ratio, B composition constant
                     FMD12   Step change in B composition, A/C ratio constant
                     FMD13   Simultaneous step changes in IDV (1) to IDV (7)
                     FMD14   Simultaneous step changes in IDV (1) to IDV (7) and random
                             variation in IDV (8) to IDV (12)
Mode 3                                                                                    5
                     FMN3    Normal operation
                     FM31    Simultaneous 10% decrease in reactor pressure and level
                     FM32    Simultaneous 10% decrease in reactor pressure, reactor
                             level, stripper level and %G in product
                     FMD31   Simultaneous step changes in IDV (1) to IDV (8)
                     FMD32   Simultaneous step changes in IDV (1) to IDV (7) and random
                             variation in IDV (8) to IDV (12)
Table 3.14 Constraints in TE process
                          Normal operating limits                Shut down limits
Process variable          Low limit        High limit            Low limit   High limit
Reactor pressure          none             2895 kPa              none        3000 kPa
Reactor level             50% (11.8 m^3)   100% (21.3 m^3)       2.0 m^3     24.0 m^3
Reactor temperature       none             150 °C                none        175 °C
Product separator level   30% (3.3 m^3)    100% (9.0 m^3)        1.0 m^3     12.0 m^3
Stripper base level       30% (3.5 m^3)    100% (6.6 m^3)        1.0 m^3     8.0 m^3
Table 3.15 Combined similarity factor based clustering performance for Drumboiler process
Cluster no.  Np  P    Op. cond. 1  Op. cond. 2  Op. cond. 3  Op. cond. 4
1            2   100  1            0            0            0
2            5   100  0            2            0            0
3            4   100  0            0            3            0
4            3   100  0            0            0            4
Avg.         P = 100; η = 100 for each operating condition
Table 3.16 Similarity factors in dataset wise moving window based pattern matching implementation for Drumboiler process
Snap.\Ref.   Op. Cond. 1          Op. Cond. 2        Op. Cond. 3        Op. Cond. 4
Op. Cond. 1  1.00000000000000     0.107756743367209  0.368641758177836  0.334190107766254
Op. Cond. 2  0.107756743367209    1                  0.557074958601167  0.563432838627961
Op. Cond. 3  0.368641758177836    0.557074958601167  1                  0.303751516080314
Op. Cond. 4  0.00423102240025977  0.562974025107154  0.303751516079149  1
Table 3.17 Moving window based pattern matching performance for Drumboiler process
Snapshot      Np, Total number of   N1, Correctly  N2, Incorrectly  P, Pattern  η, Pattern   ξ, Pattern matching
Op. Cond.     data windows similar  identified     identified       accuracy    efficiency   algorithm efficiency
              to snapshot
Op. Cond. 1   1                     1              0                100         100          100
Op. Cond. 2   1                     1              0                100         100          100
Op. Cond. 3   1                     1              0                100         100          100
Op. Cond. 4   1                     1              0                100         100          100

Table 3.18 Combined similarity factor based clustering performance for BioChemical Reactor process
Cluster No.  Np  P    Op. Cond. 1  Op. Cond. 2  Op. Cond. 3  Op. Cond. 4
1            10  100  10           0            0            0
2             8  100   0           8            0            0
3             5  100   0           0            5            0
4             3  100   0           0            0            3 (faulty cond.)
Avg.         P = 100; η = 100 for each operating condition
Table 3.19 Similarity factors in dataset wise moving window based pattern matching implementation for Bioreactor process
Snap.\Ref.   Op. Cond. 1         Op. Cond. 2        Op. Cond. 3        Op. Cond. 4
Op. Cond. 1  1                   0.589302779704114  0.667232710866021  0.0548711784280415
Op. Cond. 2  0.589302779704114   1.00000000000000   0.993496099076534  0.629088229572527
Op. Cond. 3  0.667232710866021   0.993496099076534  1.00000000000000   0.550365271093825
Op. Cond. 4  0.0548711784280415  0.629088229572527  0.550365271093825  1
Table 3.20 Moving window based pattern matching performance for Bioreactor process
Snapshot      Np, Total number of   N1, Correctly  N2, Incorrectly  P, Pattern  η, Pattern   ξ, Pattern matching
Op. Cond.     data windows similar  identified     identified       accuracy    efficiency   algorithm efficiency
              to snapshot
Op. Cond. 1   1                     1              0                100         100          100
Op. Cond. 2   1                     1              0                100         100          100
Op. Cond. 3   1                     1              0                100         100          100
Op. Cond. 4   1                     1              0                100         100          100

Table 3.21 Combined similarity factor based clustering performance for Jacketed CSTR process.
Cluster no.  Np  P    Op. Cond. 1  Op. Cond. 2  Op. Cond. 3
1            24  100  24           0            0
2            22  100   0           22           0
3            10  100   0           0            10
Avg.         P = 100; η = 100 for each operating condition
Table 3.22 Similarity factors in dataset wise moving window based pattern matching implementation for Jacketed CSTR process
Snap.\Ref.   Op. Cond. 1        Op. Cond. 2        Op. Cond. 3
Op. Cond. 1  1                  0.670000000000000  0.644230769230766
Op. Cond. 2  0.670000000000000  1                  0.644230769230766
Op. Cond. 3  0.644230769230766  0.644230769230766  1.00000000000000
Table 3.23 Moving window based pattern matching performance in Jacketed CSTR process
Snapshot      Np, Total number of   N1, Correctly  N2, Incorrectly  P, Pattern  η, Pattern   ξ, Pattern matching
Op. Cond.     data windows similar  identified     identified       accuracy    efficiency   algorithm efficiency
              to snapshot
Op. Cond. 1   1                     1              0                100         100          100
Op. Cond. 2   1                     1              0                100         100          100
Op. Cond. 3   1                     1              0                100         100          100

Table 3.24 Combined similarity factors in dataset wise moving window based pattern matching implementation for TE process (S: snapshot dataset, R: reference dataset; values rounded to four decimal places)
S/R  S1      S2      S3      S4      S5      S6      S7      S8      S9      S10     S11     S12     S13
R1   1.0000  0.3790  0.6522  0.3347  0.5066  0.4638  0.2243  0.3596  0.2080  0.4742  0.4672  0.3368  0.3360
R2   0.3790  1.0000  0.4526  0.1017  0.4979  0.4695  0.1582  0.0159  0.0409  0.3453  0.3827  0.1618  0.1625
R3   0.6522  0.4526  1.0000  0.3382  0.4627  0.4234  0.2332  0.3744  0.2106  0.3692  0.3864  0.2697  0.2763
R4   0.3347  0.1017  0.3382  1.0000  0.5054  0.5030  0.6299  0.1140  0.6631  0.5610  0.5598  0.6238  0.6176
R5   0.5066  0.4979  0.4627  0.5054  1.0000  0.6602  0.5045  0.2629  0.3624  0.6410  0.6492  0.4095  0.4034
R6   0.4638  0.4695  0.4234  0.5030  0.6602  1.0000  0.5468  0.2744  0.3916  0.6589  0.6640  0.4142  0.4077
R7   0.2243  0.1582  0.2332  0.6299  0.5045  0.5468  1.0000  0.0143  0.6066  0.6429  0.6241  0.5886  0.5759
R8   0.3596  0.0159  0.3744  0.1140  0.2629  0.2744  0.0143  1.0000  0.0407  0.4514  0.3870  0.2574  0.2538
R9   0.2080  0.0409  0.2106  0.6631  0.3624  0.3916  0.6066  0.0407  1.0000  0.5235  0.4954  0.6532  0.6459
R10  0.4742  0.3453  0.3692  0.5610  0.6410  0.6589  0.6429  0.4514  0.5235  1.0000  0.6675  0.4482  0.4492
R11  0.4672  0.3827  0.3864  0.5598  0.6492  0.6640  0.6241  0.3870  0.4954  0.6675  1.0000  0.4471  0.4461
R12  0.3368  0.1618  0.2697  0.6238  0.4095  0.4142  0.5886  0.2574  0.6532  0.4482  0.4471  1.0000  0.6694
R13  0.3360  0.1625  0.2763  0.6176  0.4034  0.4077  0.5759  0.2538  0.6459  0.4492  0.4461  0.6694  1.0000
Table 3.25 Pattern matching performance for TE process
Snapshot   Size of the          N1  N2  Pattern matching  Pool accuracy
Op. Cond.  candidate pool, N_P          efficiency
FNM1 1 1 0 100 100
FM11 1 1 0 100 100
FM12 1 1 0 100 100
FM13 1 1 0 100 100
FMD11 1 1 0 100 100
FMD12 1 1 0 100 100
FMD13 1 1 0 100 100
FMD14 1 1 0 100 100
FMN3 1 1 0 100 100
FM31 1 1 0 100 100
FM32 1 1 0 100 100
FMD31 1 1 0 100 100
FMD32 1 1 0 100 100
Fig. 3.3 Tennessee Eastman Plant
Fig. 3.4 (a) Response of reactor pressure (kPa gauge) at FMD13. (b) Response of stripper level (%) at FMD13.
Fig. 3.5 (a) Response of reactor pressure (kPa gauge) at FMD14. (b) Response of reactor level (%) at FMD14.
Fig. 3.6 (a) Response of reactor pressure (kPa gauge) at FMD31. (b) Response of stripper level (%) at FMD31.
Fig. 3.7 (a) Response of reactor pressure (kPa gauge) at FMD32. (b) Response of stripper level (%) at FMD32.
Fig. 3.8 Response of Drumboiler process for a faulty operating condition.
Fig. 3.9 Response of the Jacketed CSTR process under a faulty operating condition (C_A, kgmol/m^3, and T, °C, versus time, sec).
REFERENCES
1. Kourti, T., MacGregor, J. F., "Multivariate SPC methods for process and product monitoring." J. Quality Tech. 1996, 28: 409–428.
2. Martin, E. B., Morris, A. J., "An overview of multivariate statistical process control in continuous and batch performance monitoring." T. I. Meas. Control. 1996, 18: 51–60.
3. Rissanen, J., "Modeling by shortest data description." Automatica. 1978, 14: 465–471.
4. Smyth, P., "Clustering using Monte Carlo cross-validation." In Proc. 2nd Intl. Conf. Knowl. Discovery & Data Mining (KDD-96), Portland, OR, 1996: 126–133.
5. Singhal, A., Seborg, D. E., "Clustering multivariate time series data." J. Chemometr. 2005, 19: 427–438.
6. Krzanowski, W. J., "Between-groups comparison of principal components." J. Amer. Stat. Assoc. 1979, 74: 703–707.
7. Johannesmeyer, M. C., Singhal, A., Seborg, D. E., "Pattern matching in historical data." AIChE J. September 2002, 48: 2022–2038.
8. Singhal, A., Seborg, D. E., "Pattern matching in multivariate time series databases using a moving window approach." Ind. Eng. Chem. Res. 2002, 41: 3822–3838.
9. Singhal, A., Seborg, D. E., "Matching patterns from historical data using PCA and distance similarity factors." Proceedings of the 2001 American Control Conference; IEEE: Piscataway, NJ, 2001: 1759–1764.
10. Pellegrinetti, G., Bentsman, J., "Nonlinear control oriented boiler modeling: a benchmark problem for controller design." IEEE T. Contr. Syst. T. January 1996, 4 (1).
11. Astrom, K. J., Bell, R. D., "Drum-boiler dynamics." Automatica. 2000, 36: 363–378.
12. Tan, W., Marquez, H. J., Chen, T., "Multivariable robust controller design for a boiler system." IEEE T. Contr. Syst. T. September 2002, 10 (5).
13. Jawahar, K., Pappa, N., "Self-tuning nonlinear controller." Proceedings of the 6th WSEAS Int. Conf. on Fuzzy Systems, Lisbon, Portugal, 2005: 118–126.
14. Edwards, V. H., Ko, R. C., Balogh, S. A., "Dynamics and control of continuous microbial propagators subject to substrate inhibition." Biotechnol. Bioeng. 1972, 14: 939–974.
15. Agrawal, P., Lim, H. C., "Analysis of various control schemes for continuous bioreactors." Adv. Biochem. Eng. Biot. 1984, 30: 61–90.
16. Menawat, A. S., Balachander, J., "Alternate control structures for chemostat." AIChE J. 1991, 37 (2): 302–306.
17. Kaushikram, K. S., Damarla, S. K., Kundu, M., "Design of neural controllers for various configurations of continuous bioreactor." International Conference on System Dynamics and Control (ICSDC), 2010.
18. Downs, J. J., Vogel, E. F., "A plant-wide industrial process control problem." Comput. Chem. Engr. 1993, 17 (3): 245–255.
19. Vinson, D. R., "Studies in plant-wide controllability using the Tennessee Eastman Challenge problem, the case for multivariable control." Proceedings of the American Control Conference, Seattle, Washington, 1995.
20. Luyben, W. L., "Simple regulatory control of the Eastman process." Ind. Eng. Chem. Res. 1996, 35: 3280–3289.
21. Lyman, P. R., Georgakis, C., "Plant-wide control of the Tennessee Eastman problem." Comput. Chem. Engr. 1995, 19 (3): 321–331.
22. Zheng, A., "Nonlinear model predictive control of the Tennessee Eastman process." Proceedings of the American Control Conference, Philadelphia, Pennsylvania, June 1998.
23. Molina, G. D., Zumoffen, D. A. R., Basualdo, M. S., "Plant-wide control strategy applied to the Tennessee Eastman process at two operating points." Comput. Chem. Engr. 2010.
24. Zerkaoui, S., Druaux, F., Leclercq, E., Lefebvre, D., "Indirect neural control for plant-wide systems: Application to the Tennessee Eastman Challenge process." Comput. Chem. Engr. 2010, 30: 232–243.
25. Ricker, N. L., "Optimal steady-state operation of the Tennessee Eastman Challenge process." Comput. Chem. Engr. 1995, 19 (9): 949–959.
26. Ricker, N. L., "Decentralized control of the Tennessee Eastman Challenge process." J. Proc. Cont. 1996, 6 (4): 205–221.
APPLICATION OF MULTIVARIATE STATISTICAL AND
NEURAL PROCESS CONTROL STRATEGIES
4.1 INTRODUCTION
Multiloop (decentralized) conventional control systems (especially PID controllers) are often used to control interacting multiple-input, multiple-output processes because they are easy to understand and require fewer parameters than more general multivariable controllers. Different tuning methods, such as the BLT detuning method, the sequential loop tuning method, the independent loop tuning method and the relay auto-tuning method, have been proposed for tuning multiloop conventional controllers. One of the earlier approaches to multivariable control was decoupling the process to reduce loop interactions. For multivariable processes, partial least squares (PLS) based controllers offer the opportunity to be designed as a series of SISO controllers (Qin and McAvoy, 1992, 1993) [1-2]. PLS based process identification and control have been chronicled by researchers such as Kaspar and Ray (1993) and Lakshminarayanan et al. (1997) [3-4]. To date, there is no literature reference for an NNPLS controller except the contribution by Damarla and Kundu (2011) [5]. In view of this, neural network based PLS (NNPLS) controllers have been proposed for controlling complex nonlinear processes. (2 × 2), (3 × 3) and (4 × 4) distillation processes were taken up to implement the proposed control strategy, followed by evaluation of their closed-loop performances. Control strategies using classical and neural controllers have also been incorporated in a plant-wide multistage juice evaporation process plant.
4.2 DEVELOPMENT OF PLS AND NNPLS CONTROLLERS
4.2.1 PLS Controller
Discrete input-output time series data (X, Y), either collected from the plant or generated by perturbing the benchmark processes with pseudo-random binary signals, can be used for the development of PLS controllers. For modeling dynamic processes, the input data matrix (X) is augmented either with lagged input variables (the finite impulse response, FIR, model) or with lagged input and output variables (the auto-regressive model with exogenous input, ARX). By combining PLS with an inner ARX model structure, dynamic processes can be modeled, which logically builds up the framework for PLS based process controllers.
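As a sketch of the augmentation just described, for a SISO record; the function name and the lag orders `na`, `nb` are illustrative:

```python
import numpy as np

def arx_regressors(u, y, na=2, nb=2):
    """Build the ARX-style augmented data matrix used for dynamic PLS
    identification. Each row is [y(t-1)..y(t-na), u(t-1)..u(t-nb)] and the
    target is y(t). Setting na=0 reduces this to the FIR form, which uses
    lagged inputs only."""
    n = len(y)
    start = max(na, nb)
    X, Y = [], []
    for t in range(start, n):
        row = [y[t - i] for i in range(1, na + 1)]   # lagged outputs
        row += [u[t - i] for i in range(1, nb + 1)]  # lagged inputs
        X.append(row)
        Y.append(y[t])
    return np.array(X), np.array(Y)
```

For a MIMO process the same construction is applied column-wise, and the resulting (X, Y) pair is what the PLS decomposition operates on.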
Identification of process dynamics in latent subspace has already been discussed in
Chapter 2 (section 2.6), which is supposed to be the key component for the successful
development of a PLS based controller. The very fortes about the PLS identification of a
multidimensional time series data is as follows:
• While deriving the outer relation among the scores ((ˠ & ˡ)of multidimensional
inputoutput data, the regression occurs at each of its individual dimension. The
residuals are passed to the successive steps and then the whole procedure is repeated
assuming the residuals (˗ & ˘) from the previous steps as the inputoutput data. in the
current step, Hence, multidimensional regression problems gets transformed in to a
series of onedimensional regression problems.
• It selects the most realistic part of the predictor variable which is outmost predictive
of response variable at each dimension
• The predictor and the response variables are pre and post compensated by loading
matrices ˜ & ˝., respectively
In view of this, the PLS identified process dynamics deserves to be more precise and
realistic than other time series data based models like ARX, ARMA. ARMAX, etc.
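The one-dimension-at-a-time regression and deflation described in the bullets above can be sketched in a few lines. This is a minimal NIPALS-style PLS sketch for illustration only (it is not the thesis code, and the function and variable names are assumptions):

```python
import numpy as np

def pls_nipals(X, Y, n_components, tol=1e-10, max_iter=500):
    """One-component-at-a-time PLS: at each dimension a single inner
    regression t -> u is fitted, then X and Y are deflated and the
    residuals (E, F) become the data for the next dimension."""
    X, Y = X.copy().astype(float), Y.copy().astype(float)
    T, U, P, Q, B = [], [], [], [], []
    for _ in range(n_components):
        u = Y[:, [0]]
        for _ in range(max_iter):
            w = X.T @ u; w /= np.linalg.norm(w)   # X weights
            t = X @ w                              # input score
            q = Y.T @ t; q /= np.linalg.norm(q)    # Y loading
            u_new = Y @ q                          # output score
            if np.linalg.norm(u_new - u) < tol:
                u = u_new; break
            u = u_new
        b = float(u.T @ t) / float(t.T @ t)        # inner 1-D regression
        p = X.T @ t / float(t.T @ t)               # X loading
        X -= t @ p.T                               # deflate: residual E
        Y -= b * t @ q.T                           # deflate: residual F
        T.append(t); U.append(u); P.append(p); Q.append(q); B.append(b)
    return np.hstack(T), np.hstack(U), np.hstack(P), np.hstack(Q), np.array(B)
```

The deflation step guarantees that successive input scores are orthogonal, which is what makes the multidimensional problem separate into one-dimensional regressions.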
A series of direct synthesis SISO controllers, designed on the basis of the dynamic models identified in the latent subspaces and embedded in the PLS framework, was used to control the process. This approach has the advantage that, instead of using a typical multivariable controller, independent SISO controllers can be designed based on the inner dynamic model Gp(z) identified at each dimension, with the error and control signals being pre and post compensated by the loading matrices P and Q, respectively. The PLS based control strategy is presented in Figure 4.1.
Fig. 4.1 Schematic of PLS based control
Gp(z) represents the identified process transfer function in the latent subspaces. The controller acts on the projected error eu, i.e., the actually measured output error post compensated by the matrices Sy^-1 and Q^-1, respectively. The score T is actually computed by the controller in the closed loop PLS framework. The T score then gets projected on the loading matrix P and transformed into real physical inputs through Sx, which drive the processes.
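One plausible arrangement of the signal flow just described can be sketched as follows. This is a sketch under stated assumptions: hypothetical SISO PI loops stand in for the direct synthesis controllers of the thesis, and Q^T is used for the projection in place of an inverse when Q is not square:

```python
import numpy as np

def pls_controller_step(e_phys, Sy, Q, P, Sx, pi_states, Kc, Ti, dt):
    """One sampling instant of the PLS control loop of Fig. 4.1: the
    measured output error is pre compensated (scaled by Sy^-1 and
    projected into the latent subspace via Q^T), acted on by
    independent SISO PI loops, and the resulting scores are post
    compensated back to physical inputs through P and Sx."""
    e_latent = Q.T @ np.linalg.solve(Sy, e_phys)  # pre-compensation
    t = np.empty_like(e_latent)
    for i, e in enumerate(e_latent):              # one SISO loop per latent dimension
        pi_states[i] += e * dt                    # integral state of loop i
        t[i] = Kc[i] * (e + pi_states[i] / Ti[i])
    return Sx @ (P @ t)                           # post-compensation to plant inputs
```

With identity compensators the latent-space controller acts directly on the physical error, which makes the role of Sy, Q, P and Sx easy to see in isolation.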
4.2.2 Development of Neural Network PLS Controller (NNPLS)
In PLS based process dynamics, the inner relationship between the X and Y scores, and hence the process dynamics in the latent subspace, may not be well identified by linear or quadratic relationships, especially for complex and nonlinear processes. In view of this, the present work proposed neural identification of process dynamics using the latent variables. MIMO processes could be controlled without any decouplers by a series of NN based SISO controllers, with the PLS loading matrices employed as pre and post compensators of the error and control signals, respectively. The inverse dynamics of the latent variable based process was identified as the direct inverse neural network controller (DINN) using the
historical database of latent variables. In the regulator mode, the latent variable based
disturbance process was identified using NN.
Several strengths of the NNPLS controllers are as follows:
• A multivariable controller can be realized as a series of SISO NN controllers within a PLS framework.
• Because of the diagonal structure of the dynamic part of the PLS model, input-output pairings are automatic (y1-u1, y2-u2, y3-u3, etc.).
• The set point or the desirable state passes to the controller as a variable projected down to the latent subspace. This makes set point tracking easier for the controllers, because the infeasible part of the set point is not allowed to pass directly to them.
• The PLS identified process is somewhat decoupled, owing to the orthogonality of the input scores and the rotation of the input scores to be highly correlated with the output scores.
• The use of a neural network in identifying the inner relation between the input and output scores can capture the process nonlinearity.
• For every input-output pair participating in training the neural networks, the use of the historical database eliminates the necessity of determining cross correlation coefficients, which would otherwise have guided the selection of the most effective inputs and their corresponding targets for training the neural networks.
The process transfer functions were simulated over a stipulated period of time to generate output-input data with a signal to noise ratio of 10.0. The scores corresponding to all the time series data were generated using principal component decomposition. Instead of the process being identified by a linear ARX based model coupled with least squares regression, in the NNPLS design the relationship between the T & U scores is estimated by a feed forward back propagation neural network. The network input was arranged in ARX structure using the process historical database. In the closed loop simulation of the process, the inverse NN models were used as controllers, typically in a feed forward fashion with the error, if any, adjusted as a bias. The inputs (I1) and outputs (I2) of the multilayer (3 layer) feed forward neural network (FFNN) representing the forward dynamics of the process in its training and simulation phases were as follows:
Training phase
I1 = [U(t − 3), U(t − 2), T(t − 3), T(t − 2)]    (4.1)
I2 = U(t − 1)    (4.2)
Simulation phase
I1 = [U(t − 2), U(t − 1), T(t − 1), T(t)]    (4.3)
I2 = U(t)    (4.4)
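The ARX arrangement of eqs. (4.1)-(4.2) amounts to a simple reshaping of the score series into lagged regressor/target pairs; a sketch (the helper name is an assumption, not from the thesis):

```python
import numpy as np

def arx_training_pairs(T, U):
    """Arrange latent score series into the ARX pattern of eqs. (4.1)-(4.2):
    regressor I1 = [U(t-3), U(t-2), T(t-3), T(t-2)], target I2 = U(t-1)."""
    I1 = np.column_stack([U[:-3], U[1:-2], T[:-3], T[1:-2]])
    I2 = U[2:-1]
    return I1, I2
```

Each row of I1 holds the two most recent output scores and input scores relative to the target sample, which is exactly the data layout fed to the FFNN during training.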
For the DINN, the simulation phase is synonymous with the control phase. The number of hidden layer neurons varies from process to process. Figure 4.2 represents the NNPLS scheme in both servo and regulator mode. In regulator mode, the effective disturbance transfer function was simulated to produce the Y data and D data, and hence their corresponding scores. The FFNN representing the disturbance dynamics acts as the effective disturbance process. Disturbance rejection was carried out along with the existing servo mode.
Fig. 4.2 NNPLS (a) servo and (b) regulatory mode of control
The inputs (I1) and outputs (I2) of the multilayer (3 layer) disturbance FFNN in its training and simulation phases were as follows:
Training phase
I1 = [U(t − 3), U(t − 2), D(t − 3), D(t − 2)]    (4.5)
I2 = U(t − 1)    (4.6)
Simulation phase
I1 = [U(t − 2), U(t − 1), D(t − 1), D(t)]    (4.7)
I2 = U(t)    (4.8)
The number of hidden layer neurons varies among the various disturbance processes. In each of the multivariable processes considered, n_i (i = 1..4, the number of variables to be controlled) input-output time series relations were identified using neural networks in a latent variable subspace. Inverse neural networks acted as controllers: to control any of the multivariable processes, n_i SISO controllers were designed. During the training phase of the NNs, the input scores to the network were arranged in the ARX structure following eq. (4.1), and the output or target score was set as per eq. (4.2). In simulation mode, the inputs and outputs of the trained networks were arranged as per eqs. (4.3) & (4.4). Neural networks were also used to mimic the disturbance dynamics for all the cases considered. During the training phase, the input scores to the network were arranged in the ARX structure following eq. (4.5), and the output or target score was set as per eq. (4.6). In simulation mode, the inputs and outputs of the trained networks were as per eqs. (4.7) & (4.8). The training algorithm used was gradient based, with the mean squared error (MSE) as the convergence criterion. The performances of the networks representing the various processes are presented in Table 4.1.
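A minimal sketch of such gradient based MSE training for a 3-layer feed forward network follows. The thesis used standard feed forward back propagation networks; the layer size, learning rate and number of epochs here are illustrative assumptions:

```python
import numpy as np

def train_ffnn(X, Y, n_hidden=8, lr=0.1, epochs=3000, seed=0):
    """Full-batch gradient descent on the MSE criterion for a 3-layer
    network: linear input -> tanh hidden -> linear output."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
    n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)        # hidden activations
        Yhat = H @ W2 + b2              # linear output layer
        err = Yhat - Y
        # back-propagate the MSE gradient through both layers
        gW2 = H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)
        gW1 = X.T @ dH / n;  gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    mse = float((err ** 2).mean())
    return (W1, b1, W2, b2), mse
```

Fitting a smooth one-dimensional target is enough to see the MSE criterion converge; in the thesis the regressors are the ARX-arranged score series of eqs. (4.1)-(4.8).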
4.3 IDENTIFICATION & CONTROL OF (2 × 2) DISTILLATION PROCESS
4.3.1 PLS Controller
A (2 × 2) distillation process was chosen to identify the process dynamics in the latent subspaces and compare the PLS predicted dynamics with the actual one. The top product composition XD and the bottom product composition XB were controlled by the reflux rate and the vapor boilup using PLS based direct synthesis controllers as well as neural controllers. The following transfer function, equation (4.9), was used to simulate the process when perturbed by pseudo random binary signals (1000 samples).
Inputs: u1 = reflux flow rate; u2 = vapour boilup
Disturbances: d1 = feed flow rate; d2 = feed light component mole fraction
Gp(s) = [ 0.878(−0.6s + 1)/((0.6s + 1)(75s + 1))    −0.864(−0.6s + 1)/((0.6s + 1)(75s + 1))
          1.082(−0.1s + 1)/((0.1s + 1)(75s + 1))    −1.096(−0.1s + 1)/((0.1s + 1)(75s + 1)) ]    (4.9)
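As a sketch of this data-generation step, the following generates a held PRBS and simulates a second-order ARX-form inner model of the kind identified below. The coefficients in the usage lines are illustrative stand-ins, not the thesis's identified values:

```python
import numpy as np

def prbs(n, seed=0, hold=5):
    """Pseudo-random binary signal in {-1, +1}, each level held for `hold` samples."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(n + hold - 1) // hold) * 2 - 1
    return np.repeat(bits, hold)[:n].astype(float)

def simulate_arx(num, den, u):
    """Simulate y for G(z) = (b1*z + b0)/(z^2 + a1*z + a0), i.e. the
    difference equation y(k) = -a1*y(k-1) - a0*y(k-2) + b1*u(k-1) + b0*u(k-2)."""
    b1, b0 = num
    a1, a0 = den
    y = np.zeros_like(u)
    for k in range(2, len(u)):
        y[k] = -a1 * y[k - 1] - a0 * y[k - 2] + b1 * u[k - 1] + b0 * u[k - 2]
    return y

# illustrative (not identified) coefficients: stable poles at z = 0.8 and 0.7
u = prbs(1000)
y = simulate_arx((0.1, 0.05), (-1.5, 0.56), u)
```

The resulting (u, y) record plays the role of the 1000-sample PRBS perturbation data mentioned above.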
Both FIR based and ARX based inner models were used to identify the process dynamics in the projected subspaces. Equations (4.10)-(4.11) represent the identified ARX based dynamic models for outputs 1 & 2, respectively. Equations (4.12)-(4.13) represent the identified FIR based dynamic models for outputs 1 & 2, respectively.
G1(z) = (0.001791z − 0.004398)/(z² + 1.547z − 0.5563)    (4.10)
G2(z) = (0.1367z − 0.3776)/(z² + 1.045z − 0.05656)    (4.11)
G1(z) = (0.01951z + 0.02326)/(z² + 0.0152z + 0.0151)    (4.12)
G2(z) = (0.5637z + 0.5447)/(z² + 0.2816z + 0.5448)    (4.13)
FIR and ARX based inner correlations between the scores T and U were established keeping the outer linear structure of the PLS intact. The predicted outputs corresponding to the inputs within the PLS framework were obtained by post compensating the U scores with the Q matrix. The T scores were generated by post compensating the original X matrix with the P matrix. Figures 4.3 and 4.4 compare the actual plant dynamics of the top product composition XD and the bottom product composition XB with the ARX based PLS predicted dynamics.
The desired transfer function for the closed loop simulation was selected as a second order system. The controllers were designed as direct synthesis controllers in the following way:
GCL(s) = [ 1/(λs + 1)²    0
           0              1/(λs + 1)² ]    (4.14)
where λ is the tuning parameter. The resulting direct synthesis controller is:
GC(s) = GCL(s)/(G(s)(1 − GCL(s)))    (4.15)
The performances of the proposed direct synthesis controllers, designed on the basis of equations (4.9)-(4.13) & (4.15) and embedded in the PLS framework, were examined. The PLS controller could track the set point perfectly (a step change in top product composition from 0.99 to 0.996 and a step change in bottom product composition from 0.01 to 0.005). Figures 4.5 and 4.6 illustrate and compare the performance of the PLS controllers (FIR based / ARX based inner dynamic model) in servo mode.
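For intuition, the direct synthesis recipe of eq. (4.15) collapses to a PI controller when the inner model is first order and the desired trajectory is taken, as a simplifying assumption here, to be first order rather than the second order form of eq. (4.14):

```python
def direct_synthesis_pi(K, tau, lam):
    """Direct synthesis, eq. (4.15), for a first-order inner model
    G(s) = K/(tau*s + 1) and an (assumed, simplified) first-order
    desired trajectory Gcl(s) = 1/(lam*s + 1):
        Gc = Gcl / (G * (1 - Gcl)) = (tau*s + 1)/(K*lam*s),
    i.e. a PI controller with Kc = tau/(K*lam) and tau_I = tau."""
    Kc = tau / (K * lam)
    tau_I = tau
    return Kc, tau_I
```

The tuning parameter λ appears only in the controller gain, which is what makes direct synthesis convenient for the per-dimension SISO loops of the PLS framework.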
4.3.2 NNPLS Controller
In the same (2 × 2) distillation process, the output 1 (top product composition, XD) - input 1 (reflux flow rate) and output 2 (bottom product composition, XB) - input 2 (vapour boilup) time series relations were identified using neural networks in a latent variable subspace. Inverse neural networks acted as controllers: two SISO controllers were designed to control the process. Neural networks were also used to mimic the disturbance dynamics for output 1 (top product composition, XD) - input disturbance 1 (d1, feed flow rate) and output 2 (bottom product composition, XB) - input disturbance 2 (d2, feed light component mole fraction). All the designed networks were three layered networks. The design procedure has already been discussed in subsection 4.2.2.
Figure 4.7 presents the comparison between the actual process outputs and the NN identified process outputs, namely the top and bottom product compositions. Figure 4.8 shows the closed loop performance of the two SISO controllers. In servo mode, the first controller proved to be very reliable in set point tracking and reached the steady state value in less than 15 s when the top product composition was changed from 0.99 to 0.996. The second controller could track the set point change in bottom product composition from 0.01 to 0.005 by
reaching the steady state value within 10 s. Figure 4.9 presents the closed loop simulation in regulatory mode in conjunction with the existing servo mode, showing the disturbance rejection performance in XD and XB.
4.4 IDENTIFICATION & CONTROL OF (3 × 3) DISTILLATION PROCESS
The following equations (4.16) & (4.17) are the LTI process and disturbance transfer functions, respectively, of a (3 × 3) distillation process identified by Ogunnaike et al. (1983). NNPLS controllers were proposed to control the process.
Gp(s) = [ 0.66exp(−2.6s)/(6.7s + 1)       −0.61exp(−3.5s)/(8.64s + 1)    −0.0049exp(−s)/(9.06s + 1)
          1.11exp(−6.5s)/(3.25s + 1)      −2.36exp(−3s)/(5s + 1)         −0.012exp(−1.2s)/(7.09s + 1)
          −34.68exp(−9.2s)/(8.15s + 1)    46.2exp(−9.4s)/(10.9s + 1)     0.87(11.61s + 1)exp(−s)/((3.89s + 1)(18.8s + 1)) ]    (4.16)

GD(s) = [ 0.14exp(−12s)/(6.2s + 1)        −0.0011(26.32s + 1)exp(−2.66s)/((7.85s + 1)(14.63s + 1))
          0.53exp(−10.5s)/(6.9s + 1)      −0.0032(19.62s + 1)exp(−3.44s)/((7.29s + 1)(8.94s + 1))
          −11.54exp(−0.6s)/(7.01s + 1)    0.32exp(−2.6s)/(7.765s + 1) ]    (4.17)
The following are the outputs, inputs and disturbances of the system.
Outputs: y1 = overhead ethanol mole fraction; y2 = side stream ethanol mole fraction; y3 = 19th tray temperature
Inputs: u1 = reflux flow rate; u2 = side stream product flow rate; u3 = reboiler steam pressure
Disturbances: d1 = feed flow rate; d2 = feed temperature
The output 1 (overhead ethanol mole fraction, y1) - input 1 (reflux flow rate, u1), output 2 (side stream ethanol mole fraction, y2) - input 2 (side stream product flow rate, u2), and
output 3 (19th tray temperature, y3) - input 3 (reboiler steam pressure, u3) time series pairings are the default in the PLS structure, and this pairing was also supported by RGA (relative gain array) analysis. Because of the orthogonal structure of PLS, the aforesaid pairings are consequential. Three SISO NNPLS controllers were designed to control the top and side product compositions and the 19th tray temperature of the (3 × 3) distillation process. The disturbance dynamics of the distillation process were identified in the form of three feed forward back propagation neural networks. The output 1 (overhead ethanol mole fraction, y1) - input disturbance 1 (d1, feed flow rate), output 2 (side stream ethanol mole fraction, y2) - input disturbance 1 (d1, feed flow rate) and output 3 (19th tray temperature, y3) - input disturbance 1 (d1, feed flow rate) time series relations were considered to manifest the disturbance rejection performance of the designed controllers. The design of the NNPLS controllers followed the procedure discussed in subsection 4.2.2.
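The RGA check mentioned above can be reproduced from the steady state gains read off eq. (4.16). This is a sketch, assuming the gain matrix as read from (4.16):

```python
import numpy as np

def rga(K):
    """Relative gain array: Lambda = K * (K^-1)^T, elementwise (Hadamard) product."""
    return K * np.linalg.inv(K).T

# steady-state gain matrix of the (3 x 3) column, read from eq. (4.16)
K = np.array([[  0.66,  -0.61, -0.0049],
              [  1.11,  -2.36, -0.012 ],
              [-34.68,  46.2,    0.87 ]])
L = rga(K)  # every row and column of an RGA sums to 1
```

Inspecting the rows of Λ indicates which input should be paired with which output; here the diagonal pairing agrees with the default PLS pairing used in the text.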
Figure 4.10 presents the comparison between the NNPLS predicted dynamics and the simulated plant outputs. Output 3 was fitted less accurately than the other two outputs, as evidenced by the regression coefficient of the concerned network model. Figure 4.11 presents the NNPLS controller based closed loop responses of the process for all of its outputs in servo mode. The figure shows the response of the system for a set point change of 0.05 in y1, 0.08 in y2 and no change in y3. A perfectly decoupled set point tracking performance of the NNPLS control system design is also revealed. Figure 4.12 shows the closed loop responses in regulator mode as a result of a step change in disturbance 1 by +0.2. The designed NNPLS controllers delivered encouraging performance in servo as well as regulator mode.
4.5 IDENTIFICATION & CONTROL OF (4 × 4) DISTILLATION PROCESS
The following equations (4.18) & (4.19) are the process and disturbance transfer functions of a (4 × 4) distillation process model adapted from Luyben (1990). NNPLS controllers were proposed to control the process.
Gp(s) = [ 4.09exp(−1.3s)/((33s + 1)(8.3s + 1))     −6.36exp(−1.2s)/((31.6s + 1)(20s + 1))                 −0.25exp(−1.4s)/(21s + 1)               −0.49exp(−6s)/(22s + 1)²
          −4.17exp(−5s)/(45s + 1)                  6.93exp(−1.02s)/(44.6s + 1)                            −0.05exp(−6s)/(34.5s + 1)²              1.53exp(−3.8s)/(48s + 1)
          1.73exp(−18s)/(13s + 1)²                 5.11exp(−12s)/(13.3s + 1)²                             4.61exp(−1.01s)/(18.5s + 1)             −5.49exp(−1.5s)/(15s + 1)
          −11.2exp(−2.6s)/((43s + 1)(6.5s + 1))    14(10s + 1)exp(−0.02s)/((45s + 1)(17.4s² + 3s + 1))    0.1exp(−0.05s)/((31.6s + 1)(5s + 1))    4.49exp(−0.6s)/((48s + 1)(6.3s + 1)) ]    (4.18)

GD(s) = [ 4.09exp(−1.3s)/((33s + 1)(8.3s + 1))     −6.36exp(−1.2s)/((31.6s + 1)(20s + 1))
          −4.17exp(−5s)/(45s + 1)                  6.93exp(−1.02s)/(44.6s + 1)
          1.73exp(−18s)/(13s + 1)²                 5.11exp(−12s)/(13.3s + 1)²
          −11.2exp(−2.6s)/((43s + 1)(6.5s + 1))    14(10s + 1)exp(−0.02s)/((45s + 1)(17.4s² + 3s + 1)) ]    (4.19)
The following are the outputs, inputs and disturbances of the system.
Outputs: y1 = top product composition (XD1); y2 = side stream composition (XS2); y3 = bottom product composition (XB3); y4 = temperature difference to minimize the energy consumption (ΔT4)
Inputs: u1 = reflux rate; u2 = heat input to the reboiler; u3 = heat input to the stripper; u4 = feed flow rate to the stripper
Disturbances: d1 = feed flow rate; d2 = feed temperature
The dynamics of the distillation process were identified in the form of four feed forward back propagation neural networks. The output 1 (top product composition, y1) - input 1 (reflux rate, u1), output 2 (side stream composition, y2) - input 2 (heat input to the reboiler, u2), output 3 (bottom product composition, y3) - input 3 (heat input to the stripper, u3) and output 4 (temperature difference, y4) - input 4 (feed flow rate to the stripper, u4) pairings were the four SISO processes considered for analysis. The disturbance dynamics of the distillation process were likewise identified in the form of four feed forward back propagation neural networks, considering the output 1 (y1) - input disturbance 1 (d1, feed flow rate), output 2 (y2) - input disturbance 1, output 3 (y3) - input disturbance 1 and output 4 (y4) - input disturbance 1 time series relations. Four NNPLS controllers were designed as DINNs of the process as per the procedure discussed in subsection 4.2.2.
Figure 4.13 presents the comparison between the NNPLS model and the simulated plant outputs for the (4 × 4) distillation process. Figure 4.14 presents the closed loop responses for all the process outputs in servo mode with the designed NNPLS controllers. This figure shows the response of the system for a step change of +0.85 in y1, 0.1 in y2, 0.05 in y3 and no change in y4; the NNPLS control system achieved a perfect set point tracking performance in servo mode. Figure 4.15 shows the closed loop responses for all the process outputs in regulator mode as a result of a step change in disturbance 1 by +0.2. The designed NNPLS controllers delivered encouraging performance in servo as well as regulator mode.
4.6 PLANT WIDE CONTROL
One of the important tasks for a sugar producing process is to ensure the optimum working regime for the multiple effect evaporator, and hence a sound control system design. A number of feedback and feed forward strategies based on linear and nonlinear models have been investigated in this regard. Generic model based control (GMC), generalized predictive control (GPC), linear quadratic Gaussian (LQG) control, and internal model control (IMC) were applied in the past for controlling multiple effect evaporation processes (Anderson & Moore, 1989; Mohtadi et al., 1987) [6, 7].
The multiple effect evaporation process of a sugar industry is considered as the plant, as shown in Figure 4.16. The process plant has five effects, and fruit juice with a nominal 15 % sucrose concentration (brix) is concentrated up to 72 %. The juice to be concentrated enters the first effect with concentration C0, flow rate F0, enthalpy h0 and temperature T0. Steam at a rate S is injected into the first effect to vaporize water, producing vapor O1, which is directed to the next effect after a vapor deduction VP1. The first effect liquid flow rate F1 at concentration C1 goes to the tube side of the second effect. The vapor from the last effect goes to the condenser, and the syrupy product from the last effect goes to the crystallizer. The liquid holdup of the ith effect is Wi. The most important measurable disturbance of the plant is the demand for steam by the crystallizer, which is deducted from the third effect. Hence, it is necessary to control the brix C3. The other control objective is C5.
The following model equations were considered to get the LTI transfer function for every effect.
Material balance:
dWi/dt = Fi−1 − Fi − Oi    (4.20)
where O1 = K1S, Oi = Ki(Oi−1 − VPi−1) for i = 2…5, and Ki = static gain.
Component balance:
dCi/dt = (1/Wi)[Fi−1Ci−1 − FiCi]    (4.21)
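Equations (4.20)-(4.21) for a single effect can be integrated with a simple Euler scheme. The flows and initial values in the usage below are illustrative, not the plant's steady state data:

```python
def simulate_effect(F_in, C_in, F_out, O, W0, C0, dt=1.0, steps=200):
    """Euler integration of the single-effect balances (4.20)-(4.21):
        dW/dt = F_in - F_out - O
        dC/dt = (F_in*C_in - F_out*C) / W
    with the inlet/outlet flows and vapor rate held constant."""
    W, C = W0, C0
    for _ in range(steps):
        C += dt * (F_in * C_in - F_out * C) / W  # component balance (4.21)
        W += dt * (F_in - F_out - O)             # material balance (4.20)
    return W, C

# balanced flows (F_in = F_out + O) keep the holdup constant while the
# concentration rises toward its steady state F_in*C_in/F_out
W_ss, C_ss = simulate_effect(F_in=2.0, C_in=15.0, F_out=1.0, O=1.0, W0=50.0, C0=15.0)
```

Chaining five such effects, with each effect's liquor stream feeding the next, reproduces the plant structure of Figure 4.16 in a simplified form.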
The dynamics of each portion of the plant (each effect here) were identified in the state space domain. The variables with their steady state values are presented in Table 4.2. Two direct synthesis controllers were designed to control C3 and C5. Direct inverse neural network (DINN) controllers were also designed to fulfil the two control objectives. The performance of the DINN controllers was compared with that of the direct synthesis controllers in servo as well as regulator mode. It is evident from Figures 4.17 and 4.18 that the ANN controllers served well in servo mode, whereas the direct synthesis (model based) controllers did well in their disturbance rejection performance. The plant wide simulation was done on the SIMULINK platform.
4.7 CONCLUSION
The present chapter demonstrates the successful implementation of NNPLS controllers in controlling multivariable distillation processes and the successful implementation of neural controllers in a plant wide process, but its implications are farther reaching. The physics of data based model identification through PLS and NNPLS, their advantages over other time series models like ARX, ARMAX and ARMA, and the merits of NNPLS based control over PLS based control were addressed in this chapter. The findings of this chapter carry considerable intellectual merit.
The NN controllers worked satisfactorily in regulator mode for the chosen plant wide process. It can be concluded, however, that the processes need to be simulated rigorously in an ASPEN PLUS environment before making any further comments on the applicability of the NN controllers to plant wide processes in general.
Table 4.1 Designed networks and their performances in the (2 × 2), (3 × 3) & (4 × 4) distillation processes.

System | Mode of operation | Identified NNPLS | MSE        | R²
2 × 2  | Servo             | G1               | 0.00068067 | 0.994
       |                   | G2               | 0.0002064  | 0.999
       | Regulatory        | Gd1              | 0.00060809 | 0.995
       |                   | Gd2              | 0.00016509 | 0.998
3 × 3  | Servo             | G1               | 0.0038066  | 0.964
       |                   | G2               | 0.00702021 | 0.965
       |                   | G3               | 0.001353   | 0.962
       | Regulatory        | Gd1              | 0.0006922  | 0.99
       |                   | Gd2              | 0.0098331  | 0.962
       |                   | Gd3              | 0.0002519  | 0.996
4 × 4  | Servo             | G1               | 0.000666   | 0.9882
       |                   | G2               | 0.0046903  | 0.9662
       |                   | G3               | 0.0016072  | 0.9773
       |                   | G4               | 0.00030866 | 0.9891
       | Regulatory        | Gd1              | 0.00040426 | 0.9918
       |                   | Gd2              | 0.0004304  | 0.9931
       |                   | Gd3              | 0.00012274 | 0.9965
       |                   | Gd4              | 0.0014292  | 0.9640
Table 4.2 Steady state values for the variables of the juice concentration plant

Variable   | Input               | First effect | Second effect | Third effect | Fourth effect | Fifth effect
C (kg/m³)  | C0 = 14.74          | C1 = 21.12   | C2 = 27.47    | C3 = 36.32   | C4 = 53.482   | C5 = 72
F (kg/s)   | F0 = 187.97         | F1 = 131.19  | F2 = 99.86    | F3 = 73.864  | F4 = 49.94    | F5 = 37.13
W (m³)     |                     | W1 = 49.3    | W2 = 23       | W3 = 22      | W4 = 18       | W5 = 13.34
O (kg/s)   | S = 56.78 (steam)   | O1 = 39.33   | O2 = 23       | O3 = 23.92   | O4 = 12.79    | O5 = 12.79
h (kJ/kg °C) | h0 = 3.87         | h1 = 3.66    | h2 = 3.5      | h3 = 3.27    | h4 = 2.8      | h5 = 2.314
Fig. 4.3 Comparison between actual and ARX based PLS predicted dynamics for output 1 (top product composition XD) in the distillation process.
Fig. 4.4 Comparison between actual and ARX based PLS predicted dynamics for output 2 (bottom product composition XB) in the distillation process.
Fig. 4.5 Comparison of the closed loop performances of ARX based and FIR based PLS controllers for a set point change in XD from 0.99 to 0.996.
Fig. 4.6 Comparison of the closed loop performances of ARX based and FIR based PLS controllers for a set point change in XB from 0.01 to 0.005.
Fig. 4.7 Comparison between actual and neurally identified outputs of a (2 × 2) distillation process using projected variables in the latent space.
Fig. 4.8 Closed loop response of the top and bottom product compositions using NNPLS control in servo mode.
Fig. 4.9 Closed loop response of the top and bottom product compositions using NNPLS control in regulatory mode.
Fig. 4.10 Comparison between actual and NNPLS identified outputs of a (3 × 3) distillation process using projected variables in the latent space.
Fig. 4.11 Closed loop response of the three outputs of a (3 × 3) distillation process using NNPLS control in servo mode.
Fig. 4.12 Closed loop response of the three outputs of a (3 × 3) distillation process using NNPLS control in regulator mode.
Fig. 4.13 Comparison between actual and NNPLS identified outputs of a (4 × 4) distillation process.
Fig. 4.14 Closed loop response of the four outputs of a (4 × 4) distillation process using NNPLS control in servo mode.
Fig. 4.15 Closed loop response of the four outputs of a (4 × 4) distillation process using NNPLS control in regulator mode.
Fig. 4.16 Juice concentration plant
Fig. 4.17 (a) Comparison between NN and model based control in servo mode for maintaining the brix from the 3rd effect of a sugar evaporation process plant using a plant wide control strategy.
Fig. 4.17 (b) Comparison between NN and model based control in servo mode for maintaining the brix from the 5th effect of a sugar evaporation process plant using a plant wide control strategy.
Fig. 4.18 (a) Comparison between NN and model based control in regulator mode for maintaining brix (at its base value) from the 3rd effect of a sugar evaporation process plant using a plant wide control strategy.
Fig. 4.18 (b) Comparison between NN and model based control in regulator mode for maintaining brix (at its base value) from the 5th effect of a sugar evaporation process plant using a plant wide control strategy.
[Figure: brix responses C3 and C5 vs. time in regulator mode; legends: NN controller, Model Based Controller]
REFERENCES
1. Qin, S. J., McAvoy, T. J., "Nonlinear PLS modeling using neural networks." Comput. Chem. Engr. 1992, 16(4): 379-391.
2. Qin, S. J., "A statistical perspective of neural networks for process modelling and control." In Proceedings of the 1993 International Symposium on Intelligent Control, Chicago, IL, 1993: 559-604.
3. Kaspar, M. H., Ray, W. H., "Dynamic modeling for process control." Chemical Eng. Science 1993, 48(20): 3447-3467.
4. Lakshminarayanan, S., Shah, S. L., Nandakumar, K., "Modeling and control of multivariable processes: The dynamic projection to latent structures approach." AIChE Journal 1997, 43: 2307-2323.
5. Damarla, S., Kundu, M., "Design of multivariable NNPLS controller: An alternative to classical controller." (communicated to Chemical Product and Process Modeling)
6. Anderson, B. D. O., Moore, J. B., "Optimal Control: Linear Quadratic Methods." Prentice Hall, 1989.
7. Mohtadi, C., Shah, S. L., Clarke, D. W., "Multivariable adaptive control without a priori knowledge of the delay matrix." Syst. Control Lett. 1987, 9: 295-306.
CONCLUSIONS AND RECOMMENDATION FOR FUTURE
WORK
5.1 CONCLUSIONS
The present work successfully implemented various MSPC techniques for the identification, monitoring and control of industrially significant processes, and the issues surrounding their implementation were also addressed. With the advancement of data capture, storage, compression and analysis techniques, multivariate statistical process monitoring (MSPM) and control has become a promising and focused area of R&D activity, and the current project was taken up from this perspective. Every physical/mathematical model admits a certain degree of empiricism, while purely empirical models are sometimes based on heuristics; process monitoring and control built on either basis may therefore prove inadequate, especially for nonlinear, high-dimensional processes. Data-driven models can be a better option than both first-principles models and purely heuristic empirical models in this regard. Efficient data pre-processing, precise analysis and judicious transformation are the key steps in a successful data-based approach to process identification, monitoring and control. The deliverables of the present dissertation are summarized as follows:
• Implementation of clustering of time series data and moving window based pattern matching for detecting faulty conditions as well as differentiating among various normal operating conditions of the Bioreactor, Drum-boiler, continuous stirred tank reactor with cooling jacket and the prestigious Tennessee Eastman challenge processes. Both techniques delivered encouraging efficiencies in their performance.
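The moving window based pattern matching above rests on similarity factors computed between a window of current data and historical windows. As a minimal sketch (hypothetical function and variable names, not the thesis's actual implementation), the Krzanowski PCA similarity factor between two data windows, which scores how well their dominant principal subspaces overlap, can be computed as follows:

```python
import numpy as np

def pca_subspace(X, k):
    """First k principal directions (as columns) of a mean-centred data window."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centred window are the PCA loadings.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                                   # shape (n_vars, k)

def pca_similarity(X1, X2, k):
    """Krzanowski PCA similarity factor in [0, 1]; 1 means identical subspaces."""
    L, M = pca_subspace(X1, k), pca_subspace(X2, k)
    # trace(L' M M' L) / k = mean squared cosine between the two subspaces.
    return float(np.trace(L.T @ M @ M.T @ L)) / k

# Synthetic windows (assumed data, for illustration only): two windows from the
# same operating condition share dominant directions; a "faulty" window does not.
rng = np.random.default_rng(0)
Z1 = rng.standard_normal((100, 4))
Z2 = rng.standard_normal((100, 4))
X_nominal = Z1 * np.array([5.0, 3.0, 1.0, 0.5])   # variance dominated by vars 0, 1
X_similar = Z2 * np.array([5.0, 3.0, 1.0, 0.5])   # same operating condition
X_faulty  = Z2 * np.array([0.5, 1.0, 3.0, 5.0])   # variance shifted to vars 2, 3

s_ok = pca_similarity(X_nominal, X_similar, k=2)      # close to 1
s_fault = pca_similarity(X_nominal, X_faulty, k=2)    # close to 0
```

A combined similarity factor would additionally weight in a distance-based factor; a moving window whose combined similarity to a stored operating condition exceeds a threshold is assigned to that condition, otherwise it is flagged as a candidate fault.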
• Identification of time series data/process dynamics and disturbance dynamics with supervised artificial neural network (ANN) models.
• Identification of time series data/process dynamics in the latent subspace using partial least squares (PLS).
• Design of multivariable controllers in the latent subspace using the PLS framework. For multivariable processes, the PLS based controllers offered the opportunity to be designed as a series of decoupled SISO controllers.
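The decoupling arises because PLS pairs each input score with one output score, so the inner relations are diagonal. A bare-bones sketch of the linear (NIPALS) PLS decomposition on which such controllers rest, with hypothetical names and a static coupled 2 × 2 gain matrix standing in for the dynamic ARX-based inner models of the thesis:

```python
import numpy as np

def nipals_pls(X, Y, n_comp, max_iter=500, tol=1e-12):
    """Bare-bones NIPALS PLS: returns X-scores T, loadings P, Q and weights W."""
    X, Y = X.copy(), Y.copy()
    T, P, Q, W = [], [], [], []
    for _ in range(n_comp):
        u = Y[:, [0]]                              # start u-score from first output
        for _ in range(max_iter):
            w = X.T @ u
            w /= np.linalg.norm(w)                 # X-weights
            t = X @ w                              # X-score (latent input)
            q = Y.T @ t / float(t.T @ t)           # Y-loadings
            u_new = Y @ q / float(q.T @ q)         # Y-score (latent output)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        p = X.T @ t / float(t.T @ t)               # X-loadings
        X = X - t @ p.T                            # deflation: remove what this
        Y = Y - t @ q.T                            # latent SISO pair explains
        T.append(t); P.append(p); Q.append(q); W.append(w)
    return np.hstack(T), np.hstack(P), np.hstack(Q), np.hstack(W)

rng = np.random.default_rng(0)
G = np.array([[2.0, 0.8],                          # coupled steady-state gains
              [0.5, 1.5]])
X = rng.standard_normal((100, 2))
Y = X @ G.T                                        # noiseless coupled 2x2 "process"
T, P, Q, W = nipals_pls(X, Y, n_comp=2)
B = W @ np.linalg.inv(P.T @ W) @ Q.T               # PLS regression matrix
Y_hat = X @ B                                      # reproduces Y for this data
```

Each (t_i, u_i) pair defines one SISO inner loop; a controller is designed per pair, and the loadings map the latent control moves back to the physical inputs.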
• Neural nets trained on the inverse dynamics of the process, i.e. direct inverse neural networks (DINNs), acted as controllers. Latent variable based DINNs embedded in the PLS framework were termed NNPLS controllers. The closed loop performance of NNPLS controllers on (2 × 2), (3 × 3) and (4 × 4) distillation processes was encouraging in both servo and regulator modes.
• Model based direct synthesis and DINN controllers were employed for controlling brix concentrations in a multiple effect evaporation process plant; their performances were comparable in servo mode, but the model based direct synthesis controllers were better at disturbance rejection.
In sum, it can be claimed that the present work not only addressed the monitoring and control of some modern-day industrial problems, it also enhanced the understanding of data based models, their scope, their issues and their implementation in the current perspective. The subject matter of the present dissertation thus seems well suited to the needs of the present time.
5.2 RECOMMENDATION FOR FUTURE WORK
• Plant wide control using non-traditional control system design needs to be implemented in complex processes.
• Integration of statistical process monitoring and control for industrial automation.
• Building of expert systems or knowledge-based systems (KBS), which emulate the decision-making capabilities and knowledge of a human expert in a specific field.
MULTIVARIATE STATISTICAL PROCESS MONITORING AND CONTROL
A Thesis Submitted in Partial Fulfillment for the Award of the Degree Of MASTER OF TECHNOLOGY In CHEMICAL ENGINEERING By
SESHU KUMAR DAMARLA
Under the guidance of
Dr. Madhusree Kundu
Chemical Engineering Department National Institute of Technology Rourkela 769008 May 2011
Department of Chemical Engineering National Institute of Technology Rourkela 769008 (ORISSA)
CERTIFICATE
This is to certify that the thesis entitled “Multivariate Statistical Process Monitoring and Control”, being submitted by Sri Seshu Kumar Damarla for the award of M.Tech. degree is a record of bonafide research carried out by him at the Chemical Engineering Department, National Institute of Technology, Rourkela, under my guidance and supervision. The work documented in this thesis has not been submitted to any other University or Institute for the award of any other degree or diploma.
Dr. Madhushree Kundu Department of Chemical Engineering National Institute of Technology Rourkela  769008
ii
The second person l love and respect is Prof. I always obliged to her. who scarified their education and desires for my brother and me. Satish garu. Prof. Mishra of the Department of Chemical Engineering. Prof. Prof. I always responsible to my sisters and their families. I express my gratitude and indebtedness to Professor S. for providing me with the necessary computer laboratory and departmental facilities. Her innovative thoughts and valuable suggestions made this dissertation possible. I express my gratitude and indebtedness to Prof. Finally. A. C. Her tireless endeavors keep me good. She encourages and supports me in all the way. K. Head. Jeevan garu. Today I am in some respectful position because of my two brothers. Kasi. I would like to express my indebtedness to Prof. Kumar. Throughout these two years of journey I have wonderful friendship with Pradeep. Biswal. for their valuable suggestions and instructions at various stages of the work. Madhusree Kundu who has brought me from low level to certain level. Khanam. Santanu Paria. A. Without his help we could not have proceeded further. S. SESHU KUMAR DAMARLA NATIONAL INSTITUTE OF TECHNOLOGY ROURKELA769008(ORISSA) iii . Tarangini akka. He is hero off the screen.ACKNOWLEDGEMENT I would like to express my sincere gratefulness to my father who has struggled every minute of his life to keep me at higher level and shown a path in which I am traveling now.Pavan and Prabhas. Madhusree Kundu wherever I will be. I never forget her. Basudeb Munshi. Palash Kundu who has given wonderful ideas and an amazing path to work. She scarifies everything for us. Prof. I express my humble regards to my mom who always thinks and worships God about us. My each and every success always dedicates to him. She inspires me through her attitude and dedication towards research. They support and motivate me in every moment of my life. I wish and love to work with Prof. Agarwal and Professor K. S. Chemical Engineering Department. Sahoo. Prof.
CONTENTS Page No.4 1.5 PREVIOUS WORK REFERENCES .4.2 GENERAL BACKGROUND DATA BASED MODELS 1.3 PCA Based Similarity Distance Based Similarity Combined Similarity Factor 8 8 10 10 11 12 12 12 14 16 19 2.6 1.4 THEORITICAL POSTULATION ON PLS 2.1 2.1 Linear PLS 2.2.5 1.2 2.1 1.3.4.2 Dynamic PLS 2.3.1 Chemometric Techniques 1.1 2.3 INTRODUCTION THEORITICAL POSTULATIONS ON PCA SIMILARITY 2.3 1.3.7 STATISTICAL PROCESS MONITORING MULTIVARIATE STATISTICAL AND STATISTICAL CONTROLLER PLANTWIDE CONTROL OBJECTIVE ORGANIZATION OF THESIS NEURO 4 5 6 6 7 REFERENCES Chapter 2 CHEMOMETRIC TECHNIQUES AND PREVIOUS WORK 8 2.2 2. Certificate Acknowledgement Abstract List of Figures List of Tables ii iii viii x xiii 1 Chapter 1 INTRODUCTION TO MULTIVARIATE STATISTICAL PROCESS MONITORING AND CONTROL 1 1 2 3 3 1.
4.2.4.2.6 3.5.2 Equilibrium Values 3.2.3 Distribution of Steam in the Drum 3.3 3.4.6.4.4.3 Parameters 3.2.2.8.Chapter 3 MULTIVARIATE STATISTICAL PROCESS MONITORING 22 22 3.4 MOVING WINDOW BASED PATTERN MATCHING DRUM BOILER PROCESS 3.3 Jacketed CSTR 3.2.2.4.6.5 The Model 3.2 Distribution of Steam in Risers and Drum 3.2.5 BIOREACTOR PROCESS 3.1 MODELING CONTNUOUS STIRRED TANK REACTOR COOLING JACKET TENNESSEE EASTMAN CHALLENGE PROCESS 3.8.9 RESULTS & DISCUSSIONS 3.7.4.6.1 3.4.4.8.4.1 Distribution of Steam and Water in Risers 3.1 Drum Boiler Process 3.4 Drum Level 3.2.8.2 Modeling 3.2.2 Bioreactor Process 3.7 40 40 41 42 42 43 44 44 44 62 3.1 Derivation of State Equations 3.4.4.1 Generation of Database WITH 3.1 The Brief Introduction of Drum Boiler System 3.8 3.2.3 Kmeans Clustering Using Similarity Factors Selection of Optimum Number of Clusters Clustering Performance Evaluation 22 23 24 25 25 27 28 29 30 31 31 32 33 33 34 34 35 35 37 38 38 38 3.4.2.2 Lumped Parameter Model 3.2.4.3 Circulation Flow 3.4 Tennessee Eastman Process CONCLUSION REFERENCES .6 State Variable Model 3.2.1 3.2 3.2 INTRODUCTION CLUSTERING TIME SERIES DATA: APPLICATION IN PROCESS MONITORING 3.1 Proposed Second Order Boiler Model 3.2.2.2.2.
4 4.3.Chapter 4 APPLICATION OF MULTIVARIATE STATISTICAL AND NEURAL PROCESS CONTROL STRATEGIES 64 64 65 65 66 4.7 IDENTIFICATION & DISTILLATION PROCESS IDENTIFICATION & DISTILLATION PROCESS PLANT WIDE CONTROL CONCLUSIONS 72 73 75 77 90 REFERENCES Chapter 5 CONCLUSIONS AND FOR FUTURE WORK RECOMMENDATIONS 91 91 92 5.1 4.2 INTRODUCTION DEVELOPMENT OF PLS AND NNPLS CONTROLLER 4.6 4.2.3.2.2 PLS Controller NNPLS Controller CONTROL CONTROL OF OF {3 × 3{ {4 × 4{ 69 69 71 4.1 CONCLUSIONS 5.2 RECOMMENDATIONS FOR FUTURE WORK .3 IDENTIFICATION & CONTROL OF {2 × 2{ DISTILLATION PROCESS 4.1 4.5 4.1 4.2 PLS Controller Development of Neural Network PLS controller 4.
The physics of data based model identification through PLS. Present work based on successful application of chemometric techniques in implementing machine learning algorithms. based on clustering time series data and moving window based pattern matching have been proposed for detection of faulty conditions as well as differentiating among various normal operating conditions of Biochemical reactor. it has been addressing industrial problems in recent past.ABSTRACT Application of statistical methods in monitoring and control of industrially significant processes are generally known as statistical process control (SPC). the PLS based controllers offered the opportunity to be designed as a series of decoupled SISO controllers. Drumboiler. supplanted univariate SPC techniques. their advantages over other time series models like ARX. were addressed in the present dissertation. MVSPC techniques are not only significant for scholastic pursuit. Neural network. Monitoring and controlling a chemical process is a challenging task because of their multivariate. For multivariable processes. monitoring & Control. and NNPLS. new methodologies. PCA. Two such chemometric techniques. continuous stirred tank with cooling jacket and the prestigious Tennessee Eastman challenge processes. an unsupervised technique can extract the essential features from a data set by reducing its dimensionality without compromising any valuable information of it. ARMAX. Since most of the modern day industrial processes are multivariate in nature. a supervised category of data based modeling technique was used for identification of process viii . In the present work. For controlling nonlinear complex processes neural network based PLS (NNPLS) controllers were proposed. ARMA. Both the techniques emancipated encouraging efficiencies in their performances. highly correlated and nonlinear nature. . 
principal component analysis (PCA) & partial least squares (PLS) were extensively adapted in this work for process identification. multivariate statistical process control (MVSPC). PLS finds the latent variables from the measured data by capturing the largest variance in the data and achieves the maximum correlation between the predictor and response variables even if it is extended to time series data.
{3 × 3{. Model based Direct synthesis and DINN controllers were incorporated for controlling brix concentrations in a multiple effect evaporation process plant and their performances were compared both in servo and regulator mode.ABSTRACT dynamics. Latent variable based DINNS’ embedded in PLS framework termed as NNPLS controllers. The subject plant wide control deals with the inter unit interactions in a plant by the proper selection of manipulated and measured variables. Neural nets trained with inverse dynamics of the process or direct inverse neural networks (DINN) acted as controllers. {2 × 2{. ix . and {4 × 4{ Distillation processes were taken up to implement the proposed control strategy followed by the evaluation of their closed loop performances. selection of proper control strategies.
3 Drum Boiler Process Tennessee Eastman Plant 28 56 Figure 3.1 Standard Linear PLS algorithms Schematic of PLS based Dynamics Moving Window based Pattern Matching 14 15 26 Figure 3. Figure 3.6 (b) Response of stripper level at FMD31 Figure 3. Figure 2.2 Figure 3.8 Response of Drumboiler process for a faulty operating condition Figure 3.4 (a) Response of reactor pressure at FMD13 .LIST OF FIGURES Page No.4 (b) Response of stripper level at FMD13 Figure 3. 57 57 Figure 3.9 Response for the Jacketed CSTR process under faulty operating condition Figure 4.5 (b) Response of reactor level at FMD14.6 (a) Response of reactor pressure at FMD31 Figure 3.1 Schematic of PLS based Control 61 61 66 x . 58 58 Figure 3.7 (b) Response of reactor pressure at FMD32 59 59 60 60 Figure 3.2 Figure 3.7 (a) Response of stripper level at FMD32 Figure 3.5 (a) Response of reactor pressure at FMD14 .1 Figure 2.
12 85 Figure 4.Figure 4.11 84 Figure 4.4 79 Figure 4. Comparison between actual and ARX based PLS predicted dynamics for output2 (bottom product composition I ) in distillation process Comparison of the closed loop performances of ARX based and FIR PLS controllers for a set point change in I from 0.3 Comparison between actual and ARX based PLS predicted dynamics for output1 (top product composition I ) in distillation process.14 86 Figure 4.2 (a) NNPLS Servo mode of control Figure 4.8 82 Figure 4.2 (b) NNPLS Regulatory mode of control System Figure 4.99 to 0.13 Comparison between actual and NNPLS identified outputs of a {4 × 4{ Distillation process Closed loop response of the four outputs of a {4 × 4{ distillation process using NNPLS control in servo mode Closed loop response of the four outputs of a {4 × 4{ xi 85 Figure 4.01 to 0.7 81 Figure 4.005 Comparison between actual and NN identified outputs of a {2 × 2{ Distillation process using projected variables in latent space Closed loop response of top and bottom product composition using NNPLS control in servo mode Closed loop response of top and bottom product composition using NNPLS control in regulatory mode Comparison between actual and NNPLS identified outputs of a {3 × 3{ Distillation process using projected variables in latent space Closed loop response of the three outputs of a {3 × 3{ distillation process using NNPLS control in servo mode Closed loop response of the three outputs of a {3 × 3{ distillation process using NNPLS control in regulator mode 68 68 79 Figure 4.5 80 Figure 4.10 84 Figure 4.9 83 Figure 4.15 .6 80 Figure 4.996 Comparison of the closed loop performances of ARX based and FIR PLS controllers for a set point change in I from 0.
18 (b) Comparison between NN and model based control in regulator mode for maintaining brix (at its base value) from 5 th effect of a sugar evaporation process plant using plant wide control strategy 88 88 89 89 xii .17 (a) Comparison between NN and model based control in servo mode for maintaining brix from 3 rd effect of a sugar evaporation process plant using plant wide control strategy.distillation process using NNPLS control in regulator mode Figure 4.17 (b) Comparison between NN and model based control in servo mode for maintaining brix from 5 th effect of a sugar evaporation process plant using plant wide control strategy Figure 4.16 Juice concentration plant 87 87 Figure 4. Figure 4.18 (a) Comparison between NN and model based control in regulator mode for maintaining brix (at its base value) from 3 rd effect of a sugar evaporation process plant using plant wide control strategy Figure 4.
11 Database corresponding to various operating conditions for Bioreactor process 50 Table 3.5 Table 3.14 Constraints in TE process 51 Table 3.2 Table 3. 45 46 46 Table 3.4 Table 3. Process disturbances in TE process 46 47 48 48 Table 3.8 Continuous process measurements in TE process 49 Table 3.6 ..9 Sampled process measurements in TE process 49 Table 3.1 Table 3. Heat and material balance data for TE process.10 Various operating conditions for Drumboiler process 50 Table 3. Table 3.13 Operating conditions for TE process 51 Table 3.LIST OF TABLES Page No.3 Values of the Drumboiler model parameters Bioreactor process Parameters CSTR parameters used in CSTR simulation.12 Generated database for CSTR process at various operating modes 50 Table 3. Component physical properties (at 100oc) in TE process. Process manipulated variables in TE process.15 Combined similarity factor based clustering performance for xiii .7 Table 3.
3 × 3 & 4 × 4 ) Distillation processes.1 Designed networks and their performances in (2 × 2.24 Combined similarity factors in dataset wise moving window based pattern matching implementation for TE process 54 Table 3.22 Similarity factors in dataset wise moving window based pattern matching implementation for Jacketed CSTR process 54 Table 3.17 Moving window based pattern matching performance for Drumboiler process 52 Table 3.2 xiv .23 Moving window based pattern matching performance in Jacketed CSTR process 54 Table 3.Drumboiler process 51 Table 3.19 Similarity factors in dataset wise moving window based pattern matching implementation for Bioreactor process 53 Table 3.25 Pattern matching performance for TE process Table 4.16 Similarity factors in dataset wise moving window based pattern matching implementation for Drumboiler process 52 Table 3. Steady state values for variables of Evaporation plant 55 77 78 Table 4.18 Combined similarity factor based clustering performance for Bioreactor process 52 Table 3.21 Combined similarity factor based clustering performance for Jacketed CSTR process 53 Table 3.20 Moving window based pattern matching performance for Bioreactor process 53 Table 3.
INTRODUCTION TO MULTIVARIATE STATISTICAL PROCESS MONITORING AND CONTROL .
but they do not function well for multivariable processes with highly correlated variables. The most widely used and popular SPC techniques involve univariate methods. observing and analyzing a single variable at a time. which consider all the variables of interest simultaneously and can extract information on the behavior of each variable or characteristic relative to the others. The conventional SPC charts such as Shewhart chart and CUSUM chart have been widely used for monitoring univariate processes. univariate SPC methods and techniques provide little information about their mutual interactions.INTRODUCTION TO MULTIVARIATE STATISTICAL PROCESS MONITORING AND CONTROL 1. that is. Industrial quality problems are multivariate in nature. rather than one single characteristic. Most of the limitations of univariate SPC can be addressed through the application of Multivariate Statistical Process Control (MVSPC) techniques. since they involve measurements on a number of characteristics.1 GENERAL BACKGROUND Application of statistical methods in monitoring and control of industrially significant processes are included in a field generally known as statistical process control (SPC) or statistical quality control (SQC). fault detection and process identification & control. 1 . MVSPC research is having high value in theoretical as well as practical application and is certainly be conducive to process monitoring. As a result.
Unlike the white box models derived from first principles. and in this regard.2 DATA BASED MODELS Data based modeling is one of the very recently added crafts in process identification. Principal component regression (PCR) and Canonical Variate and Subspace State Space modeling have been proposed for these purposes. Partial Least Squares (PLS). rational as well as potential conclusions can be drawn from the data. Hierarchical 2 . free of outliers and is ensured by data pre conditioning. The phases in the Data based modeling are: • • • • • • • System analysis Data collection Data conditioning Key variable analysis Model structure design Model identification Model evaluation In this era of data explosion. they are based on inputoutput data and only describing the overall behavior of the process. hence these models are called data based models or experimental models.INTRODUCTION TO STATISTICAL PROCESS MONITORING AND CONTROL 1. The data based models are especially appropriate for problems that are data rich. Examples are Principal Component Analysis (PCA). Sufficient numbers of quality data points are required to propose a good model. The black box models are data dependent and model parameters are determined by experimental results /wet labs. Several Datadriven multivariate statistical techniques such as Principal Component Analysis (PCA). we owe a profound debt to multivariate statistics. Data based models can be divided in to two major categories namely: • Unsupervised models: These are the models which try to extract the different features present in the data without any prior knowledge of the patterns present in the data. the black box/data based models or empirical models do not describe the mechanistic phenomena of the process. but hypothesis and/or information poor. Canonical Correlation Analysis (CCA). Quality data is defined by noise free data. monitoring and control.
Present work is based on successful application and implementation of Chemometric techniques like PCA & PLS in process identification. [35]In order to ensure product quality and process safety. in newer dimensions and perspective. • Supervised models: These are the models which try to learn the patterns in the data under the guidance of a supervisor who trains these models with inputs along with their corresponding outputs. PLS has been successfully applied in diverse fields including process monitoring. identification of process dynamics & control and it deals with noisy and highly correlated data. hence. Monitoring a chemical processes is preferably a data based approach because of their multivariate.INTRODUCTION TO STATISTICAL PROCESS MONITORING AND CONTROL Clustering Techniques (Dendrograms). quite often. 1. extraction of meaningful information from the process database seems to be a natural and logical 3 . A tutorial description along with some examples on the PLS model was provided by Geladi and Kowalaski (1986) [2].3 STATISTICAL PROCESS MONITORING Process safety and product quality are the goals. highly correlated and nonlinear nature. and predicting chemical data. Efficient data mining. nonhierarchical Clustering Techniques (Kmeans). people pursuit unremittingly. interpreting. embraced with never expected possibilities. PLS finds the latent variables from the measured data by capturing the largest variance in the data and achieves the maximum correlation between the predictor ( ) variables and response ( ) variables. First proposed by Wold [1]. only with a limited number of observations available. monitoring & Control. PCA is a multivariate statistical technique that can extract the essential features from a data set by reducing its dimensionality without compromising any valuable information of it.2. 1. Data collection has become a mature technology over the years and the analysis of process historical database has become an active area of research. 
Examples include Artificial Neural Networks (ANN).1 Chemometric Techniques Chemometrics is the application of mathematical and statistical methods for handling. Partial Least Squares (PLS) and Auto Regression Models etc. efficient data based modeling will enable the future era to exploit the huge database available.
Very often there are a large number of process variables are to be measured thus giving rise to a high dimensional data base characterizing the process of interest. ARMAX (Auto regressive moving average with exogenous inputs) ARIMA (Auto regressive integrated moving average) etc. have to be determined in order to detect the maximum effect of inputs having lag ranging from 0 . To extract meaningful information from such a data base. Statistical performance monitoring of a process detects process faults or abnormal situations. hidden danger in the process followed by the diagnosis of the fault. hence. finds the latent variables from the measured data by capturing the largest variance in the data and achieves the maximum correlation between the predictor X variables and response Y variables. 1. 4 . Identification and control of chemical process is a challenging task because of their multivariate.INTRODUCTION TO STATISTICAL PROCESS MONITORING AND CONTROL choice. on specific outputs. PLS reduces the dimensionality of the measured data..4 MULTIVARIATE STATISTICAL AND NEURO STATISTICAL CONTROLLER Since the last decade. The diagnosis of abnormal plant operation can be greatly facilitated if periods of similar plant performance can be located in the historical database. Stationary time series data consisting of process input and output may be correlated by models like ARX (Auto regressive model with exogenous inputs). design & development of data based model control has taken its momentum.. ARMA (Auto regressive moving average). 2. Principal component decomposition of the data into the latent subspaces provides us the opportunity to compress the data without losing meaningful information of it. highly correlated and nonlinear nature. This very trend owes an explanation. This kind of identification of process dynamics requires a priori selection of model order and handle a lot of parameters. Prior to apply those models for time series identification. 
Otherwise those high dimensional dataset maybe seen through a smaller window by projecting the data along some selected fewer dimensions of maximum variability. has to be removed from the data and secondly. Partial least squares technique with inner PCA core offers an alternative for identification of process dynamics. New methodologies based on clustering time series data and moving window based pattern matching can be used for detection of faulty conditions as well as differentiating among various normal operating conditions. firstly nonstationarity. meticulous preprocessing of data is mandatory. the crosscorrelation coefficients. 3…. if any. 1. encourages empiricism. Originally process static data were used for the aforesaid PCA decomposition.
Any Physical model based on first principle is designed on the basis of certain heuristics. ARMA. The broad generality of such model is at low ebb when the process is nonlinear and complex. 1993))[67]. The inverse dynamics of the latent variable based process was identified as the direct inverse neural controller (DINN) using the historical database of latent variables. the Partial least squares (PLS) controllers offer the opportunity to be designed as a series of SISO controllers (Qin and McAvoy (1992. Development of transfer function using such a model resulted in a poor controller performance irrespective of the controllers chosen. inputoutput pairings are automatic. ARMAX models are less desirable for their extreme dependency on a large numbers of model parameters. PLS controllers are well documented. In view of this. By combining the PLS with inner ARX model structure. Because of the diagonal structure of the dynamic part of the PLS model. hence the process dynamics in latent subspace could not be well identified by linear or quadratic relationships. The disturbance process in lower subspace was also identified using NN. In view of this. The 5 . neural identification of process dynamics in latent subspace or NNPLS based process identification deserves attention. data driven PLS and NNPLS identified processes (transfer function) are logical alternative to design multivariable controllers. Series of SISO controllers designed on the basis of the dynamic models identified into latent subspaces and embedded in the PLS framework are used to control the process. 1. To get rid of this situation. Development of PLS and NNPLS controllers and their performance evaluation (both in servo and regulator mode) for some of the selected benchmark processes is one of the scopes of the present work. the inner relationship between (I) and (I) scores. data based models is a logical choice. Discrete time domain (z) transfer functions developed using ARX. 
INTRODUCTION TO STATISTICAL PROCESS MONITORING AND CONTROL

In PLS based process dynamics, discrete input-output time series data (X-Y) were used for this purpose, which logically built up the framework for PLS based process controllers.

1.5 PLANT WIDE CONTROL

Most industrial processes have multiple units which interact with each other. The goal of any process design is to minimize capital costs while operating with optimum utilization of material and energy. Recycle streams in process plants have been a source of major challenge for control system design. The subject of plant wide control deals with these inter-unit interactions through the proper selection of manipulated and measured variables and the selection of proper control strategies. In view of this, over the last decade a focused R&D activity on plant wide control has been taken up by several researchers [8-11]. SIMULINK can be used as the platform for plant wide process simulation. For very complex and nonlinear processes, artificial neural network (ANN) based controllers can be used to control the plant wide process of interest, apart from classical controllers. Expert systems or knowledge based systems can be developed by integration of process knowledge derived from the process database.

1.6 OBJECTIVE

Data based process monitoring, identification of process dynamics and control, as well as plant wide process control, seem to be unequivocally superior to physical model based approaches for multivariable processes; till date there is no reference on NNPLS controllers in the open literature, though PLS and NNPLS based process identification is documented. It is worthy of mention that the application of multivariate statistical process monitoring and control deserves extensive research before its widespread use and commercialization. In the context of the aforesaid discussion, the objectives of the present dissertation are as follows:

• Multivariable process monitoring using MVSPC techniques
• Process identification in latent subspaces and multivariate statistical process control for benchmark multivariable processes
• Plant wide process control

1.7 ORGANIZATION OF THE THESIS

The first chapter renders an overview of multivariate processes, chemometric techniques and their use in process monitoring; this chapter also presents the objectives of the present work with the thesis outline. The second chapter emphasizes the different chemometric techniques used, with mention of significant previous contributions in this field. In chapter three, statistical process monitoring algorithms with case studies are presented. Applications of statistical/neuro-statistical process identification and control, as well as plant wide control, are the excerpts of the fourth chapter. In an ending note, the fifth chapter concludes with recommendations for future research initiatives.
REFERENCE

1. Wold, H., "Estimation of principal components and related models by iterative least squares," In Multivariate Analysis II, Krishnaiah, P. R., Ed., Academic Press, New York, 1966: 391-420.
2. Geladi, P., Kowalski, B. R., "Partial least-squares regression: A tutorial," Anal. Chim. Acta, 1986, 185: 1-17.
3. Qin, S. J., McAvoy, T. J., "Nonlinear PLS modeling using neural networks," Comput. Chem. Engr., 1992, 16(4): 379-391.
4. Qin, S. J., "A statistical perspective of neural networks for process modelling and control," In Proceedings of the 1993 International Symposium on Intelligent Control, Chicago, IL, 1993: 559-604.
5. Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., "The KDD process for extracting useful knowledge from volumes of data," Commun. ACM, 1996, 39: 27-34.
6. Apté, C., "Data mining: an industrial research perspective," IEEE Comput. Sci. Engr., 1997, 4(2): 69.
7. Agrawal, R., Stolorz, P., Piatetsky-Shapiro, G., Eds., Proceedings of the 4th International Conference on Knowledge Discovery and Data Mining, AAAI Press, Menlo Park, CA, 1998.
8. Downs, J. J., Vogel, E. F., "A plant-wide industrial process control problem," Comput. Chem. Engr., 1993, 17: 245-255.
9. Luyben, W. L., Tyreus, B. D., Luyben, M. L., "Plantwide Process Control," McGraw Hill, New York, 1999.
10. Douglas, J. M., "Conceptual Design of Chemical Processes," McGraw Hill, New York, 1988.
11. Stephanopoulos, G., "Chemical Process Control," Prentice Hall, Englewood Cliffs, NJ, 1984.
DATA DRIVEN MULTIVARIATE STATISTICAL TECHNIQUES AND PREVIOUS WORK .
CHEMOMETRIC TECHNIQUES AND PREVIOUS WORK

2.1 INTRODUCTION

Chemometrics is the science of relating measurements made on a chemical system to the state of the system via the application of mathematical or statistical methods. Chemometric techniques offer new approaches for process monitoring and control. Data based modeling is one of the most recently added crafts in process identification, monitoring and control; hence it is appropriate to describe the state of the art of those advancements. The data based approaches, namely supervised learning, unsupervised learning and multivariate statistical techniques, fall under this category. PCA belongs to the unsupervised and PLS to the supervised category of chemometric models; these have been the mathematical foundation of the present work. A parsimonious presentation of the statistical rationale is included in the chapter.

2.2 THEORETICAL POSTULATION ON PCA

PCA is a MVSPC technique used for the purpose of data compression without losing any valuable information. Principal components (PCs) are a transformed set of coordinates orthogonal to each other. The first PC is the direction of largest variation in the data set.
PCA is a mathematical transform used to find correlations and explain variance in a data set. Consider the following data matrix:

X = [x_11 … x_1m; … ; x_n1 … x_nm]   (2.1)

where each row of X represents one measurement and the number of columns m is equal to the length of the measurement sequence, or the number of features. The goal is to map each raw data vector x onto a vector z. In general, the vector x can be represented as a linear combination of a set of m orthonormal vectors u_i:

x = Σ_{i=1}^{m} z_i u_i   (2.2)

where the coefficients can be found from z_i = u_i^T x. This corresponds to a rotation of the coordinate system from the original coordinates to a new set of coordinates given by z. To reduce the dimensions of the data set, only a subset (k < m) of the basis vectors is preserved. The remaining coefficients are replaced by constants b_i, and each vector x is then approximated as

x̂ = Σ_{i=1}^{k} z_i u_i + Σ_{i=k+1}^{m} b_i u_i   (2.3)

The basis vectors u_i should be chosen such that the best approximation of the original vector is obtained on average; the reduction of dimensionality from m to k causes an approximation error. The sum of squares of the errors over the whole data set is minimized if the vectors u_i selected are those corresponding to the largest eigenvalues of the covariance matrix of the data set; these vectors are called the principal components, and they are equal to the eigenvectors of the covariance matrix. In the present work, the PCA technique was applied on the autoscaled data matrix to determine the principal eigenvectors, the associated eigenvalues, and the scores (the transformed data along the principal components). As a result of the PCA transformation, the original data set is represented in fewer dimensions (typically 2-3) and the measurements can be plotted in the same coordinate system. These plots show the relation between different observations or experiments. The drawbacks are that the new latent variables often have no physical meaning and the user has little control over the possible loss of information. Grouping of data points in such biplots suggests common properties, which can be used for classification.
The projection of the original data on the PCs produces the score data, or transformed data, as a linear combination of those fewer mutually orthogonal dimensions. The covariance matrix C = (1/n) X^T X of the autoscaled data and its eigenvalues λ were calculated. Its eigenvectors form an orthonormal basis, and the original data set can be represented in the new basis using the relation

z = U^T x,  where U = [u_1, u_2, …, u_m]
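The transformation described above can be sketched as follows. This is an illustrative Python/NumPy sketch only, not the thesis toolchain; the function name `pca` is an arbitrary choice of this sketch.

```python
import numpy as np

def pca(X, k):
    """Eigendecomposition of the covariance of the autoscaled data: returns
    the top-k loadings (eigenvectors), the scores z = U^T x stacked row-wise,
    and all eigenvalues sorted in descending order."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # autoscale columns
    C = np.cov(Xs, rowvar=False)                       # m x m covariance matrix
    vals, vecs = np.linalg.eigh(C)                     # ascending eigenvalues
    order = np.argsort(vals)[::-1]                     # sort descending
    vals, vecs = vals[order], vecs[:, order]
    P = vecs[:, :k]                                    # principal directions kept
    T = Xs @ P                                         # score matrix (n x k)
    return P, T, vals
```

The columns of `P` associated with small eigenvalues are the low-contribution directions that the text says should be neglected.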
After this transformation, a new data matrix D of dimension n × k was obtained:

D = [z_11 … z_1k; … ; z_n1 … z_nk]   (2.4)

Let the first k PCs be selected as new features, neglecting the remaining (m − k) principal components. The number of PCs to be included should be high enough to ensure good separation between the classes. This is done by selecting the highest eigenvalues λ, since they correspond to the principal components of highest significance; principal components with low contribution (low values of λ) should be neglected. In this way, a new data matrix of reduced dimension can be constructed with the help of the eigenvalues of the covariance matrix C.

2.3 SIMILARITY

Similarity measures among the datasets pertaining to various operating conditions of a process provide the opportunity to cluster them into various groups, and hence to classify them. Fault detection and diagnosis (FDD) comfortably relies on this principle, which is conducive to safe plant operation. Apart from the Euclidean and Mahalanobis distances, some new criteria have recently been proposed to determine the similarity or dissimilarity among process historical databases. In the present work, PCA based similarity was used for process monitoring purposes.

2.3.1 PCA Based Similarity

The PCA similarity factor was developed by choosing the largest k principal components of each multivariate time series dataset that describe at least 95% of the variance in each dataset. The PCA similarity factor between two datasets is defined by equation (2.5):

S_PCA = (1/k) Σ_{i=1}^{k} Σ_{j=1}^{k} cos² θ_ij   (2.5)
where k is the number of selected principal components in both datasets and θ_ij is the angle between the i-th principal component of X_1 and the j-th principal component of X_2. When the first two principal components explain 95% of the variance in the datasets, S_PCA may not capture the degree of similarity between the two datasets, because it weights all PCs equally. Obviously, S_PCA has to be modified to weight each PC by its explained variance. The modified S_PCA^λ is defined as

S_PCA^λ = Σ_{i=1}^{k} Σ_{j=1}^{k} λ_i^(1) λ_j^(2) cos² θ_ij / Σ_{i=1}^{k} Σ_{j=1}^{k} λ_i^(1) λ_j^(2)   (2.6)

where λ_i^(1) and λ_j^(2) are the eigenvalues of the first and second datasets, respectively.

2.3.2 Distance Based Similarity

In addition to the above similarity measure, a distance similarity factor can be used to cluster multivariate time series data. The distance similarity factor compares two datasets that may have similar spatial orientation; it finds its worth when the process variables pertaining to different operating conditions have similar principal components. The distance similarity factor is defined as

S_dist = (2/√(2π)) ∫_Φ^∞ e^(−z²/2) dz = 2 × [1 − (1/√(2π)) ∫_{−∞}^{Φ} e^(−z²/2) dz]   (2.7)

where Φ = √[(x̄_2 − x̄_1) Σ_1*^(−1) (x̄_2 − x̄_1)^T], x̄_1 and x̄_2 are the sample mean row vectors, dataset X_1 is assumed to be the reference dataset, Σ_1 is the covariance matrix of dataset X_1 and Σ_1*^(−1) is its pseudo-inverse. Because Φ ≥ 0, a one-sided Gaussian distribution is used, and the form of equation (2.7) normalizes S_dist between zero and one. The error function needed for the integration can be calculated using any software or standard error function tables.
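As an illustrative sketch only (Python/NumPy; the function names and the SVD route to the principal components are choices of this sketch, not of the thesis, and the unweighted S_PCA of eq. 2.5 stands in for S_PCA^λ in the combined factor), the similarity factors of equations (2.5), (2.7) and (2.8) can be computed as follows:

```python
import numpy as np
from math import erf, sqrt

def _loadings(X, k):
    """Top-k principal directions (columns) of the autoscaled dataset X."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Vt[:k].T

def pca_similarity(X1, X2, k):
    """S_PCA of eq. (2.5): mean squared cosine of the angles between the
    first k principal components of the two datasets."""
    cos = _loadings(X1, k).T @ _loadings(X2, k)     # cos(theta_ij) matrix
    return float(np.sum(cos ** 2)) / k

def distance_similarity(X1, X2):
    """S_dist of eq. (2.7): twice the one-sided Gaussian tail probability of
    the Mahalanobis-type distance between the dataset means, using the
    pseudo-inverse covariance of the reference dataset X1."""
    d = X2.mean(axis=0) - X1.mean(axis=0)
    phi = sqrt(d @ np.linalg.pinv(np.cov(X1, rowvar=False)) @ d)
    return 1.0 - erf(phi / sqrt(2.0))               # equals 2*[1 - CDF(phi)]

def combined_similarity(X1, X2, k, a1=0.67, a2=0.33):
    """Weighted combined factor of eq. (2.8); weights must sum to one."""
    return a1 * pca_similarity(X1, X2, k) + a2 * distance_similarity(X1, X2)
```

Identical datasets give both factors a value of one; datasets with widely separated means drive S_dist toward zero.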
2.3.3 Combined Similarity Factor

The combined similarity factor (SF) combines S_PCA^λ and S_dist using a weighted average of the two quantities and is used for clustering multivariate time series data. The combined similarity factor is defined as

SF = α_1 S_PCA^λ + α_2 S_dist   (2.8)

The selection of α_1 and α_2 is up to the user, provided their sum equals one. In this work, the values of α_1 and α_2 were selected as 0.67 and 0.33, respectively.

2.4 THEORETICAL POSTULATION ON PLS

Projection to latent structures, or partial least squares (PLS), is a multivariable statistical regression method based on projecting the information in a high-dimensional data space down onto a low-dimensional one defined by some latent variables. It selects the latent variables so that the variation in the predictor data which is most predictive of the response data is captured. PLS deals with noisy and highly correlated data, quite often with only a limited number of observations available, and has already been successfully applied in diverse fields including process monitoring, quality control, and identification of process dynamics and control. A PLS model consists of outer relations (the X and Y data are expressed in terms of their respective scores) and inner relations that link the latent subspaces. When dealing with nonlinear systems, this approach assumes that the underlying nonlinear relationship between the predictor data (X) and response data (Y) can be approximated by quadratic PLS (QPLS) or neural network based PLS (NNPLS) while retaining the outer mapping framework of the linear PLS algorithm.

2.4.1 Linear PLS

The X and Y matrices are autoscaled in the following way before they are processed by the PLS algorithm:

X = X S_X^(−1) and Y = Y S_Y^(−1)   (2.9)
. which is called the inner model.....The first set of loading vectors J# and J# data on J# resulted in the first set of score vectors and # and # Where To determine the dominant direction of projection of covariance within and data..11) data while Wand Wrepresent the are described.. and ..... representing a linear regression between The calculation of first two dimensions is shown in Fig.14) ' F1 = Y − u1 q1 = Y − t1b1 q1 (2. If all the components of to and & become zero. To get 13 and ... For a given tolerance of residual....CHEMOMETRIC TECHNIQUES AND PREVIOUS WORK s y1 s x1 0 and SY = Where S X = 0 sx2 0 The S X and SY are scaled matrices. the errors Y = u q + u2 q2 + .10) (2.. In practice. .12) can now be expressed as: (2. The inner model that relates & .1..... the maximization of represent the dominant direction obtained by maximization of covariance within Projection of data on J# and = # I# . The matrices # and #: # can now be related through their respective scores... The residuals are calculated at this stage is given by the following equations. E1 = X − t1 p1 ' ' (2. + tn pn + E = TP T + E T T T (2... the number of PLS dimensions is calculated by percentage of variance explained and cross validation.. + un qn + F = UQT + F T 1 1 T T Where and represents the matrices of scores of and loading matrices for and ..15) The procedure for determining the scores and loading vectors is continued by using the newly computed residuals till they are small enough or the number of PLS dimensions required are exceeded. = V is the regression matrix. 0 sy2 The outer relationship for the input matrix and output matrix with predictor variables can be written as X = t1 p1 + t 2 p2 + . hence the establishment of outer relation.13) is used as a criterion... The response = VW + and is the relation between the scores (2. 2.. the number of principal components can be much smaller than the original variable dimension..
The weight matrices are also iteratively found; this is the so-called NIPALS (nonlinear iterative PLS) algorithm. The deduction of rank-one components from the residuals keeps continuing until the given tolerance is satisfied, and the irrelevant directions originating from noise and redundancy are left in E and F. The variance captured by the i-th score is a measure of the X space explained by the i-th PLS dimension. The multivariate regression problem is translated into a series of univariate regression problems with the application of PLS.

Fig. 2.1 Standard linear PLS algorithm.

2.4.2 Dynamic PLS

For the incorporation of a linear dynamic relationship of time series data in the PLS framework, the decomposition of the X block is given by equation (2.10), while the dynamic analogue of equation (2.11) is as follows:

Y = G_1(t_1) q_1^T + G_2(t_2) q_2^T + ⋯ + G_n(t_n) q_n^T + F   (2.16)

where G_i denotes the linear dynamic model identified in the i-th latent subspace, by an ARX model as well as by an FIR model, and G is the diagonal matrix comprising the dynamic elements identified at each of the n latent subspaces. Equation
(2.17) represents the linear second-order ARX structure, which relates the input and output signals of a time series:

y(k) = a_1 y(k−1) + a_2 y(k−2) + b_1 u(k−1) + b_2 u(k−2)   (2.17)

where y(k) is the output and u(k) the input at the k-th instant. Prior to dynamic modeling, the order of the model should be selected. It is difficult to choose the order of the model; autocorrelation signals render a good indication about the order, which determines how many past input and past output values are taken in the input matrix for the FIR and ARX models. The ARX based input matrix used in the regression analysis is as follows:

X_ARX = {U_(k−1), U_(k−2), T_(k−1), T_(k−2)}   (2.18)

A Finite Impulse Response (FIR) model was also tested for the inner model development. The FIR based input matrix is as follows:

X_FIR = {T_(k−1), T_(k−2), T_(k−3), T_(k−4)}   (2.19)

The model parameters for both the ARX and FIR models were estimated by the linear least squares technique. The identified process transfer function in the discrete time domain, consistent with the ARX structure of equation (2.17), is

G(q^(−1)) = (b_1 q^(−1) + b_2 q^(−2)) / (1 − a_1 q^(−1) − a_2 q^(−2))   (2.20)

The parameters of the ARX based inner dynamic models relating the scores were estimated by least squares regression. The PLS predicted output Ŷ was generated by post-compensating the inner dynamic model output matrix with the loading matrix Q; this post-compensation represents the PLS based identification of process dynamics. Fig. 2.2 provides the schematic of the PLS based dynamics.

Figure 2.2 Schematic of PLS based dynamics
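The least squares estimation of the ARX inner-model parameters of equation (2.17) can be sketched as follows. This is an illustrative sketch only; the function name `fit_arx_inner` is an assumption of this sketch, with the latent output scores passed as `y` and the latent input scores as `u`.

```python
import numpy as np

def fit_arx_inner(y, u, na=2, nb=2):
    """Linear least-squares fit of the second-order ARX inner model of
    eq. (2.17): y(k) = a1*y(k-1) + a2*y(k-2) + b1*u(k-1) + b2*u(k-2)."""
    n = max(na, nb)
    Phi = np.array([[*(y[k - i] for i in range(1, na + 1)),
                     *(u[k - i] for i in range(1, nb + 1))]
                    for k in range(n, len(y))])        # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta                                       # [a1, a2, b1, b2]
```

On noiseless data generated by a stable ARX system, the estimate recovers the true parameters; with noise, it gives the usual least squares solution.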
2.5 PREVIOUS WORK

Data mining of process historical databases has received attention in the computer science literature; however, problems involving time series identification and control have been addressed only recently. It is essential to extract the required information regarding abnormality in the data to make a detection and diagnosis of abnormality in the process. The information acquired from the clusters is helpful in decision making, fault detection and process monitoring. The diagnosis of abnormal plant operation can be greatly facilitated if periods of similar plant performance can be located in the historical database. Traditional clustering techniques were extensively used for grouping similar objects having analogous characteristics into the same cluster. In juxtaposition, numerous clustering techniques are available for clustering of univariate time series data [5-7]; very few researchers have reported the clustering of dynamic and high-dimensional multivariate time series data based on similarity measures. Wang and McGreavy (1998) used clustering methods to classify abnormal behavior of a refinery fluid catalytic cracking process [8]. Ng and Huang (1999) also used a clustering approach to classify different types of stars based on their light curves [9]. Kano et al. (2000) implemented various PCA based statistical process monitoring methods using simulated data obtained from the Tennessee Eastman process [3]. Wise and Gallagher (1996) reviewed some of the chemometric techniques and their application to chemical process monitoring and dynamic process modeling [4]. Sudjianto & Wasserman (1996) and Trouve & Yu (2000) successfully applied principal component analysis to extract features from large datasets [1, 2]; such feature extraction serves as dimensionality reduction of the datasets [11]. Huang et al. (2000) used PCA to cluster multivariate time series data by splitting large clusters into small clusters based on the percentage of variance explained by principal component analysis; this method is restrictive if the number of principal components is not known a priori, and because predetermined principal components are inadequate for some operating conditions. Singhal & Seborg (2005) used PCA and Mahalanobis distance similarity measures for locating similar operating conditions in large multivariate databases [12]. Kavitha & Punithavalli (2010) claimed that all traditional clustering or unsupervised algorithms are inappropriate for real time data [13].
Johannesmeyer and Seborg (1999) developed an efficient technique for locating similar records in the historical database using PCA similarity factors [10]. Singhal and Seborg (2002) proposed a novel methodology based on PCA and distance similarity for pattern
matching problems; at a given time period of interest, for a multivariate time series (template) data, a similar pattern can be located in the historical database using the proposed pattern matching algorithm [14].

Since the last five years or so, the design and development of data based monitoring and control has been a focused area for applied research. Partial least squares is one of the important data based multivariable statistical process control (MVSPC) techniques. It finds the latent variables (through principal component decomposition, PCA) from the measured data by capturing the largest variance in the data and achieves the maximum correlation between the predictor (X) variables and response (Y) variables. When dealing with nonlinear systems, the underlying nonlinear relationship between the predictor variables (X) and response variables (Y) can be approximated by quadratic PLS (QPLS) or splines; sometimes this may not function well when the nonlinearities cannot be described by a quadratic relationship. Qin and McAvoy (1992, 1993) suggested a new approach replacing the inner model by a neural network model; this NNPLS approach employs the neural network as the inner model while keeping the outer mapping framework of the linear PLS algorithm. Focused R&D activities were subsequently taken up by several other researchers such as Wilson et al. (1997), Holcomb & Morari (1992), Malthouse et al. (1997), Lee et al. (2006) and Zhao et al. (2006) [15-21]. Kaspar and Ray (1993) demonstrated their approach to the identification and control problem using linear PLS models [22]. Lakshminarayanan et al. (1997) proposed the ARX/Hammerstein model as the modified PLS inner relation and used it successfully in identifying dynamic models and in the proposition of PLS based feedforward and feedback controllers [23]. It might not be inappropriate here to mention some of the efforts addressing diversified fields of application. Akamatsu et al. (2000) proposed data based control of an industrial tubular reactor [24]. Christian Rosen (2001) applied a chemometric approach to process monitoring and control of a wastewater treatment plant [25]. Cerrillo and MacGregor (2005) used latent variable models (LVM) for controlling batch processes [26]. Magda and Ordonez (2008) implemented multivariate statistical process control and case based reasoning for situation assessment of sequencing batch reactors [27]. Kano et al. (2008) implemented dynamic partial least squares regression in developing an inferential control system for distillation compositions [28]. Al-Ghazzawi and Lennox (2009) developed an MPC condition monitoring tool based on multivariate statistical process control (MSPC) techniques [29]. Laurí et al. (2010) implemented data driven latent variable model based predictive control (LVMPC) for a 2×2 distillation process, claiming their proposition
outperformed the conventional data driven MPC in terms of reliable multistep-ahead predictions with reduced computational complexity and reference tracking [30]. They also chronicled how latent variable models (LVM) have been implemented in recent years for various purposes.

Multiloop (decentralized) conventional control systems (especially PID controllers) with decouplers are often used to control interacting multiple input multiple output processes because of their ease of understandability, apart from MPC (model predictive control) and IMC based multivariable controllers. Damarla and Kundu (2010) decoupled a multivariable bioreactor process and used the inverse dynamics of the decoupled process to create a series of neural network based SISO (NN-SISO) controllers, which were tuned independently without influencing the performance of the other loops [31]. For multivariable processes, partial least squares (PLS) controllers offer the opportunity to be designed as a series of SISO controllers instead of using multivariable controllers. Till date there is no reference on NNPLS controllers in the open literature, though PLS controllers are well documented. In this context, the present work aims towards the development of NNPLS controllers; development of the multivariate statistical controller is one of the objectives of the present dissertation.

Over the last decade, a focused R&D activity on plant wide control has been taken up by several researchers [32-35]. In view of this, the present study aims at one incremental leap in this regard, by incorporating neural controllers beside the classical controllers for plant wide process control.
H. TX. Chem.. FL. S. Johannesmeyer. Samara. “Automatic subspace clustering of high dimensional data for data mining applications. Tech.. M. 2000.. B. E. Trouve. 1973. J.. R. T. 2000.... “Fault isolation in nonlinear systems with structured partial principal component analysis and clustering analysis. Agrawal. Gertler. Y. D. “Cluster analysis for applications.. Sudjianto. 2000: 110–114. Soc... 78: 569–577. “Clustering multivariate time series data. Huang. “Pattern matching in historical batch data using PCA. Z. Chem. WA. Ohno.. Raghavan. 3. 1999. McGreavy.. Strauss. D. of the SPIE—The Intl. 1998. 11. 7. 6: 329–348. Huang. Process Contr.” In Proc. Allgood.” IJCSIS. Seborg.CHEMOMETRIC TECHNIQUES AND PREVIOUS WORK REFERENCES 1.” Can. ACM SIGMOD Intl. G. Kano. M. Yu. Upadhyaya. 5th Intl. S. Conf. Rec. B. 8. 10. Punithavalli. Singhal. C. on Management of Data. M.” J. “Clustering time series data streamA literature survey..” 19 . Seattle... B. “Unsupervised clustering trees by nonlinear principal component analysis. G.Software Tech. Wasserman.. I.” Ind. “A modelbased highfrequency matched filter arcing diagnostic system based on principal component analysis. for Optical Engr. K.. Nagao. McAvoy. E... 4. Dallas. Inform. 37: 2215–2222. Y. 6. “DataMining massive timeseries astronomical data: challenges and solutions”. A. Orlando. 1999. Gehrke. “Automatic classification for mining process operational data. R. M.. O. M. and Image Analysis: New Info.. Wise. Gunopulos.. Conf. Chem. K. 24: 175181. 5. J. “Comparison of statistical process monitoring methods: Application to the Eastman challenge problem. Hashimoto.” J. 9. D. Anderberg. Seborg. “A nonlinear extension of principal component analysis for clustering and spatial differentiation. 41: 545–556. 4055: 430–440...” New York: Academic Press. Kavitha.” In Proc. “The process chemometrics approach to process monitoring and fault detection. 13. A. Bakshi.” In Proc. X. 
“Abnormal situation analysis using pattern recognition techniques. Seborg. R. D. on Pat. E. Res. Singhal.” Comput..Annual Meeting. Gallagher.....” IIE Trans. 2010. Engr. 1996. P. 1996. Eng... 1998: 94–105. Chemometr. A. N. 2. J.” AIChE. M. 2005. C. 19: 427438. Eng. 2000. B. 28: 1023–1028. Hasebe. 12. Russia. V.. Ng. 8: 289294. Wang. R. J. 14. Z... A.
Nandakumar. Tamhane.” Doctoral Thesis. 21.. H. 2008. Takada.. M. 8 (7): 783790 25.. T. Qin. Chem. M.” Process Biochem. R. 1993: 559604. Irwin. 2006. Satou. Xu.” Trans. Chem. Mag. 23. Syst. Y. H. Y. 15: 651–663. process control.. Park.. 16. J. 16(4): 379391. T. 27. Chem.. J.1992.. 16(4): 393411. 22: 53–63.. 2006.. MacGregor.. Qin. G. M. F. H. W. H. Sci. “Nonlinear PLS using radial basis functions. S. Akamatsu. 1997. C. L. Nakagawa. H.. Woo. L. Malthouse.. “A chemometric approach to process monitoring and control with applications to wastewater treatment operation. 1997. Lund University..S. Ordonez. T. 1993.” J. 24.” Chem.. 45: 38433852.. Zhang. W. E. C. 41: 20502057. 19(4): 211220. Engr.” Comput. Lee.. Sirish. S. K. S. “Multivariate statistical process control and case based reasoning for situation assessment of sequencing batch reactors. J. L.. J. Chem. “Latent variable MPC for trajectory tracking in batch processes. H.” In Proceedings of the 1993 Internation Symposium on Intelligent Control. Sweden. “Nonlinear PLS modeling using neural network. 21 (8): 875890. K. Mah.. Inst. Morari. Engr.1992... Eng. “Databased control of an industrial tubular reactor. Magda. “PLS/neural networks. M. S.. Kim. Rosen. Kaspar. and 20 . H.. W. “Dynamic modeling for process control.. M. Holcomb.” Ind. Engr. Pract. Control.” Control Eng. Wilson. Res. S. D Thesis. 2005. Z.. Process Contr. Chicago. 22.1997. “Nonlinear projection to latent structures method and its applications. Zhao. A.” Comput. “Databased process monitoring. Lee. Girona.. “Nonlinear dynamic partial least squares modeling of a fullscale biological wastewater treatment plant. D. J. Department of Industrial Electrical Engineering and Automation.. G.. 17. R. IL. J.. 15. Spain. Xiong. “Nonlinear partial least squares”. H...” Ph.CHEMOMETRIC TECHNIQUES AND PREVIOUS WORK IEEE Contr. Cerrillo. 19. S. Shah. 2002. M. “A statistical perspective of neural networks for process modelling and control. 28.... Ray..Manako.” AIChE J. 20. 
Y. D. S. 43: 23072323.. 48 (20): 34473467. “Modeling and control of multivariable processes: The dynamic projection to latent structures approach. C... R. McAvoy. Eng. Comput.. Kano. Lakshminarayanan. Lightbody. 2001. J. 18. 2000.Lakshminarayanan. Meas. S. J. 26.
D. Vogel. New York. G. Martínez.” J.” McGraw Hill. Process Contr. Stephanopoulos. J.. 1995.. Douglas. 34. Luyben. 30..” McGraw Hill. 35.. 32: 12–24.. Englewood Cliffs. 1999. 21 . M. 1 (2): 165172. Laurí. “Conceptual design of chemical processes. Tyreus. A. A. Engr. B.. J.” Comput. 29. W. 2009... Chem. “Design of multivariable controllers using a classical approach. “A plantwide industrial process control problem. Luyben. Process Contr. M. 1988. AlGhazzawi.”IJCEA. L.CHEMOMETRIC TECHNIQUES AND PREVIOUS WORK quality improvement: recent developments and applications in steel industry.” J. E. 2008. J. Engr.. New York. F. 31. 2010...“Datadriven latentvariable modelbased predictive control for continuous processes. “Chemical process control. 17: 245255.” Prentice Hall. M. 33.. B. J. J. M. Damarla. L.. K. D.” Comput. Downs. NJ. 2010. 20 (10): 12071219. S. Chem. “Plant wide process control. Rossiter.. Sanchis. “Model predictive control monitoring using multivariate statistics. 32.. 1984.. Lennox. Kundu. 19: 314–327.
MULTIVARIATE STATISTICAL PROCESS MONITORING .
3.1 INTRODUCTION

Quality and safety are two important aspects of any production process. With the invention of multisensor arrays, mature data capture technology, and advancements in data collection, compression and storage, data driven process monitoring, including product quality monitoring and fault detection and diagnosis, is getting due attention and widespread acceptance. In view of this, a MVSPC method based on clustering time series data and a moving window based pattern matching technique was adapted for successful process monitoring. Biochemical reactor, drum boiler, continuous stirred tank with cooling jacket and Tennessee Eastman challenge processes were taken up to implement the proposed monitoring techniques. Instead of using first-hand plant data, the above processes were modeled using first principles, and the processes were perturbed at industrially relevant operating conditions, including faulty ones, to create vector time series databases.

3.2 CLUSTERING TIME SERIES DATA: APPLICATION IN PROCESS MONITORING

Clustering time series data depends on measures of similarity. The similarity factor depends on PCA, especially on the angles between the principal components in the latent subspaces. PCA was successfully applied by Kourti & MacGregor (1996) and Martin
& Morris (1996) to cluster multivariate time series data [1, 2]. A modified K-means clustering algorithm, using similarity measures as a convergence criterion, has been used for clustering datasets pertaining to different operating conditions, including faulty ones. Cluster purity and efficiency, the two indices, were determined as performance indices of the proposed method.

3.2.1 K-means Clustering Using Similarity Factors

Clustering is the more primitive technique in that no a priori assumptions are made regarding the group structures; grouping of the data can be made on the basis of similarities or distances (dissimilarities). The number of clusters K can be pre-specified or can be determined iteratively as a part of the clustering procedure. K-means clustering has the specific advantage of not requiring the distance matrix needed in hierarchical clustering, and hence ensures faster computation. In general, K-means clustering proceeds in three steps, which are as follows:

1. Partition of the items into K initial clusters.
2. Assigning an item to the cluster whose centroid is nearest (the distance is usually Euclidean), and recalculation of the centroid for the cluster receiving the new item and for the cluster losing that item.
3. Repeating step 2 until no more reassignments take place, i.e. until stable cluster tags are available for all the items.

Different kinds of similarity, including PCA similarity, weighted PCA similarity and distance based similarity, have already been discussed in Chapter 2. Monitoring time series data pertaining to various operating conditions depends upon successful discrimination followed by classification. Discrimination is concerned with separating distinct sets of objects (or observations) on a one-time basis in order to investigate observed differences when causal relationships are not well understood. The operational objective of classification is to allocate new objects (observations) to predefined groups based on a few well-defined rules evolved out of discrimination analysis of allied groups of observations. Discrimination and classification among time series data can be done using multivariate statistics; in these procedures, an underlying probability model must be assumed in order to calculate the posterior probability upon which the classification decision is made. One major limitation of the statistical methods is that they work well only when the underlying assumptions are satisfied. The time series data
3.2.2 Selection of the Optimum Number of Clusters

Selection of the number of clusters is crucial in the K-means clustering algorithm. Several methods have been developed to identify the optimum number of clusters in multivariate time series data, such as the Akaike Information Criterion (AIC) and the Schwartz Information Criterion (SIC). Rissanen (1978) proposed model building and model order selection to calculate the optimum number of clusters based on model complexity [3]; a large number of resulting clusters indicates high model complexity, and this method penalizes a more complex model more than a less complex one. Later, Smyth (1996) introduced cross validation of the model to identify the optimum number of clusters in the data: the data are split into two or more parts, one part being used for clustering while the other part is used for cross validation [4]. Singhal & Seborg (2005) proposed new methodologies for finding the optimum number of clusters [5]. However, preliminary results obtained using these methods were not promising. In the present work, the efficiently designed modified K-means clustering algorithm ensures the evolution of the optimum number of clusters.

The datasets pertaining to various operating conditions were discriminated and classified using the following similarity-based K-means algorithm.

Given: Q datasets {x_1, x_2, ..., x_q, ..., x_Q} to be clustered into K clusters.

1. Computation of the aggregate dataset X_i (i = 1, 2, ..., K) for each of the K clusters as

   X_i = [(x_1^(i))^T, ..., (x_j^(i))^T, ..., (x_Qi^(i))^T]^T        (3.1)

   where x_j^(i) denotes the j-th dataset in the i-th cluster and Q_i is the number of datasets in the i-th cluster; note that the Q_i sum to Q. The aggregate dataset X_i serves as the reference dataset in computing the similarity.

2. Calculation of the dissimilarity between dataset x_q and each of the K aggregate datasets X_i:

   d_{i,q} = 1 − SF_{i,q},   i = 1, 2, ..., K        (3.2)

   where SF_{i,q} is the similarity between the q-th dataset and the i-th cluster, described by equation (2.6). Dataset x_q is assigned to the cluster to which it is least dissimilar.

3. Repetition of the aforesaid steps for the Q datasets, until stable cluster tags are obtained.
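The steps above can be sketched in code as follows. This is a minimal illustration rather than the thesis implementation: the distance-based similarity factor used here is a simplified stand-in for the similarity measures of Chapter 2, and all function and variable names are hypothetical.

```python
import numpy as np

def similarity_factor(x, x_ref):
    # Hypothetical stand-in for the similarity factor SF of equation (2.6):
    # similarity decays with the Euclidean distance between dataset means.
    d = float(np.linalg.norm(x.mean(axis=0) - x_ref.mean(axis=0)))
    return 1.0 / (1.0 + d)

def kmeans_similarity(datasets, k, max_iter=100):
    """Modified K-means sketch: each dataset is assigned to the cluster
    whose aggregate dataset X_i it is least dissimilar to (d = 1 - SF),
    repeating until stable cluster tags are obtained."""
    labels = [q % k for q in range(len(datasets))]   # initial partition
    for _ in range(max_iter):
        aggs = []
        for i in range(k):
            members = [x for x, lab in zip(datasets, labels) if lab == i]
            # guard against an empty cluster: fall back to a single dataset
            aggs.append(np.vstack(members) if members else datasets[i])
        new = [min(range(k), key=lambda i: 1.0 - similarity_factor(x, aggs[i]))
               for x in datasets]
        if new == labels:                            # converged
            break
        labels = new
    return labels
```

Applied to datasets drawn from two clearly separated operating conditions, the sketch recovers the two groups within a couple of iterations.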
3.2.3 Clustering Performance Evaluation

Krzanowski (1979) developed the PCA similarity factor by choosing the largest k principal components of each multivariate time series dataset, describing at least 95% of the variance in each dataset [6]. Some key definitions were introduced by Singhal & Seborg (2005) to evaluate the performance of the clusters obtained using similarity factors [5].

Cluster purity characterizes each cluster in terms of how many datasets of a particular operating condition are present in the i-th cluster; the operating condition with the largest number of datasets in a cluster can be considered the dominant operating condition of that cluster. Cluster purity is defined as

   P_i = ( max_j N_ij / N_pi ) × 100%        (3.3)

where N_ij is the number of datasets of operating condition j in the i-th cluster and N_pi is the total number of datasets in the i-th cluster.

Cluster efficiency measures the extent to which an operating condition is distributed among different clusters, and penalizes large values of K when an operating condition is distributed over several clusters. Assuming the number of operating conditions is N_op and the number of datasets for operating condition j in the database is N_DBj, the clustering efficiency is defined as

   η_j = ( max_i N_ij / N_DBj ) × 100%        (3.4)

3.3 MOVING WINDOW BASED PATTERN MATCHING

In this approach, the snapshot or template data, with unknown start and end times of the operating condition, is moved (in a sample-wise manner) through the historical data, and the similarity between them is characterized by a combined distance- and PCA-based similarity factor [7-9]. In order to compare the snapshot data to the historical data, the relevant historical data are divided into data windows of the same size as the snapshot data. The snapshot can also be advanced through the historical data dataset-wise.
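Equations (3.3) and (3.4) translate directly into code. The sketch below assumes the counts N_ij have already been tabulated; the function names are illustrative.

```python
def cluster_purity(N):
    """Cluster purity P_i = 100 * max_j N_ij / N_pi, equation (3.3).
    N[i][j] = number of datasets of operating condition j in cluster i."""
    return [100.0 * max(row) / sum(row) for row in N]

def clustering_efficiency(N, n_db):
    """Clustering efficiency eta_j = 100 * max_i N_ij / N_DBj, equation (3.4).
    n_db[j] = number of datasets of condition j in the whole database."""
    return [100.0 * max(N[i][j] for i in range(len(N))) / n_db[j]
            for j in range(len(n_db))]
```

For example, if cluster 1 holds 4 datasets of condition 1 only, while cluster 2 holds 1 dataset of condition 1 and 3 of condition 2, the purities are 100% and 75%, and the efficiencies of the two conditions are 80% and 100%.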
The historical datasets are then organized by placing the windows side-by-side along the time axis, which results in equal-length, non-overlapping segments of data. For the present work, however, the historical data window was moved one observation at a time, with each old observation being replaced by a new one. The historical data windows with the largest values of the similarity factors are collected in a candidate pool; these windows are called records, and they are to be analyzed by the process engineer. The proposed pattern matching technique, as depicted in Fig. 3.1, consists of three steps:

1. Specification of the snapshot (variables and time period).
2. Comparison between the snapshot and the historical data windows using the moving window approach.
3. Collection of the historical windows with the largest values of the similarity factors.

Pool accuracy, pattern matching efficiency and pattern matching algorithm efficiency are important metrics that quantify the performance of the proposed pattern matching algorithm.

Fig. 3.1 Moving window based pattern matching
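The moving-window search of steps 1-3 can be sketched as below. A simplified distance-based similarity stands in for the combined similarity factor of [7-9], and the names and default pool size are illustrative assumptions.

```python
import numpy as np

def window_similarity(snapshot, window):
    # Stand-in for the combined distance/PCA similarity factor:
    # similarity decays with the distance between window means.
    d = float(np.linalg.norm(snapshot.mean(axis=0) - window.mean(axis=0)))
    return 1.0 / (1.0 + d)

def pattern_match(snapshot, history, pool_size=3):
    """Move a window of the snapshot's length through the historical data
    one observation at a time, and collect the candidate pool: the windows
    with the largest similarity factors, returned as (similarity, start)."""
    w = len(snapshot)
    scored = [(window_similarity(snapshot, history[s:s + w]), s)
              for s in range(len(history) - w + 1)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:pool_size]
```

With a snapshot that exactly matches one segment of the historical data, the top record in the pool points at that segment's start index with a similarity of 1.0.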
The following quantities are used in defining the performance metrics:

N_P : the size of the candidate pool, i.e., the number of historical data windows that have been labeled "similar" to the snapshot data by the pattern matching technique. The data windows collected in the candidate pool are called records.
N_1 : the number of records in the candidate pool that are correctly identified, i.e., exactly similar to the current snapshot data (having a similarity of 1.0).
N_2 : the number of records in the candidate pool that are not correctly identified, so that N_P = N_1 + N_2.
N_DB : the total number of historical data windows that are actually similar to the current snapshot (in general, N_DB ≠ N_P).

Pool accuracy is defined as

   P = ( N_1 / N_P ) × 100%

A large value of the pool accuracy is important for detecting a small number of specific previous situations from a small pool of records, without evaluating incorrectly identified records.

Pattern matching efficiency is defined as

   η = ( N_1 / N_DB ) × 100%

A large value of the pattern matching efficiency is required for detecting all of the specific previous situations from a large pool of records. The pattern matching algorithm efficiency, ξ, combines the pool accuracy and the pattern matching efficiency into a single scalar measure of performance.

The proposed method is completely data-driven and unsupervised; no process models or training data are required. The user needs to specify only the relevant measured variables.

3.4 DRUM BOILER PROCESS

The drum boiler is a crucial benchmark process from the viewpoint of modeling and control system design. The modeling and control aspects of this process have been addressed by Pellegrinetti & Bentsman (1996), Astrom & Bell (2000), Tan et al. (2002), and Jawahar & Pappa (2005) [10-13]. The drum boiler model has been derived from first principles and is characterized by a few physical parameters. The parameters used in the model are those of a Swedish power plant; the plant is an 80 MW unit. In the present work, the drum boiler process has been monitored using similarity measures. Sixteen datasets belonging to four operating conditions, including an abnormal operating condition, were generated by perturbing the process, in order to evaluate the performance of the proposed techniques.
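The pool accuracy and pattern matching efficiency can be computed directly from the candidate pool, as in the sketch below (hypothetical names; the record sets are assumed to be given as window indices).

```python
def pool_metrics(pool, actually_similar):
    """Pool accuracy P = 100*N1/NP and pattern matching efficiency
    eta = 100*N1/NDB, where N1 is the number of correctly identified
    records in the candidate pool and NDB the number of historical
    windows that are actually similar to the snapshot."""
    n_p = len(pool)
    n_db = len(actually_similar)
    n1 = len(set(pool) & set(actually_similar))
    return 100.0 * n1 / n_p, 100.0 * n1 / n_db
```

For instance, a pool of four records of which two are truly similar, out of three truly similar windows in the database, gives P = 50% and η ≈ 66.7%.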
3.4.1 A Brief Introduction to the Drum Boiler System

The utility boilers in thermal/nuclear power plants are water-tube drum boilers. This type of boiler usually comprises two separate systems. One of them is the steam-water system, also called the water side of the boiler. In this system, preheated water from the economizer is fed into the steam drum. The diagram depicted in Fig. 3.2 shows the drum, which holds water at or near saturation conditions; the denser water flows through the downcomer into the lower header by the force of gravity. Between the lower and upper headers are stretches of tubes that constitute the water walls and receive radiant heat from the furnace. Water walls permit the use of high furnace temperatures and combustion rates. Part of the water in these tubes and risers evaporates, with the result that the fluid in the riser is composed of a mixture of water and steam. After being heated up further, it returns to the drum through the riser. The difference in density of the water in the riser and in the downcomer provides the necessary motive force to set up the circulation of water in the boiler system.

The boiler is a reservoir of energy. The amount of energy stored in each part is a complicated function of temperature and pressure. Global mass and energy balances capture much of the behavior of the system, expressing the stored energy, input power and output power as functions of the control variables and state variables. The model can now be developed in detail.

Fig. 3.2 Drum boiler process (steam, feedwater, drum, risers, radiant heat from furnace, downcomer, lower header)
3.4.2 Modeling

Assumptions:
• The fundamental modeling simplification is that the two phases of the water inside the system are everywhere in the saturated thermodynamic state.
• There is an instantaneous and uniform thermal equilibrium between the water and the metal everywhere.
• The steady-state metal temperature is close to the saturation temperature, and the temperature differences are small dynamically.
• The energy stored in the steam and water is released or absorbed very rapidly when the pressure changes. This rapid release of energy ensures that different parts of the boiler change their temperature in the same way, and is the key to understanding boiler dynamics.

A key feature of the drum boiler is the efficient energy and mass transfer between all parts that are in contact with steam; the mechanism responsible for this heat transfer is boiling and condensation.

Global mass and energy balance equations. The inputs to the system are chosen to be:
• Feedwater mass flow rate, q_f
• Steam mass flow rate, q_s
• Heat flow rate to the risers, Q

The outputs from the system are chosen to be:
• Drum level, l
• Drum pressure, p

Global mass balance:

   d/dt [ ρ_s V_st + ρ_w V_wt ] = q_f − q_s        (3.5)

Global energy balance:

   d/dt [ ρ_s u_s V_st + ρ_w u_w V_wt + m_t C_p t_m ] = Q + q_f h_f − q_s h_s        (3.6)

Total volume of the drum, downcomers and risers:

   V_t = V_st + V_wt        (3.7)

Here ρ_s and ρ_w represent the densities of steam and water, V_st and V_wt the total steam and water volumes in the system, u_s and u_w the internal energies, h_s and h_w the enthalpies of steam and water per unit mass, h_f the feedwater enthalpy, m_t the total metal mass, C_p the specific heat of the metal, and t_m the metal temperature.
3.4.2.1 Proposed Second-Order Boiler Model

To gain better insight into the key physical mechanisms that affect the dynamic behavior of the system, the state variable approach is considered. The drum pressure p is chosen as a key state variable, since it is the most globally uniform variable in the system and is also easily measurable. The second state variable is chosen to be the total volume of water in the system, V_wt. The steam volume V_st is eliminated from equations (3.5) and (3.6) using equation (3.7), and the variables ρ_s, ρ_w, h_s and h_w are expressed as functions of the steam pressure using the saturated steam tables. The state equations then take the following form:

   e_11 dV_wt/dt + e_12 dp/dt = q_f − q_s        (3.8)
   e_21 dV_wt/dt + e_22 dp/dt = Q + q_f h_f − q_s h_s        (3.9)

where

   e_11 = ρ_w − ρ_s
   e_12 = V_wt ∂ρ_w/∂p + V_st ∂ρ_s/∂p
   e_21 = ρ_w h_w − ρ_s h_s
   e_22 = V_wt ( h_w ∂ρ_w/∂p + ρ_w ∂h_w/∂p ) + V_st ( h_s ∂ρ_s/∂p + ρ_s ∂h_s/∂p ) − V_t + m_t C_p ∂t_s/∂p

Equations (3.8) and (3.9), together with the saturated steam tables, constitute a state model of the second-order boiler system. Mathematically, the model is a differential-algebraic system. This simple model captures the gross behavior of the boiler quite well; in particular, it describes the response of the drum pressure to changes in input power, feedwater and steam flow rates reasonably well. But the model has a serious deficiency: it does not capture the behavior of the drum water level, as the distribution of steam and water is not taken into account. The drum level may be defined as the level at which water stands in the boiler; the steam space is the space above the water level. Drum level control is difficult due to shrink and swell effects.
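For illustration, one explicit-Euler step of the second-order model (3.8)-(3.9) can be sketched as below. The coefficients and enthalpies are passed in as constants purely for demonstration; in the actual model they must be evaluated from the saturated steam tables at each step, so the numbers used here are assumptions, not plant data.

```python
def boiler_step(p, v_wt, q_f, q_s, q_heat, dt, coeff):
    """One explicit-Euler step of the second-order model:
        e11*dVwt/dt + e12*dp/dt = qf - qs
        e21*dVwt/dt + e22*dp/dt = Q + qf*hf - qs*hs
    `coeff` = (e11, e12, e21, e22, hf, hs) holds illustrative constants;
    real values come from the saturated steam tables at pressure p."""
    e11, e12, e21, e22, h_f, h_s = coeff
    b1 = q_f - q_s
    b2 = q_heat + q_f * h_f - q_s * h_s
    det = e11 * e22 - e12 * e21          # solve the 2x2 linear system
    dv_wt = (b1 * e22 - b2 * e12) / det  # Cramer's rule for dVwt/dt
    dp = (e11 * b2 - e21 * b1) / det     # and for dp/dt
    return p + dp * dt, v_wt + dv_wt * dt
```

As a sanity check, balanced inputs (q_f = q_s and Q = q_s(h_s − h_f)) make both right-hand sides zero, so the state does not move.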
3.4.2.2 Distribution of Steam in the Risers and Drum

The behavior of the drum level can best be described by taking into account the distribution of water and steam in the system. The distribution is considered separately for the riser section and for the drum.

The mass fraction, or dryness fraction, of a liquid-vapor mixture must be defined prior to further discussion. In a liquid-vapor mixture, the quality is defined as α_m = m_s / (m_s + m_w), where m_s and m_w are the masses of vapor and liquid, respectively, in the mixture; α_m varies between 0 and 1. In order to determine the average density of the steam-water mixture in the riser, it is also necessary to define the void fraction. The void fraction of a two-phase mixture is a volumetric quantity, given as α_v = (volume of vapor)/(volume of vapor + volume of liquid).

The two phases of the mixture do not travel at the same speed; instead, there is a slip between them, which causes the vapor to move faster than the liquid. The slip ratio S is a dimensionless number, greater than 1, defined as the ratio of the average vapor velocity to the average liquid velocity. At any cross-section of the riser, α_v and α_m are related by

   α_v = α_m v_g / [ α_m v_g + S (1 − α_m) v_f ]        (3.10)

where v_f and v_g are the specific volumes of saturated liquid and vapor, respectively. The slip ratio is neglected (S = 1) in the present work, as it does not have a major influence on the fit to experimental data.

3.4.2.2.1 Distribution of steam and water in the risers

In the riser section, water exists in two phases, namely the liquid phase (water) and the vapor phase (steam). The steam-water distribution varies along the risers. The behavior of two-phase flow is complicated and is typically modeled by partial differential equations. To keep the model finite-dimensional, it is assumed that the shape of the distribution is known; the assumed shape is based on solving the partial differential equations in the steady state. There exists a linear distribution of the steam mass fraction along the risers, which varies in the following form:
   α_m(ξ) = α_r ξ,   0 ≤ ξ ≤ 1        (3.11)

where ξ is a normalized length coordinate along the risers and α_r is the steam mass fraction (quality) at the riser outlet. The volume and mass fractions of steam are related through

   α_v(ξ) = ρ_w α_m(ξ) / [ ρ_s + (ρ_w − ρ_s) α_m(ξ) ]        (3.12)

For modeling the drum level, the total amount of steam in the risers is needed. The governing quantity is the average steam volume fraction in the risers, obtained by integrating equation (3.12) over the limits 0 to 1:

   ᾱ_v = ∫₀¹ α_v(ξ) dξ = ( ρ_w / (ρ_w − ρ_s) ) [ 1 − (1/η) ln(1 + η) ]        (3.13)

where η = α_r (ρ_w − ρ_s)/ρ_s. The partial derivatives of ᾱ_v with respect to the riser-exit quality and the drum pressure are obtained as

   ∂ᾱ_v/∂α_r = ( ρ_w / ((ρ_w − ρ_s) α_r) ) [ (1/η) ln(1 + η) − 1/(1 + η) ]        (3.14)

   ∂ᾱ_v/∂p = ( 1/(ρ_w − ρ_s)² ) [ ρ_w ∂ρ_s/∂p − ρ_s ∂ρ_w/∂p ] [ 1 + (ρ_w/ρ_s)/(1 + η) − (1/η)(1 + ρ_w/ρ_s) ln(1 + η) ]        (3.15)

The transfer of mass and energy between steam and water by condensation and evaporation is a key element in the modeling. When modeling the phases separately, this transfer must be accounted for explicitly; hence the joint balance equations for water and steam are written for the riser section.

3.4.2.2.2 Lumped parameter model

The global mass balance for the riser section is

   d/dt [ ρ_s ᾱ_v V_r + ρ_w (1 − ᾱ_v) V_r ] = q_dc − q_r        (3.16)

where q_dc is the total mass flow rate into the risers (through the downcomers) and q_r is the total mass flow rate out of the risers. The global energy balance for the riser section is

   d/dt [ ρ_s h_s ᾱ_v V_r + ρ_w h_w (1 − ᾱ_v) V_r − p V_r + m_r C_p t_s ] = Q + q_dc h_w − (α_r h_c + h_w) q_r        (3.17)

where h_c = h_s − h_w is the condensation enthalpy, V_r is the riser volume and m_r the riser metal mass.
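The average void fraction of equation (3.13) is straightforward to evaluate numerically, as in the sketch below; the density values in the accompanying check are illustrative, not steam-table data.

```python
import math

def avg_void_fraction(alpha_r, rho_w, rho_s):
    """Average steam volume fraction in the risers, equation (3.13),
    for a linear steam-quality profile along the riser.
    alpha_r: quality at the riser outlet; rho_w, rho_s: saturated
    water and steam densities (illustrative values in the test)."""
    eta = alpha_r * (rho_w - rho_s) / rho_s
    return rho_w / (rho_w - rho_s) * (1.0 - math.log(1.0 + eta) / eta)
```

As expected from equation (3.14), the average void fraction lies between 0 and 1 and increases with the riser-exit quality.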
3.4.2.2.3 Distribution of Steam in the Drum

The physical phenomena in the drum are complicated: steam enters from many riser tubes, feedwater enters through a complex arrangement, water leaves through the downcomer tubes, and steam leaves through the steam valve. The geometry and flow patterns are complex. The basic mechanisms that occur in the drum are the separation of water and steam, and condensation. The mass balance for the steam under the liquid level is given as

   d/dt ( ρ_s V_sd ) = α_r q_r − q_sd − q_cd        (3.18)

where V_sd is the volume of steam in the drum, q_sd is the flow rate of steam through the liquid surface in the drum, and q_cd is the condensation flow, obtained from an energy balance for the drum. The flow q_sd is described by an empirical model that gives a good fit to experimental data:

   q_sd = ( ρ_s / T_d ) ( V_sd − V_sd⁰ ) + α_r q_dc + α_r β ( q_dc − q_r )        (3.19)

where V_sd⁰ is the volume of steam in the drum in the hypothetical situation when there is no condensation, T_d is the residence time of the steam in the drum, and β is an empirical parameter. Equations (3.18)-(3.19) represent a first-order system whose time constant is governed by the residence time T_d; its steady-state relation is

   q_sd = α_r q_r − q_cd        (3.20)

3.4.2.3 Circulation flow

In a natural-circulation boiler, the flow rate is driven by the density gradients in the risers and downcomers; the flow through the downcomers, q_dc, can be obtained from a momentum balance. The balance consists of three terms:

(a) Inertia force: ( (L_r + L_dc) / A_dc ) dq_dc/dt
(b) Driving force: (ρ_w − ρ_s) g ᾱ_v V_r, due to the density difference
(c) Friction force: k q_dc² / (2 ρ_w A_dc), for the flow through the pipes

Combining the three terms, the momentum balance is written as

   ( (L_r + L_dc) / A_dc ) dq_dc/dt = (ρ_w − ρ_s) g ᾱ_v V_r − k q_dc² / (2 ρ_w A_dc)        (3.21)

Since the inertia term is small, the steady-state relation is used:

   k q_dc² / (2 ρ_w A_dc) = (ρ_w − ρ_s) g ᾱ_v V_r        (3.22)
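The steady-state circulation flow of equation (3.22) can be evaluated as follows; the parameter values in the check are illustrative stand-ins, not the Table 3.1 plant data.

```python
import math

def circulation_flow(avg_void, v_riser, rho_w, rho_s, a_dc, k_fric, g=9.81):
    """Steady-state downcomer flow from equation (3.22):
        k*qdc^2 / (2*rho_w*A_dc) = (rho_w - rho_s)*g*avg_void*V_r
    k_fric is the dimensionless friction coefficient k."""
    return math.sqrt(2.0 * rho_w * a_dc * (rho_w - rho_s)
                     * g * avg_void * v_riser / k_fric)
```

By construction, the flow scales with the square root of the driving term, so doubling the average void fraction multiplies q_dc by √2.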
3.4.2.4 Drum Level

Having accounted for the distribution of steam below the drum level, the drum level itself can now be modeled. The drum level, measured from its normal operating level, is composed of two terms:

   l = l_w + l_s = ( V_wd + V_sd ) / A_d        (3.24)

where l_w is the level variation caused by changes in the amount of water in the drum, l_s is the level variation caused by the steam in the drum, and A_d is the wet surface area of the drum at the operating level. The volume of water in the drum is given by

   V_wd = V_wt − V_dc − (1 − ᾱ_v) V_r

with V_dc the downcomer volume. The level thus responds to changes in the amount of water in the drum, to displacement due to changes of the steam-water ratio in the risers, and to condensation of steam in the drum.

3.4.2.5 The Model

The complete model (3.25) consists of the global mass and energy balances (3.8)-(3.9), the riser mass and energy balances (3.16)-(3.17) and the drum steam balance (3.18), together with the algebraic relations for the average void fraction (3.13), the steam flow through the liquid surface (3.19), the circulation flow (3.22) and the drum level (3.24). As can be seen, the model is a differential-algebraic system. Since most available simulation software requires state equations, the state model is also derived.
3.4.2.6 State Variable Model

Prior to the generation of the database, a linear model is required to design the controllers; this was accomplished using the state space methodology. The selection of state variables can be done in many different ways. A convenient way is to choose as states those variables which have a good physical interpretation and which describe the storage of mass, energy and momentum. The variables used in this procedure are as follows:

I. Manipulated inputs:
   a. Feedwater flow rate to the drum, q_f
   b. Steam flow rate from the drum, q_s
   c. Heat flow rate to the risers, Q
II. Measured outputs:
   a. Drum pressure, p
   b. Drum level, l
III. State variables:
   a. Drum pressure, p
   b. Total water volume of the system, V_wt
   c. Steam mass fraction at the riser outlet, α_r
   d. Volume of steam in the drum, V_sd

Combining the balance equations, the state variable form is obtained as the set of equations (3.28)-(3.29). If the state variables p and α_r are known, the circulation flow rate q_dc can be computed from equation (3.22).

3.4.2.6.1 Derivation of the state equations

The pressure and water dynamics are obtained from the global mass and energy balance equations (3.5) and (3.6). The riser dynamics are given by the riser mass and energy balance equations (3.16) and (3.17). These are further simplified by eliminating q_r: equation (3.16) is multiplied by −(h_w + α_r h_c) and added to equation (3.17). The resulting expression is given below.
From the riser mass balance (3.16), the riser exit flow rate is

   q_r = q_dc − V_r d/dt [ ᾱ_v ρ_s + (1 − ᾱ_v) ρ_w ]        (3.27)

The drum dynamics are captured by the steam mass balance (3.18); the expressions for q_r, q_dc and q_sd are substituted into equation (3.18). The state equations are then written as

   e_11 dV_wt/dt + e_12 dp/dt = q_f − q_s
   e_21 dV_wt/dt + e_22 dp/dt = Q + q_f h_f − q_s h_s        (3.28)

   e_32 dp/dt + e_33 dα_r/dt = Q − α_r h_c q_dc
   e_42 dp/dt + e_43 dα_r/dt + e_44 dV_sd/dt = ( ρ_s / T_d ) ( V_sd⁰ − V_sd ) + ( (h_f − h_w) / h_c ) q_f        (3.29)

The coefficients e_11, e_12, e_21 and e_22 are those of the second-order model in Section 3.4.2.1. The remaining coefficients e_32, e_33, e_42, e_43 and e_44 are lengthy functions of the saturated steam-table properties and their partial derivatives with respect to pressure, and of the average void fraction ᾱ_v and its partial derivatives (3.14)-(3.15); the complete expressions are given by Astrom & Bell (2000).
2. Model is thus a nest of second.4. the feed water flow rate J and input power Q are given by first two equations of equation (3. . The second order model describes drum pressure and total water volume in the system.5 m3 37 . Drum pressure =8. third and fourth order model.30) and the steam volume in the drum is given by the last equation of (3. The variables inside each parenthesis can be computed independently.2.30) pressure P.5 MPa Total water volume of the system =57.&& = ( MULTIVARIATE STATISTICAL PROCESSMONITORING It is noted that the state space model obtained has an interesting lower triangular structure where state variables can be grouped as: . The steam quality = ℎ $ A convenient way to find the initial values is to first specify steam flow rate J and steam 2 is given by ( − H is obtained by solving the nonlinear equations: ( ) = 1 − ( ) ln 1 + (3.29) is given by: =J ℎ −J ℎ " Where. . J J = − ℎ ( ) J ) (3.30).6. ).31) The equilibrium values of the state variables: 1. The third order model captures the steam dynamics in the risers and the fourth order model also describes the accumulation of steam below the water surface in the drum dynamics.2 J =J =J = Equilibrium values The steady state solution of the state model of equation (3. 3.
3. Steam mass fraction at the riser outlet, α_r = 0.051
4. Volume of steam in the drum, V_sd = 4.8 m³

The equilibrium values of the input variables are assumed as:

1. Feedwater flow rate to the drum, q_f = 32.00147798 kg/s
2. Steam flow rate from the drum, q_s = 32.00147798 kg/s
3. Heat flow rate to the risers, Q = 80.40437506 × 10⁶ W

3.4.3 Parameters

An interesting feature of the model is that it requires only a few parameters to characterize the system. The parameter values considered are those of a Swedish plant; the plant is an 80 MW unit. The model is characterized by the parameters given in Table 3.1.

3.5 BIOREACTOR PROCESS

Bioreactor control has been an active area of research for over a decade. Several researchers, such as Edwards et al. (1972), Agrawal & Lim (1984), and Menawat & Balachander (1991), have studied the continuous bioreactor problem [14-16]. Kaushiaram et al. (2010) designed neural controllers for various configurations of the continuous bioreactor process [17]. For the optimization of cell mass growth and product formation, the continuous mode of operation of bioreactors is desirable rather than the traditional fed-batch operation. The primary aim of a continuous bioreactor is to avoid the washout condition, which ceases the reaction; this may be achieved by controlling either the cell mass concentration (X, g/L) or the substrate concentration (S, g/L) at the various operating points of the bioprocess. Both of them may be controlled with the dilution rate (D = F/V, h⁻¹) and the feed substrate concentration (S_f, g/L) as manipulated variables. The kinetic parameters of the process, such as the specific growth rate (μ), the saturation rate constant (k_m) and the yield constant (Y), are either inadequately determined or vary from time to time during process operation; hence they are considered as disturbances of the process.

3.5.1 Modeling

A (2×2) bioreactor process was taken up. The study is based on a single-biomass, single-substrate process.
The following are the model equations based on first principles, where x₁ and x₂ are the biomass and substrate concentrations, respectively:

   dx₁/dt = ( μ − D ) x₁        (3.32)

   dx₂/dt = D ( x₂f − x₂ ) − μ x₁ / Y        (3.33)

where x₂f is the substrate concentration in the feed. Under a substrate-limiting condition, the specific growth rate is a function of the substrate concentration and is given by the substrate inhibition growth rate expression

   μ = μ_max x₂ / ( k_m + x₂ + k₁ x₂² )        (3.34)

The reaction rate is given by

   r₁ = μ x₁        (3.35)

and the relation between the rate of generation of cells (r₁) and the rate of consumption of nutrients (r₂) is defined by the yield

   Y = r₁ / r₂        (3.36)

Introducing the dilution rate D = F/V and assuming there is no biomass in the feed (x₁f = 0), the inputs are the dilution rate and the feed substrate concentration, and the outputs are the concentrations of biomass and substrate (all values in deviation form). When both the concentrations (biomass and substrate) are large, the process is at a stable equilibrium; otherwise the process can reach an unstable equilibrium. Direct synthesis controllers were designed to control the biomass and substrate concentrations in both the stable and the unstable situations. The values of the steady-state dilution rate (D_s), the feed substrate concentration (x₂fs), the steady-state values of the states at the stable and unstable operating points, and the various parameters are presented in Table 3.2.
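Equations (3.32)-(3.34) can be integrated with a simple explicit-Euler scheme, sketched below. The parameter values used in the check are common illustrative values for this substrate-inhibition model (e.g., in standard process-control texts), not necessarily the thesis Table 3.2 entries.

```python
def simulate_bioreactor(x1, x2, d_rate, x2f, params, dt=0.01, t_end=100.0):
    """Explicit-Euler integration of the single-biomass, single-substrate
    model with substrate-inhibition kinetics (equations 3.32-3.34).
    params = (mu_max, km, k1, Y); d_rate is the dilution rate D."""
    mu_max, km, k1, y = params
    for _ in range(int(t_end / dt)):
        mu = mu_max * x2 / (km + x2 + k1 * x2 * x2)  # substrate inhibition
        dx1 = (mu - d_rate) * x1                     # biomass balance
        dx2 = d_rate * (x2f - x2) - mu * x1 / y      # substrate balance
        x1, x2 = x1 + dx1 * dt, x2 + dx2 * dt
    return x1, x2
```

Starting near the high-conversion operating point, the trajectory settles at the stable steady state, where the nontrivial root of μ(x₂) = D gives x₂ and x₁ = Y(x₂f − x₂).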
3.6 CONTINUOUS STIRRED TANK REACTOR WITH COOLING JACKET

A nonisothermal continuous stirred tank reactor (CSTR) with cooling jacket dynamics was considered in order to generate a historical database covering distinct operating conditions, including stable and unstable operating points and faulty conditions. An exothermic first-order irreversible reaction A → B was used. An inlet stream is continuously fed to the reactor and an exit stream is continuously removed, so that the exit stream has the same temperature as the reactor contents. The reactor is surrounded by a cooling jacket with inlet and outlet fluid streams, assumed to be perfectly mixed and at a temperature less than the reactor temperature. Assuming the cooling jacket temperature can be directly manipulated, no energy balance is required for the cooling jacket. A dynamic model (equations 3.37-3.39) represents the total mass, component and energy balances, under the assumptions of perfect mixing and constant volume:

   d(Vρ)/dt = F_in ρ_in − F ρ        (3.37)

   V dC_A/dt = F_in C_AF − F C_A − rV        (3.38)

   VρC_p dT/dt = FρC_p ( T_f − T ) + (−ΔH) V r + UA ( T_j − T )        (3.39)

where r = k₀ exp( −E_a / RT ) C_A and −ΔH is the heat of reaction; the jacket heat-transfer term UA(T_j − T) couples the manipulated jacket temperature T_j to the reactor temperature. Direct synthesis controllers were designed to control the concentration and temperature, using the relation between the manipulated inputs and the controlled outputs obtained by linearizing the nonlinear model with the state space method.
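The right-hand sides of the balance equations (3.38)-(3.39), divided through by V and VρC_p, can be sketched as follows. The lumped parameter grouping and the numerical values in the check are assumptions for illustration, not the Table 3.3 data.

```python
import math

def cstr_derivs(ca, temp, t_jacket, params):
    """dCA/dt and dT/dt for the jacketed CSTR (equations 3.38-3.39,
    scaled by V and V*rho*Cp). params groups illustrative lumped
    constants: F/V, feed concentration and temperature, k0, Ea/R,
    (-dH)/(rho*Cp), and UA/(V*rho*Cp)."""
    f_over_v, ca_feed, t_feed, k0, ea_over_r, heat_gain, ua_term = params
    r = k0 * math.exp(-ea_over_r / temp) * ca          # first-order rate
    dca = f_over_v * (ca_feed - ca) - r                # component balance
    dtemp = (f_over_v * (t_feed - temp)
             + heat_gain * r
             + ua_term * (t_jacket - temp))            # energy balance
    return dca, dtemp
```

With the commonly used illustrative kinetics k₀ = 7.2×10¹⁰ min⁻¹ and E_a/R = 8750 K, the rate constant at 350 K is close to 1 min⁻¹, so at C_A = 0.5 and matched feed/jacket temperatures the concentration balance is nearly at rest while the exothermic term heats the reactor.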
The parameters used for the simulation of the CSTR process are presented in Table 3.3.

3.7 TENNESSEE EASTMAN CHALLENGE PROCESS

The Tennessee Eastman Challenge process (TE process) is a benchmark industrial process.
Downs & Vogel (1993) [18] first identified the suitability of the TE process, a nonlinear dynamic process, as an active research area, allowing small modifications in the component kinetics and process operating conditions. The TE process has wide applications, such as fault detection, development of different control strategies, and validation of the performance of supervisory controllers. Several researchers, including Vinson (1995), Lyman & Georgakis (1994), Luyben (1996), Zheng (1998), Molina et al. (2010) and Zerkaoui et al. (2010) [19-24], have addressed the TE process to assess plant-wide control strategies with conventional and unconventional controllers. Ricker (1995, 1996) studied the TE process to find the optimal steady states, which provide six modes of operating points, and designed a decentralized control system using proportional-integral as well as model predictive controllers to maintain the process at the desired values according to product demand [25-26]. The basic process model has been adapted from his personal web page and studied to evaluate the proposed pattern matching techniques; the nonlinear TE process model was obtained from the modeling equations of all the unit operations involved in the process.

The TE process comprises an exothermic two-phase reactor, a condenser, a phase separator and a stripper column for the conversion of the reactants A, C, D and E to the products G and H. A nonvolatile catalyst is used. The gaseous reactants are fed to the reactor, which is equipped with an internal cooling water supply to remove the heat of reaction. The products, along with unreacted feed, are sent to the condenser, which condenses all vapors into liquid; from there the liquid is fed to the separator, where the inerts and by-product are removed in the purge, and the uncondensed vapors are recycled to the reactor. The bottom product of the separator is sent to the stripper, where the products are withdrawn. Figure 3.3 illustrates the complete flow diagram of the TE process.

There are virtually 52 state variables; the process provides 41 measured variables and 12 manipulated variables. The process can be operated in six modes, and mode 1 was considered as the base case. The steady-state values of the process measurements, manipulated variables and disturbance variables are listed in Table 3.10. The heat and material balance data and the physical properties of the components are presented in Tables 3.6-3.9 (using the base case).

3.8 GENERATION OF DATABASE

For the drum boiler process, both open-loop and closed-loop simulations were performed in order to generate 16 datasets; four distinct operating conditions were created by giving impulse, step and sinusoidal changes in the manipulated inputs in open loop as well as in closed loop. For the closed-loop control system, the direct synthesis controller design procedure was used. The drum boiler process was simulated for 1000 seconds with a sampling time of 2 seconds.
For the bioreactor process, open-loop and closed-loop simulations were considered in order to generate the database using distinct operating conditions. The bioreactor was simulated for one hour, and data were collected with a sampling interval of 6 seconds for the different operating conditions. The total database contains 26 datasets of 4 different operating conditions, where each dataset contains 600 observations of the two outputs; these are presented in Table 3.11. A signal-to-noise ratio of 10 was maintained.

For the jacketed CSTR process, open-loop and closed-loop simulations were performed in order to generate the database, including faults, using distinct operating conditions at the stable and unstable operating points (the faulty operating condition was generated by varying the controller tuning parameter λ). The CSTR was simulated for one hour for the different operating conditions. The total database contains 56 datasets of 3 operating conditions, where each dataset contains 600 observations of the two outputs. The various operating conditions with their parameters are presented in Table 3.12.

For the TE process, the distinctive operating modes of the process are presented in Table 3.13. In this work, operating modes 1 and 3 were considered in order to generate the historical database. Thirteen datasets were created, pertaining to 13 operating conditions, including faulty ones which could cause the plant to shut down. The plant was run for 72 hours with a sampling time of 36 seconds for the operating conditions FNM1 to FMD2 and FNM3 to FM32; however, in the event of the faulty situations, the process operation was completely terminated within one hour. The actual values of the controlled variables in operating conditions 10 to 13, which lead to plant shutdown as specified by Downs & Vogel, are presented in Table 3.14. Figures 3.4 to 3.7 illustrate the qualitative time responses of a few process measurements corresponding to the faulty operating conditions, obtained from closed-loop simulations of the decentralized TE process with proportional controllers operated in mode 1 and mode 3.

3.9 RESULTS & DISCUSSIONS

3.9.1 Drum Boiler Process

Sixteen datasets were generated by perturbing the process with a pseudo-random binary signal (PRBS) generator. A negative step change in the feedwater flow rate induced a continuous decrease in the total water volume and a continuous increase in the drum pressure, which represents the abnormal scenario; Figure 3.8 shows the response of the abnormal process.
The combined similarity measures are capable of detecting 42 . Negative step change in feed water flow rate induced continuous decrease in total water volume and continuous increase in drum pressure that represents abnormal scenario.12. For the jacketed CSTR process. operating mode 1 and mode 3 have been considered in order to generate the historical database.Figures 3.4 to 3. The plant was run for 72 hours with sampling time of 36 seconds for operating conditions FNM1 to FMD2 and FNM3 to FM32. The actual values of controlled variables in operating conditions 10 to 13. the process operation was completely terminated within one hour.0.
For the drum boiler process, the entire database was grouped into four optimum clusters using the similarity based modified K-means algorithm; the clustering performance is presented in Table 3.15. The algorithm identified the faulty operating condition as cluster 3, and the datasets pertaining to the various other operating conditions were also identified correctly; the derived cluster purity and efficiency were both 100 %. Pattern matching was done using the moving window in a sample-wise as well as dataset-wise manner. The proposed moving window based pattern matching technique perfectly identified all the snapshot data, normal and faulty alike, matching each with the corresponding process data present in the historical database. The similarity factors obtained in dataset-wise pattern matching are shown in Table 3.16. The pool accuracy, pattern matching efficiency and efficiency of the algorithm were all determined to be 100 %. The overall performance of the technique is the same in the sample-wise and dataset-wise modes, as shown in Table 3.17, although the computation time required in the dataset-wise manner was lower than in the sample-wise manner.

3.8.2 Bioreactor process
Table 3.11 presents the 26 datasets, which were generated for the various operating conditions by varying the parameters km (a parameter of both the Monod and the substrate inhibition kinetics), k1 (a parameter of the substrate inhibition kinetics only) and λ, the tuning parameter of the direct synthesis controller. The 26 datasets pertaining to the four operating conditions were considered as the historical database, and 4 snapshot datasets were considered. Four optimum clusters were obtained using the similarity based modified K-means algorithm, and the faulty operating condition was well captured by cluster 4; the derived cluster purity and efficiency were both 100 %, as presented in Table 3.18. Pattern matching was again performed in a sample-wise as well as dataset-wise manner; the similarity factors obtained for dataset-wise pattern matching are shown in Table 3.19. The pattern matching accuracy and efficiency and the efficiency of the algorithm were determined to be 100 %, and the overall performance is the same for both cases, as shown in Table 3.20.
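The factors used throughout these results can be sketched as follows (a hedged illustration following the PCA similarity factor of Krzanowski and the distance-based factor of Singhal and Seborg cited in the references; the equal weighting and the exponential map of the Mahalanobis distance used here are simplifying assumptions, not the thesis's exact formulas):

```python
import numpy as np

def pca_similarity(x1, x2, k=2):
    """S_PCA: compares the subspaces spanned by the first k principal
    components of the two datasets (1 means identical subspaces)."""
    def loadings(x):
        _, _, vt = np.linalg.svd(x - x.mean(axis=0), full_matrices=False)
        return vt[:k].T                          # columns = top-k PC directions
    l1, l2 = loadings(x1), loadings(x2)
    # sum of squared cosines of the principal angles between the subspaces
    return float(np.sum(np.linalg.svd(l1.T @ l2, compute_uv=False) ** 2) / k)

def distance_similarity(x1, x2):
    """Distance factor: Mahalanobis distance between the dataset centres,
    mapped into (0, 1]; an assumed stand-in for the probability-based form."""
    d = x1.mean(axis=0) - x2.mean(axis=0)
    md2 = float(d @ np.linalg.pinv(np.cov(x1, rowvar=False)) @ d)
    return float(np.exp(-0.5 * md2))

def combined_similarity(x1, x2, k=2, alpha=0.5):
    """Equal-weight combination of the two factors (weighting assumed)."""
    return alpha * pca_similarity(x1, x2, k) + (1 - alpha) * distance_similarity(x1, x2)

rng = np.random.default_rng(1)
a = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])   # one operating condition
b = a + 0.01 * rng.normal(size=(200, 3))                   # near-identical condition
c = a @ np.diag([1.0, 5.0, 0.2]) + 4.0                     # shifted, reshaped condition
print(combined_similarity(a, b) > combined_similarity(a, c))  # True
```

A matching snapshot/reference pair therefore scores close to 1, which is consistent with the perfect pool accuracies reported in the tables.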
3.8.3 Jacketed CSTR process
By varying the process parameters, 56 datasets pertaining to the three operating conditions were created. Figure 3.9 presents the response of the CSTR process under the faulty operating condition. The entire database was grouped into 3 clusters; the K-means clustering algorithm using the combined similarity factors was repeated 56 times. The resulting clustering efficiency and purity are 100 % each, and Table 3.21 presents the clustering performance. Pattern matching was done using the moving window in a sample-wise as well as dataset-wise manner, with 3 snapshot datasets considered. The similarity factors obtained for dataset-wise pattern matching are shown in Table 3.22. The pattern matching efficiency and the efficiency of the algorithm were determined to be 100 %, and the overall performance of the moving window based pattern matching technique is the same for both cases, as shown in Table 3.23.

3.8.4 Tennessee Eastman process
The proposed similarity based pattern matching technique was successfully applied to the TE process with its 13 operating conditions. The combined similarity factors obtained for the matching pairs were greater than 0.96. The similarity factors obtained for dataset-wise pattern matching are shown in Table 3.24, where S represents a snapshot dataset and R the designated historical dataset; the pattern matching performance is summarized in Table 3.25.

3.9 CONCLUSION
A sample-wise moving window based pattern matching technique was developed with a view to process monitoring. The PCA and distance based combined similarity factors provided an effective way of pattern matching in the multivariate time series databases of the processes considered. The proposed approach successfully located arbitrarily chosen operating conditions of the current period of interest within the historical databases of a drum boiler, a continuous bioreactor, a jacketed CSTR and the Tennessee Eastman process. With the other proposed technique, the time series data pertaining to the various operating conditions of the considered processes, including abnormal ones, were discriminated and classified
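A rough sketch of that classification step (an assumed, simplified stand-in: a K-medoids-style variant replaces the thesis's modified K-means, and a simple mean-based similarity replaces the combined similarity factor; all names are illustrative):

```python
import numpy as np

def cluster_datasets(datasets, k, similarity, n_iter=20):
    """K-medoids-style grouping of whole datasets: the dissimilarity between
    two datasets is 1 - similarity, and each cluster is represented by the
    member dataset minimising the total within-cluster dissimilarity."""
    n = len(datasets)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = 1.0 - similarity(datasets[i], datasets[j])
    # Deterministic farthest-point seeding, then assign/update iterations.
    medoids = [int(np.argmax(d.sum(axis=1)))]
    while len(medoids) < k:
        medoids.append(int(np.argmax(d[:, medoids].min(axis=1))))
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)
        new_medoids = []
        for c in range(k):
            members = np.where(labels == c)[0]
            costs = d[np.ix_(members, members)].sum(axis=0)
            new_medoids.append(int(members[np.argmin(costs)]))
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return np.argmin(d[:, medoids], axis=1)

# Toy historical database: two operating conditions with different output levels.
rng = np.random.default_rng(1)
cond_a = [rng.normal(0.0, 1.0, size=(100, 2)) for _ in range(5)]
cond_b = [rng.normal(8.0, 1.0, size=(100, 2)) for _ in range(5)]

def mean_sim(x, y):
    # Hypothetical similarity: 1 for identical dataset means, toward 0 far apart.
    return float(np.exp(-np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))))

labels = cluster_datasets(cond_a + cond_b, k=2, similarity=mean_sim)
print(sorted(np.bincount(labels).tolist()))  # [5, 5]: one cluster per condition
```

In the thesis's setting, k is the number of operating conditions and the combined PCA/distance similarity factor plays the role of `mean_sim` here.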
efficiently using a similarity based modified K-means clustering algorithm. The present work demanded an extensive modeling and simulation activity for the benchmark processes taken up, including the prestigious Tennessee Eastman challenge process and the industrial-scale drum boiler process. Efficient modeling and simulation of the processes were the key factors behind the generation of the design databases required for the successful implementation of the proposed monitoring techniques. The developed machine learning algorithms, with their encouraging performance, deserve immense significance in the current perspective of process monitoring.

Table 3.1 Values of the drum boiler model parameters

S. No.  Parameter  Value
1  Riser volume  37 m3
2  Downcomer volume  11 m3
3  Total volume  88 m3
4  Drum area at normal operating level  20 m2
5  Downcomer flow area  0.355 m2
6  Riser metal mass  160e3 kg
7  Total metal mass  300e3 kg
8  Drum metal mass  100e3 kg
9  Friction coefficient in downcomer-riser loop  25
10  Empirical coefficient  0.3
11  Residence time of steam in drum  12 sec
12  Bubble volume coefficient  10 m3
13  Acceleration due to gravity  9.81 m/sec2
14  Drum volume  40 m3
Table 3.2 Bioreactor process parameters

Parameter  Value
µmax  0.53 h-1
km  0.12 g/L
k1  0.4545 L/g
Y  0.4
fs  4.0 g/L
Ds  0.3 h-1
x1s (at stable operating point)  1.5302 g/L
x2s (at stable operating point)  0.1746 g/L
x1s (at unstable operating point)  0.995103 g/L
x2s (at unstable operating point)  1.512243 g/L

Table 3.3 CSTR parameters used in the CSTR simulation

Parameter  Value
F/V (hr-1)  1
ko (hr-1)  9703*3600
(−ΔH) (kcal/kgmol)  5960
E (kcal/kgmol)  11843
ρCP (kcal/(m3 °C))  500
Tf (°C)  25
CAF (kgmol/m3)  10
UA/V (kcal/(m3 °C hr))  150
Tj (°C)  25

Table 3.4 Process stream data: heat and material balance data for the TE process

Stream (number): molar flow (kgmol/h), mass flow (kg/h), temperature (°C); main mole fractions
A feed (1): 11.2, 22.4, 45; A 0.99990, B 0.00010
B feed (2): 114.5, 3664.0, 45; D 0.99990, E 0.00010
E feed (3): 98.0, 4509.3, 45; E 0.99990, D 0.00010
C feed (4): 417.5, 6419.4, 45; A 0.48500, B 0.00500, C 0.51000
Stripper overhead (5): 465.7, 8979.2, 65.6
Reactor feed (6): 1890.8, 48015, 86.1; A 0.32188, B 0.08893, C 0.26383, D 0.06882, E 0.18776
Reactor product (7): 1476.0, 48015, 120.4
Recycle (8): 1201.5, 30840
Purge (9): 15.1, 1430, 80.1
Separator liquid (10): 259.3, 14288, 80.1
Product (11): 211.3, 16788, 65.7

Unit operation data:
Reactor: 120.4 °C, 2705 kPa gauge
Separator: 80.1 °C, 2633.7 kPa gauge
Stripper: 65.7 °C, 3102.2 kPa gauge
Condenser: heat duty 2140 kW
Utilities: reactor cooling water flow 93.37 m3/h; condenser cooling water flow 49.37 m3/h; stripper steam flow 230.31 kg/h

Table 3.5 Component physical properties (at 100 °C) in the TE process

Component: molecular weight; liquid density (kg/m3); liquid heat capacity (kJ/kg °C); vapour heat capacity (kJ/kg °C); heat of vaporization (kJ/kg)
A: 2; –; –; 14.6; –
B: 25.4; –; –; 2.04; –
C: 28; –; –; 1.05; –
D: 32; 299; 7.66; 1.85; 202
E: 46; 365; 4.17; 1.87; 372
F: 48; 328; 4.45; 2.02; 372
G: 62; 612; 2.55; 0.712; 523
H: 76; 617; 2.45; 0.628; 486

Vapor pressure (Antoine equation): P = exp(A + B/(T + C)), with P in Pa and T in °C

Component  Constant A  Constant B  Constant C
D  20.81  −1444  259
E  21.24  −2114  266
F  21.24  −2144  266
G  21.32  −2748  233
H  22.10  −3318  250

#Vapor pressure parameters are not listed for components A, B and C because they are effectively noncondensible.

Table 3.6 Process manipulated variables in the TE process

Variable  Name  Units  Base case value (%)
XMV (1)  D feed flow (stream 2)  kg/h  63.053
XMV (2)  E feed flow (stream 3)  kg/h  53.980
XMV (3)  A feed flow (stream 1)  kscmh  24.644
XMV (4)  A and C feed flow (stream 4)  kscmh  61.302
XMV (5)  Compressor recycle valve  %  22.210
XMV (6)  Purge valve (stream 9)  %  40.064
XMV (7)  Separator pot liquid flow (stream 10)  m3/h  38.100
XMV (8)  Stripper liquid product flow (stream 11)  m3/h  46.534
XMV (9)  Stripper steam valve  %  47.446
XMV (10)  Reactor cooling water flow  m3/h  41.106
XMV (11)  Condenser cooling water flow  m3/h  18.114
XMV (12)  Agitator speed  rpm  50.000

Table 3.7 Process disturbances in the TE process

Variable  Process variable  Type
IDV (1)  A/C feed ratio, B composition constant (stream 4)  Step
IDV (2)  B composition, A/C ratio constant (stream 4)  Step
IDV (3)  D feed temperature (stream 2)  Step
IDV (4)  Reactor cooling water inlet temperature  Step
IDV (5)  Condenser cooling water inlet temperature  Step
IDV (6)  A feed loss (stream 1)  Step
IDV (7)  C header pressure loss, reduced availability (stream 4)  Step
IDV (8)  A, B, C feed composition (stream 4)  Random variation
IDV (9)  D feed temperature (stream 2)  Random variation
IDV (10)  C feed temperature (stream 4)  Random variation
IDV (11)  Reactor cooling water inlet temperature  Random variation
IDV (12)  Condenser cooling water inlet temperature  Random variation
IDV (13)  Reaction kinetics  Slow drift
IDV (14)  Reactor cooling water valve  Sticking
IDV (15)  Condenser cooling water valve  Sticking
IDV (16) to IDV (20)  Unknown  Unknown
Table 3.8 Continuous process measurements in the TE process

Variable  Name  Base case value  Units
XMEAS (1)  A feed (stream 1)  0.25052  kscmh
XMEAS (2)  D feed (stream 2)  3664.0  kg/h
XMEAS (3)  E feed (stream 3)  4509.3  kg/h
XMEAS (4)  A and C feed (stream 4)  9.3477  kscmh
XMEAS (5)  Recycle flow (stream 8)  26.902  kscmh
XMEAS (6)  Reactor feed rate (stream 6)  42.339  kscmh
XMEAS (7)  Reactor pressure  2705.0  kPa gauge
XMEAS (8)  Reactor level  75.000  %
XMEAS (9)  Reactor temperature  120.40  °C
XMEAS (10)  Purge rate (stream 9)  0.33712  kscmh
XMEAS (11)  Product separator temperature  80.109  °C
XMEAS (12)  Product separator level  50.000  %
XMEAS (13)  Product separator pressure  2633.7  kPa gauge
XMEAS (14)  Product separator underflow (stream 10)  25.160  m3/h
XMEAS (15)  Stripper level  50.000  %
XMEAS (16)  Stripper pressure  3102.2  kPa gauge
XMEAS (17)  Stripper underflow (stream 11)  22.949  m3/h
XMEAS (18)  Stripper temperature  65.731  °C
XMEAS (19)  Stripper steam flow  230.31  kg/h
XMEAS (20)  Compressor work  341.43  kW
XMEAS (21)  Reactor cooling water outlet temperature  94.599  °C
XMEAS (22)  Separator cooling water outlet temperature  77.297  °C

Table 3.9 Sampled process measurements in the TE process

Reactor feed analysis (stream 6); sampling frequency = 0.1 h, dead time = 0.1 h:
XMEAS (23) A 32.188 mol%; XMEAS (24) B 8.8933 mol%; XMEAS (25) C 26.383 mol%; XMEAS (26) D 6.8820 mol%; XMEAS (27) E 18.776 mol%; XMEAS (28) F 1.6567 mol%

Purge gas analysis (stream 9); sampling frequency = 0.1 h, dead time = 0.1 h:
XMEAS (29) A 32.958 mol%; XMEAS (30) B 13.823 mol%; XMEAS (31) C 23.978 mol%; XMEAS (32) D 1.2565 mol%; XMEAS (33) E 18.579 mol%; XMEAS (34) F 2.2633 mol%; XMEAS (35) G 4.8436 mol%; XMEAS (36) H 2.2986 mol%

Product analysis (stream 11); sampling frequency = 0.25 h, dead time = 0.25 h:
XMEAS (37) D 0.01787 mol%; XMEAS (38) E 0.83570 mol%; XMEAS (39) F 0.09858 mol%
XMEAS (40) G 53.724 mol%; XMEAS (41) H 43.828 mol%

The analyzer sample frequency is how often the analyzer takes a sample of the stream. The dead time is the time gap between sample collection and the completion of the sample analysis. For an analyzer with a sampling frequency of 0.1 h and a dead time of 0.1 h, a new measurement is available every 0.1 h, and the measurement is 0.1 h old.

Table 3.10 Various operating conditions for the drum boiler process

Operating condition  Parameter range  No. of datasets
1. Impulse changes in qf and qs (open loop)  4 ≤ qf ≤ 6, 2 ≤ qs ≤ 3  2
2. Simultaneous step changes in the three manipulated inputs  4 ≤ qf ≤ 10, 2 ≤ qs ≤ 5, 15 ≤ Q ≤ 25  7
3. Negative change in feed water flow (fault)  –  3
4. Simultaneous high, medium and low frequency sinusoidal changes in the three inputs  4 ≤ qf ≤ 6  4

Table 3.11 Database corresponding to various operating conditions for the biochemical reactor process

Op. Cond.  Parameter range  No. of datasets
1  0.015 ≤ Km ≤ 0.12  10
2  0.04545 ≤ K1 ≤ 0.4545  8
3  λ = {5, 10, 15, 20}  5
4  λ = {−5, −10, −15, −20, −25, −30} (faulty operating cond.)  3

Table 3.12 Generated database for the CSTR process at various operating modes

Op. Cond.  Description  Parameter range  No. of datasets
1  Stable operating point (CA = 2.359, T = 368.1)  0.2208 ≤ λ ≤ …, 295 ≤ T ≤ 415  24
2  Unstable operating point (CA = 5.518, T = 339.1)  2.9446 ≤ λ ≤ …, 295 ≤ T ≤ 415  22
3  Faulty operating condition  –  10

Tuning parameters of controllers
Table 3.13 Operating conditions for the TE process

Mode 1 (8 datasets):
FNM1  Normal operation
FM11  15% decrease in production set point
FM12  15% decrease in reactor pressure
FM13  10% decrease in %G set point
FMD11  Step change in A/C feed ratio, B composition constant
FMD12  Step change in B composition, A/C ratio constant
FMD13  Simultaneous step changes in IDV (1) to IDV (7)
FMD14  Simultaneous step changes in IDV (1) to IDV (7) and random variation in IDV (8) to IDV (12)

Mode 3 (5 datasets):
FMN3  Normal operation
FM31  Simultaneous 10% decrease in reactor pressure and level
FM32  Simultaneous 10% decrease in reactor pressure, reactor level, stripper level and %G in product
FMD31  Simultaneous step changes in IDV (1) to IDV (8)
FMD32  Simultaneous step changes in IDV (1) to IDV (7) and random variation in IDV (8) to IDV (12)

Table 3.14 Constraints in the TE process

Process variable: normal operating limits (low / high); shutdown limits (low / high)
Reactor pressure: none / 2895 kPa; none / 3000 kPa
Reactor level: 50% (11.8 m3) / 100% (21.3 m3); 2.0 m3 / 24.0 m3
Reactor temperature: none / 150 °C; none / 175 °C
Product separator level: 30% (3.3 m3) / 100% (9.0 m3); 1.0 m3 / 12.0 m3
Stripper base level: 30% (3.5 m3) / 100% (6.6 m3); 1.0 m3 / 8.0 m3

Table 3.15 Combined similarity factor based clustering performance for the drum boiler process

Cluster no.  Op. cond. 1  Op. cond. 2  Op. cond. 3  Op. cond. 4  η (%)
1  2  0  0  0  100
2  0  5  0  0  100
3  0  0  4  0  100
4  0  0  0  3  100
Cluster purity P = 100 % for each cluster; average η = 100 %.
Table 3.16 Similarity factors in dataset-wise moving window based pattern matching for the drum boiler process

Each snapshot dataset scores a combined similarity factor of 1.00 against the reference dataset of its own operating condition, while its similarity to the other reference datasets remains between 0.004 and 0.563.

Table 3.17 Moving window based pattern matching performance for the drum boiler process

Snapshot  NP  N1 (correctly identified)  N2 (incorrectly identified)  Pattern efficiency ξ (%)  Pool accuracy P (%)  Algorithm efficiency (%)
Op. cond. 1  1  1  0  100  100  100
Op. cond. 2  1  1  0  100  100  100
Op. cond. 3  1  1  0  100  100  100
Op. cond. 4  1  1  0  100  100  100

Table 3.18 Combined similarity factor based clustering performance for the biochemical reactor process

Cluster no.  Op. cond. 1  Op. cond. 2  Op. cond. 3  Op. cond. 4  η (%)
1  10  0  0  0  100
2  0  8  0  0  100
3  0  0  5  0  100
4 (faulty cond.)  0  0  0  3  100
Cluster purity P = 100 % for each cluster.
Table 3.19 Similarity factors in dataset-wise moving window based pattern matching for the bioreactor process

The combined similarity factor between each snapshot and the reference dataset of its own operating condition is at least 0.9935 (1.00 for most pairs), while the similarity to the other reference datasets lies between 0.055 and 0.667.

Table 3.20 Moving window based pattern matching performance for the bioreactor process

Snapshot  NP  N1 (correctly identified)  N2 (incorrectly identified)  Pattern efficiency ξ (%)  Pool accuracy P (%)  Algorithm efficiency (%)
Op. cond. 1  1  1  0  100  100  100
Op. cond. 2  1  1  0  100  100  100
Op. cond. 3  1  1  0  100  100  100
Op. cond. 4  1  1  0  100  100  100

Table 3.21 Combined similarity factor based clustering performance for the jacketed CSTR process

Cluster no.  Op. cond. 1  Op. cond. 2  Op. cond. 3  η (%)
1  24  0  0  100
2  0  22  0  100
3  0  0  10  100
Cluster purity P = 100 % for each cluster.
Table 3.22 Similarity factors in dataset-wise moving window based pattern matching for the jacketed CSTR process

The diagonal (matching) entries of the 3 × 3 snapshot/reference matrix equal 1.000, while the off-diagonal entries are 0.6442 (four entries) and 0.6700 (one symmetric pair).

Table 3.23 Moving window based pattern matching performance for the jacketed CSTR process

Snapshot  NP  N1 (correctly identified)  N2 (incorrectly identified)  Pattern efficiency ξ (%)  Pool accuracy P (%)  Algorithm efficiency (%)
Op. cond. 1  1  1  0  100  100  100
Op. cond. 2  1  1  0  100  100  100
Op. cond. 3  1  1  0  100  100  100

Table 3.24 Combined similarity factors in dataset-wise moving window based pattern matching for the TE process (S denotes a snapshot dataset, R the designed historical dataset)

For each of the thirteen snapshot datasets S1 to S13, the combined similarity factor with the corresponding reference dataset is essentially 1.000, while the factors with all other reference datasets remain below about 0.67, the smallest values falling near 0.01.
Table 3.25 Pattern matching performance for the TE process

Snapshot  Size of candidate pool NP  N1  N2  Pattern matching efficiency (%)  Pool accuracy (%)
FNM1  1  1  0  100  100
FM11  1  1  0  100  100
FM12  1  1  0  100  100
FM13  1  1  0  100  100
FMD11  1  1  0  100  100
FMD12  1  1  0  100  100
FMD13  1  1  0  100  100
FMD14  1  1  0  100  100
FMN3  1  1  0  100  100
FM31  1  1  0  100  100
FM32  1  1  0  100  100
FMD31  1  1  0  100  100
FMD32  1  1  0  100  100
Fig. 3.3 Tennessee Eastman plant.
Fig. 3.4 (a) Response of reactor pressure at FMD13. (b) Response of stripper level at FMD13.

Fig. 3.5 (a) Response of reactor pressure at FMD14. (b) Response of reactor level at FMD14.

Fig. 3.6 (a) Response of reactor pressure at FMD31. (b) Response of stripper level at FMD31.

Fig. 3.7 (a) Response of reactor pressure at FMD32. (b) Response of stripper level at FMD32.

Fig. 3.8 Response of the drum boiler process for a faulty operating condition.

Fig. 3.9 Response of the jacketed CSTR process under the faulty operating condition (CA, kgmol/m3, and T, °C, versus time, sec).
REFERENCES

1. Åström, K. J., Bell, R. D., "Drum-boiler dynamics," Automatica, 36: 363–378, 2000.
2. Pellegrinetti, G., Bentsman, J., "Nonlinear control oriented boiler modeling - a benchmark problem for controller design," IEEE Trans. Contr. Syst. Technology, 4 (1), 1996.
3. Tan, W., Marquez, H. J., Chen, T., "Multivariable robust controller design for a boiler system," IEEE Trans. Contr. Syst. Technology, 10 (5), 2002.
4. Edwards, V. H., Ko, R. C., Balogh, S. A., "Dynamics and control of continuous microbial propagators subject to substrate inhibition," Biotechnol. Bioeng., 14: 939–974, 1972.
5. Jawahar, N., Pappa, N., "Self-tuning nonlinear controller," Proceedings of the 6th WSEAS Int. Conf. on Fuzzy Systems, Lisbon, Portugal, 2005: 118–126.
6. Krzanowski, W. J., "Between-groups comparison of principal components," J. Amer. Stat. Assoc., 74: 703–707, 1979.
7. Rissanen, J., "Modeling by shortest data description," Automatica, 14: 465–471, 1978.
8. Smyth, P., "Clustering using Monte Carlo cross-validation," Proc. 2nd Intl. Conf. on Knowledge Discovery and Data Mining (KDD-96), Portland, OR, 1996: 126–133.
9. Johannesmeyer, M. C., Singhal, A., Seborg, D. E., "Pattern matching in historical data," AIChE J., 48: 2022–2038, 2002.
10. Singhal, A., Seborg, D. E., "Matching patterns from historical data using PCA and distance similarity factors," Proceedings of the 2001 American Control Conference, IEEE: Piscataway, NJ, 2001: 1759–1764.
11. Singhal, A., Seborg, D. E., "Pattern matching in multivariate time series databases using a moving window approach," Ind. Eng. Chem. Res., 41: 3822–3838, 2002.
12. Singhal, A., Seborg, D. E., "Clustering multivariate time series data," J. Chemometrics, 19: 427–438, 2005.
13. Kourti, T., MacGregor, J. F., "Multivariate SPC methods for process and product monitoring," J. Quality Tech., 28: 409–428, 1996.
14. Martin, E., Morris, A. J., "An overview of multivariate statistical process control in continuous and batch performance monitoring," Trans. Inst. Meas. Control, 18: 51–60, 1996.
15. Downs, J. J., Vogel, E. F., "A plant-wide industrial process control problem," Comput. Chem. Engr., 17 (3): 245–255, 1993.
16. Lyman, P. R., Georgakis, C., "Plant-wide control of the Tennessee Eastman problem," Comput. Chem. Engr., 19 (3): 321–331, 1995.
17. Ricker, N. L., "Optimal steady-state operation of the Tennessee Eastman challenge process," Comput. Chem. Engr., 19 (9): 949–959, 1995.
18. Ricker, N. L., "Nonlinear model predictive control of the Tennessee Eastman challenge process," Comput. Chem. Engr., 19 (9), 1995.
19. Ricker, N. L., "Decentralized control of the Tennessee Eastman challenge process," J. Proc. Cont., 6 (4): 205–221, 1996.
20. Luyben, W. L., "Simple regulatory control of the Eastman process," Ind. Eng. Chem. Res., 35: 3280–3289, 1996.
21. Vinson, D. R., et al., "Studies in plant-wide controllability using the Tennessee Eastman challenge problem, the case for multivariable control," Proceedings of the American Control Conference, Seattle, Washington, June 1995.
22. Zheng, A., et al., Proceedings of the American Control Conference, Philadelphia, Pennsylvania, June 1998.
23. Zerkaoui, S., Druaux, F., Leclercq, E., Lefebvre, D., "Indirect neural control for plant-wide systems: application to the Tennessee Eastman challenge process," Comput. Chem. Engr., 2010.
24. Zumoffen, D., Basualdo, M., Molina, D., "Plant-wide control strategy applied to the Tennessee Eastman process at two operating points," Comput. Chem. Engr., 30: 232–243, 2010.
25. Agrawal, P., Lim, H. C., "Analysis of various control schemes for continuous bioreactors," Adv. Biochem. Engr., 30: 61–90, 1984.
26. Menawat, A., Balachander, J., "Alternate control structures for chemostat," 37 (2): 302–306, 1991.
27. Damarla, S. K., Kaushikram, K. S., Kundu, M., "Design of neural controllers for various configurations of continuous bioreactor," International Conference on System Dynamics and Control (ICSDC), 2010.
APPLICATION OF MULTIVARIATE STATISTICAL AND NEURAL PROCESS CONTROL STRATEGIES
4.1 INTRODUCTION
Multiloop (decentralized) conventional control systems, especially PID controllers, are often used to control interacting multiple-input, multiple-output processes because they are easy to understand and require fewer parameters than more general multivariable controllers. One of the earlier approaches to multivariable control was decoupling the process to reduce the loop interactions. Different tuning methods, such as the BLT detuning method, the sequential loop tuning method, the independent loop tuning method and the relay auto-tuning method, have been proposed by researchers for tuning multiloop conventional controllers. For multivariable processes, partial least squares (PLS) based controllers offer the opportunity to be designed as a series of SISO controllers (Qin and McAvoy (1992, 1993) [12]). PLS based process identification and control have been chronicled by researchers like Kaspar and Ray (1993) and Lakshminarayanan et al. (1997) [34]. In view of this, neural network based PLS (NNPLS) controllers have been proposed for controlling nonlinear complex processes; to date, there is no literature reference for the NNPLS controller except the contribution by Damarla and Kundu (2011) [5]. In the present work, 2 × 2, 3 × 3 and 4 × 4 distillation processes were taken up to
implement the proposed control strategy, followed by the evaluation of their closed-loop performances. Control strategies using classical and neural controllers have also been incorporated in a plant-wide multistage juice evaporation process plant.

4.2 DEVELOPMENT OF PLS AND NNPLS CONTROLLERS
4.2.1 PLS Controller
Discrete input-output time series data (X, Y), either collected from the plant or generated by perturbing the benchmark processes with pseudo-random binary signals, can be used for the development of PLS controllers. The strong points of PLS identification of multidimensional time series data are as follows:
• While deriving the outer relation among the scores (t and u) of the multidimensional input-output data, the regression occurs at each individual dimension; the multidimensional regression problem gets transformed into a series of one-dimensional regression problems.
• At each dimension, it selects the part of the predictor variable that is the most predictive of the response variable.
• The predictor and response variables are pre- and post-compensated by the loading matrices P and Q, respectively.
The residuals are passed to the successive steps, and the whole procedure is repeated assuming the residuals (E and F) from the previous step as the input-output data of the current step. For modeling a dynamic process, the input data matrix X is augmented either with lagged input variables (called the finite impulse response (FIR) model) or with lagged input and output variables (called the auto-regressive model with exogenous input, ARX). By combining PLS with an inner ARX model structure, dynamic processes can be modeled. Identification of process dynamics in the latent subspace has already been discussed in Chapter 2 (Section 2.6); it logically builds up the framework for PLS based process controllers and is the key component for their successful development.
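The identification step just described can be sketched as follows (an assumed illustration, not the thesis code): the regressor matrix is first augmented with lagged inputs and outputs (the ARX structure), after which a NIPALS-style PLS pass extracts one score pair (t, u) per latent dimension, fits the scalar inner model at that dimension, and deflates before moving to the next:

```python
import numpy as np

def arx_augment(u, y, nu=1, ny=1):
    """Row at time t: [u(t), ..., u(t-nu), y(t-1), ..., y(t-ny)]; target y(t).
    Setting ny = 0 gives the FIR-style augmentation instead."""
    start = max(nu, ny)
    X = np.array([[*(u[t - i] for i in range(nu + 1)),
                   *(y[t - j] for j in range(1, ny + 1))]
                  for t in range(start, len(y))])
    return X, y[start:].reshape(-1, 1)

def nipals_pls(X, Y, n_comp):
    """One-dimension-at-a-time PLS: extract scores (t, u), fit the scalar
    inner relation u = b t, deflate, and repeat on the residuals."""
    X, Y = X.copy(), Y.copy()
    comps = []
    for _ in range(n_comp):
        u = Y[:, [0]]
        for _ in range(200):
            w = X.T @ u; w /= np.linalg.norm(w)
            t = X @ w
            q = Y.T @ t; q /= np.linalg.norm(q)
            u_new = Y @ q
            if np.linalg.norm(u_new - u) < 1e-12:
                break
            u = u_new
        u, b = u_new, float((u_new.T @ t) / (t.T @ t))   # scalar inner model
        p = (X.T @ t) / float(t.T @ t)
        X -= t @ p.T                       # residuals feed the next dimension
        Y -= b * t @ q.T
        comps.append((w, q, b))
    return comps, Y                        # Y is now the output residual

# Toy first-order process y(t) = 0.8 y(t-1) + 0.5 u(t-1)
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1]

X, Y = arx_augment(u, y, nu=1, ny=1)
comps, resid = nipals_pls(X, Y, n_comp=3)
print(np.linalg.norm(resid) < 1e-8)  # True: the latent dimensions reproduce the ARX relation
```

Because each latent dimension carries its own scalar inner model b, a SISO controller can be designed per dimension, which is the structure the PLS controller exploits.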
Hence the PLS identified process dynamics deserve to be more precise and realistic than other time series data based models like ARX, ARMA, and ARMAX.
Series of direct synthesis SISO controllers, designed on the basis of the dynamic models identified in the latent subspaces and embedded in the PLS framework, were used to control the process. This approach has the advantage that, instead of using a typical multivariable controller, independent SISO controllers can be designed based on the inner dynamic model identified at each dimension, along with the error and control signals, the error and control signals being pre- and post-compensated by the loading matrices, respectively. The actually measured output error is projected down to the latent subspace; the controller acts on the projected error, and the score it computes then gets projected onto the loading matrix and transformed into the real physical inputs which drive the process. In this way, MIMO processes could be controlled without any decouplers by a series of SISO controllers, with the PLS loading matrices employed as pre- and post-compensators of the error and control signals. The PLS based control strategy is presented in Figure 4.1.

Fig. 4.1 Schematic of PLS based control; G(s) represents the identified process transfer function in the latent subspaces.

4.2.2 Development of Neural Network PLS Controller (NNPLS)
In PLS based process dynamics, the inner relationship between the input (X) and output (Y) scores, especially for complex and nonlinear processes, may not be well identified by linear or quadratic relationships. In view of this, the present work proposed neural identification of the process dynamics using the latent variables. The inverse dynamics of the latent variable based process was identified as the direct inverse neural network (DINN) controller, using the historical database of latent variables.
The scores corresponding to all the time series data were generated using the principal component decomposition. Instead of the process being identified by a linear ARX based model coupled with least squares regression, the relationship between the T & U scores is estimated by a feed forward back propagation neural network, with the network input arranged in an ARX structure using the process historical database. Several strong points of the NNPLS controllers are as follows:
• A multivariable controller can be realized as a series of SISO NN controllers within a PLS framework.
• Because of the diagonal structure of the dynamic part of the PLS model, input-output pairings are automatic (t1 − u1, t2 − u2, etc.).
• The PLS identified process is somewhat decoupled, owing to the orthogonality of the input scores and the rotation of the input scores to be highly correlated with the output scores.
• The use of a neural network in identifying the inner relation among the input and output scores might capture the process nonlinearity, if any.
• For the input-output pairs participating in training the neural networks, the use of the historical database eliminates the necessity of determining cross correlation coefficients, which would otherwise have guided the selection of the most effective inputs and their corresponding targets for training.
• The set point, or the desirable state, passes to the controller as a variable projected down to the latent subspace. It might then be easier for the controllers to track the set point, because the infeasible part of the set point is not allowed to pass directly to the controllers.
The inverse NN models were used as controllers, typically in a feed forward fashion, with the error adjusted as a bias. In the regulator mode, the latent variable based disturbance process was also identified using a NN.
In the closed loop simulation of the process, the process transfer functions were simulated over a stipulated period of time to generate input-output data, using a signal to noise ratio of 10. The inputs and outputs of the multilayer (three layer) feed forward neural network (FFNN) representing the forward dynamics of the process, in its training and simulation phases, were arranged as follows.
In the training phase, the network inputs were the input and output scores lagged over the past three sampling instants, with the current score as the target (equations (4.1)–(4.2)); in the simulation phase the same ARX arrangement, shifted by one step, was used (equations (4.3)–(4.4)). For the DINN, the simulation phase is synonymous with the control phase. The disturbance rejection was done along with the existing servo mode: in the regulator mode, the effective disturbance transfer function was simulated to produce the disturbance input and output data, and hence their corresponding scores, and the FFNN representing the disturbance dynamics acts as the effective disturbance process. Figure 4.2 represents the NNPLS scheme, in both servo and regulator modes.

Fig. 4.2 NNPLS: a) servo & b) regulatory mode of control

The inputs and outputs of the multilayer (three layer) disturbance FFNN in its training and simulation phases were arranged analogously.
During the training phase of the NNs, the input scores to the network were arranged as per the ARX structure, with the current disturbance score as the target (equations (4.5)–(4.6)); in the simulation mode, the inputs and outputs to the trained networks were arranged as per equations (4.7)–(4.8). The number of hidden layer neurons varies from process to process, and likewise for the various disturbance processes. The training algorithm used was gradient based, and the convergence criterion was the MSE, or mean squared error. In all the multivariable processes considered, as many input-output time series relations as there are outputs were identified using neural networks in a latent variable subspace; inverse neural networks acted as controllers, and the corresponding number of SISO controllers was designed. Neural networks were also used to mimic the disturbance dynamics in all the cases considered. The performances of the networks representing the various processes are presented in Table 4.1.

4.3 IDENTIFICATION & CONTROL OF A 2 × 2 DISTILLATION PROCESS
4.3.1 PLS Controller
A 2 × 2 distillation process was chosen to identify the process dynamics in the latent subspaces and to compare the PLS predicted dynamics with the actual ones. The top product composition X_D and the bottom product composition X_B were controlled by the reflux rate and the
vapour boilup, using PLS based direct synthesis controllers as well as neural controllers. The following transfer function, equation (4.9), was used to simulate the process when perturbed by pseudo random binary signals (1000 samples).
Inputs: u1 = reflux flow rate; u2 = vapour boilup
Disturbances: d1 = feed flow rate; d2 = feed light component mole fraction

\[
G_p(s) = \begin{bmatrix}
\dfrac{0.878\,(1-0.6s)}{(1+0.6s)(75s+1)} & \dfrac{-0.864\,(1-0.6s)}{(1+0.6s)(75s+1)} \\[6pt]
\dfrac{1.082\,(1-0.1s)}{(1+0.1s)(75s+1)} & \dfrac{-1.096\,(1-0.1s)}{(1+0.1s)(75s+1)}
\end{bmatrix} \qquad (4.9)
\]
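The PRBS excitation of equation (4.9) can be reproduced element by element with standard linear simulation tools; the sketch below superposes the four SISO responses. The PRBS switching interval and the noise level are assumptions made for illustration.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)

def allpass_lag(gain, beta, tau):
    """One element gain*(1 - beta*s) / ((1 + beta*s)*(tau*s + 1)) of equation (4.9)."""
    num = [-gain * beta, gain]
    den = np.polymul([beta, 1.0], [tau, 1.0])
    return signal.TransferFunction(num, den)

# 2x2 process of equation (4.9)
G = [[allpass_lag(0.878, 0.6, 75.0), allpass_lag(-0.864, 0.6, 75.0)],
     [allpass_lag(1.082, 0.1, 75.0), allpass_lag(-1.096, 0.1, 75.0)]]

t = np.arange(0.0, 1000.0, 1.0)
# pseudo random binary excitation on both inputs, held for 10 samples
u = np.repeat(rng.choice([-1.0, 1.0], size=(len(t) // 10 + 1, 2)), 10, axis=0)[:len(t)]

y = np.zeros((len(t), 2))
for i in range(2):
    for j in range(2):
        _, yij, _ = signal.lsim(G[i][j], u[:, j], t)
        y[:, i] += yij                        # superpose the SISO responses
y += 0.01 * rng.standard_normal(y.shape)      # small measurement noise
```

The resulting (u, y) records play the role of the plant data from which the inner models below are identified.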
Both FIR based and ARX based inner models were used to identify the process dynamics in the projected subspaces. Equations (4.10)–(4.11) represent the identified ARX based dynamic models for outputs 1 & 2, respectively, and equations (4.12)–(4.13) represent the identified FIR based dynamic models for outputs 1 & 2, respectively.
FIR and ARX based inner correlations between the scores T and U were established, keeping the outer linear structure of the PLS intact. The T scores were generated by projecting the original X matrix onto the corresponding loading matrix, and the predicted outputs corresponding to the inputs within the PLS framework were obtained by post compensating the predicted U scores with the loading matrix Q.
Figures 4.3 and 4.4 present the comparison of the actual plant dynamics, involving the top product composition X_D and the bottom product composition X_B, with the ARX based PLS predicted dynamics.
The desired closed loop transfer function was selected as a second order system, and the controllers were designed as direct synthesis controllers in the following way:

\[
G_{CL}(s) = \begin{bmatrix}
\dfrac{1}{(\lambda s + 1)^2} & 0 \\[6pt]
0 & \dfrac{1}{(\lambda s + 1)^2}
\end{bmatrix} \qquad (4.14)
\]

where \(\lambda\) is the tuning parameter. The resulting direct synthesis controller is:

\[
G_C(s) = \frac{G_{CL}(s)}{G(s)\,\bigl(1 - G_{CL}(s)\bigr)} \qquad (4.15)
\]
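Applying (4.14)–(4.15) to a single latent dimension is easy to verify symbolically. The sketch below assumes a first order inner model g(s) = K/(τs + 1) for one diagonal element, purely for illustration (the actual identified inner models are those of equations (4.10)–(4.13)), and checks that the resulting controller reproduces the desired closed loop response.

```python
import sympy as sp

s, K, tau, lam = sp.symbols('s K tau lambda', positive=True)

g = K / (tau * s + 1)             # assumed first order inner model, one latent dimension
g_cl = 1 / (lam * s + 1) ** 2     # desired closed loop response, one diagonal of (4.14)

# Direct synthesis controller, equation (4.15): Gc = Gcl / (G * (1 - Gcl))
g_c = sp.simplify(g_cl / (g * (1 - g_cl)))

# Closed loop formed by g and g_c; should collapse back to g_cl
closed_loop = sp.simplify(g * g_c / (1 + g * g_c))
```

Note that (1 − G_CL) contributes a factor of s, so the controller contains integral action automatically.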
The performances of the proposed direct synthesis controllers, designed on the basis of equations (4.9)–(4.13) & (4.15) and embedded in the PLS framework, were examined. The PLS controller could track the set point perfectly (a step change in top product composition from 0.99 to 0.996 and a step change in bottom product composition from 0.01 to 0.005). Figures 4.5 and 4.6 illustrate and compare the performance of the PLS controllers (with FIR based / ARX based inner dynamic models) in servo mode.

4.3.2 NNPLS Controller
In the same 2 × 2 distillation process, the output 1 (top product composition) – input 1 (reflux flow rate) and output 2 (bottom product composition) – input 2 (vapour boilup) time series relations were identified using neural networks in a latent variable subspace. Inverse neural networks acted as controllers; to control the process, two SISO controllers were designed. Neural networks were also used to mimic the disturbance dynamics, for the output 1 – input disturbance 1 (d1, feed flow rate) and output 2 – input disturbance 2 (d2, feed light component mole fraction) pairs. All the designed networks were three layered networks, and the design procedure has already been discussed in subsection 4.2.2. Figure 4.7 presents the comparison between the actual process outputs and the NN identified process outputs, namely the top and bottom product compositions.
Figure 4.8 shows the closed loop performance of the two SISO controllers. In servo mode, the first controller proved to be very reliable in set point tracking and reached the steady state value in less than 15 s when the top product composition was stepped from 0.99 to 0.996. The second controller could track the set point change in bottom product composition from 0.01 to 0.005,
reaching the steady state value within 10 s. Figure 4.9 presents the closed loop simulation in regulatory mode, in conjunction with the existing servo, showing the disturbance rejection performance in X_D & X_B.
4.4 IDENTIFICATION & CONTROL OF A 3 × 3 DISTILLATION PROCESS
The following equations, (4.16) & (4.17), are the LTI process and disturbance transfer functions, respectively, of a 3 × 3 distillation process identified by Ogunnaike et al. (1983). NNPLS controllers were proposed to control the process.
\[
G_p(s) = \begin{bmatrix}
\dfrac{0.66\,e^{-2.6s}}{6.7s+1} & \dfrac{-0.61\,e^{-3.5s}}{8.64s+1} & \dfrac{-0.0049\,e^{-s}}{9.06s+1} \\[6pt]
\dfrac{1.11\,e^{-6.5s}}{3.25s+1} & \dfrac{-2.36\,e^{-3s}}{5s+1} & \dfrac{-0.012\,e^{-1.2s}}{7.09s+1} \\[6pt]
\dfrac{-34.68\,e^{-9.2s}}{8.15s+1} & \dfrac{46.2\,e^{-9.4s}}{10.9s+1} & \dfrac{0.87\,(11.61s+1)\,e^{-s}}{(3.89s+1)(18.8s+1)}
\end{bmatrix} \qquad (4.16)
\]

\[
G_D(s) = \begin{bmatrix}
\dfrac{0.14\,e^{-12s}}{6.2s+1} & \dfrac{-0.0011\,(26.32s+1)\,e^{-2.66s}}{(7.85s+1)(14.63s+1)} \\[6pt]
\dfrac{0.53\,e^{-10.5s}}{6.9s+1} & \dfrac{-0.0032\,(19.62s+1)\,e^{-3.44s}}{(7.29s+1)(8.94s+1)} \\[6pt]
\dfrac{-11.54\,e^{-0.6s}}{7.01s+1} & \dfrac{0.32\,e^{-2.6s}}{7.765s+1}
\end{bmatrix} \qquad (4.17)
\]

The following are the outputs, inputs and disturbances of the system:
Outputs: y1 = overhead ethanol mole fraction; y2 = side stream ethanol mole fraction; y3 = 19th tray temperature
Inputs: u1 = reflux flow rate; u2 = side stream product flow rate; u3 = reboiler steam pressure
Disturbances: d1 = feed flow rate; d2 = feed temperature
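The diagonal input-output pairing invoked for this column, supported by RGA analysis, can be checked directly from the steady state gains of equation (4.16): the relative gain array is the elementwise product of the gain matrix with the transpose of its inverse.

```python
import numpy as np

# Steady state gain matrix of equation (4.16)
K = np.array([[  0.66,  -0.61, -0.0049],
              [  1.11,  -2.36, -0.012 ],
              [-34.68,  46.20,  0.87  ]])

# Relative gain array: RGA = K o (K^-1)^T  (elementwise product)
RGA = K * np.linalg.inv(K).T
# The diagonal elements are the largest in each row, supporting the
# y1-u1, y2-u2, y3-u3 pairing used in the NNPLS design.
```

Each row and column of an RGA sums to one, which makes this a convenient self-check.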
The output 1 (overhead ethanol mole fraction, y1) – input 1 (reflux flow rate, u1), output 2 (side stream ethanol mole fraction, y2) – input 2 (side stream product flow rate, u2), and output 3 (19th tray temperature, y3) – input 3 (reboiler steam pressure, u3) time series pairings are by default in the PLS structure, and the pairing was also supported by RGA (relative gain array) analysis; because of the orthogonal structure of PLS, the aforesaid pairings are consequential. The design of the NNPLS controllers was done as per the procedure discussed in subsection 4.2.2. Three SISO NNPLS controllers were designed to control the top and side product compositions and the 19th tray temperature of the 3 × 3 distillation process. The disturbance dynamics of the distillation process were identified in the form of three feed forward back propagation neural networks; the output 1 – input disturbance 1 (feed flow rate), output 2 – input disturbance 1, and output 3 – input disturbance 1 time series relations were considered to manifest the disturbance rejection performance of the designed controllers. Figure 4.10 presents the comparison between the NNPLS predicted dynamics and the simulated plant outputs. Output 3 was comparatively poorly fitted in comparison to the other two outputs, as evidenced by the regression coefficient of the concerned network model. Figure 4.11 presents the NNPLS controller based closed loop responses of the process for all of its outputs in servo mode; it shows the response of the system for a set point change of 0.05 in y1, 0.08 in y2, and no change in y3, and a perfectly decoupled set point tracking performance of the NNPLS control system design is revealed. Figure 4.12 shows the closed loop responses in regulator mode, as a result of a step change in disturbance 1. The designed NNPLS controllers delivered encouraging performance in servo as well as in regulator mode.
4.5 IDENTIFICATION & CONTROL OF A 4 × 4 DISTILLATION PROCESS
The following transfer functions, equations (4.18) & (4.19), are the process and disturbance transfer functions of a 4 × 4 distillation process model adapted from Luyben (1990). NNPLS controllers were proposed to control the process.

\[
G_p(s) = \begin{bmatrix}
\dfrac{4.09\,e^{-1.3s}}{(33s+1)(8.3s+1)} & \dfrac{-6.36\,e^{-0.2s}}{(31.6s+1)(20s+1)} & \dfrac{-0.25\,e^{-0.4s}}{21s+1} & \dfrac{-0.49\,e^{-6s}}{(22s+1)^2} \\[6pt]
\dfrac{-4.17\,e^{-5s}}{45s+1} & \dfrac{6.93\,e^{-1.02s}}{44.6s+1} & \dfrac{-0.05\,e^{-6s}}{(34.5s+1)^2} & \dfrac{1.53\,e^{-3.8s}}{48s+1} \\[6pt]
\dfrac{-1.73\,e^{-18s}}{(13s+1)^2} & \dfrac{5.11\,e^{-12s}}{(13.3s+1)^2} & \dfrac{4.61\,e^{-1.02s}}{18.5s+1} & \dfrac{-5.49\,e^{-0.5s}}{15s+1} \\[6pt]
\dfrac{-11.2\,e^{-2.6s}}{(43s+1)(6.5s+1)} & \dfrac{14\,(10s+1)\,e^{-0.02s}}{(45s+1)(17.4s^2+3s+1)} & \dfrac{0.1\,e^{-0.05s}}{(31.6s+1)(5s+1)} & \dfrac{4.49\,e^{-0.6s}}{(48s+1)(6.3s+1)}
\end{bmatrix} \qquad (4.18)
\]

Equation (4.19) gives the corresponding disturbance transfer function matrix G_D(s), which relates the two disturbances to the four outputs.

The following are the outputs, inputs and disturbances of the system:
Outputs: y1 = top product composition; y2 = side stream composition; y3 = bottom product composition; y4 = temperature difference ΔT (to minimize the energy consumption)
Inputs: u1 = reflux rate; u2 = heat input to the reboiler; u3 = heat input to the stripper; u4 = feed flow rate to the stripper
Disturbances: d1 = feed flow rate; d2 = feed temperature
The dynamics of the distillation process were identified in the form of four feed forward back propagation neural networks: the output 1 (top product composition) – input 1 (reflux rate), output 2 (side stream composition) – input 2 (heat input to the reboiler), output 3 (bottom product composition) – input 3 (heat input to the stripper), and output 4 (temperature difference) – input 4 (feed flow rate to the stripper) pairs were the four SISO processes considered for analysis. Four NNPLS controllers were designed as DINNs of the process, as per the procedure discussed in subsection 4.2.2. The disturbance dynamics of the distillation process were identified in the form of four further feed forward back propagation neural networks; the output – input disturbance 1 (feed flow rate) time series relations were considered. Figure 4.13 presents the comparison between the NNPLS model and the simulated plant outputs for the 4 × 4 distillation process. Figure 4.14 presents the closed loop responses of all the process outputs in servo mode for the designed NNPLS controllers; it shows the response of the system for a set point change of 0.85 in y1, 0.1 in y2, 0.05 in y3, and no change in y4, and a perfect set point tracking performance of the NNPLS control system in servo mode. Figure 4.15 shows the closed loop responses of all the process outputs in regulator mode, as a result of a step change in disturbance 1. The designed NNPLS controllers delivered encouraging performance in servo as well as in regulator mode.

4.6 PLANT WIDE CONTROL
One of the important tasks in a sugar producing process is to ensure an optimum working regime for the multiple effect evaporator. A number of feedback and feed forward strategies based on linear and nonlinear models have been investigated in this regard; generic model based control (GMC), generalized predictive control (GPC), linear quadratic Gaussian (LQG) control, and internal model control (IMC) were applied in the past for controlling the multiple effect evaporation process (Anderson & Moore, 1989; Mohtadi et al., 1987) [67].
The multiple effect evaporation process of a sugar industry, shown in Figure 4.16, is considered as the plant. The plant has five effects, and fruit juice having a nominal 15 % sucrose concentration (brix) is concentrated up to 72 %. Juice to be concentrated enters the first effect with concentration C0, temperature T0, enthalpy h0 and flow rate F0. Steam at rate S is injected into the first effect to vaporize water, producing vapor which is directed to the next effect, with a vapor deduction VP1; the first effect liquid at flow rate F1 and concentration C1 goes to the tube side of the second effect, and so on down the train. The vapor from the last effect goes to the condenser, and the syrupy product from the last effect goes to the crystallizer. The liquid holdup of the i-th effect is W_i. The most important measurable disturbance of the plant is the demand of steam by the crystallizer, which is deducted from the third effect; hence it is necessary to control the brix C3, the other control objective being C5. The following model equations were considered to obtain the LTI transfer function for every effect.

Material balance:
dW_i/dt = F_{i-1} − F_i − O_i      (4.20)
where O_1 = K_1 S and O_i = K_i (O_{i-1} − VP_{i-1}) for i = 2–5, the K_i being static gains.

Component balance:
dC_i/dt = (1/W_i) [F_{i-1} C_{i-1} − F_i C_i]      (4.21)

The dynamics of each portion of the plant (each effect here) were identified in the state space domain. The variables with their steady state values are presented in Table 4.2. Two direct synthesis controllers were designed to control C3 and C5; direct inverse neural network (DINN) controllers were also designed to fulfill the same two control objectives, and the performance of the DINN controllers was compared with that of the direct synthesis controllers in servo as well as regulator mode.
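Equations (4.20)–(4.21) can be integrated directly to sketch the concentration build-up along the effects. In the fragment below, every numerical value (the gains K_i, steam rate S, vapor deductions VP_i, feed data, and the assumed level proportional outflow law F_i = 0.5 W_i) is an illustrative assumption, not the plant data of Table 4.2.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 5
Kg = np.full(n, 0.95)                       # static gains K_i (assumed)
S = 25.0                                    # steam rate to the first effect (assumed)
VP = np.array([2.0, 2.0, 5.0, 2.0, 0.0])    # vapor deductions; 3rd effect feeds crystallizer
F0, C0 = 100.0, 0.15                        # juice feed: flow rate and 15 % brix

def rhs(t, x):
    W, C = x[:n], x[n:]
    O = np.empty(n)
    O[0] = Kg[0] * S                        # O1 = K1 * S
    for i in range(1, n):
        O[i] = Kg[i] * (O[i-1] - VP[i-1])   # Oi = Ki * (O_{i-1} - VP_{i-1})
    F = 0.5 * W                             # assumed level proportional outflow law
    Fp = np.concatenate([[F0], F[:-1]])     # F_{i-1}
    Cp = np.concatenate([[C0], C[:-1]])     # C_{i-1}
    dW = Fp - F - O                         # material balance, equation (4.20)
    dC = (Fp * Cp - F * C) / W              # component balance, equation (4.21)
    return np.concatenate([dW, dC])

x0 = np.concatenate([np.full(n, 150.0), np.full(n, 0.15)])
sol = solve_ivp(rhs, (0.0, 200.0), x0, rtol=1e-8)
Wf, Cf = sol.y[:n, -1], sol.y[n:, -1]       # near-steady holdups and brix profile
```

At steady state F_i < F_{i-1}, so the brix C_i rises from effect to effect, as expected of the cascade.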
It is evident from Figures 4.17 and 4.18 that the ANN controllers served well in servo mode, whereas the direct synthesis (model based) controllers did well in their disturbance rejection performance. The NN controllers worked satisfactorily in regulator mode for the chosen plant wide process. The plant wide simulation was done in the SIMULINK platform. In fine, it can be concluded that the processes need to be simulated rigorously under the ASPEN PLUS environment before making any further comments on the applicability of the NN controllers to plant wide processes.

4.7 CONCLUSION
The present chapter reveals the successful implementation of NNPLS controllers in controlling multivariable distillation processes, and the successful implementation of neural controllers in a plant wide process. The physics of data based model identification through PLS and NNPLS, their advantages over other time series models like ARX, ARMA and ARMAX in general, and the merits of NNPLS based control over PLS based control were addressed in this chapter. The excerpts of the present chapter are far reaching.

Table 4.1 Designed networks and their performances in the 2 × 2, 3 × 3 & 4 × 4 distillation processes

System | Mode of operation | Identified NNPLS | MSE | R
2 × 2 | Servo | G1 | 0.0038066 | 0.999
2 × 2 | Servo | G2 | 0.0002064 | 0.994
2 × 2 | Regulatory | Gd1 | 0.00016509 | 0.964
2 × 2 | Regulatory | Gd2 | 0.0098331 | 0.99
3 × 3 | Servo | G1 | 0.00702021 | 0.998
3 × 3 | Servo | G2 | 0.0006922 | 0.965
3 × 3 | Servo | G3 | 0.00068067 | 0.995
3 × 3 | Regulatory | Gd1 | 0.00060809 | 0.962
3 × 3 | Regulatory | Gd2 | 0.001353 | 0.962
3 × 3 | Regulatory | Gd3 | 0.0046903 | 0.9918
4 × 4 | Servo | G1 | 0.0004304 | 0.9965
4 × 4 | Servo | G2 | 0.0016072 | 0.9931
4 × 4 | Servo | G3 | 0.0014292 | 0.996
4 × 4 | Servo | G4 | 0.00012274 | 0.9773
4 × 4 | Regulatory | Gd1 | 0.000666 | 0.9891
4 × 4 | Regulatory | Gd2 | 0.00040426 | 0.9882
4 × 4 | Regulatory | Gd3 | 0.00030866 | 0.9662
4 × 4 | Regulatory | Gd4 | 0.0002519 | 0.9640

Table 4.2 Steady state values for the variables of the juice concentration plant (feed conditions and, for each of the five effects, the concentrations C_i, liquid flow rates F_i and enthalpies h_i, with steam rate S = 56.86)
Fig. 4.3 Comparison between actual and ARX based PLS predicted dynamics for output 1 (top product composition X_D) in the distillation process.

Fig. 4.4 Comparison between actual and ARX based PLS predicted dynamics for output 2 (bottom product composition X_B) in the distillation process.
Fig. 4.5 Comparison of the closed loop performances of ARX based and FIR based PLS controllers for a set point change in X_D from 0.99 to 0.996.

Fig. 4.6 Comparison of the closed loop performances of ARX based and FIR based PLS controllers for a set point change in X_B from 0.01 to 0.005.
Fig. 4.7 Comparison between actual and neurally identified outputs of a 2 × 2 distillation process using projected variables in latent space.
Fig. 4.8 Closed loop response of top and bottom product composition using NNPLS control in servo mode.
Fig. 4.9 Closed loop response of top and bottom product composition using NNPLS control in regulatory mode.
Fig. 4.10 Comparison between actual and NNPLS identified outputs of a 3 × 3 distillation process using projected variables in latent space.

Fig. 4.11 Closed loop response of the three outputs of a 3 × 3 distillation process using NNPLS control in servo mode.
Fig. 4.12 Closed loop response of the three outputs of a 3 × 3 distillation process using NNPLS control in regulator mode.

Fig. 4.13 Comparison between actual and NNPLS identified outputs of a 4 × 4 distillation process.
Fig. 4.14 Closed loop response of the four outputs of a 4 × 4 distillation process using NNPLS control in servo mode.
Fig. 4.15 Closed loop response of the four outputs of a 4 × 4 distillation process using NNPLS control in regulator mode.

Fig. 4.16 Juice concentration plant.
Fig. 4.17 (a) Comparison between NN and model based control in servo mode for maintaining the brix from the 3rd effect of a sugar evaporation process plant using a plant wide control strategy.

Fig. 4.17 (b) Comparison between NN and model based control in servo mode for maintaining the brix from the 5th effect of a sugar evaporation process plant using a plant wide control strategy.
Fig. 4.18 (a) Comparison between NN and model based control in regulator mode for maintaining the brix (at its base value) from the 3rd effect of a sugar evaporation process plant using a plant wide control strategy.

Fig. 4.18 (b) Comparison between NN and model based control in regulator mode for maintaining the brix (at its base value) from the 5th effect of a sugar evaporation process plant using a plant wide control strategy.
REFERENCES
1. Anderson, B. D. O., Moore, J. B., "Optimal Control – Linear Quadratic Methods," Prentice Hall, 1989.
2. Qin, S. J., McAvoy, T. J., "Nonlinear PLS modeling using neural networks," Comput. Chem. Engr., 1992, 16(4): 379–391.
3. Qin, S. J., "A statistical perspective of neural networks for process modelling and control," in Proceedings of the 1993 International Symposium on Intelligent Control, Chicago, IL, 1993: 559–604.
4. Kaspar, M. H., Ray, W. H., "Dynamic modeling for process control," Chemical Eng. Science, 1993, 48(20): 3447–3467.
5. Lakshminarayanan, S., Shah, S. L., Nandakumar, K., "Modeling and control of multivariable processes: The dynamic projection to latent structures approach," AIChE Journal, September 1997, 43: 2307–2323.
6. Mohtadi, C., Clarke, D. W., "Multivariable adaptive control without a priori knowledge of the delay matrix," Syst. Control Lett., 1987, 9: 295–306.
7. Damarla, S. K., Kundu, M., "Design of multivariable NNPLS controller: An alternative to classical controller," communicated to Chemical Product & Process Modeling.
CONCLUSIONS AND RECOMMENDATION FOR FUTURE WORK
5.1 CONCLUSIONS

The present work successfully implemented various MSPC techniques in the identification, monitoring and control of industrially significant processes. Every physical/mathematical model admits a certain degree of empiricism, while purely empirical models are sometimes based on heuristics; hence process monitoring & control on their basis may prove inadequate, especially for nonlinear & high dimensional processes. Data driven models may be a better option than models based on first principles and truly empirical models in this regard. With the advancement of data capture, storage, compression and analysis techniques, multivariate statistical process monitoring (MSPM) and control has become a potential and focused area of R&D activities. Efficient data preprocessing, precise analysis and judicious transformation are the key steps to the success of the data based approach to process identification, monitoring & control. From this perspective, the current project was taken up.

The deliverables of the present dissertation are summarized as follows:

• Implementation of clustering of time series data and moving window based pattern matching for detecting faulty conditions as well as differentiating among various normal operating conditions of a bioreactor, a drum boiler, a continuous stirred tank with cooling jacket and the prestigious Tennessee Eastman challenge process. Both techniques delivered encouraging performance, and the issues regarding their implementation were also addressed.
• Identification of time series data/process dynamics & disturbance dynamics with supervised artificial neural network (ANN) models.

• Identification of time series data/process dynamics in the latent subspace using partial least squares (PLS).

• Design of multivariable controllers in the latent subspace using the PLS framework. For multivariable processes, the PLS based controllers offered the opportunity to be designed as a series of decoupled SISO controllers.

• Neural nets trained with the inverse dynamics of the process, i.e. direct inverse neural networks (DINNs), acted as controllers. Latent variable based DINNs embedded in the PLS framework were termed NNPLS controllers. The closed loop performance of NNPLS controllers on 2 × 2, 3 × 3 and 4 × 4 distillation processes was encouraging in both servo and regulator modes.

• Model based direct synthesis and DINN controllers were incorporated for controlling brix concentrations in a multiple effect evaporation process plant; their performances were comparable in servo mode, but the model based direct synthesis controllers were better at disturbance rejection.

In fine, it can be claimed that the present work not only addressed the monitoring & control of some modern day industrial problems, it also enhanced the understanding of data based models, their issues, their scopes and their implementation in the current perspective. The subject matter of the present dissertation seems suitable to cater to the needs of the present time.

5.2 RECOMMENDATION FOR FUTURE WORK

• Plant wide control using nontraditional control system design should be implemented in complex processes.

• Integration of statistical process monitoring and control for industrial automation.

• Building of expert systems or knowledge based systems (KBS), which emulate the decision making capabilities and knowledge of a human expert in a specific field.
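The moving window based pattern matching listed among the deliverables can be illustrated with a minimal sketch. It scores each historical window against a template window using a PCA similarity factor (the mean squared cosine between the two leading principal subspaces), which is one common choice for this task and not necessarily the exact variant used in this work; the function names (`pca_similarity`, `moving_window_match`) and the window parameters are illustrative.

```python
import numpy as np

def pca_loadings(X, k):
    # Standardize the window column-wise, then take the top-k right
    # singular vectors as the PCA loading matrix (n_vars x k).
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Vt[:k].T

def pca_similarity(X1, X2, k=2):
    # PCA similarity factor: mean squared cosine of the angles between
    # the two k-dimensional principal subspaces; 1 means identical
    # subspaces, 0 means orthogonal ones.
    L1, L2 = pca_loadings(X1, k), pca_loadings(X2, k)
    return np.trace(L1.T @ L2 @ L2.T @ L1) / k

def moving_window_match(history, template, window, k=2, step=1):
    # Slide a window over the historical record and score each position
    # against the template (e.g. a known fault or normal-operation
    # signature); high-scoring windows are candidate matches.
    scores = []
    for start in range(0, len(history) - window + 1, step):
        scores.append(pca_similarity(history[start:start + window], template, k))
    return np.array(scores)
```

In use, windows whose similarity exceeds a chosen threshold (selected from training data) would be flagged as matching the template operating condition.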
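The PLS based identification and control work summarized above rests on decomposing the input and output data into a small number of latent components. A minimal sketch of the classical NIPALS decomposition for a multivariable X–Y data set is given below; the function names, tolerances and in-function mean centering are illustrative simplifications, not a reproduction of the exact implementation used in this dissertation.

```python
import numpy as np

def nipals_pls(X, Y, n_comp, max_iter=500, tol=1e-12):
    # Decompose (X, Y) into n_comp latent components via NIPALS,
    # returning scores T, X-loadings P, X-weights W, Y-loadings Q and
    # the inner (score-to-score) regression coefficients b.
    X = X - X.mean(axis=0)          # mean centering (scaling omitted)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    T = np.zeros((n, n_comp)); P = np.zeros((X.shape[1], n_comp))
    W = np.zeros((X.shape[1], n_comp)); Q = np.zeros((Y.shape[1], n_comp))
    b = np.zeros(n_comp)
    for i in range(n_comp):
        u = Y[:, 0].copy()                       # start from a Y column
        for _ in range(max_iter):
            w = X.T @ u; w /= np.linalg.norm(w)  # X weights
            t = X @ w                            # X scores
            q = Y.T @ t; q /= np.linalg.norm(q)  # Y loadings
            u_new = Y @ q                        # Y scores
            if np.linalg.norm(u_new - u) < tol * np.linalg.norm(u_new):
                u = u_new; break
            u = u_new
        p = X.T @ t / (t @ t)                    # X loadings
        b[i] = (u @ t) / (t @ t)                 # inner regression
        X = X - np.outer(t, p)                   # deflate X
        Y = Y - b[i] * np.outer(t, q)            # deflate Y
        T[:, i], P[:, i], W[:, i], Q[:, i] = t, p, w, q
    return T, P, W, Q, b

def pls_coefficients(P, W, Q, b):
    # Regression matrix mapping centered X to centered Y.
    return W @ np.linalg.inv(P.T @ W) @ np.diag(b) @ Q.T
```

Because each component is a decoupled score-to-score regression, a SISO controller can be designed per latent dimension, which is the property the NNPLS controllers above exploit.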