Article info

Article history:
Received 30 October 2015
Received in revised form 12 December 2015
Accepted 15 December 2015
Available online 24 December 2015

Keywords:
Dynamic kernel partial least squares (D-KPLS)
Quality prediction
Fault detection
Data-based process monitoring

Abstract

In this paper, a new dynamic kernel partial least squares (D-KPLS) modeling approach and a corresponding process monitoring method are proposed. The contributions are as follows: (1) Unlike standard kernel partial least squares (KPLS), which performs an oblique decomposition of the measurement space, D-KPLS performs an orthogonal decomposition that separates the measurement space into a quality-related part and a quality-unrelated part. (2) Compared with the standard KPLS algorithm, the new algorithm, D-KPLS, builds a dynamic relationship between measurements and quality indices. (3) By introducing a forgetting factor into the model, samples gathered at different historical times are assigned different weights, so the D-KPLS model builds a more robust relationship between input and output variables than the standard KPLS model. On the basis of the proposed D-KPLS algorithm, corresponding process monitoring and quality prediction methods are proposed. The D-KPLS monitoring method is used to monitor a numerical example and the Tennessee Eastman (TE) process, and faults are detected accurately by the proposed D-KPLS model. The case studies show the effectiveness of the proposed approach.

© 2016 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.

∗ Corresponding author.
E-mail address: zhangyingwei@mail.neu.edu.cn (Y. Zhang).
http://dx.doi.org/10.1016/j.cherd.2015.12.015
0263-8762/© 2016 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
Chemical Engineering Research and Design 106 (2016) 242–252
1. Introduction

PLS modeling has been a useful tool for building the relationship between measurements and quality indices in the multivariate industrial process monitoring field. PLS, as a prevalent data-driven modeling method, has satisfactory performance in quality prediction and industrial process monitoring. Process monitoring based on the PLS approach extracts a few latent variables from highly correlated measurements according to the covariance between measurements and quality variables. By projecting the measurements onto the latent space, the quality-related part and the quality-unrelated part can be separated and monitored, respectively. For the purpose of process monitoring, extended PLS methods have also been proposed. For example, Herman Wold and Svante Wold proposed a multivariate projection method for multi-block data (Wold, 1982; Wold et al., 1987). Helland et al. (1992) proposed a recursive PLS (RPLS) algorithm to update the PLS model with the latest process data. Several preprocessing and postprocessing modifications of PLS, such as orthogonal signal correction PLS (OSC-PLS) (Antti et al., 1998) and total projection to latent structures (T-PLS) (Zhou et al., 2010; Qin et al., 2001; Li et al., 2010, 2011a), have been proposed. With T-PLS models, which decompose the measurement space further, quality-related fault diagnosis can be performed effectively for multivariate processes. For nonlinear input and output data, polynomial PLS (Wold, 1992; Malthouse et al., 1997), neural PLS (Kramer, 1992; Qin and McAvoy, 1992), and kernel PLS (KPLS) (Zhang and Teng, 2010; Zhang and Hu, 2011) have been proposed; KPLS is the most common and prevalent of these. With the KPLS method, the nonlinear input data are mapped into a high-dimensional feature space in which the input data are more nearly linear. Although KPLS has been used to monitor multivariate industrial processes, some problems remain for process monitoring based on the KPLS technique. Since standard KPLS performs an oblique projection of the input space, it has limitations in distinguishing quality-related and quality-unrelated faults.

The standard KPLS model mentioned above considers only static relations between measurements and quality variables. However, the true relationship between measurements and quality-related data is dynamic. Standard KPLS models are not suitable for modeling this kind of process, and a number of approaches exist to cope with this problem. A widely accepted one is to include a relatively large number of lagged values of the input and output variables in the measurement block. The model built from those measurements, called dynamic KPLS (D-KPLS), can reflect the true relations between measurements and quality indices.

In this paper, we propose a D-KPLS algorithm for building a more robust relationship between measurements and quality indices. The new model, called dynamic kernel partial least squares, is built in a reproducing kernel Hilbert space. In addition, for the purpose of process monitoring, D-KPLS decomposes the feature space into two orthogonal subspaces, and fault detection approaches based on the new model are proposed.

The remainder of this paper is organized as follows. In Section 2, the standard KPLS algorithm is reviewed. In Section 3, the new D-KPLS algorithm is proposed. The corresponding fault detection methods are proposed in Section 4. A numerical example and Tennessee Eastman process case studies are given in Section 5 to show the effectiveness of the new models for industrial process monitoring. The conclusions are summarized in the last section.

2. KPLS model

Given an input matrix X consisting of N samples of m process variables, and an output matrix Y containing N observations of J quality variables, i.e.,

$$X = \begin{bmatrix} x_1 \\ \vdots \\ x_N \end{bmatrix} \in \mathbb{R}^{N \times m}, \qquad Y = \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix} \in \mathbb{R}^{N \times J},$$

where $x_i \in \mathbb{R}^m$ and $y_i \in \mathbb{R}^J$ (i = 1, ..., N) are row vectors.

The main idea of the KPLS algorithm is to map the input data $x_i \in \mathbb{R}^m$ into a high-dimensional feature space through a non-linear mapping; the feature space is a reproducing kernel Hilbert space, and its dimension may be arbitrarily large, even infinite. The non-linear structure in the input space is more likely to be linear in the feature space, where a linear KPLS regression can be applied.

Choosing a Mercer kernel $k(\cdot,\cdot)$, the non-linear mapping $\varphi(\cdot)$ is obtained through the inner products

$$k(x_i, x_j) = \varphi(x_i)\,\varphi(x_j)^T \quad (1)$$

with $\varphi(x_i) \in \mathbb{R}^{1 \times S}$, i = 1, ..., N, where S is the dimension of the feature space.

The kernel function $k(\cdot,\cdot)$ must satisfy the conditions of Mercer's theorem, and a specific kernel function implicitly determines the associated mapping $\varphi(\cdot)$ and feature space. By substituting $k(x_i, x_j)$ for $\varphi(x_i)\varphi(x_j)^T$, both knowing the explicit non-linear mapping and calculating the inner product can be avoided.

According to Eq. (1), the Gram matrix $K \in \mathbb{R}^{N \times N}$ is obtained as

$$K = \Phi(X)\,\Phi(X)^T \quad (2)$$

with $\Phi(X) = [\varphi(x_1)^T, \ldots, \varphi(x_N)^T]^T \in \mathbb{R}^{N \times S}$.

Before the KPLS model is built, the mapped samples $\varphi(x_i)$ need to be centered:

$$\bar{\varphi}(x_i)^T = \varphi(x_i)^T - \Phi(X)^T e \quad (3)$$

where e is a column vector with all entries equal to 1/N. The centered Gram matrix $\bar{K}$ is calculated by Eq. (4):

$$\bar{K} = \bar{\Phi}(X)\,\bar{\Phi}(X)^T = (I - E)\,K\,(I - E) \quad (4)$$

where E is an (N × N) matrix with all entries equal to 1/N and element (i, j) of K is $k(x_i, x_j) = \varphi(x_i)\varphi(x_j)^T$.

According to the KPLS algorithm, the regression between $Y = [y_1, \ldots, y_N]^T$ and $\Phi(X)$ can be expressed in the following matrix form (Zhang and Teng, 2010):

$$\hat{Y} = \Phi(X)\,C = T\,T^T Y \quad (5)$$

$$T = \Phi(X)\,R, \qquad R = \Phi(X)^T U\,(T^T K U)^{-1} \quad (6)$$

where $C = \Phi(X)^T U (T^T K U)^{-1} T^T Y$ is the regression coefficient matrix and $\hat{Y}$ is the prediction of Y.
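As a concrete illustration of Eqs. (2)–(4), the Gram matrix and its centered version can be computed as follows. This is a sketch, not the authors' code; the Gaussian (RBF) kernel and its width are assumed example choices, since the paper does not fix a particular kernel here.

```python
import numpy as np

def rbf_gram(X, width=1.0):
    """Gram matrix K with K[i, j] = k(x_i, x_j) for a Gaussian kernel (Eq. (2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # squared pairwise distances
    return np.exp(-d2 / (2.0 * width ** 2))

def center_gram(K):
    """Centered Gram matrix (I - E) K (I - E) with E = (1/N) 11^T (Eq. (4))."""
    N = K.shape[0]
    E = np.full((N, N), 1.0 / N)
    IE = np.eye(N) - E
    return IE @ K @ IE
```

Note that the feature-space centering of Eq. (3) never has to be carried out explicitly; only the N × N Gram matrix is manipulated.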
The KPLS outer model is

$$\begin{cases} \Phi(X) = T P^T + E = \displaystyle\sum_i t_i p_i^T + E \\[4pt] Y = U Q^T + F = \displaystyle\sum_i t_i q_i^T + F \end{cases} \quad (7)$$

where $T \in \mathbb{R}^{N \times A}$ is the score matrix of $\Phi(X)$, $P \in \mathbb{R}^{S \times A}$ is the loading matrix of $\Phi(X)$, $U \in \mathbb{R}^{N \times A}$ is the score matrix of Y, and $Q \in \mathbb{R}^{J \times A}$ is the loading matrix of Y. The KPLS algorithm is listed in Table 1. Each pair of latent directions (w, c) is the solution to the following problem:

$$\max_{w,c}\; w^T \Phi(X)^T Y c \qquad \text{s.t.}\quad \|w\| = 1,\; \|c\| = 1.$$

3. D-KPLS model

Obviously, the standard KPLS model only describes the static variations within the input data that are most related to the output data. In this section, a new dynamic KPLS model is proposed, which can capture the dynamic relations between input variables and output variables.

The matrices X and Y are expressed as follows:

$$X = \begin{bmatrix} x_{g1} \\ \vdots \\ x_{gN} \end{bmatrix}, \qquad Y = \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix},$$

where the row vector $x_{gi} = [z^{-1}y_i, \ldots, z^{-p}y_i, z^{-1}x_i, \ldots, z^{-q}x_i]$ (i = 1, ..., N), $z^{-1}$ is the unit delay operator, and p and q are the time lags.

According to the mapped samples, the sample matrices in feature space are given by Eq. (8), where γ is the forgetting factor. In order to adapt to process changes better, a prevalent modeling practice is to introduce a forgetting factor into the model, so that samples gathered at different historical times are assigned different weights.

On the basis of the augmented sample matrices, the D-KPLS prediction takes the form of Eq. (9) (cf. Eq. (5)):

$$\hat{Y} = \Phi(X)\,C_D = \Phi(X)\,\Phi(X)^T U\,(T^T K U)^{-1} T^T Y \quad (9)$$

Consider the following product:

$$C_D C_D^T = \Phi(X)^T U (T^T K U)^{-1} T^T Y\,Y^T T\,(U^T K T)^{-1} U^T \Phi(X) = \Phi(X)^T M\,\Phi(X)$$

where $M = U (T^T K U)^{-1} T^T Y Y^T T (U^T K T)^{-1} U^T$ is a symmetric matrix with entries $m_{ij}$, and $C_D C_D^T \in \mathbb{R}^{S \times S}$, $M \in \mathbb{R}^{N \times N}$, $\Phi(X) \in \mathbb{R}^{N \times S}$, $C_D \in \mathbb{R}^{S \times J}$, $\hat{Y} \in \mathbb{R}^{N \times J}$.

Performing a singular value decomposition (SVD) of $C_D C_D^T$ gives

$$C_D C_D^T = \begin{bmatrix} W_1 & W_2 \end{bmatrix} \begin{bmatrix} \Lambda_{L \times L} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} W_1^T \\ W_2^T \end{bmatrix} \quad (10)$$

where $\Lambda_{L \times L}$ is a diagonal matrix consisting of the non-zero singular values of $C_D C_D^T$. Let $w_k \in \mathbb{R}^{S \times 1}$ be a column of $W = [W_1\; W_2]$, and let $c_i$ denote the ith column of $C_D$.

Proof that $w_k$ belongs to the $\Phi(X)$-space:

$$C_D C_D^T = \sum_{i=1}^{N} \sum_{j=1}^{N} m_{ij}\, \varphi(x_i)^T \varphi(x_j) \quad (11)$$

Let λ and $w_k$ be an eigenvalue and the corresponding eigenvector of $C_D C_D^T$, respectively. Then Eq. (12) is obtained:

$$\lambda w_k = C_D C_D^T w_k = \sum_{i=1}^{N} \sum_{j=1}^{N} m_{ij}\, \varphi(x_i)^T \varphi(x_j)\, w_k = \sum_{i=1}^{N} \beta_i\, \varphi(x_i)^T \quad (12)$$
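The augmented measurement vector $x_{gi}$ and the sample weighting described above can be sketched as follows. The helper names and the exponential form of the weights are illustrative assumptions (Eq. (8) itself is not reproduced in this excerpt); `lam` plays the role of the forgetting factor.

```python
import numpy as np

def build_lagged_matrix(X, Y, p=1, q=1):
    """Augmented rows x_gi = [y_{i-1},...,y_{i-p}, x_{i-1},...,x_{i-q}].

    The first max(p, q) samples are dropped because their lagged values
    are unavailable.  Returns (Xg, Yg) aligned sample-by-sample.
    """
    N = X.shape[0]
    start = max(p, q)
    rows = []
    for i in range(start, N):
        lags = [Y[i - d] for d in range(1, p + 1)] + \
               [X[i - d] for d in range(1, q + 1)]
        rows.append(np.concatenate(lags))
    return np.asarray(rows), Y[start:]

def forgetting_weights(n, lam=0.98):
    """Weights lam**(n-1-i): older samples receive smaller weights."""
    return lam ** np.arange(n - 1, -1, -1)
```

With these weights, each augmented row can simply be scaled before the Gram matrix is formed, so older operating data influence the model less than recent data.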
$$\Rightarrow\quad w_k = \sum_{i=1}^{N} \frac{\beta_i}{\lambda}\, \varphi(x_i)^T = \sum_{i=1}^{N} k_i\, \varphi(x_i)^T = \Phi(X)^T k \quad (13)$$

where

$$\beta_i = \sum_{j=1}^{N} m_{ij}\, \varphi(x_j)\, w_k, \qquad k_i = \beta_i / \lambda, \qquad k = [k_1\; \cdots\; k_N]^T \in \mathbb{R}^{N \times 1}.$$

The $w_k$ can be calculated through Eqs. (14)–(18). Multiplying Eq. (13) by $\varphi(x_m)$ and stacking over m gives

$$\varphi(x_m)\, w_k = \sum_{i=1}^{N} k_i\, \varphi(x_m)\varphi(x_i)^T = \sum_{i=1}^{N} k_i\, k_{mi} \quad\Longrightarrow\quad \begin{bmatrix} \varphi(x_1) \\ \vdots \\ \varphi(x_N) \end{bmatrix} w_k = \Phi(X)\, w_k = K k \quad (14)$$

with $K = (k_{ij}) \in \mathbb{R}^{N \times N}$. Similarly, multiplying Eq. (12) by $\Phi(X)$ and substituting Eq. (11) gives

$$\lambda\,\Phi(X)\, w_k = \Phi(X)\, C_D C_D^T\, w_k = K M K k \quad (15)$$

Equating λ times Eq. (14) with Eq. (15), Eq. (16) is obtained:

$$\lambda K k = K M K k \quad (16)$$

Since K is a Gram matrix, Eq. (16) can be rewritten as

$$\lambda k = M K k \quad (17)$$

where λ and k are an eigenvalue and the corresponding eigenvector of MK, respectively. Then $w_k$ can be obtained as follows:

$$w_k = \sum_{i=1}^{N} \alpha_{ki}\, \varphi(x_i)^T = \Phi(X)^T \alpha_k \quad (18)$$

where $\alpha_k$ is the kth eigenvector of MK, $W_1 = [w_1, \ldots, w_A]$, $W_2 = [w_{A+1}, \ldots, w_N]$, and $\lambda_1 \ge \cdots \ge \lambda_A \ge \lambda_{A+1} \ge \cdots \ge \lambda_N$. A is the principal number of D-KPLS, which is determined by the cross-validation rule. The candidates are restricted to a finite region (A ≤ A_max). First, divide the data set (X, Y) into w subsets. Then choose one subset (X_i, Y_i) (i = 1, ..., w) at a time, and train a D-KPLS model with the remaining subsets. Finally, for every Y_i, calculate the prediction error with X_i and the corresponding model, and accumulate the predicted error sum of squares (PRESS) over all subsets. The candidate that corresponds to the smallest PRESS is chosen as the principal number.

$W_1 W_1^T$ can be considered the orthogonal projection matrix onto span{W_1} along span{W_2}. In the same way, $W_2 W_2^T$ is the orthogonal projection matrix onto span{W_2} along span{W_1}. span{W_1} is completely responsible for predicting Y, while span{W_2} makes almost no contribution to the output prediction.

The explicit decomposition of the feature space is then given by the following equations:

$$\hat{\Phi}(X)^T = W_1 W_1^T\, \Phi(X)^T \in \text{span}\{W_1\}, \qquad \tilde{\Phi}(X)^T = (I - W_1 W_1^T)\,\Phi(X)^T = W_2 W_2^T\, \Phi(X)^T \in \text{span}\{W_2\} \quad (19)$$

The external model of D-KPLS is given by Eq. (22):

$$\Phi(X) = \hat{T} W_1^T + \tilde{T} W_2^T = \hat{\Phi}(X) + \tilde{\Phi}(X), \qquad Y = \hat{T} B\, \hat{Q}^T + F \quad (22)$$

where $\hat{Q}^T = (B^T \hat{T}^T \hat{T} B)^{-1} B^T \hat{T}^T Y$, and the orthogonal decomposition of the feature space is illustrated in Fig. 1.

In Fig. 1, dots denote mapped measurements in the feature space. The straight line denoting span{W_1} is the subspace of the feature space most related to span{u}, while the plane denoting span{W_2} is the orthogonal subspace to span{W_1}.
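Under the assumption that the matrix M and the Gram matrix K have already been formed, the eigenproblem of Eq. (17) and the quality-related scores of Eq. (27) can be sketched numerically as below. The function names are hypothetical, and this is an illustration rather than the authors' implementation.

```python
import numpy as np

def dkpls_directions(M, K, A):
    """Leading eigenpairs of M K (Eq. (17)); column alpha_k encodes the
    quality-related direction w_k = Phi(X)^T alpha_k of Eq. (18)."""
    vals, vecs = np.linalg.eig(M @ K)
    order = np.argsort(-vals.real)          # sort by decreasing eigenvalue
    return vals.real[order][:A], vecs.real[:, order][:, :A]

def quality_related_score(alphas, k_new):
    """Score of a new sample in span{W1}: [alpha_1..alpha_A]^T k_new (Eq. (27))."""
    return alphas.T @ k_new
```

Because M K is generally non-symmetric, a general eigensolver is used here; for M and K both symmetric positive semi-definite, the eigenvalues are real and non-negative, so taking the real parts is harmless.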
Fig. 1 – Orthogonal decomposition of feature space.

The centered Gram matrix can equivalently be written as

$$\bar{K} = K - KE - EK + EKE \quad (24)$$

where $K \in \mathbb{R}^{N \times N}$ and

$$E = \frac{1}{N}\begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix} \in \mathbb{R}^{N \times N}.$$

Given a new sample $\varphi(x_{new}) \in \mathbb{R}^{1 \times S}$, it can be decomposed into two parts, $\hat{\varphi}(x_{new})$ and $\tilde{\varphi}(x_{new})$:

$$\hat{\varphi}(x_{new})^T = W_1 W_1^T\, \varphi(x_{new})^T, \qquad \tilde{\varphi}(x_{new})^T = W_2 W_2^T\, \varphi(x_{new})^T = \varphi(x_{new})^T - \hat{\varphi}(x_{new})^T \quad (25)$$

The scores of the new sample $\varphi(x_{new})$ in span{W_1} and span{W_2} can be computed as

$$\hat{t}_{new} = W_1^T\, \varphi(x_{new})^T \in \mathbb{R}^{A \times 1}, \qquad \tilde{t}_{new} = W_2^T\, \varphi(x_{new})^T \in \mathbb{R}^{(N-A) \times 1} \quad (26)$$

where

$$\hat{t}_{new} = W_1^T\, \varphi(x_{new})^T = [\alpha_1, \ldots, \alpha_A]^T\, \Phi(X)\, \varphi(x_{new})^T = [\alpha_1, \ldots, \alpha_A]^T \begin{bmatrix} k_{1,new} \\ \vdots \\ k_{N,new} \end{bmatrix} \quad (27)$$

with $k_{i,new} = k(x_i, x_{new})$. The covariance of the quality-unrelated scores is

$$\tilde{\Lambda} = \frac{1}{N - A - 1}\, \tilde{T}^T \tilde{T} = \text{diag}(\lambda_{A+1}, \ldots, \lambda_N),$$

with $\lambda_1 \ge \cdots \ge \lambda_L \ge \cdots \ge \lambda_N$, where $\lambda_i$ (i = 1, ..., N) is the eigenvalue of $\Phi(X)^T M\, \Phi(X)$. Hence, $T_Y^2$ is a suitable statistic for monitoring span{W_1}, which is called the quality-related statistic, and $\tilde{T}^2$ is a suitable candidate for monitoring span{W_2}, which is called the quality-unrelated statistic.

In this paper, the F distribution is used to calculate the thresholds for the $T_Y^2$ and $\tilde{T}^2$ statistics as follows:

$$T_{Y,\beta}^2 = \frac{A(N^2 - 1)}{N(N - A)}\, F_{A,\,N-A,\,\beta} \quad (31)$$

$$\tilde{T}_{\beta}^2 = \frac{(N - A)(N^2 - 1)}{N A}\, F_{N-A,\,A,\,\beta} \quad (32)$$

where β is the significance level, A is the principal number, and N is the sample number.

To sum up, the fault detection flowchart of D-KPLS is shown in Fig. 2.
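The control limits of Eqs. (31) and (32) can be evaluated with the F-distribution quantile function. This sketch assumes SciPy's `stats.f.ppf` is used for $F_{a,b,\beta}$:

```python
from scipy import stats

def t2_limits(N, A, beta=0.99):
    """Control limits for the quality-related and quality-unrelated T^2
    statistics (Eqs. (31) and (32)), based on the F distribution."""
    lim_ty = A * (N**2 - 1) / (N * (N - A)) * stats.f.ppf(beta, A, N - A)
    lim_tt = (N - A) * (N**2 - 1) / (N * A) * stats.f.ppf(beta, N - A, A)
    return lim_ty, lim_tt
```

A sample is flagged as faulty when its statistic exceeds the corresponding limit at the chosen significance level.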
Fig. 2 – Quality-related fault detection flowchart of the proposed method.

5. Simulation

The model parameter matrices are as follows:

$$C_2 = \begin{bmatrix} 1.7198 & 0.5835 & 1.4236 & 0.4963 & -2.5717 \\ -0.3715 & 1.5011 & 1.3226 & -1.4145 & 1.0696 \end{bmatrix},$$

$$P = \begin{bmatrix} 0.5586 & 0.2042 & 0.6370 \\ 0.2007 & 0.0492 & 0.4429 \\ 0.0874 & 0.6062 & 0.0664 \\ 0.9332 & 0.5463 & 0.3743 \\ 0.2594 & 0.0958 & 0.2491 \end{bmatrix},$$

$$\alpha_1 = \begin{bmatrix} 0.4389 & 0.1210 & -0.0862 \\ -0.2966 & -0.0550 & 0.2274 \\ 0.4538 & -0.6573 & 0.4239 \end{bmatrix}, \qquad \alpha_2 = \begin{bmatrix} -0.2998 & -0.1905 & -0.2669 \\ -0.0204 & -0.1585 & -0.2950 \\ 0.1461 & -0.0755 & 0.3749 \end{bmatrix},$$

[Figure: PRESS versus principal number.]
Fig. 4 – Output and predicted output with D-KPLS.

[Diagram residue: variance captured by the D-KPLS model: Φ(X) 32.61% / 67.39%; Y 95.33% / 37.35%.]

When a fault occurs in the quality-unrelated part of the process, as in Eq. (35) with k ≥ 200, the output y is not affected. However, this kind of fault can be detected by $\tilde{T}^2$, which indicates a quality-unrelated fault. The other indices are not involved in this kind of fault, as shown in Fig. 6.

Fig. 6 – Fault 2 detection results with D-KPLS.

Fig. 7 – Fault 1 detection results with KPLS.
Fig. 8 – Fault 2 detection results with KPLS.

Table 4 – Quality variable.

Number    Variable
1         Component of E in stream 11

Table 5 – Fault descriptions of the Tennessee Eastman process.

No. of fault    Fault type           Process variable
6               Step                 A feed loss (stream 1)
7               Step                 C header pressure loss, reduced availability (stream 4)
8               Random variation     A, B, C feed composition (stream 4)
9               Random variation     D feed temperature (stream 2)
10              Random variation     C feed temperature (stream 4)
11              Random variation     Reactor cooling water inlet temperature
12              Random variation     Condenser cooling water inlet temperature
13              Slow drift           Reaction kinetics
14              Sticking             Reactor cooling water valve
15              Sticking             Condenser cooling water valve
16              Unknown              Unknown
17              Unknown              Unknown
18              Unknown              Unknown
19              Unknown              Unknown
20              Unknown              Unknown

... provide a benchmark industrial process for evaluating process control and monitoring approaches, including PCA, KPLS, and Fisher discriminant analysis. The TE process contains two blocks of variables, 12 manipulated variables and 41 measured ...
Fig. 9 – Predicted output and prediction error with D-KPLS.

... the normal situation. Obviously, from the 201st to the 500th samples, the fault is detected by $T_Y^2$, which indicates a quality-related fault. However, the $\tilde{T}^2$ statistic is not affected by this fault, as shown in Fig. 10.

Fig. 10 – Monitoring results of fault 1 with D-KPLS.

Beginning with the 201st sample, the quality-related fault 2 presented in Table 5 is added to the samples. When a quality-related fault occurs, the output y is affected, and this kind of fault can be detected by the $T_Y^2$ statistic, which indicates a quality-related fault. From the 1st to the 200th samples, it can be seen from Fig. 11 that the $T_Y^2$ and $\tilde{T}^2$ statistics are both under the confidence limits, which means that the process is in the normal situation. From the 201st to the 500th samples, it can be judged that a quality-related fault occurs according to the $T_Y^2$ statistic, as shown in Fig. 11.

Fig. 11 – Monitoring results of fault 2 with D-KPLS.

Fault 3 is introduced to test the performance of D-KPLS in detecting a quality-unrelated fault, and the monitoring results are shown in Fig. 12. It can be seen from Fig. 12 that $T_Y^2$ stays under the confidence limits, which means that no quality-related process fault occurs. However, from the 201st to the 500th samples, the $\tilde{T}^2$ statistic surpasses the confidence limits sharply, which indicates that a quality-unrelated fault occurs (Fig. 13).

Fig. 12 – Monitoring results of fault 3 with D-KPLS.
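The monitoring logic illustrated in Figs. 10–12 amounts to a simple decision rule per sample. The following sketch (function name hypothetical) assumes the statistics and the limits of Eqs. (31) and (32) have already been computed:

```python
def classify_sample(ty2, tt2, lim_ty2, lim_tt2):
    """Decision rule implied by the monitoring results: the quality-related
    statistic dominates; otherwise check the quality-unrelated statistic."""
    if ty2 > lim_ty2:
        return "quality-related fault"
    if tt2 > lim_tt2:
        return "quality-unrelated fault"
    return "normal"
```

Applied sample by sample, this reproduces the interpretation above: fault 1 and fault 2 trip the quality-related limit from the 201st sample onward, while fault 3 trips only the quality-unrelated limit.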
6. Conclusion

In this paper, a D-KPLS modeling method is proposed that can build a dynamic relationship between input and output variables, and corresponding fault diagnosis approaches have been proposed. In order to test the performance of the proposed approach for monitoring abnormal situations in dynamic processes, the D-KPLS model is applied to a numerical example and the TE process. With the D-KPLS model, abnormal situations in the numerical example and the TE process are detected. To verify the modeling performance, a comparison between D-KPLS and KPLS on the numerical example is given to illustrate that the D-KPLS model is superior to the standard KPLS model for dynamic process modeling. The case studies above show that the D-KPLS technique is feasible in the dynamic process monitoring field. Methods for fault isolation and identification will be our future work.

Acknowledgements

The work is supported by China's National 973 program (2009CB320600) and the NSFC (61325015 and 61273163).

References

Antti, S.H., Lindgren, F., Ohman, J., 1998. Orthogonal signal correction of near-infrared spectra. Chemom. Intell. Lab. Syst. 44 (1), 175–185.
Dayal, B.S., MacGregor, J.F., 1997. Improved PLS algorithms. J. Chemom. 11, 73–85.
Dunteman, G.H., 1989. Principal Components Analysis. SAGE Publications, London.
Ge, Z.Q., Song, Z.H., Gao, F.R., 2013. Review of recent research on data-based process monitoring. Ind. Eng. Chem. Res. 52 (10), 3543–3562.
Geladi, P., Kowalski, B.R., 1986. Partial least squares regression: a tutorial. Anal. Chim. Acta 185, 1–17.
Helland, K., Berntsen, H.E., Borgen, O.S., Martens, H., 1992. Recursive algorithm for partial least squares regression. Chemom. Intell. Lab. Syst. 14, 129–137.
Hoskuldsson, A., 1988. PLS regression methods. J. Chemom. 2, 211–228.
Hsu, C.C., Su, C.T., 2011. An adaptive forecast-based chart for non-Gaussian process monitoring: with application to equipment malfunctions detection in a thermal power plant. IEEE Trans. Control Syst. Technol. 19, 1245–1250.
Jackson, J.E., 1980. Principal components and factor analysis: Part I: principal components. J. Qual. Technol. 12, 201–213.
Jackson, J.E., 1991. A User's Guide to Principal Components. Wiley, New York, NY.
Kano, M., Tanaka, S., Hasebe, S., Hashimoto, I., Ohno, H., 2003. Monitoring independent components for fault detection. AIChE J. 49, 969–976.
Kramer, M.A., 1992. Autoassociative neural networks. Comput. Chem. Eng. 16, 313–328.
Li, G., Qin, S.J., Zhou, D., 2010. Output relevant fault reconstruction and fault subspace extraction in total projection to latent structures models. Ind. Eng. Chem. Res. 49 (19), 9175–9183.
Li, G., Alcala, C.F., Qin, S.J., Zhou, D.H., 2011a. Generalized reconstruction-based contributions for output-relevant fault diagnosis with application to the Tennessee Eastman process. IEEE Trans. Control Syst. Technol. 19 (5), 1114–1127.
Li, G., Liu, B.S., Qin, S.J., Zhou, D.H., 2011b. Quality relevant data-driven modeling and monitoring of multivariate dynamic processes: the dynamic T-PLS approach. IEEE Trans. Neural Networks 22 (12), 2262–2271.
Malthouse, E.C., Tamhane, A.C., Mah, R.S.H., 1997. Nonlinear partial least squares. Comput. Chem. Eng. 21, 875–890.
Misra, M., Yue, H.H., Qin, S.J., Ling, C., 2002. Multivariate process monitoring and fault diagnosis by multi-scale PCA. Comput. Chem. Eng. 26, 1281–1293.
Qin, S.J., McAvoy, T.J., 1992. Nonlinear PLS modeling using neural networks. Comput. Chem. Eng. 16, 379–391.
Qin, S.J., Valle, S., Piovoso, M.J., 2001. On unifying multiblock analysis with application to decentralized process monitoring. J. Chemom. 15, 715–742.
Tsai, D., Wu, S., Chiu, W., 2013. Defect detection in solar modules using ICA basis images. IEEE Trans. Ind. Inform. 9, 122–131.
Wang, H., 1999. Partial Least Squares Regression Method and Application. National Defense Industry Press, Beijing.
Wold, H., 1982. Soft modeling: the basic design and some extensions. Systems Under Indirect Observation 2, 589–591.
Wold, S., 1992. Nonlinear partial least squares modelling II. Spline inner relation. Chemom. Intell. Lab. Syst. 14, 71–84.
Wold, S., Hellberg, S., Lundstedt, T., et al., 1987. PLS modeling with latent variables in two or more dimensions. In: Proceedings of the PLS Meeting, Frankfurt, pp. 1–21.
Zhang, Y., Hu, Z., 2011. Multivariate process monitoring and analysis based on multi-scale KPLS. Chem. Eng. Res. Des. 89, 2667–2678.
Zhang, Y., Teng, Y., 2010. Process data modeling using modified kernel partial least squares. Chem. Eng. Sci. 65, 6353–6361.
Zhou, D., Li, G., Qin, S.J., 2010. Total projection to latent structures for process monitoring. AIChE J. 56 (1), 168–178.