
J Intell Manuf (2010) 21:717–730

DOI 10.1007/s10845-009-0249-y

Tool wear monitoring using artificial neural network based on extended Kalman filter weight updation with transformed input patterns

Srinivasan Purushothaman

Received: 26 January 2008 / Accepted: 10 February 2009 / Published online: 28 February 2009
© Springer Science+Business Media, LLC 2009

Abstract The condition of the tool in a turning operation is monitored by using an artificial neural network (ANN). The recursive Kalman filter algorithm is used for weight updation of the ANN. To monitor the status of the tool, tool wear patterns are collected. The patterns are transformed from n-dimensional feature space to a lower dimensional space (two dimensions). This is done by using two discriminant vectors ϕ1 and ϕ2. These discriminant vectors are found by the optimal discriminant plane method. Thirty patterns are used for training the ANN. A comparison is made between the classification performance of the ANN trained without reducing the dimensions of the input patterns and that of the ANN trained with reduced dimensions. The ANN trained with transformed tool wear patterns gives better results, in terms of improved classification performance in fewer iterations, compared with the ANN trained without transforming the input patterns to a lower dimension.

Keywords Back-propagation algorithm · Extended Kalman filter · Optimal discriminant plane · n-Dimensional vector · 2-Dimensional vector · Transformation

S. Purushothaman (B)
Faculty of Engineering and Technology, Multimedia University, Melaka, Malaysia
e-mail: dr.s.purushothaman@gmail.com

Introduction

In the manufacturing industries, automated machine tools are used. Some of them are single-spindle and multi-spindle automats, and capstan, turret and computer numerical control machines. In all these machines, a predefined sequence of instructions, such as stops and programming methods, is used to execute the operations, so that good-quality parts are achieved in mass production. When the tools are worn out, they are replaced with new tools, or reground and reused. The duration after which a tool has to be replaced or reground can be expressed in terms of the flank wear land width of the tool (Vb) or the tool life in minutes. Established data, both in terms of tool life and amount of tool wear, are available, based on which the tools can be replaced or reground. There is, however, no assurance that a tool will last until the established time; there is every possibility that the tool will fail earlier. Methods should therefore be implemented to monitor the status of the tooltip, in order to maintain the surface roughness of the machined workpiece within the specified limits. In the case of computer numerical control (CNC), tool geometry offset and tool wear offset can be implemented by inserting the values of Vb estimated from the information collected from the sensors during turning.

The methods used for monitoring tool wear are (I) direct and (II) indirect. The direct methods use measurements of volumetric loss of tool material. This procedure is done on-line (Kurada and Bradley 1997a, b; Wong et al. 1997). Table 1 shows details of direct and indirect methods for tool wear measurement, with their advantages and disadvantages. Due to the complexity and unpredictable nature of machining processes, the process has to be modeled with rule-based techniques. Modeling correlates process state variables to process parameters. The process state variable is Vb. The process parameters are feed rate (F), cutting speed (S) and depth of cut (Dc). Some of the modeling techniques are multiple regression analysis and the group method of data handling. These methods require a relationship between process parameters and process state variables (Chen and Chen 2005; Chryssolouris and Guillot 1988; Lee et al. 2004; Li and Yao 2005; Özel and Karpat 2005).


Table 1 Tool wear monitoring methods: advantages and disadvantages

Direct methods (more reliable, but not suitable for in-process monitoring):

Optical — Measures: shape/position of the cutting edge. Transducer: TV camera/optical system. Advantages: a direct measurement shows changes in the look, quality and geometry of the edge. Disadvantages: cumbersome.

Wear particles and radioactivity — Measures: particle size and concentration. Transducer: spectrophotometer. Advantages: slow in detection. Disadvantages: unsafe.

Tool/work junction resistance, workpiece size — Measures: dimension of workpiece. Transducer: micrometer; pneumatic, optical, ultrasonic or magnetic sensor. Advantages: less costly. Disadvantages: not convenient.

Tool/work distance — Measures: distance between workpiece and tool/tool holder. Transducer: micrometer, pneumatic sensor. Advantages: less costly. Disadvantages: not convenient.

Indirect methods (convenient):

Cutting force — Measures: changes of cutting force. Transducer: dynamometer, strain gauge. Advantages: approximate wear details can be obtained. Disadvantages: no proper correlation is established; careful placement of strain gauges or dynamometers is required; transducers frequently have to be incorporated into the original design of the machine.

Acoustic emission/sound/vibration — Measures: stress wave energy/acoustic waves/vibration of tool and tool post. Transducer: acoustic emission transducer, microphone, accelerometer. Advantages: the signal gives more information. Disadvantages: difficult to generalize.

Temperature — Measures: variation of cutting temperature on the tool. Transducer: thermocouple, pyrometer. Advantages: can measure bulk temperature. Disadvantages: requires extensive redesign of a machine's spindle; the rise in the bulk temperature of the tool caused by wear is very small, and the signal-to-noise ratio is poor.

Power input — Measures: power/current consumption of the spindle or feed motor. Transducer: amperemeter/dynamometer. Advantages: potential of providing real-time optimization of metal removal rates. Disadvantages: wear produces very small changes in power consumption, which must be detected as a perturbation of a much larger signal; progressive wear of the tool increases power consumption, but plastic deformation of the tool at high temperature decreases power consumption.


An artificial neural network (ANN) with an extended Kalman filter weight updating algorithm has been proposed to implement tool wear monitoring in a turning operation. Earlier work has been done using ANNs for tool wear monitoring (Venkatesh et al. 1997; Sick 1998; Kuo and Cohen 1999; Yao et al. 1999; Li et al. 2000; Srinivasa et al. 2004; Jemielniak and Bombiński 2006; Panda et al. 2008). The neural network approach does not require any explicit model between process parameters and process state variables. The network maps the input domains to the output domains. The inputs are process parameters, and the outputs are process state variables. Each process parameter or process state variable is called a feature. The combination of input and output constitutes a pattern. Many patterns together are called data.

In this work, instead of using the actual dimension of the input pattern (input vector), the dimension is reduced to two. The two-dimensional input vector does not represent any individual feature of the original n-dimensional input pattern; instead, it is a combination of the 'n' features of the original pattern. The components of the reduced pattern do not have any dimensional quantity. Similar work has been done earlier by Purushothaman and Srinivasa (1998). In the earlier work, the backpropagation algorithm (BPA) was considered. The EKF algorithm is used in this work as an alternative approach to BPA. A Kalman filter attempts to estimate the state of a system. In neural network training, the weights of the network are the states the Kalman filter attempts to estimate, and the desired output of the network is the measurement used. Due to the estimation property of the EKF, the convergence is faster compared to that of BPA. The main advantage of using the transformation method is the reduced input dimension and hence faster computation.

Schematic diagram

In Fig. 1, a schematic presentation for training and testing of the artificial neural network (ANN) to implement online tool wear monitoring in a turning operation is given. The values of the speed of the workpiece (S), feed (F) and depth of cut of the cutting tool (Dc) are collected, along with the axial force (Fx), radial force (Fy) and tangential force (Fz) from the tool wear dynamometer. These values are used as inputs in the input layer of the ANN. The flank wear land width (Vb) is collected using a tool maker's microscope. This value is used as the target in the output layer of the ANN. By using the statistical variance technique, all the collected patterns are separated into training and testing patterns and are grouped based on the range of Vb. Fisher's linear discriminant criterion is applied on the training patterns to obtain the two-dimensional transformation vectors. The training patterns are further transformed into 2-dimensional patterns by projecting with the discriminant vectors obtained. The ANN learns the 2-dimensional patterns by using the extended Kalman filter with initial weights, and after sufficient iterations the final weights are stored in a file. During online implementation, S, F, Dc, Fx, Fy and Fz are collected from the sensors, and the values are projected into a 2-dimensional vector using the discriminant vectors obtained, followed by processing with the final weights to obtain a Vb value. This value is then compared with a set threshold for further action.

Transformation of n-dimensional input patterns into two dimensional input vectors

The process of changing the dimensions of a vector is called transformation. The transformation of a set of n-dimensional real vectors onto a plane is called a mapping operation. The result of this operation is a planar display. The main advantage of the planar display is that the distribution of the original patterns of higher dimensions (more than two dimensions) can be seen on a two-dimensional graph. The mapping operation can be linear or non-linear. A linear classification algorithm (Fisher 1936) and a method for constructing a classifier on the optimal discriminant plane, with a minimum distance criterion for multiclass classification with a small number of patterns (Hong and Yang 1991), have been developed. The method of considering the number of patterns and feature size (Foley 1972), and the relations between discriminant analysis and multilayer perceptrons (Gallinari et al. 1991), have been analyzed.

A linear mapping is used to map an n-dimensional vector space onto a two-dimensional space. Some of the linear mapping algorithms are principal component mapping (Kittler and Young 1973), generalized declustering mapping (Sammon 1970a, b; Fehlauer and Eisenstein 1978; Gelsema and Eden 1980), least squared error mapping (Mix and Jones 1982) and projection pursuit mapping (Friedman and Tukey 1974).

In this work, the generalized declustering optimal discriminant plane is used. The mapping of the original pattern 'X' onto a new vector 'Y' on a plane is done by a matrix transformation, which is given by

Y = AX   (1)

where

A = [ϕ1; ϕ2]   (2)

and ϕ1 and ϕ2 are the discriminant vectors (also called projection vectors).

An overview of different mapping techniques is given by Siedlecki et al. (1988a, b). The vectors ϕ1 and ϕ2 are obtained by optimizing a given criterion. The plane formed by the discriminant vectors is the optimal discriminant plane. This plane gives the highest possible classification for the new patterns.
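The linear mapping of Eqs. 1 and 2 amounts to two inner products per pattern. A minimal sketch in Python; the pattern and projection vectors below are illustrative placeholders, not values computed in the paper:

```python
# Sketch of Eqs. 1-2: project an n-dimensional pattern X onto the plane
# spanned by the discriminant vectors phi1 and phi2, giving Y = A X with
# A = [phi1; phi2].

def project_to_plane(x, phi1, phi2):
    """Return the planar image (u, v) of pattern x."""
    u = sum(xi * p for xi, p in zip(x, phi1))  # inner product x . phi1
    v = sum(xi * p for xi, p in zip(x, phi2))  # inner product x . phi2
    return (u, v)

# Example: a 6-feature pattern (S, F, Dc, Fx, Fy, Fz) mapped to 2-D using
# hypothetical unit projection vectors
x = [0.5, 0.1, 0.2, 0.4, 0.3, 0.6]
phi1 = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
phi2 = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
print(project_to_plane(x, phi1, phi2))  # -> (0.5, 0.1)
```

With these toy axis-aligned vectors, the projection simply picks out the first two features; the real discriminant vectors mix all n features into each planar coordinate.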


Fig. 1 Tool wear condition monitoring
The steps involved in the linear mappings are:

Step 1: Computation of the discriminant vectors ϕ1 and ϕ2: this is specific for a particular linear mapping algorithm.
Step 2: Computation of the planar images of the original data points: this is the same for all linear mapping algorithms.

Computation of discriminant vectors ϕ1 and ϕ2

The criterion to evaluate the classification performance is given by:

J(ϕ) = (ϕ^T Sb ϕ) / (ϕ^T Sw ϕ)   (3)

where

Sb is the between-class matrix, and
Sw is the within-class matrix, which is non-singular.

Sb = Σi p(ωi)(mi − mo)(mi − mo)^T   (4)

Sw = Σi p(ωi) E[(Xi − mi)(Xi − mi)^T | ωi]   (5)

where

p(ωi) is the a priori probability of the ith class; generally p(ωi) = 1/m,
mi is the mean of each feature of the ith class patterns (i = 1, 2, ..., m),
mo is the global mean of a feature over all the patterns in all the classes,
Xi, i = 1, 2, ..., L, are the n-dimensional patterns of each class, and
L is the total number of patterns.

Equation 3 states that the distance between the class centers should be maximum. The discriminant vector ϕ1 that maximizes J in Eq. 3 is found as the solution of the eigenvalue problem given by:

Sb ϕ1 = λm1 Sw ϕ1   (6)

where

λm1 is the greatest non-zero eigenvalue of (Sb Sw^−1), and
ϕ1 is the eigenvector corresponding to λm1.

The reason for choosing the eigenvector with the maximum eigenvalue is that the Euclidean distance of this vector will be the maximum, when compared with that of the other eigenvectors of Eq. 6.
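Equations 4–6 can be sketched with numpy as follows. The two small 3-feature classes are synthetic illustrations, and equal priors p(ωi) = 1/m are assumed, as the text suggests:

```python
import numpy as np

def scatter_matrices(classes):
    """Between-class Sb (Eq. 4) and within-class Sw (Eq. 5),
    assuming equal a priori probabilities p(w_i) = 1/m."""
    m = len(classes)
    p = 1.0 / m
    m0 = np.vstack(classes).mean(axis=0)          # global mean
    n = classes[0].shape[1]
    Sb, Sw = np.zeros((n, n)), np.zeros((n, n))
    for X in classes:
        mi = X.mean(axis=0)                        # class mean
        d = (mi - m0)[:, None]
        Sb += p * (d @ d.T)                        # Eq. 4
        C = X - mi
        Sw += p * (C.T @ C) / len(X)               # Eq. 5 (class-conditional expectation)
    return Sb, Sw

def phi_1(Sb, Sw):
    """Eq. 6: eigenvector of Sw^-1 Sb with the largest eigenvalue."""
    vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    return vecs[:, np.argmax(vals.real)].real

# Two illustrative 3-feature classes (not the paper's tool wear data)
c1 = np.array([[1.0, 2.0, 0.5], [1.2, 1.9, 0.7], [0.8, 2.2, 0.4]])
c2 = np.array([[3.0, 0.5, 1.5], [2.7, 0.8, 1.3], [3.3, 0.6, 1.8]])
Sb, Sw = scatter_matrices([c1, c2])
print(phi_1(Sb, Sw))
```

With two classes, Sb has rank one, so Sw^−1 Sb has a single non-zero eigenvalue and ϕ1 is well defined up to scale.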


Another discriminant vector ϕ2 is obtained by using the same criterion of Eq. 3. The discriminant vector ϕ2 should also satisfy the condition given by:

ϕ2^T ϕ1 = 0   (7)

Equation 7 indicates that the solution obtained is geometrically independent, and that the vectors ϕ1 and ϕ2 are perpendicular to each other. Whenever the patterns are perpendicular to each other, it means that there is absolutely no redundancy, or repetition of a pattern, during the collection of tool wear patterns in the turning operation. The discriminant vector ϕ2 is found as the solution of the eigenvalue problem given by:

Qp Sb ϕ2 = λm2 Sw ϕ2   (8)

where

λm2 is the greatest non-zero eigenvalue of (Qp Sb Sw^−1), and
Qp is the projection matrix, which is given by

Qp = I − (ϕ1 ϕ1^T Sw^−1) / (ϕ1^T Sw^−1 ϕ1)   (9)

where I is an identity matrix.

The eigenvector corresponding to the maximum eigenvalue of Eq. 8 is the discriminant vector ϕ2.

In Eqs. 6 and 8, Sw should be non-singular. The Sw matrix should be non-singular even for a more general discriminating analysis and multiorthonormal vectors (Foley and Sammon 1975; Liu et al. 1992; Cheng et al. 1992). If the determinant of Sw is zero, then singular value decomposition (SVD) of Sw has to be done. On using SVD, Sw is decomposed into three matrices U, W and V. The matrices U and W are unitary matrices, and V is a diagonal matrix with non-negative diagonal elements arranged in decreasing order. A small value of 10^−5 to 10^−8 is to be added to those diagonal elements of the V matrix whose value is zero. This process is called perturbation. After perturbing the V matrix, the matrix S1w is calculated by:

S1w = U W V^T   (10)

where S1w is the non-singular matrix which has to be considered in place of Sw.

The perturbing value should be the minimum that is just sufficient to make S1w non-singular. The method of SVD computation and its applications are given in (Klema and Laub 1980; Sullivan and Liu 1984). As per Eq. 7, when ϕ1 and ϕ2 are inner-producted, the resultant value should be zero. In reality, the inner product will not be exactly zero; this is due to floating point operations.

Computation of two-dimensional vector from the original n-dimensional input patterns

The two-dimensional vector set yi is obtained by:

yi = (ui, vi) = (Xi^T ϕ1, Xi^T ϕ2)   (11)

The vector set yi is obtained by projecting the original pattern 'X' onto the space spanned by ϕ1 and ϕ2 by using Eq. 11. The values of ui and vi can be plotted in a two-dimensional graph, to know the distribution of the original patterns.

Extended Kalman filter algorithm (EKF)

The algorithm uses a modified form of the BPA to minimize the difference between the desired outputs and the actual outputs, with respect to the inner products (the inputs to the non-linear function). In the conventional BPA, the difference between the desired outputs and the outputs of the network is minimized with respect to the weights. The EKF algorithm is a state estimation method for a non-linear system, and it can be used as a parameter estimation method by augmenting the state with the unknown parameters (Kalman and Bucy 1961). A multi-layered network is a non-linear system with a layered structure, and its learning algorithm can be regarded as parameter estimation for such a system (Liu et al. 1992; Scalero and Tepedelenliegu 1992). The EKF-based learning algorithm gives approximately the minimum variance estimates of the weights. The convergence of EKF is faster than that of BPA (Li 2001). Error values generated by the EKF are used to estimate the inputs to the non-linearities. The estimated inputs, along with the input vectors to the respective nodes, are used to produce an updated set of weights through a system of linear equations at each node. Using a Kalman filter at each layer solves these systems of linear equations.

In the EKF algorithm, the inputs to the non-linearities are estimated, and the error co-variance matrix is minimized. This minimization of the co-variance helps in faster convergence of the network. The steps involved in training the ANN by using the extended Kalman filter algorithm are:

Step 1: Initialize the weights and thresholds randomly between layers, the initial trace of the error co-variance matrix Q, and the accelerating parameters λ and Tmax to a very small value.
Step 2: Present the inputs of a pattern and compute the outputs of the nodes in the successive layers by

X̂i^(n+1) = 1 / (1 + exp(−Wij X̂i^(n))),   1 ≤ i ≤ Nn+1, 1 ≤ n ≤ M − 1   (12)


Step 3: Calculate the error E(p) of a pattern by:

E(p) = (1/2) Σi (di(p) − X̂i^M(p))²   (13)

and the mean squared error (MSE) over all the patterns in an iteration is obtained by:

E = Σp E(p)   (14)

where p is the pattern number, and d is the desired output.

Step 4: Calculate the accelerating parameter λ̂ by:

λ̂(p) = λ̂(p−1) + (1/Tmax) [ (d(p) − X̂(p))^T (d(p) − X̂(p)) / nL − λ̂(p−1) ]   (15)

Step 5: Allot the output X̂ of each node to Ŷ, which improves the estimation accuracy:

Ŷi^(M−1)(p) = X̂^M(p)   [1 ≤ i ≤ N(M−1)]   (16)

For n = M − 1 to 1 step −1
For i = 1 to Nn+1

Step 6: Calculate the error δ at each node in the output layer by:

δi^M(p) = X̂i^M (1 − X̂i^M)   (17)

Step 7: Calculate the temporary scalars β and α by:

βi^n(p) = δi^n(p)^T δi^n(p)   (18)

αi^n(p) = X̂i^n(p)^T Δi^n(p)   (19)

where

Δi^n(p) = Qi^n(p−1) X̂^n(p)   (20)

Step 8: The weights Wij are updated by:

Wij^n(p) = Wij^n(p−1) + δi^n(p) Ri(p) Δi^n(p) / (λ̂(p) + αi^n(p) βi^n(p))   (21)

where

Ri(p) = di(p) − Ŷi^n(p) in the output layer   (22)
Ri(p) = X̂i^n(p) − Ŷi^n(p) in the hidden layer   (23)

Step 9: Update the error co-variance matrix Q by:

Qi^n(p) = Qi^n(p−1) + [βi^n(p) / (λ̂(p) + αi^n(p) βi^n(p))] Δi^n(p) Δi^n(p)^T   (24)

Step 10: Update the estimation accuracy Ŷ in the hidden layers by:

Ŷi^n(p) = Ŷi^n(p) + δi^n(p) [X̂i^n(p)^T (Wij^n(p) − Wij^n(p−1))]   (25)

with Ŷi^(n+1) = Ŷ^n(Nn+1).

Step 11: Calculate the error δ at each node in the hidden layer by:

δi^n(p) = X̂i^(n+1) (1 − X̂i^(n+1)) Σj δj^(n+1)(p) Wij(p)   (26)

Step 12: Adopt Eqs. 14–26 until the weights and thresholds between the layers are updated. Stop training the network once the performance index of the network is reached; otherwise continue with Step 2.

Collection of data

An experimental study on turning was conducted with spheroidal graphite cast iron work material. The tool used for this work is made of 13 layers of coating with ALON, TiC, TiN, Ti(C, N) over a carbide substrate. This tool is named Widalon HK15. The ranges of the various process parameters are: cutting speed 200–500 m/min, feed 0.063–0.25 mm/rev and depth of cut 0.5–2 mm. The turning operation was carried out on a VDF high-speed precision lathe. The machining conditions are given in Table 2.

Figure 2 shows a block diagram of the experimental setup. A three-component piezoelectric crystal type dynamometer (KISTLER type 9441) was used with the single point cutting tool to collect the axial force (Fx), radial force (Fy) and tangential force (Fz). These cutting forces were collected for different speeds (S), feeds (F) and depths of cut (Dc). All six variables were used in the input layer of the ANN. The flank wear land width (Vb) was collected using a tool maker's microscope. This value is used as the target output in the output layer of the ANN.

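The per-node update of Steps 7–9 (Eqs. 18–21 and 24) reduces to a handful of vector operations. A sketch with numpy, treating δ and R as scalars for a single node; note that the Q update keeps the '+' sign exactly as printed in Eq. 24, whereas a standard recursive least-squares covariance update subtracts this term:

```python
import numpy as np

def kalman_node_update(w, Q, x_hat, delta, R, lam_hat):
    """One node's EKF-style update: w is its weight vector, Q its error
    co-variance matrix, x_hat the node's input vector."""
    Delta = Q @ x_hat                        # Eq. 20
    beta = delta * delta                     # Eq. 18 (delta^T delta for a scalar)
    alpha = float(x_hat @ Delta)             # Eq. 19
    denom = lam_hat + alpha * beta
    w_new = w + (delta * R / denom) * Delta              # Eq. 21
    Q_new = Q + (beta / denom) * np.outer(Delta, Delta)  # Eq. 24, sign as printed
    return w_new, Q_new

# Illustrative numbers: with Q = I and x_hat = [1, 0], the first weight
# moves by delta * R / (lam_hat + alpha * beta) = 0.05 / 1.25 = 0.04
w, Q = np.zeros(2), np.eye(2)
w1, Q1 = kalman_node_update(w, Q, np.array([1.0, 0.0]), delta=0.5, R=0.1, lam_hat=1.0)
print(w1)
```

Applying this update node by node, layer by layer (outer loops of Step 5), is what replaces the plain gradient step of the BPA.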

Table 2 Composition of work material

Work material: Spheroidal graphite cast iron
Hardness: 220–240 HB
Composition: Carbon 3.46%, Chromium 0.039%, Phosphorous 0.041%, Manganese 0.577%, Vanadium 0.004%, Sulphur 0.008%, Copper 0.101%, Silicon 2.799%, Nickel 0.063%, Molybdenum 0.002%

Tooling material:
Holder type: MA BNR 25 25 12
Tool type: SNMA 12 04 08
Tool material: Widalon HK 15, 13 layers of coating consisting of ALON, TiC, TiN, Ti(C,N) over a carbide substrate
Overhang: 26.8 mm
Entry angle: 75°
Rake angle: −7°
Included angle: 90°
Side clearance angle: 7°
End clearance angle: 6°
Nose radius: 0.8 mm

Experimental conditions:
Sharp tool — cutting speed 200, 300, 400, 500 m/min; feed 0.063, 0.08, 0.1, 0.2, 0.25 mm/rev; depth of cut 0.5, 1.0, 1.5, 2.0 mm
Progressive wear — cutting speed 300, 350, 400, 450 m/min; feed 0.1 mm/rev; depth of cut 0.5, 1.0, 1.5, 2.0 mm

Fig. 2 Block diagram of the experimental setup. Axial cutting force (Fx), radial cutting force (Fy) and tangential cutting force (Fz), flank wear land width (Vb)

Experimental procedure

Turning was done for combinations of different speeds, feeds and depths of cut, using a fresh cutting edge. The ranges of cutting conditions were decided for progressive wear of the tool. For each cutting condition, the three components of the cutting forces, Fx, Fy and Fz, were measured. Measurements were made at different intervals of time. Depending on the length of cut, machining was stopped after every 60–80 s, and Vb was measured. Static forces were recorded at two or three intermediate points between two wear measurements. The set of measurements immediately prior to a wear measurement was used for training the neural network. About 113 patterns were collected. During re-insertion of the tool inserts after every wear measurement, the inserts were slugged into the slot made in the tool holder, so that there was no change in the tool overhang.

Normalizing the patterns

The patterns are selected for training and testing. The inputs of the training and test patterns are normalized by:

Xi = Xi / sqrt(X1² + X2² + ... + Xn²)   (27)

and the outputs are normalized by:

Xi = Xi / Xmax   (28)

where

xi is the value of a feature, and
xmax is the maximum value of the feature.

The reasons for using Eq. 27 to normalize the inputs of the patterns are:

1. Each pattern is converted into unit length, so that the patterns lie in unit space,
2. The vast difference among the values of the features of a pattern is reduced. In such cases, the values of each feature of a pattern should be normalized within a close range of values, and
3. The number of floating point operations is minimized.

Selection of patterns for training ANN

The number of classes, the number of patterns in each class, the classification range in each class and the total number of training patterns are decided. If only one output is considered, the range of classification is simple.
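Equations 27 and 28 can be sketched as follows. Since the stated purpose of Eq. 27 is unit length, the Euclidean norm (square root of the sum of squares) is used here; the sample pattern is row 1 of Table 3:

```python
import math

def normalize_inputs(x):
    """Eq. 27: scale a pattern to unit Euclidean length."""
    norm = math.sqrt(sum(v * v for v in x))
    return [v / norm for v in x]

def normalize_output(v, v_max):
    """Eq. 28: scale an output by the maximum value of the feature."""
    return v / v_max

x = [450.0, 0.10, 1.5, 150.0, 115.0, 350.0]   # S, F, Dc, Fx, Fy, Fz of pattern 1
u = normalize_inputs(x)
print(round(sum(v * v for v in u), 10))   # -> 1.0 (unit length)
```

Note how the raw features span five orders of magnitude (0.10 versus 450); after Eq. 27 they all lie inside the unit sphere, which is exactly reason 2 in the list above.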


If more than one output is considered, a combination criterion has to be used. The patterns used for training the network should be such that they represent the entire population of the data. The selection of patterns is done by:

Ei² = Σ(j=1..nf) (xij − x̄j)² / σi²   (29)

where

Ei² is the maximum variance of a pattern,
nf is the number of features, and

σi² = Σ(j=1..nf) (xij − x̄j)² / L   (30)

where

x̄j is the mean of each feature, and
L is the number of patterns.

The value of Ei² is found for the patterns given in Table 3. Fifteen patterns with maximum Ei² and fifteen patterns with minimum Ei² are chosen for training the ANN, and the corresponding pattern numbers from each class are given in Table 4 with their classes. The remaining 83 patterns are considered as test patterns. The classification ranges and the number of patterns used for training and testing the ANN are given in Table 5. The Vb values have been grouped into two classes with 200 µm as the threshold. If the wear is less than this threshold, then the surface roughness of the machined component will normally be acceptable. More classes can be created with additional ranges of Vb.

Procedure for implementing optimal discriminant plane method in ANN

The steps involved in implementing the optimal discriminant plane method in the ANN are as follows:

Step 1: Patterns for training the ANN are selected by using Eqs. 29 and 30; the remaining patterns are considered as test patterns.
Step 2: The inputs of each pattern in the training and test sets are normalized by using Eq. 27, so that the length of each pattern is one. The outputs of all the patterns are normalized by using Eq. 28.
Step 3: The Sw and Sb matrices are calculated by using Eqs. 4 and 5. The Sw matrix is checked for non-singularity. If the Sw matrix is singular, singular value decomposition is applied to Sw and a small perturbation is done. After perturbation, the Sw matrix is recomputed as S1w.

Table 3 Patterns collected during turning
Inputs: S (m/min), F (mm/rev), Dc (mm), Fx (N), Fy (N), Fz (N), time (s). Output: Vb (µm).

No.  S    F     Dc   Fx   Fy   Fz   Time  Vb
1    450  0.10  1.5  150  115  350   45   15
2    450  0.10  0.5   60   50  115   38   15
3    450  0.10  2.0  180  130  450   32   15
4    350  0.10  0.5   60   90  125   30   15
5    300  0.06  0.5   45   80   70  428   20
6    300  0.06  0.5   40   80   65  428   20
7    200  0.06  0.5   40   65   75  428   20
8    400  0.10  0.5   60   85  110  428   20
9    300  0.08  0.5   50   90   85  428   20
10   300  0.08  0.5   45   90   85  428   20
11   400  0.06  0.5   40   75   95  428   20
12   500  0.08  0.5   50   40  105  428   20
13   200  0.10  0.5   60   90  110  428   20
14   400  0.08  0.5   55   90  100  428   20
15   500  0.10  0.5   50   95  110  428   20
16   200  0.10  0.5   45   85  105  428   20
17   450  0.10  1.0  115  105  250  428   20
18   300  0.10  0.5   45  110  105  428   20
19   300  0.10  0.5   40  105  105  428   20
20   200  0.06  0.5   35   65   70  428   20
21   500  0.06  0.5   45   70   90  428   20
22   200  0.08  0.5   40   75   80  428   20
23   200  0.08  0.5   50   75   90  428   20
24   400  0.20  0.5   75  115  195  428   20
25   300  0.25  0.5   70  140  225  428   20
26   200  0.20  0.5   50  125  190  428   20
27   200  0.20  0.5  120  130  190  428   20
28   400  0.25  0.5   80  125  230  428   20
29   300  0.20  0.5   65  130  185  428   20
30   500  0.25  0.5   60  115  215  428   20
31   500  0.20  0.5   60  110  195  428   20
32   200  0.25  0.5   85  160  220  428   20
33   200  0.25  0.5   60  150  210   35   20
34   450  0.10  0.5   60   55  105   92   30
35   350  0.10  0.5   55   85  140   75   30
36   450  0.10  1.5  130  115  345   65   30
37   450  0.10  2.0  100  140  450   60   30
38   450  0.10  1.0  100  105  250   70   40
39   400  0.10  0.5   55   85  110  129   45
40   350  0.10  0.5   60  100  120   42   45
41   450  0.10  0.5   25   70   85  111   55
42   450  0.10  2.0  160  140  470   85   55
43   350  0.10  0.5   60  100  125  165   60
44   450  0.10  1.5  150  105  330   94   65
45   350  0.10  0.5   60   80  125  202   75
46   450  0.10  1.0  115  110  260  110   75
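The selection in Eqs. 29–30 can be sketched as below. As printed, the two equations share the same numerator, which would make Ei² constant across patterns; this sketch therefore reads σ² as the per-feature variance over all L patterns, which is one plausible reading, and picks the k highest- and k lowest-scoring patterns for training:

```python
def select_patterns(data, k=2):
    """Score each pattern by its variance-normalized squared deviation from
    the feature means (Eq. 29-style), then return the indices of the k
    maximum-score and k minimum-score patterns for training."""
    L, nf = len(data), len(data[0])
    means = [sum(row[j] for row in data) / L for j in range(nf)]
    var = [sum((row[j] - means[j]) ** 2 for row in data) / L for j in range(nf)]

    def score(row):  # E_i^2 for one pattern
        return sum((row[j] - means[j]) ** 2 / var[j] for j in range(nf) if var[j] > 0)

    order = sorted(range(L), key=lambda i: score(data[i]))
    return order[-k:] + order[:k]

# Toy 2-feature data: the outlier and the pattern nearest the mean are chosen
data = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0], [10.0, 10.0]]
print(select_patterns(data, k=1))   # -> [5, 3]
```

Taking both extremes, as the paper does (fifteen maximum- and fifteen minimum-Ei² patterns), keeps outliers as well as typical patterns in the training set.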


Table 3 continued

No.  S    F     Dc   Fx    Fy    Fz    Time  Vb
47   450  0.10  0.5   25    70    90   149   80
48   450  0.10  2.0  110   140   470   110   80
49   400  0.10  0.5   50    80   105   240   90
50   350  0.10  0.5   55    85   130    85   90
51   450  0.10  0.5   60   105   125   180   90
52   400  0.10  0.5   60    90   100   128   92
53   350  0.10  0.5   70   100   120   365   93
54   450  0.10  1.5  135   110   325   123   94
55   400  0.10  0.5   50    85   100   172   95
56   350  0.10  0.5   70   100   115   290   96
57   450  0.10  0.5   75   150   150   212  100
58   400  0.10  0.5   60   100   105   207  105
59   450  0.10  1.0  150   140   275   146  105
60   350  0.10  2.0  200   140   460   135  105
61   450  0.10  0.5   60   100   115   358  107
62   400  0.10  0.5   75   205   145   244  110
63   350  0.10  0.5   45   100   110   403  115
64   450  0.10  0.5   60   105   125   241  115
65   450  0.10  1.5  155   170   370   151  115
66   400  0.10  0.5  100   250   150   276  120
67   400  0.10  0.5   50   112    85   277  120
68   450  0.10  0.5   70   175    85   311  125
69   450  0.10  0.5  125   250   150   314  130
70   450  0.10  1.5  300   245   410   161  130
71   350  0.10  2.0  160   140   460   179  130
72   450  0.10  0.5   75   145   120   438  133
73   450  0.10  0.5  125   275   150   351  140
74   350  0.10  2.0  400   240   550   197  140
75   450  0.10  1.5  325   230   450   210  148
76   350  0.10  0.5  100   170   120   474  150
77   450  0.10  2.0  510   350   600   229  150
78   350  0.10  0.5  135   225   170   510  160
79   450  0.10  0.5  160   325    50   422  165
80   450  0.10  1.5  350   255   455   389  165
81   400  0.10  0.5  150   175   150   240  165
82   450  0.10  1.0  200   160   310   547  170
83   350  0.10  0.5  150   260   175   180  170
84   450  0.10  0.5  160   255   150   583  185
85   400  0.10  0.5  150   340    50   426  185
86   350  0.10  0.5  165   300   140   459  195
87   450  0.10  0.5  160   260   140   620  200
88   450  0.10  0.5  220   425    60   457  205
89   450  0.10  1.0  220   180   320   211  240
90   450  0.10  0.5  230   500    65   488  240
91   450  0.10  1.5  480   350   350   274  245
92   450  0.10  2.0  700   390   640   264  275
93   450  0.10  1.0  325   290   345   240  290
94   400  0.10  0.5  200   350   180   494  295
95   450  0.10  1.0  400   375   365   269  340
96   450  0.10  1.5  450   390   400   308  350
97   450  0.10  1.0  450   400   360   299  365
98   450  0.10  1.5  680   580   560   338  375
99   450  0.10  1.0  450   430   370   529  400
100  450  0.10  2.0  850   700   750   329  400
101  450  0.10  1.5  750   650   500   361  400
102  400  0.10  0.5  175   350   140   299  400
103  450  0.10  1.5  240   850   620   394  540
104  450  0.10  1.0  550   590   430   366  550
105  450  0.10  2.0  1100  1200  840   336  585
106  450  0.10  1.5  260   1200  800   427  690
107  450  0.10  1.0  570   700   450   403  755
108  450  0.10  2.0  1200  1400  1000  364  785
109  450  0.10  1.5   20   1800  1400  454  825
110  450  0.10  2.0  1500  1800  1000  396  880
111  450  0.10  1.0  950   700   500   440  980
112  450  0.10  1.5  380   1880  1500  482  980
113  450  0.10  2.0  1250  1440  1040  428  990

Step 4: By using Eqs. 6 and 8, the discriminant vectors ϕ1 and ϕ2 are calculated and are given by

ϕ1 = [+0.535115, +0.000019, +0.002912, −0.143451, −0.805960, +0.208547]^T
ϕ2 = [+0.483875, −0.000229, −0.000692, +0.341656, −0.347608, −0.726844]^T   (31)

Step 5: The normalized inputs of the training patterns are transformed into two-dimensional vectors by using Eq. 31. The two-dimensional vectors for the training patterns are given in Table 6.

Training the ANN off-line

Step 6: The normalized inputs of the test patterns are transformed into two-dimensional vectors by using Eq. 31.
Step 7: The two-dimensional vectors of the training patterns are used as the inputs to the ANN, with the corresponding normalized outputs as the targets. Presenting all the training vectors forms one iteration.
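Step 5's projection with the reported discriminant vectors of Eq. 31 can be sketched as below. The input pattern here is an illustrative normalized 6-feature vector, not a row of Table 6:

```python
# Discriminant (projection) vectors as reported in Eq. 31
PHI1 = [+0.535115, +0.000019, +0.002912, -0.143451, -0.805960, +0.208547]
PHI2 = [+0.483875, -0.000229, -0.000692, +0.341656, -0.347608, -0.726844]

def to_uv(x):
    """Eq. 11 with the Eq. 31 vectors: (u, v) = (x . phi1, x . phi2)."""
    u = sum(a * b for a, b in zip(x, PHI1))
    v = sum(a * b for a, b in zip(x, PHI2))
    return u, v

# An illustrative unit-length input (S, F, Dc, Fx, Fy, Fz after Eq. 27)
x = [0.73, 0.00016, 0.0024, 0.24, 0.19, 0.57]
print(to_uv(x))
```

The same two vectors are applied unchanged to training patterns (Step 5), test patterns (Step 6) and online sensor readings, so a (u, v) pair computed at run time is directly comparable with the training distribution.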


Table 4 Patterns used for


Class Pattern number
finding out discriminant vectors
ϕ1 and ϕ2 , and training the ANN
I 5, 6, 11, 12, 13, 14, 22, 27, 32, 33, 41, 42, 44, 47, 48, 60, 71, 74, 75, 77
II 102, 105, 106, 107, 108, 109, 110, 111, 112, 113

Table 5 Number of patterns and


classification range in each class Pattern number No. of patterns in No. of pattern No. of patterns Range of Vb
and the class each class used for training used for testing (mm)
the ANN the ANN

1–87 Class I 87 20 67 ≤200


88–113 Class II 26 10 16 >200 and ≤990
Total 113 30 83

Step 8: At the end of each iteration, the two dimensional


vectors of the test patterns are presented to the net-
work. A minimum of 80% of total 83 patterns are
to be classified by the ANN correctly. If the classi-
fication performance of the ANN is less than 80%,
Step 7 is adopted. Otherwise, training of the ANN
is stopped and the weights obtained in the last iter-
ation are considered, as the final weights.

Implementing the ANN online for tool wear monitoring

Based on the above, an on-line method for tool wear condition monitoring is suggested below:

Step 1: The final weights of the network, obtained during training, are stored in the database.
Step 2: The cutting forces are collected from the dynamometer, and speed, feed and depth of cut are given as the inputs to the ANN.
Step 3: The inputs are normalized and transformed into two-dimensional vectors by using the ϕ1 and ϕ2 discriminant vectors.
Step 4: Classification rules are written based on the Vb values given in Table 5. During testing and online implementation, the output of the node in the output layer is compared with the classification rule. If the output of the network is within the specified value (Vb ≤ 200 µm), Step 2 is continued. Otherwise, corrective actions are implemented. Some of the corrective actions are:

  I.   Stopping the machining operation, or
  II.  Regrinding the tool, or
  III. Replacing the worn out tool with a new tool using the automatic tool changer (ATC), or
  IV.  Changing the speed, feed and depth of cut in case of adaptive machining.

Results and discussion

Fig. 3  Distribution of the test patterns without transformation

Figure 3 shows the outputs of EKF for the 83 test patterns without transformation. The network outputs in Vb are shown on the y-axis. The x-axis represents the pattern numbers other than those mentioned in the third column of Table 6. The legend 'o' represents the 67 test patterns from class I and the legend 'x' represents the 16 test patterns from class II. There is some mixing of test patterns belonging to one class with another class, which indicates some misclassification. Eq. 31 has ensured that class I and class II are maximally separated with minimum misclassification. The transformation method also helps in visualizing how the given set of patterns is distributed in space.

The ANN is trained with different numbers of nodes in the hidden layer to find out the exact number of hidden nodes required to represent the tool wear data. The number obtained is 6. The actual number of nodes in the hidden layer is fixed correctly only after going through repeated simulation and dynamic node analysis, in which pruning of the number of nodes is done. Pruning refers to reducing the number of nodes in the hidden layer. Training of the ANN started with 15 nodes in the hidden layer and reached 2 nodes in the hidden layer. During this process, the number of iterations taken, along with the classification performance of the ANN for each number of hidden nodes, was noted, and 6 nodes in the hidden layer was found to be optimal. Only one hidden layer is used; using more than one hidden layer would only increase the number of arithmetic operations. Normally, one hidden layer is sufficient for the ANN to represent most of the patterns available. Thresholds are not used during training, as they increase the number of iterations needed to reach the desired performance index of the network. The range of initial weights used is 0.25-0.45. The classification performance of the network has been taken as the criterion to stop training. During training, at the end of each iteration, all the test patterns are presented to the network and their correct classification is noted. Thirty patterns are used for training and the remaining 83 patterns are used for testing. While presenting the test patterns to the network, weight updating is not done. The number of test patterns for class I is 67 and for class II is 16, as given in Table 5. If the classification performance is not up to expectation, training of the network is continued. Once the desired classification performance is obtained, training of the network is stopped, and the weights are treated as the final weights for on-line implementation.

Table 6  The 2-dimensional vectors of the normalized training patterns

  Serial No.   Training pattern No.   u           v
  Class I
  1            5                      0.248405    0.113239
  2            6                      0.284535    0.146954
  3            11                     0.286177    0.157134
  4            12                     0.075246    −0.138888
  5            13                     0.126176    −0.220629
  6            14                     0.309048    0.021163
  7            22                     0.208151    −0.106131
  8            27                     0.377304    0.118302
  9            32                     0.033771    −0.257652
  10           33                     0.064219    −0.263593
  11           41                     0.427905    0.302125
  12           42                     0.296603    −0.172194
  13           44                     0.429281    0.293695
  14           47                     0.311602    −0.200051
  15           48                     0.344748    −0.018211
  16           60                     0.283635    −0.140848
  17           71                     0.296513    −0.163203
  18           74                     0.123165    −0.151501
  19           75                     0.136756    −0.104236
  20           77                     0.010996    −0.170528
  Class II
  21           102                    −0.036130   −0.128443
  22           105                    −0.205433   −0.199332
  23           106                    −0.375849   −0.230142
  24           107                    −0.389305   −0.451422
  25           108                    −0.396734   −0.273041
  26           109                    −0.410809   −0.560982
  27           110                    −0.470258   −0.240555
  28           111                    −0.410351   −0.563859
  29           112                    −0.397719   −0.275753
  30           113                    −0.328632   −0.092794

Fig. 4  MSE and classification performance of the network trained by using EKF without transforming the inputs of the tool wear patterns

Training the ANN by using EKF without reducing the dimensions of the inputs of the tool wear patterns from 6 to 2

The network is trained by using the EKF weight updating algorithm to learn the tool wear data. The inputs of the training patterns are presented to the network without reducing their dimensions. The training conditions used are: momentum factor 0.5, initial value of the error co-variance matrix Q of 20, and initial value of the accelerating parameter Tmax of 20. The number of nodes in the hidden layer is 6, so the configuration of the network is 6-6-1. The above values for Q and Tmax were obtained by simulation of the network. A maximum classification performance of 89.15% is obtained in 9 iterations. The mean squared error (MSE) is 3.919; the MSE is the summation of the squares of the differences between the desired and actual outputs of the network over all the training patterns. The classification performance and MSE curves are shown in Fig. 4. The classification performance increases up
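The EKF weight update itself can be sketched for a scalar-output model. For the paper's 6-6-1 network, H would be the Jacobian of the network output with respect to all the weights; here a linear model y = w·x stands in for the network so the sketch stays short and runnable, and the measurement-noise term r is our assumption (the paper's tuning parameters are Q and Tmax):

```python
import numpy as np

def ekf_weight_update(w, P, x, d, r=0.01):
    """One recursive Kalman filter update of the weight vector w.
    P is the error covariance of the weight estimate, x the input
    pattern, d the desired output."""
    y = w @ x                       # model output (stand-in for the ANN)
    H = x                           # gradient of output w.r.t. weights
    S = H @ P @ H + r               # innovation variance (scalar output)
    K = (P @ H) / S                 # Kalman gain
    w = w + K * (d - y)             # correct weights toward target d
    P = P - np.outer(K, H @ P)      # shrink the error covariance
    return w, P

w = np.zeros(2)
P = np.eye(2) * 20.0                # initial covariance: the paper uses 20
for x, d in [([1.0, 0.0], 1.0), ([0.0, 1.0], 2.0), ([1.0, 1.0], 3.0)]:
    w, P = ekf_weight_update(w, P, np.asarray(x), d)
# w is now close to the least-squares solution [1.0, 2.0]
```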
Table 7  Comparison of the performance of the network trained by using EKF and BPA, with and without transformed input patterns

  Method                 Configuration   % Classification          Iterations    MSE               Computational effort for
                         of ANN          (total test                                               weight updation (reverse
                                         patterns = 83)                                            operation)
                                         EKF         BPA          EKF    BPA    EKF      BPA      EKF      BPA
  Without transforming   6-6-1           89.16 (74)  96.36 (80)   9      19     1.1277   0.1463   563430   625280
  input patterns
  With transformed       2-6-1           95.18 (79)  96.36 (80)   8      9      0.4666   0.2732   90420    107800
  input patterns

  Numbers in parentheses are the total number of test patterns correctly classified.
  Total computational effort during training = number of training patterns (30) × number of iterations to reach the MSE × number of arithmetic operations for weight updation (EKF or BPA).

  Forward operation (computational effort is the same for training and online implementation):
    Without transformation: 98 arithmetic operations for one input pattern (6-6-1 network topology)
    With transformation (ODP): 80 arithmetic operations for one input pattern (2-6-1 network topology)

  Reverse operation (weight updation):
    Number of arithmetic operations for weight updation with EKF:
      15 + 5 n_L + Σ_{i=1}^{L−1} (n_{i−1}^3 + 28 n_{i−1}^2 + 18 n_{i−1} + 2) + Σ_{i=L−1}^{2} (4 n_i + 5) n_{i−1}
    Number of arithmetic operations for weight updation with BPA:
      9 n_L + 7 Σ_{i=1}^{L−1} n_i n_{i−1} + Σ_{i=L−1}^{2} (4 n_i + 5) n_{i−1}
    where L is the total number of layers (including the input layer), i is the layer number, and n_i is the number of nodes in the ith layer.
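The network-size reduction behind the effort figures in Table 7 can be checked directly from the topologies: the number of connection weights in a fully connected feedforward network (biases omitted, since thresholds are not used in the paper).

```python
def weight_count(layers):
    """Connection weights in a fully connected feedforward network;
    thresholds/biases are omitted, as in the paper."""
    return sum(a * b for a, b in zip(layers, layers[1:]))

full = weight_count([6, 6, 1])      # 6*6 + 6*1 = 42 weights
reduced = weight_count([2, 6, 1])   # 2*6 + 6*1 = 18 weights
print(full, reduced)                # 42 18
```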

to the 9th iteration and then decreases; there is no improvement in the classification performance during further training of the network.

Fig. 5  MSE and classification performance of the network trained by using EKF with transformed inputs of the tool wear patterns

Training the ANN by using EKF by reducing the dimensions of the inputs of the tool wear patterns from 6 to 2

The network is trained by using EKF. The inputs of the training patterns are transformed into 2 dimensions and presented to the network. The initial value of the accelerating parameter Tmax is 20, and the initial value of the error co-variance matrix Q is 20. The configuration of the network is 2-6-1. A maximum classification performance of 95.18% is obtained in 2 iterations at an MSE of 1.052 (Fig. 5), whereas the classification performance of the network is only 89.15% (Fig. 4) when it is trained without transforming the dimensions of the input patterns. Because of the transformation of the input patterns, (i) the size of the network is reduced, (ii) the classification performance has increased, and (iii) the number of iterations at which the maximum classification performance is obtained is reduced. A comparison of the performance of the network trained by using EKF with transformed input patterns and without transforming the input patterns is given in Table 7. From Table 7, it can be seen that the classification performance of EKF is
95.18%, which is higher, and the iterations, MSE and computational effort are all reduced for EKF with transformation of the test patterns when compared to EKF without transformation of the test patterns.

The performance of EKF is close to that of BPA with transformation of the input patterns, but the iterations and computational effort are higher for BPA. The convergence efficiency of the network depends purely on the quality of the patterns; that is, the patterns should be orthogonal to each other, without any redundancy. The results in Table 7 show that EKF with ODP is an alternative algorithm for tool wear monitoring.

The network learns the patterns very quickly and efficiently when the input pattern dimension is small, which is achieved through ODP. The inner product value from Eq. 31 is 0.3384, which indicates that the tool wear patterns are still not completely orthogonal. If the inner product value is close to zero, the patterns are almost orthogonal and hence the classification will be maximum. Additional preprocessing strategies can be implemented to make the patterns maximally orthogonal.

Conclusions

The artificial neural network is trained using the extended Kalman filter algorithm with transformed input patterns. This method has multifold advantages over an ANN trained without transforming the input patterns: (i) the number of nodes in the input layer is reduced, and hence the size of the network is also reduced; (ii) the number of arithmetic operations is drastically reduced; and (iii) the number of iterations at which the maximum classification performance is obtained is reduced. The results can be further improved when the data are almost orthogonal. From Table 7, it appears that the performance of EKF is lower than that of BPA. Since EKF is an estimation algorithm, there is every possibility that, with a correct way of estimating the tool wear online, it could perform much better than BPA. As future work, different combinations of tool and workpiece can be considered and the estimation performance of EKF can be studied.