Discriminant Analysis

Discriminant Analysis is a classification technique. It is similar to Multiple Regression Analysis, but the dependent variable in Discriminant Analysis takes categorical (nominal) values.

Requirement
To perform this classification, a priori defined groups are required. From these groups and the corresponding independent variables, a discriminant function is derived. This function is a linear combination of the independent variables, and it enables us to assign any object to one of the groups based on the score obtained from the function.
The groups are mutually exclusive and collectively exhaustive. The purpose of Discriminant Analysis is to minimize the variance within each group and to maximize the variance between the groups.

Applications
1. This analysis can be used to assess the creditworthiness of customers.
2. In materials management, Discriminant Analysis can be used to classify items as vital, essential, and desirable (VED analysis).

Assumptions
1. The independent variables in this model follow a multivariate normal distribution.
2. The variance-covariance matrix is the same in each group.

The model
Z = W′X
where Z is the 1×n vector of discriminant scores,
W′ is the 1×p vector of discriminant weights, and
X is the p×n data matrix.
Here n is the number of observations and p is the number of independent variables.
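
As a minimal sketch of this scoring step (with hypothetical weights and data, not the cereal example used later), the model is a single matrix product. MATLAB/Octave is used here and in the sketches that follow.

    % Minimal sketch of Z = W'X with hypothetical numbers:
    % p = 3 independent variables, n = 4 observations.
    W = [0.4; -1.2; 0.7];     % p x 1 vector of discriminant weights
    X = [110 100 120  90;     % p x n data matrix, one column per observation
           2   3   1   2;
           1   0   2   1];
    Z = W' * X                % 1 x n vector of discriminant scores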


The formation of the discriminant function is based on the principle of maximizing the variation between the groups while minimizing the variation within each group. Using the data matrix, the mean-corrected sum-of-squares and cross-products matrices for the individual groups are formed; we will denote these as W1, W2, W3, etc. Similarly, the mean-corrected sum-of-squares and cross-products matrices for the total group are formed by considering all the observations; we will denote these as T1, T2, T3, etc.
Now

W = W1 + W2 + W3 + … + Wk

and similarly

T = T1 + T2 + T3 + … + Tk

where k is the number of groups. Also,

T = W + B  =>  B = T − W
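
The following sketch builds W, T and B from three group data matrices. G1, G2 and G3 are stand-ins for the raw group data (random numbers here), and implicit expansion is assumed (MATLAB R2016b or later, or Octave).

    % Sketch: W, T and B from the group data matrices (n_i x p each).
    G1 = randn(17, 8); G2 = randn(20, 8); G3 = randn(6, 8);  % placeholder data
    G  = {G1, G2, G3};
    A  = vertcat(G{:});                 % all observations stacked together
    mT = mean(A, 1);                    % column means of the whole data
    p  = size(A, 2);
    W  = zeros(p); T = zeros(p);
    for i = 1:numel(G)
        xi = G{i} - mean(G{i}, 1);      % mean-corrected within group i
        ti = G{i} - mT;                 % corrected with the overall means
        W  = W + xi' * xi;              % W = W1 + W2 + ... + Wk
        T  = T + ti' * ti;              % T = T1 + T2 + ... + Tk
    end
    B = T - W;                          % between-groups SSCP matrix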
We know that with respect to the linear composite w, the between-groups sum of squares is given by w′Bw and the within-groups sum of squares is given by w′Ww. Hence

λ = (w′Bw) / (w′Ww)    --- (1)

We should choose w so as to maximize the value of λ. From (1),

λ w′Ww = w′Bw
∴ w′Bw − λ w′Ww = 0


Setting the partial derivative of this expression with respect to w equal to zero:

∂/∂w (w′Bw) − ∂/∂w (λ w′Ww) = 0
=> Bw − λWw = 0
=> (B − λW)w = 0

Pre-multiplying by W⁻¹,

(W⁻¹B − λI)w = 0

where the λ are the eigenvalues of W⁻¹B and w is the corresponding eigenvector. The entries of the eigenvectors are called the discriminant weights. The number of discriminant functions to be considered is min(p, k − 1), where p is the number of independent variables and k is the number of groups; we keep as many discriminant functions as this requirement allows.
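
Continuing the sketch above, the weights fall out of an eigen decomposition of W⁻¹B. Taking real parts below is a practical assumption: the trailing eigenvalues of this nonsymmetric product are numerically zero and can come back as tiny complex pairs.

    % Sketch: discriminant weights as eigenvectors of inv(W)*B.
    [V, D] = eig(W \ B);                    % W\B computes inv(W)*B stably
    [lam, idx] = sort(real(diag(D)), 'descend');
    k = 3;                                  % number of groups
    m = min(p, k - 1);                      % number of functions to retain
    lambda  = lam(1:m);                     % the m largest eigenvalues
    weights = real(V(:, idx(1:m)));         % one column of weights per function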

Classification rules:
1. Classification of an object based on a single discriminant function:
a) Compute the mean discriminant value (Z̄) for each group by substituting the means of the independent variables of that group into the discriminant function.
b) For the new object also, calculate the Z value.
c) Calculate the distances between the value in step (b) and each mean discriminant value in step (a).
d) The object is classified as belonging to the group with the shortest distance calculated in step (c).
2. Classification of an object based on two discriminant functions (a sketch follows this list):
a) Compute the mean discriminant values (Z̄1, Z̄2) for each group by substituting the means of the independent variables into the discriminant functions. Thus we get (Z̄1, Z̄2) for each group.
b) For the new object also, calculate the (Z1, Z2) values.
c) Calculate the distances between the values in step (b) and each pair of mean discriminant values in step (a) using the Euclidean distance formula.
d) The object is classified as belonging to the group with the shortest distance calculated in step (c).
This process can be continued even for more than two discriminant functions.
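
Continuing the sketch, the nearest-centroid rule can be written in a few lines; x0 is a hypothetical new observation.

    % Sketch: classify a new object by the nearest group centroid in the
    % discriminant space (Euclidean distance, as in rule 2 above).
    Zbar = zeros(numel(G), m);
    for i = 1:numel(G)
        Zbar(i, :) = mean(G{i}, 1) * weights;  % centroid scores per group
    end
    x0 = randn(p, 1);                          % hypothetical new observation
    z0 = x0' * weights;                        % its discriminant scores
    [~, grp] = min(sum((Zbar - z0).^2, 2))     % index of the closest centroid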

Example: The table below contains data on 43 breakfast cereals produced by three different manufacturers, G (17 brands, Group 1), K (20 brands, Group 2) and Q (6 brands, Group 3). The variables are X1 = calories, X2 = protein, X3 = fat, X4 = sodium, X5 = fiber, X6 = carbohydrates, X7 = sugar and X8 = potassium.

[Data table: brand (ACCheerios, Cheerios, CocoaPuffs, …, QuakerOatmeal), manufacturer, X1 through X8 and group for each of the 43 cereals.]

x1 = G1 − column means of G1 (the mean-corrected data matrix of group G1)
W1 = x1′x1
[17×8 and 8×8 numerical matrices omitted.]

x2 = G2 − column means of G2
W2 = x2′x2
x3 = G3 − column means of G3
[Numerical matrices omitted.]

W3 = x3′x3
W = W1 + W2 + W3
Column means of the whole data; t1 = G1 − column means of the whole data
[Numerical matrices omitted.]

T1 = t1′t1
t2 = G2 − column means of the whole data; T2 = t2′t2
[Numerical matrices omitted.]

t3 = G3 − column means of the whole data; T3 = t3′t3
T = T1 + T2 + T3
B = T − W
[Numerical matrices omitted.]

W⁻¹B is then formed, and we have to find its eigenvalues and eigenvectors. [8×8 matrix and full eigenvalue/eigenvector output omitted.] The two highest eigenvalues are 1.8698 and 0.4810; the corresponding eigenvectors, Eigen Vector 1 and Eigen Vector 2, supply the weights of the two discriminant functions.

Hence the discriminant functions are formed as follows:

Z1 = 0.0352X1 − 0.2144X2 − 0.4886X3 − 0.0003X4 + 0.8133X5 + 0.1388X6 − 0.1613X7 − 0.0173X8
Z2 = 0.0110X1 + 0.2490X2 − 0.3488X3 − 0.0040X4 − 0.8854X5 − 0.1436X6 − 0.1347X7 − 0.0226X8

For Group G1: Z̄1 = 3.8336, Z̄2 = −2.0190
For Group G2: Z̄1 = 4.9193, Z̄2 = −1.1794
For Group G3: Z̄1 = 2.8459, Z̄2 = −0.7681

Classification of the objects is given as follows:

[Classification table: for each of the 43 cereals, the values of X1 through X8, the scores Z1 and Z2, the Euclidean distances from the G1, G2 and G3 centroids, the actual group and the predicted group.]

After substituting the data values into the discriminant functions, each object is classified as above.

Calculation of the misclassification error
1) Re-substitution method – In this method the whole sample data is considered: the available group data is re-substituted to calculate the Apparent Error Rate (APER). Suppose there are two groups G1 and G2, and let n1, n2 be the numbers of observations in the two groups. If the discriminant function discriminates as shown in the table below, then the misclassification rate is 0.

                     Predicted membership
Actual membership      G1    G2
G1                     n1     0
G2                      0    n2

Now suppose n1′ observations are misclassified in G1 and n2′ observations are misclassified in G2, as shown below.

                     Predicted membership
Actual membership      G1         G2
G1                     n1 − n1′   n1′
G2                     n2′        n2 − n2′

The error rate is calculated as follows:

Error rate = (n1′ + n2′) / (n1 + n2)

Tables of the above type are called confusion matrices. For the given example, the confusion matrix can be formed as follows:

                     Predicted membership
Actual membership      G1   G2   G3
G1                     15    2    0
G2                      4   15    1
G3                      3    0    3

APER = (2 + 4 + 1 + 3)/43 = 10/43 = 23.256%

2) Hold-out method – Divide the sample in each group into two portions. One portion of each group is used for the classification; the second portion of each group is used for the error rate estimation.
3) The U method (Cross-Validation, or Jack-knife method) – If there are two groups, leave out one element of a group at a time, do the classification with the remaining elements of that group and all the elements of the other group, and then use the left-out element for the error rate calculation. Do this until all the elements in the group have been subjected to the misclassification procedure. Then go to the second group: leave out one element, do the classification with the remaining elements and all the elements of the other group, and use the left-out element for the misclassification test. Continue until all the elements are subjected to the misclassification procedure.
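
As a quick check of the arithmetic, the APER can be read off the confusion matrix; this sketch reproduces the 10/43 figure above.

    % Sketch: apparent error rate (APER) from the example's confusion
    % matrix; CM(i,j) counts actual group i predicted as group j.
    CM = [15  2  0;
           4 15  1;
           3  0  3];
    APER = (sum(CM(:)) - trace(CM)) / sum(CM(:))   % = 10/43 = 0.2326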

Then use the left out element for the error rate calculation. M is tested for a particular ‘α’ value. if Z2 < Z1 If the distance D2 is higher. Do this until all the elements in the group are subjected to misclassification procedure. we have some tests to solve our purpose.and with the other elements of the other group. thus M follows F distribution with degrees of freedom p. which is used mainly for two-group problem. n1+n2-p-1. The distance is calculated as follows:D2 = Z2 – Z1. Here D2 is the generalized distance between the centroids of the two groups. Go to the second group. The expression of M is as follows: M= n n 2 (n1 + n2 − p − 1) D 2 (n1 + n )(n1 + n − 2) p In this test. It was shown that the distance D2 depends upon a test statistic M.15 - . Continue until all the elements are subjected to the misclassification procedure. leave one element and do the classification for the remaining elements with the all the elements of the other group the nthe left out element is used for misclassification test. do the classification. If the value of M is significant at that point then the discriminant function is significant otherwise not. The centroid for each group is arrived by substituting the mean values of each independent variable in the discriminant function. then it means that the discriminant function is discriminating effectively. The corresponding Z1 and Z2 values are the centroid of the groups. Here n1 and n2 are the number of observations in the two groups and p is the number of independent variables. Statistical Tests: To identify whether the developed discriminant functions discriminate properly. One useful test is Mahalonobis’ D2 test statistic. . if Z1 < Z2 Z1 – Z2.

If M is not significant, we can also say that there is not much difference between the two groups. When there are two groups (so that the number of discriminant functions is only one), it is possible to test the significance of the discriminant function by the above test. But if there are more than two groups, i.e., the number of discriminant functions is more than one, then Bartlett's χ² test statistic comes in handy. This will help us to retain the significant discriminant functions. The statistic is given by

V = {(n − 1) − ½(p + k)} × Σ ln(1 + λj), the sum running over j = 1, …, r

The λ values are the eigenvalues of the W⁻¹B matrix. The test statistic V is tested to identify whether it is significant or not; this is done by using the χ² distribution with degrees of freedom p(k − 1) for a particular α value. If the V value is significant, then the next job is to identify how many discriminant functions are significant out of the total number of discriminant functions. This test is carried out in the following manner. First calculate the V1 value:

V1 = {(n − 1) − ½(p + k)} ln(1 + λ1)

This is the value corresponding to the highest eigenvalue. Subtract V1 from V, and test whether the V − V1 value is significant or not using degrees of freedom (p − 1)(k − 2). If the V − V1 value is not significant, then the first discriminant function, corresponding to the highest eigenvalue, is significant and the other discriminant functions are not. If it is significant, then the significance of the next discriminant function is checked.

If V − V1 is significant, it means that there are some more discriminant functions which may contribute to discriminating the groups. Now V2 is calculated corresponding to the second highest eigenvalue as follows:

V2 = {(n − 1) − ½(p + k)} ln(1 + λ2)

Now calculate V − V1 − V2 and test the significance of this value with degrees of freedom (p − 2)(k − 3). If the V − V1 − V2 value is not significant, then come to the conclusion that the second discriminant function is significant in addition to the first one, and stop the procedure. If it is significant, then the second discriminant function is significant and we should also proceed to explore whether other significant discriminant functions exist. This whole process is given in the flowchart that follows.

[Flowchart: Calculate V → i = 1 → calculate Vi → subtract Vi from V (the discriminant function of the λi value is significant) → Is V − Vi significant? If yes, set i = i + 1 and repeat: discriminant function Zi is significant in addition to the previous discriminant functions, if any. If no, stop.]
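
The sequential procedure in the flowchart can be sketched as a short loop; n, p, k and the eigenvalues are those of the cereal example, and chi2inv is assumed available (Statistics Toolbox or the Octave statistics package).

    % Sketch: Bartlett's sequential chi-square test for the number of
    % significant discriminant functions.
    n = 43; p = 8; k = 3;                 % values of the cereal example
    lambda = [1.8698; 0.4810];            % eigenvalues of inv(W)*B
    c = (n - 1) - (p + k)/2;
    rest = c * sum(log(1 + lambda));      % Bartlett's V statistic
    nsig = 0; alpha = 0.05;
    for i = 1:numel(lambda)
        df = (p - i + 1)*(k - i);         % p(k-1), then (p-1)(k-2), ...
        if rest < chi2inv(1 - alpha, df)
            break                         % remainder not significant: stop
        end
        nsig = nsig + 1;                  % function i is significant
        rest = rest - c * log(1 + lambda(i));  % remove V_i, test the rest
    end
    nsig                                  % number of functions retained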

Interpretation of the attributes with respect to the discriminant axes
Labeling the discriminant space:
For the interpretation we can follow these steps.
Step 1: First determine how many discriminant functions to retain, and then go for the calculation of the discriminant loadings for the discriminant problem. The discriminant loading is the cosine of the angle between the attribute and the discriminant axis.
Step 2: Calculate the Wilks' Lambda (Λ) and also the univariate F-value associated with each variable. The approximate F-value for each discriminant function can be obtained as follows; for the first discriminant function,

F = [(n − k)/(k − 1)] × highest eigenvalue

Thus the F value for each discriminant function is calculated. The univariate F-value associated with a particular variable can be calculated as follows:

Fi = [1/(Wilks' Λi) − 1] × (n − k)/(k − 1)

Stretch the loading of each variable by multiplying it with the univariate F-value associated with that particular variable. Next draw the vectors from the origin, O; then we know the positions of the attributes, i.e., in which quadrant they lie.
Step 3: Stretch each group centroid also by multiplying it with the approximate F-values for the different discriminant functions. The new centroid is also calculated according to the discriminant loadings.
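
With the numbers of the cereal example, the two F formulas above work out as in this sketch.

    % Sketch: approximate F per function and univariate F per variable.
    n = 43; k = 3;                         % 43 cereals, 3 groups
    lambda = [1.8698; 0.4810];             % eigenvalues from the example
    Ffun = (n - k)/(k - 1) * lambda        % approximate F per function
    WilksL = 0.8140;                       % Wilks' lambda of X1 (see below)
    F1 = (1/WilksL - 1) * (n - k)/(k - 1)  % univariate F of X1, about 4.57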

Step 1: We have already seen how to decide the number of discriminant functions to retain. Now let us see how the discriminant loadings are calculated.
(a) Rescaling the discriminant weights:

Wj* = C Wj

where Wj are the discriminant weights and Wj* are the rescaled discriminant weights. C contains the square roots of the diagonal elements of the total sample variance-covariance matrix.
(b) Calculation of the discriminant loadings:

l^j = R Wj*

where R is the correlation matrix. R can be obtained as follows:

R = 1/(n − 1) (D^(−1/2) S D^(−1/2))

D^(−1/2) contains the reciprocals of the standard deviations of the variables in the original data matrix. The mean-corrected sum-of-squares and cross-products matrix (S) can be obtained by pre-multiplying the mean-corrected data matrix by its transpose. The covariance matrix is obtained by dividing each element of S by n − 1, the sample size less 1.
For the given example:
(a) Cov_mat = S/42 [8×8 matrix omitted]
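
Continuing the earlier sketch (A is the stacked data and weights the retained eigenvectors), steps (a) and (b) look as follows.

    % Sketch: discriminant loadings from rescaled weights.
    n  = size(A, 1);
    x  = A - mean(A, 1);                % mean-corrected data matrix
    S  = x' * x;                        % corrected SSCP matrix
    C  = diag(sqrt(diag(S / (n - 1)))); % sqrt of the covariance diagonal
    Dm = diag(1 ./ std(A));             % D^(-1/2): reciprocal std devs
    R  = (Dm * S * Dm) / (n - 1);       % correlation matrix, as in the text
    Wstar    = C * weights;             % rescaled weights W*
    loadings = R * Wstar                % one column of loadings per function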

C = square roots of the diagonal elements of the variance-covariance matrix [8×8 diagonal matrix omitted]
(b) The D^(−1/2) matrix [8×8 diagonal matrix omitted]
W1* = C × (Eigen Vector 1), W2* = C × (Eigen Vector 2) [output omitted]

R = 1/(n − 1)(D^(−1/2) S D^(−1/2)) [8×8 correlation matrix omitted]

Now the discriminant loadings can be arrived at: l^1 = R W1* and l^2 = R W2* [output omitted].

Variable contribution
In the previous section we have seen how the discriminant loadings are calculated. Discriminant loadings are the correlations between the discriminant function and the corresponding variables. These discriminant loadings will help us to know about the contribution of the variables in discriminating the objects. Let us suppose a variable is attached to the first discriminant function; then it contributes more to the discrimination

process, provided that discriminant function represents more of the variation compared to the other discriminant functions.

Step 2: Calculation of Wilks' Lambda (Λ)
For variable X1:
G1(:,1) − Mn1(1) [17 mean-corrected values omitted], giving w1 = 1.6941e+003
G2(:,1) − Mn2(1) = (−41, −1, −11, −1, −1, −1, −1, −1, −11, 9, −1, 49, 9, 29, −21, −11, 9, −1, −1, −1)

w2 = 5980
G3(:,1) − Mn3(1) = (30, 30, 10, −40, −40, 10), giving w3 = 5200
W = w1 + w2 + w3 = 12874
Similarly,
T1 = (G1(:,1) − Mn_whole(1))′ (G1(:,1) − Mn_whole(1)) = 6.4631e+003
T2 = (G2(:,1) − Mn_whole(1))′ (G2(:,1) − Mn_whole(1)) = 2.9988e+003
T3 = (G3(:,1) − Mn_whole(1))′ (G3(:,1) − Mn_whole(1)) = 6.3531e+003
T = T1 + T2 + T3 = 1.5815e+004
Wilks' lambda = W/T = 0.8140
Similarly, Wilks' lambda can be calculated for the other variables, and the univariate F-values can also be calculated as mentioned earlier. These are given in the following table.

Variable   Wilks' lambda   Univariate F-value
X1         0.8140          4.570
X2         0.9882          0.239
X3         0.7864          5.432
X4         0.9293          1.522
X5         0.7783          5.697
X6         0.8381          3.864
X7         0.9125          1.918
X8         0.9636          0.755
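
Continuing the sketch, the same Wilks' lambda can be computed per variable in a short loop; for X1 of the example this reproduces 12874/15815 = 0.8140.

    % Sketch: Wilks' lambda of a single variable (column j of the data).
    j = 1; Wj = 0; Tj = 0;
    for i = 1:numel(G)
        gi = G{i}(:, j);
        Wj = Wj + sum((gi - mean(gi)).^2);   % within-group sum of squares
        Tj = Tj + sum((gi - mT(j)).^2);      % total sum of squares
    end
    WilksLj = Wj / Tj                        % Wilks' lambda of variable j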

Now the previously calculated discriminant loadings are stretched by multiplying them with the respective variable's univariate F-value. The stretched discriminant loadings are given in the following table.

[Table: stretched discriminant loadings of X1 through X8 on Function 1 and Function 2; numerical values omitted. This is followed by the centroid discriminant loadings and the supporting computations G (group data), x (mean-corrected data), s = x′x and cov_mat = s/42.]

[c = diagonal matrix of the square roots of the diagonal of cov_mat; ev1 and ev2 = the two retained eigenvectors; wstar1 = c*ev1. Numerical output omitted.]

[wstar2 = c*ev2; d^(−1/2) = diagonal matrix of the reciprocal standard deviations; R = correlation matrix; l*1 = R*wstar1 and l*2 = R*wstar2; STD(g). Numerical output omitted.]

[l*1, l*2, the variable means, and the stretched centroid products l*1 × mean and l*2 × mean. Numerical output omitted.]
