You are on page 1of 29

# Discriminant Analysis

Discriminant Analysis is a classification technique. It is similar to the Multiple
Regression Analysis. But the dependent variables in the Discriminant Analysis assume
the categorical (nominal) values.

Requirement
To do this classification, a priori defined groups are required. With these groups and the
corresponding independent variables, a Discriminant function is arrived. This function is
the linear combination of the independent variables. The discriminant function enables us
to assign any object into any one of the group based on the score arrived by using the
discriminant function.
The groups are mutually exclusive and collectively exhaustive. The purpose of
Discriminant Analysis is to minimize the variance within the group and to maximize
variance between the groups.

Applications
1. This analysis can be done to know the credit worthiness of the customers.
2. In materials management, to classify the items as vital, essential and desirable
(VED analysis), the Discriminant analysis can be used.

Assumptions
1. The independent variables in this model follow multivariate normal distribution.
2. The variance-covariance matrix computed in each group is same.

The model
Z = W′ X
Where, Z is the 1×n vector of discriminant score,
W’ is the 1×p vector of discriminant weights and
X is p×n matrix
Here n is the number of observations and p is the number of independent variables.

-1-

The formation of discriminant function is based on the principle of maximizing variations
between the groups and minimizing the variations within the group. Using the data
matrix, the mean corrected sum-of-squares and cross product matrices for the groups
are formed. We will denote these as W1, W2, W3 etc. Similarly the mean corrected sumof-squares and cross product matrices and for the total group are formed by considering
all the observations. We will denote these as T1, T2, T3 etc.
Now
W = W1 + W2 + W3 + …………. + Wk
Similarly
T = T1 + T2 + T3 + …………. + Tk
Where k is the number of groups
Also
T=W+B
=>B=T–W
We know that with respect to linear composite ‘w’ the between sum-of-squares is given
by
ŵBw
and similarly within sum-of-squares is given by
ŵWw
ˆ =
λ

ˆ Bw
w
ˆ Ww
w

--- (1)

ˆ
Here we should maximize the values of λ

From (1),
ˆ ŵWw = ŵBw
λ

∴ŵBw –

ˆ ŵWw = 0
λ

-2-

The number of the discriminant functions to be considered can be found by min (p. (W-1B – λˆ I)w = 0 Where λˆ are the eigen values and w is the eigen vector. calculate the Z value. We should keep as many discriminant functions according to the above requirement. Classification of object based on two discriminant functions:a) Compute the mean discriminant values ( Z i ) for each group by substituting the mean of the observations of independent variables in the discriminant functions. The values of the eigenvectors are called discriminant weightage. Z 2 ) for each group. 2. Classification rules: 1. Classification of object based on single discriminant function:a) Compute the mean discriminant values ( Z ) for each group by substituting the mean of the observations of independent variables in the discriminant function. Here p is the number of independent variables and k is the number of groups. k-1). c) Calculate the distances between the value in step (b) and each mean discriminate value in step (a). b) For the new object also. Thus we will get ( Z 1 .Now making the partial derivative of λˆ with respect to ŵ equal to zero ∂ ∂ ˆ ˆ Bw ) − ˆ Ww ˆ ) =0 (w (λw ˆ ˆ ∂w ∂w  B w – λˆ Ww = 0  (B – λˆ W)w = 0 Now pre-multiplying with W-1. -3- . d) The object is classified as belonging to a particular group on the basis of shortest distance calculated in step (c).

-4- . calculate the ( Z 1 . Z 2 ) c) Calculate the distances between the value in step (b) and each mean discriminate value in step (a) using Euclidean Distance formula. K and Q). This process can be continued even for more than two discriminant functions. d) The object is classified as belonging to a particular group on the basis of shortest distance calculated in step (c). Example: A table is given below which contains data on breakfast cereals produced by three different manufacturers (G.b) For the new object also.

X1 110 110 110 110 110 110 110 110 100 130 100 110 140 100 110 100 110 70 110 100 110 110 110 110 110 100 120 110 160 120 140 90 100 120 110 110 110 120 120 100 50 50 100 Protein X2 Fat X3 Sodium X4 Fiber X5 CarbohyDrates X6 Sugar X7 PotassIum X8 Group 2 2 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 0 0 0 3 0 1 0 0 0 1 2 1 2 0 0 1 0 1 0 2 2 2 0 0 2 180 290 180 180 180 250 260 180 220 170 140 200 190 200 140 200 200 260 125 290 90 140 220 125 200 0 240 170 150 190 220 170 320 210 290 70 230 220 220 150 0 0 0 1.5 21 12 15 13.5 10.Brand ACCheerios Cheerios CocoaPuffs CountChocula GoldenGrahams HoneyNutCheerios Kix LuckyCharms MultiGrainCheerios OatmealRaisinCrisp RaisinNutBran TotalCornFlakes TotalRaisinBran TotalWholeGrain Trix Cheaties WheatiesHoneyGold AllBran AppleJacks CornFlakes CornPops CracklinOatBran Crispix FrootLoops FrostedFlakes FrostedMiniWheats FruitfulBran JustRightCrunchyNuggets MueslixCrispyBlend NutNHoneyCrunch NutriGrainAlmondRaisin NutriGrainWheat Product19 RaisinBran RiceKrispies Smacks SpecialK CapNCrunch HoneyGrahamOhs Life PuffedRice PuffedWheat QuakerOatmeal Manuf G G G G G G G G G G G G G G G G G K K K K K K K K K K K K K K K K K K K K Q Q Q Q Q Q Cal.5 2.5 0 0 2 1.5 17 12 12 15 11.7 10.5 0 4 3 0 3 1 9 1 1 1 4 1 1 1 3 5 1 3 0 3 3 1 5 0 1 1 0 1 2 0 1 2.5 2 0 0 0 1.5 21 15 16 13 17 16 7 11 21 13 10 21 11 14 14 14 17 17 15 21 18 20 14 22 9 16 12 12 12 13 10 1 10 1 13 13 9 10 3 12 6 10 8 3 14 3 12 3 8 5 14 2 12 7 3 13 11 7 12 6 13 9 7 2 3 12 3 15 3 12 11 6 0 0 1 70 105 55 65 45 90 40 55 90 120 140 35 230 110 25 110 60 320 30 35 20 160 30 30 25 100 190 60 160 40 130 90 45 240 35 40 55 35 45 95 15 50 110 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 2 6 1 1 1 3 2 2 2 3 3 2 3 3 1 3 2 4 2 2 1 3 2 2 1 3 3 2 3 2 3 3 3 3 2 2 6 1 1 4 1 2 5 -5- .

5882 0.7059 0.0095 -0.8824 -0.3529 -17.0016 0.0003 0.3529 -17.6471 -0.0000 -40.2059 1.0e+004* -0.4118 1.5882 -0.3529 -57.1176 3.0000 35.2941 2.0018 -0.2941 -1.0082 0.0044 -0.0082 0.8824 -5.0032 -0.0520 0.6471 92.0002 -0.4118 1.6471 52.3529 0.0444 0.6471 -0.0012 -0.4118 -1.0011 0.6471 22.5882 19.2353 -0.5882 -0.6471 2.0339 0.0000 -50.1176 1.3529 2.8824 -7.0000 -20.5882 -0.x1=G1-column mean of G1 -0.2353 -0.0930 0.3529 3.7647 -0.1176 4.0000 25.2353 -0.0016 0.0003 -0.0028 -0.0882 6.3529 62.0012 -0.3529 -7.2353 -0.5882 -0.0007 -0.0000 W1=x1’*x1 0.3529 -0.4118 -2.0000 -25.0002 0.7647 -0.1565 -0.0882 2.1176 5.0520 0.3529 -27.0000 55.0000 -30.0476 0.5882 -0.4118 -1.0345 3.6471 2.5882 -10.3529 -0.6471 0.0165 -0.5882 -0.0007 0.0011 2.2353 -0.0008 -0.8824 0.0284 0.7647 0.1176 -0.4118 0.1176 -15.2353 -0.4118 -10.3529 0.6471 -17.3529 -1.2353 -17.2353 -0.0005 0.5882 -0.3950 0.2059 0.0000 -45.0000 20.3529 0.1176 -5.0370 -0.2353 0.2059 -1.0345 0.1565 -0.0930 -0.3529 0.4118 1.2059 -1.5882 -10.9750 -6- .0024 0.1694 0.0000 25.2353 -0.6471 -1.0002 0.5882 -0.0000 5.0003 -0.0002 -0.4118 -3.8824 -5.4118 -2.2941 0.2353 -0.0095 0.0018 0.0050 0.0000 -30.0370 0.0882 -4.8824 -2.7059 -1.8824 1.8824 4.2941 0.0000 145.0050 0.7059 -1.0157 0.0444 -0.6471 -1.1176 3.6471 -17.3529 0.1054 -0.5882 2.2353 -0.6471 -0.5882 -0.7059 1.0005 0.3950 0.0016 0.4118 -10.0339 0.5882 -0.0011 0.5882 -0.3529 0.4706 0.6471 -57.0000 -60.2941 -1.2941 -4.6471 2.3529 -1.2353 -0.0000 5.8824 -5.0011 0.0476 0.0157 -0.2941 1.5882 -2.0016 0.5882 0.1054 -0.5882 29.0044 0.7647 0.2941 -1.0882 6.0008 1.7059 -0.0032 -0.6471 0.

9500 -6.0002 0.4000 0.0000 -1.6000 -0.0002 0.0001 -0.0083 0.6000 0.6500 -0.6000 -0.9500 228.2500 44.5000 2.0e+005 * -0.0000 9.0002 -0.0000 6.3500 -0.7500 -4.6667 0.2500 24.7500 8.3500 1.0037 0.0000 0.0026 0.5000 2.0379 0.7500 68.0000 -0.7500 5.5000 6.0000 -1.9500 1.3333 -98.9500 -4.7500 -4.2500 -185.3500 -0.6000 3.5000 0.2500 1.6500 -0.0000 -0.0067 -0.0037 -0.7500 -71.0500 -1.6500 74.2500 -2.7500 -4.0000 30.0042 0.0598 -0.0008 0.6667 -98.0037 -0.0023 1.9500 -1.0311 0.0003 0.2500 -1.1167 3.4000 -0.0000 -1.3500 -0.0000 -4.0000 1.2500 -0.1796 0.0000 -1.0022 0.0164 -0.6500 0.0164 0.0002 0.3333 -0.3500 -0.0060 -0.2500 4.7500 -60.6667 -43.5000 0.0002 0.2500 7.3333 -98.0000 7.7500 -61.5000 -1.0000 9.2500 -35.0038 0.0335 0.2500 -115.0311 -0.4000 -0.7500 38.0018 0.0000 -1.5000 -1.6500 -0.9500 -4.0500 0.0000 9.6000 -0.8833 2.6667 .0000 -40.6500 0.x2=G2.4000 -0.1167 2.0249 0.0000 -11.3333 -8.4000 0.0000 0.0001 -0.5000 -1.2500 5.6000 0.0002 -0.7500 -56.0000 10.0000 49.1167 0 -5.5833 -9.7500 54.3333 36.3333 0.2500 -95.0000 0.2500 -51.0335 -0.0000 29.2500 6.0500 5.0000 -1.0009 -0.0013 0.6500 -0.0500 -1.column mean of G3 30.6667 0.6667 -1.0022 -0.0000 -1.0000 -1.7500 148.2500 -45.0379 0.2500 104.7500 -15.3333 51.0083 0.9500 2.0343 -0.2500 -60.6500 2.0003 0.0000 -40.0002 0.0000 -5.7500 -36.3333 -1.6000 -1.2500 14.2500 34.4000 -0.7500 -66.7500 134.5000 -1.3333 -13.7500 -51.0023 0.3333 -1.5000 -1.5000 1.3333 -1.9500 4.0500 1.6667 51.6000 0.7500 34.0000 -11.0013 1.9500 -2.3500 -0.7500 -5.2165 0.5000 -1.6000 -1.0003 0.0000 -0.7500 68.7500 W2=x2’*x2 0.column mean of G2 -41.5000 -2.6667 121.7500 4.6667 0.5000 -1.5000 -2.0060 0.0000 1.2500 -56.0008 0.7500 -46.5000 0.7500 -15.0003 0.0500 -5.0500 5.4000 0.7500 -0.4000 0.9500 -1.3500 1.2500 3.2500 -8.0000 -0.7500 -1.4000 0.0000 -11.7500 -4.0000 -1.5000 -1.3196 x3=G3.3333 2.0018 0.2500 98.6500 -0.0500 6.0000 -0.0000 -7- -23.0036 -0.7500 -5.6000 0.0000 -1.0001 0.2500 4.3333 1.0026 0.5000 -1.0500 -0.6500 0.4000 -0.0000 -21.3500 -0.0000 0.2500 -31.0042 -0.3500 0.0000 10.1796 -0.2500 -61.2500 4.6667 121.5000 0.2500 -0.0249 0.6667 -1.0037 1.9500 5.2500 -61.0000 -1.0000 -1.0343 -0.7500 104.6500 0.5000 -1.0000 1.1167 2.0067 -0.

0393 0.6389 61.0729 -0.0417 0.3010 -0.5065 -20.0037 0.1373 6.5065 19.0102 0.3611 151.9775 7.0005 0.0004 -0.7206 -4.0008 0.0010 0.0729 -0.0002 0.5713 0.0001 0.0177 0.0447 0.9775 2.3010 -0.4464 -0.1373 6.0006 0.0015 0.0056 1.1180 -0.5065 99.0080 -0.4287 3.6389 31.7206 -4.0177 0.0033 0.4700 -0.4464 -1.1373 -3.0447 -0.0080 0.9775 7.0225 78.7206 -4.2794 4.1534 -0.8627 6.0002 -0.1287 0.0016 2.7206 0.0729 19.0191 0.0160 1.1373 6.5713 -0.1850 0.0002 -0.4287 -1.2206 2.0008 0.0567 -0.0049 -0.9775 -1.0029 1.2794 5.0225 0.0002 0.5713 -1.0002 -0.0001 0.5065 19.0417 -0.9775 -2.0011 0.5536 -8- -2.5065 129.3611 11.4287 0.0308 0.8627 6.0139 0.0002 -0.0049 -0.9775 -8.7206 -6.3611 .0166 0.0103 1.0013 -0.0729 -0.5065 19.0345 0.6389 -23.6389 -53.9775 -1.4287 -0.1111 -0.0536 0.1373 6.0003 0.0760 0.3611 -23.0729 160.2794 4.5713 0.1373 -3.5065 29.0152 -0.0021 0.6683 W=W1+W2+W3 0.0009 0.0082 0.5536 -1.4464 -0.0139 0.9775 3.9775 1.0001 0.0033 -0.8627 6.4935 39.5713 -0.1373 -3.2567 -0.7206 -1.0595 0.5065 59.0030 0.0002 -0.0191 -0.0003 0.0020 0.0595 0.0003 0.0056 0.2567 0.9271 -0.0018 -0.4287 0.7794 0.0016 0.0002 0.4700 0.8627 26.0e+004* 1.0011 0.1373 -0.0003 0.3611 t1=G1.4287 -1.9271 0.7206 -4.0103 0.0033 -0.0393 6.0048 0.0729 -0.4464 -1.0308 0.5065 39.0037 -0.0010 -0.0063 -0.0006 0.2794 5.0225 -1.4287 -0.4287 0.0729 -0.0729 -0.5536 -1.8627 2.9271 0.0760 0.5713 -0.4464 1.0234 0.0002 0.5065 89.1850 -0.0729 -0.0729 -0.0048 0.0155 -0.9775 1.0001 0.0070 -0.0003 0.0009 -0.4287 0.7206 6.0010 0.0729 -0.0030 -0.5065 9.0225 2.0729 -0.1180 0.1373 6.6389 41.3611 31.5536 -1.6389 -38.6389 -43.1111 0.4287 0.W3=x3’*x3 0.3611 11.4935 39.7206 1.2794 7.0155 0.0004 0.5065 39.7840 Column means of whole data 103.0234 0.0763 0.7794 2.5065 -20.1373 6.1373 36.0166 0.0020 0.5536 2.1283 -0.1534 -0.0536 -1.0225 1.column mean of whole data 6.7794 2.0729 -0.0160 0.5065 -0.5065 19.9775 3.0225 -0.6389 -18.0010 0.3611 -33.9271 -0.5536 0.3611 -13.3611 26.5536 13.5200 0.0345 0.5536 1.5713 -1.0006 0.0536 0.0337 0.4935 1.0070 0.0018 0.0021 0.0030 -0.0033 -0.0030 0.9464 -1.5536 -0.1373 6.0337 0.1373 -3.0e+005 * 0.0018 0.0003 -0.0225 -1.0029 0.0567 -0.0729 0.0018 0.0006 -0.4287 1.

2794 2.3611 -23.5536 -1.0046 -0.5065 7.0000 0.4935 -0.0046 0.0001 -0.0003 -0.0026 1.2794 0.7206 -0.0016 0.6389 11.0002 0.2794 -3.0194 -0.0002 -0.0729 9.0511 -0.0113 -0.0396 0.0062 t2=G2.3611 -53.0225 -4.4287 0.5536 -0.5536 -0.5536 0.0139 0.0e+004* 0.0729 159.1373 16.3611 51.6389 111.0729 79.0001 0.0004 0.0002 0.4464 -1.1373 6.0225 7.0022 0.2794 7.0019 0.4287 -0.0002 -0.5536 -0.0010 -0.column mean of whole data -33.0000 -0.0062 1.0002 -0.0026 0.8627 16.7206 8.8173 -0.0002 0.0040 0.0082 0.6389 -43.0010 -0.0028 0.0133 -0.3611 -0.7206 -2.0729 -70.4464 -1.0333 0.5713 0.0080 0.4464 -1.5065 -0.4287 -0.5713 -0.5713 0.7206 4.5536 -1.5536 -1.1373 -13.0729 59.0139 0.9775 -4.0000 0.7206 1.7206 0.9775 -4.1373 6.1373 6.5065 -0.0019 -0.3415 0.4287 0.9775 -5.1373 6.0442 -0.0005 0.4935 2.7206 -4.5713 T2=t2’*t2 0.2794 -2.2465 -9- 0.0304 0.0729 129.0007 1.0004 0.0008 0.1373 6.3611 21.4935 1.1880 -0.3771 0.0729 -35.0022 0.0016 0.0225 -4.7206 6.0729 29.0005 0.1373 6.5536 -1.0729 -160.4935 1.0002 0.5713 -0.6389 -18.9775 -1.0061 0.4464 -1.0499 -0.0070 0.3611 161.0729 39.5065 -0.9775 -0.1373 -3.5536 -1.4287 0.0354 -0.0396 0.0045 0.5065 -0.0333 0.0087 0.0046 -0.5065 -1.0010 -0.6389 -48.0001 -0.0008 0.0225 4.4709 -0.0874 0.5065 1.4143 0.0004 0.1373 6.0008 0.2463 0.3611 -48.5065 1.0002 -0.0000 -0.0113 0.4935 -0.1880 -0.3611 -43.9775 -0.1373 6.0729 -35.4287 -1.0464 0.0029 -0.0000 0.0729 -90.8627 16.0048 0.0008 0.3611 81.0004 0.0003 0.3611 -58.5065 3.0511 0.5065 -0.0002 0.8627 6.1373 1.0092 4.3611 -38.0729 49.T1=t1'*t1 0.5536 1.4464 -0.4464 -1.5713 0.0225 -5.0045 0.0222 0.5713 -0.9271 59.9775 1.1373 36.0082 0.0001 0.5713 0.0002 0.8627 -3.4287 3.0700 -0.4464 -1.4287 -1.0044 0.0469 4.0225 5.0070 0.4287 -0.5536 -6.9271 -10.1373 56.0044 1.0001 0.0729 129.0092 0.9271 -20.0008 -0.5536 0.0196 0.0044 -0.0225 241.7206 3.0087 0.7206 3.7206 0.0024 0.0901 0.4143 -0.0003 0.9775 3.0007 0.0225 6.0225 4.2465 0.0000 0.0008 0.3611 81.2794 7.5713 -0.4709 0.0048 0.0046 0.7206 -2.9775 -0.4935 -0.0e+005* 0.0225 5.0001 0.0464 0.0442 -0.7206 0.3555 .5065 -0.0003 0.0133 -0.0469 0.0354 0.0729 99.8627 6.0028 -0.6389 -38.3771 0.0061 0.4287 0.0004 0.6389 -33.0080 -0.7206 7.6389 -48.0874 0.0729 69.5713 -0.5065 3.0901 -0.0196 0.4464 -0.0222 0.0225 4.0729 9.5065 -1.4935 -0.1373 -3.

0003 -0.8359 0.0049 -0.0203 0.0007 -0.0023 0.0210 0.4903 0.5536 0.0009 -0.0043 0.3035 0.0003 -0.9870 0.4464 -1.0089 0.0101 0.0220 0.3764 0.0704 0.0080 -0.2794 -1.0320 0.3035 0.5065 -0.0030 0.0011 0.0006 0.0062 0.0001 -0.0002 0.0007 0.0e+004* 1.0073 0.0005 0.0003 0.0870 -0.3516 0.0870 -0.8514 0.9271 -10.3611 -28.0729 -160.0017 0.0001 -0.9090 T=T1+T2+T3 0.0008 -0.0468 0.0002 -0.0171 0.0210 0.0073 0.0002 0.0e+004* 1.0001 -0.4287 1.0e+005* 0.0104 0.0301 0.0003 -0.0038 0.0238 0.0615 0.0014 -0.0003 -0.0225 -7.0320 0.9156 0.0007 0.8627 -3.0138 1.2794 -1.0244 -0.0005 -0.0072 0.1373 -3.3370 0.0028 -0.0171 0.3764 -0.4287 -0.0010 -0.0004 0.0023 0.6389 -63.0005 2.0850 0.3516 0.0002 0.4935 1.0003 -0.2794 4.5536 0.1464 -1.0704 0.1910 1.9870 0.0005 0.0001 0.t3=G3.8627 -53.0126 0.9271 59.0088 0.0001 0.0003 0.0023 1.9775 -1.0055 -0.8359 .0347 0.0063 -0.0018 -0.4336 0.0003 0.0007 -0.0013 0.0101 0.0203 0.0024 0.0206 5.6741 B=T-W 0.2403 -0.0118 1.2794 -12.0007 -0.0206 -0.0244 -0.0104 -0.4935 -1.0171 0.8627 -53.3611 31.6679 0.0002 0.1581 0.0038 0.0225 -7.0005 0.0001 0.0468 -0.0024 0.0426 0.2410 0.2794 -0.3370 0.2988 0.0043 -0.0615 0.0118 0.0055 0.0106 0.0850 0.0008 -0.0000 0.0001 0.3611 -33.0072 0.2403 0.0126 0.3611 16.0201 0.2410 0.0023 -0.5065 -1.0002 0.0729 -160.0009 -0.0225 -43.0004 0.0063 1.0024 -0.0201 0.0067 0.2941 0.0003 0.0426 0.0018 0.0928 0.0017 0.0002 -0.4935 0.0009 0.0055 0.6389 T3=t3’*t3 0.0009 0.0017 0.0089 -0.0012 0.1069 0.0225 -6.10 - .0024 0.0007 -0.0030 0.5536 0.4336 1.0006 0.4935 -0.0167 0.0347 0.0238 0.0088 0.0177 0.9271 -160.0028 -0.1910 1.0301 0.4467 0.0011 0.0004 -0.5536 -1.0055 0.0138 1.0928 0.2988 0.0010 0.0049 -0.column mean of whole data 16.5713 -1.0220 -0.0015 0.0088 0.0106 0.4903 0.0296 0.6353 0.1069 0.0073 -0.4287 -1.9775 3.0067 0.0002 -0.0000 0.2794 -3.0296 8.0017 0.1373 16.9271 59.8627 -1.0005 0.0073 0.4287 2.0013 0.0007 0.5713 0.

0110 0.0720 + 0.0107 0.2490 -0.0720 .8276 -0.0067 -0.4886 -0.0049i 0.3589 0.0146 8.0709i -0.3081 + 0.4065 -0.6799 7.0996 -0.5703 -0.1281 -0.0709i -0.1388 0.2671 0.0003 0.0226 .0.0059 -0.0316 0.0379 0.0837i -0.2001 -1.8854 -0.6898 0.0144 -0.1613 -0.0048 0.0817 -0.0025 0.0156i -0.6129 -0.1347 -0.0000i 0 0 0 0 -0.0156i -0.0000 0 0 0 0 The two highest eigen values are 1.0121 -0.0065 -0.0010 1.0115 0.0459i 0.0023 -0.3978 4.7333 -11.2882 20.8396 0.7510 0.2144 -0.0003 0.1616 -19.0065 + 0.4886 -0.0061i 0.8854 -0.3447 Now we have to find the eigen vectors and eigen values The eigen vectors are 0.0180 And the eigen values are 1.0256 + 0.6581 -0.0084 0.0965 0.0103 1.2508 -0.1092 .0184 0.0296 -0.6202 -0.7022 0.0026 2.8698 0 0 0 0 0 0 0 0 0.2650 .6923 -0.6193 -0.0.2697 0.0013 0.3081 .0.6918 0.6239 -0.0031 4.0352 -0.8133 0.9826 17.6898 0.0000i 0 0 0 0 -0.8728 12.4109 0.0000 + 0.1530 -0.2871 0.0873 24.8698 and 0.0113 -0.1352 0.1436 -0.2490 -0.1092 + 0.3488 -0.0236 0.0256 .0588 -0.4810 0 0 0 0 0 0 0 0 0.1163 0.0770 0.0837i -0.0812 0.8032 0.0049 0.2144 -0.1358 0.0458 0.0037 0.2781 -0.1347 -0.1678 0.0000 .4031 4.0092 -0.1273 -0.0000 .2039 0.0000 0 0 0 0 -0.3763 0.0000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.0352 -0.0681 -0.11 - 0 0 0 0 0 0 0 -0.0049i 0.0731 11.3488 -0.1388 0.0476 -0.0001 -0.0006 -0.0173 Eigen Vector2 0.0.8046 2.1612 0.0.1511 0.4810 Corresponding eigen vectors are Eigen Vector 1 0.3612 0.2443 -0.1021 2.0459i 0.0226 -0.8232 -5.0173 0.0179 -0.0110 0.1436 -0.2650 + 0.0.0003 0.8133 0.6226 0.0061i 0.0040 0.0040 0.1538 0.3398 -0.0391i 0.0527 0.5126 0.1119 -0.W-1B -0.4110 0.1613 -0.6566 -0.2461 -0.0391i 0.5703 + 0.

1347X7 .249X2 .0.0.0.Hence the Discriminant functions are formed as follows Z1 = 0.0352X1 .4886X3 .8854X5 .9193 _ Z2 = -1.1436X6 .0.0173X8 Z2 = 0.8459 _ Z2 = -0.2144X2 .0.0.011X1 + 0.7681 Classification of the objects is given as follows:- .004X4 + 0.0.1388X6 + 0.0226X8 For Group G1:_ Z1 = 3.8133X5 + 0.0.1613X7 .8336 _ Z2 = -2.3488X3 .0003X4 + 0.1794 For Group G3:_ Z1 = 2.0.0190 For Group G2:_ Z1 = 4.12 - .

2825005 1.6597517 0.912959 7.3318 4.155417 0.717832 3.9773 4.732139 7.19084 4.5649 -2.6861056 0.2034 -1.5 0 0 2 1.6225776 0.5 21 15 16 13 17 16 10 1 13 13 9 10 3 12 6 10 8 3 14 3 12 3 8 4 2 2 1 3 2 2 1 3 3 2 3 2 3 3 3 3 2 2 6 1 0 0 0 3 0 1 0 0 0 1 2 1 2 0 0 1 0 1 0 260 125 290 90 140 220 125 200 0 240 170 150 190 220 170 320 210 290 70 230 9 1 1 1 4 1 1 1 3 5 1 3 0 3 3 1 5 0 1 1 7 11 21 13 10 21 11 14 14 14 17 17 15 21 18 20 14 22 9 16 1 1 4 1 2 5 2 2 2 0 0 2 220 220 150 0 0 0 0 1 2 0 1 2.43565 0.5 2.819 -1.1706173 6.874937 0.526975 1.118187 2.179861 0.8021 4.0740464 0.969828 1.3413 0.1978 5.1151 -0.4781 1.7167305 1.507 2.3821 1.9348151 0.167047 6.6101 0.1245 -1.2674535 0.195 3.8791 -0.8935 -2.0341 -1.485226 2.4793 -2.8963 5.3893 -2.5937 4.2437897 0.526625 0.X1 X2 X3 110 110 110 110 110 110 110 110 100 130 100 110 140 100 110 100 110 2 6 1 1 1 3 2 2 2 3 3 2 3 3 1 3 2 70 110 100 110 110 110 110 110 100 120 110 160 120 140 90 100 120 110 110 110 120 120 100 50 50 100 Z1 Z2 70 105 55 65 45 90 40 55 90 120 140 35 230 110 25 110 60 Actual group 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3.9433113 4.56981 0.2167991 Distance from G2 2.3283 4.968894 1.347661 0.125561 2.8725 -0.1910683 4.772786 3.5 10.2014 -1.307871 0.518828 12.2598 3.4220067 0.747615 0.988311 0.9695 3.277986 2.908734 Pred.9932 3.0758 -1.127127 0.918603 1.8279 -1.127559 3.8790773 14.0002 -2.0651803 4.5 2 0 0 0 1.28626 1.923861 1.239213 5.7478 5.0385 -2.872641 6.1351337 3.603838 1.0014032 2.556398 7.4829 -2.893547 2.4659 4.079483 6.4257 3.7288 -0.2483 4.13 - .707006 0.1255895 1.079881 3.349231 3.913025 3.6266 5.381982 4.8887315 0.214798 0.760973 0.1629213 1.154 -0.9591 4.525028 1.558674 9.5256 2.0314166 0.2260817 0.783154 2.373563 2.7693 Distance from G1 0.9102 4.54778 0.7613 3.863076 3.7008 3.2606 5.181792 6.7668563 1.0961 -2.3138 3.0967713 1.287342 1 1 1 3 3 3 .3547 -2.7 12 12 12 13 10 1 -1.3819844 0.073985 5.985857 0.401665 1.881702 1.4078173 0.6227 -1.545119 3.3115996 3.371 4.0217 -1.5835 5.3528 -2.698 2.0274 -0.693384 0.7058 -2.299188 1.9670183 1. Group 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 4.511523 1.8904 -1.119592 0.662258 4.9306 -2.458449 4.1386 -2.1667627 0.354 1.931304 2.179274 0.0376752 2.065343 1.090529 2.9845 4.5 21 12 15 13.751399 4.6916 -1.5 0 4 3 0 3 1 10.866 4.9975 3.827815 Distance from G3 0.77439 0.3743995 2.653369 2 2 2 2 3 2 2 2 2 2 1 2 1 2 2 2 1 1 2 2 3.5285 -2.223088 6.3166 2.2266 5 14 2 12 7 3 13 11 7 12 6 13 9 7 2 3 12 3 15 3 320 30 35 20 160 30 30 25 100 190 60 160 40 130 90 45 240 35 40 55 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 12 11 6 0 0 1 35 45 95 15 50 110 3 3 3 3 3 3 X4 X5 X6 X7 X8 2 2 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 180 290 180 180 180 250 260 180 220 170 140 200 190 200 140 200 200 1.2023 3.6633 -0.4549 3.8702339 0.971586 3.1052 5.042785 5.441982 0.68268 1.629545 5.5634447 0.4180283 1.133003 3.624525 5.4467 -1.503568 6.9071 -0.3412 2.3752 3.6562011 2.950084 3.282997 5.208364 3.8676 5.018998 1.8460409 7.5324 -1.4386 6.721785 1.118669 15.7326 -0.4841 3.5 17 12 12 15 11.7092 -0.1166497 0.3153 -0.9215 -1.904391 3.9379 3.386589 5.265705 0.4895 3.4707 -1.2851 5.

n1’ n1’ n2’ n2. Let us suppose. The second portion in each group is used for the calculation of error rate estimation 3) The U method or Cross Validation or Jack Knife method – If there are two groups. then the misclassification rate is 0. Predicted membership G1 G2 n1 0 0 n2 Actual membership G1 G2 Let us suppose n1’ is the misclassified observation in G1 and n2’ is the misclassified observation in G2 as shown below. the confusion matrix can be formed as follows:Actual membership G1 G2 G3 Predicted membership G1 G2 15 2 4 15 3 0 G3 0 1 3 APER = (2+4+1+3)/43 = 10/43 = 23.Calculation of misclassification error 1) Re-substitution method – In this method whole sample data is considered.256 % 2) Hold out method – divide the samples into two divisions in each group. there are two groups G1 and G2 and n1. n2 be the number of observations in two groups respectively.n2’ Actual membership G1 G2 The error rate is calculated as follows:Error rate = (n1’+ n2’)  (n1 + n2) The above type of tables are called confusion matrix.14 - . Predicted membership G1 G2 n1. After substituting the data values in the discriminant function. One portion of each group is used for the classification purpose. For the given example. at a time leave one element in each group and with remaining of that group . The available group data is re-substituted to calculate the APperent Error Rate (APER). if the discriminant function discriminates as shown in the table below.

if Z1 < Z2 Z1 – Z2. thus M follows F distribution with degrees of freedom p. It was shown that the distance D2 depends upon a test statistic M. Do this until all the elements in the group are subjected to misclassification procedure. The distance is calculated as follows:D2 = Z2 – Z1. leave one element and do the classification for the remaining elements with the all the elements of the other group the nthe left out element is used for misclassification test. which is used mainly for two-group problem. . if Z2 < Z1 If the distance D2 is higher.15 - . If the value of M is significant at that point then the discriminant function is significant otherwise not. Go to the second group. One useful test is Mahalonobis’ D2 test statistic. Then use the left out element for the error rate calculation. The centroid for each group is arrived by substituting the mean values of each independent variable in the discriminant function. then it means that the discriminant function is discriminating effectively. The expression of M is as follows: M= n n 2 ( n1 + n 2 − p − 1) D 2 ( n1 + n )( n1 + n − 2) p In this test. Here D2 is the generalized distance between the centroids of the two groups. M is tested for a particular ‘α’ value. Continue until all the elements are subjected to the misclassification procedure. The corresponding Z1 and Z2 values are the centroid of the groups.and with the other elements of the other group. do the classification. Here n1 and n2 are the number of observations in the two groups and p is the number of independent variables. Statistical Tests: To identify whether the developed discriminant functions discriminate properly. we have some tests to solve our purpose. n1+n2-p-1.

But if there are more than two groups i. Now test the V – V1 value is significant or not using degrees of freedom (p-1) (k-2).½ (p+k)} ∑ j =1 ln(1+λj) The λ values are the eigen values of the W-1B matrix. This is done by using χ2 distribution with degrees of freedom p (k-1) for particular ‘α’ value. number of discriminant function is more than one. First calculate the V1 value. then it is possible to test the significance of the discriminant function by the above test. . If V – V1 value is not significant then it means that first discriminant function corresponding to the highest eigen value is significant and other discriminant functions are not significant. This statistic is given by r V = {(n-1) . This test is cariied out in the following manner. This will help us to retain the significant discriminant functions. If it is significant. If the V value is significant then the next job is to identify how many discriminant functions are significant out of total discriminant functions. V1 = {(n-1) . then the significance of the next discriminant function is checked.16 - . The test statistic V is tested to identify whether it is significant or not.e. If there are two groups (number of discriminant function is only one).½ (p+k)} ln(1+λ1) Subtract V1 from V.Also we can say that there is not much difference between the two groups. This is the value corresponding to the highest eigen value. then the Bartlet’s χ2 test statistic comes as handy.

V2.V1 V V – V1 If V – V1 is significant then it means that there are some more discriminant functions which may contribute to the significance of the discriminate the groups.V1 . If it is significant then the second discriminant function is significant and also we should proceed to explore whether other significant discriminant functions are there or not.17 - . This whole process is given in the flowchart as follows: - .V2 value is not significant then come to the conclusion that the second discriminant function is significant in addition to the first one. V2 = {(n-1) – 1 (p+k)} ln(1+λ2) 2 Now calculate V . Test the significance of this value for the degrees of freedom (p-2)(k-3).V1 . Stop the procedure. If the V . Now V2 is calculated corresponding to the second highest eigen value as follows.

Calculate V i=1 i=i+1 CalculateVi V = V – V1 Subtract V1 from V Discriminant function of λi value is significant Is V.Vi Significant ? Discriminant function Zi is significant in addition to the previous discriminant function if any.18 - . Stop .

The new centroid is also calculated according to the discriminant loadings. For first Discriminant function. and then go for the calculation of the discriminant loadings for the discriminant problem. The approximate F-values for each discriminant function can be obtained as follows. The discriminant loading is the cosine of the angle between the attribute and the discriminant axes. The univariate F-value associated with a particular variable can be calculated as follows. Step 3: Stretch the each group centroid also by multiplying it with the approximate Fvalues for the different discriminant functions. F= (n − k ) × (k − 1) Highest eigen value Thus the F value for each discriminant function is calculated.19 - . O. Next draw the vectors from the origin. Then we know the positions of the attributes in which quadrant they lie.“ Step 2: Calculate the Wilks’ Lambda ( Λ) and also univariate F-value associated with each variable.   (n − k ) 1 Fi =   Wilks ' Λ −1  × (k −1) i   Stretch the loadings for each variable by multiplying it with the univariate F-value associated with the particular variable. .Interpretation of the attributes with respect to Discriminant axes Labeling the discriminant space:For the interpretation we can follow the following steps: Step 1: First determine how many discriminant functions to retain.

The mean corrected sum-of-squares and cross-products matrix (S) can be obtained by pre-multiplying the mean corrected data matrix by its transpose. (a) Rescaling the discriminant weights Wj* = C*Wj Where Wj is the discriminant weights and Wj* are rescaled discriminant weights. 1^j = R*Wj* Where R is the correlation matrix.Step 1: As we have already seen how to decide about the number of discriminant function to retain. Now let us see how the discriminant loadings are calculated. Covariance matrix is obtained by dividing each element of the mean corrected sum-ofsquares and cross-products matrix (S) by n-1. C contains the square roots of the diagonal elements of total sample variance-covariance matrix. For the given example (a) Cov_mat = (S)/42 .20 - . the sample size less 1. R can be obtained as follows R = 1/(n-1)*(D-1/2SD-1/2) D-1/2 contains the reciprocals of the standard deviations of the variables in original data matrix. (b) Calculation of discriminant loadings.

5996 -0.4693 0.0 0 1 7.8 0 2  4  0 .1 1 1 5.4 0 4 8 0  0  1 .0 0 0 7.3044 -0.2 0 7 2 0 .0 2 80    0 .0 .0 0 0 4.0 .0 2 0 90 .7 0 0 0 4 0 0 0    0 0 0 0 1 .3 7 0 3 0 0    0 0 0 0 0 0 4 .0 0 1 50 .0 0 0 2.0 5 6 80 .0 0 0 20 .2 2 2 4 0 0 0 0 0 0    0 0 0 .8 0 6 6 0 0 0    0 0 0 0 0 4 .6064 0.0 .4 0 81 Square Roots of diagonal elemnts of variance .0 .0 0 0 2.3944 -0.0 2 4 7 0 .2621 -0.0 0 1 2 0 .0 1 9 1 0 .0 0 5 50 .0 0 1 20 .0 .0 0 2 2 0 .0 0 0 4 0 .0 4 0 6 0 .0 0 0 2 0 .0 0 0 90 .0 0 0 20 .0 .0 0 5 5 0 .0 1 5 9   0 .21 - .0 1 3 20 .0 1 3 2 0 .8 0 2 40 .1.2125 0.3 9 3 2  0 W1* = C*(Eigen vector 1) W2* = C*(Eigen vector 2) 0.0 2 8 0 4 .0 1 0 4.0 .0243 1.0 1 3 2 0 .0 0 0 0 0 .0 5 2 3.0 0 0 90 .3179 1.0 .0 1 5 90 .2 0 2 4  0 .0 0 0 00 .5 7 3 8 0 .0 5 6  8  0 .2816 -0.0 0 0 6 0 .5 7 3 8 0 .0 5 2 3 0 .0 0 1 1 0 .0 0 1 7 0 .0 2 4 7 0 .1 1 1 5   0 .0 0 0 20 .0 .0 0 5 50 .3 5 2 2 0 .0 0 1 2 0 .0 0 5 50 .0 .0 1 0 4 0 .covariance matrix(C) 0 0 0 0 0 0   1 9 .0e+003*  0 .0 0 0 6.2 0 2 40 .0 1 3 2 0 .0 .0 0 1 2 6 .0 4 0 6    0 .0 0 1 1.0 0 2 20 .2 0 7 2 0 .8 0 7 3 0 0 0 0 0    0 0 0 7 9 .6278 .6824 -0.0 0 3 3.3 7 6 50 .5 7 4 4 0    0 0 0 0 0 0 6 6 .

0 2 9 4.0 .0 2 2 6 0 .6 0 7 5.2 8 5 1.0 .2 0 2 2 0 .6 2 6 5 0 .2 0 1 7  0 .3 5 0 90 .0 .0 .1 6 14  0 .7379 -1.0 9 4 21 .6 2 6 5.0 .9 3 7 2.0 9 3 41 .2 8 5 11 .0 .3 0 0 8   0 .0 3 9 20 .0 0 0 0 0 .0 .22 - .0 .0 2 9 4 0 .0 0 0 00 .0 0 0 0 0 .0 0 0 0.3 9 6 00 .1 5 2 8 0 .0 .0 .0 6 9 1.0 0 0 0.0 1 9 00 .2478 0 0 0 0 0 0 0 0 0.1479 -0.0 .1 7 1 6 0 .0 2 2 6.0 0 0 0.2 1 9 8.3 0 0 8 0 .2204 0 0 0 0 0 0 0 0 0.1 3 9 2 0 .0 3 9 21 .0 6 9 1.0 .3 9 6 0 0 .1234 0.1 5 2 8.0 .0129 0 0 0 0 0 0 0 0 0.5 1 5 20 .0 1 9 0 1 .6 0 7 50 .0 .3 6 2 1 0 .3 0 6 0 0 .2 0 1 70 .0 0 0 1 0 .0 .0670 .2349 0 0 0 0 0 0 0 0 0.1 5 6 30 .4468 1^2 = R*W2* -0.0 0 0 0 0 .6160 -1.3 8 9 50 .3 6 2 1 0 .0 0 0 0 Now the discriminant loadings can be arrived 1^1 = R*W1* 0.5558 0 0 0 0 0 0 0 0 0.9 3 7 2    0 .8185 0 0 0 0 0 0 0 1.2 0 2 21 .0527 0 0 0 0 0 0 0 0 0.3 8 9 5 0 .5 1 5 2.4366 0.5037 (b)D-1/2 matrix 0.5 0 2 90 .0151 R = 1/(n-1)*(D-1/2SD-1/2) = 1 .2 1 9 81 .1 6 1 4 0 .5 0 2 9    0 .0 .0 0 0 1 0 .1 3 9 20 .1 7 1 6.0 9 4 2 0 .3 0 6 0.1 5 6  3  0 .0.0 9 3 4    0 .3 5 0 9 0 .

Let us suppose.5882 -0.1)-Mn1(1) -0.1029 Variable contribution In the previous section.5882 -0.5882 -10.3406 0. then it contributes more in the discrimination process. These discriminant loadings will help us to know about the contribution of the variables in discriminating the objects.5882 -0.4291 -0. Discriminant loadings are the correlation between the discriminant function and the corresponding variable.5882 -0. we have seen that how discriminant loadings are calculated.5882 -0.5882 . if a variable attached to the first discriminant function.23 - .-0.5882 -0.5882 -0.1957 0.5882 19.5882 -0.5392 0. provided the discriminant function represent the more variation compare to other discriminant function.5882 -10.5065 -0.1371 0.5882 29.3391 0.4118 -10.2443 -0.4118 -10.4833 -0.2561 0. Step 2: Calculation of Wilks’ Lambda ( Λ) For variable X1 G1(:.2293 -0.5882 -0.5882 -0.

1)-Mn_whole(1)) = 6.4631e+003 T2= (G2(:.6941e+003 G2(:.5815e+004 Wilks’ lambda = W/T = 0.9988e+003 T2= (G3(:.1)-Mn_whole(1))'*(G1(:. T1 = G1(:.1)-Mn_whole(1))= 2.1)-Mn_whole(1)) = 6.3531e+003 T = T1 + T2 +T3 = 1.1)-Mn_whole(1))'*(G2(:.24 - .1)-Mn2(1) -41 -1 -11 -1 -1 -1 -1 -1 -11 9 -1 49 9 29 -21 -11 9 -1 -1 -1 w2= 5980 G3(:.8140 .1)-Mn3(1) 30 30 10 -40 -40 10 w3 = 5200 W=w1+w2+w3 = 12874 Similarly.1)-Mn_whole(1))'*(G3(:.w1= 1.

7783 0.9293 0.7864 0.52 0.43 1.24 3.25 - .9882 0. for other variables. The stretched discriminant loadings are given in the following table.697 1.Similarly.9293 0.7783 0.57 0. These are given in the following table.8140 0.86 5.9882 0. Variable X1 X2 X3 X4 X5 X6 X7 X8 Wilks’ lambda 0.8140 0.9636 The univariate F-values can also be calculated as mentioned earlier. Variable X1 X2 X3 X4 X5 X6 X7 X8 Wilks’ lambda 0. .8381 0.7864 0.8381 0.9125 0.9636 Univariate F-value 4.918 5.9125 0.755 Now the previously calculated discriminant loadings are stretched by multiplying them with the respective variable’s univariate F-value. Wilks’ lambda can be calculated.

38927 -0.0207 5.0049 .0055 1.0025 0.Variable X1 X2 X3 X4 X5 X6 X7 X8 Stretched Discriminant Loadings Function 1 Function 2 2.0004 -0.01608 -1.173122 -3.1896 0.0210 0.2595 1.0001 0.84131 -0.6964 1.7255 -0.0037 -0.3889 -13.0954 0.86554 -0.3088 1.1176 85.9500 91.1442 0.0e+003 * 0.0000 111.0015 -0.371336 0.0951 6.0001 0.0098 1.0003 0.2500 15.1896 1.0164 0.0207 -0.0602 0.4178 0.0012 0.2353 197.0278 >> s=x'*x 1.1536 -0.0001 -0.0005 -0.0006 0.0104 -0.0001 0.2500 7.5882 2.2941 14.0093 0.8264 0.0420 0.0002 -0.0046 0.6000 0.6245 cov_mat=s/2 1.0037 -0.0074 0.0005 0.9275 13.0001 0.3333 1.0074 -0.0003 0.3333 98.0098 1.0007 0.3529 1.0683 0.0020 0.0010 0.0037 0.0010 -0.0006 -0.0019 -0.1713 -0.0349 0.0758 0.0000 58.8627 -0.0000 2.2604 -62.0015 0.0025 0.029616 0.1008 0.0001 0.4178 0.0683 0.0164 0.041876 -1.1167 10.0003 -0.2883 0.8696 0.1624 37.0007 -0.7500 90.6389 7.0028 0.3333 x= 6.4369 -3.5000 2.3333 1.0000 5.2794 -2.0e+003 * 0.0164 0.0000 -0.4229 25.6471 1.0093 0.0420 0.0065 0.0001 0.6450 0.0062 0.1602 -0.0349 0.1008 0.653271 2.5882 8.0002 -0.07769 Centroid Discriminant Loadings G= 110.0003 -0.750295 0.262958 -1.26 - .07182 0.99526 0.3018 0.444583 0.0028 -0.0602 0.2089 0.1373 0.9706 0.0099 0.0055 -0.0012 0.8264 0.0225 -20.0002 0.0099 0.0342 0.0000 -0.2901 0.6500 185.0007 0.0007 -0.7554 2.2901 0.0000 2.3018 0.0020 -0.

0001 0.0037 0.0072 0 0 0 0 0 0 0 0 0.1738 0 0 0 0 0 0 0 0 0.6450 0.0110 0.0175 0.0007 -0.0082 0.0.0004 -0.1509 0.0004 0.8593 0 0 0 0 0 0 0 0 1.1509 0.0082 0.0019 -0.0104 -0.1613 -0.0082 0.0301 0.2490 -0.0210 0.4958 .0948 0.0040 0.0301 0.0173 ev2 = 0.0012 0.0012 0.9348 0.9132 0.3122 .0342 0.0370 -0.2089 c= 0.1805 -0.3695 0 0 0 0 0 0 0 0 54.3488 -0.0031 0.9132 0.6096 0 0 0 0 0 0 0 0 2.0352 -0.2144 -0.1436 -0.1347 -0.0046 0.4886 -0.8133 0.0163 0.0003 0.0049 0.1388 0.0003 0.0006 0.0226 wstar1=c*ev1 wstar1 = 0.0948 0.0504 0.0049 2.0049 0.0006 0.1321 0.0504 12.8854 -0.27 - 0.0175 0.0002 -0.7536 0 0 0 0 0 0 0 0 17.0001 0.1486 0 0 0 0 0 0 0 0 0.6699 ev1 = 0.

7536 17.6979 -0.6486 0.28 - 0.9940 0.4106 -0.6096 2.1486 0.7072 0.5825 0.9761 1.7591 0.6486 -0.0566 R= 1.7295 0 0 0 0 0 0 0 0 2.5300 0.3993 STD (g) 12.9850 0.9971 0.9998 0.5158 -0.3497 0 0 0 0 0 0 0 0 0.8938 0.6342 0.6404 0 0 0 0 0 0 0 0 0.6699 d-1/2 = 0.6342 0.5189 1.5189 -0.0185 0 0 0 0 0 0 0 0 1.4605 -0.7072 0.9952 0.3969 0.5719 1.7064 0 0 0 0 0 0 0 0 0.9562 0.8593 1.9980 0.5703 0 0 0 0 0 0 0 0 0.5158 0.9562 0.9998 -0.7591 l*1 = R* W*1 0.1738 0.9971 -0.1289 -0.9971 0.0072 0.9715 1.6979 0.0002 0.0002 .9866 0.2362 -0.9999 0.0002 0.9866 1.9982 0.4605 0.5397 -0.9980 0.9971 0.0833 0 0 0 0 0 0 0 0 6.6240 0.0004 0.5300 0.0044 0.0006 -0.4724 .0.9761 0.2167 0.5825 0.3695 54.0001 0.7048 -0.5719 -0.2829 -0.9982 0.7505 0.7048 -0.9998 0.7505 0.5705 -0.4227 -0.3057 >> wstar2=c*ev2 0.6240 -0.9715 l*2 = R* W*2 -0.9999 -0.0319 -0.9952 0.5705 1.9850 0.9940 0.

5726 -84.0400 -0.021794 -3.9412 0.8575 0.9745 0.4935 0.0729 -0.4724 92.2794 0.9719 -0.8198 0.9719 l*1 l*2 0.227 1.200611 1.4277 12.15915 -30.0649 2.29 - .04 -1.83275 -49.9432 -0.3905 l*1* mean l*2* mean Mean 103.8938 -0.0.0826 -0.4991 6.0317 -0.29075 0.6796 7.9698 0.5248 0.8575 -0.4287 0.9412 -0.9745 0.3905 76.513983 0.8198 -0.49857 -5.9698 -0.4277 -0.0317 1.04292 160.0405 -0.0225 0.863 0.0826 2.4991 -0.5536 0.3611 0.9432 0.50493 78.6 .049249 13.5248 131.