
Eigenvalue Equations

4.1 Introduction
Equations of the type

Ax = λx    (4.1)

often occur in practice, for example in the analysis of oscillatory systems. We have to find a vector x which, when multiplied by A, yields a scalar multiple of itself. This multiple is called an 'eigenvalue' or 'characteristic value' of A and we shall see there are N of these for a matrix of order N. Physically they might represent frequencies of oscillation. There are also N vectors xᵢ, one associated with each of the eigenvalues λᵢ. These are called 'eigenvectors' or 'characteristic vectors'. Physically they might represent the mode shapes of oscillation.
A specific example of eq. 4.1 might be

[ 16  -24   18 ] {x1}     {x1}
[  3   -2    0 ] {x2} = λ {x2}    (4.2)
[ -9   18  -17 ] {x3}     {x3}
which can be rewritten

[ 16-λ    -24      18   ] {x1}   {0}
[   3    -2-λ       0   ] {x2} = {0}    (4.3)
[  -9     18     -17-λ  ] {x3}   {0}
A non-trivial solution of this set of linear simultaneous equations is only possible if the determinant of the coefficients is zero

| 16-λ    -24      18   |
|   3    -2-λ       0   | = 0    (4.4)
|  -9     18     -17-λ  |

Expanding the determinant gives

λ³ + 3λ² - 36λ + 32 = 0    (4.5)

which is called the 'characteristic equation'. Clearly one way of solving eigenvalue equations would be to reduce them to an Nth degree characteristic equation and use the methods of the previous chapter to find its roots. This is sometimes done, perhaps as part of a total solution process, but on its own is not usually the best means of solving eigenvalue equations.
In the case of eq. 4.5 the characteristic equation has simple factors

(λ - 4)(λ - 1)(λ + 8) = 0    (4.6)

and so the eigenvalues of our matrix are 4, 1 and -8.
Note that for arbitrary matrices A the characteristic polynomial is likely to yield
imaginary as well as real roots. We shall restrict our discussion to matrices with real
eigenvalues, and indeed physical constraints will often mean that matrices of interest are symmetric and 'positive definite' (see Chapter 2), in which case all the eigenvalues are real and positive.
Having found an eigenvalue,its associatedeigenvector can, in principle, be found
by solving a set of linear simultaneous equations. For example, for the case of λ = 1 in eq. 4.6

[ 15  -24   18 ] {x1}   {0}
[  3   -3    0 ] {x2} = {0}    (4.7)
[ -9   18  -18 ] {x3}   {0}
Carrying out the first stageof a Gaussianeliminationgives

[" -;:]{',}
As we knew already from the zero determinant the system of equations exhibits linear
dependence and all we can say is that the ratio of x2 to x3 is 2:1. Similarly we find the ratio of x1 to x2 is 1:1 and so any vector with the ratios x1:x2:x3 = 2:2:1 is an eigenvector associated with the eigenvalue λ = 1.

4.2 Orthogonality and normalisation of eigenvectors

In later sections we shall calculate numerically the eigenvalues and eigenvectors of the real, symmetric matrix

    [ 4.0  0.5  0.0 ]
A = [ 0.5  4.0  0.5 ]    (4.9)
    [ 0.0  0.5  4.0 ]

and we shall see that its three eigenvalues are 4 + 1/√2, 4 and 4 - 1/√2 respectively.

The corresponding eigenvectors will be found to be

x(1)ᵀ = { 1/2    1/√2   1/2 }
x(2)ᵀ = { 1/√2    0    -1/√2 }    (4.10)
x(3)ᵀ = { 1/2   -1/√2   1/2 }

respectively.
These vectors are said to exhibit 'orthogonality' one to the other. That is, the dot product x(1)ᵀx(1) ≠ 0 but the dot products x(1)ᵀx(2) and x(1)ᵀx(3) are both zero. This property will be found to be possessed by the eigenvectors of all symmetric matrices. We can also note that the three eigenvectors in eq. 4.10 have been scaled or 'normalised' such that Σxᵢ² = 1. This is convenient because it means that the dot products like x(1)ᵀx(1) are equal to one. Other normalisations are possible, for example so as to make |xᵢ|max = 1.
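
The orthogonality and normalisation properties are easy to verify numerically. The following short sketch (written in Python with NumPy rather than in the Fortran of the book's library, and using numpy.linalg.eigh purely as an independent check) forms the matrix of eq. 4.9 and confirms that the dot products x(i)ᵀx(j) vanish for i ≠ j while x(i)ᵀx(i) = 1.

# Illustrative sketch only, not part of the book's Fortran library.
import numpy as np

A = np.array([[4.0, 0.5, 0.0],
              [0.5, 4.0, 0.5],
              [0.0, 0.5, 4.0]])

lam, X = np.linalg.eigh(A)   # columns of X are unit eigenvectors
print(lam)                   # approx 4 - 1/sqrt(2), 4, 4 + 1/sqrt(2)
print(X.T @ X)               # identity: x(i)'x(i) = 1, x(i)'x(j) = 0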

4.3 Solution methods for eigenvalue equations

Because of the presence of the unknown vector x on both sides of eq. 4.1 we can see that solution methods for eigenvalue problems will be essentially iterative in character. We have already seen one such method, which involved finding the roots of the characteristic polynomial. A second class of methods comprises 'transformation methods' in which the matrix A in eq. 4.1 is iteratively transformed into a new matrix, say A*, which has the same eigenvalues as A. However, these eigenvalues are easier to compute than the eigenvalues of the original matrix. A third class of methods comprises 'vector iterative' methods which are perhaps the most obvious of all. Just as we did in Chapter 3 for iterative substitution in the solution of nonlinear equations, a guess is made for x on the left-hand side of eq. 4.1, the product Ax is formed, and compared with the right-hand side. The guess is then iteratively adjusted until agreement is reached. In the following sections we shall deal with these classes of methods in reverse order, beginning with vector iteration.

4.4 Vector iteration


The procedure is easily described and programmed, and is sometimes called the 'power' method.
Going back to eq. 4.2, which was

[ 16  -24   18 ] {x1}     {x1}
[  3   -2    0 ] {x2} = λ {x2}    (4.2)
[ -9   18  -17 ] {x3}     {x3}

we guess a solution for x, say {1  1  1}ᵀ. Then the matrix-by-vector multiplication on the left-hand side yields

[ 16  -24   18 ] {1}   { 10}      { 1.0}
[  3   -2    0 ] {1} = {  1} = 10 { 0.1}    (4.11)
[ -9   18  -17 ] {1}   { -8}      {-0.8}

where we have normalised the resulting vector by dividing by the largest absolute term in it to give |xᵢ|max = 1. The normalised x is then used for the next round of iteration.
This round gives

[ 16  -24   18 ] { 1.0}   {-0.8}       {-0.125 }
[  3   -2    0 ] { 0.1} = { 2.8} = 6.4 { 0.4375}    (4.12)
[ -9   18  -17 ] {-0.8}   { 6.4}       { 1.0   }
and the next gives

[ 16  -24   18 ] {-0.125 }   { 5.5 }        {-0.6875 }
[  3   -2    0 ] { 0.4375} = {-1.25} = -8.0 { 0.15625}    (4.13)
[ -9   18  -17 ] { 1.0   }   {-8.0 }        { 1.0    }
and finally

[ 16  -24   18 ] {-0.6875 }   { 3.25 }        {-0.40625 }
[  3   -2    0 ] { 0.15625} = {-2.375} = -8.0 { 0.296875}    (4.14)
[ -9   18  -17 ] { 1.0    }   {-8.0  }        { 1.0     }
illustrating convergence towards the eigenvalue λ = -8. Note that this is the eigenvalue of largest absolute value and the power method will usually converge to this largest eigenvalue.
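
The iteration of eqs 4.11 to 4.14 is simple enough to sketch in a few lines. The version below is an illustrative Python/NumPy rendering (not the book's Fortran Program 4.1): it normalises by the entry of largest absolute value, as described above, and uses a plain vector-change test in place of the library routine CHECON.

# Sketch of the power method of Section 4.4 (illustrative only).
import numpy as np

def power_method(A, x0, tol=1.0e-5, its=100):
    x = np.array(x0, dtype=float)
    big = 0.0
    for iters in range(1, its + 1):
        y = A @ x
        big = y[np.argmax(np.abs(y))]   # signed entry of largest modulus
        y = y / big                     # |x_i|max = 1 normalisation
        if np.max(np.abs(y - x)) < tol:
            return big, y, iters
        x = y
    return big, x, its

A = np.array([[16.0, -24.0,  18.0],
              [ 3.0,  -2.0,   0.0],
              [-9.0,  18.0, -17.0]])
lam, vec, iters = power_method(A, [1.0, 1.0, 1.0])
print(lam, iters)                   # about -8.0; Fig. 4.1(b) reports 19 iterations
print(vec / np.linalg.norm(vec))    # compare with the eigenvector of Fig. 4.1(b)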

Program 4.1. Vector iteration to find the 'largest' eigenvalue

Given library subroutines capable of multiplying a matrix by a vector, and checking for convergence of the iterative process, programming of the power method is extremely simple. The following nomenclature is used:

Simple variables
N      Number of equations
TOL    Convergence tolerance
ITS    Maximum number of iterations allowed
ITERS  Current number of iterations
ICON   Convergence criterion in CHECON
BIG    The eigenvalue

Variable length arrays


A The coefficients of the matrix
X0 Eigenvector before an iteration
X1   Eigenvector after an iteration

PARAMETER restriction
IN ≥ N

      PROGRAM P41
C
C      PROGRAM 4.1   POWER METHOD FOR EIGENVALUES AND EIGENVECTORS
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),X0(IN),X1(IN)
C
      READ (5,*) N, ((A(I,J),J=1,N),I=1,N)
      READ (5,*) (X0(I),I=1,N)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('** DIRECT ITERATION TO FIND LARGEST EIGENVALUE **')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      ITERS = 0
    1 CALL MVMULT(A,IN,X0,N,N,X1)
      BIG = 0.0
      DO 2 I = 1,N
    2 IF (ABS(X1(I)).GT.ABS(BIG)) BIG = X1(I)
      DO 3 I = 1,N
    3 X1(I) = X1(I)/BIG
      CALL CHECON(X1,X0,N,TOL,ICON)
      ITERS = ITERS + 1
      IF (ICON.EQ.0 .AND. ITERS.LT.ITS) GO TO 1
      SUM = 0.0
      DO 4 I = 1,N
    4 SUM = SUM + X1(I)**2
      SUM = SQRT(SUM)
      DO 5 I = 1,N
    5 X1(I) = X1(I)/SUM
      WRITE(6,*) ('LARGEST EIGENVALUE')
      WRITE (6,1000) BIG
      WRITE(6,*)
      WRITE(6,*) ('CORRESPONDING EIGENVECTOR')
      CALL PRINTV(X1,N,6)
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE (6,2000) ITERS
 1000 FORMAT (E12.4)
 2000 FORMAT (I5)
      STOP
      END

Input and output for the program are shown in Figs 4.1(a) and (b) respectively. The program reads in the number of equations to be solved followed by the coefficient matrix and the initial guessed eigenvector. The tolerance required of the iteration process and the maximum number of iterations complete the data set. After a matrix-vector multiplication using library routine MVMULT, the new eigenvector is normalised so that its largest component is 1.0. The CHECON subroutine then checks if convergence has been achieved and updates the eigenvector. If more iterations are required, the program returns to the MVMULT stage and the process is repeated. After convergence, the eigenvector is normalised so that Σxᵢ² = 1. Figure 4.1(b) shows that convergence to the 'largest' eigenvalue -8 is achieved in 19 iterations. Subroutines PRINTA and PRINTV output arrays and vectors respectively to the required channel (in this case channel 6).


Array size N
3
Array (A(I,J),J=1,N),I=1,N
16.0  -24.0   18.0
 3.0   -2.0    0.0
-9.0   18.0  -17.0
Initial vector X0(I),I=1,N
1.0 1.0 1.0
Tolerance TOL
1.E-5
Iteration limit ITS
100

Fig. 4.1(a) Input data for Program 4.1



** DIRECT ITERATION TO FIND LARGEST EIGENVALUE **

MATRIX A
  .1600E+02  -.2400E+02   .1800E+02
  .3000E+01  -.2000E+01   .0000E+00
 -.9000E+01   .1800E+02  -.1700E+02

LARGEST EIGENVALUE
 -.8000E+01

CORRESPONDING EIGENVECTOR
 -.4364E+00   .2182E+00   .8729E+00

ITERATIONS TO CONVERGENCE
    19

Fig. 4.1(b) Results from Program 4.1

4.4.1 Shifted iteration

The rate of convergence of the power method depends upon the nature of the eigenvalues. For closely spaced eigenvalues, convergence can be slow. For example, taking the A matrix of eq. 4.9 and using Program 4.1 with x₀ᵀ = {1  1  1}, convergence to λ = 4.7071 is only achieved after 26 iterations for a tolerance of 1 × 10⁻⁵. The device of 'shifting' is based on the solution of the modified problem

(A - pI)x = (λ - p)x    (4.15)

where p is a simple scalar. Thus eq. 4.1 has been changed to eq. 4.15 by simply subtracting px from both sides of the equation. In the modified problem the eigenvectors x are the same as before but the eigenvalues of the matrix A - pI are given by λ - p, or 'shifted' by an amount p. To recover the eigenvalues of A we merely have to add p to the eigenvalues of A - pI.
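
The shift itself is a one-line change: iterate on A - pI and add p back to the converged factor. The fragment below is a minimal, self-contained Python sketch of this idea (not the book's Fortran) applied to the matrix of eq. 4.9 with the shift p = 3.5 used in Program 4.2.

# Shifted iteration sketch: power method on (A - p*I), then add p back.
import numpy as np

p = 3.5
A = np.array([[4.0, 0.5, 0.0],
              [0.5, 4.0, 0.5],
              [0.0, 0.5, 4.0]])
As = A - p * np.eye(3)

x = np.ones(3)
for iters in range(1, 101):
    y = As @ x
    big = y[np.argmax(np.abs(y))]
    y = y / big
    if np.max(np.abs(y - x)) < 1.0e-5:
        break
    x = y
print(big + p, iters)   # about 4.7071; the book reports 7 iterations (26 unshifted)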

Program 4.2. Shifted iteration

This program only differs from the previous one in that SHIFT is read, two lines are added to perform the subtraction of pI from A, and SHIFT is added to the eigenvalue before printing. The input and output are shown in Figs 4.2(a) and (b) respectively. It can be seen that using a shift of 3.5, convergence to λ₁ = 4.7071 is now reached in 7 iterations for the same tolerance.

      PROGRAM P42
C
C      PROGRAM 4.2   SHIFTED ITERATION USING THE POWER METHOD
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),X0(IN),X1(IN)
C
      READ (5,*) N, ((A(I,J),J=1,N),I=1,N), SHIFT
      WRITE(6,*) ('*************** SHIFTED ITERATION ***************')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('SHIFT')
      WRITE(6,1000) SHIFT
      WRITE(6,*)
      DO 10 I = 1,N
   10 A(I,I) = A(I,I) - SHIFT
      READ (5,*) (X0(I),I=1,N)
      READ (5,*) TOL,ITS
      ITERS = 0
    1 CALL MVMULT(A,IN,X0,N,N,X1)
      BIG = 0.0
      DO 2 I = 1,N
    2 IF (ABS(X1(I)).GT.ABS(BIG)) BIG = X1(I)
      DO 3 I = 1,N
    3 X1(I) = X1(I)/BIG
      CALL CHECON(X1,X0,N,TOL,ICON)
      ITERS = ITERS + 1
      IF (ICON.EQ.0 .AND. ITERS.LT.ITS) GO TO 1
      SUM = 0.0
      DO 4 I = 1,N
    4 SUM = SUM + X1(I)**2
      SUM = SQRT(SUM)
      DO 5 I = 1,N
    5 X1(I) = X1(I)/SUM
      WRITE(6,*) ('EIGENVALUE')
      WRITE (6,1000) BIG + SHIFT
      WRITE(6,*)
      WRITE(6,*) ('CORRESPONDING EIGENVECTOR')
      CALL PRINTV(X1,N,6)
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE (6,2000) ITERS
 1000 FORMAT (E12.4)
 2000 FORMAT (I5)
      STOP
      END

A useful feature of shifted iteration is that it enables the ready calculation of the smallest eigenvalue for systems whose eigenvalues are all positive. The power method can first be used to compute the largest eigenvalue, and this can then be used as the shift, implying that convergence will be to the unshifted eigenvalue closest to zero.


Array size N
3
Array (A(I,J),J=1,N),I=1,N
4.0  0.5  0.0
0.5  4.0  0.5
0.0  0.5  4.0
Shift SHIFT
3.5
Initial vector X0(I),I=1,N
1.0 1.0 1.0
Tolerance TOL
1.E-5
Iteration limit ITS
100

Fig. 4.2(a) Input data for Program 4.2

*************** SHIFTED ITERATION ***************

MATRIX A
  .4000E+01   .5000E+00   .0000E+00
  .5000E+00   .4000E+01   .5000E+00
  .0000E+00   .5000E+00   .4000E+01

SHIFT
  .3500E+01

EIGENVALUE
  .4707E+01

CORRESPONDING EIGENVECTOR
  .5000E+00   .7071E+00   .5000E+00

ITERATIONS TO CONVERGENCE
     7

Fig. 4.2(b) Results from Program 4.2

4.4.2 Shifted inverse iteration

A more direct way of achieving convergence of the power method on eigenvalues other than the 'largest' is to recast eq. 4.1 in the form

(A - pI)⁻¹x = [1/(λ - p)] x    (4.16)

where p is a scalar shift and λ is an eigenvalue of A. The eigenvectors of (A - pI)⁻¹ are the same as those of A, but it can be shown that the eigenvalues of (A - pI)⁻¹ are 1/(λ - p). Hence, the largest eigenvalue of (A - pI)⁻¹ leads to the eigenvalue of A that is closest to p. Thus, if the largest eigenvalue of (A - pI)⁻¹ is λ₁, then the required eigenvalue of A is given by λ = (1/λ₁) + p. For small matrices it would be possible just to invert (A - pI) and to use the inverse to solve eq. 4.16 iteratively in exactly the same algorithm as was used in Program 4.1. However, we saw in Chapter 2 that in general

factorisation methods are the most applicable to inverse problems. Thus, whereas in the normal shifted iteration method we would have to compute in every iteration

(A - pI)x₀ = x₁    (4.17)

in the inverse iteration we have to compute

(A - pI)⁻¹x₀ = x₁

By factorising (A - pI) (see Chapter 2) using the appropriate library subroutine (LUFAC) we can write

(A - pI) = LU    (4.18)

and so

(LU)⁻¹x₀ = x₁    (4.19)

or

U⁻¹L⁻¹x₀ = x₁    (4.20)

If now we let

Ly₀ = x₀   or   y₀ = L⁻¹x₀    (4.21)

and

Ux₁ = y₀   or   x₁ = U⁻¹y₀    (4.22)

we can see that eq. 4.20 is solved for x₁ by solving in succession Ly₀ = x₀ for y₀ and Ux₁ = y₀ for x₁. These processes are just the 'forward'- and 'back'-substitution processes we saw in Chapter 2, for which subroutines SUBFOR and SUBBAC were developed.
By altering p in a systematic way, all the eigenvalues of A can be found by this method.
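
As a rough illustration of the procedure (outside the book's Fortran framework), the sketch below factorises A - pI once with scipy.linalg.lu_factor and then performs the repeated forward- and back-substitutions with lu_solve, the analogue of the SUBFOR and SUBBAC calls; SciPy is assumed to be available.

# Shifted inverse iteration sketch: factorise (A - p*I) once, then each
# iteration costs only a forward- and a back-substitution (lu_solve).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration(A, p, x0, tol=1.0e-5, its=100):
    lu_piv = lu_factor(A - p * np.eye(A.shape[0]))
    x = np.array(x0, dtype=float)
    big = 0.0
    for iters in range(1, its + 1):
        y = lu_solve(lu_piv, x)          # solves (A - p*I) y = x
        big = y[np.argmax(np.abs(y))]
        y = y / big
        if np.max(np.abs(y - x)) < tol:
            return 1.0 / big + p, y, iters
        x = y
    return 1.0 / big + p, x, its

A = np.array([[4.0, 0.5, 0.0],
              [0.5, 4.0, 0.5],
              [0.0, 0.5, 4.0]])
print(inverse_iteration(A, 0.0, [1.0, 1.0, 1.0]))   # smallest eigenvalue, about 3.293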

Example 4.1

Use shifted inverse iteration to find the eigenvalue of the matrix

A = [ 3  2 ]
    [ 3  4 ]

that is closest to 2.

Solution 4.1

Use simple iteration operating on the matrix

B = (A - pI)⁻¹   where p = 2 in this case.

B = (A - pI)⁻¹ = [ 1  2 ]⁻¹ = [ -0.5    0.5  ]
                 [ 3  2 ]     [  0.75  -0.25 ]

Let x₀ = {1  1}ᵀ and let xᵢ* denote the value of xᵢ before normalisation:

x₀ = {1   1}ᵀ         x₁* = { 0       0.5   }ᵀ     factor 0.5
x₁ = {0   1}ᵀ         x₂* = { 0.5    -0.25  }ᵀ     factor 0.5
x₂ = {1  -0.5}ᵀ       x₃* = {-0.75    0.875 }ᵀ     factor 0.875
x₃ = {-0.857  1}ᵀ     x₄* = { 0.9286 -0.8929}ᵀ     factor 0.9286

After further iterations the normalising factor tends to 1 in magnitude while the vector approaches {-1  1}ᵀ and reverses sign at each step, showing that the dominant eigenvalue of B is 1/(λ - p) = -1. The eigenvalue of A closest to 2 is therefore λ = 2 + 1/(-1) = 1.
Program 4.3. Shifted inverse iteration

The program uses exactly the same nomenclature as in Program 4.2 with the additional variable length arrays:

UPTRI    Upper triangular factor of (A - pI)
LOWTRI   Lower triangular factor of (A - pI)

      PROGRAM P43
C
C      PROGRAM 4.3   SHIFTED INVERSE ITERATION FOR EIGENVALUE
C                    CLOSEST TO A GIVEN VALUE
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),UPTRI(IN,IN),LOWTRI(IN,IN),X0(IN),X1(IN)
C
      READ (5,*) N, ((A(I,J),J=1,N),I=1,N), SHIFT
      WRITE(6,*) ('********** SHIFTED INVERSE ITERATION ************')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('SHIFT')
      WRITE(6,1000) SHIFT
      WRITE(6,*)
      DO 10 I = 1,N
   10 A(I,I) = A(I,I) - SHIFT
      CALL LUFAC(A,UPTRI,LOWTRI,IN,N)
      READ (5,*) (X0(I),I=1,N)
      READ (5,*) TOL,ITS
      ITERS = 0
      CALL VECCOP(X0,X1,N)
    1 CALL SUBFOR(LOWTRI,IN,X1,N)
      CALL SUBBAC(UPTRI,IN,X1,N)
      BIG = 0.0
      DO 2 I = 1,N
    2 IF (ABS(X1(I)).GT.ABS(BIG)) BIG = X1(I)
      DO 3 I = 1,N
    3 X1(I) = X1(I)/BIG
      CALL CHECON(X1,X0,N,TOL,ICON)
      ITERS = ITERS + 1
      IF (ICON.EQ.0 .AND. ITERS.LT.ITS) GO TO 1
      SUM = 0.0
      DO 4 I = 1,N
    4 SUM = SUM + X1(I)**2
      SUM = SQRT(SUM)
      DO 5 I = 1,N
    5 X1(I) = X1(I)/SUM
      WRITE(6,*) ('EIGENVALUE CLOSEST TO SHIFT')
      WRITE (6,1000) 1.0/BIG + SHIFT
      WRITE(6,*)
      WRITE(6,*) ('CORRESPONDING EIGENVECTOR')
      CALL PRINTV(X1,N,6)
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE (6,2000) ITERS
 1000 FORMAT (E12.4)
 2000 FORMAT (I5)
      STOP
      END

Array size N
3
Array (A(I,J),J=1,N),I=1,N
4.0  0.5  0.0
0.5  4.0  0.5
0.0  0.5  4.0
Shift SHIFT
0.0
Initial vector X0(I),I=1,N
1.0 1.0 1.0
Tolerance TOL
1.E-5
Iteration limit ITS
100

Fig. 4.3(a) Input data for Program 4.3



********** SHIFTED INVERSE ITERATION ************

MATRIX A
  .4000E+01   .5000E+00   .0000E+00
  .5000E+00   .4000E+01   .5000E+00
  .0000E+00   .5000E+00   .4000E+01

SHIFT
  .0000E+00

EIGENVALUE CLOSEST TO SHIFT
  .3293E+01

CORRESPONDING EIGENVECTOR
 -.5000E+00   .7071E+00  -.5000E+00

ITERATIONS TO CONVERGENCE
    36

Fig. 4.3(b) Results from Program 4.3

Typical input data and output results are shown in Figs 4.3(a) and (b) respectively. The number of equations and the coefficients of A are read in, followed by the scalar shift, the first guess of vector x₀, the tolerance and the maximum number of iterations. The initial 'guess' x₀ is copied into x₁ for future reference, using VECCOP, and the iteration loop entered. Matrix (A - pI) is formed (still called A) followed by factorisation using LUFAC, and calls to SUBFOR and SUBBAC complete the determination of x₁ following eqs 4.21 and 4.22. Vector x₁ is then normalised to |x₁|max = 1.0 and the convergence check invoked. When convergence is complete the converged vector x₁ is normalised so that Σxᵢ² = 1.0, and the eigenvector, the eigenvalue of the original A closest to p and the number of iterations to convergence printed. For example, this process can be used, with a shift p = 0, to find the 'smallest' eigenvalue of the matrix. For the case shown in Fig. 4.3(b), convergence to the 'smallest' eigenvalue, namely 4 - 1/√2, is achieved in 36 iterations.

4.5 Calculation of intermediate eigenvalues - deflation

In the previous section we have shown that by using the power method, or simple variations of it, convergence to the numerically largest eigenvalue, the numerically smallest, or the eigenvalue closest to a given quantity can usually be obtained. Suppose, however, that the second largest eigenvalue of a system is to be investigated. One means of doing this is called 'deflation' and it consists essentially in removing the largest eigenvalue, once it has been computed by, for example, the power method, from the system of equations.
As an example, consider the eigenvalue problem

[ 2  1 ] {x1}     {x1}
[ 1  2 ] {x2} = λ {x2}    (4.23)

The characteristic polynomial is readily shown to be

(λ - 3)(λ - 1) = 0    (4.24)

and so the eigenvalues are +3 and +1. The eigenvalue +3 is associated with the normalised eigenvector {1/√2  1/√2}ᵀ and the eigenvalue +1 is associated with the eigenvector {1/√2  -1/√2}ᵀ. Since the matrix in eq. 4.23 is symmetric, these

eigenvectors obey the orthogonality rules described in Section 4.2, that is,

x₁ᵀx₁ = 1
and                    (4.25)
x₁ᵀx₂ = 0

We can use this property to establish a modified matrix A* such that

A* = A - λ₁x₁x₁ᵀ    (4.26)

We now multiply this equation by any eigenvector xᵢ to give

A*xᵢ = Axᵢ - λ₁x₁x₁ᵀxᵢ    (4.27)

When i = 1, eq. 4.27 can be written as

A*x₁ = Ax₁ - λ₁x₁x₁ᵀx₁
                              (4.28a)
     = λ₁x₁ - λ₁x₁ = 0

and when i > 1, eq. 4.27 can be written as

A*xᵢ = λᵢxᵢ    (4.28b)

Thus the first eigenvalue of A* is zero, and all other eigenvalues of A* are the same as those of A.
Having 'deflated' A to A*, the largest eigenvalue of A* can be found, for example by simple iteration, and this will equal the second largest eigenvalue of A. Following this procedure for eq. 4.23

A = [ 2  1 ]
    [ 1  2 ]    (4.29)

and λ₁ = 3 with x₁ᵀ = {1/√2  1/√2}.

Thus

x₁x₁ᵀ = {1/√2} {1/√2  1/√2} = [ 1/2  1/2 ]
        {1/√2}                [ 1/2  1/2 ]    (4.30)

and therefore

A* = [ 2  1 ] - 3 [ 1/2  1/2 ]
     [ 1  2 ]     [ 1/2  1/2 ]
                                  (4.31)
   = [  1/2  -1/2 ]
     [ -1/2   1/2 ]

The eigenvalues of A* are then given by the characteristic polynomial

λ(λ - 1) = 0    (4.32)

illustrating that the remaining nonzero eigenvalue of A* is the second largest eigenvalue of A, namely +1.
As a further example, let us use Program 4.1 to find an eigenvalue of the deflated matrix of the A given by eq. 4.9. Carrying out the deflation process leads to

     [ 3 - 1/(4√2)     1/4 - √2      -1 - 1/(4√2) ]
A* = [ 1/4 - √2        2 - 1/(2√2)    1/4 - √2    ]    (4.33)
     [ -1 - 1/(4√2)    1/4 - √2       3 - 1/(4√2) ]

For the input data of Fig. 4.1, i.e. a guessed starting eigenvector x₀ᵀ = {1  1  1}, convergence occurs not to the second eigenvalue, λ = 4, with associated eigenvector {1/√2  0  -1/√2}ᵀ, but rather to the third eigenvalue 4 - 1/√2 with associated eigenvector {1/2  -1/√2  1/2}ᵀ, in 2 iterations.
A change in the guessed starting eigenvector to x₀ᵀ = {1  0  -1/2} leads to convergence to the second eigenvalue and eigenvector in 46 iterations for the given tolerance. This example shows that care must be taken with vector iterative methods for deflated matrices or those with closely spaced eigenvalues if one is to be sure that convergence to a desired eigenvalue has been attained.
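
Deflation itself is a single rank-one update, as the following illustrative Python sketch shows for the matrix of eq. 4.9 (the largest eigenpair is entered directly from Section 4.2; in practice it would first be computed by the power method).

# Deflation sketch (eq. 4.26): subtract lambda1*x1*x1' from A, then run the
# power method on the deflated matrix A*.  Illustrative only.
import numpy as np

A = np.array([[4.0, 0.5, 0.0],
              [0.5, 4.0, 0.5],
              [0.0, 0.5, 4.0]])
lam1 = 4.0 + 1.0 / np.sqrt(2.0)                  # largest eigenvalue (Section 4.2)
x1 = np.array([0.5, 1.0 / np.sqrt(2.0), 0.5])    # its normalised eigenvector
Astar = A - lam1 * np.outer(x1, x1)              # eq. 4.26

x = np.array([1.0, 0.0, -0.5])                   # starting vector quoted in the text
for iters in range(1, 101):
    y = Astar @ x
    big = y[np.argmax(np.abs(y))]
    y = y / big
    if np.max(np.abs(y - x)) < 1.0e-5:
        break
    x = y
print(big, iters)   # about 4.0, the second eigenvalue of A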

4.6 The generalised eigenvalue problem Ax = λBx

Frequently in engineering practice there is an extra matrix on the right-hand side of the eigenvalue equation, leading to the form

Ax = λBx    (4.34)

By rearrangement of eq. 4.34 we could write either of the equivalent eigenvalue equations

B⁻¹Ax = λx    (4.35a)

or

A⁻¹Bx = (1/λ)x    (4.35b)

The present implementation yields the largest eigenvalue (1/λ) of eq. 4.35(b). The reciprocal of this value corresponds to the smallest eigenvalue (λ) of eq. 4.35(a).
In the case of eq. 4.34 we can first set λ to 1 and make a guess at x on the right-hand side. A matrix-by-vector multiplication then yields

Bx = y    (4.36)

allowing a new estimate of x to be established by solving the set of linear equations

Ax = y    (4.37)

by any of the techniques described in Chapter 2. For example, as was done in Program 4.3, LU factorisation may be employed with the factorisation phase completed outside the iteration loop.
When the new x has been computed, it may be normalised to give |xᵢ|max = 1 and the procedure can be repeated from eq. 4.36 onwards.
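
A compact sketch of this loop, in Python/NumPy with SciPy's lu_factor and lu_solve standing in for LUFAC, SUBFOR and SUBBAC, might look as follows; the strut data of Fig. 4.4(a) are used as a check.

# Sketch of the iteration Bx = y (eq. 4.36), then solve Ax = y (eq. 4.37).
# A is factorised once; illustrative only, not the book's Fortran.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def general_power(A, B, x0, tol=1.0e-5, its=100):
    lu_piv = lu_factor(A)
    x = np.array(x0, dtype=float)
    big = 0.0
    for iters in range(1, its + 1):
        y = B @ x                        # eq. 4.36
        xnew = lu_solve(lu_piv, y)       # eq. 4.37
        big = xnew[np.argmax(np.abs(xnew))]
        xnew = xnew / big
        if np.max(np.abs(xnew - x)) < tol:
            return 1.0 / big, xnew, iters    # 1/big is the smallest lambda
        x = xnew
    return 1.0 / big, x, its

A = np.array([[  8.0,  4.0, -24.0,  0.0],
              [  4.0, 16.0,   0.0,  4.0],
              [-24.0,  0.0, 192.0, 24.0],
              [  0.0,  4.0,  24.0,  8.0]])
B = np.array([[ 0.06667, -0.01667, -0.1,      0.0    ],
              [-0.01667,  0.13333,  0.0,     -0.01667],
              [-0.1,      0.0,      4.8,      0.1    ],
              [ 0.0,     -0.01667,  0.1,      0.06667]])
print(general_power(A, B, [1.0, 1.0, 1.0, 1.0])[0])   # about 9.94 (buckling load)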

Program 4.4. Iterative solution of Ax = λBx

The program describing this process is easily developed from the previous programs in this chapter. Figure 4.4(a) shows data from a typical engineering situation which leads to the generalised problem of eq. 4.34.
In this case the A matrix represents the stiffness of a compressed strut and the B matrix the destabilising effect of the compressive force λ. There are four equations to be solved to a tolerance of 10⁻⁵ with an iteration limit of 100. The guessed starting vector x₀ is {1.0 1.0 1.0 1.0}ᵀ.
Matrices A and B are read in, and in preparation for the equation solution of eq. 4.37, A is factorised using subroutine LUFAC. The iteration loop begins at label 1 by the multiplication of B by x₀ as required by eq. 4.36. Forward- and back-substitution complete the equation solution called for by eq. 4.37 and the resulting vector is normalised. The convergence check is invoked and control returns to label 1 if convergence is incomplete, unless the iteration limit has been reached. The final normalisation involving the sum of the squares of the components of the eigenvector is then carried out and the normalised eigenvector and number of iterations printed.
In this case the reciprocal of the 'largest' eigenvalue is the 'buckling' load, which is also printed. The output is shown in Fig. 4.4(b) where the estimate of the buckling load of 9.94 after 6 iterations can be compared with the exact solution π² = 9.8696.

      PROGRAM P44
C
C      PROGRAM 4.4   POWER METHOD FOR A*X=LAMBDA*B*X
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),UPTRI(IN,IN),LOWTRI(IN,IN),X0(IN),X1(IN),B(IN,IN)
C
      READ (5,*) N, ((A(I,J),J=1,N),I=1,N)
      READ (5,*) ((B(I,J),J=1,N),I=1,N)
      WRITE(6,*) ('************ ITERATIVE SOLUTION OF **************')
      WRITE(6,*) ('************** A*X=LAMBDA*B*X ****************')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('MATRIX B')
      CALL PRINTA(B,IN,N,N,6)
      WRITE(6,*)
      CALL LUFAC(A,UPTRI,LOWTRI,IN,N)
      READ (5,*) (X0(I),I=1,N)
      READ (5,*) TOL,ITS
      ITERS = 0
      CALL VECCOP(X0,X1,N)
    1 CALL MVMULT(B,IN,X0,N,N,X1)
      CALL SUBFOR(LOWTRI,IN,X1,N)
      CALL SUBBAC(UPTRI,IN,X1,N)
      BIG = 0.0
      DO 2 I = 1,N
    2 IF (ABS(X1(I)).GT.ABS(BIG)) BIG = X1(I)
      DO 3 I = 1,N
    3 X1(I) = X1(I)/BIG
      CALL CHECON(X1,X0,N,TOL,ICON)
      ITERS = ITERS + 1
      IF (ICON.EQ.0 .AND. ITERS.LT.ITS) GO TO 1
      SUM = 0.0
      DO 4 I = 1,N
    4 SUM = SUM + X1(I)**2
      SUM = SQRT(SUM)
      DO 5 I = 1,N
    5 X1(I) = X1(I)/SUM
      WRITE(6,*) ('SMALLEST EIGENVALUE (LAMBDA)')
      WRITE (6,1000) 1.0/BIG
      WRITE(6,*)
      WRITE(6,*) ('CORRESPONDING EIGENVECTOR')
      CALL PRINTV(X1,N,6)
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE (6,2000) ITERS
 1000 FORMAT (E12.4)
 2000 FORMAT (I5)
      STOP
      END

Array size N
4
Array A (A(I,J),J=1,N),I=1,N
  8.0   4.0  -24.0   0.0
  4.0  16.0    0.0   4.0
-24.0   0.0  192.0  24.0
  0.0   4.0   24.0   8.0
Array B (B(I,J),J=1,N),I=1,N
 0.06667 -0.01667 -0.1      0.0
-0.01667  0.13333  0.0     -0.01667
-0.1      0.0      4.8      0.1
 0.0     -0.01667  0.1      0.06667
Initial vector X0(I),I=1,N
1.0 1.0 1.0 1.0
Tolerance TOL
1.E-5
Iteration limit ITS
100

Fig. 4.4(a) Input data for Program 4.4



************ ITERATIVE SOLUTION OF **************
************** A*X=LAMBDA*B*X ****************

MATRIX A
  .8000E+01   .4000E+01  -.2400E+02   .0000E+00
  .4000E+01   .1600E+02   .0000E+00   .4000E+01
 -.2400E+02   .0000E+00   .1920E+03   .2400E+02
  .0000E+00   .4000E+01   .2400E+02   .8000E+01

MATRIX B
  .6667E-01  -.1667E-01  -.1000E+00   .0000E+00
 -.1667E-01   .1333E+00   .0000E+00  -.1667E-01
 -.1000E+00   .0000E+00   .4800E+01   .1000E+00
  .0000E+00  -.1667E-01   .1000E+00   .6667E-01

SMALLEST EIGENVALUE (LAMBDA)
  .9944E+01

CORRESPONDING EIGENVECTOR
  .6898E+00   .0000E+00   .2200E+00  -.6898E+00

ITERATIONS TO CONVERGENCE
     6

Fig. 4.4(b) Results from Program 4.4

4.6.1 Conversion of the generalised problem to standard form

Several solution techniques for eigenvalue problems require the equation to be in the 'standard' form of eq. 4.1, namely

Ax = λx    (4.1)

If the generalised form of eq. 4.34 is encountered, it is always possible to convert it into the standard form. For example, given

Ax = λBx    (4.34)

one could think of inverting B (or A) by some procedure to yield the forms

B⁻¹Ax = λx
or                      (4.38)
A⁻¹Bx = (1/λ)x

which are in standard form. However, even for symmetrical B (or A) the products B⁻¹A and A⁻¹B are not in general symmetrical. In order to preserve symmetry, the following strategy can be used.
Starting from eq. 4.34 we can factorise B by Cholesky's method (see Chapter 2) to give

Ax = λLLᵀx    (4.39)

or

L⁻¹Ax = λLᵀx    (4.40)

Now we let

L⁻¹A = C    (4.41)

which can be solved for C by repeated forward-substitutions using the columns of A in the form

LC = A    (4.42)

We now have

Cx = λLᵀx    (4.43)

By setting

Lᵀx = y    (4.44)

or

x = L⁻ᵀy    (4.45)

eq. 4.43 becomes

CL⁻ᵀy = λy    (4.46)

If we set

CL⁻ᵀ = D    (4.47)

or

L⁻¹Cᵀ = Dᵀ

this can be solved for Dᵀ by repeated forward-substitutions using the columns of Cᵀ in the form

LDᵀ = Cᵀ    (4.48)

It will be found that in the resulting standard form equation

Dᵀy = Dy = λy    (4.49)

the matrix D is symmetrical. Note that the eigenvalues of eq. 4.49 are the same as those of eq. 4.34 but the eigenvectors are y rather than the original x. These originals can be recovered quite simply from eq. 4.44 using back-substitution.
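
The whole reduction of eqs 4.39 to 4.49 amounts to one Cholesky factorisation and two sets of triangular solves. The sketch below is an illustrative Python version (numpy.linalg.cholesky and scipy.linalg.solve_triangular are assumed to be available; the small 2 x 2 test data are invented purely for demonstration).

# Sketch of the reduction of Ax = lambda*Bx to standard symmetric form:
# B = L L', then L C = A (eq. 4.42) and L D' = C' (eq. 4.48).
import numpy as np
from scipy.linalg import solve_triangular

def to_standard_form(A, B):
    L = np.linalg.cholesky(B)                      # B = L L'
    C = solve_triangular(L, A, lower=True)         # forward-substitutions, eq. 4.42
    D = solve_triangular(L, C.T, lower=True).T     # forward-substitutions, eq. 4.48
    return D, L    # eigenvalues of D equal those of the pair (A, B)

# x is recovered from an eigenvector y of D by back-substitution of L' x = y.
if __name__ == "__main__":
    A = np.array([[2.0, 1.0], [1.0, 3.0]])         # invented test data
    B = np.array([[2.0, 0.5], [0.5, 1.0]])
    D, L = to_standard_form(A, B)
    print(np.allclose(D, D.T))                     # D is symmetric
    print(np.linalg.eigvalsh(D))                   # generalised eigenvalues of (A, B)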

Program 4.5. Conversion of Ax = λBx to 'standard form'

A program describing this process is now presented using as data the same problem specification as was used in the previous program. The nomenclature used is as follows:

Simple variables
N      Number of equations

Variable length arrays
A, B   Arrays A and B in Ax = λBx
D      Diagonal matrix (vector) in LDLᵀ factorisation
C, E   Temporary storage
DD     The array D = Dᵀ in eq. 4.49

PARAMETER restriction
IN ≥ N

      PROGRAM P45
C
C      PROGRAM 4.5   CONVERSION OF A*X=LAMBDA*B*X
C                    TO STANDARD FORM
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),D(IN),B(IN,IN),C(IN,IN),E(IN),DD(IN,IN)
C
      READ (5,*) N, ((A(I,J),J=1,N),I=1,N)
      READ (5,*) ((B(I,J),J=1,N),I=1,N)
      WRITE(6,*) ('******** CONVERSION OF A*X=LAMBDA*B*X **********')
      WRITE(6,*) ('******** TO STANDARD SYMMETRICAL FORM ***********')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('MATRIX B')
      CALL PRINTA(B,IN,N,N,6)
      WRITE(6,*)
      CALL LDLT(B,IN,D,N)
      DO 1 I = 1,N
    1 D(I) = SQRT(D(I))
      DO 2 J = 1,N
      DO 2 I = J,N
    2 B(I,J) = B(I,J)/D(J)
      DO 3 I = 1,N - 1
      DO 3 J = I + 1,N
    3 B(I,J) = B(J,I)
      DO 4 J = 1,N
      DO 5 I = 1,N
    5 E(I) = A(I,J)
      CALL SUBFOR(B,IN,E,N)
      DO 6 I = 1,N
    6 C(I,J) = E(I)
    4 CONTINUE
      WRITE(6,*) ('MATRIX C')
      CALL PRINTA(C,IN,N,N,6)
      WRITE(6,*)
      CALL MATRAN(DD,IN,C,IN,N,N)
      DO 7 J = 1,N
      DO 8 I = 1,N
    8 E(I) = DD(I,J)
      CALL SUBFOR(B,IN,E,N)
      DO 9 I = 1,N
    9 DD(I,J) = E(I)
    7 CONTINUE
      WRITE(6,*) ('FINAL SYMMETRICAL MATRIX D')
      CALL PRINTA(DD,IN,N,N,6)
      STOP
      END

The program begins by reading in the number of equations and the A and B matrices. Matrix B is factorised using library routine LDLT and the Cholesky factors are then obtained by dividing the terms of L by √dᵢᵢ.
Equation 4.42 is then solved for C. The columns of A are first copied into a temporary storage vector E and forward-substitution using SUBFOR leads to the columns of C. The C matrix is printed out and then transposed into DD using library routine MATRAN. Then, in a very similar sequence of operations, eq. 4.48 is solved for DD (Dᵀ) by first copying the columns into E and then calling SUBFOR. The final symmetric matrix is printed out as shown in Fig. 4.5(b). If the

Array size N
4
Array A (A(I,J),J=1,N),I=1,N
  8.0   4.0  -24.0   0.0
  4.0  16.0    0.0   4.0
-24.0   0.0  192.0  24.0
  0.0   4.0   24.0   8.0
Array B (B(I,J),J=1,N),I=1,N
 0.06667 -0.01667 -0.1      0.0
-0.01667  0.13333  0.0     -0.01667
-0.1      0.0      4.8      0.1
 0.0     -0.01667  0.1      0.06667

Fig. 4.5(a) Input data for Program 4.5

******** CONVERSION OF A*X=LAMBDA*B*X **********
******** TO STANDARD SYMMETRICAL FORM ***********
MATRIX A
  .8000E+01   .4000E+01  -.2400E+02   .0000E+00
  .4000E+01   .1600E+02   .0000E+00   .4000E+01
 -.2400E+02   .0000E+00   .1920E+03   .2400E+02
  .0000E+00   .4000E+01   .2400E+02   .8000E+01
MATRIX B
  .6667E-01  -.1667E-01  -.1000E+00   .0000E+00
 -.1667E-01   .1333E+00   .0000E+00  -.1667E-01
 -.1000E+00   .0000E+00   .4800E+01   .1000E+00
  .0000E+00  -.1667E-01   .1000E+00   .6667E-01
MATRIX C
  .3098E+02   .1549E+02  -.9295E+02   .0000E+00
  .1670E+02   .4730E+02  -.1670E+02   .1113E+02
 -.5029E+01   .4311E+01   .7184E+02   .1149E+02
  .4001E+01   .2400E+02   .8000E+02   .3200E+02

FINAL SYMMETRICAL MATRIX D
  .1200E+03   .6466E+02  -.1948E+02   .1549E+02
  .6466E+02   .1432E+03   .8496E+01   .6957E+02
 -.1948E+02   .8496E+01   .3011E+02   .4215E+02
  .1549E+02   .6957E+02   .4215E+02   .1333E+03

Fig. 4.5(b) Results from Program 4.5

'largest' eigenvalue of this matrix is calculated using Program 4.1 with a tolerance of 10⁻⁵, convergence to the eigenvalue λ = 240 will be found in 15 iterations.
If A and B are switched, the eigenvalues of the resulting symmetric matrix found by this program would be the reciprocals of their previous values when A and B were not switched. For example, if A and B are switched and the resulting symmetric matrix solved by Program 4.1, the 'largest' eigenvalue is given as 0.1006. This is the reciprocal of 9.94 which was the 'smallest' eigenvalue obtained previously by Program 4.4.

4.7 Transformation methods

Returning to the basic eigenvalue equation

Ax = λx    (4.1)

we can see that it can be 'transformed' into an equation with the same eigenvalues

A*x = λx    (4.50)

if eq. 4.1 is 'postmultiplied' by a matrix P and 'premultiplied' by the inverse of P. This gives the equation

P⁻¹APx = λP⁻¹Px    (4.51)

which is of the form of eq. 4.50 with

A* = P⁻¹AP    (4.52)

The concept behind such a transformation technique is to employ it so as to make the eigenvalues of A* easier to find than were the eigenvalues of the original A.
Even if A were symmetrical, it is highly unlikely that A* in eq. 4.52 would retain symmetry. However, if A is symmetrical, it can be shown that the matrix

A* = PᵀAP    (4.53)

will always be symmetrical. When this transformation is applied to eq. 4.1, the resulting eigenvalue problem is

A*x = PᵀAPx = λPᵀPx    (4.54)

For the eigenvalues of A* to be the same as those of A, we have to arrange that the transformation matrix has the additional property

PᵀP = I    (4.55)

Matrices of this type are said to be 'orthogonal', and a matrix which has this property is the so-called 'rotation matrix'

P = [ cos α   -sin α ]
    [ sin α    cos α ]    (4.56)
Applying this transformation to the A matrix of eq. 4.23, we have

A* = [  cos α   sin α ] [ 2  1 ] [ cos α   -sin α ]
     [ -sin α   cos α ] [ 1  2 ] [ sin α    cos α ]
                                                        (4.57)
   = [ 2 + 2 sin α cos α      cos²α - sin²α     ]
     [ cos²α - sin²α          2 - 2 sin α cos α ]

in which A* is clearly symmetrical for any value of α. In this case the obvious value of α to choose to give an A* with known eigenvalues is such as to make A* a diagonal matrix. Elimination of the off-diagonal terms occurs if

cos²α - sin²α = 0    (4.58)

in which case tan α = 1 and α = π/4, giving sin α = cos α = 1/√2. The resulting diagonal-

ised A* then has the form

A* = [ 3  0 ]
     [ 0  1 ]    (4.59)

As before, this shows that the eigenvalues of A are +3 and +1.


For matricesA which are biggerthan 2 x2,the transformationmatrix is'padded
out'by putting I on theleadingdiagonalandzerosoff-diagonalin everyrow exceptthe
two with respectto whichthe rotation is to beperformed.For example,if A is 4 x 4, the
transformationmatrix could haveanv of 6 forms

cosd -sina 0

il
sina cosd 0
(4.60)
0 0 1
0 0 0

l t*l cos a
(4.61)
0
sin a

and so on.
Matrix 4.60 would annihilate the terms (1,2) and (2,1) in the original A matrix after the transformation via PᵀAP while matrix 4.61 would annihilate terms (2,4) and (4,2). The effect of the 1s and 0s is to leave the other rows and columns of A unchanged. This still means that off-diagonal terms which become zero during one transformation revert to nonzero values on subsequent transformations and so the method is iterative as we have come to expect.
The earliest form of this type of iteration is called 'Jacobi diagonalisation', and it proceeds by annihilating the largest off-diagonal term in each rotation.
Generalising eqs 4.57 for any symmetric matrix A we have

A* = [  cos α   sin α ] [ a₁₁  a₁₂ ] [ cos α   -sin α ]
     [ -sin α   cos α ] [ a₁₂  a₂₂ ] [ sin α    cos α ]    (4.62)

leading to off-diagonal terms in A* of the form

a*ᵢⱼ = a*ⱼᵢ = (-aᵢᵢ + aⱼⱼ) cos α sin α + aᵢⱼ(cos²α - sin²α)    (4.63)

which must be made zero by choosing the appropriate α.
For example, a*ᵢⱼ is made zero by putting

tan 2α = 2aᵢⱼ / (aᵢᵢ - aⱼⱼ)    (4.64)

To make a simple program for Jacobi diagonalisation we have therefore to search for the off-diagonal term in A of largest modulus and find the row and column in which it lies. The rotation angle α can then be computed from eq. 4.64 and the transformation matrix P set up. Matrix P can then be transposed using a library subroutine, and the matrix products to form A*, as required by eq. 4.53, can be carried out. The leading diagonal of A* will converge to the eigenvalues of A.
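
A minimal Python sketch of this classical Jacobi process, using the same rotation and the same choice of α as eq. 4.64, is given below (illustrative only, not the book's Fortran); it reproduces the eigenvalues of eq. 4.65 quoted in Fig. 4.6(b).

# Sketch of classical Jacobi diagonalisation: rotate away the off-diagonal
# entry of largest modulus, repeat until all off-diagonal terms are small.
import numpy as np

def jacobi_eigenvalues(A, tol=1.0e-10, its=100):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(its):
        off = np.abs(A - np.diag(np.diag(A)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] < tol:
            break
        den = A[i, i] - A[j, j]
        if den == 0.0:
            alpha = np.pi / 4.0 if A[i, j] > 0.0 else -np.pi / 4.0
        else:
            alpha = 0.5 * np.arctan(2.0 * A[i, j] / den)   # eq. 4.64
        P = np.eye(n)
        P[i, i] = P[j, j] = np.cos(alpha)
        P[i, j] = -np.sin(alpha)
        P[j, i] = np.sin(alpha)
        A = P.T @ A @ P                                    # eq. 4.53
    return np.diag(A)

A = [[10.0, 5.0, 6.0], [5.0, 20.0, 4.0], [6.0, 4.0, 30.0]]
print(jacobi_eigenvalues(A))   # about 7.14, 19.15, 33.71 (cf. Fig. 4.6(b))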

Example 4.2

Use Jacobi diagonalisation to find the eigenvalues of the symmetrical matrix

A = [ 4  2 ]
    [ 2  3 ]

Solution 4.2

As A is only 2 × 2, one iteration should be sufficient. We wish to eliminate a₁₂ = a₂₁ = 2.

tan 2α = 2(2)/(4 - 3) = 4

α = 37.98°

Transformation matrix

P = [ cos α   -sin α ] = [ 0.7882  -0.6154 ]
    [ sin α    cos α ]   [ 0.6154   0.7882 ]

A* = PᵀAP

PᵀA = [  0.7882   0.6154 ] [ 4  2 ] = [  4.3836   3.4226 ]
      [ -0.6154   0.7882 ] [ 2  3 ]   [ -0.8852   1.1338 ]

A* = [  4.3836   3.4226 ] [ 0.7882  -0.6154 ] = [ 5.561   0     ]
     [ -0.8852   1.1338 ] [ 0.6154   0.7882 ]   [ 0       1.438 ]

Hence the eigenvalues are 5.561 and 1.438 approximately. (The exact solution to 3 decimal places is 5.562 and 1.438.)

Program 4.6. Jacobi diagonalisation

A program that uses Jacobi diagonalisation is now illustrated, which uses the following nomenclature:

Simple variables
N      Number of equations to be solved
ITS    Maximum number of iterations allowed
TOL    Iteration tolerance
ITERS  Current number of iterations
ALPHA  Rotation angle for transformations

Variable length arrays
A      The original A matrix
A1     Temporary storage
P      Transformation matrix
PT     Transpose of transformation matrix
ENEW   Latest vector of eigenvalues
EOLD   Previous vector of eigenvalues

PARAMETER restriction
IN ≥ N

      PROGRAM P46
C
C      PROGRAM 4.6   JACOBI TRANSFORMATION FOR EIGENVALUES
C                    OF SYMMETRIC MATRICES
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=10)
C
      REAL A(IN,IN),A1(IN,IN),P(IN,IN),PT(IN,IN),ENEW(IN),EOLD(IN)
C
      PI = 4.*ATAN(1.)
      READ (5,*) N, ((A(I,J),J=I,N),I=1,N)
      DO 2 I = 1,N
      DO 2 J = I,N
    2 A(J,I) = A(I,J)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('************ JACOBI ITERATION *******************')
      WRITE(6,*) ('******** FOR SYMMETRIC MATRICES ***************')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      CALL NULVEC(EOLD,N)
      ITERS = 0
   99 ITERS = ITERS + 1
      BIG = 0.
      DO 3 I = 1,N
      DO 3 J = I + 1,N
      IF (ABS(A(I,J)).GT.BIG) THEN
        BIG = ABS(A(I,J))
        HOLD = A(I,J)
        IROW = I
        ICOL = J
      END IF
    3 CONTINUE
      DEN = A(IROW,IROW) - A(ICOL,ICOL)
      IF (DEN.EQ.0.) THEN
        ALPHA = PI*.25
        IF (HOLD.LT.0.) ALPHA = -ALPHA
      ELSE
        ALPHA = .5*ATAN(2.*HOLD/DEN)
      END IF
      CT = COS(ALPHA)
      ST = SIN(ALPHA)
      CALL NULL(P,IN,N,N)
      DO 4 I = 1,N
    4 P(I,I) = 1.
      P(IROW,IROW) = CT
      P(ICOL,ICOL) = CT
      P(IROW,ICOL) = -ST
      P(ICOL,IROW) = ST
      ALPHA = ALPHA*180./PI
      CALL MATRAN(PT,IN,P,IN,N,N)
      CALL MATMUL(PT,IN,A,IN,A1,IN,N,N,N)
      CALL MATMUL(A1,IN,P,IN,A,IN,N,N,N)
      DO 6 I = 1,N
    6 ENEW(I) = A(I,I)
      CALL CHECON(ENEW,EOLD,N,TOL,ICON)
      IF (ITERS.LT.ITS .AND. ICON.EQ.0) GO TO 99
      WRITE(6,*) ('FINAL TRANSFORMATION MATRIX P')
      CALL PRINTA(P,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('FINAL TRANSFORMED MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,1000) ITERS
 1000 FORMAT (I5)
      STOP
      END

Array size N
3
Array (A(I,J),J=I,N),I=1,N
(accounting for symmetry)
10.0   5.0   6.0
      20.0   4.0
            30.0
Tolerance TOL
1.E-1
Iteration limit ITS
25

Fig. 4.6(a) Input data for Program 4.6

************ JACOBI ITERATION *******************
******** FOR SYMMETRIC MATRICES ***************

MATRIX A
  .1000E+02   .5000E+01   .6000E+01
  .5000E+01   .2000E+02   .4000E+01
  .6000E+01   .4000E+01   .3000E+02

FINAL TRANSFORMATION MATRIX P
  .1000E+01   .0000E+00   .2089E-04
  .0000E+00   .1000E+01   .0000E+00
 -.2089E-04   .0000E+00   .1000E+01

FINAL TRANSFORMED MATRIX A
  .7142E+01   .1898E-10   .7276E-10
  .4731E-06   .1915E+02   .9683E-06
 -.4645E-07   .9088E-06   .3371E+02

ITERATIONS TO CONVERGENCE
     7

Fig. 4.6(b) Results from Program 4.6



As shown in Fig. 4.6(a) the number of equations is read followed by the array A. Since A must be symmetrical, only its upper triangle is read, which is then copied into the lower triangle. The convergence tolerance and iteration limit complete the data. The iteration loop begins at label 99 and takes up the rest of the program. The largest off-diagonal term is stored as HOLD with its position in row and column IROW and ICOL respectively.
Rotation angle ALPHA is then computed from eq. 4.64 and its cosine CT and sine ST. Transformation matrix P is then easily computed. Calls to MATRAN and MATMUL enable the product in eq. 4.53 to be calculated. The eigenvalues in ENEW can then be compared with those in EOLD using subroutine CHECON. The output in Fig. 4.6(b) shows that the eigenvalues of

    [ 10   5   6 ]
A = [  5  20   4 ]    (4.65)
    [  6   4  30 ]

are computed to the given tolerance in 7 iterations and are 7.14, 19.15 and 33.71 respectively.

4.7.1 Comments on Jacobi diagonalisation

Although Program 4.6 illustrates the transformation process well for teaching purposes, it would not be used to solve large problems. One would never, in practice, store the transformation matrices P and Pᵀ but, perhaps less obviously, the searching process itself becomes very time-consuming as N increases. Alternatives to the basic Jacobi method which have been proposed include serial elimination, in which the off-diagonal elements are eliminated in a predetermined sequence, thus avoiding searching altogether, and a variation of this technique in which serial elimination is performed only on those elements whose modulus exceeds a certain value or 'threshold'. When all off-diagonal terms have been reduced to the threshold, it can be further reduced and the process continued.
Jacobi's idea can also be implemented in order to reduce A to a tridiagonal matrix A* having the same eigenvalues (rather than a diagonal A* as in the basic technique). This is called 'Givens's method', which has the advantage of being non-iterative. Of course, some method must still be found for calculating the eigenvalues of the tridiagonal A*. A more popular tridiagonalisation technique is called 'Householder's method' which is described in the next section.

4.7.2 Householder's transformation to tridiagonal form

Equation 4.55 gave the basic property that transformation matrices should have, namely

PᵀP = I    (4.55)

and the Householder technique involves choosing

P = I - 2wwᵀ    (4.66)

where w is a column vector normalised such that

wᵀw = 1    (4.67)

For example, let

w = { 1/√2 }    (4.68)
    { 1/√2 }

which has the required product. Then

2wwᵀ = [ 1  1 ]
       [ 1  1 ]    (4.69)

and we see that

P = I - 2wwᵀ = [  0  -1 ]
               [ -1   0 ]    (4.70)

which has the desired property

P⁻¹ = Pᵀ = P    (4.71)

so that eq. 4.55 is automatically satisfied.
In order to eliminate terms in the first row of A outside the tridiagonal, the vector w is taken as

w = {0  w₂  w₃  w₄ ... wₙ}ᵀ    (4.72)

Thus the transformation matrix for row 1 is, assuming A is 3 × 3:

     [ 1       0           0      ]
P₁ = [ 0   1 - 2w₂²     -2w₂w₃    ]    (4.73)
     [ 0    -2w₂w₃      1 - 2w₃²  ]

When the product P₁AP₁ is carried out, the first row of the resulting matrix contains the three terms

a*₁₁ = a₁₁
a*₁₂ = a₁₂ - 2w₂(a₁₂w₂ + a₁₃w₃) = r    (4.74)
a*₁₃ = a₁₃ - 2w₃(a₁₂w₂ + a₁₃w₃) = 0

Letting

h = a₁₂w₂ + a₁₃w₃

we see that to make a*₁₃ = 0 as required we need

a₁₃ - 2w₃h = 0    (4.75)

Equation 4.67 gives another equation in the wᵢ, namely

w₂² + w₃² = 1    (4.76)

allowing the wᵢ to be determined. The formulae derived from eq. 4.74 are

r² = a₁₂² + a₁₃²    (4.77)

and

2h² = r² - a₁₂r    (4.78)

Instead of computing using w it is convenient to use

v = 2hw = {0, a₁₂ - r, a₁₃}ᵀ    (4.79)

leading to the transformation matrix

P = I - [1/(2h²)] vvᵀ    (4.80)

In eq. 4.77 for the determination of r, the sign should be chosen such that r is of opposite sign to a₁₂ in this case.
For a general row i, the vector v takes the form

v = {0, 0, 0, ... 0, aᵢ,ᵢ₊₁ - r, aᵢ,ᵢ₊₂, ... aᵢ,ₙ}ᵀ    (4.81)

Program 4.7. Householder's reduction to tridiagonal form

The procedure is illustrated in the following program with the nomenclature as given below (note that the reduction process is not iterative in character):

Simple variables
N   Number of equations to be reduced
R   See eq. 4.74 or 4.77
H   -1/(2h²) (see eq. 4.80)

Variable length arrays
A    Original matrix to be reduced (on completion holds the tridiagonalised form)
A1   Working space
P    Transformation matrix
V    Vector v (see eq. 4.79)

PARAMETER restriction
IN ≥ N

The input is shown in Fig. 4.7(a). The number of equations and upper triangle of symmetric matrix A are first read in. Then N - 2 transformations are made for rows designated by counter

      PROGRAM P47
C
C      PROGRAM 4.7   HOUSEHOLDER REDUCTION OF A SYMMETRIC
C                    MATRIX TO TRIDIAGONAL FORM
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),A1(IN,IN),P(IN,IN),V(IN)
C
      READ (5,*) N, ((A(I,J),J=I,N),I=1,N)
      DO 11 I = 1,N
      DO 11 J = I,N
   11 A(J,I) = A(I,J)
      WRITE(6,*) ('** HOUSEHOLDER REDUCTION OF A SYMMETRIC MATRIX **')
      WRITE(6,*) ('************* TO TRIDIAGONAL FORM ***************')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      DO 1 K = 1,N - 2
      R = 0.0
      DO 2 L = K,N - 1
    2 R = R + A(K,L+1)*A(K,L+1)
      R = SQRT(R)
      IF (R*A(K,K+1).GT.0.0) R = -R
      H = -1.0/(R*R - R*A(K,K+1))
      CALL NULVEC(V,N)
      V(K+1) = A(K,K+1) - R
      DO 3 L = K + 2,N
    3 V(L) = A(K,L)
      CALL VVMULT(V,V,P,IN,N,N)
      CALL MSMULT(P,IN,H,N,N)
      DO 4 L = 1,N
    4 P(L,L) = P(L,L) + 1.0
      CALL MATMUL(A,IN,P,IN,A1,IN,N,N,N)
      CALL MATMUL(P,IN,A1,IN,A,IN,N,N,N)
    1 CONTINUE
      WRITE(6,*) ('TRANSFORMED MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      STOP
      END

K. Values of r, h and v are computed and the vector product required by eq. 4.80 is carried out using library routine VVMULT. Transformation matrix P can then be formed and two matrix multiplications using MATMUL complete the transformation. Figure 4.7(b) shows the resulting tridiagonalised A, whose eigenvalues would then have to be computed by some other method, perhaps by vector iteration as previously described or by a characteristic polynomial method as

Array size N
4
Array (A(I,J),J=I,N),I=1,N
(accounting for symmetry)
 1.0  -3.0  -2.0   1.0
      10.0  -3.0   6.0
             3.0  -2.0
                   1.0

Fig. 4.7(a) Input data for Program 4.7



** HOUSEHOLDER REDUCTION OF A SYMMETRIC MATRIX **
************* TO TRIDIAGONAL FORM ***************

MATRIX A
  .1000E+01  -.3000E+01  -.2000E+01   .1000E+01
 -.3000E+01   .1000E+02  -.3000E+01   .6000E+01
 -.2000E+01  -.3000E+01   .3000E+01  -.2000E+01
  .1000E+01   .6000E+01  -.2000E+01   .1000E+01

TRANSFORMED MATRIX A
  .1000E+01   .3742E+01   .8449E-07  -.3017E-07
  .3742E+01   .2785E+01  -.5246E+01   .0000E+00
  .8449E-07  -.5246E+01   .1020E+02  -.4480E+01
 -.3017E-07   .2384E-06  -.4480E+01   .1015E+01

Fig. 4.7(b) Results from Program 4.7

shown in the following section. Alternatively, another transformation method may be used, as shown in the next program.
The matrix arithmetic in this algorithm has been deliberately kept simple. In practice, more involved algorithms can greatly reduce storage and computation time in this method.

4.7.3 LR transformation for eigenvalues of tridiagonalised matrices

A transformation method most applicable to sparsely populated (band or tridiagonalised) matrices is the so-called 'LR' transformation. This is based on repeated factorisation using what we called in Chapter 2 'LU' factorisation.
Thus

Aₖ = LU = LR    (4.82)

for any step k of the iterative transformation. The step is completed by re-multiplying the factors in reverse order, that is

Aₖ₊₁ = UL = RL    (4.83)

Since from eq. 4.82

U = L⁻¹Aₖ    (4.84)

the multiplication in eq. 4.83 implies

Aₖ₊₁ = L⁻¹AₖL    (4.85)

showing that L has the property required of a transformation matrix P. As iterations proceed, the transformed matrix Aₖ tends to an upper triangular matrix whose eigenvalues are equal to the diagonal terms.
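
The LR step is easy to sketch provided the factorisation is done without pivoting (a library routine with partial pivoting would spoil the simple similarity of eq. 4.85), so the illustrative Python fragment below writes out an unpivoted Doolittle factorisation explicitly; it reproduces the results of Example 4.3 and of Fig. 4.8(b).

# Sketch of the LR transformation: factorise A = L*U without pivoting,
# re-multiply in reverse order (eq. 4.83) and repeat.  Illustrative only.
import numpy as np

def lu_nopivot(A):
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, :] -= L[i, k] * U[k, :]
    return L, U

def lr_eigenvalues(A, its=50):
    A = np.array(A, dtype=float)
    for _ in range(its):
        L, U = lu_nopivot(A)
        A = U @ L              # same eigenvalues as the previous A
    return np.diag(A)

print(lr_eigenvalues([[4.0, 3.0], [2.0, 1.0]]))           # about 5.37, -0.37
print(lr_eigenvalues([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  1.0]]))               # about 3.247, 1.555, 0.198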

Example 4.3

Perform the 'LR' transformation on the non-symmetric matrix

A = [ 4  3 ]
    [ 2  1 ]

Solution 4.3

A₀ = [ 4  3 ] = [ 1    0 ] [ 4     3   ]
     [ 2  1 ]   [ 0.5  1 ] [ 0   -0.5  ]

A₁ = [ 4     3  ] [ 1    0 ] = [  5.5    3   ]
     [ 0   -0.5 ] [ 0.5  1 ]   [ -0.25  -0.5 ]

   = [  1      0 ] [ 5.5      3     ]
     [ -0.045  1 ] [ 0     -0.3636  ]

A₂ = [ 5.5      3    ] [  1      0 ] = [ 5.365      3    ]
     [ 0     -0.3636 ] [ -0.045  1 ]   [ 0.0164  -0.3636 ]

   = [ 1       0 ] [ 5.365      3    ]
     [ 0.0031  1 ] [ 0       -0.3728 ]

A₃ = [ 5.365      3    ] [ 1       0 ] = [  5.3743     3    ]
     [ 0       -0.3728 ] [ 0.0031  1 ]   [ -0.0012  -0.3728 ]

A₃ is nearly upper triangular, hence its eigenvalues are approximately 5.37 and -0.37, which are exact to 2 decimal places.
Although the method would be implemented in practice using special storage strategies, it is illustrated in Program 4.8 for the simple case of a square matrix A.

Program 4.8. LR transformation

This program uses the following nomenclature:

Simple variables
N      Number of equations to be solved
ITS    Maximum number of iterations allowed
TOL    Iteration tolerance
ITERS  Current number of iterations

Variable length arrays
A      Matrix Aₖ
L      Lower triangular factor of A
U      Upper triangular factor of A
EOLD   Previous estimate of eigenvalues
ENEW   New estimate of eigenvalues

PARAMETER restriction
IN ≥ N

The program begins by reading the number of equations, followed by the coefficients of A, the convergence tolerance and iteration limit. Input and output are shown in Figs 4.8(a) and (b) respectively. The iteration loop is then entered, and begins with a call to LUFAC which completes the factorisation of the current Aₖ into L and U. These are multiplied in reverse order using MATMUL and the new estimate of the eigenvalues is found in the diagonal terms of the new Aₖ₊₁, see eq. 4.83.

      PROGRAM P48
C
C      PROGRAM 4.8   L-R TRANSFORMATION FOR EIGENVALUES
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),U(IN,IN),L(IN,IN),EOLD(IN),ENEW(IN)
C
      READ (5,*) N, ((A(I,J),J=1,N),I=1,N)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('************ L-R TRANSFORMATION *****************')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      CALL NULVEC(EOLD,N)
      ITERS = 0
    1 ITERS = ITERS + 1
      CALL LUFAC(A,U,L,IN,N)
      CALL MATMUL(U,IN,L,IN,A,IN,N,N,N)
      DO 2 I = 1,N
    2 ENEW(I) = A(I,I)
      CALL CHECON(ENEW,EOLD,N,TOL,ICON)
      IF (ITERS.LT.ITS .AND. ICON.EQ.0) GO TO 1
      WRITE(6,*) ('FINAL TRANSFORMED DIAGONAL OF MATRIX A')
      CALL PRINTV(ENEW,N,6)
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE (6,1000) ITERS
 1000 FORMAT (I5)
      STOP
      END

Array size N
3
Array (A(I,J),J=1,N),I=1,N
 2.0  -1.0   0.0
-1.0   2.0  -1.0
 0.0  -1.0   1.0
Tolerance TOL
1.E-7
Iteration limit ITS
100

Fig. 4.8(a) Input data for Program 4.8

************ L-R TRANSFORMATION *****************

MATRIX A
  .2000E+01  -.1000E+01   .0000E+00
 -.1000E+01   .2000E+01  -.1000E+01
  .0000E+00  -.1000E+01   .1000E+01

FINAL TRANSFORMED DIAGONAL OF MATRIX A
  .3247E+01   .1555E+01   .1981E+00

ITERATIONS TO CONVERGENCE
    22

Fig. 4.8(b) Results from Program 4.8



The convergence check is then called, and the converged eigenvalues printed.
In variations of this method, for example the 'QR' technique, other matrices can be substituted for L but the programming follows the same lines.

4.7.4 Lanczos reduction to tridiagonal form

In Chapter 2, we saw that some iterative techniques for solving linear equations, such as the steepest descent method, could be reduced to a loop involving a single matrix by vector multiplication followed by various simple vector operations. The Lanczos method for reducing matrices to tridiagonal form, while preserving their eigenvalues, involves very similar operations, and is in fact linked to the conjugate gradient technique of Program 2.12.
The transformation matrix P is in this method constructed using mutually orthogonal vectors. As usual we seek an eigenvalue-preserving transformation which for symmetry was given by eq. 4.54:

PᵀAPx = λPᵀPx    (4.54)
A means of ensuring PᵀP = I is to construct P from mutually orthogonal vectors, say p, q, r for a 3 × 3 matrix. Then

      [ p₁  p₂  p₃ ] [ p₁  q₁  r₁ ]   [ pᵀp  pᵀq  pᵀr ]
PᵀP = [ q₁  q₂  q₃ ] [ p₂  q₂  r₂ ] = [ qᵀp  qᵀq  qᵀr ] = I    (4.86)
      [ r₁  r₂  r₃ ] [ p₃  q₃  r₃ ]   [ rᵀp  rᵀq  rᵀr ]
In the Lanczos method, we require PᵀAP to be a symmetric tridiagonal matrix, say,

            [ α₁  β₁   0  ]
M = PᵀAP =  [ β₁  α₂  β₂  ]    (4.87)
            [  0  β₂  α₃  ]

and so

AP = PM    (4.88)

Since P is made up of the orthogonal vectors

P = [p  q  r]    (4.89)

we can expand eq. 4.88 to give

Ap = α₁p + β₁q
Aq = β₁p + α₂q + β₂r    (4.90)
Ar = β₂q + α₃r

To construct the 'Lanczos vectors' p, q and r we note from the first of eqs 4.90 that

pᵀAp = α₁pᵀp + β₁pᵀq    (4.91)

so that if α₁ is chosen to be pᵀAp then pᵀq = 0 for any β₁ and so p and q are orthogonal. Knowing α₁, the first of eqs 4.90 can be solved for β₁q and, since qᵀq = 1, normalising β₁q to unit length yields β₁ and q. We then proceed to find α₂ and β₂ in the same way and continue for all n rows of A. Denoting [p  q  r] by [yⱼ] the algorithm is as follows (assuming β₀ = 0):

vⱼ = Ayⱼ - βⱼ₋₁yⱼ₋₁
αⱼ = yⱼᵀvⱼ
zⱼ = vⱼ - αⱼyⱼ    (4.92)
βⱼ = (zⱼᵀzⱼ)^(1/2)
yⱼ₊₁ = zⱼ/βⱼ
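
A direct transcription of eqs 4.92 into Python (illustrative only; the book's Program 4.9 is the Fortran equivalent) is given below; with the starting vector of Fig. 4.9(a) it reproduces the α and β values of Fig. 4.9(b).

# Sketch of the Lanczos recurrence of eqs 4.92, storing only the alpha and
# beta entries of the tridiagonal matrix.
import numpy as np

def lanczos_tridiagonal(A, y1):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    alpha = np.zeros(n)
    beta = np.zeros(n - 1)
    y_old = np.zeros(n)
    y = np.array(y1, dtype=float)
    beta_prev = 0.0
    for j in range(n):
        v = A @ y - beta_prev * y_old      # v_j = A y_j - beta_{j-1} y_{j-1}
        alpha[j] = y @ v                   # alpha_j = y_j' v_j
        if j == n - 1:
            break
        z = v - alpha[j] * y               # z_j = v_j - alpha_j y_j
        beta[j] = np.sqrt(z @ z)           # beta_j = (z_j' z_j)^(1/2)
        y_old, y = y, z / beta[j]          # y_{j+1} = z_j / beta_j
        beta_prev = beta[j]
    return alpha, beta

A = [[ 1.0, -3.0, -2.0,  1.0],
     [-3.0, 10.0, -3.0,  6.0],
     [-2.0, -3.0,  3.0, -2.0],
     [ 1.0,  6.0, -2.0,  1.0]]
alpha, beta = lanczos_tridiagonal(A, [1.0, 0.0, 0.0, 0.0])
print(alpha)   # about 1.0, 2.786, 10.20, 1.015 (cf. Fig. 4.9(b))
print(beta)    # about 3.742, 5.246, 4.480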

Program 4.9. Lanczos reduction to tridiagonal form

The program uses the following nomenclature:

Simple variables
N       Number of equations to be reduced

Variable length arrays
A       Matrix of (symmetrical) coefficients
ALPHA   Diagonal of tridiagonal (see eq. 4.87)
BETA    Off-diagonal of tridiagonal (see eq. 4.87)
V, Z, Y0, Y1   Temporary vectors (see eq. 4.92)

PARAMETER restriction
IN ≥ N

      PROGRAM P49
C
C      PROGRAM 4.9   LANCZOS REDUCTION OF A SYMMETRIC
C                    MATRIX TO TRIDIAGONAL FORM
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),ALPHA(IN),BETA(0:IN-1),V(IN),Y0(IN),Y1(IN),Z(IN)
C
      READ (5,*) N, ((A(I,J),J=I,N),I=1,N)
      DO 10 I = 1,N
      DO 10 J = I,N
   10 A(J,I) = A(I,J)
      READ (5,*) (Y1(I),I=1,N)
      WRITE(6,*) ('**** LANCZOS REDUCTION OF A SYMMETRIC MATRIX ****')
      WRITE(6,*) ('************* TO TRIDIAGONAL FORM ***************')
      WRITE(6,*)
      WRITE(6,*) ('MATRIX A')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      CALL NULVEC(Y0,N)
      BETA(0) = 0.0
      DO 1 J = 1,N
      CALL MVMULT(A,IN,Y1,N,N,V)
      DO 2 I = 1,N
    2 V(I) = V(I) - BETA(J-1)*Y0(I)
      CALL VECCOP(Y1,Y0,N)
      CALL VDOTV(Y1,V,ALPHA(J),N)
      IF (J.EQ.N) GO TO 1
      DO 3 I = 1,N
    3 Z(I) = V(I) - ALPHA(J)*Y1(I)
      CALL VDOTV(Z,Z,BETA(J),N)
      BETA(J) = SQRT(BETA(J))
      DO 4 I = 1,N
    4 Y1(I) = Z(I)/BETA(J)
    1 CONTINUE
      WRITE(6,*) ('TRANSFORMED MAIN DIAGONAL OF MATRIX A')
      CALL PRINTV(ALPHA,N,6)
      WRITE(6,*)
      WRITE(6,*) ('TRANSFORMED OFF-DIAGONAL OF MATRIX A')
      WRITE(6,100) (BETA(I),I=1,N-1)
  100 FORMAT (1X,5E12.4)
      STOP
      END

Array size N
4
Array (A(I,J),J=I,N),I=1,N
(accounting for symmetry)
 1.0  -3.0  -2.0   1.0
      10.0  -3.0   6.0
             3.0  -2.0
                   1.0
Starting vector Y1(I),I=1,N
1.0 0.0 0.0 0.0

Fig. 4.9(a) Input data for Program 4.9

**** LANCZOS REDUCTION OF A SYMMETRIC MATRIX ****
************* TO TRIDIAGONAL FORM ***************

MATRIX A
  .1000E+01  -.3000E+01  -.2000E+01   .1000E+01
 -.3000E+01   .1000E+02  -.3000E+01   .6000E+01
 -.2000E+01  -.3000E+01   .3000E+01  -.2000E+01
  .1000E+01   .6000E+01  -.2000E+01   .1000E+01

TRANSFORMED MAIN DIAGONAL OF MATRIX A
  .1000E+01   .2786E+01   .1020E+02   .1015E+01

TRANSFORMED OFF-DIAGONAL OF MATRIX A
  .3742E+01   .5246E+01   .4480E+01

Fig. 4.9(b) Results from Program 4.9



Note that the process is not iterative. Input and output are listed in Figs 4.9(a) and (b) respectively. The number of equations, N, is first read in, followed by the upper triangle coefficients of A, which assumes symmetry. The starting vector y₁, which is arbitrary as long as y₁ᵀy₁ = 1, is then read in and β₀ set to 0.
The main loop carries out exactly the operations of eqs 4.92 to build up N values of α and the N - 1 values of β which are printed at the end of the program. For the given starting vector y₁ = {1  0  0  0}ᵀ, the tridiagonalisation yields the same result as Householder's, but this is not always the case.

4.8 Characteristic polynomial methods

At the beginning of this chapter we illustrated how the eigenvalues of a matrix form the roots of an Nth order polynomial, called the 'characteristic polynomial'. We pointed out that the methods of Chapter 3 could, in principle, be used to evaluate these roots, but that this will rarely be an effective method of eigenvalue determination. However, there are effective methods which are based on the properties of the characteristic polynomial. These are particularly attractive when the matrix whose eigenvalues have to be found is a tridiagonal matrix, and so are especially appropriate when used in conjunction with the Householder or Lanczos transformations described in the previous section.

4.8.1 Evaluating determinants of tridiagonal matrices

In the previous section we illustrated non-iterative methods of reducing matrices to tridiagonal equivalents. The resulting eigenvalue equation becomes

[ α₁  β₁   0   0  ...    0   ] {x1}     {x1}
[ β₁  α₂  β₂   0  ...    0   ] {x2}     {x2}
[  0  β₂  α₃  β₃  ...    0   ] {x3} = λ {x3}    (4.93)
[  0   0  β₃  α₄  ...    0   ] { .}     { .}
[ ...              ... βₙ₋₁  ] { .}     { .}
[  0   0   0  ... βₙ₋₁   αₙ  ] {xn}     {xn}

The problem therefore becomes one of finding the roots of the determinantal equation

| α₁-λ   β₁     0      0    ...    0   |
|  β₁   α₂-λ   β₂      0    ...    0   |
|   0    β₂   α₃-λ    β₃    ...    0   |
|   0     0    β₃    α₄-λ   ...    0   | = 0    (4.94)
| ...                        ...  βₙ₋₁ |
|   0     0     0    ...   βₙ₋₁  αₙ-λ  |

Although we shall not find these roots directly, consider the calculation of the deter-
minant on the left-hand side of eqs 4.94.
I f n : 1 , d e t r ( , l:)a t - ; .
For n:2.

I or-7 F, :(ar- ) . \ ( a 1 -1 ) - f l t t (4.e5)


Oetr{zt}:l
| fr, az-)

If n:3,

dt-A flt 0
o, -
az-A : ( o . - 1 )l lof ,' - A A) (4.96)
det.(t): fl, fr. , . z _ A 0r'@r-
0 fl, %-A

relationshipbuildsup enablingdet3(l)to beevaluatedsimply


We seethat a recurrence
may
from a knowledgeof det2(t)anddetl(i). If welet deto(,tr):1,thegeneralrecurrence
be written
det,(l):(4"-l) det,- ,(1)- f1-, det"-r(1) (4.e1)
Therefore, for any value of λ we can quickly calculate detₙ(λ), and if we know the range within which a root of detₙ(λ) = 0 must lie, its value can be computed by, for example, the bisection method of Program 3.2. The remaining difficulty is to guide the choices of λ so as to be sure of bracketing a root. This task is made much easier by a special property possessed by the principal minors of eqs 4.94, which is called the 'Sturm sequence' property.
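As an aside, the recurrence of eq. 4.97 is very easily coded. The fragment below is our own sketch (not one of the book's numbered programs); it evaluates det₀(λ) to det₅(λ) for the 5 × 5 tridiagonal matrix with 2 on the diagonal and −1 on the off-diagonals, which appears as eq. 4.98 in the next section, at the trial value λ = 2.5.

      PROGRAM DETREC
C
C      SKETCH (OURS, NOT A NUMBERED PROGRAM): EVALUATE DET-I(LAMBDA),
C      I=0..N, BY THE RECURRENCE OF EQ. 4.97 FOR THE TRIDIAGONAL
C      MATRIX OF EQ. 4.98 AT THE TRIAL VALUE LAMBDA=2.5
C
      PARAMETER (IN=5)
      REAL ALPHA(IN),BETA(IN),DET(0:IN)
      DATA ALPHA/2.,2.,2.,2.,2./, BETA/-1.,-1.,-1.,-1.,-1./
      N = IN
      AL = 2.5
      DET(0) = 1.0
      DET(1) = ALPHA(1) - AL
      DO 1 I = 2,N
    1 DET(I) = (ALPHA(I)-AL)*DET(I-1) - BETA(I-1)*BETA(I-1)*DET(I-2)
      WRITE(6,100) (DET(I),I=0,N)
  100 FORMAT(1X,6E12.4)
      STOP
      END

Run as it stands it should print the sequence 1.0, −0.5, −0.75, 0.875, 0.3125, −1.031; the last of these, det₅(2.5) ≈ −1.031, is the first determinant listed later in Fig. 4.11(b).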

4.8.2 The Sturm sequence property

A specific example of the left-hand side of eqs 4.94 is shown below, for n = 5:

$$
|\mathbf{A}| =
\begin{vmatrix}
2-\lambda & -1 & 0 & 0 & 0 \\
-1 & 2-\lambda & -1 & 0 & 0 \\
0 & -1 & 2-\lambda & -1 & 0 \\
0 & 0 & -1 & 2-\lambda & -1 \\
0 & 0 & 0 & -1 & 2-\lambda
\end{vmatrix}
\qquad (4.98)
$$

The principal minors of A are the submatrices formed by eliminating the nth, (n − 1)th, etc., row and column of A. The eigenvalues of A and of its principal minors will be found to be given by the following table (for example, by using Program 4.8):

   A5        A4        A3        A2        A1
  3.732
            3.618
                      3.414
  3.0                           3.0
            2.618
  2.0                 2.0                 2.0
            1.382
  1.0                           1.0
                      0.586
            0.382
  0.268

The characteristic polynomials of the Aₙ and their roots, the eigenvalues, are also shown in Fig. 4.10. From the tabular and graphical representations it can be seen that each succeeding set of eigenvalues for orders n, n − 1, n − 2, etc., always 'separates' the preceding set; that is, the eigenvalues of Aₙ₋₁ always occur in the gaps between the eigenvalues of Aₙ.

Fig. 4.10 Characteristic polynomials and eigenvalues for the Aₙ of eq. 4.98

This separation property is found for all symmetric A and is called the 'Sturm sequence' property.
Its most useful consequence is that, for any guessed λ, the number of sign changes in the sequence detᵢ(λ) for i = 0, 1, 2, ..., n is equal to the number of eigenvalues of A which are less than λ. Recalling that det₀(λ) = 1.0, we can see from Fig. 4.10 the following sign change counts, noting that detᵢ(λ) = 0 is not counted as a change:

Number of changes in sign   = Number of eigenvalues less than λ
            5                               5
            4                               4
            3                               3
            2                               2
            1                               1
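To make the count concrete, the sketch below (again ours, not one of the book's numbered programs) evaluates the sequence det₀(λ), ..., det₅(λ) for the matrix of eq. 4.98 at a few trial values of λ chosen by us, and counts the sign changes, skipping zero values as required.

      PROGRAM STURM
C
C      SKETCH (OURS): FOR A SET OF TRIAL VALUES OF LAMBDA (ALSO OURS)
C      COUNT THE SIGN CHANGES IN DET-0..DET-N FOR THE MATRIX OF
C      EQ. 4.98.  EACH COUNT EQUALS THE NUMBER OF EIGENVALUES
C      SMALLER THAN THE TRIAL VALUE
C
      PARAMETER (IN=5)
      REAL ALPHA(IN),BETA(IN),DET(0:IN),TRIAL(5)
      DATA ALPHA/2.,2.,2.,2.,2./, BETA/-1.,-1.,-1.,-1.,-1./
      DATA TRIAL/0.5,1.5,2.5,3.5,4.5/
      N = IN
      DO 3 K = 1,5
      AL = TRIAL(K)
      DET(0) = 1.0
      DET(1) = ALPHA(1) - AL
      DO 1 I = 2,N
    1 DET(I) = (ALPHA(I)-AL)*DET(I-1) - BETA(I-1)*BETA(I-1)*DET(I-2)
C      COUNT CHANGES OF SIGN, SKIPPING ZERO VALUES
      NUMBER = 0
      PREV = DET(0)
      DO 2 I = 1,N
      IF (DET(I).EQ.0.0) GO TO 2
      IF (DET(I)*PREV.LT.0.0) NUMBER = NUMBER + 1
      PREV = DET(I)
    2 CONTINUE
      WRITE(6,100) AL,NUMBER
    3 CONTINUE
  100 FORMAT(1X,E12.4,I5)
      STOP
      END

Run as it stands, the counts printed should be 1, 2, 3, 4 and 5 for λ = 0.5, 1.5, 2.5, 3.5 and 4.5 respectively, matching the number of eigenvalues of eq. 4.98 (0.268, 1.0, 2.0, 3.0, 3.732) lying below each trial value.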

Program 4.10. Characteristic polynomial method for symmetric positive definite tridiagonal matrices

An example of a program utilising the features of the characteristic polynomial in the determination of eigenvalues is listed below. It uses the following nomenclature:

Simple variables

N        Number of tridiagonal rows
J        Eigenvalue wanted: J = 1 for largest, etc., 1 ≤ J ≤ N
AL       Current estimate of root
ALMAX    Upper estimate of root
TOL      Iteration tolerance
ITS      Maximum number of iterations allowed
OLDL     Lower estimate of root
NUMBER   Number of sign changes of detᵢ(λ)
ITERS    Current number of iterations

Variable length arrays

ALPHA    Leading diagonal entries of A
BETA     Off-diagonal entries of A
DET      detᵢ(λ) (see eq. 4.97)

PARAMETER restriction: IN ≥ N.

Input and output are shown in Figs 4.11(a) and (b) respectively. The number of equations is first read in, followed by the diagonal (ALPHA) and off-diagonal (BETA) terms of the tridiagonal symmetric matrix previously obtained by methods such as Householder's or Lanczos's. The remaining data consist of J, the required eigenvalue where J = 1 is the largest etc., a starting guess of λ (AL), an upper limit to λ (ALMAX), a convergence tolerance and an iteration limit. Since symmetric, positive definite A are implied, all of the eigenvalues will be positive.
The data relate to the example given in eq. 4.98. An upper limit of 5.0 is chosen as being bigger
than the biggest eigenvalue, and the first guess for λ is chosen to be half this value, that is 2.5. The value of det₀(λ) is set to 1.0 and an iteration loop on λ is then executed. The procedure continues until the iteration limit ITS is reached or the tolerance TOL is satisfied by subroutine CHECK.
      PROGRAM P410
C
C      PROGRAM 4.10  CHARACTERISTIC POLYNOMIAL METHOD FOR
C      EIGENVALUES OF SYMMETRIC TRIDIAGONAL MATRICES
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL ALPHA(IN),BETA(IN),DET(0:IN)
C
      READ (5,*) N,(ALPHA(I),I=1,N)
      READ (5,*) (BETA(I),I=1,N-1)
      READ (5,*) J,AL,ALMAX,TOL,ITS
      WRITE(6,*) ('**** CHARACTERISTIC POLYNOMIAL METHOD FOR *******')
      WRITE(6,*) ('* EIGENVALUES OF A SYMMETRIC TRIDIAGONAL MATRIX *')
      WRITE(6,*)
      WRITE(6,*) ('MAIN DIAGONAL')
      CALL PRINTV(ALPHA,N,6)
      WRITE(6,*)
      WRITE(6,*) ('OFF-DIAGONAL')
      CALL PRINTV(BETA,N-1,6)
      WRITE(6,*)
      WRITE(6,*) ('EIGENVALUE REQUIRED, 1=LARGEST, 2=NEXT, .. ETC.')
      WRITE(6,1000) J
      WRITE(6,*)
      DET(0) = 1.0
      AOLD = ALMAX
      OLDL = 0.0
      WRITE(6,*) (' EIGENVALUE  DETERMINANT  NUMBER OF ROOTS SMALLER')
      WRITE(6,*) ('                          THAN CURRENT VALUE')
      ITERS = 0
   10 ITERS = ITERS + 1
C
C      BUILD UP DET(I) BY THE RECURSION OF EQ. 4.97
C
      DET(1) = ALPHA(1) - AL
      DO 1 I = 2,N
    1 DET(I) = (ALPHA(I)-AL)*DET(I-1) - BETA(I-1)*BETA(I-1)*DET(I-2)
C
C      COUNT THE SIGN CHANGES IN THE STURM SEQUENCE
C
      NUMBER = 0
      DO 2 I = 1,N
      IF (DET(I).EQ.0.0) GO TO 2
      IF (DET(I-1).EQ.0.0) THEN
      SIGN = DET(I)*DET(I-2)
      ELSE
      SIGN = DET(I)*DET(I-1)
      END IF
      IF (SIGN.LT.0.0) NUMBER = NUMBER + 1
    2 CONTINUE
C
C      BISECT TOWARDS THE J'TH LARGEST EIGENVALUE
C
      IF (NUMBER.LE.N-J) THEN
      OLDL = AL
      AL = 0.5*(AL+ALMAX)
      ELSE
      ALMAX = AL
      AL = 0.5*(OLDL+AL)
      END IF
      WRITE (6,100) AL,DET(N),NUMBER
      CALL CHECK(AL,AOLD,TOL,ICON)
      IF(ITERS.NE.ITS.AND.ICON.EQ.0)GO TO 10
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,1000) ITERS
  100 FORMAT(2E12.4,I15)
 1000 FORMAT(I5)
      STOP
      END

Array size N
5

Main diagonal ALPHA(I),I=1,N
2.0  2.0  2.0  2.0  2.0

Off-diagonal BETA(I),I=1,N-1
-1.0  -1.0  -1.0  -1.0

Eigenvalue required J
1

Starting value of root AL
2.5

Maximum value of root ALMAX
5.0

Tolerance TOL
1.E-5

Iteration limit ITS
100

Fig. 4.11(a) Input data for Program 4.10

**** CHARACTERISTIC POLYNOMIAL METHOD FOR *******
* EIGENVALUES OF A SYMMETRIC TRIDIAGONAL MATRIX *

MAIN DIAGONAL
  .2000E+01   .2000E+01   .2000E+01   .2000E+01   .2000E+01

OFF-DIAGONAL
 -.1000E+01  -.1000E+01  -.1000E+01  -.1000E+01

EIGENVALUE REQUIRED, 1=LARGEST, 2=NEXT, .. ETC.
    1

 EIGENVALUE  DETERMINANT  NUMBER OF ROOTS SMALLER
                          THAN CURRENT VALUE
  .3750E+01  -.1031E+01              3
  .3125E+01  -.2256E+00              5
  .3438E+01   .5183E+00              4
  .3594E+01   .1431E+01              4
  .3672E+01   .1129E+01              4
  .3711E+01   .6148E+00              4
  .3730E+01   .2397E+00              4
  .3740E+01   .1891E-01              4
  .3735E+01  -.1003E+00              5
  .3733E+01  -.3995E-01              5
  .3732E+01  -.1034E-01              5
  .3732E+01   .4332E-02              4
  .3732E+01  -.2990E-02              5
  .3732E+01   .6740E-03              4
  .3732E+01  -.1158E-02              5
  .3732E+01  -.2415E-03              5
  .3732E+01   .2164E-03              4

ITERATIONS TO CONVERGENCE
   17

Fig. 4.11(b) Results from Program 4.10



The value of det₁(λ), called DET(1), is set first and the remaining detᵢ(λ) are then formed by the recursion of eq. 4.97. The number of sign changes is detected by SIGN and accumulated as NUMBER.
The output in Fig. 4.11(b) shows that the largest eigenvalue, namely 3.732, is slightly less than the 12th estimate, since NUMBER is recorded as 5; that is, there are still 5 eigenvalues less than 3.7323. However, DET(5) has been found in the bisection process to be of the order of 0.002, showing that convergence has essentially been obtained. More effective interpolation processes than bisection can of course be devised.

4.8.3 General symmetric matrices, e.g. band matrices

The principles described in the previous section can be applied to general matrices, but the simple recursion formula for finding det(λ) no longer applies. A way of computing det(λ) is to factorise Aₙ, using the techniques described in Program 2.3, to yield Aₙ = LDLᵀ. The product of the diagonal elements in D is the determinant of Aₙ. Further useful information that can be derived from D is that, in the factorisation of A − λI, the number of negative elements in D is equal to the number of eigenvalues smaller than λ.
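A minimal sketch of this idea is given below. It is our own illustration, not one of the book's numbered programs: the 3 × 3 matrix used (which has eigenvalues 4, 1 and 1), the value λ = 2.5 and the names LDLT, XL and D are all ours, and no pivoting or zero-pivot checks are included.

      PROGRAM LDLT
C
C      SKETCH (OURS): LDL-TRANSPOSE FACTORISATION OF A - LAMBDA*I FOR
C      A SYMMETRIC MATRIX.  THE PRODUCT OF THE D(J) IS THE DETERMINANT
C      AND THE NUMBER OF NEGATIVE D(J) IS THE NUMBER OF EIGENVALUES
C      OF A SMALLER THAN LAMBDA.  EXAMPLE MATRIX (OURS) HAS
C      EIGENVALUES 4, 1 AND 1.  NO PIVOTING, ZERO PIVOTS NOT TRAPPED
C
      PARAMETER (IN=3)
      REAL A(IN,IN),XL(IN,IN),D(IN)
      DATA A/2.,1.,1., 1.,2.,1., 1.,1.,2./
      N = IN
      AL = 2.5
C      FORM A - LAMBDA*I
      DO 10 I = 1,N
   10 A(I,I) = A(I,I) - AL
C      LDL-TRANSPOSE FACTORISATION, COLUMN BY COLUMN
      DO 30 J = 1,N
      D(J) = A(J,J)
      DO 15 K = 1,J-1
   15 D(J) = D(J) - XL(J,K)*XL(J,K)*D(K)
      DO 25 I = J+1,N
      XL(I,J) = A(I,J)
      DO 20 K = 1,J-1
   20 XL(I,J) = XL(I,J) - XL(I,K)*XL(J,K)*D(K)
      XL(I,J) = XL(I,J)/D(J)
   25 CONTINUE
   30 CONTINUE
C      DETERMINANT AND COUNT OF NEGATIVE PIVOTS
      DET = 1.0
      NUMBER = 0
      DO 40 J = 1,N
      DET = DET*D(J)
      IF (D(J).LT.0.0) NUMBER = NUMBER + 1
   40 CONTINUE
      WRITE(6,*) ('DETERMINANT OF A - LAMBDA*I '),DET
      WRITE(6,*) ('EIGENVALUES SMALLER THAN LAMBDA '),NUMBER
      STOP
      END

For λ = 2.5 the factorisation gives D = (−0.5, 1.5, −4.5), so the determinant of A − λI is 3.375 and two entries of D are negative, in agreement with the two eigenvalues (1 and 1) lying below 2.5.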

4.9 Exercises
1  Use vector iteration to find the mode corresponding to the largest eigenvalue of

$$
\begin{bmatrix}
2 & 2 & 2 \\
2 & 5 & 5 \\
2 & 5 & 11
\end{bmatrix}
$$

Answer: {0.2149 0.4927 0.8433}ᵀ corresponding to the eigenvalue 14.43.

2  Use vector iteration to find the largest eigenvalue of the matrix

$$
\begin{bmatrix}
3 & -1 \\
-1 & 2
\end{bmatrix}
$$

and its associated eigenvector.

Answer: 3.618 associated with {0.8507 −0.5257}ᵀ.

3  Use shifted vector iteration to find the smallest eigenvalue and eigenvector of the system given in Exercise 1.

Answer: {0.8360 −0.5392 0.1019}ᵀ corresponding to the eigenvalue 0.954.
4  The eigenvalues of the matrix

$$
\begin{bmatrix}
5 & 1 & 0 & 0 & 0 \\
1 & 5 & 1 & 0 & 0 \\
0 & 1 & 5 & 1 & 0 \\
0 & 0 & 1 & 5 & 1 \\
0 & 0 & 0 & 1 & 5
\end{bmatrix}
$$

are 5 + 2 cos(iπ/6), where i = 1, 2, 3, 4, 5. Prove this using shifted inverse iteration with shifts 6.7, 6.1, 5.3, 4.1, 3.3.

Answer: 6.732, 6.0, 5.0, 4.0, 3.268.

5  Find the eigenvalues and eigenvectors of the system

$$
\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix}
= \lambda
\begin{bmatrix} 5 & 2 \\ 2 & 1 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix}
$$

Answer: λ₁ = 2.618, x₁ = {−0.3568 0.9342}ᵀ
        λ₂ = 0.382, x₂ = {0.9342 −0.3568}ᵀ

6  Show that the system in Exercise 5 can be reduced to the 'standard form'

$$
\begin{bmatrix} 0.4 & 0.2 \\ 0.2 & 2.6 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix}
= \lambda
\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix}
$$

and hence find both of its eigenvalues. How would you recover the eigenvectors of the original system?

Answer: 0.382, 2.618. See text, Section 4.6.1.

7  Use Householder's method to tridiagonalise the matrix

$$
\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & 2 & 2 & 2 \\
1 & 2 & 3 & 3 \\
1 & 2 & 3 & 4
\end{bmatrix}
$$

Answer:

$$
\begin{bmatrix}
1 & -1.732 & 0 & 0 \\
-1.732 & 7.667 & 1.247 & 0 \\
0 & 1.247 & 0.9762 & -0.1237 \\
0 & 0 & -0.1237 & 0.3571
\end{bmatrix}
$$

8  Use Lanczos's method to tridiagonalise the matrix in Exercise 7, using the starting vector {0.5 0.5 0.5 0.5}ᵀ.

Answer:

$$
\begin{bmatrix}
7.5 & 2.2913 & 0 & 0 \\
2.2913 & 1.643 & 0.27355 & 0 \\
0 & 0.27355 & 0.539 & 0.06943 \\
0 & 0 & 0.06943 & 0.3182
\end{bmatrix}
$$

9  Find the eigenvalues of the tridiagonalised matrices in Exercises 7 and 8.

Answer: 8.291, 1.00, 0.4261, 0.2832 in both cases.

10  Find all the eigenvalues of the matrix

$$
\begin{bmatrix}
3 & 0 & 2 \\
0 & 5 & 0 \\
2 & 0 & 3
\end{bmatrix}
$$

Prove that the eigenmodes associated with these eigenvalues are orthogonal.

Answer: λ₁ = 1, λ₂ = 5, λ₃ = 5, associated with the modes {1 0 −1}ᵀ, {1 0 1}ᵀ and {0 1 0}ᵀ respectively.

11  Using the characteristic polynomial method for symmetric tridiagonal matrices, calculate all the eigenvalues of the matrix

$$
\begin{bmatrix}
2 & -1 & 0 & 0 & 0 \\
-1 & 2 & -1 & 0 & 0 \\
0 & -1 & 2 & -1 & 0 \\
0 & 0 & -1 & 2 & -1 \\
0 & 0 & 0 & -1 & 2
\end{bmatrix}
$$

and, for each, the number of eigenvalues less than the current one.

Answer: 3.732299   4
        3.000004   4
        2.000001   3
        1.000001   2
        0.26795    1

4.10 Further reading


Bathe, K.J. and Wilson, E.L. (1976). Numerical Methods in Finite Element Analysis, Prentice-Hall, Englewood Cliffs, New Jersey.
Chatelin, F. (1987). Eigenvalues of Matrices, Wiley, London.
Conte, S. and de Boor, C. (1980). Elementary Numerical Analysis, 3rd edn, McGraw-Hill, New York.
Fox, L. (1964). An Introduction to Numerical Linear Algebra, Clarendon Press, Oxford.
Froberg, C.E. (1969). Introduction to Numerical Analysis, 2nd edn, Addison-Wesley, Reading, Massachusetts.
Givens, J.W. (1954). Numerical computation of the characteristic values of a real symmetric matrix, Oak Ridge National Laboratory Report ORNL-1574.
Golub, G. and Van Loan, C. (1983). Matrix Computations, Johns Hopkins Press, Baltimore.
Gourlay, A.R. and Watson, G.A. (1973). Computational Methods for Matrix Eigenproblems, Wiley, London.
Householder, A. (1964). The Theory of Matrices in Numerical Analysis, Ginn, Boston.
Jennings, A. (1977). Matrix Computation for Engineers and Scientists, Wiley, Chichester.
Lanczos, C. (1950). An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Nat. Bur. Stand., 45, 255-282.
Parlett, B. (1980). The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, New Jersey.
Rutishauser, H. (1958). Solution of eigenvalue problems with the LR transformation, Nat. Bur. Standards Appl. Math. Ser., 49, 47-81.
Stewart, G.W. (1973). Introduction to Matrix Computations, Academic Press, New York.
Wilkinson, J.H. (1965). The Algebraic Eigenvalue Problem, Clarendon Press, Oxford.
Wilkinson, J.H. and Reinsch, C. (1971). Handbook for Automatic Computation, Vol II: Linear Algebra, Springer-Verlag, Berlin.
