
Algebraic Eigenvalue Problem

Computers are useless. They can only give answers.

Pablo Picasso

Fall 2010
Topics to Be Discussed
• This unit requires the knowledge of eigenvalues and eigenvectors in linear algebra.
• The following topics will be presented:
  - The power method for finding the largest eigenvalue and its corresponding eigenvector
  - Coordinate rotation
  - Rotating a symmetric matrix
  - The classic Jacobi method (1846) for finding all eigenvalues and eigenvectors of a symmetric matrix
Eigenvalues & Eigenvectors: 1/3
• Given a square matrix A, if one can find a number (real or complex) λ and a nonzero vector x such that A⋅x = λx holds, then λ is an eigenvalue and x an eigenvector of A corresponding to λ.
• Since the right-hand side of A⋅x = λx can be rewritten as λI⋅x, where I is the identity matrix, we have A⋅x = λI⋅x and (A−λI)⋅x = 0.
• Solving for λ from the equation det(A−λI) = 0 yields all eigenvalues of A, where det() is the determinant of a matrix.

Eigenvalues & Eigenvectors: 2/3
• If A is an n×n matrix, det(A−λI) = 0 is a polynomial of degree n in λ and has n roots (i.e., n possible values for λ), some of which may be complex conjugates (i.e., a+bi and a−bi).
• However, people rarely use this method to find eigenvalues because (1) directly expanding det(A−λI) = 0 into a polynomial is tedious, and (2) there is no closed-form solution if n > 4.
• Many methods transform A to simpler forms so that det(A−λI) = 0 can be obtained easily.

Eigenvalues & Eigenvectors: 3/3
• The eigenvalues of a diagonal matrix are its diagonal entries.
• For example, if we have a diagonal matrix:

  A = \begin{bmatrix} d_1 & & & & \\ & d_2 & & & \\ & & \ddots & & \\ & & & d_{n-1} & \\ & & & & d_n \end{bmatrix}

• Then, det(A−λI) = 0 is

  (d_1 - \lambda)(d_2 - \lambda) \cdots (d_{n-1} - \lambda)(d_n - \lambda) = 0

• Hence, the roots of det(A−λI) = 0 are the d_i's.
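This fact is easy to check numerically; here is a small NumPy sketch (our own illustration, not part of the slides):

```python
import numpy as np

# The eigenvalues of a diagonal matrix are its diagonal entries.
A = np.diag([3.0, 1.0, 2.0])
eigenvalues = np.linalg.eigvals(A)

# Sorted, they are exactly the diagonal entries d_i.
assert np.allclose(np.sort(eigenvalues), [1.0, 2.0, 3.0])
```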
Power Method: 1/11
• What if we take a guess z and compute A⋅z?
• If z is actually an eigenvector, then A⋅z = λz.
• Let w = A⋅z = λz. Then every entry of w and z satisfies w_i = λz_i, so λ = w_i/z_i.
• If z is not an eigenvector, then w may be a vector closer to an eigenvector than z is.
• Therefore, we may use w in the next iteration to find an even better approximation.
• From w, we have u = A⋅w; from u we have v = A⋅u; etc. Hopefully, some vector x will satisfy A⋅x = λx.
Power Method: 2/11
• Note that if x is an eigenvector, αx is also an eigenvector, because α(A⋅x) = α(λx) implies A⋅(αx) = λ(αx)!
• Therefore, we may scale an eigenvector. The simplest way is to scale the vector by the component with maximum absolute value. After scaling, the value of each component is in [−1,1].
• Example: let x be [15, −20, −8]. Since |−20| is the largest, the scaling factor is −20 and the scaled x is [−15/20, 1, 8/20].

Power Method: 3/11
• This scaling has an advantage.
• Given a vector z, we compute w = A⋅z.
• If w is a good approximation of λz, we have w ≈ λz = A⋅z.
• Therefore, we should have w_i ≈ λz_i for every i.
• If vector z is scaled so that its largest entry, say z_k, is 1, then w_k ≈ λz_k = λ!
• In other words, the scaling factor is an approximation of an eigenvalue!

Power Method: 4/11
• We may start with a z and compute w = A⋅z.
• The largest component w_k of w is an approximation of an eigenvalue λ (i.e., w_k ≈ λ).
• Then, w is scaled by its largest component w_k and used as the new z (i.e., z = w/w_k).
• This process is applied iteratively until we have |A⋅z − w_k z| < ε, where ε is a tolerance value.

Power Method: 5/11
• Suppose this process starts with vector x_0.
• The computation of x_i is x_i = w_i/w_{i,k} = (A⋅x_{i−1})/w_{i,k}, where w_{i,k} is the maximum component of w_i.
• Since x_{i−1} = w_{i−1}/w_{i−1,k} = (A⋅x_{i−2})/w_{i−1,k}, we may rewrite x_i as follows for some scalars c, d and g:

  x_i = c(A \cdot x_{i-1}) = c(dA(A \cdot x_{i-2})) = gA^2 x_{i-2}

• Continuing this process, we have the following for some scalar p:

  x_i = pA^i x_0

• Hence, x_i is obtained by some power of A, and hence the "power" method.
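This relationship is easy to verify numerically: three scaled iterations give the same vector as scaling A³⋅x_0 once, because the intermediate scale factors are just scalars. A NumPy sketch (the matrix is the 2×2 example used on the next slide; the helper `scale` is ours, not from the slides):

```python
import numpy as np

A = np.array([[2.0, 3.0], [1.0, 4.0]])
x0 = np.array([0.5, 1.0])

def scale(v):
    """Scale v by its component of maximum absolute value."""
    return v / v[np.argmax(np.abs(v))]

# Three power-method steps with scaling after each step...
z = x0.copy()
for _ in range(3):
    z = scale(A @ z)

# ...equal one scaling of A^3 * x0: scalar factors cancel in scale().
assert np.allclose(z, scale(np.linalg.matrix_power(A, 3) @ x0))
```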

Power Method: 6/11
• Example: consider the following 2×2 matrix

  A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}

• This matrix has eigenvalues 5 and 1 and corresponding eigenvectors [1,1] and [−3,1].
• Let us start with z = [1/2, 1]. Since the maximum entry of z is 1, no scaling is needed.
• Compute w = A⋅z = [4, 9/2].

Power Method: 7/11
• Since w = [4, 9/2] and its largest entry is 9/2,
  - The approximate eigenvalue is 9/2
  - The scaled z = w/(9/2) = [8/9, 1]
• Compute w = A⋅z = [43/9, 44/9]. Now, we have
  - The approximate eigenvalue is 44/9
  - The new z = [43/44, 1]
• Compute w = A⋅z = [109/22, 219/44] and we have
  - The approximate eigenvalue is 219/44
  - The new z = [218/219, 1]
• After 3 iterations, we have an approximate eigenvalue 219/44 = 4.977 ≈ 5 and eigenvector [218/219, 1] = [0.9954, 1] ≈ [1, 1].
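The three iterations above can be reproduced mechanically; a NumPy sketch of the same computation (our own, not the slides' Fortran):

```python
import numpy as np

A = np.array([[2.0, 3.0], [1.0, 4.0]])
z = np.array([0.5, 1.0])            # starting vector [1/2, 1]

for _ in range(3):
    w = A @ z                       # w = A*z
    lam = w[np.argmax(np.abs(w))]   # entry with maximum absolute value
    z = w / lam                     # the scaled w becomes the new z

# After 3 iterations: lam = 219/44 ≈ 4.977 and z = [218/219, 1].
assert abs(lam - 219/44) < 1e-12
assert np.allclose(z, [218/219, 1.0])
```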
Power Method: 8/11
• A is the input matrix, z an approximate eigenvector.

z = random and scaled vector          ! initialize
DO                                    ! loop until done
   w = A*z
   max = 1                            ! find the |max| entry
   DO i = 2, n
      IF (ABS(w(i)) > ABS(w(max))) max = i
   END DO
   eigen_value = w(max)               ! ABS(w(max)) is the largest
   DO i = 1, n                        ! scale w(*) to z(*)
      z(i) = w(i)/eigen_value
   END DO
   IF (MAXVAL(ABS(A*z - eigen_value*z)) < Tol) EXIT
END DO
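A Python translation of this loop (a sketch, not the slides' code; the stopping test uses the max-norm, and `max_iter` is our own safeguard):

```python
import numpy as np

def power_method(A, z, tol=1e-10, max_iter=1000):
    """Return (eigen_value, z) for the dominant eigenvalue of A."""
    for _ in range(max_iter):
        w = A @ z
        k = np.argmax(np.abs(w))        # index of the |max| entry
        eigen_value = w[k]
        z = w / eigen_value             # scale w(*) to z(*)
        if np.max(np.abs(A @ z - eigen_value * z)) < tol:
            break
    return eigen_value, z

# The 2x2 example from the previous slides: eigenvalue 5, eigenvector [1,1].
lam, z = power_method(np.array([[2.0, 3.0], [1.0, 4.0]]),
                      np.array([0.5, 1.0]))
```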
Power Method: 9/11
• Example: find an eigenvalue and its corresponding eigenvector of A, starting with x_0 = [1,1,1,1]:

  A = \begin{bmatrix} 11 & -26 & 3 & -12 \\ 3 & -12 & 3 & -6 \\ 31 & -99 & 15 & -44 \\ 9 & -10 & -3 & -4 \end{bmatrix}

• Iteration 1: w = [−24, −12, −97, −8], approx. λ = −97, and new z = w/(−97) = [0.247423, 0.123711, 1, 0.0824742].
• Iteration 2: w = [1.51546, 1.76289, 6.79381, −2.34021], approx. λ = 6.79381, z = w/6.79381 = [0.223065, 0.259484, 1, −0.344461].
Power Method: 10/11
• Iteration 3: w = [2.84067, 2.62216, 11.3824, −2.20941], approx. λ = 11.3824, new z = w/λ = [0.249567, 0.230369, 1, −0.194107].
• Iteration 4: w = [2.08492, 2.14891, 8.47074, −2.28116], approx. λ = 8.47074, new z = w/λ = [0.246132, 0.253687, 1, −0.269299].
• 15 more iterations ...
• Iteration 19: approx. λ = 9 and corresponding eigenvector (i.e., z) = [0.25, 0.25, 1, −0.25].

Power Method: 11/11
• What does the power method do?
• It finds the largest eigenvalue (i.e., the dominant eigenvalue) and its corresponding eigenvector.
• If vector z is perpendicular to the eigenvector corresponding to the largest eigenvalue, the power method will not converge in exact arithmetic.
• Thus, z may be a random vector, initially.
• The convergence rate is |λ2/λ1|, where λ1 and λ2 are the largest and second largest eigenvalues.
• If this ratio is << 1, convergence is fast. If it is close to 1, convergence will be very slow.
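The effect of the ratio |λ2/λ1| can be seen on two small diagonal matrices (our own illustration, not from the slides): one with ratio 1/5 and one with ratio 9/10.

```python
import numpy as np

def iterations_needed(A, z, tol=1e-8, max_iter=10000):
    """Count power-method iterations until the residual drops below tol."""
    for i in range(1, max_iter + 1):
        w = A @ z
        lam = w[np.argmax(np.abs(w))]
        z = w / lam
        if np.max(np.abs(A @ z - lam * z)) < tol:
            return i
    return max_iter

z0 = np.array([1.0, 1.0])
fast = iterations_needed(np.diag([5.0, 1.0]), z0)    # ratio |λ2/λ1| = 1/5
slow = iterations_needed(np.diag([10.0, 9.0]), z0)   # ratio |λ2/λ1| = 9/10
assert fast < slow                                   # small ratio converges faster
```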
Jacobi Method: Basic Idea
• Finding all eigenvalues and their corresponding eigenvectors is not an easy task.
• However, in 1846 Jacobi found a relatively easy way to find all eigenvalues and eigenvectors of a symmetric matrix.
• Jacobi suggested that a symmetric matrix becomes diagonal after being transformed repeatedly with appropriate "rotations."
• In what follows, we shall talk about coordinate rotation, rotations applied to a symmetric matrix, and Jacobi's method.
Coordinate Rotation: 1/2
• Suppose rotating the coordinate system (x,y) by an angle θ yields (x',y'). The relationship between the coordinates (x',y') and (x,y) of a point P is

  x' = \cos(\theta)\,x + \sin(\theta)\,y
  y' = -\sin(\theta)\,x + \cos(\theta)\,y

• This can be represented in matrix form with a rotation matrix:

  \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \cdot \begin{bmatrix} x \\ y \end{bmatrix}
Coordinate Rotation: 2/2
• An n-dimensional rotation matrix is n×n.
• If the rotation is in the x_p-x_q plane with an angle θ, the (p,q)-rotation matrix R_{p,q}(θ) is the identity matrix except for four entries: cos(θ) at positions (p,p) and (q,q), sin(θ) at position (p,q), and −sin(θ) at position (q,p):

  R_{p,q}(\theta) = \begin{bmatrix}
  1 &        &             &        &            &        &   \\
    & \ddots &             &        &            &        &   \\
    &        & \cos\theta  &        & \sin\theta &        &   \\
    &        &             & \ddots &            &        &   \\
    &        & -\sin\theta &        & \cos\theta &        &   \\
    &        &             &        &            & \ddots &   \\
    &        &             &        &            &        & 1
  \end{bmatrix}

  (the cos/sin entries sit in rows and columns p and q; all other diagonal entries are 1)
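A (p,q)-rotation matrix can be built directly from this description; a NumPy sketch (the function name is ours, not from the slides):

```python
import numpy as np

def rotation_matrix(n, p, q, theta):
    """Identity except for the cos/sin entries in rows/columns p and q."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[p, p] = c
    R[p, q] = s
    R[q, p] = -s
    R[q, q] = c
    return R

R = rotation_matrix(5, 1, 3, 0.7)
# A rotation matrix is orthogonal (R^T * R = I) with determinant 1.
assert np.allclose(R.T @ R, np.eye(5))
assert np.isclose(np.linalg.det(R), 1.0)
```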
Symmetric Matrix Rotation: 1/11
• A symmetric matrix A = [a_{i,j}]_{n×n} is a matrix satisfying a_{i,j} = a_{j,i}, where 1 ≤ i < j ≤ n.
• In other words, a symmetric matrix is "symmetric" about its diagonal.
• The transpose of matrix B is B^T.
• Rotation matrix R_{p,q}(θ) is not symmetric.
• Rotating a matrix A with rotation matrix R is computed as A' = R^T•A•R.
• If A is symmetric, A' is also symmetric.

Symmetric Matrix Rotation: 2/11
• Given a symmetric matrix A = [a_{i,j}]_{n×n} and a rotation matrix R_{p,q}(θ), written as R for simplicity, find A' = R^T•A•R.
• This is an easy task: we compute H = A•R, followed by A' = R^T•H.
• Do we have to use matrix multiplication?
• No, it is not necessary, due to the very simple form of the rotation matrices R and R^T.

Symmetric Matrix Rotation: 3/11
• Observation: A•R is identical to A except for column p and column q (C = cos(θ) and S = sin(θ)).
• In the product A•R, every column of R other than columns p and q is a column of the identity matrix, so the corresponding columns of A pass through unchanged; only columns p and q of the result mix the entries a_{i,p} and a_{i,q}.
Symmetric Matrix Rotation: 4/11
• A•R is computed as follows, where C and S are cos(θ) and sin(θ), respectively.
• Other than column p and column q, all entries are identical to those of A:

  (A \cdot R)_{i,p} = a_{i,p}C - a_{i,q}S, \qquad (A \cdot R)_{i,q} = a_{i,p}S + a_{i,q}C, \qquad i = 1, \ldots, n
Symmetric Matrix Rotation: 5/11
• Suppose we have computed H = A•R; how do we compute A' = R^T•A•R = R^T•H?
• The transpose of R, R^T, is very similar to R: it is R with the sign of S flipped, i.e., C at positions (p,p) and (q,q), −S at (p,q), and S at (q,p). In other words, R^T(θ) = R(−θ).
Symmetric Matrix Rotation: 6/11
• Computing A' = R^T•H is very similar to computing A•R.
• The only difference is that now row p and row q change: each row of R^T combines only with rows p and q of H, so

  (R^T \cdot H)_{p,j} = C \, h_{p,j} - S \, h_{q,j}, \qquad (R^T \cdot H)_{q,j} = S \, h_{p,j} + C \, h_{q,j}

  and all other rows of H pass through unchanged.
Symmetric Matrix Rotation: 7/11
• Here is the result of A' = R^T•A•R = R^T•H. For rows i ∉ {p,q}, columns p and q become

  A'_{i,p} = a_{i,p}C - a_{i,q}S, \qquad A'_{i,q} = a_{i,p}S + a_{i,q}C

  (rows p and q change symmetrically), while the entries at the intersections are:

  A'_{p,p} = a_{p,p}C^2 - 2a_{p,q}C \times S + a_{q,q}S^2
  A'_{q,q} = a_{p,p}S^2 + 2a_{p,q}C \times S + a_{q,q}C^2
  A'_{p,q} = A'_{q,p} = (a_{p,p} - a_{q,q})C \times S + a_{p,q}(C^2 - S^2)

• Because of symmetry, we only update the upper triangular part.
Symmetric Matrix Rotation: 8/11
• Part I: update a_{p,p}, a_{p,q} and a_{q,q}, where p < q.

  A'_{p,p} = a_{p,p}C^2 - 2a_{p,q}C \times S + a_{q,q}S^2
  A'_{q,q} = a_{p,p}S^2 + 2a_{p,q}C \times S + a_{q,q}C^2
  A'_{p,q} = A'_{q,p} = (a_{p,p} - a_{q,q})C \times S + a_{p,q}(C^2 - S^2)

! PART I: Update a(p,p), a(q,q), a(p,q)
! C = cos(θ) and S = sin(θ)
! a(*,*) is an n×n symmetric matrix
! p, q : for a (p,q)-rotation, where p < q
App = C*C*a(p,p) - 2*C*S*a(p,q) + S*S*a(q,q)
Aqq = S*S*a(p,p) + 2*C*S*a(p,q) + C*C*a(q,q)
Apq = C*S*(a(p,p)-a(q,q)) + (C*C-S*S)*a(p,q)
a(p,p) = App
a(q,q) = Aqq
a(p,q) = Apq

NOTE: only the upper triangular portion is updated!
Symmetric Matrix Rotation: 9/11
• Part II: update rows 1 to p−1 of columns p and q.

  A'_{i,p} = a_{i,p}C - a_{i,q}S
  A'_{i,q} = a_{i,p}S + a_{i,q}C

! PART II: Update column p and column q from
! row 1 to row p-1.
!
! h is used to save the new value of a(i,p)
! since a(i,p) is used to compute a(i,q)
! and cannot be destroyed right away!
DO i = 1, p-1
   h      = C*a(i,p) - S*a(i,q)
   a(i,q) = S*a(i,p) + C*a(i,q)
   a(i,p) = h
END DO

NOTE: only the upper triangular portion is updated!
Symmetric Matrix Rotation: 10/11
• Part III: update rows p+1 to q−1.

  A'_{i,p} = a_{i,p}C - a_{i,q}S
  A'_{i,q} = a_{i,p}S + a_{i,q}C

! PART III: Update the portion between
! row p+1 and row q-1.
!
! Since only the upper triangle is stored,
! a(i,p) is found at a(p,i). h is used to
! save the new value of a(p,i) because
! a(p,i) is used to compute a(i,q) and
! cannot be destroyed right away!
DO i = p+1, q-1
   h      = C*a(p,i) - S*a(i,q)
   a(i,q) = S*a(p,i) + C*a(i,q)
   a(p,i) = h
END DO

Note the symmetry in the update!
Note also that a(p,i) = a(i,p) and a(q,i) = a(i,q).
Symmetric Matrix Rotation: 11/11
• Part IV: update columns q+1 to n.

  A'_{i,p} = a_{i,p}C - a_{i,q}S
  A'_{i,q} = a_{i,p}S + a_{i,q}C

! PART IV: Update column q+1 to column n.
!
! Due to symmetry, a(i,p) and a(i,q) are
! found at a(p,i) and a(q,i), so this part
! actually updates the last sections of
! row p and row q. h is used to save the
! new value of a(p,i) because a(p,i) is
! used to compute a(q,i) and cannot be
! destroyed right away!
DO i = q+1, n
   h      = C*a(p,i) - S*a(q,i)
   a(q,i) = S*a(p,i) + C*a(q,i)
   a(p,i) = h
END DO

Note the symmetry in the update!
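Parts I-IV translate directly into Python. The sketch below (function name ours) stores the full matrix but, like the slides, reads and writes only the upper triangle, then mirrors it; the result is checked against the direct product R^T·A·R.

```python
import numpy as np

def rotate_symmetric(A, p, q, c, s):
    """Apply A' = R^T A R to symmetric A (p < q), touching only the
    upper triangle as in Parts I-IV, then mirroring it."""
    a = A.copy()
    # Part I: the (p,p), (q,q) and (p,q) entries
    app = c*c*a[p, p] - 2*c*s*a[p, q] + s*s*a[q, q]
    aqq = s*s*a[p, p] + 2*c*s*a[p, q] + c*c*a[q, q]
    apq = c*s*(a[p, p] - a[q, q]) + (c*c - s*s)*a[p, q]
    a[p, p], a[q, q], a[p, q] = app, aqq, apq
    # Part II: rows 0..p-1 of columns p and q
    for i in range(p):
        h = c*a[i, p] - s*a[i, q]
        a[i, q] = s*a[i, p] + c*a[i, q]
        a[i, p] = h
    # Part III: rows p+1..q-1; a(i,p) is stored at a(p,i)
    for i in range(p + 1, q):
        h = c*a[p, i] - s*a[i, q]
        a[i, q] = s*a[p, i] + c*a[i, q]
        a[p, i] = h
    # Part IV: columns q+1..n-1; a(i,p), a(i,q) stored at a(p,i), a(q,i)
    for i in range(q + 1, a.shape[0]):
        h = c*a[p, i] - s*a[q, i]
        a[q, i] = s*a[p, i] + c*a[q, i]
        a[p, i] = h
    # mirror the upper triangle so the result is fully symmetric
    return np.triu(a) + np.triu(a, 1).T

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = (M + M.T) / 2                       # a random symmetric test matrix
c, s = np.cos(0.3), np.sin(0.3)
R = np.eye(5)
R[1, 1] = R[3, 3] = c
R[1, 3] = s
R[3, 1] = -s
assert np.allclose(rotate_symmetric(A, 1, 3, c, s), R.T @ A @ R)
```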


Eigenvalues of 2×2 Symmetric Matrices: 1/4
• Consider a 2×2 symmetric matrix A:

  A = \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix}, \quad \text{where } a_{1,2} = a_{2,1}

• Applying a rotation in the xy-plane yields the following symmetric matrix A' for some angle θ, where C = cos(θ) and S = sin(θ):

  A' = R^T \cdot A \cdot R = \begin{bmatrix} a_{1,1}C^2 - 2a_{1,2}C \times S + a_{2,2}S^2 & (a_{1,1} - a_{2,2})C \times S + a_{1,2}(C^2 - S^2) \\ (a_{1,1} - a_{2,2})C \times S + a_{1,2}(C^2 - S^2) & a_{1,1}S^2 + 2a_{1,2}C \times S + a_{2,2}C^2 \end{bmatrix}
Eigenvalues of 2×2 Symmetric Matrices: 2/4
• The off-diagonal element is

  (a_{1,1} - a_{2,2})C \times S + a_{1,2}(C^2 - S^2)

• If a θ can be chosen so that the off-diagonal elements a_{1,2} and a_{2,1} become 0, matrix A' is diagonal and the diagonal entries are eigenvalues!

  (a_{1,1} - a_{2,2})C \times S + a_{1,2}(C^2 - S^2) = 0

• Using the simple facts from trigonometry sin(2θ) = 2sin(θ)cos(θ) and cos(2θ) = cos²(θ) − sin²(θ):

  \frac{-a_{1,2}}{a_{1,1} - a_{2,2}} = \frac{C \times S}{C^2 - S^2} = \frac{\cos\theta \sin\theta}{\cos^2\theta - \sin^2\theta}

  \frac{a_{1,2}}{a_{2,2} - a_{1,1}} = \frac{2\cos\theta\sin\theta}{2(\cos^2\theta - \sin^2\theta)} = \frac{1}{2} \cdot \frac{\sin 2\theta}{\cos 2\theta} = \frac{\tan 2\theta}{2}

  \tan(2\theta) = \frac{2a_{1,2}}{a_{2,2} - a_{1,1}} \;\Rightarrow\; \theta = \frac{1}{2}\tan^{-1}\!\left(\frac{2a_{1,2}}{a_{2,2} - a_{1,1}}\right) \text{ if } a_{1,1} \neq a_{2,2}; \text{ otherwise } \theta = \pi/4
Eigenvalues of 2×2 Symmetric Matrices: 3/4
• Consider this A:

  A = \begin{bmatrix} 2 & \sqrt{3} \\ \sqrt{3} & 4 \end{bmatrix}

• From matrix A, we have a_{1,1} = 2, a_{2,2} = 4 and a_{1,2} = a_{2,1} = √3.
• Since tan(2θ) = 2a_{1,2}/(a_{2,2} − a_{1,1}) = √3, we have 2θ = π/3, θ = π/6, S = sin(θ) = 1/2, C = cos(θ) = (√3)/2.
• The new a_{1,1} is a_{1,1}C² − 2a_{1,2}C×S + a_{2,2}S² = 1, the new a_{2,2} is a_{1,1}S² + 2a_{1,2}C×S + a_{2,2}C² = 5, and the new a_{1,2} = a_{2,1} = 0.
• Therefore, the eigenvalues of A are 1 and 5!
Eigenvalues of 2×2 Symmetric Matrices: 4/4
• Let us verify the result. Since S = sin(θ) = 1/2 and C = cos(θ) = (√3)/2, the rotation matrix R is:

  R = \begin{bmatrix} C & S \\ -S & C \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{3}}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} \end{bmatrix}

• The rotated A is A' = R^T⋅A⋅R:

  A' = \begin{bmatrix} \frac{\sqrt{3}}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{\sqrt{3}}{2} \end{bmatrix} \cdot \begin{bmatrix} 2 & \sqrt{3} \\ \sqrt{3} & 4 \end{bmatrix} \cdot \begin{bmatrix} \frac{\sqrt{3}}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 5 \end{bmatrix}

• A' is diagonal, and the eigenvalues of A are 1 and 5.
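The same check in NumPy (our own verification, not part of the slides):

```python
import numpy as np

A = np.array([[2.0, np.sqrt(3.0)], [np.sqrt(3.0), 4.0]])
theta = np.pi / 6                     # from tan(2*theta) = sqrt(3)
C, S = np.cos(theta), np.sin(theta)   # C = sqrt(3)/2, S = 1/2
R = np.array([[C, S], [-S, C]])

A_rot = R.T @ A @ R
# The rotated matrix is diagonal with the eigenvalues 1 and 5.
assert np.allclose(A_rot, np.diag([1.0, 5.0]))
```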

Classic Jacobi Method: 1/13
• Jacobi published a method in 1846 capable of finding all eigenvalues and eigenvectors of a symmetric matrix with repeated rotations.
• Find an off-diagonal entry with maximum absolute value, say a_{p,q}, where p < q.
• If |a_{p,q}| < ε, where ε is a given tolerance, stop.
• Apply a (p,q)-rotation to eliminate a_{p,q} and a_{q,p}.
• Repeat this process until all off-diagonal elements become very small (i.e., absolute value < ε).
• The diagonal entries are eigenvalues.
• Rotations do not alter eigenvalues!
Classic Jacobi Method: 2/13
• Basically, the Jacobi method starts with a symmetric matrix A_0 = A.
• Find a rotation matrix R_1 so that an off-diagonal entry of A_1 = R_1^T⋅A_0⋅R_1 becomes 0.
• Then, find a rotation matrix R_2 so that an off-diagonal entry of A_2 = R_2^T⋅A_1⋅R_2 becomes 0.
• The entry of A_1 eliminated by R_1 can become non-zero in A_2; however, it would be smaller.
• Note the following fact:

  A_2 = R_2^T \cdot A_1 \cdot R_2 = R_2^T \cdot \left( R_1^T \cdot A_0 \cdot R_1 \right) \cdot R_2 = R_2^T \cdot R_1^T \cdot A_0 \cdot R_1 \cdot R_2
Classic Jacobi Method: 3/13
• Repeating this process, for iteration i, a rotation matrix R_i is found to eliminate one off-diagonal entry of A_i = R_i^T⋅A_{i−1}⋅R_i.
• Thus, A_i is computed as follows:

  A_i = R_i^T \cdot R_{i-1}^T \cdots R_2^T \cdot R_1^T \cdot A_0 \cdot R_1 \cdot R_2 \cdots R_{i-1} \cdot R_i

• Jacobi showed that after some number of iterations, all off-diagonal entries are small and the resulting matrix A_m is diagonal.
• Therefore, the diagonal entries of A_m are the eigenvalues of A.
Classic Jacobi Method: 4/13
• Here is a template of the Jacobi method.

DO
   p = 1                 ! find max off-diagonal entry
   q = 2
   DO i = 1, n
      DO j = i+1, n
         IF (ABS(a(p,q)) < ABS(a(i,j))) THEN
            p = i
            q = j
         END IF
      END DO
   END DO
   IF (ABS(a(p,q)) < Tol) EXIT
   Apply a (p,q)-rotation to matrix a(*,*)
END DO
Classic Jacobi Method: 5/13
• The only remaining part is an efficient and accurate way of "applying a (p,q)-rotation."
• We saw the 2×2 case earlier: find an appropriate rotation angle θ, compute C = cos(θ) and S = sin(θ), and update matrix A.
• This approach requires tan⁻¹(), which can be time-consuming and may lose significant digits.
• Therefore, we need a faster and more accurate method.
Classic Jacobi Method: 6/13
• The following shows the new a_{p,p}, a_{q,q} and a_{p,q} after rotation:

  A'_{p,p} = a_{p,p}C^2 - 2a_{p,q}C \times S + a_{q,q}S^2
  A'_{q,q} = a_{p,p}S^2 + 2a_{p,q}C \times S + a_{q,q}C^2
  A'_{p,q} = A'_{q,p} = (a_{p,p} - a_{q,q})C \times S + a_{p,q}(C^2 - S^2)

• Setting the new A'_{p,q} to 0 yields an angle θ that can eliminate A_{p,q} and A_{q,p}.
• We shall use a different way to find tan(θ), from which sin(θ) and cos(θ) can be computed easily without the use of the tan⁻¹() function.

Classic Jacobi Method: 7/13
• We follow the 2×2 case:

  A'_{p,q} = (a_{p,p} - a_{q,q})C \times S + a_{p,q}(C^2 - S^2) = 0

  \frac{a_{p,q}}{a_{q,q} - a_{p,p}} = \frac{C \times S}{C^2 - S^2} = \frac{\cos\theta\sin\theta}{\cos^2\theta - \sin^2\theta} = \frac{1}{2} \cdot \frac{\sin 2\theta}{\cos 2\theta} = \frac{1}{2}\tan(2\theta)

• Rewriting to use cot():

  \tan(2\theta) = \frac{2a_{p,q}}{a_{q,q} - a_{p,p}} \;\Rightarrow\; \cot(2\theta) = \frac{a_{q,q} - a_{p,p}}{2a_{p,q}}
Classic Jacobi Method: 8/13
• But what we really need is tan(θ)!
• Let t = tan(θ); then t = S/C.
• From cot(2θ), we have the following:

  \cot(2\theta) = \frac{\cos 2\theta}{\sin 2\theta} = \frac{\cos^2\theta - \sin^2\theta}{2\sin\theta\cos\theta} = \frac{C^2 - S^2}{2S \times C}

• Divide the numerator and denominator by C²:

  \cot(2\theta) = \frac{(C^2 - S^2)/C^2}{(2S \times C)/C^2} = \frac{1 - S^2/C^2}{2(S/C)} = \frac{1 - t^2}{2t}

• Therefore, we have:

  \Delta = \frac{1 - t^2}{2t}, \quad \text{where } \Delta = \cot(2\theta) = \frac{a_{q,q} - a_{p,p}}{2a_{p,q}}
Classic Jacobi Method: 9/13
• From Δ = (1−t²)/(2t), we have t² + 2Δt − 1 = 0.
• This means the desired t = tan(θ) is one of the two roots of t² + 2Δt − 1 = 0.
• The roots of t² + 2Δt − 1 = 0 are

  t = -\Delta \pm \sqrt{\Delta^2 + 1}

• Which root is better?
• Important fact: if x_1 and x_2 are roots of x² + bx + c = 0, then x_1 + x_2 = −b and x_1 × x_2 = c.
• Since the product of the roots of t² + 2Δt − 1 = 0 is −1, the smaller (i.e., desired) one must be in [−1,1].

Classic Jacobi Method: 10/13
• Consider the following manipulation (to avoid cancellation):

  \left( -\Delta \pm \sqrt{\Delta^2 + 1} \right) \times \frac{-\Delta \mp \sqrt{\Delta^2 + 1}}{-\Delta \mp \sqrt{\Delta^2 + 1}} = \frac{1}{\Delta \pm \sqrt{\Delta^2 + 1}}

• We have to avoid cancellation when |Δ| is large.
• If Δ > 0, use +. The denominator is Δ + (Δ²+1)^{1/2} > 1 and the positive root is less than 1.
• If Δ < 0, use −. The denominator is Δ − (Δ²+1)^{1/2} < −1 and the negative root is greater than −1.
Classic Jacobi Method: 11/13
• If Δ > 0 (resp., Δ < 0), the desired "smaller" root is 1/(Δ + (Δ²+1)^{1/2}) (resp., 1/(Δ − (Δ²+1)^{1/2})).
• This root can be rewritten as follows:

  t = \frac{\operatorname{sign}(\Delta)}{|\Delta| + \sqrt{\Delta^2 + 1}}, \quad |t| \le 1

• Since |t| ≤ 1, the angle of rotation is in [−π/4, π/4].
• After t (i.e., tan(θ)) is computed, C = cos(θ) and S = sin(θ) are the following:

  C = \cos\theta = \frac{1}{\sqrt{1 + t^2}} \quad\text{and}\quad S = \sin\theta = C \times t

Classic Jacobi Method: 12/13
• Compute Δ and t, and obtain C = cos(θ) and S = sin(θ).

! This section computes C and S
! from a(p,p), a(q,q) and a(p,q)
t = 1.0
IF (a(p,p) /= a(q,q)) THEN
   D = (a(q,q)-a(p,p))/(2*a(p,q))
   t = SIGN(1/(ABS(D)+SQRT(D*D+1)), D)
END IF
C = 1/SQRT(1+t*t)
S = C*t

• In Fortran 90, SIGN(a,b) means using the sign of b with the absolute value of a. Thus, SIGN(10,-1) and SIGN(-15,1) yield -10 and 15, respectively.
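In Python (our own translation of the Fortran above; `math.copysign` plays the role of Fortran's SIGN):

```python
import math

def compute_c_s(app, aqq, apq):
    """Compute C = cos(theta) and S = sin(theta) for a Jacobi rotation."""
    t = 1.0                                   # a(p,p) == a(q,q): theta = pi/4
    if app != aqq:
        d = (aqq - app) / (2.0 * apq)
        t = math.copysign(1.0 / (abs(d) + math.sqrt(d * d + 1.0)), d)
    c = 1.0 / math.sqrt(1.0 + t * t)
    return c, c * t

# For a(p,p)=12, a(q,q)=16, a(p,q)=6 (the example on the next slides),
# the rotated off-diagonal entry (a_pp - a_qq)CS + a_pq(C^2 - S^2) vanishes.
c, s = compute_c_s(12.0, 16.0, 6.0)
new_apq = (12.0 - 16.0) * c * s + 6.0 * (c * c - s * s)
assert abs(new_apq) < 1e-12
```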
Classic Jacobi Method: 13/13
• Finally, the classic Jacobi method is shown below.
• Scan the upper triangular portion for the max |a(p,q)|, where p < q.
• A (p,q)-rotation based on the values of C and S sets a(p,q) and a(q,p) to zero.

! Classic Jacobi Method
DO
   Find the max |a(p,q)| entry, p < q
   IF (|a(p,q)| < Tol) EXIT
   From a(p,p), a(q,q) and a(p,q) compute t
   From t compute C and S
   Perform a (p,q)-rotation with a(p,q) = a(q,p) = 0
END DO
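Putting the pieces together in Python (a straightforward sketch that, as a later slide admits, uses explicit matrix multiplication rather than the Part I-IV updates; the function name and `max_rot` cap are ours):

```python
import numpy as np

def jacobi_eig(A, tol=1e-10, max_rot=1000):
    """Classic Jacobi: repeatedly rotate away the largest off-diagonal
    entry.  Returns (eigenvalues, eigenvector matrix V), unsorted."""
    a = np.array(A, dtype=float)
    n = a.shape[0]
    v = np.eye(n)
    for _ in range(max_rot):
        off = np.abs(np.triu(a, 1))                # upper off-diagonal part
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:                        # all entries eliminated
            break
        if a[p, p] == a[q, q]:
            t = 1.0                                # theta = pi/4
        else:
            d = (a[q, q] - a[p, p]) / (2.0 * a[p, q])
            t = np.sign(d) / (abs(d) + np.sqrt(d * d + 1.0))
        c = 1.0 / np.sqrt(1.0 + t * t)
        s = c * t
        r = np.eye(n)
        r[p, p] = r[q, q] = c
        r[p, q] = s
        r[q, p] = -s
        a = r.T @ a @ r                            # one (p,q)-rotation
        v = v @ r                                  # accumulate eigenvectors
    return np.diag(a), v

# The 3x3 example worked on the next slides.
A = np.array([[12.0, 6.0, -6.0], [6.0, 16.0, 2.0], [-6.0, 2.0, 16.0]])
w, V = jacobi_eig(A)
assert np.allclose(np.sort(w), np.linalg.eigvalsh(A))
assert np.allclose(A @ V, V @ np.diag(w), atol=1e-6)
```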
Computation Example: 1/5
• Consider the following symmetric matrix:

  A = \begin{bmatrix} 12 & 6 & -6 \\ 6 & 16 & 2 \\ -6 & 2 & 16 \end{bmatrix}

• The largest off-diagonal element is on row 1 and column 2.
• Since a_{1,1} = 12, a_{1,2} = 6 and a_{2,2} = 16, we have Δ = (a_{2,2} − a_{1,1})/(2a_{1,2}) = 0.33333334, and t = 0.7207582.
• From t = 0.7207582, we have C = 0.8112422 and S = 0.5847103.

  R_{1,2} = \begin{bmatrix} 0.8112422 & 0.5847103 & \\ -0.5847103 & 0.8112422 & \\ & & 1 \end{bmatrix}
Computation Example: 2/5
• The new A = R_{1,2}^T⋅A⋅R_{1,2} is (the a_{1,2} entry has been eliminated):

  A = \begin{bmatrix} 7.6754445 & 0.0 & -6.036874 \\ 0.0 & 20.32456 & -1.885777 \\ -6.036874 & -1.885777 & 16.0 \end{bmatrix}

• The off-diagonal entry with the largest absolute value is a_{1,3} = −6.036874.
• Since a_{1,1} = 7.6754445, a_{1,3} = −6.036874 and a_{3,3} = 16, Δ = (a_{3,3} − a_{1,1})/(2a_{1,3}) = −0.68947533, t = −0.5251753, C = 0.885334, and S = −0.4649555.
Computation Example: 3/5
• The rotation matrix R_{1,3} is:

  R_{1,3} = \begin{bmatrix} 0.885334 & & -0.4649555 \\ & 1 & \\ 0.4649555 & & 0.885334 \end{bmatrix}

• The new matrix A = R_{1,3}^T⋅A⋅R_{1,3} is (a_{1,3} has been eliminated; a_{1,2}, which was 0, is now non-zero again):

  A = \begin{bmatrix} 4.505028 & -0.8768026 & 0.0 \\ & 20.32456 & -1.669543 \\ & & 19.17042 \end{bmatrix}

• We eliminate a_{2,3} in the next iteration.
Computation Example: 4/5
• The largest off-diagonal entry is a_{2,3} = −1.669543.
• Since a_{2,2} = 20.32456, a_{2,3} = −1.669543 and a_{3,3} = 19.17042, Δ = (a_{3,3} − a_{2,2})/(2a_{2,3}) = 0.34564623, and t = 0.71240422.
• Therefore, C = 0.81445753 and S = 0.58022314.
• The new rotation matrix R_{2,3} is:

  R_{2,3} = \begin{bmatrix} 1 & & \\ & 0.81445753 & 0.58022314 \\ & -0.58022314 & 0.81445753 \end{bmatrix}
Computation Example: 5/5
• The new matrix A = R_{2,3}^T⋅A⋅R_{2,3} is (a_{2,3} has been eliminated; a_{1,2} and a_{1,3}, which were 0, are non-zero again):

  A = \begin{bmatrix} 4.505028 & -0.7141185 & -0.5087411 \\ & 21.51395 & 0.0 \\ & & 17.98103 \end{bmatrix}

• With 5 more iterations, the new matrix A becomes

  A = \begin{bmatrix} 4.455996 & & \\ & 21.54401 & \\ & & 18.0 \end{bmatrix}

• The eigenvalues are 4.455996, 21.54401 and 18.0.
• In hand calculation of small matrices, direct matrix multiplication may be more convenient!
Where Are the Eigenvectors: 1/6
• An important fact: if R is a rotation matrix, then R⁻¹ = R^T! So, R's inverse is R's transpose.
• Note that C² + S² = 1! Multiplying R by R^T, the C/S blocks in rows and columns p and q combine into C² + S² = 1 on the diagonal and CS − SC = 0 off the diagonal, while every other entry is that of the identity matrix:

  R \cdot R^T = I
Where Are the Eigenvectors: 2/6
• Two more simple facts: (A⋅B)⁻¹ = B⁻¹⋅A⁻¹ and (A⋅B)^T = B^T⋅A^T.
• The Jacobi method uses a sequence of rotation matrices R_1, R_2, ..., R_m to transform the given matrix A to a diagonal form D:

  R_m^T \cdot \left( R_{m-1}^T \cdots \left( R_2^T \cdot \left( R_1^T \cdot A \cdot R_1 \right) \cdot R_2 \right) \cdots R_{m-1} \right) \cdot R_m = D

• The above is equivalent to:

  \left( R_m^T \cdot R_{m-1}^T \cdots R_2^T \cdot R_1^T \right) \cdot A \cdot \left( R_1 \cdot R_2 \cdots R_{m-1} \cdot R_m \right) = D

• Since (A⋅B)^T = B^T⋅A^T, we have the following:

  \left( R_1 \cdot R_2 \cdots R_{m-1} \cdot R_m \right)^T \cdot A \cdot \left( R_1 \cdot R_2 \cdots R_{m-1} \cdot R_m \right) = D
Where Are the Eigenvectors: 3/6
• Let V = R_1⋅R_2⋯R_m. Then, we have V^T⋅A⋅V = D.
• We shall show V⁻¹ = V^T. Since R⁻¹ = R^T and

  V^{-1} = (R_1 \cdot R_2 \cdots R_m)^{-1} = R_m^{-1} \cdots R_2^{-1} \cdot R_1^{-1}

  we have

  V^{-1} = R_m^{-1} \cdots R_2^{-1} \cdot R_1^{-1} = R_m^T \cdots R_2^T \cdot R_1^T = (R_1 \cdot R_2 \cdots R_m)^T = V^T

• Therefore, V⁻¹⋅A⋅V = D holds.
• Multiplying both sides by V yields A⋅V = V⋅D:

  V^{-1} \cdot A \cdot V = D \;\Rightarrow\; V \cdot \left( V^{-1} \cdot A \cdot V \right) = V \cdot D \;\Rightarrow\; A \cdot V = V \cdot D
Where Are the Eigenvectors: 4/6
• Let the column vectors of V be v_1, v_2, ..., v_n (i.e., V = [v_1|v_2|v_3|...|v_n]).
• Then, V⋅D = [d_1v_1|d_2v_2|...|d_nv_n], and A⋅V = V⋅D means A⋅v_i = d_iv_i. The eigenvectors are the columns of V!

  V \cdot D = \begin{bmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,n} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,n} \\ \vdots & & & \vdots \\ v_{n,1} & v_{n,2} & \cdots & v_{n,n} \end{bmatrix} \cdot \begin{bmatrix} d_1 & & \\ & \ddots & \\ & & d_n \end{bmatrix} = [\, d_1 v_1 \mid d_2 v_2 \mid \cdots \mid d_n v_n \,]
Where Are the Eigenvectors: 5/6
• The following shows an inefficient way using matrix multiplication.

! A is the input n×n symmetric matrix
V = the identity matrix
DO
   find the largest off-diagonal entry |a(p,q)|
   IF (|a(p,q)| < Tol) EXIT
   compute ∆, t, S and C
   update matrix a(*,*)
   V = V*R                ! eigenvectors
END DO
! Eigenvalues are the diagonal entries of A
! Eigenvectors are the columns of V
Where Are the Eigenvectors: 6/6
• The computation of V' = V⋅R is similar to A⋅R!
• V = [v_{i,j}] is not symmetric, and two complete columns (i.e., columns p and q) must be updated:

  (V \cdot R)_{i,p} = v_{i,p}C - v_{i,q}S, \qquad (V \cdot R)_{i,q} = v_{i,p}S + v_{i,q}C, \qquad i = 1, \ldots, n
Example Continued: 1/4
• The input matrix is:

  A = \begin{bmatrix} 12 & 6 & -6 \\ 6 & 16 & 2 \\ -6 & 2 & 16 \end{bmatrix}

• Since a_{1,2} is the largest, the rotation matrix R_{1,2} is:

  R_{1,2} = \begin{bmatrix} 0.8112422 & 0.5847103 & \\ -0.5847103 & 0.8112422 & \\ & & 1 \end{bmatrix}

• Matrix V, the approximate eigenvectors, is I⋅R_{1,2}:

  V = I \cdot R_{1,2} = \begin{bmatrix} 0.8112422 & 0.5847103 & \\ -0.5847103 & 0.8112422 & \\ & & 1 \end{bmatrix}
Example Continued: 2/4
• Now, the new matrix A is:

  A = \begin{bmatrix} 7.6754445 & 0.0 & -6.036874 \\ 0.0 & 20.32456 & -1.885777 \\ -6.036874 & -1.885777 & 16.0 \end{bmatrix}

• The largest off-diagonal entry is a_{1,3}, and R_{1,3} is

  R_{1,3} = \begin{bmatrix} 0.885334 & & -0.4649555 \\ & 1 & \\ 0.4649555 & & 0.885334 \end{bmatrix}

• Therefore, the new approximate eigenvector matrix V is

  V = V \cdot R_{1,3} = \begin{bmatrix} 0.7182204 & 0.5847103 & -0.3771916 \\ -0.5176639 & 0.8112422 & 0.27186423 \\ 0.4649555 & 0 & 0.885334 \end{bmatrix}
Example Continued: 3/4
• For iteration 3, matrix A is:

  A = \begin{bmatrix} 4.505028 & -0.8768026 & 0.0 \\ & 20.32456 & -1.669543 \\ & & 19.17042 \end{bmatrix}

• Since a_{2,3} is the largest, R_{2,3} is:

  R_{2,3} = \begin{bmatrix} 1 & & \\ & 0.81445753 & 0.58022314 \\ & -0.58022314 & 0.81445753 \end{bmatrix}

• The approximate eigenvector matrix V is:

  V = V \cdot R_{2,3} = \begin{bmatrix} 0.7182204 & 0.6950770 & 0.03205594 \\ -0.5176639 & 0.5029804 & 0.6921234 \\ 0.4649555 & -0.5136913 & 0.7210670 \end{bmatrix}
Example Continued: 4/4
• Five more iterations yield the new matrix A:

  A = \begin{bmatrix} 4.455996 & & \\ & 21.54401 & \\ & & 18.0 \end{bmatrix}

• The approximate eigenvector matrix V is:

  V = \begin{bmatrix} 0.7473423 & 0.6644393 & -0.3606413 \times 10^{-6} \\ -0.4698294 & 0.5284512 & 0.7071065 \\ 0.4698295 & -0.5284505 & 0.7071069 \end{bmatrix}

• Normally, eigenvalues are sorted and eigenvectors are normalized.
Convergence of Jacobi Method
• The classic Jacobi method always converges.
• Let S(A) be the sum of squares of all off-diagonal entries, where A = [a_{i,j}] is an n×n symmetric matrix:

  S(A) = \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} a_{i,j}^2 = 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} a_{i,j}^2

• Then, the sequence of values S(A_k) decreases monotonically to zero, where A_k is the result of the k-th rotation.
• This means eventually all off-diagonal entries will become zeros (i.e., the matrix becomes diagonal).
Two Improvements: 1/4
• The following two useful improvements are due to H. Rutishauser.
• The following formula was used to compute t = tan(θ):

  t = \frac{\operatorname{sign}(\Delta)}{|\Delta| + \sqrt{\Delta^2 + 1}}, \quad \text{where } \Delta = \frac{a_{q,q} - a_{p,p}}{2a_{p,q}}

• If Δ is large, Δ² may cause overflow.
• To avoid overflow, one may set t to 1/(2Δ) if |Δ| is large, because Δ² + 1 is then close to Δ².
• How large is large enough so that Δ² will not overflow? Exercise!
Two Improvements: 2/4
• The updating formulas for the new a_{p,p} and a_{q,q}, and the formulas for rows p and q and columns p and q of matrix A, can be modified so that they are computationally more stable than the original.
• Reminder: the following formulas were used:

  \frac{C^2 - S^2}{2C \times S} = \cot(2\theta) = \frac{a_{q,q} - a_{p,p}}{2a_{p,q}}

  C = \cos\theta, \quad S = \sin\theta, \quad t = \frac{S}{C} = \tan\theta
Two Improvements: 3/4
• Simplify the formula for the new a_{p,p}, using 1 − S² = C² and the relation 2a_{p,q}·cot(2θ) = a_{q,q} − a_{p,p}:

  A'_{p,p} = a_{p,p}C^2 - 2a_{p,q}C \times S + a_{q,q}S^2
           = a_{p,p}(1 - S^2) - 2a_{p,q}C \times S + a_{q,q}S^2
           = a_{p,p} + S^2(a_{q,q} - a_{p,p}) - 2a_{p,q}C \times S
           = a_{p,p} + S^2 \left( 2a_{p,q} \frac{C^2 - S^2}{2C \times S} \right) - 2a_{p,q}C \times S
           = a_{p,p} - t \, a_{p,q}

• The one for a_{q,q} is similar:

  A'_{q,q} = a_{q,q} + t \, a_{p,q}
Two Improvements: 4/4
• Columns p and q of A' = A⋅R and V' = V⋅R (the eigenvector matrix) were updated as follows:

  A'_{i,p} = C \times a_{i,p} - S \times a_{i,q} \quad\text{and}\quad A'_{i,q} = S \times a_{i,p} + C \times a_{i,q}

• Let τ = tan(θ/2) = S/(1+C).
• Note the following trigonometric identity:

  \tau = \tan(\theta/2) = \frac{S}{1+C} = \frac{1-C}{S} \;\Rightarrow\; C = 1 - S \times \tau

• Now, we have

  A'_{i,p} = C \times a_{i,p} - S \times a_{i,q} = (1 - S\tau)a_{i,p} - S \, a_{i,q} = a_{i,p} - S(a_{i,q} + \tau \, a_{i,p})
  A'_{i,q} = S \times a_{i,p} + C \times a_{i,q} = S \, a_{i,p} + (1 - S\tau)a_{i,q} = a_{i,q} + S(a_{i,p} - \tau \, a_{i,q})
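The short forms can be checked against the direct formulas numerically. A sketch using the example entries a(p,p)=12, a(q,q)=16, a(p,q)=6 from the earlier slides; the off-row pair a(i,p)=−6, a(i,q)=2 is our own choice of test values:

```python
import numpy as np

# Jacobi angle for a(p,p)=12, a(q,q)=16, a(p,q)=6
app, aqq, apq = 12.0, 16.0, 6.0
d = (aqq - app) / (2.0 * apq)                  # Delta
t = np.sign(d) / (abs(d) + np.sqrt(d * d + 1.0))
c = 1.0 / np.sqrt(1.0 + t * t)
s = c * t
tau = s / (1.0 + c)                            # tan(theta/2)

# New diagonal entries: direct vs. a(p,p) - t*a(p,q) and a(q,q) + t*a(p,q)
assert abs((app*c*c - 2*apq*c*s + aqq*s*s) - (app - t*apq)) < 1e-12
assert abs((app*s*s + 2*apq*c*s + aqq*c*c) - (aqq + t*apq)) < 1e-12

# Column update: direct vs. the tau-based forms (these hold for any angle)
aip, aiq = -6.0, 2.0
assert abs((c*aip - s*aiq) - (aip - s*(aiq + tau*aip))) < 1e-12
assert abs((s*aip + c*aiq) - (aiq + s*(aip - tau*aiq))) < 1e-12
```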
Cyclic Jacobi Methods: 1/5
• To find the max entry, the n(n−1)/2 upper triangular entries must be scanned.
• However, performing a rotation only requires about 4n multiplications (i.e., updating two rows and two columns).
• Is this "search" worthwhile? In other words, would this search for the max entry require more time than updating the matrix?
• What if we forget about the search and just perform rotations in some fixed order?
• Cyclic Jacobi methods do just that.
Cyclic Jacobi Methods: 2/5
• A version of the cyclic Jacobi methods scans the matrix in row order:

  (1,2) (1,3) (1,4) ... (1,n)
        (2,3) (2,4) ... (2,n)
              (3,4) ... (3,n)
                    ...
                        (n-1,n)

• If the encountered entry |a_{p,q}| ≥ ε, do a (p,q)-rotation to eliminate it.
• A complete round over these pairs is called a sweep.
• If a sweep does not eliminate any entry, all entries are small enough; stop!
Cyclic Jacobi Methods: 3/5
• Here is a template of this special cyclic method:

DO                        ! one sweep per pass
   NO_change = .TRUE.
   DO p = 1, n-1
      DO q = p+1, n
         IF (ABS(a(p,q)) >= Tol) THEN
            NO_change = .FALSE.
            perform a (p,q)-rotation
            update eigenvectors
         END IF
      END DO
   END DO
   IF (NO_change) EXIT
END DO
Cyclic Jacobi Methods: 4/5
• This cyclic Jacobi method converges, and the sum of squares of off-diagonal entries S(A_k) is a monotonic, non-increasing sequence.
• Observation: since S(A_k) is the sum of squares of all off-diagonal entries, if it decreases to zero, all off-diagonal entries must be even smaller!
• Therefore, S(A_k) can be used as a tolerance.
• Instead of recomputing S(A_k) for each (p,q)-rotation, we may update it after each sweep!
Cyclic Jacobi Methods: 5/5
• Here is another, better version. Since the double DO loop only goes through the upper triangular part, S is doubled to compute S(A):

DO
   S = 0
   DO p = 1, n-1
      DO q = p+1, n
         S = S + a(p,q)**2
      END DO
   END DO
   IF (2*S < Tol**2) EXIT
   perform a sweep without checking tolerance
END DO

• The exit test actually means

  S(A) = \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} a_{i,j}^2 < \varepsilon^2
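A cyclic-sweep version in Python (a sketch, not the slides' code: rotations go in row order and the S(A)-based exit test mirrors the template above; all names are ours):

```python
import numpy as np

def cyclic_jacobi(A, tol=1e-10, max_sweeps=50):
    """Cyclic Jacobi: rotate pairs (p,q) in row order, sweep by sweep."""
    a = np.array(A, dtype=float)
    n = a.shape[0]
    for _ in range(max_sweeps):
        # 2*S is S(A), the sum of squares of ALL off-diagonal entries
        S = np.sum(np.triu(a, 1) ** 2)
        if 2.0 * S < tol ** 2:
            break
        for p in range(n - 1):                 # one sweep, row order
            for q in range(p + 1, n):
                if a[p, q] == 0.0:
                    continue
                if a[p, p] == a[q, q]:
                    t = 1.0
                else:
                    d = (a[q, q] - a[p, p]) / (2.0 * a[p, q])
                    t = np.sign(d) / (abs(d) + np.sqrt(d * d + 1.0))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                r = np.eye(n)
                r[p, p] = r[q, q] = c
                r[p, q] = s
                r[q, p] = -s
                a = r.T @ a @ r                # one (p,q)-rotation
    return np.diag(a)

A = np.array([[12.0, 6.0, -6.0], [6.0, 16.0, 2.0], [-6.0, 2.0, 16.0]])
assert np.allclose(np.sort(cyclic_jacobi(A)), np.linalg.eigvalsh(A))
```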

A Few Notes: 1/3
• Computing eigenvalues and eigenvectors is not easy for general matrices.
• Methods (e.g., Givens and Householder) are available to reduce a symmetric matrix to the tridiagonal form, from which eigenvalues and eigenvectors can be computed efficiently:

  \begin{bmatrix}
  \alpha_1 & \gamma_2 &          &              &              &          \\
  \gamma_2 & \alpha_2 & \gamma_3 &              &              &          \\
           & \gamma_3 & \alpha_3 & \gamma_4     &              &          \\
           &          & \ddots   & \ddots       & \ddots       &          \\
           &          &          & \gamma_{n-1} & \alpha_{n-1} & \gamma_n \\
           &          &          &              & \gamma_n     & \alpha_n
  \end{bmatrix}
A Few Notes: 2/3
• Methods are available to reduce a general matrix to the Hessenberg form, which has zeros below the first subdiagonal:

  \begin{bmatrix}
  \times & \times & \times & \times & \times \\
  \times & \times & \times & \times & \times \\
         & \times & \times & \times & \times \\
         &        & \times & \times & \times \\
         &        &        & \times & \times
  \end{bmatrix}

• Then, other methods are used to find eigenvalues and eigenvectors of a matrix in Hessenberg form.
A Few Notes: 3/3
• One of the most powerful and recommended methods is the QR algorithm, which can be used with tridiagonal and Hessenberg forms.
• However, Jacobi's method is more accurate than QR! [1]
• Check http://www.netlib.org/lapack/ for free linear algebra Fortran programs.
• You may also find useful programs in Numerical Recipes by Press, et al. There are commercial products such as the IMSL and NAG libraries, and Matlab.

[1] J. Demmel and K. Veselić, Jacobi's method is more accurate than QR, SIAM J. Matrix Anal. Appl., Vol. 13 (1992), No. 4 (Oct), pp. 1204-1245.
The End

