
Chapter 3: Iterative Methods for Linear Systems

Why do we need to solve big linear systems?

E.g. 1 (Spectral method): Express the solution as u(x) = Σ_{i=1}^n a_i φ_i(x). To look for the a_i's, we need to solve a system of linear equations.

E.g. 2 (Finite difference method): Consider u''(x) = f(x), where 0 < x < 1 and u(0) = a_0, u(1) = a_1.
In calculus, we know

    g(x + h) ≈ g(x) + h g'(x) + (h²/2) g''(x)
    g(x − h) ≈ g(x) − h g'(x) + (h²/2) g''(x)
    ⇒ g(x + h) + g(x − h) ≈ 2 g(x) + h² g''(x)

so that

    g''(x) ≈ [ g(x + h) − 2 g(x) + g(x − h) ] / h².

Now, partition [0, 1] into x_i = ih, with h = 1/(n+1).
Then the differential equation can be approximated by

    ( u_{i+1} − 2 u_i + u_{i−1} ) / h² = f(x_i),   i = 1, 2, ..., n

 

Multiplying each equation by −1 and moving the known boundary values u_0 = a_0, u_{n+1} = a_1 to the right-hand side, this becomes:

        [ u_1 ]   [ −f(x_1) + u_0/h²     ]
    ⇔ A [ u_2 ] = [ −f(x_2)              ]      ← system of linear equations
        [  ⋮  ]   [    ⋮                 ]
        [ u_n ]   [ −f(x_n) + u_{n+1}/h² ]

where

        1  [  2  −1                ]
    A = —— [ −1   2  −1            ]
        h² [      ⋱   ⋱    ⋱      ]
           [          −1   2  −1  ]
           [              −1   2  ]
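As a concrete illustration, here is a minimal Python/NumPy sketch of this discretization (the helper name assemble_fd_system and the test problem are our own, not part of the notes):

    import numpy as np

    def assemble_fd_system(n, f, a0, a1):
        # A = (1/h^2) * tridiag(-1, 2, -1); b_i = -f(x_i) plus boundary terms
        h = 1.0 / (n + 1)
        x = np.arange(1, n + 1) * h
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        b = -f(x)
        b[0] += a0 / h**2     # boundary contribution from u(0) = a0
        b[-1] += a1 / h**2    # boundary contribution from u(1) = a1
        return A, b, x

    # Test: u'' = -pi^2 sin(pi*x), u(0) = u(1) = 0, exact solution u = sin(pi*x)
    A, b, x = assemble_fd_system(99, lambda t: -np.pi**2 * np.sin(np.pi * t), 0.0, 0.0)
    u = np.linalg.solve(A, b)
    print(np.abs(u - np.sin(np.pi * x)).max())   # small: O(h^2) discretization error

For this smooth test problem the computed error shrinks by about a factor of 4 each time n is doubled, as expected from the O(h²) truncation error.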

How to solve a big linear system?

From linear algebra, we learnt Gaussian elimination:

    [ a_11 a_12 ··· a_1n ] [ x_1 ]   [ b_1 ]
    [ a_21 a_22 ··· a_2n ] [ x_2 ]   [ b_2 ]
    [  ⋮    ⋮    ⋱   ⋮  ] [  ⋮  ] = [  ⋮  ]
    [ a_n1 a_n2 ··· a_nn ] [ x_n ]   [ b_n ]

    → (by elementary row operations) →

    [ c_11 c_12 ··· c_1n ] [ x_1 ]   [ b'_1 ]
    [      c_22 ··· c_2n ] [ x_2 ]   [ b'_2 ]
    [             ⋱   ⋮  ] [  ⋮  ] = [  ⋮   ]
    [            c_nn    ] [ x_n ]   [ b'_n ]

The resulting upper triangular system is solved by backward substitution.
Computational cost: O(n³). [Check it if interested.]

From linear algebra, we also learnt LU factorization:
Decompose a matrix A into A = LU, with L lower triangular and U upper triangular. Then solve the equation by:

    A~x = ~b ⇔ L(U~x) = ~b.

Let ~y = U~x. Solve L~y = ~b first (forward substitution). Then solve U~x = ~y (backward substitution, easy).
For A symmetric positive definite (~x^T A ~x > 0 for all ~x ≠ 0), we can take A = LL^T; the decomposition can be done by the Cholesky decomposition (→ Numerical Analysis).
Computational cost: O(n³).
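A short sketch of the LU route using SciPy's standard lu_factor / lu_solve (illustrative only; the 3×3 system is the one used in the splitting example below):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[5., -2., 3.], [-3., 9., 1.], [2., -1., -7.]])
    b = np.array([-1., 2., 3.])

    lu, piv = lu_factor(A)        # O(n^3) factorization, done once
    x = lu_solve((lu, piv), b)    # each subsequent solve costs only O(n^2)
    print(x)                      # ~ [0.186, 0.331, -0.423]

The point of the factorization is that solving for many right-hand sides reuses the single O(n³) piece of work.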

Goal: Develop iterative methods: find a sequence ~x^0, ~x^1, ~x^2, ... such that ~x^k → ~x = solution as k → ∞ (stop when the error is small enough).

Splitting method for general linear systems

Consider the system A~x = ~f, A ∈ M_{n×n}(R) (n is big). We can split A as follows:

    A = M + (A − M) = M − (M − A) = N − P.

Solving A~x = ~f is equivalent to solving:

    (N − P)~x = ~f ⇔ N~x = P~x + ~f.

We can develop an iterative scheme as follows:

    N~x^{k+1} = P~x^k + ~f

to get a sequence {~x^k}_{k=0}^∞. It can be shown that if {~x^k} converges, it converges to the solution of A~x = ~f. There are many different choices of the splitting!! Goal: N is simple to take inverse of (such as a diagonal matrix).

Splitting choice 1: Jacobi Method

Split A as A = D + (A − D), where D contains the diagonal entries of A only. Then A~x = ~f becomes:

      D~x^{k+1} + (A − D)~x^k = ~f
    ⇔ D~x^{k+1} = (D − A)~x^k + ~f
    ⇔ ~x^{k+1} = D^{−1}(D − A)~x^k + D^{−1}~f

This is equivalent to solving:

    a_11 x_1^{k+1} + a_12 x_2^k     + ··· + a_1n x_n^k     = f_1    for x_1^{k+1}
    a_21 x_1^k     + a_22 x_2^{k+1} + ··· + a_2n x_n^k     = f_2    for x_2^{k+1}
      ⋮
    a_n1 x_1^k     + a_n2 x_2^k     + ··· + a_nn x_n^{k+1} = f_n    for x_n^{k+1}

Example: Consider

    [  5 −2  3 ] [ x_1 ]   [ −1 ]
    [ −3  9  1 ] [ x_2 ] = [  2 ]
    [  2 −1 −7 ] [ x_3 ]   [  3 ]

Then the Jacobi method reads:

    ~x^{k+1} = −[ 5 0 0; 0 9 0; 0 0 −7 ]^{−1} [ 0 −2 3; −3 0 1; 2 −1 0 ] ~x^k + [ 5 0 0; 0 9 0; 0 0 −7 ]^{−1} [ −1; 2; 3 ].

Start with ~x^0 = (0, 0, 0)^T. The sequence converges in 7 iterations to ~x^7 ≈ (0.186, 0.331, −0.423)^T.
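A minimal sketch of the Jacobi iteration (Python/NumPy; the function name and the stopping rule are our own choices), run on the example above:

    import numpy as np

    def jacobi(A, f, x0, tol=1e-3, max_iter=100):
        # x_{k+1} = D^{-1}(D - A) x_k + D^{-1} f
        D = np.diag(np.diag(A))
        M = np.linalg.solve(D, D - A)
        c = np.linalg.solve(D, f)
        x = x0
        for k in range(1, max_iter + 1):
            x_new = M @ x + c
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new, k
            x = x_new
        return x, max_iter

    A = np.array([[5., -2., 3.], [-3., 9., 1.], [2., -1., -7.]])
    f = np.array([-1., 2., 3.])
    print(jacobi(A, f, np.zeros(3)))   # ~ (0.186, 0.331, -0.423) after a few sweeps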

Splitting choice 2: Gauss-Seidel Method

Split A as A = L + D + U (strictly lower triangular + diagonal + strictly upper triangular). Develop an iterative scheme as:

    L~x^{k+1} + D~x^{k+1} + U~x^k = ~f.

This is equivalent to solving:

    a_11 x_1^{k+1} + a_12 x_2^k     + ··· + a_1n x_n^k     = f_1    for x_1^{k+1}
    a_21 x_1^{k+1} + a_22 x_2^{k+1} + ··· + a_2n x_n^k     = f_2    for x_2^{k+1}
      ⋮
    a_n1 x_1^{k+1} + a_n2 x_2^{k+1} + ··· + a_nn x_n^{k+1} = f_n    for x_n^{k+1}

Gauss-Seidel is equivalent to

    ~x^{k+1} = −(L + D)^{−1} U ~x^k + (D + L)^{−1} ~f.

Example: Continue with the last example:

    ~x^{k+1} = −[ 5 0 0; −3 9 0; 2 −1 −7 ]^{−1} [ 0 −2 3; 0 0 1; 0 0 0 ] ~x^k + [ 5 0 0; −3 9 0; 2 −1 −7 ]^{−1} [ −1; 2; 3 ].

Start with ~x^0 = (0, 0, 0)^T. The sequence again converges in 7 iterations to ~x^7 ≈ (0.186, 0.331, −0.423)^T.

Do the Jacobi / Gauss-Seidel methods always converge?

Example: Consider:

    [ 1 −5 ] [ x_1 ]   [ −4 ]
    [ 7 −1 ] [ x_2 ] = [  6 ]

Then the Jacobi method gives:

    ~x^{k+1} = [ 0 5; 7 0 ] ~x^k + [ −4; −6 ].

Start with ~x^0 = (0, 0)^T. Then ~x^1 = (−4, −6)^T, ~x^2 = (−34, −34)^T, ~x^3 = (−174, −244)^T, ..., and the iterates keep blowing up: the method doesn't converge, even though the real solution is ~x = (1, 1)^T. How about Gauss-Seidel?? It also doesn't converge!

Our next goal is to check when the Jacobi method and the Gauss-Seidel method would converge. Answer: the matrix A must satisfy a certain property: strict diagonal dominance (SDD).
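Before the analysis, here is a Gauss-Seidel sketch in the same style, plus a direct check of why the 2×2 example diverges (the criterion ρ(M) < 1 used in the check is justified in the next section):

    import numpy as np

    def gauss_seidel(A, f, x0, tol=1e-3, max_iter=100):
        # x_{k+1} = (L + D)^{-1} (f - U x_k)
        LD, U = np.tril(A), np.triu(A, 1)
        x = x0
        for k in range(1, max_iter + 1):
            x_new = np.linalg.solve(LD, f - U @ x)
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new, k
            x = x_new
        return x, max_iter

    # The divergent example: Jacobi iteration matrix for A = [[1, -5], [7, -1]]
    A = np.array([[1., -5.], [7., -1.]])
    D = np.diag(np.diag(A))
    M = np.linalg.solve(D, D - A)
    print(max(abs(np.linalg.eigvals(M))))   # sqrt(35) ~ 5.92 > 1: Jacobi must diverge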

Analysis of convergence

Let A = N − P. Goal: solve A~x = ~f ⇔ (N − P)~x = ~f. We have N~x = P~x + ~f, which gives the iterative scheme:

    N~x^{m+1} = P~x^m + ~f,   m = 0, 1, 2, ...

Let ~x* be the solution of A~x = ~f, so A~x* = ~f. Define the error ~e^m := ~x^m − ~x*. Now

    N~x^{m+1} = P~x^m + ~f    (1)
    N~x*      = P~x* + ~f     (2)

(1) − (2):

    N(~x^{m+1} − ~x*) = P(~x^m − ~x*) ⇔ N~e^{m+1} = P~e^m ⇔ ~e^{m+1} = N^{−1}P ~e^m.

So let M = N^{−1}P; we have ~e^m = M^m ~e^0.

Assume {~u_1, ..., ~u_n} is a set of linearly independent eigenvectors of M, with corresponding eigenvalues λ_1, λ_2, ..., λ_n (the ~u_i can be complex-valued vectors, and the λ_i can be in C). Let ~e^0 = Σ_{i=1}^n a_i ~u_i. Then:

    ~e^m = M^m ~e^0 = Σ_{i=1}^n a_i M^m ~u_i = Σ_{i=1}^n a_i λ_i^m ~u_i.

Suppose we can order the eigenvalues: |λ_1| ≥ |λ_2| ≥ |λ_3| ≥ ··· ≥ |λ_n|. Then

    ~e^m = λ_1^m { a_1 ~u_1 + Σ_{i=2}^n a_i (λ_i/λ_1)^m ~u_i }.

Assume |λ_1| < 1. Then ~e^m → 0 as m → ∞. In other words, the spectral radius of M,

    ρ(M) = max_k { |λ_k| : λ_k = eigenvalue of M },

is a good indicator of the rate of convergence.

In order to reduce the error by a factor of 10^{−m}, we need k iterations with |λ_1|^k ≤ 10^{−m}. That is,

    k ≥ m / ( −log_10 ρ(M) ) =: m / R.

We call ρ(M) the asymptotic convergence factor, and R = −log_10 ρ(M) the asymptotic convergence rate.
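A small sketch turning this estimate into code (NumPy; computing ρ(M) from the full spectrum is fine for small matrices, but expensive for large ones, which is exactly why cheap eigenvalue bounds such as the theorem below are useful):

    import numpy as np

    def iterations_needed(M, m):
        # k >= m / R with R = -log10(rho(M)); only meaningful when rho(M) < 1
        rho = max(abs(np.linalg.eigvals(M)))
        assert rho < 1, "the iteration diverges"
        return int(np.ceil(m / (-np.log10(rho))))

    # Jacobi iteration matrix of A = [[10, 1], [1, 10]] (a later example): rho = 1/10
    M = np.array([[0., -0.1], [-0.1, 0.]])
    print(iterations_needed(M, 6))   # 6 iterations per 10^-6 error reduction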

But finding ρ(M) is difficult! Solution: compute it numerically (next topic).

Useful Theorem: Gerschgorin Theorem

Every eigenvalue λ of A = (a_ij) satisfies

    λ ∈ ∪_{l=1}^n Ball( a_ll , Σ_{j≠l} |a_lj| ),

where Ball(c, r) = { λ : |λ − c| ≤ r }.

Proof: Consider ~e = (e_1, e_2, ..., e_n)^T = eigenvector of A = (a_ij) with eigenvalue λ. Then A~e = λ~e. Hence, for each i (1 ≤ i ≤ n):

      Σ_{j=1}^n a_ij e_j = λ e_i
    ⇔ a_ii e_i + Σ_{j≠i} a_ij e_j = λ e_i
    ⇔ e_i (a_ii − λ) = −Σ_{j≠i} a_ij e_j
    ⇒ |e_i| |a_ii − λ| ≤ Σ_{j≠i} |a_ij| |e_j|.

Suppose the component of the largest absolute value is |e_l| ≠ 0 (so |e_l| ≥ |e_j| ∀j). Then:

      |e_l| |a_ll − λ| ≤ Σ_{j≠l} |a_lj| |e_j| ≤ Σ_{j≠l} |a_lj| |e_l|
    ⇒ |a_ll − λ| ≤ Σ_{j≠l} |a_lj|.

That is, λ lies in the ball with centre a_ll and radius Σ_{j≠l} |a_lj|. Note: we don't know l unless we know λ and ~e. But we can conclude:

    λ ∈ ∪_{l=1}^n Ball( a_ll , Σ_{j≠l} |a_lj| ).  ∎

Example: Determine the upper bounds on the eigenvalues for the matrix:

    A = [  2 −1  0  0 ]
        [ −1  2 −1  0 ]
        [  0 −1  2 −1 ]
        [  0  0 −1  2 ]

Then all eigenvalues lie within ∪_{l=1}^4 Ball( a_ll , Σ_{j≠l} |a_lj| ). For l = 1 and 4, Ball = { λ : |λ − 2| ≤ 1 }; for l = 2 and 3, Ball = { λ : |λ − 2| ≤ 2 }. Therefore

    ∪_{l=1}^4 Ball( a_ll , Σ_{j≠l} |a_lj| ) = disc with radius 2 and centre at (2, 0).

Since A is symmetric, all eigenvalues are real. Thus 0 ≤ λ ≤ 4. In fact, the eigenvalues of A are λ_1 = 3.618, λ_2 = 2.618, λ_3 = 1.382, λ_4 = 0.382; that is, ρ(A) = λ_1 = 3.618 ≤ 4.
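A sketch verifying this example numerically (NumPy; gerschgorin_discs is our own helper name):

    import numpy as np

    def gerschgorin_discs(A):
        # one (centre, radius) pair per row: centre a_ll, radius sum_{j != l} |a_lj|
        radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
        return list(zip(np.diag(A), radii))

    A = np.array([[2., -1., 0., 0.],
                  [-1., 2., -1., 0.],
                  [0., -1., 2., -1.],
                  [0., 0., -1., 2.]])
    print(gerschgorin_discs(A))       # centres 2, radii 1, 2, 2, 1
    print(np.linalg.eigvalsh(A))      # 0.382, 1.382, 2.618, 3.618: all inside [0, 4]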

To prove the convergence of the Jacobi method and the Gauss-Seidel method, let us introduce some definitions.

Definition: A matrix A = (a_ij) is called strictly diagonally dominant (SDD) if:

    |a_ii| > Σ_{j≠i} |a_ij|,   i = 1, 2, ..., n.

Theorem 1: If a matrix A is SDD, then A must be non-singular.

Proof: Recall that all eigenvalues satisfy λ ∈ ∪_{l=1}^n Ball( a_ll , Σ_{j≠l} |a_lj| ). Now A is SDD iff

    |a_ll| > Σ_{j≠l} |a_lj|,   l = 1, 2, ..., n,

so every ball Ball( a_ll , Σ_{j≠l} |a_lj| ) must not contain 0; hence 0 cannot be an eigenvalue. If A were singular, then ∃ ~v ≠ 0 such that A~v = ~0 = 0~v, which would imply that λ = 0 is an eigenvalue. Contradiction. So A is non-singular. ∎

Now, we prove the convergence of the Jacobi method. Recall that the Jacobi method can be written as:

    ~x^{m+1} = D^{−1}(D − A)~x^m + D^{−1}~f.

Theorem: The Jacobi method converges to the solution of A~x = ~f if A is strictly diagonally dominant.

Proof: Note that, componentwise,

    x_i^{m+1} = −(1/a_ii) Σ_{j≠i} a_ij x_j^m + f_i/a_ii,   i = 1, 2, ..., n    (1)

Let ~x* be the solution. Then we also have

    x_i* = −(1/a_ii) Σ_{j≠i} a_ij x_j* + f_i/a_ii,   i = 1, 2, ..., n    (2)

(1) − (2):

    e_i^{m+1} = −(1/a_ii) Σ_{j≠i} a_ij e_j^m.

Therefore

    |e_i^{m+1}| ≤ Σ_{j≠i} |a_ij / a_ii| |e_j^m| ≤ ( Σ_{j≠i} |a_ij / a_ii| ) ||~e^m||_∞ ≤ r ||~e^m||_∞,

where

    r = max_i Σ_{j≠i} |a_ij / a_ii| < 1    (since A is SDD).

⇒ ||~e^{m+1}||_∞ ≤ r ||~e^m||_∞. Inductively,

    ||~e^m||_∞ ≤ r^m ||~e^0||_∞,

and therefore ||~e^m||_∞ → 0 as m → ∞. ∎
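The contraction factor r appearing in the proof is cheap to compute; a quick sketch (our own helper name):

    import numpy as np

    def contraction_factor(A):
        # r = max_i sum_{j != i} |a_ij| / |a_ii|; A is SDD iff r < 1
        B = np.abs(A)
        d = np.diag(B)
        return ((B.sum(axis=1) - d) / d).max()

    A = np.array([[10., 1.], [1., 10.]])
    print(contraction_factor(A))   # 0.1: each Jacobi sweep shrinks the error 10-fold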

Theorem: The Gauss-Seidel method converges to the solution of A~x = ~f if A is strictly diagonally dominant.

Proof: The Gauss-Seidel method can be written as:

    x_i^{m+1} = −Σ_{j=1}^{i−1} (a_ij/a_ii) x_j^{m+1} − Σ_{j=i+1}^{n} (a_ij/a_ii) x_j^m + f_i/a_ii    (1)

Let ~x* be the solution of A~x = ~f. Then:

    x_i* = −Σ_{j=1}^{i−1} (a_ij/a_ii) x_j* − Σ_{j=i+1}^{n} (a_ij/a_ii) x_j* + f_i/a_ii    (2)

(1) − (2):

    e_i^{m+1} = −Σ_{j=1}^{i−1} (a_ij/a_ii) e_j^{m+1} − Σ_{j=i+1}^{n} (a_ij/a_ii) e_j^m.

Let ~e^m = (e_1^m, ..., e_n^m), and again let r = max_i { Σ_{j≠i} |a_ij/a_ii| } < 1 (since A is SDD). We will prove:

    |e_i^{m+1}| ≤ r ||~e^m||_∞   ∀i,   m = 0, 1, 2, ...

Induction on i: when i = 1,

    |e_1^{m+1}| ≤ Σ_{j=2}^n |a_1j/a_11| · |e_j^m| ≤ ||~e^m||_∞ Σ_{j=2}^n |a_1j/a_11| ≤ r ||~e^m||_∞.

Assume |e_k^{m+1}| ≤ r ||~e^m||_∞ for k = 1, 2, ..., i − 1. Then:

    |e_i^{m+1}| ≤ Σ_{j=1}^{i−1} |a_ij/a_ii| |e_j^{m+1}| + Σ_{j=i+1}^{n} |a_ij/a_ii| |e_j^m|
               ≤ r ||~e^m||_∞ Σ_{j=1}^{i−1} |a_ij/a_ii| + ||~e^m||_∞ Σ_{j=i+1}^{n} |a_ij/a_ii|
               ≤ ||~e^m||_∞ Σ_{j≠i} |a_ij/a_ii|      (since r < 1)
               ≤ r ||~e^m||_∞.

By MI, |e_i^{m+1}| ≤ r ||~e^m||_∞ for all i. Hence

    ||~e^{m+1}||_∞ ≤ r ||~e^m||_∞,

and therefore

    ||~e^m||_∞ ≤ r^m ||~e^0||_∞ → 0   as m → ∞, as r < 1.  ∎

Example: Consider

    A~x = [ 10  1 ] [ x_1 ]   [ 12 ]
          [  1 10 ] [ x_2 ] = [ 21 ]   = ~b.

A is SDD, therefore both the Jacobi method and the Gauss-Seidel method converge. Compare the convergence rates of the two methods.

Solution: Jacobi method: D = [10 0; 0 10] and

    ~x^{k+1} = D^{−1}(D − A)~x^k + D^{−1}[12; 21].

Let M_J = D^{−1}(D − A) = [ 0 −1/10 ; −1/10 0 ]. We need to check the spectral radius of M_J: its eigenvalues λ satisfy λ² − 1/100 = 0, so λ = 1/10 or λ = −1/10. Therefore M_J is diagonalizable and ρ(M_J) = 1/10. Recall ~e^m = M^m~e^0, where ~e^m = error = ~x^m − ~x*, so the error behaves like ||~e^m|| ≈ (1/10)^m · K_J.

Now, consider the Gauss-Seidel method. Take

    L + D = [ 10 0 ; 1 10 ],   U = [ 0 1 ; 0 0 ],
    ~x^{k+1} = −(L + D)^{−1} U ~x^k + (L + D)^{−1} ~b.

We need to check ρ(M_GS) for M_GS = −(L + D)^{−1}U = [ 0 −1/10 ; 0 1/100 ]. Its eigenvalues satisfy λ(λ − 1/100) = 0, so λ = 0 or λ = 1/100. Therefore ρ(M_GS) = 1/100, and ||~e^m|| ≈ (1/100)^m · K_GS. So Gauss-Seidel converges faster.

Now, recall that in order to reduce the error by a factor of 10^{−m}, we need k iterations such that

    ρ(M)^k ≤ 10^{−m} ⇔ k ≥ m / ( −log_10 ρ(M) ).

For Jacobi: k ≥ m / ( −log_10(1/10) ) = m.
For Gauss-Seidel: k ≥ m / ( −log_10(1/100) ) = m/2.

Therefore Gauss-Seidel converges twice as fast as Jacobi.
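Both spectral radii are easy to confirm numerically; a minimal sketch:

    import numpy as np

    A = np.array([[10., 1.], [1., 10.]])
    D = np.diag(np.diag(A))
    LD, U = np.tril(A), np.triu(A, 1)

    M_jac = np.linalg.solve(D, D - A)    # Jacobi iteration matrix
    M_gs = -np.linalg.solve(LD, U)       # Gauss-Seidel iteration matrix

    rho = lambda M: max(abs(np.linalg.eigvals(M)))
    print(rho(M_jac), rho(M_gs))         # 0.1 and 0.01: G-S needs half the iterations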

What if M = N^{−1}P is not diagonalizable? In fact, the spectral radius still controls the powers M^k:

Theorem: Let A ∈ M_{n×n}(C) be a complex-valued matrix. Then

    lim_{k→∞} A^k = 0   iff   ρ(A) < 1,

where ρ(A) = max_i {|λ_i|} is the spectral radius and λ_1, λ_2, ..., λ_n are the eigenvalues of A.

Proof: (⇒) Let λ be an eigenvalue with eigenvector ~v. Then A^k~v = λ^k~v, so

    0 = lim_{k→∞} A^k~v = ~v · lim_{k→∞} λ^k  ⇒  lim_{k→∞} λ^k = 0  ⇒  |λ| < 1.

Thus ρ(A) < 1.

(⇐) Let λ_1, λ_2, ..., λ_s be the eigenvalues of A. From linear algebra, there exists an invertible Q ∈ M_{n×n}(C) such that A = QJQ^{−1}, where J is the Jordan canonical form of A:

    J = diag( J_{m_1}(λ_1), J_{m_2}(λ_2), ..., J_{m_s}(λ_s) ),

with Jordan blocks

    J_{m_i}(λ_i) = [ λ_i  1            ]
                   [      λ_i  ⋱       ]
                   [           ⋱    1  ]
                   [               λ_i ]   ∈ M_{m_i×m_i}(C),   1 ≤ i ≤ s.

Now A^k = QJ^kQ^{−1}, and J^k = diag( J_{m_1}(λ_1)^k, ..., J_{m_s}(λ_s)^k ). Also, for k ≥ m_i − 1,

    J_{m_i}(λ_i)^k = [ λ_i^k  C(k,1)λ_i^{k−1}  C(k,2)λ_i^{k−2}  ···  C(k,m_i−1)λ_i^{k−m_i+1} ]
                     [        λ_i^k            C(k,1)λ_i^{k−1}  ···  C(k,m_i−2)λ_i^{k−m_i+2} ]
                     [                         ⋱                       ⋮                     ]
                     [                               λ_i^k       C(k,1)λ_i^{k−1}             ]
                     [  0                                        λ_i^k                       ]

where C(k, j) denotes the binomial coefficient. Since ρ(A) < 1, we have |λ_i| < 1 for all i, and each entry C(k, j)λ_i^{k−j} is a polynomial in k times a decaying geometric factor, hence tends to 0 as k → ∞. Therefore lim_{k→∞} J_{m_i}^k = 0 for all i, and so J^k → 0 as k → ∞. Thus

    lim_{k→∞} A^k = lim_{k→∞} QJ^kQ^{−1} = 0.  ∎

Remark: Following the same idea, ρ(A) > 1 implies ||A^k||_∞ → ∞ as k → ∞, where ||A||_∞ = max_{i,j} |a_ij|.
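The theorem is easy to illustrate on a non-diagonalizable matrix: for a Jordan block, the entries of J^k first grow like k·λ^{k−1} before the geometric factor wins. A quick sketch:

    import numpy as np

    # A 2x2 Jordan block with lambda = 0.9 (defective: only one eigenvector)
    J = np.array([[0.9, 1.0], [0.0, 0.9]])
    for k in (10, 50, 200):
        print(k, np.abs(np.linalg.matrix_power(J, k)).max())
    # max entry ~3.9 at k = 10, then decays to 0, since rho(J) = 0.9 < 1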

Corollary: The iteration scheme ~x^{k+1} = M~x^k + ~b converges iff ρ(M) < 1.

Proof: Consider

    ~x^{k+1} = M~x^k + ~b    (1)
    ~x = M~x + ~b            (2)    (~x = solution)

(1) − (2): ~e^{k+1} = M(~x^k − ~x) = M~e^k, so ~e^k = M^k~e^0. If ρ(M) < 1, then M^k → 0 as k → ∞, so ~e^k → 0 for every ~e^0; conversely, if ρ(M) ≥ 1, then by the theorem above M^k does not tend to 0, so the error fails to vanish for some ~e^0. ∎

Splitting choice 3: Successive Over-Relaxation Method (SOR)

A = L + D + U. Consider the iterative scheme (introducing sequences x^k and x̄^k):

    L x^{k+1} + D x̄^{k+1} + U x^k = b                          (∗)
    x^{k+1} = x^k + ω ( x̄^{k+1} − x^k )                        (∗∗)
    ⇔ x̄^{k+1} = (1/ω) ( x^{k+1} + (ω − 1) x^k )

Putting (∗∗) into (∗), we have:

    ( L + (1/ω)D ) x^{k+1} + (1/ω) ( ωU + (ω − 1)D ) x^k = b

or

    ( L + (1/ω)D ) x^{k+1} = ( (1/ω)D − (D + U) ) x^k + b.     (SOR)

Clearly, SOR is equivalent to solving:

    a_11 x̄_1^{k+1} + a_12 x_2^k      + ··· + a_1n x_n^k      = b_1,   then x_1^{k+1} = x_1^k + ω( x̄_1^{k+1} − x_1^k )
    a_21 x_1^{k+1}  + a_22 x̄_2^{k+1} + ··· + a_2n x_n^k      = b_2,   then x_2^{k+1} = x_2^k + ω( x̄_2^{k+1} − x_2^k )
      ⋮
    a_n1 x_1^{k+1}  + a_n2 x_2^{k+1} + ··· + a_nn x̄_n^{k+1}  = b_n,   then x_n^{k+1} = x_n^k + ω( x̄_n^{k+1} − x_n^k )

Remark: SOR = Gauss-Seidel if ω = 1.

SOR is equivalent to splitting A:

    A = N − P = ( L + (1/ω)D ) − ( (1/ω)D − (D + U) ).
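A minimal componentwise sketch of SOR (Python/NumPy; the function name and tolerances are our own), which reduces to Gauss-Seidel at ω = 1:

    import numpy as np

    def sor(A, b, omega, x0, tol=1e-10, max_iter=10000):
        n = len(b)
        x = x0.astype(float).copy()
        for k in range(1, max_iter + 1):
            x_old = x.copy()
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x_bar = (b[i] - sigma) / A[i, i]    # Gauss-Seidel value, as in (*)
                x[i] += omega * (x_bar - x[i])      # over-relaxation step, as in (**)
            if np.linalg.norm(x - x_old, np.inf) < tol:
                return x, k
        return x, max_iter

    A = np.array([[10., 1.], [1., 10.]])
    b = np.array([12., 21.])
    print(sor(A, b, 1.0, np.zeros(2)))          # omega = 1: plain Gauss-Seidel
    print(sor(A, b, 1.0025126, np.zeros(2)))    # near-optimal omega (see below)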

Condition for the convergence of SOR

Theorem: A necessary (but not sufficient) condition for SOR to converge is 0 < ω < 2.

Proof: Consider

    det(N^{−1}P) = det( ( L + (1/ω)D )^{−1} ( (1/ω)D − (D + U) ) )
                 = det( (1/ω)D )^{−1} · det( (1/ω)D − D )
                   [ the strictly triangular parts L and U do not affect these determinants ]
                 = ( ω^n / det D ) · det( ((1 − ω)/ω) D )
                 = det( (1 − ω) I ) = (1 − ω)^n.

Since det(N^{−1}P) = Π_i λ_i, where the λ_i are the eigenvalues of N^{−1}P, we get

    |1 − ω|^n = Π_i |λ_i| ≤ ( max_i |λ_i| )^n = ρ(N^{−1}P)^n
    ⇒ ρ(N^{−1}P) ≥ |ω − 1|.

The SOR method converges iff ρ(N^{−1}P) < 1. Therefore we need |ω − 1| ≤ ρ(N^{−1}P) < 1, which forces 0 < ω < 2. ∎

Now, consider the convergence rate of the SOR method. Recall that SOR reads:

    ~x^{k+1} = ( L + (1/ω)D )^{−1} ( (1/ω)D − (D + U) ) ~x^k + ( L + (1/ω)D )^{−1} ~b.

So, to find a sufficient condition for SOR to converge, we need to check the eigenvalues of the matrix

    M_SOR = ( L + (1/ω)D )^{−1} ( (1/ω)D − (D + U) ).

Example: Let us go back to A~x = ~b, where A = [10 1; 1 10]. Recall ρ(M_J) = 1/10 and ρ(M_GS) = 1/100; G-S converges faster!

    M_SOR = ( L + (1/ω)D )^{−1} ( (1/ω)D − (D + U) )
          = ( (1/ω)(D + ωL) )^{−1} ( (1/ω)( D − ω(D + U) ) )
          = (D + ωL)^{−1} ( (1 − ω)D − ωU ).

We examine ρ(M_SOR):

    (1 − ω)D − ωU = [ 10(1 − ω)   −ω        ]
                    [ 0           10(1 − ω) ]

    D + ωL = [ 10 0 ; ω 10 ]  ⇒  (D + ωL)^{−1} = (1/100) [ 10 0 ; −ω 10 ]

    ∴ M_SOR = (D + ωL)^{−1} ( (1 − ω)D − ωU ) = [ 1 − ω          −ω/10           ]
                                                [ −ω(1 − ω)/10   ω²/100 + 1 − ω  ]

The characteristic polynomial of M_SOR is:

    [ (1 − ω) − λ ] [ ω²/100 + 1 − ω − λ ] − ω²(1 − ω)/100 = 0.

This simplifies to:

    λ² − λ [ 2(1 − ω) + ω²/100 ] + (1 − ω)² = 0.

Then:

    λ = (1 − ω) + ω²/200 ± (ω/20) √( 4(1 − ω) + ω²/100 ).

When ω = 1 (Gauss-Seidel method), λ = 0 or λ = 1/100. Changing ω changes λ.

Choice of ω?? Let us choose ω such that the equation has equal roots:

    4(1 − ω) + ω²/100 = 0.

Then (since 4(1 − ω) + ω²/100 = 0 gives ω²/200 = −2(1 − ω)):

    λ = (1 − ω) + ω²/200 = (1 − ω) − 2(1 − ω) = ω − 1.

The smallest value of ω (0 < ω < 2) such that 4(1 − ω) + ω²/100 = 0 is ω = 1.002512579 (which is very close to Gauss-Seidel). But!!

    ρ(M_SOR) = ω − 1 = 0.002512579    (compare to ρ(M_GS) = 0.01),

∴ SOR converges much faster than G-S!!

Remarks:
• ρ(M_SOR) is very sensitive to ω. If you can hit the right value of ω, you can improve the speed of convergence significantly!
• One major task in computational math is to find the right parameter!
• In general, it is difficult to choose ω; ω is usually chosen with 0 < ω < 2.
• But for some special matrices, the optimal ω can be found easily.

How can we choose the optimal ω in simple cases?

Definition: Consider the system A~x = ~b, and let A = D + L + U. If the eigenvalues of

    α D^{−1}L + (1/α) D^{−1}U,   α ≠ 0,

are independent of α, then the matrix is said to be consistently ordered.

Theorem: If A is consistently ordered, then the optimal ω for the SOR method is:

    ω = 2 / ( 1 + √(1 − ρ(M_J)²) ),

where M_J = −D^{−1}(L + U) is the iteration matrix of the Jacobi method.
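A quick numerical check of the theorem before its proof (NumPy; omega_opt is our own helper name):

    import numpy as np

    def omega_opt(A):
        # Young's formula for a consistently ordered matrix A
        D = np.diag(np.diag(A))
        rho_j = max(abs(np.linalg.eigvals(np.linalg.solve(D, D - A))))
        return 2.0 / (1.0 + np.sqrt(1.0 - rho_j**2))

    A = np.array([[10., 1.], [1., 10.]])
    print(omega_opt(A))   # ~1.0025126, matching the value found by hand above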

Proof: Consistently ordered means the eigenvalues of αD^{−1}L + (1/α)D^{−1}U are the same as those of D^{−1}L + D^{−1}U (put α = 1), i.e. of −M_J. Taking α = −1 also shows that the spectrum of −M_J equals that of M_J, so the Jacobi eigenvalues come in ± pairs and signs will not matter below.

Now consider the eigenvalues of M_SOR. The characteristic polynomial is det(M_SOR − λI) = 0, i.e.

      det( (D + ωL)^{−1} ( (1 − ω)D − ωU ) − λI ) = 0
    ⇔ det( (D + ωL)^{−1} ) · det( (1 − ω)D − ωU − λ(D + ωL) ) = 0.

Since det((D + ωL)^{−1}) ≠ 0, λ satisfies:

    det( (1 − ω − λ)D − ωU − λωL ) = 0.

Since ω ≠ 0, a non-zero eigenvalue λ must satisfy (dividing by (ω√λ)^n and multiplying by D^{−1}):

      det( ((1 − ω − λ)/(ω√λ)) D − (1/√λ) U − √λ L ) = 0
    ⇒ det( √λ D^{−1}L + (1/√λ) D^{−1}U + ((λ + ω − 1)/(ω√λ)) I ) = 0.

Since A is consistently ordered, the eigenvalues of √λ D^{−1}L + (1/√λ) D^{−1}U are the same as those of M_J = −D^{−1}(L + U). So if μ denotes an eigenvalue of M_J, the non-zero eigenvalues λ of M_SOR satisfy:

    μ = (λ + ω − 1) / (ω√λ)    for some μ.    (∗)

For ω ≠ 0, we can solve (∗) for λ:

    λ = (1 − ω) + μ²ω²/2 ± μω √( (1 − ω) + μ²ω²/4 )    (∗∗)

Each μ gives one or two eigenvalues λ, and λ depends on ω. We want to find ω such that ρ(M_SOR) is as small as possible. We can show that this happens when the roots in (∗∗) are equal for the μ of maximum norm, that is, when

      (1 − ω) + μ²ω²/4 = 0  ⇔  μ²ω² − 4ω + 4 = 0
    ⇒ ω = (2/μ²) ( 1 ∓ √(1 − μ²) ) = 2 / ( 1 ± √(1 − μ²) ).

We look for the smallest such value of ω (0 < ω < 2), and so, taking μ = ρ(M_J):

    ω = 2 / ( 1 + √(1 − ρ(M_J)²) ).  ∎

Example: Consider A = [10 1; 1 10], so D = [10 0; 0 10], L = [0 0; 1 0], U = [0 1; 0 0]. Then:

    αD^{−1}L + (1/α)D^{−1}U = [ 0      1/(10α) ]
                              [ α/10   0       ]

    ⇒ λ² − 1/100 = 0, i.e. λ² = 1/100,

independent of α.

∴ A is consistently ordered, and the optimal ω is:

    ω = 2 / ( 1 + √(1 − ρ(M_J)²) ) = 2 / ( 1 + √(1 − 1/100) ) = 1.0025126    (same as in the earlier example).

∴ The fastest convergence rate is ρ(M_SOR) = ω − 1 = 0.0025126.

To summarize, the SOR method takes N = (1/ω)D + L and P = ((1/ω) − 1)D − U, so that A = N − P. The iterative scheme is:

    ~x^{k+1} = ( (1/ω)D + L )^{−1} ( ((1/ω) − 1)D − U ) ~x^k + ( (1/ω)D + L )^{−1} ~b
             = (D + ωL)^{−1} [ (1 − ω)D − ωU ] ~x^k + ω (D + ωL)^{−1} ~b.

Recall:
• SOR converges ⇒ |ω − 1| < 1, i.e. 0 < ω < 2.
• In general, SOR converges ⇔ ρ( (D + ωL)^{−1} [ (1 − ω)D − ωU ] ) < 1.
• ω = 1 → G-S method.

Remark: In particular, a tridiagonal matrix

    [ λ_1  ∗                  ]
    [  ∗   λ_2  ∗             ]
    [       ⋱    ⋱    ⋱      ]
    [            ∗  λ_{n−1} ∗ ]
    [                ∗    λ_n ]

is consistently ordered.

Example: Solve −u'' = f, u(0) = a, u(1) = b. Partition [0, 1] by x_0 = 0 < x_1 = h < x_2 = 2h < ··· < x_n = 1, and approximate u'' by ( u(x+h) − 2u(x) + u(x−h) ) / h². Then −u'' = f can be approximated by

    A [ u(x_1); ...; u(x_{n−1}) ] = ~b,

where

        1  [  2  −1           ]
    A = —— [ −1   2  −1       ]
        h² [      ⋱   ⋱   ⋱  ]
           [          −1   2  ]

which is tridiagonal and hence consistently ordered.
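A sketch checking the α-independence numerically for this tridiagonal matrix (our own helper; any two values of α should give the same spectrum):

    import numpy as np

    def split_jacobi_eigs(A, alpha):
        # eigenvalues of alpha * D^{-1} L + (1/alpha) * D^{-1} U
        D = np.diag(np.diag(A))
        L, U = np.tril(A, -1), np.triu(A, 1)
        B = alpha * np.linalg.solve(D, L) + (1.0 / alpha) * np.linalg.solve(D, U)
        return np.sort(np.linalg.eigvals(B))

    A = 2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)   # 1D Poisson matrix (h = 1)
    print(split_jacobi_eigs(A, 1.0))
    print(split_jacobi_eigs(A, 3.7))   # same eigenvalues: consistently ordered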

Examples of consistently ordered matrices

Example 1: Consider a block tridiagonal matrix of the form

    A = [ D_1    T_12                     ]
        [ T_21   D_2    T_23              ]
        [         ⋱      ⋱      ⋱        ]
        [              T_{p,p−1}   D_p    ]

where the D_i are diagonal matrices. Then A is consistently ordered.

Example 2: Another type of consistently ordered matrix:

    A = [ T_1    D_12                     ]
        [ D_21   T_2    D_23              ]
        [         ⋱      ⋱      ⋱        ]
        [              D_{p,p−1}   T_p    ]

where the T_i ∈ M_{n×n} are tridiagonal matrices. Then A is consistently ordered. To see this, it suffices to see that D^{−1}L + D^{−1}U and zD^{−1}L + (1/z)D^{−1}U are similar for all z ≠ 0 (with A = L + D + U):

    zD^{−1}L + (1/z)D^{−1}U = X ( D^{−1}L + D^{−1}U ) X^{−1},

where

    X = diag( I, zI, z²I, ..., z^{p−1}I ).

Theorem [D. Young]: Assume:
1. ω ∈ (0, 2);
2. M_JAC has only real eigenvalues;
3. β = ρ(M_JAC) < 1;
4. A is consistently ordered.

Then ρ(M_SOR,ω) < 1. In fact,

    ρ(M_SOR,ω) = 1 − ω + ω²β²/2 + ωβ √( 1 − ω + ω²β²/4 )    for 0 < ω ≤ ω_opt,
    ρ(M_SOR,ω) = ω − 1                                       for ω_opt ≤ ω < 2,

where ω_opt = 2 / ( 1 + √(1 − β²) ).

Proof: Complicated!

Convergence conditions for SOR

Theorem: If A is symmetric positive definite, then the SOR method converges for all 0 < ω < 2.

Theorem: If A is strictly diagonally dominant, then SOR converges for 0 < ω ≤ 1.

Proof: If A is SDD, then a_ii ≠ 0 and A is invertible (Theorem 1). The SOR method reads ~x^{k+1} = M_SOR ~x^k + ~c, where

    M_SOR = (D + ωL)^{−1} ( (1 − ω)D − ωU )    (A = L + D + U).

We need to show that if 0 < ω ≤ 1, then ρ(M_SOR) < 1. We will prove this by contradiction.

Suppose there exists an eigenvalue λ of M_SOR such that |λ| ≥ 1. Then det(λI − M_SOR) = 0. Also,

      det( λ(D + ωL) − ( (1 − ω)D − ωU ) ) = 0
    ⇔ λ^n · det( ( 1 − (1 − ω)/λ ) D + ωL + (ω/λ) U ) = 0
    ⇒ det(C) = 0,   where C = ( 1 − (1 − ω)/λ ) D + ωL + (ω/λ) U.

(The first line holds because det(λI − M_SOR) = det( (D + ωL)^{−1} ) · det( λ(D + ωL) − ((1 − ω)D − ωU) ), and det( (D + ωL)^{−1} ) ≠ 0.)

We now show that C is SDD, which by Theorem 1 gives det(C) ≠ 0, a contradiction. Since a_ii ≠ 0, and using |λ| ≥ 1 together with 0 ≤ 1 − ω < 1:

    |C_ii| = | 1 − (1 − ω)/λ | · |a_ii|
           ≥ ( 1 − (1 − ω)/|λ| ) |a_ii|
           ≥ ( 1 − (1 − ω) ) |a_ii| = ω |a_ii|
           > ω Σ_{j≠i} |a_ij|                                    [A is SDD]
           ≥ ω Σ_{j<i} |a_ij| + (ω/|λ|) Σ_{j>i} |a_ij|
           = Σ_{j≠i} |C_ij|.

So C is also SDD, and thus det(C) ≠ 0. (Contradiction.) ∴ All eigenvalues of M_SOR satisfy |λ| < 1. Thus ρ(M_SOR) < 1, and hence SOR converges. ∎