Lec 15&16-Slides
MA11004
SECTIONS 1 and 2
Dr. Jitendra Kumar
Professor
Department of Mathematics
Indian Institute of Technology Kharagpur
West Bengal 721302, India
Webpage: http://www.facweb.iitkgp.ac.in/~jkumar/
Gauss-Seidel Method
x^(k+1) = D^{-1}(b − L x^(k+1) − U x^(k))
⟹ (D + L) x^(k+1) = b − U x^(k)
⟹ x^(k+1) = −(D + L)^{-1} U x^(k) + (D + L)^{-1} b
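The update rule above can be written out directly; below is a minimal Python sketch of one Gauss-Seidel sweep (the function name and the list-of-rows representation are choices made here, not from the slides):

```python
# One Gauss-Seidel sweep for Ax = b: components are overwritten in place, so
# rows below row i already see the new values x_j^(k+1) for j < i -- this is
# exactly the L x^(k+1) term in the derivation above.
def gauss_seidel_step(A, b, x):
    n = len(b)
    x = list(x)  # copy the current iterate; updated entries are reused immediately
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Calling this repeatedly reproduces the fixed-point iteration x^(k+1) = −(D+L)^{-1} U x^(k) + (D+L)^{-1} b.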
5𝑥 + 𝑦 + 2𝑧 = 13
𝑥 + 3𝑦 + 𝑧 = 12
−𝑥 + 2𝑦 + 4𝑧 = 8
Compute the approximation x^(1) after one iteration of the Jacobi and Gauss-Seidel methods. Take x^(0) = [1, 1, 1]^T.
Jacobi method:
x^(1) = (1/5)(13 − y^(0) − 2z^(0)) = (1/5)(13 − 1 − 2) = 2
y^(1) = (1/3)(12 − x^(0) − z^(0)) = (1/3)(12 − 1 − 1) = 10/3
z^(1) = (1/4)(8 + x^(0) − 2y^(0)) = (1/4)(8 + 1 − 2) = 7/4
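This hand computation can be checked with a short script; `Fraction` keeps the arithmetic exact (a sketch, with names chosen here for illustration):

```python
from fractions import Fraction as F

# One Jacobi step: every new component is computed from the old iterate x^(0) only.
def jacobi_step(A, b, x):
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

A = [[F(5), F(1), F(2)], [F(1), F(3), F(1)], [F(-1), F(2), F(4)]]
b = [F(13), F(12), F(8)]
x1 = jacobi_step(A, b, [F(1), F(1), F(1)])
print(x1)  # [Fraction(2, 1), Fraction(10, 3), Fraction(7, 4)]
```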
Gauss-Seidel method:
x^(1) = (1/5)(13 − y^(0) − 2z^(0)) = (1/5)(13 − 1 − 2) = 2
y^(1) = (1/3)(12 − x^(1) − z^(0)) = (1/3)(12 − 2 − 1) = 3
z^(1) = (1/4)(8 + x^(1) − 2y^(1)) = (1/4)(8 + 2 − 6) = 1
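The exact solution of this system is (18/11, 35/11, 9/11), which one can verify by substitution. Comparing it against the two first iterates shows that, for this particular system, the Gauss-Seidel iterate is already closer in the maximum norm (a sketch with exact fractions):

```python
from fractions import Fraction as F

exact = [F(18, 11), F(35, 11), F(9, 11)]   # check: 5x + y + 2z = 143/11 = 13, etc.
x1_jacobi = [F(2), F(10, 3), F(7, 4)]      # Jacobi x^(1) from above
x1_gs = [F(2), F(3), F(1)]                 # Gauss-Seidel x^(1) from above

err_jacobi = max(abs(a - e) for a, e in zip(x1_jacobi, exact))   # 41/44
err_gs = max(abs(a - e) for a, e in zip(x1_gs, exact))           # 4/11
```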
Jacobi method in matrix form: G_J = −D^{-1}(L + U), and x^(1) = G_J x^(0) + D^{-1} b gives componentwise
x^(1) = −3/5 + 13/5 = 2
y^(1) = −2/3 + 4 = 10/3
z^(1) = −1/4 + 2 = 7/4
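The same step in matrix form can be replayed exactly, reproducing the component-wise splits −3/5 + 13/5 = 2 and so on (a sketch; G_J and D^{-1}b are entered directly from the derivation above):

```python
from fractions import Fraction as F

# G_J = -D^{-1}(L + U) and D^{-1} b for the system above.
GJ = [[F(0),     F(-1, 5), F(-2, 5)],
      [F(-1, 3), F(0),     F(-1, 3)],
      [F(1, 4),  F(-1, 2), F(0)]]
Dinv_b = [F(13, 5), F(4), F(2)]
x0 = [F(1), F(1), F(1)]

Gx = [sum(GJ[i][j] * x0[j] for j in range(3)) for i in range(3)]  # [-3/5, -2/3, -1/4]
x1 = [Gx[i] + Dinv_b[i] for i in range(3)]                        # [2, 10/3, 7/4]
```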
Gauss-Seidel method in matrix form: for the same system,

D + L = [5 0 0; 1 3 0; −1 2 4]
Using cofactors, det(D + L) = 5 · 3 · 4 = 60 and

(D + L)^{-1} = (1/60) [12 −4 5; 0 20 −10; 0 0 15]^T

⟹ (D + L)^{-1} = [1/5 0 0; −1/15 1/3 0; 1/12 −1/6 1/4]
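Because D + L is lower triangular, its inverse can also be obtained by forward substitution, one column of the identity at a time; the sketch below confirms the matrix on the slide:

```python
from fractions import Fraction as F

# Invert a lower-triangular matrix column by column via forward substitution.
def lower_tri_inverse(M):
    n = len(M)
    inv = [[F(0)] * n for _ in range(n)]
    for c in range(n):                  # solve M y = e_c for each unit vector e_c
        for r in range(n):
            rhs = F(1) if r == c else F(0)
            rhs -= sum(M[r][k] * inv[k][c] for k in range(r))
            inv[r][c] = rhs / M[r][r]
    return inv

DL = [[F(5), F(0), F(0)], [F(1), F(3), F(0)], [F(-1), F(2), F(4)]]
DL_inv = lower_tri_inverse(DL)
```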
With U = [0 1 2; 0 0 1; 0 0 0] and b = [13, 12, 8]^T, the Gauss-Seidel iteration becomes

x^(k+1) = −[1/5 0 0; −1/15 1/3 0; 1/12 −1/6 1/4] [0 1 2; 0 0 1; 0 0 0] x^(k) + [1/5 0 0; −1/15 1/3 0; 1/12 −1/6 1/4] [13; 12; 8]
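Iterating this matrix form drives x^(k) to the exact solution (18/11, 35/11, 9/11); a minimal float sketch:

```python
# Gauss-Seidel in fixed-point form: x <- c - G x, where G = (D+L)^{-1} U and
# c = (D+L)^{-1} b, using the inverse computed above.
DL_inv = [[1/5, 0, 0], [-1/15, 1/3, 0], [1/12, -1/6, 1/4]]
U = [[0, 1, 2], [0, 0, 1], [0, 0, 0]]
b = [13, 12, 8]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

G = [[sum(DL_inv[i][k] * U[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
c = matvec(DL_inv, b)

x = [1.0, 1.0, 1.0]
for _ in range(60):          # the spectral radius is small, so 60 sweeps is plenty
    Gx = matvec(G, x)
    x = [c[i] - Gx[i] for i in range(3)]
```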
Starting guess: [1, 1, 1]^T
[Iteration tables for the Jacobi and Gauss-Seidel methods not reproduced here.]
The iterative methods converge for any initial guess if and only if all the eigenvalues of the iteration matrix G are less than 1 in absolute value, i.e. if and only if the spectral radius (largest absolute eigenvalue) satisfies ρ(G) < 1.
Sketch of the proof: Suppose the iteration matrix G is diagonalizable. Then there exists a matrix P such that

G = P D P^{-1}, so that G^m = P D^m P^{-1} with D^m = diag(λ_1^m, λ_2^m, …, λ_n^m).

For the error e_k = x^(k) − x:

e_k = G(x^(k−1) − x) ⟹ e_k = G e_{k−1} ⟹ e_k = G^k e_0,

and G^k e_0 → 0 for every e_0 precisely when |λ_i| < 1 for all i, i.e. when ρ(G) < 1.
Remark: If the spectral radius of G is small, then the convergence is rapid; if the spectral radius of G is close to 1, the convergence is slow.
(ii) ‖λx‖ = |λ| ‖x‖, ∀ λ ∈ ℝ
(iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖
Matrix Norm: Let A, B ∈ ℝ^{n×n}. Similar to the vector norm, a matrix norm satisfies the following properties:
(i) ‖A‖ > 0 for A ≠ 0 and ‖A‖ = 0 for A = 0
(ii) ‖λA‖ = |λ| ‖A‖, ∀ λ ∈ ℝ
(iii) ‖A + B‖ ≤ ‖A‖ + ‖B‖
Note: For any vector norm, we can also define a corresponding matrix norm (called the induced matrix norm) as

‖A‖ = max_{x≠0} ‖Ax‖ / ‖x‖ ⟹ ‖Ax‖ ≤ ‖A‖ ‖x‖
Row Sum Norm: ‖A‖_∞ = max_i Σ_{j=1}^{n} |a_ij|
Column Sum Norm: ‖A‖_1 = max_j Σ_{i=1}^{n} |a_ij|
Frobenius Norm: ‖A‖_F = ( Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij² )^{1/2}
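These norms are straightforward to compute for a matrix stored as a list of rows (a minimal sketch; function names are chosen here for illustration):

```python
import math

def row_sum_norm(A):      # ||A||_inf: maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

def column_sum_norm(A):   # ||A||_1: maximum absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def frobenius_norm(A):    # ||A||_F: square root of the sum of all squared entries
    return math.sqrt(sum(a * a for row in A for a in row))
```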
1. If any norm of the iteration matrix G is less than 1, i.e. ‖G‖ < 1, then the iterative methods converge for any initial guess.
2. If A is strictly diagonally dominant by rows (or by columns), then the Jacobi and Gauss-Seidel methods converge for any initial guess.
Note that Gx = λx ⟹ ‖λx‖ = ‖Gx‖
⟹ |λ| ‖x‖ ≤ ‖G‖ ‖x‖
⟹ |λ| ≤ ‖G‖
⟹ ρ(G) ≤ ‖G‖
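The bound ρ(G) ≤ ‖G‖ can be seen on a small matrix with known eigenvalues (the matrix below is chosen here for illustration; it is not from the slides): G = [0.5 0.25; 0.25 0.5] has eigenvalues 0.75 and 0.25 for the eigenvectors (1, 1) and (1, −1), so ρ(G) = 0.75.

```python
import math

G = [[0.5, 0.25], [0.25, 0.5]]
rho = 0.75  # largest absolute eigenvalue, known in closed form for this symmetric G

row_sum = max(sum(abs(a) for a in row) for row in G)     # ||G||_inf = 0.75
frob = math.sqrt(sum(a * a for row in G for a in row))   # ||G||_F = sqrt(0.625)
```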
Jacobi scheme: x^(k+1) = D^{-1}(b − (L + U) x^(k)) = (I − D^{-1}A) x^(k) + D^{-1} b

The scheme will converge if ‖I − D^{-1}A‖_∞ < 1 (row sum norm)
⟺ max_i (1/|a_ii|) Σ_{j=1, j≠i}^{n} |a_ij| < 1 ⟺ (1/|a_ii|) Σ_{j=1, j≠i}^{n} |a_ij| < 1, ∀i ⟺ Σ_{j=1, j≠i}^{n} |a_ij| < |a_ii|, ∀i
If A is strictly diagonally dominant by rows then ‖I − D^{-1}A‖_∞ < 1 and hence the Jacobi method converges. One can argue similarly if A is diagonally dominant by columns; the convergence proof for Gauss-Seidel follows similar steps.
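For the system used throughout these slides, both the dominance condition and the equivalent norm condition can be checked mechanically (a sketch):

```python
# Strict diagonal dominance by rows, and the norm condition ||I - D^{-1}A||_inf < 1,
# for the coefficient matrix of the running example.
A = [[5, 1, 2], [1, 3, 1], [-1, 2, 4]]
n = len(A)

strictly_dominant = all(
    abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)
)

# I - D^{-1}A has zeros on the diagonal and -a_ij / a_ii off the diagonal.
M = [[(1 if i == j else 0) - A[i][j] / A[i][i] for j in range(n)] for i in range(n)]
norm_inf = max(sum(abs(m) for m in row) for row in M)
```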
Dr. Jitendra Kumar – IIT KHARAGPUR 32
Example: Consider the following system of equations
5𝑥 + 𝑦 + 2𝑧 = 13
𝑥 + 3𝑦 + 𝑧 = 12
−𝑥 + 2𝑦 + 4𝑧 = 8
The coefficient matrix is strictly diagonally dominant by rows and hence both the Jacobi and Gauss-Seidel methods converge for any initial guess.

Example: Consider the following system of equations
2𝑥 + 𝑦 + 𝑧 = 4
𝑥 + 2𝑦 + 𝑧 = 4
𝑥 + 𝑦 + 2𝑧 = 4
Solution: The coefficient matrix A = [2 1 1; 1 2 1; 1 1 2] is not strictly diagonally dominant (each diagonal entry equals the sum of the off-diagonal magnitudes in its row), so we check a norm of the iteration matrix instead.
L = [0 0 0; 1 0 0; 1 1 0],  U = [0 1 1; 0 0 1; 0 0 0],  D = [2 0 0; 0 2 0; 0 0 2]
G_GS = −(L + D)^{-1} U = −[2 0 0; 1 2 0; 1 1 2]^{-1} [0 1 1; 0 0 1; 0 0 0]
     = −(1/8) [4 0 0; −2 4 0; −1 −2 4] [0 1 1; 0 0 1; 0 0 0]
     = −(1/8) [0 4 4; 0 −2 2; 0 −1 −3]

‖G_GS‖_F = √(1/4 + 1/4 + 1/16 + 1/16 + 1/64 + 9/64) = √((32 + 8 + 10)/64) = √(50/64) < 1

Since ‖G_GS‖_F < 1, the Gauss-Seidel method converges for this system for any initial guess.
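The Frobenius-norm computation can be replayed exactly with fractions (a sketch):

```python
from fractions import Fraction as F

# G_GS = -(L+D)^{-1} U for this example, with (L+D)^{-1} = (1/8)[4 0 0; -2 4 0; -1 -2 4].
LD_inv = [[F(4, 8), F(0), F(0)], [F(-2, 8), F(4, 8), F(0)], [F(-1, 8), F(-2, 8), F(4, 8)]]
U = [[F(0), F(1), F(1)], [F(0), F(0), F(1)], [F(0), F(0), F(0)]]

G = [[-sum(LD_inv[i][k] * U[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
frob_sq = sum(g * g for row in G for g in row)   # ||G_GS||_F^2 = 50/64
```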
Spectral radii for this system: Jacobi method ρ_J = 0.4860, Gauss-Seidel method ρ_GS = 1/6 ≈ 0.1667.
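The spectral radius can be estimated numerically with the power method; the sketch below recovers ρ_J ≈ 0.486 for this system (the normalization scheme is a choice made here):

```python
# Power iteration on G_J = -D^{-1}(L + U) for the system
# 5x + y + 2z = 13, x + 3y + z = 12, -x + 2y + 4z = 8.
G = [[0, -1/5, -2/5], [-1/3, 0, -1/3], [1/4, -1/2, 0]]

v = [1.0, 1.0, 1.0]
for _ in range(200):
    v = [sum(G[i][j] * v[j] for j in range(3)) for i in range(3)]
    m = max(abs(a) for a in v)
    v = [a / m for a in v]   # rescale so the iterate never under/overflows

rho_est = m   # converges to rho(G_J), the largest absolute eigenvalue
```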
Gauss-Seidel, by construction, seems to be faster than the Jacobi method. However, this is not true in general: there are examples where Jacobi converges faster than Gauss-Seidel.
If we change the order of the equations, the iterative methods may not converge. For example, consider the reordered system:
𝑥 + 3𝑦 + 𝑧 = 12
−𝑥 + 2𝑦 + 4𝑧 = 8
5𝑥 + 𝑦 + 2𝑧 = 13
Example 1:
A = [5 0 1; 6 4 2; −1 5 2]
ρ_J = 1.3121, ρ_GS = 0.4000
Example 2:
A = [−3 3 −6; −4 7 −8; 5 7 −9]
ρ_J = 0.8133, ρ_GS = 1.1111
Example 4:
A = [7 6 9; 4 5 −4; −7 −3 8]
ρ_J = 0.6411, ρ_GS = 0.7746
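Example 2 can be replayed directly: with b = 0 the iterates are exactly G^k x^(0), so the Jacobi iterates (ρ_J ≈ 0.81 < 1) shrink toward zero while the Gauss-Seidel iterates (ρ_GS ≈ 1.11 > 1) blow up (a sketch):

```python
# Iterate Jacobi and Gauss-Seidel sweeps on A x = 0 for Example 2; the unique
# solution is x = 0, so any growth in the iterates signals divergence.
A = [[-3, 3, -6], [-4, 7, -8], [5, 7, -9]]
n = 3

def jacobi_sweep(x):
    return [-sum(A[i][j] * x[j] for j in range(n) if j != i) / A[i][i] for i in range(n)]

def gauss_seidel_sweep(x):
    x = list(x)
    for i in range(n):
        x[i] = -sum(A[i][j] * x[j] for j in range(n) if j != i) / A[i][i]
    return x

xj = [1.0, 1.0, 1.0]
xg = [1.0, 1.0, 1.0]
for _ in range(120):
    xj = jacobi_sweep(xj)
    xg = gauss_seidel_sweep(xg)

jacobi_size = max(abs(a) for a in xj)   # decays roughly like 0.8133^k
gs_size = max(abs(a) for a in xg)       # grows roughly like 1.1111^k
```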
The iterative methods converge if and only if the spectral radius (largest absolute eigenvalue) of G is less than 1, i.e. ρ(G) < 1.
Sufficient Conditions
1. If any norm of the iteration matrix G is less than 1, i.e. ‖G‖ < 1, then the iterative methods converge for any initial guess.
2. If A is strictly diagonally dominant by rows (or by columns), then the Jacobi and Gauss-Seidel methods converge for any initial guess.
Thank You