
Introduction to Numerical Analysis

(Linear Systems: Iterative Methods)

MA 214, Spring 2023-24.

MA 214 - NA Spring 2023-24 1 / 46


Vector Norms

Definition (Vector Norm)


A vector norm on a vector space V is a function ∥ · ∥ : V → [0, ∞)
having the following properties:
1. ∥x∥ ≥ 0 for all x ∈ V.
2. ∥x∥ = 0 if and only if x = 0.
3. ∥αx∥ = |α| ∥x∥ for all x ∈ V and for all α ∈ R.
4. ∥x + y∥ ≤ ∥x∥ + ∥y∥ for all x, y ∈ V.



Vector Norms (contd.)

Notation: x = (x1, x2, · · · , xn)^T ∈ Rn.
Example:
1. The Euclidean norm on Rn is denoted by ∥ · ∥2, and is defined by
   ∥x∥2 = ( Σ_{i=1}^{n} |xi|^2 )^{1/2}.
2. The l∞ norm, also called the maximum norm, on Rn is denoted by ∥ · ∥∞, and is defined by
   ∥x∥∞ = max { |x1|, |x2|, · · · , |xn| }.
3. The l1 norm on Rn is denoted by ∥ · ∥1, and is defined by
   ∥x∥1 = |x1| + |x2| + · · · + |xn|.
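As a quick check, the three norms above can be computed directly; a minimal sketch using NumPy (assumed available; the vector is an arbitrary illustrative choice):

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

# Euclidean (l2) norm: square root of the sum of squared entries
l2 = np.sqrt(np.sum(np.abs(x) ** 2))
# l-infinity (maximum) norm: largest entry in absolute value
linf = np.max(np.abs(x))
# l1 norm: sum of the absolute values of the entries
l1 = np.sum(np.abs(x))

print(l2, linf, l1)  # 5.0 4.0 7.0
```

The same values are returned by `np.linalg.norm(x, 2)`, `np.linalg.norm(x, np.inf)`, and `np.linalg.norm(x, 1)`.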
Matrix Norms Subordinate to a Vector Norm

An important matrix norm

Definition (Matrix Norm subordinate to a vector norm)


Let ∥ · ∥ be a vector norm on Rn . The matrix norm subordinate to
the vector norm ∥ · ∥ is defined by

∥A∥ := sup {∥Ax∥ : x ∈ Rn , ∥x∥ = 1} for A ∈ Mn (R).



Matrix Norms Subordinate to a Vector Norm (contd.)

The matrix norm subordinate to a vector norm has additional properties.


Theorem
Let ∥ · ∥ be a matrix norm subordinate to a vector norm. Then
1. ∥Ax∥ ≤ ∥A∥ ∥x∥ for all x ∈ Rn.
2. ∥I∥ = 1, where I is the identity matrix.
3. ∥AB∥ ≤ ∥A∥ ∥B∥ for all A, B ∈ Mn(R).


Matrix Norms Subordinate to a Vector Norm (contd.)

Theorem (Matrix norm subordinate to the maximum norm)


The matrix norm subordinate to the l∞ norm (also called the maximum norm) on Rn
∥x∥∞ = max { |x1|, |x2|, · · · , |xn| }
is denoted by ∥A∥∞ and is given by

∥A∥∞ = max_{1≤i≤n} Σ_{j=1}^{n} |aij|.

The norm ∥A∥∞ given by the above formula is called the maximum-of-row-sums norm.
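The maximum-of-row-sums formula can be evaluated in one line; a small sketch (NumPy assumed, matrix chosen for illustration), cross-checked against NumPy's built-in matrix norm:

```python
import numpy as np

A = np.array([[5.0, 7.0],
              [7.0, 10.0]])

# maximum-of-row-sums norm: sum |a_ij| along each row, take the largest sum
norm_inf = np.max(np.sum(np.abs(A), axis=1))

print(norm_inf)                    # 17.0
print(np.linalg.norm(A, np.inf))   # same value via NumPy's built-in
```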
Matrix Norms Subordinate to a Vector Norm (contd.)

Theorem (Matrix norm subordinate to the Euclidean norm)


The matrix norm subordinate to the l2 norm (also called the Euclidean norm) on Rn
∥x∥2 = ( Σ_{i=1}^{n} |xi|^2 )^{1/2}
is denoted by ∥A∥2 and is given by

∥A∥2 = ( max_{1≤i≤n} |λi| )^{1/2},

where λ1, λ2, · · · , λn are the eigenvalues of the matrix A^T A.
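The theorem can be verified numerically: compute the eigenvalues of A^T A and compare the resulting value with NumPy's built-in l2 matrix norm. A minimal sketch (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 2.0]])

# A^T A is symmetric, so eigvalsh (for symmetric matrices) applies
lam = np.linalg.eigvalsh(A.T @ A)
norm2 = np.sqrt(np.max(np.abs(lam)))

print(norm2)                 # sqrt of the largest eigenvalue of A^T A
print(np.linalg.norm(A, 2))  # NumPy's subordinate l2 norm, same value
```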


Condition Number of a Matrix

Theorem
Let A be an invertible n × n matrix. Let x and x̃ be the solutions of the
systems
Ax = b and Ax̃ = b̃,
respectively, where b and b̃ are given vectors. Then

∥x − x̃∥ / ∥x∥ ≤ ∥A∥ ∥A⁻¹∥ · ∥b − b̃∥ / ∥b∥.


Condition Number of a Matrix (contd.)
Theorem
Let A and à be non-singular square matrices, and b ̸= 0.
If x and x̃ are the solutions of the systems
Ax = b and Ãx̃ = b,
respectively, then
∥x̃ − x∥ / ∥x̃∥ ≤ ∥A∥ ∥A⁻¹∥ · ∥A − Ã∥ / ∥A∥.


Condition Number of a Matrix (contd.)

Definition
Let A be an n × n invertible matrix. Let a matrix norm be given that is
subordinate to a vector norm. Then the condition number of the matrix
A is defined as
κ(A) := ∥A∥ ∥A−1 ∥.



Condition Number of a Matrix (contd.)

∥x − x̃∥ / ∥x∥ ≤ ∥A∥ ∥A⁻¹∥ · ∥b − b̃∥ / ∥b∥,
∥x̃ − x∥ / ∥x̃∥ ≤ ∥A∥ ∥A⁻¹∥ · ∥A − Ã∥ / ∥A∥.

It is clear that if the condition number is small, then the relative error in the solution will also be small whenever the relative error in the right-hand side vector is small.
On the other hand, if the condition number is large, then the relative error in the solution could be very large even though the relative error in the right-hand side vector is small.
Condition Number of a Matrix (contd.)

Example: Consider the system
5x1 + 7x2 = 0.7,
7x1 + 10x2 = 1.
Solution: x = (0, 0.1)^T.
The perturbed system with RHS vector b̃ = (0.69, 1.01)^T, given by
5x1 + 7x2 = 0.69,
7x1 + 10x2 = 1.01,
has the solution x̃ = (−0.17, 0.22)^T.
The relative error in the solution in the l∞ vector norm is ∥x − x̃∥∞ / ∥x∥∞ = 1.7.
The relative error in the RHS vector is ∥b − b̃∥∞ / ∥b∥∞ = 0.01.
The condition number of the coefficient matrix of the system is 289.
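The numbers in this example can be reproduced directly; a sketch using NumPy (assumed available), with the data taken from the slide:

```python
import numpy as np

A = np.array([[5.0, 7.0],
              [7.0, 10.0]])
b = np.array([0.7, 1.0])
b_pert = np.array([0.69, 1.01])

x = np.linalg.solve(A, b)            # exact solution, approx (0, 0.1)
x_pert = np.linalg.solve(A, b_pert)  # perturbed solution, approx (-0.17, 0.22)

# relative errors in the l-infinity norm
rel_err_x = np.linalg.norm(x - x_pert, np.inf) / np.linalg.norm(x, np.inf)
rel_err_b = np.linalg.norm(b - b_pert, np.inf) / np.linalg.norm(b, np.inf)

# condition number kappa(A) = ||A|| ||A^{-1}|| in the l-infinity norm
kappa = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)

print(rel_err_x, rel_err_b, kappa)  # approx 1.7, 0.01, 289
```

A 1% perturbation in b produces a 170% change in x, consistent with the bound κ(A) · 0.01 = 2.89.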


Condition Number of a Matrix (contd.)
Definition
A matrix with a large condition number is said to be ill conditioned. A
matrix with a small condition number is said to be well conditioned.
Example: The Hilbert matrix of order n is given by

Hn = [ 1        1/2      1/3      · · ·  1/n
       1/2      1/3      1/4      · · ·  1/(n+1)
       ·                          · · ·
       1/n      1/(n+1)  1/(n+2)  · · ·  1/(2n−1) ];

its (i, j) entry is 1/(i + j − 1). For n = 4,

κ(H4) = ∥H4∥∞ ∥H4⁻¹∥∞ = (25/12) · 13620 = 28375 ≈ 28000.
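The condition number of H4 can be checked numerically; a sketch (NumPy assumed) building the matrix from the entry formula 1/(i + j − 1):

```python
import numpy as np

n = 4
# Hilbert matrix: with 0-based indices i, j the entry is 1 / (i + j + 1)
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# condition number in the l-infinity (maximum-of-row-sums) norm
kappa = np.linalg.norm(H, np.inf) * np.linalg.norm(np.linalg.inv(H), np.inf)

print(kappa)  # approx 28375
```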
Condition Number of a Matrix (contd.)

Theorem
Let A ∈ Mn (R) be non-singular. Then, for any singular n × n matrix B,
we have
1/κ(A) ≤ ∥A − B∥ / ∥A∥.




Proof: We have

1/κ(A) = 1 / (∥A∥ ∥A⁻¹∥) = 1 / ( ∥A∥ max_{x≠0} ∥A⁻¹x∥/∥x∥ ) ≤ (1/∥A∥) · ∥y∥ / ∥A⁻¹y∥,

where y ≠ 0 is arbitrary. Take y = Az for an arbitrary nonzero vector z. Then we get

1/κ(A) ≤ (1/∥A∥) · ∥Az∥ / ∥z∥.
Proof (contd.): Let z ≠ 0 be such that Bz = 0 (this is possible since B is singular). Then Az = (A − B)z, and we get

1/κ(A) ≤ ∥(A − B)z∥ / (∥A∥ ∥z∥) ≤ ∥A − B∥ ∥z∥ / (∥A∥ ∥z∥) = ∥A − B∥ / ∥A∥.


Iterative Methods for Linear Systems

A is diagonal ⇒ the linear system Ax = b can be solved easily.


Otherwise?
First rewrite the matrix A as
A = D − C,
where D, C ∈ Mn(R) are such that

D = [ a11  0    · · ·  0
      0    a22  · · ·  0
      ...
      0    0    · · ·  ann ]   and   −C = A − D.
Iterative Methods: Jacobi Method (contd.)

Ax = b ⇒ Dx = Cx + b.
Choose (arbitrarily) some specific value for x, say x = x(0) , then the
resulting system
Dx = Cx(0) + b
can be readily solved.
Let us call the solution of this system x(1). That is,
Dx(1) = Cx(0) + b.


Iterative Methods: Jacobi Method (contd.)

We have Dx(1) = Cx(0) + b.


Similarly, we can compute
Dx(2) = Cx(1) + b.
Repeat this procedure to get a general iterative procedure as
Dx(k+1) = Cx(k) + b, k = 0, 1, 2, · · · .
If D is invertible, then we can write
x(k+1) = Bx(k) + c, k = 0, 1, 2, · · · ,
where B = D⁻¹C and c = D⁻¹b. This iterative procedure is called the
Jacobi method.
Iterative Methods: Jacobi Method (contd.)
Let us illustrate this method in the case of 3 × 3 system
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3
When the diagonal elements of this system are non-zero, we can rewrite
the above equation as
x1 = (1/a11)(b1 − a12 x2 − a13 x3),
x2 = (1/a22)(b2 − a21 x1 − a23 x3),
x3 = (1/a33)(b3 − a31 x1 − a32 x2).
Iterative Methods: Jacobi Method (contd.)
Take x(0) = (x1(0), x2(0), x3(0))^T to be an initial guess of the true solution x.
Define a sequence of iterates (for k = 0, 1, 2, · · · ) by

x1(k+1) = (1/a11)(b1 − a12 x2(k) − a13 x3(k)),
x2(k+1) = (1/a22)(b2 − a21 x1(k) − a23 x3(k)),
x3(k+1) = (1/a33)(b3 − a31 x1(k) − a32 x2(k)),

which is the Jacobi iterative procedure for the 3 × 3 system.
Will the Jacobi iterative sequence converge?
Iterative Methods: Jacobi Method (contd.)

Example: Consider the system


6x1 + x2 + 2x3 = −2,
x1 + 4x2 + 0.5x3 = 1,
−x1 + 0.5x2 − 4x3 = 0.
The Jacobi iteration is

x1(k+1) = (1/6)(−2 − x2(k) − 2x3(k)),
x2(k+1) = (1/4)(1 − x1(k) − 0.5x3(k)),
x3(k+1) = (−1/4)(x1(k) − 0.5x2(k)).
Iterative Methods: Jacobi Method (contd.)

The exact solution (up to 6-digit rounding) of this system is


x ≈ (−0.441176, 0.341176, 0.152941).
Choosing the initial guess x(0) = (0, 0, 0),
x(1) ≈ (−0.333333, 0.250000, 0.000000),
x(2) ≈ (−0.375000, 0.333333, 0.114583),
x(3) ≈ (−0.427083, 0.329427, 0.135417),
x(4) ≈ (−0.433377, 0.339844, 0.147949),
x(5) ≈ (−0.439290, 0.339851, 0.150825),
and so on.
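These iterates can be reproduced with the matrix form x(k+1) = Bx(k) + c; a minimal sketch (NumPy assumed), using the system from the slide:

```python
import numpy as np

A = np.array([[ 6.0, 1.0,  2.0],
              [ 1.0, 4.0,  0.5],
              [-1.0, 0.5, -4.0]])
b = np.array([-2.0, 1.0, 0.0])

D = np.diag(np.diag(A))    # diagonal part of A
C = D - A                  # so that A = D - C
B = np.linalg.solve(D, C)  # B = D^{-1} C
c = np.linalg.solve(D, b)  # c = D^{-1} b

x = np.zeros(3)            # initial guess x(0) = (0, 0, 0)
for k in range(50):
    x = B @ x + c

print(x)  # approx (-0.441176, 0.341176, 0.152941)
```

After 50 iterations the iterate agrees with the exact solution to machine precision.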
Iterative Methods: Jacobi Method (contd.)

Example: Consider the system

x1 + 4x2 + 0.5x3 = 1,
6x1 + x2 + 2x3 = −2,
−x1 + 0.5x2 − 4x3 = 0.

which is exactly the same system as the one discussed above; the only difference is that the first and second equations have been interchanged. Hence, the exact solution is the same as given above:

x ≈ (−0.441176, 0.341176, 0.152941).
Iterative Methods: Jacobi Method (contd.)
The Jacobi iterative sequence for this system is given by
x1(k+1) = 1 − 4x2(k) − 0.5x3(k),
x2(k+1) = −2 − 6x1(k) − 2x3(k),
x3(k+1) = (−1/4)(x1(k) − 0.5x2(k)).
Choosing the initial guess x(0) = (0, 0, 0)
x(1) ≈ (1, −2, 0),
x(2) ≈ (9, −8, −0.5),
x(3) ≈ (33.25, −55, −3.25),
x(4) ≈ (222.625, −195, −15.1875),
x(5) ≈ (788.59375, −1307.375, −80.03125), and so on.
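The divergence is easy to observe numerically; the same Jacobi sketch as before, applied to the row-interchanged system (NumPy assumed), reproduces the exploding iterates from the slide:

```python
import numpy as np

A = np.array([[ 1.0, 4.0,  0.5],   # rows 1 and 2 interchanged
              [ 6.0, 1.0,  2.0],
              [-1.0, 0.5, -4.0]])
b = np.array([1.0, -2.0, 0.0])

D = np.diag(np.diag(A))
B = np.linalg.solve(D, D - A)   # iteration matrix B = D^{-1} C
c = np.linalg.solve(D, b)       # c = D^{-1} b

x = np.zeros(3)
for k in range(5):
    x = B @ x + c

print(x)  # (788.59375, -1307.375, -80.03125): the iterates blow up
```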
Iterative Methods: Jacobi Method (contd.)
Define the error in the kth iterate x(k) compared to the exact solution by
e(k) = x − x(k) .
Recall the iterative procedure is given in the form
x(k+1) = Bx(k) + c, k = 0, 1, 2, · · · .
From this it follows easily that e(k) satisfies the system
e(k+1) = Be(k) .
Taking norms in the last equation we get
∥e(k+1) ∥ = ∥Be(k) ∥ ≤ ∥B∥∥e(k) ∥ ≤ · · · ≤ ∥B∥k+1 ∥e(0) ∥.
Thus, when ∥B∥ < 1, the iteration method always converges for any
initial guess x(0) .
Iterative Methods: Jacobi Method (contd.)
The question then is when does the matrix B in iterative procedure
x(k+1) = Bx(k) + c, k = 0, 1, 2, · · · ,
satisfy ∥B∥ < 1?
For this, we need a notion called diagonally dominant matrices which
we will define now.
Definition (Diagonally Dominant Matrices)
A matrix A is said to be diagonally dominant if it satisfies the inequality

Σ_{j=1, j≠i}^{n} |aij| < |aii|,   i = 1, 2, · · · , n.
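The row-wise inequality in the definition translates directly into code; a small sketch (NumPy assumed; the helper name `is_diagonally_dominant` is our own), tried on the two systems from the examples:

```python
import numpy as np

def is_diagonally_dominant(A):
    """Strict row-wise test: sum of off-diagonal |a_ij| < |a_ii| for every i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(off_diag_sums < diag))

A1 = np.array([[ 6.0, 1.0,  2.0],
               [ 1.0, 4.0,  0.5],
               [-1.0, 0.5, -4.0]])
A2 = A1[[1, 0, 2]]   # the same rows with the first two interchanged

print(is_diagonally_dominant(A1))  # True  (Jacobi converged)
print(is_diagonally_dominant(A2))  # False (Jacobi diverged)
```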




Iterative Methods: Jacobi Method (contd.)

Theorem
If the coefficient matrix A is diagonally dominant, then the Jacobi
method

x(k+1) = Bx(k) + c, k = 0, 1, 2, · · · ,

converges.
Proof: Recall that the components of the Jacobi iterative sequence x(k) are given by

xi(k+1) = (1/aii) ( bi − Σ_{j=1, j≠i}^{n} aij xj(k) ),   i = 1, 2, · · · , n.
Iterative Methods: Jacobi Method (contd.)

Each component of the error satisfies

ei(k+1) = − Σ_{j=1, j≠i}^{n} (aij / aii) ej(k),   i = 1, 2, · · · , n.

This gives

|ei(k+1)| ≤ ( Σ_{j=1, j≠i}^{n} |aij / aii| ) ∥e(k)∥∞.

Define

μ = max_{1≤i≤n} Σ_{j=1, j≠i}^{n} |aij / aii|.
Iterative Methods: Jacobi Method (contd.)

We have
|ei(k+1)| ≤ μ ∥e(k)∥∞ for all i = 1, 2, · · · , n,
and hence
∥e(k+1)∥∞ ≤ μ ∥e(k)∥∞.

Iterating the last inequality, we get

∥e(k+1)∥∞ ≤ μ^{k+1} ∥e(0)∥∞.

The matrix A is diagonally dominant if and only if μ < 1. Therefore, if A is diagonally dominant, the Jacobi method converges.
Iterative Methods: Jacobi Method (contd.)

Recall that we used the Jacobi iterative method to compute solutions to the following two systems:

6x1 + x2 + 2x3 = −2,
x1 + 4x2 + 0.5x3 = 1,
−x1 + 0.5x2 − 4x3 = 0,

and

x1 + 4x2 + 0.5x3 = 1,
6x1 + x2 + 2x3 = −2,
−x1 + 0.5x2 − 4x3 = 0.

The coefficient matrix of the first system is diagonally dominant and the Jacobi sequence converged; the second is not diagonally dominant and the sequence diverged.
Iterative Methods: Mathematical Error

Let x∗ denote the computed solution using some method.


The mathematical error is given by
e = x − x∗.
The residual vector is
r = b − Ax∗.
Since b = Ax, we have
r = b − Ax∗ = Ax − Ax∗ = A(x − x∗ ).
The above identity can be written as
Ae = r.
Iterative Methods: Stopping Criteria
When to stop an iteration?

Ideally, we would stop once the error
∥e(k)∥ = ∥x − x(k)∥
of the kth iterate is sufficiently small in some norm. However, the exact solution x is unknown, so e(k) cannot be computed directly.
Instead, we look at the residual vector of the kth iterate, defined as
r(k) = b − Ax(k).
Thus, for a given sufficiently small positive number ϵ, we stop the iteration as soon as
∥r(k)∥ < ϵ.
MA 214 - NA Spring 2023-24 33 / 46
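A sketch of this stopping rule applied to the Jacobi iteration (the tolerance, the choice of the ∞-norm, and the function name are illustrative, not prescribed by the slides):

```python
import numpy as np

def jacobi_solve(A, b, tol=1e-8, max_iters=1000):
    """Jacobi iteration, stopped when the residual norm ||r^(k)|| drops below tol."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    D = A.diagonal()
    R = A - np.diag(D)                       # off-diagonal part of A
    for k in range(max_iters):
        r = b - A @ x                        # residual of the kth iterate
        if np.linalg.norm(r, np.inf) < tol:  # stopping criterion ||r^(k)|| < eps
            return x, k
        x = (b - R @ x) / D
    return x, max_iters

A = np.array([[6, 1, 2], [1, 4, 0.5], [-1, 0.5, -4]], dtype=float)
b = np.array([-2.0, 1.0, 0.0])
x, k = jacobi_solve(A, b)
print(k, np.linalg.norm(b - A @ x, np.inf))  # converges in a few dozen iterations
```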
Iterative Methods: Gauss-Seidel Method

Consider the 3 × 3 system


a11 x1 + a12 x2 + a13 x3 = b1 ,
a21 x1 + a22 x2 + a23 x3 = b2 ,
a31 x1 + a32 x2 + a33 x3 = b3 .
For aii ≠ 0, i = 1, 2, 3, we can write
x1 = (b1 − a12x2 − a13x3)/a11,
x2 = (b2 − a21x1 − a23x3)/a22,
x3 = (b3 − a31x1 − a32x2)/a33.
MA 214 - NA Spring 2023-24 34 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Let
x^(0) = (x1^(0), x2^(0), x3^(0))^T
be an initial guess for the true solution x.
Recall that the Jacobi iteration (for k = 0, 1, 2, · · · ) updates every component from the previous iterate:
x1^(k+1) = (b1 − a12x2^(k) − a13x3^(k))/a11,
x2^(k+1) = (b2 − a21x1^(k) − a23x3^(k))/a22,
x3^(k+1) = (b3 − a31x1^(k) − a32x2^(k))/a33.
MA 214 - NA Spring 2023-24 35 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Define instead a sequence of iterates (for k = 0, 1, 2, · · · ) by
x1^(k+1) = (b1 − a12x2^(k) − a13x3^(k))/a11,
x2^(k+1) = (b2 − a21x1^(k+1) − a23x3^(k))/a22,
x3^(k+1) = (b3 − a31x1^(k+1) − a32x2^(k+1))/a33,
so that each component is computed using the most recently updated values. This method is called the Gauss-Seidel iteration method.
MA 214 - NA Spring 2023-24 36 / 46
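A direct transcription of these updates into code might look as follows (a sketch assuming aii ≠ 0 for all i; the fixed sweep count is an illustrative choice):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=50):
    """Gauss-Seidel sweeps: each component uses the freshest available values."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(iters):
        for i in range(n):
            # x[:i] already holds the (k+1)th values, x[i+1:] still the kth values
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[6, 1, 2], [1, 4, 0.5], [-1, 0.5, -4]], dtype=float)
b = np.array([-2.0, 1.0, 0.0])
x = gauss_seidel(A, b)
print(np.linalg.norm(b - A @ x, np.inf))  # small: A is diagonally dominant
```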
Iterative Methods: Gauss-Seidel Method (contd.)
Theorem
If the coefficient matrix is diagonally dominant, then the Gauss-Seidel
method converges.
Proof:
For a general n × n system, the Gauss-Seidel method reads
xi^(k+1) = (1/aii) ( bi − ∑_{j=1}^{i−1} aij xj^(k+1) − ∑_{j=i+1}^{n} aij xj^(k) ), i = 1, 2, · · · , n.
Subtracting this from the corresponding identity for the exact solution x, the error in each component satisfies
ei^(k+1) = − ∑_{j=1}^{i−1} (aij/aii) ej^(k+1) − ∑_{j=i+1}^{n} (aij/aii) ej^(k), i = 1, 2, · · · , n.
MA 214 - NA Spring 2023-24 37 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Define
αi = ∑_{j=1}^{i−1} |aij/aii|, βi = ∑_{j=i+1}^{n} |aij/aii|, i = 1, 2, · · · , n,
with the convention that α1 = βn = 0.
MA 214 - NA Spring 2023-24 38 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Taking absolute values in the error equation and bounding each error component by the maximum norm gives

|ei^(k+1)| ≤ αi ∥e^(k+1)∥∞ + βi ∥e^(k)∥∞, i = 1, 2, · · · , n. (1)
MA 214 - NA Spring 2023-24 39 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Let l be an index such that
∥e^(k+1)∥∞ = |el^(k+1)|.
Then taking i = l in (1),
∥e^(k+1)∥∞ ≤ αl ∥e^(k+1)∥∞ + βl ∥e^(k)∥∞.
MA 214 - NA Spring 2023-24 40 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Observe that
µ = max_{1≤i≤n} ∑_{j=1, j≠i}^{n} |aij/aii| = max_{1≤i≤n} (αi + βi).
Since A is diagonally dominant, µ < 1; in particular αl ≤ αl + βl ≤ µ < 1.
MA 214 - NA Spring 2023-24 42 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Since αl < 1, the inequality
∥e^(k+1)∥∞ ≤ αl ∥e^(k+1)∥∞ + βl ∥e^(k)∥∞
can be rearranged to give
∥e^(k+1)∥∞ ≤ (βl / (1 − αl)) ∥e^(k)∥∞.
MA 214 - NA Spring 2023-24 43 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Define
η = max_{1≤i≤n} βi / (1 − αi).
Then the above inequality yields
∥e^(k+1)∥∞ ≤ η ∥e^(k)∥∞.
MA 214 - NA Spring 2023-24 44 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

It remains to show that η < 1. For each i,
(αi + βi) − βi/(1 − αi) = αi[1 − (αi + βi)]/(1 − αi) ≥ (αi/(1 − αi))(1 − µ) ≥ 0.
MA 214 - NA Spring 2023-24 45 / 46
Iterative Methods: Gauss-Seidel Method (contd.)

Hence βi/(1 − αi) ≤ αi + βi for every i, and therefore
η ≤ µ < 1.
Thus, when the coefficient matrix A is diagonally dominant, the Gauss-Seidel method converges.
MA 214 - NA Spring 2023-24 46 / 46
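As a closing check (a small experiment, not from the slides), both methods can be run on the diagonally dominant example; in practice Gauss-Seidel tends to contract the error faster per sweep than Jacobi.

```python
import numpy as np

A = np.array([[6, 1, 2], [1, 4, 0.5], [-1, 0.5, -4]], dtype=float)
b = np.array([-2.0, 1.0, 0.0])
x_exact = np.linalg.solve(A, b)

def sweep_jacobi(x):
    """One Jacobi sweep: all components from the previous iterate."""
    D = A.diagonal()
    return (b - (A - np.diag(D)) @ x) / D

def sweep_gs(x):
    """One Gauss-Seidel sweep: each component uses the freshest values."""
    x = x.copy()
    for i in range(len(b)):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    return x

xj = np.zeros(3)
xg = np.zeros(3)
for _ in range(20):
    xj, xg = sweep_jacobi(xj), sweep_gs(xg)
ej = np.linalg.norm(x_exact - xj, np.inf)
eg = np.linalg.norm(x_exact - xg, np.inf)
print(ej, eg)   # both tiny; the Gauss-Seidel error is much smaller
```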
