
University of Tripoli

Mechanical & Industrial Engineering Department


www.me.uot.edu.ly

ME626 Advanced Numerical Analysis

Lecture 5:
Solution to a System of Linear
Algebraic Equations
Instructor: Samah Alghoul
Outline

➢Solution to a System of Linear Algebraic Equations

➢Direct Solver:

▪ Gaussian Elimination

▪ Banded Linear System Solvers


▪ Tridiagonal Matrix Algorithm (TDMA)

▪ Pentadiagonal Matrix Algorithm

➢Iterative Solvers

➢Jacobi Method

ME 626 ADVANCED NUMERICAL METHODS 7/6/2019 2


Introduction

Importance!
› Linear algebraic equations arising out of discretization of a differential
equation have two key attributes:
1. the coefficient matrix is sparse and, often, banded,
2. the coefficient matrix is large.

› These two attributes play a critical role in deciding the type of
algorithm to be used in solving the linear system of equations.
I. Direct Solvers
Direct Solvers

› Solution obtained by the method of substitution.

› Solution obtained by this method has no errors other than round-off
errors.

› Referred to as the exact numerical solution.
➢“Numerical” because the governing PDE still has to be discretized
and solved numerically.

➢“Exact” because the algebraic equations resulting from
discretization of the PDE are solved exactly.



Gaussian Elimination
Direct Solvers - Gaussian Elimination

➢Also known as naïve Gaussian elimination.

➢“Naïve” because it disregards problems associated with floating point
precision or inexact arithmetic.

➢It involves two main steps:

1. Forward elimination

2. Backward substitution



Direct Solvers - Gaussian Elimination

➢ Let us consider a system of K linear algebraic equations of the
general form
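The general form (lost in extraction) can be sketched as follows, using φ for the unknowns and Q for the right-hand side; the exact symbols on the original slide may differ:

```latex
\begin{aligned}
a_{11}\phi_1 + a_{12}\phi_2 + \cdots + a_{1K}\phi_K &= Q_1\\
a_{21}\phi_1 + a_{22}\phi_2 + \cdots + a_{2K}\phi_K &= Q_2\\
&\;\;\vdots\\
a_{K1}\phi_1 + a_{K2}\phi_2 + \cdots + a_{KK}\phi_K &= Q_K
\end{aligned}
\qquad\text{i.e.}\qquad
\sum_{j=1}^{K} a_{kj}\,\phi_j = Q_k,\quad k = 1,\ldots,K
```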



Direct Solvers - Gaussian Elimination

› Forward Elimination Step: we start from the first (topmost) equation
and express φ1 in terms of all the other φ’s.

› Next, we substitute into each of the equations except the first
(topmost) one.



Direct Solvers - Gaussian Elimination

› The first equation remains unchanged.


› It is known as the pivot equation in the steps shown above.
› The subscript p denotes the pivot equation.
› The following transformation has occurred:
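A sketch of the standard elimination formulas, with p denoting the pivot row (the slide's exact notation may differ):

```latex
a'_{kj} = a_{kj} - \frac{a_{kp}}{a_{pp}}\,a_{pj},
\qquad
Q'_{k} = Q_{k} - \frac{a_{kp}}{a_{pp}}\,Q_{p},
\qquad k > p
```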



Direct Solvers - Gaussian Elimination
› In the next step, an expression for φ2 needs to be derived from the
second equation and substituted into the equations below it.
› When repeated K−1 times (it is not required for the last equation), the
forward elimination step will have been completed, and the resulting
matrix will assume the form of an upper triangular matrix.
Direct Solvers - Gaussian Elimination

[Forward elimination algorithm; number of long operations: n_long ∼ K³]



Direct Solvers - Gaussian Elimination
› Backward Substitution: the unknowns, φi, are obtained starting from
the last (bottommost) equation. Thus,

› Next, φK is substituted into the second-last equation, which is
rearranged to obtain

› The process continues until all K unknowns have been determined.
The backward substitution algorithm may be generalized by the
following equation:
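A sketch of the generalized backward substitution formula, using the upper-triangular coefficients produced by forward elimination (the slide's exact notation may differ):

```latex
\phi_i = \frac{Q_i - \displaystyle\sum_{j=i+1}^{K} a_{ij}\,\phi_j}{a_{ii}},
\qquad i = K, K-1, \ldots, 1
```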



Direct Solvers - Gaussian Elimination

[Backward substitution algorithm; number of long operations: n_long ∼ K²]



Direct Solvers - Gaussian Elimination

Number of Operations

› These are of interest when estimating the computational efficiency of
any algorithm.

› Multiplication and division are long operations, as they require
substantially more computing time than addition or subtraction.

› It is assumed that a multiplication and a division require the same
amount of time, although, in reality, a division is slightly more
time-consuming than a multiplication.



Direct Solvers - Gaussian Elimination

Number of Operations

› The number of long operations in the two phases of the Gaussian
elimination algorithm scales as follows:
▪ Forward elimination: n_long ∼ K³
▪ Backward substitution: n_long ∼ K²

› Therefore, Gaussian elimination is prohibitive for the solution of large
systems.

› For example, if the number of equations is increased by a factor of
10, the computational time will increase by a factor of 1000.
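The two phases described above can be sketched in a few lines of Python. This is a minimal naive (no-pivoting) version, matching the algorithm as presented; the function name and list-of-lists storage are illustrative choices, not from the slides:

```python
def gauss_eliminate(A, Q):
    """Solve A*phi = Q by naive Gaussian elimination (no pivoting).

    A is a list of K lists (the coefficient matrix), Q a list of K
    right-hand-side values. Both are modified in place.
    """
    K = len(Q)
    # Forward elimination: ~K^3 long operations.
    for p in range(K - 1):            # pivot equation
        for k in range(p + 1, K):     # rows below the pivot
            factor = A[k][p] / A[p][p]
            for j in range(p, K):
                A[k][j] -= factor * A[p][j]
            Q[k] -= factor * Q[p]
    # Backward substitution: ~K^2 long operations.
    phi = [0.0] * K
    for i in range(K - 1, -1, -1):
        s = sum(A[i][j] * phi[j] for j in range(i + 1, K))
        phi[i] = (Q[i] - s) / A[i][i]
    return phi
```

For example, `gauss_eliminate([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])` returns the solution of 2φ1 + φ2 = 3, φ1 + 3φ2 = 5.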


[A] and {B} for a 5×5 grid; gray shows the boundary nodes
1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 -261
2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 -146
3 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 -130
4 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 -114
5 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0
6 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 -146
7 0 16 0 0 0 16 -64 16 0 0 0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 7 -3105
8 0 0 16 0 0 0 16 -64 16 0 0 0 16 0 0 0 0 0 0 0 0 0 0 0 0 8 -1552
9 0 0 0 16 0 0 0 16 -64 16 0 0 0 16 0 0 0 0 0 0 0 0 0 0 0 9 0
10 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 114
11 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11 -130
12 0 0 0 0 0 0 16 0 0 0 16 -64 16 0 0 0 16 0 0 0 0 0 0 0 0 12 -1552
13 0 0 0 0 0 0 0 16 0 0 0 16 -64 16 0 0 0 16 0 0 0 0 0 0 0 13 0
14 0 0 0 0 0 0 0 0 16 0 0 0 16 -64 16 0 0 0 16 0 0 0 0 0 0 14 1552
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 15 130
16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 16 -114
17 0 0 0 0 0 0 0 0 0 0 0 16 0 0 0 16 -64 16 0 0 0 16 0 0 0 17 0
18 0 0 0 0 0 0 0 0 0 0 0 0 16 0 0 0 16 -64 16 0 0 0 16 0 0 18 1552
19 0 0 0 0 0 0 0 0 0 0 0 0 0 16 0 0 0 16 -64 16 0 0 0 16 0 19 3105
20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 20 146
21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 21 0
22 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 22 114
23 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 23 130
24 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 24 146
25 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 25 261

[A] and {B} after forward elimination; the coefficient matrix is now upper triangular
1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 -260.6
2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 -146.1
3 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 -130.3
4 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 -114.5
5 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0
6 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 -146.1
7 0 0 0 0 0 0 -64 16 0 0 0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 7 1569.1
8 0 0 0 0 0 0 0 -60 16 0 0 4 16 0 0 0 0 0 0 0 0 0 0 0 0 8 924.24
9 0 0 0 0 0 0 0 0 -60 16 0 1.07 4.27 16 0 0 0 0 0 0 0 0 0 0 0 9 2078.2
10 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 114.49
11 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11 -130.3
12 0 0 0 0 0 0 0 0 0 0 0 -60 17.1 0.29 0 0 16 0 0 0 0 0 0 0 0 12 990.26
13 0 0 0 0 0 0 0 0 0 0 0 0 -55 17.2 0 0 4.59 16 0 0 0 0 0 0 0 13 548.35
14 0 0 0 0 0 0 0 0 0 0 0 0 0 -54 16 0 1.53 5.06 16 0 0 0 0 0 0 14 1796.5
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 15 130.27
16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 16 -114.5
17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -59 17.5 0.45 0 0 16 0 0 0 17 2135.2
18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -54 17.6 0 0 4.72 16 0 0 18 2316.5
19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -53 16 0 1.67 5.25 16 0 19 3796.8
20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 20 146.06
21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 21 0
22 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 22 114.49
23 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 23 130.27
24 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 24 146.06
25 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 25 260.55



[Figure: computation time versus number of equations for naive Gaussian
elimination – “All Calculations Time (s)”, “Solver Time (s)”, and a
polynomial trend line – Matlab on an Intel Core™ i5 1.6 GHz processor
with 4 GB RAM]




Direct Solvers - Gaussian Elimination

› Memory Issues
› For example, consider a 3D computation with just 40 nodes in each direction:
– Number of nodes = number of equations = K = 40³ = 64,000.
– The coefficient matrix is of size K² ≈ 4 × 10⁹, i.e., 4 billion real numbers.
– If double precision is used, the memory required to store them would be
4 × 10⁹ × 8 = 32 billion bytes = 32 GB of RAM.
– This is beyond the capacity of a typical modern-day computer unless
parallel computing is employed.
– Problems of practical interest require far more than 64,000 nodes,
implying that, from a memory-requirement standpoint, Gaussian
elimination is impractical.
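The arithmetic above can be reproduced directly (numbers are from the slide; variable names are illustrative):

```python
nodes_per_direction = 40
K = nodes_per_direction ** 3            # number of nodes = number of equations
matrix_entries = K ** 2                 # full coefficient matrix is K x K
bytes_needed = matrix_entries * 8       # 8 bytes per double-precision number

print(K)                   # 64000
print(matrix_entries)      # 4096000000, i.e. ~4 x 10^9
print(bytes_needed / 1e9)  # 32.768 billion bytes, i.e. ~32 GB
```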



Direct Solvers - Gaussian Elimination

› In summary, Gaussian elimination is prohibitively inefficient for the
numerical solution of PDEs even on a relatively medium-sized mesh,
for three reasons:

1. Processing time.
▪ The method takes a very long time because of the number of long
operations involved.

2. Approximation issues.
▪ It produces acceptable results only if the coefficients in a given row
are within a few orders of magnitude of one another.

3. Memory.
▪ The coefficient matrix is of size K × K, so the memory necessary to
store it can quickly grow.
Banded Linear System
Solvers
Direct Solvers - Banded Linear System Solvers

› Banded matrices: a special class of sparse matrices that arises out of
discretization of a PDE when a structured mesh is used.

› The simplified Gaussian elimination algorithm for a tridiagonal matrix
system is also commonly known as
– the tridiagonal matrix algorithm (TDMA)
– or the Thomas algorithm, named after Llewellyn Thomas



Direct Solvers - Banded Linear System Solvers

› Importance!

› Computers cannot distinguish between a zero and a nonzero automatically.

▪ They treat a zero like any other real number, and multiplications by zero
are as time-consuming as multiplications by nonzeroes.

▪ Storing only the nonzeroes, along with their locations in the matrix,
reduces memory usage significantly.



Direct Solvers - Banded Linear System Solvers

› Number of Operations
› In the previous example, for a 64,000-node mesh, the real-number storage
reduces dramatically from 64,000² to at most 64,000 × 7, assuming a 3D
mesh is used along with a second-order central difference scheme.
› Not storing the zeroes also improves computational efficiency
dramatically, because multiplications by zeroes are never performed in
the first place.



Direct Solvers - Banded Linear System Solvers

› Let us consider the solution of the Poisson equation subject to the
boundary conditions shown.

› This is discretized as
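Assuming the 1D Poisson equation d²φ/dx² = S_φ (the boundary conditions referenced are on a slide figure not reproduced here), the second-order central difference discretization at an interior node i can be sketched as:

```latex
\frac{d^2\phi}{dx^2} = S_\phi
\quad\Longrightarrow\quad
\frac{\phi_{i-1} - 2\phi_i + \phi_{i+1}}{(\Delta x)^2} = S_i
```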



Direct Solvers - Banded Linear System Solvers

› The resulting coefficient matrix may be written as a tridiagonal
matrix, with three bands: a subdiagonal, a central diagonal, and a
superdiagonal.
Direct Solvers - Banded Linear System Solvers

› First, instead of storing the full [A] matrix, only the three diagonals are
stored in the following form:



Direct Solvers - Banded Linear System Solvers

› The solution follows the same procedure as Gaussian elimination, but
in a more simplified form.
› First, the topmost equation is rearranged to express φ1 in terms of φ2.
› The result is then substituted into all the other equations, and the
process is repeated until the last equation is reached.
› This represents the forward elimination process.



Direct Solvers - Banded Linear System Solvers

› At the end of this process, the new coefficient matrix will assume an
upper triangular shape with only two diagonals.

› The next step is to solve this system using backward substitution,
starting with the last equation.
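The forward elimination and backward substitution steps above can be sketched as follows. The array names a, d, c for the sub-, central, and superdiagonal are illustrative choices; the slides' notation may differ:

```python
def tdma(a, d, c, Q):
    """Thomas algorithm (TDMA) for a tridiagonal system.

    a: subdiagonal (a[0] unused), d: central diagonal,
    c: superdiagonal (c[-1] unused), Q: right-hand side.
    Returns the solution list phi. Inputs are copied, not modified.
    """
    N = len(d)
    d = d[:]
    Q = Q[:]
    # Forward elimination: eliminate the subdiagonal, ~3(N-1) long ops.
    for i in range(1, N):
        factor = a[i] / d[i - 1]
        d[i] -= factor * c[i - 1]
        Q[i] -= factor * Q[i - 1]
    # Backward substitution, starting with the last equation.
    phi = [0.0] * N
    phi[-1] = Q[-1] / d[-1]
    for i in range(N - 2, -1, -1):
        phi[i] = (Q[i] - c[i] * phi[i + 1]) / d[i]
    return phi
```

For example, the classic system with −1/2/−1 bands and Q = [1, 0, 1] has the solution φ = [1, 1, 1].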



Direct Solvers - Banded Linear System Solvers

N_long ∼ 3(N−1)

N_long ∼ 2(N−1)

N_long ∼ N



Direct Solvers - Banded Linear System Solvers

Pentadiagonal Matrix System

› Even for a 1D problem, the use of higher-order schemes extends the
stencil beyond the three points used in a second-order central
difference scheme.
› In such a scenario, the resulting matrix has five diagonals – two on
each side of the central diagonal.
› The five diagonals (or bands) are all clustered in the middle.
› This is in contrast with the matrices arising out of 2D PDEs discretized
on a structured mesh, in which case we also get five diagonals, but
two of them, corresponding to the nodes k−N and k+N, reside
N spaces away from the central diagonal.



Direct Solvers - Banded Linear System Solvers

› The difference between the two scenarios is depicted pictorially as
shown, and is an important one.



Direct Solvers - Banded Linear System Solvers

› When the diagonals are clustered in the middle, the forward elimination phase
of the Gaussian elimination algorithm retains only the five diagonals, and no
nonzero elements are generated beyond the two upper diagonals.
› When the diagonals are not clustered, the entire upper triangle gets populated
with nonzeroes once the forward elimination phase has been completed.
› This implies that in the latter case, simply allocating memory for the five
diagonals is not sufficient.
› For this reason, the five-banded matrices arising out of discretization of a
2D PDE are generally not referred to as pentadiagonal matrices. The term
pentadiagonal is reserved for matrices in which all five diagonals are
clustered in the middle.



Direct Solvers - Banded Linear System Solvers

› Pentadiagonal matrix systems may be solved using the exact same
procedure as tridiagonal systems.
› Let us consider the following general pentadiagonal system of
equations:
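Written out with illustrative band names (b, a for the two subdiagonals, d for the central diagonal, c, e for the two superdiagonals; not necessarily the slides' notation), each row of a general pentadiagonal system has the form:

```latex
b_i\,\phi_{i-2} + a_i\,\phi_{i-1} + d_i\,\phi_i + c_i\,\phi_{i+1} + e_i\,\phi_{i+2} = Q_i
```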



Direct Solvers - Banded Linear System Solvers

N_long ∼ 8(N−1)

N_long ∼ N

N_long ∼ 3(N−1)



II. Iterative Solvers
Iterative Solvers

› As shown in the preceding section, direct solution is prohibitive both
from a memory-usage and a computational-efficiency standpoint.
› This leaves us with only one other alternative: solving the equations
iteratively.
▪ The number of iterations required depends on the initial guess.
▪ The farther the initial guess is from the correct solution, the more
iterations it will take.
▪ The accuracy of the solution depends upon when the iterations are
stopped.



Iterative Solvers

› Suppose we have a 3 × 3 set of equations. If the diagonal elements
are all nonzero, the first equation can be solved for x1, the second
for x2, and the third for x3 to yield
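A sketch of the rearranged equations (standard notation; the slide's exact symbols may differ):

```latex
x_1 = \frac{b_1 - a_{12}x_2 - a_{13}x_3}{a_{11}}, \qquad
x_2 = \frac{b_2 - a_{21}x_1 - a_{23}x_3}{a_{22}}, \qquad
x_3 = \frac{b_3 - a_{31}x_1 - a_{32}x_2}{a_{33}}
```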
› We can start the solution process by choosing guesses for the x’s.



Iterative Solvers

Convergence
› The process of the solution approaching the exact numerical
solution with successive iterations.
› Convergence error: the error between the exact numerical solution
and the partially converged solution.
› The only way to eliminate convergence error is to continue iterating
until machine accuracy has been reached.
› At that point, the convergence error is comparable with the round-off
error, and further iterations simply make the solution oscillate.



Iterative Solvers

› In practice, taking the solution to machine accuracy is unnecessary,
and the iterations are generally terminated before that.
› Convergence is not always monotonic.
› Successive iterations may not always lead to an answer that is closer
to the correct answer.
› Depending on the type of equation being solved and the iteration
scheme, the answers may move away from the correct solution before
approaching it again.
› In some cases, with successive iterations, the answer may continually
deviate from the correct answer. Such a scenario is known as
divergence.



Iterative Solvers

› Two factors contribute to convergence or divergence:
1. The type of coefficient matrix.
2. The iterative scheme.
› Convergence criterion (diagonal dominance):
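The two conditions on the slide (joined by "and") appear to be the standard diagonal-dominance criteria, reconstructed here from the standard definition:

```latex
\left|a_{kk}\right| \ge \sum_{\substack{j=1 \\ j \ne k}}^{K} \left|a_{kj}\right|
\;\;\text{for all } k,
\qquad\text{and}\qquad
\left|a_{kk}\right| > \sum_{\substack{j=1 \\ j \ne k}}^{K} \left|a_{kj}\right|
\;\;\text{for at least one } k
```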

› Matrices that obey the criteria shown are known as diagonally dominant.
› Diagonal dominance is a sufficient but not a necessary condition.
Residual and Correction
Form of Equations

The Residual and Correction Form of Equations

› Finite difference equations for the 2D Poisson equation at a node k
can be written in general form as:
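A sketch of this general form, consistent with the link-coefficient notation defined below (the exact symbols on the original slide may differ):

```latex
a_k\,\phi_k + \sum_{j=1}^{N_{nb,k}} a_j\,\phi_j = Q_k,
\qquad k = 1, 2, \ldots, K
```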

➢ where K denotes the total number of nodes,

➢ Nnb,k denotes the total number of nodes neighboring node k,

➢ aj is the factor that premultiplies φj in the finite difference equation.

➢ The aj are also known as link coefficients because they represent the
link or interconnection between nodes.



The Residual and Correction Form of Equations

› For the five-band system, it may be written as

› The Poisson equation discretized in 2D reads

› By comparing the two, the link coefficients can be identified.



The Residual and Correction Form of Equations
› Let the value of φ at the nth iteration be denoted by φ(n).
› The residual may be written as
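A sketch of the residual definition (standard form, consistent with the surrounding discussion):

```latex
\{R\}^{(n)} = \{Q\} - [A]\{\phi\}^{(n)}
```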

› where R(n) is the residual vector at the nth iteration.

› The residual is a measure of nonconvergence.
› If the entire residual vector is zero, each of the equations that we
set out to solve has been satisfied exactly.
› If any element of the residual vector is nonzero, the system of
equations has not been satisfied.
The Residual and Correction Form of Equations

› Let us now consider a case where, starting from the nth iteration
(previous), we are trying to find the solution at the (n+1)th iteration
(current).
› Therefore, for the (n+1)th iteration, we may write



The Residual and Correction Form of Equations

› Let the change (or correction) in the value of φ from the previous to
the current iteration be denoted by φ′.

› Substituting and rearranging, we get

› the correction form of the algebraic equation
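A sketch of the correction form (standard notation, consistent with the residual defined earlier):

```latex
[A]\{\phi'\} = \{R\}^{(n)},
\qquad
\{\phi\}^{(n+1)} = \{\phi\}^{(n)} + \{\phi'\}
```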



The Residual and Correction Form of Equations

› There are two approaches commonly used to compute accumulated
residuals, namely the L1 norm and the L2 norm. They are defined as
follows:
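The standard definitions of the two norms, written with Rk for the kth element of the residual vector:

```latex
R_1 = \sum_{k=1}^{K} \left|R_k\right|,
\qquad
R_2 = \sqrt{\sum_{k=1}^{K} R_k^2}
```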

› We now define convergence as the state when the criterion R2 < εtol
has been satisfied,
› where εtol is the prescribed tolerance.



The Residual and Correction Form of Equations
› The value of εtol depends on the problem at hand, and also on how
much accuracy is desired.
› For example, if φ represents temperature in a heat transfer
calculation, then typical values of φ may range between 100–1000
Kelvin.
› If the solution domain is 1 m long, then, with 101 grid points, the grid
spacing is ∆x = 0.01 m. Thus, the order of magnitude of each term in
the algebraic equation will be ∼T/(∆x)² ∼ 10⁷.
› In this case, specifying a tolerance εtol equal to 10⁻³ would imply a
reduction in the residual by approximately 10 orders of magnitude,
which would produce temperatures accurate to approximately the
7th decimal place.
The Residual and Correction Form of Equations

› On the other hand, if the computations were performed using
nondimensional equations, in which nondimensional temperature and
length both vary between 0 and 1, then an individual term in the
equation would be of order of magnitude ∼10⁴.
› In such a case, specifying a tolerance εtol equal to 10⁻³ would imply a
reduction in the residual by approximately 7 orders of magnitude, and
nondimensional temperatures accurate in the 7th decimal place, or
dimensional temperatures accurate in the 4th decimal place.
› Of course, the latter case requires fewer iterations, since the
residuals are forced to decrease by 7 orders of magnitude, as
opposed to 10. However, it also yields poorer accuracy.



The Residual and Correction Form of Equations

Grid Dependency
› The magnitude of the residual also depends on the grid spacing and
the number of nodes.
› In order to remove problem dependency from the choice of the
prescribed tolerance, it is preferable to monitor the normalized
residual rather than the raw residual.
› The normalized residual is computed as follows:
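The normalization described in the bullet below:

```latex
R_2^{*} = \frac{R_2}{R_{2,\max}}
```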

▪ where R2,max is the maximum value of R2 up to the current iteration.



The Residual and Correction Form of Equations

› If the prescribed tolerance is now applied to R2* rather than R2, the
convergence criterion becomes problem-independent.
› For example, in the scenario discussed in the preceding paragraph, if
εtol = 10⁻⁶, then in both cases the residual would be reduced by 6
orders of magnitude, and the corresponding dimensional temperatures
would be accurate in the 3rd decimal place.
› On account of its generality, the scaled or normalized residual
method is routinely used for monitoring convergence in
general-purpose codes for solving linear systems.



Jacobi Method
Jacobi Method

› The Jacobi method is a point-wise iteration method because the
solution is updated sequentially, node by node or point by point.

› Since the right-hand side of the update formula uses only
previous-iteration values, the pattern used to sweep through the nodes
in the computational domain is not relevant.

[Figure: all nodes treated explicitly]



Jacobi Method

Algorithm: Jacobi Method

› Step 1: Guess values of φ at all nodes. We denote these values as
φ(0). If any of the boundaries have Dirichlet boundary conditions, the
guessed values for the nodes on that boundary must equal the
prescribed boundary values.
› Step 2: Apply the Jacobi update formula at the interior nodes:
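A sketch of the update formula, consistent with the general link-coefficient form of the discretized equation used earlier in this lecture:

```latex
\phi_k^{(n+1)} = \frac{Q_k - \displaystyle\sum_{j=1}^{N_{nb,k}} a_j\,\phi_j^{(n)}}{a_k}
```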



Jacobi Method

› Step 3: Compute the residual vector using φ(n+1), and then compute
R2(n+1).

› Step 4: Monitor convergence, i.e., check whether R2(n+1) < εtol. If YES,
go to Step 7. If NO, go to Step 5.

› Step 5: Replace the old guess by the new values: φ(n) = φ(n+1).

› Step 6: Go to Step 2.

› Step 7: Stop iterating and postprocess the results.
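Steps 1–7 can be sketched for a general system as follows. This is a dense-matrix version for clarity (the slides work with link coefficients on a mesh); the function and variable names are illustrative:

```python
import math

def jacobi(A, Q, tol=1e-8, max_iter=10000):
    """Jacobi iteration for A*phi = Q, with A as a list of lists.

    Follows Steps 1-7: guess, update, residual, convergence check.
    """
    K = len(Q)
    phi = [0.0] * K                        # Step 1: initial guess
    for _ in range(max_iter):
        # Step 2: Jacobi update uses only previous-iteration values.
        phi_new = [
            (Q[k] - sum(A[k][j] * phi[j] for j in range(K) if j != k)) / A[k][k]
            for k in range(K)
        ]
        # Step 3: residual R = Q - A*phi_new, and its L2 norm.
        R2 = math.sqrt(sum(
            (Q[k] - sum(A[k][j] * phi_new[j] for j in range(K))) ** 2
            for k in range(K)
        ))
        phi = phi_new                      # Step 5: replace old guess
        if R2 < tol:                       # Step 4: convergence check
            break                          # Step 7: stop
    return phi
```

For a diagonally dominant system such as 3x1 + x2 = 5, x1 + 3x2 = 7, the iteration converges to (1, 2).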



Jacobi Method

Number of Operations
› In the above algorithm, every iteration requires 10 long operations per
node – 5 in the update formula and 5 in the computation of the
residual.
› For K nodes, this amounts to 10K long operations.
› In addition, there are approximately K long operations in the
computation of R2, plus one square-root operation.
› Assuming that the square-root operation counts as 2 long operations,
the total number of long operations is approximately 11K + 2 per
iteration.



Jacobi Method

› Let us now consider a 2D problem, such as in Example 3.1, computed
on an 80×80 mesh (K = 6400).
› For this particular case, the total number of long operations per
iteration would be 11 × 6400 + 2 = 70,402.
› The total number of long operations if Gaussian elimination were
used would be approximately K³/3 = 8.73×10¹⁰.
› Thus, as long as the Jacobi method requires fewer than
8.73×10¹⁰/70,402 ≈ 1.24×10⁶ iterations, it is computationally
superior to Gaussian elimination.



Node numbering for the 5×5 grid:
21 22 23 24 25

16 17 18 19 20

11 12 13 14 15

6 7 8 9 10

1 2 3 4 5

