
S CHOOL OF C OMPUTATIONAL AND A PPLIED M ATHEMATICS

U NIVERSITY OF THE W ITWATERSRAND

Numerical Methods
(MECN3031A/MECN3032A/CHMT3008A)

Lecture Notes
Contents

1 Preliminaries 3
1.1 Roundoff Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Truncation Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Norms of Vectors and Matrices . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.2 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Systems of Linear Equations 8


2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Introduction: Matrix representation . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 Uniqueness of Solution . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.3 Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Methods of Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Overview of Direct Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4.1 Gauss Elimination Method . . . . . . . . . . . . . . . . . . . . . . 11
2.4.2 Gaussian Elimination with Partial Pivoting . . . . . . . . . . . . . 14
2.4.3 LU decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.4 Doolittle decomposition . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.5 Crout decomposition . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.6 Cholesky Factorization . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.7 Tridiagonal systems . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5 Iterative methods for linear algebraic equations . . . . . . . . . . . . . . 22
2.5.1 Jacobi’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.2 Gauss–Seidel method . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.3 Convergence criteria for Jacobi and Gauss-Seidel methods . . . 25
2.5.4 Relaxation method . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6 Tutorial 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Numerical Methods MECN3031A/MECN3032A/CHMT3008A 2 of 30


SECTION 1

Preliminaries

1.1 Roundoff Error

Calculators and computers perform only finite-digit arithmetic; that is to say, calculations are performed with approximate representations of the actual numbers. For example,

    π = 3.14159265...

on a calculator may be held as only a five decimal place representation, 3.14159. So if x∗ is an approximation to x, the error is defined by x∗ = x + ϵ, where ϵ can be positive
or negative. Floating point representation uses a mantissa m multiplied by a base b
to an exponent e to represent any number x as

x = m × be

where the mantissa has a decimal point after its first digit. An example of this would
be representing the number 105.09 as 1.0509 × 102 . In computer systems, a finite
number of bits are used to represent both the mantissa and the exponent, and both
are stored in base 2. Typically, one bit is used for the sign, 23 bits are used for the
mantissa in base two, and eight bits are used for the exponent (with one of those
eight bits used for the sign of the exponent). This gives approximately seven decimal
places accuracy for small numbers.

Definition 1

Absolute Error,

    |ϵ| = |x − x∗|.   (1.1)

Definition 2

Relative Error,

    |ϵ| / |x| = |x − x∗| / |x|,   (1.2)

provided x ≠ 0.

Example 1: If x = 0.02 and x ∗ = 0.01, then,

Absolute Error 0.01


Relative Error 0.5


Example 2: If x = 0.1000 × 101 and x ∗ = 0.1100 × 101 , then,

Absolute Error 0.1


Relative Error 0.1

If x = 0.1000 × 10−1 and x ∗ = 0.1100 × 10−1 , then,

Absolute Error 0.001


Relative Error 0.1

If x = 0.1000 × 10−4 and x ∗ = 0.1100 × 10−4 , then,

Absolute Error 0.000001


Relative Error 0.1

Therefore, as a measure of accuracy, the absolute error may be misleading and the
relative error more meaningful. Clearly, in practice we cannot determine the actual
absolute or relative error since the exact solution we are looking for is required for
their evaluations, thus we make use of approximations.

Initial data for a problem, can for a variety of reasons be subject to errors. In this
course we are concerned with numerical techniques in which we perform a num-
ber of steps or iterations to attain a solution. A good numerical technique has the
property that a small error in the initial data leads to a small error in the final results.
Conversely, a technique containing a small error in the initial data that generates a
large error in the final solution is a poor numerical technique.

Often, loss of accuracy caused by roundoff error can be avoided by careful sequenc-
ing of operations or reformulation of the problem.

Example 3: The roots of ax^2 + bx + c = 0 are,

    α, β = ( −b ± √(b^2 − 4ac) ) / (2a).   (1.3)

If b^2 ≫ 4ac then √(b^2 − 4ac) ≈ b. So,

    α = ( −b + √(b^2 − 4ac) ) / (2a) ≈ 0.   (1.4)

In calculating α we are subtracting nearly equal numbers and large errors can arise. To avoid this we change the form of the quadratic formula by "rationalising the numerator",

    α = ( −b + √(b^2 − 4ac) ) / (2a) × ( −b − √(b^2 − 4ac) ) / ( −b − √(b^2 − 4ac) )
      = −2c / ( b + √(b^2 − 4ac) ).   (1.5)

In the calculation of β we are adding two nearly equal numbers, so there are no problems.
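The cancellation above is easy to demonstrate numerically. The sketch below (our own illustration, not part of the notes; the function names are ours) compares the naive formula (1.4) with the rationalised form (1.5) when b^2 ≫ 4ac:

```python
import math

def roots_naive(a, b, c):
    # Both roots straight from the quadratic formula (1.3).
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    # Rationalised form (1.5) for alpha avoids subtracting two
    # nearly equal numbers; beta is already safe as it stands.
    d = math.sqrt(b * b - 4 * a * c)
    return -2 * c / (b + d), (-b - d) / (2 * a)
```

With a = 1, b = 10^8, c = 1 the small root is very nearly −10^−8; the naive formula loses most of its significant digits to cancellation, while the rationalised form recovers the root to essentially full precision.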


Example 4: Given a polynomial,

    P(x) = ax^3 + bx^2 + cx + d,   (1.6)

direct evaluation requires 6 multiplications (since x^3 is evaluated as x × x × x on a computer) and 3 additions. We often evaluate this in nested form,

    P(x) = ((ax + b)x + c)x + d;   (1.7)

evaluation now takes 3 multiplications and 3 additions. Thus there are fewer operations and therefore less opportunity for errors to be introduced, and the execution will be faster. This point is extremely important! Computational expense is something that should always be at the back of your mind when considering computations involving large data sets and calculations.
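The nested form (1.7) generalises to polynomials of any degree. A minimal sketch (the function name is ours, not from the notes):

```python
def horner(coeffs, x):
    # Evaluate a polynomial in nested (Horner) form.
    # coeffs = [a, b, c, d] represents a*x^3 + b*x^2 + c*x + d;
    # one multiplication and one addition per coefficient.
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result
```

For P(x) = 2x^3 − 3x^2 + x + 7, `horner([2, -3, 1, 7], 2)` gives P(2) = 13.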

1.2 Truncation Error
This should not be confused with roundoff error. Truncation error comes about as
a result of approximations made in the formulation of numerical methods and is
completely unrelated to computational sources of error. An example of this is sin(x)
which, as a transcendental function, is defined through an infinite Taylor expan-
sion. Computer systems typically work around this by approximating the series us-
ing finitely many terms, truncating it at some predefined order. This has the effect
of introducing truncation error.
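Truncating the Taylor series of sin(x) after finitely many terms introduces exactly this kind of error. A small illustration (our own sketch, not how any particular system actually implements sin):

```python
import math

def sin_taylor(x, terms):
    # Truncated Taylor series: x - x^3/3! + x^5/5! - ...,
    # keeping `terms` nonzero terms. The discarded tail of the
    # series is the truncation error.
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total
```

The truncation error shrinks as more terms are kept: at x = 1, two terms agree with sin(1) to about three digits, five terms to about ten.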

1.3 Norms of Vectors and Matrices
Norms are essential in numerical work since they enable us to have a measure of the
size of a vector or matrix. A norm is a real valued function and is required to possess
the following properties,

1. ||A|| ≥ 0, for all A.

2. ||A|| = 0, if and only if A is the zero matrix (vector).

3. ||c A|| = |c|||A||, for all c ∈ R and all A.

4. ||A + B || ≤ ||A|| + ||B ||, for all A and B (called the triangle inequality).

In order to distinguish between different norms we use a subscript.

1.3.1 Vectors

Example 7: The most commonly used norms for a vector x̄ ∈ R^n are,

    ℓ_1 :   ||x̄||_1 = Σ_{i=1}^{n} |x_i|,   (1.8)

the Euclidean norm (least squares & minimum energy),

    ℓ_2 :   ||x̄||_2 = ( Σ_{i=1}^{n} x_i^2 )^{1/2},   (1.9)


and the ∞ norm,

    ℓ_∞ :   ||x̄||_∞ = max_{1≤i≤n} |x_i|.   (1.10)

If x = [−3 1 0 2]^T then,

    ||x||_1 = |−3| + |1| + |0| + |2| = 6

    ||x||_2 = √( (−3)^2 + 1^2 + 0^2 + 2^2 ) = √14

    ||x||_∞ = max{ |−3|, |1|, |0|, |2| } = 3
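These three norms are straightforward to compute; a short sketch (function names are ours):

```python
def norm_1(x):
    # l1 norm: sum of absolute values
    return sum(abs(v) for v in x)

def norm_2(x):
    # l2 (Euclidean) norm: square root of the sum of squares
    return sum(v * v for v in x) ** 0.5

def norm_inf(x):
    # l-infinity norm: largest absolute value
    return max(abs(v) for v in x)
```

For x = [−3, 1, 0, 2]^T these reproduce the values 6, √14 and 3 above.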

1.3.2 Matrices

If A ∈ R^{m×n}, the ℓ_1 and ℓ_∞ norms are,

    ||A||_1 = max_{1≤j≤n} Σ_{i=1}^{m} |a_ij|,   (1.11)

    ||A||_∞ = max_{1≤i≤m} Σ_{j=1}^{n} |a_ij|,   (1.12)

thus they are the maximum absolute column sum and the maximum absolute row sum respectively.

Example 8:

    A = [  5  −2  2 ]
        [  3   1  2 ]   (1.13)
        [ −2  −2  3 ]

If we sum the absolute values in each column we get {10 5 7}, therefore,

    ||A||_1 = 10.   (1.14)

If we sum the absolute values in each row we get {9 6 7},   (1.15)

so ||A||_∞ = 9.

There is no simple formula for the ℓ_2 norm of a matrix; one method is,

    ||A||_2 = [ ρ(A^T A) ]^{1/2},   (1.16)


Figure 1.1: 2D & 3D norm representation

where ρ(A^T A) denotes the spectral radius of A^T A, i.e. the largest absolute value among its eigenvalues.

The ℓ2 norm of a matrix is extremely important since no other norm is smaller, i.e.
it is the “tightest" measure of the magnitude of a matrix.
Example 9: For the above example,

    A^T A = [  5  3  −2 ] [  5  −2  2 ]   [ 38  −3  10 ]
            [ −2  1  −2 ] [  3   1  2 ] = [ −3   9  −8 ] ,   (1.17)
            [  2  2   3 ] [ −2  −2  3 ]   [ 10  −8  17 ]

the eigenvalues of this matrix are approximately {43.014, 17.186, 3.800} (they are real, since A^T A is symmetric). The spectral radius is the eigenvalue of largest magnitude,

    ρ(A^T A) ≈ 43.014,   (1.18)

and the ℓ_2 norm is,

    ||A||_2 = [ ρ(A^T A) ]^{1/2} ≈ 6.5585.   (1.19)
(1.19)
We note the following useful property, that if A and B are compatible matrices or
vectors then,
||AB || ≤ ||A|| ||B ||. (1.20)
Note that a consistent norm must be used in using this inequality, i.e. all ℓ1 , ℓ2 or
ℓ∞ norms.
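The three matrix norms can be sketched as follows. The ℓ_2 norm needs the dominant eigenvalue of A^T A; here we estimate it with simple power iteration — our own choice of method, any eigenvalue solver would do, and the function names are ours:

```python
def mat_norm_1(A):
    # Maximum absolute column sum.
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def mat_norm_inf(A):
    # Maximum absolute row sum.
    return max(sum(abs(v) for v in row) for row in A)

def mat_norm_2(A, iterations=200):
    # ||A||_2 = sqrt(rho(A^T A)), with rho estimated by power iteration.
    m, n = len(A), len(A[0])
    # B = A^T A is symmetric positive semi-definite, so its
    # eigenvalues are real and non-negative.
    B = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    x, lam = [1.0] * n, 0.0
    for _ in range(iterations):
        y = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)   # current dominant-eigenvalue estimate
        x = [v / lam for v in y]
    return lam ** 0.5
```

For the matrix of Example 8 this gives ||A||_1 = 10, ||A||_∞ = 9 and ||A||_2 ≈ 6.56.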



SECTION 2

Systems of Linear Equations

2.1 Overview

Objective
In this topic we present methods of solving linear systems of equations. Two classes
of methods are discussed: Direct methods and indirect methods.

Learning outcomes
At the end of this section you should be able to

• Represent systems of linear equations in matrix form

• Solve linear systems using Gauss elimination with pivoting

• Solve linear systems using LU decomposition methods

• Approximate solutions to linear systems using the following indirect methods:


- Jacobi’s method
- Gauss-Seidel method
- Relaxation method

• Determine convergence of the above methods.

2.2 Introduction: Matrix representation
A linear system is a set of linear equations. Systems of linear equations arise in a
large number of areas, both directly in the mathematical modelling of physical situ-
ations and indirectly in the numerical solution of other mathematical problems.

2.2.1 Notation

A system of algebraic equations has the form:

    A_11 x_1 + A_12 x_2 + ... + A_1n x_n = b_1
    A_21 x_1 + A_22 x_2 + ... + A_2n x_n = b_2
        ...                                        (2.1)
    A_n1 x_1 + A_n2 x_2 + ... + A_nn x_n = b_n


Where the coefficients A_ij and the constants b_i are known, and the x_i represent the unknowns. In matrix-vector notation, the equations are written as:

    [ A_11  A_12  ...  A_1n ] [ x_1 ]   [ b_1 ]
    [ A_21  A_22  ...  A_2n ] [ x_2 ] = [ b_2 ]
    [  ...   ...       ...  ] [ ... ]   [ ... ]
    [ A_n1  A_n2  ...  A_nn ] [ x_n ]   [ b_n ]

or simply,

    Ax = b.   (2.2)

We can also make use of the augmented form for computational purposes, which follows as:

    [A|b] = [ A_11  A_12  ...  A_1n | b_1 ]
            [ A_21  A_22  ...  A_2n | b_2 ]
            [  ...   ...       ...  | ... ]
            [ A_n1  A_n2  ...  A_nn | b_n ]

2.2.2 Uniqueness of Solution

A system of n linear equations with n unknowns has a unique solution provided that the coefficient matrix is non-singular, that is to say: |A| ≠ 0. The rows and columns of a non-singular matrix are linearly independent, meaning that no row or column is a linear combination of the others. Should the coefficient matrix be singular, the equations may have infinitely many solutions or no solution at all, depending on the constant vector (this is determined by the rank of the system).

2.2.3 Linear Systems

The process of modelling problems using linear systems leads to equations of the
form Ax = b, where b is the input and x represents the response of the system. The
coefficient matrix A represents the characteristics of the system and is independent
of the input. That is to say if the input changes, the equations have to be solved
with a different b but the same A. Thus, it would be desirable to have an equation
solving algorithm that can handle any number of constant vectors with minimal
computational effort.

2.3 Methods of Solution

There are two classes of methods for solving systems of equations: direct and indirect methods. In direct methods the solution is obtained in a finite number of steps by manipulating the augmented matrix of the system. This is done by performing row operations.

Elementary operations on systems of equations


Several methods are based on manipulating the augmented form of the linear sys-
tem by elementary row operations. These operations are


• Interchanging two equations in a system gives a new system which is equiva-


lent to the old one. This operation is denoted (R i ) ↔ (R j ).

• If we multiply an equation with a non-zero number, we obtain a new system


equivalent to the old one. This operation is denoted (λR i ) → (R i ).

• If we replace one equation with the sum of two equations, we again obtain an
equivalent system. This operation is denoted (R i + λR j ) → (R i ).

Indirect methods start with a guess at the solution x, and then repeatedly refine the
solution until a certain convergence criterion is reached. Iterative methods are gen-
erally less efficient than their direct counterparts because of the large number of
iterations required. However, they do have significant computational advantages if
the coefficient matrix is very large and sparsely populated.

2.4 Overview of Direct Methods

In this course we will consider three major direct methods, each of which makes use of elementary row operations. These methods are listed in Table 2.1.

    Method                      Initial Form    Final Form
    Gaussian Elimination        Ax = b          Ux = c
    LU Decomposition            Ax = b          LUx = b
    Gauss-Jordan Elimination    Ax = b          Ix = c

    Table 2.1: Direct Methods

In the table above, U represents an upper triangular matrix, L a lower triangular matrix and I the identity matrix. Thus a 3 × 3 upper triangular matrix has the form,

    U = [ U_11  U_12  U_13 ]
        [  0    U_22  U_23 ]
        [  0     0    U_33 ]

while a 3 × 3 lower triangular matrix appears as,

    L = [ L_11   0     0   ]
        [ L_21  L_22   0   ]
        [ L_31  L_32  L_33 ]

Example 2.1: Determine whether the following matrix is singular:

    A = [ 2.1  −0.6   1.1 ]
        [ 3.2   4.7  −0.8 ]
        [ 3.1  −6.5   4.1 ]

Solution

    |A| = 2.1( 4.7 × 4.1 − (−0.8)(−6.5) ) + 0.6( 3.2 × 4.1 − (−0.8)(3.1) ) + 1.1( 3.2 × (−6.5) − 4.7 × 3.1 )
        = 2.1(14.07) + 0.6(15.6) − 1.1(35.37) = 0

Thus, because the determinant is zero, the matrix is singular.

2.4.1 Gauss Elimination Method

There are several methods due to Gauss. The most general one is the Gauss elimination method, a special case of which is Gauss-Jordan. Between these two are other methods which have been proposed to try to deal with the problem of error that sometimes arises when applying the Gaussian elimination method as it is.

The Gauss elimination algorithm has two steps:

1. Forward elimination: Put the equations in upper triangular form

2. Back substitution: Solve for the unknown solution vector

Consider the system of equations Ax = b:

    [ a_11  a_12  ...  a_1n ] [ x_1 ]   [ b_1 ]
    [ a_21  a_22  ...  a_2n ] [ x_2 ] = [ b_2 ]
    [  ...             ...  ] [ ... ]   [ ... ]
    [ a_n1  a_n2  ...  a_nn ] [ x_n ]   [ b_n ]

a system of n equations and n unknowns.
Forward elimination step

Step 1: Express the equation system in augmented form

    [A|b] = [ a_11  a_12  ...  a_1n | b_1 ]
            [ a_21  a_22  ...  a_2n | b_2 ]
            [  ...             ...  | ... ]
            [ a_n1  a_n2  ...  a_nn | b_n ]

Step 2: To eliminate the elements below a_11 we apply the sequence of row operations

    R_i ← R_i − m_i1 R_1,   m_i1 = a_i1 / a_11,   i = 2, 3, ..., n

We call a_11 the pivot and m_i1 the multiplier; clearly we require a_11 ≠ 0. If a_11 ≠ 0, the new augmented matrix obtained is,

    [ a_11  a_12      a_13      ...  a_1n      | b_1      ]
    [  0    a_22^(1)  a_23^(1)  ...  a_2n^(1)  | b_2^(1)  ]
    [ ...                            ...       | ...      ]
    [  0    a_n2^(1)  a_n3^(1)  ...  a_nn^(1)  | b_n^(1)  ]

The superscript (1) refers to coefficients which may have changed as a result of the row operations in the first step. Repeat the process to eliminate the elements below the diagonal element a_22^(1):

    R_i ← R_i − m_i2 R_2,   m_i2 = a_i2^(1) / a_22^(1),   i = 3, 4, ..., n


The element a_22^(1) is now the pivot:

    [ a_11  a_12      a_13      ...  a_1n      | b_1      ]
    [  0    a_22^(1)  a_23^(1)  ...  a_2n^(1)  | b_2^(1)  ]
    [  0     0        a_33^(2)  ...  a_3n^(2)  | b_3^(2)  ]
    [ ...                            ...       | ...      ]
    [  0     0        a_n3^(2)  ...  a_nn^(2)  | b_n^(2)  ]

The procedure is repeated until we have introduced zeros below the main diagonal in the first n − 1 columns. We then have the desired upper triangular form,

    [ a_11  a_12      a_13      ...  a_1n        | b_1        ]
    [  0    a_22^(1)  a_23^(1)  ...  a_2n^(1)    | b_2^(1)    ]
    [  0     0        a_33^(2)  ...  a_3n^(2)    | b_3^(2)    ]
    [ ...                            ...         | ...        ]
    [  0     0         0        ...  a_nn^(n−1)  | b_n^(n−1)  ]

Back Substitution

We may then use back substitution to obtain:

    x_n = b_n^(n−1) / a_nn^(n−1)   (2.3)

    x_i = ( b_i^(i−1) − Σ_{j=i+1}^{n} a_ij^(i−1) x_j ) / a_ii^(i−1),   i = n−1, ..., 1   (2.4)

Consider the following:

    [ 1   1   1 |  4 ]
    [ 2   3   1 |  9 ]
    [ 1  −1  −1 | −2 ]

R_2 ← R_2 − 2R_1,   R_3 ← R_3 − R_1

    [ 1   1   1 |  4 ]
    [ 0   1  −1 |  1 ]
    [ 0  −2  −2 | −6 ]

R_3 ← R_3 + 2R_2

    [ 1   1   1 |  4 ]
    [ 0   1  −1 |  1 ]
    [ 0   0  −4 | −4 ]
Writing the system in full,

x1 + x2 + x3 = 4
x2 − x3 = 1
−4x 3 = −4


We can now solve directly for x 3 , x 2 and x 1 ,

x3 = −4/(−4) = 1
x2 = 1 + x3 = 2
x1 = 4 − x2 − x3 = 1

Exercise set 6.1 Burden & Faires

(5a) - Use Gaussian Elimination to solve the following linear system,

x 1 − x 2 + 3x 3 = 2
3x 1 − 3x 2 + x 3 = −1
x1 + x2 = 3

Solution:

    [ 1  −1   3 |  2 ]
    [ 3  −3   1 | −1 ]
    [ 1   1   0 |  3 ]

R_2 ← R_2 − 3R_1

    [ 1  −1   3 |  2 ]
    [ 0   0  −8 | −7 ]
    [ 1   1   0 |  3 ]

R_3 ← R_3 − R_1

    [ 1  −1   3 |  2 ]
    [ 0   0  −8 | −7 ]
    [ 0   2  −3 |  1 ]

R_2 ↔ R_3

    [ 1  −1   3 |  2 ]
    [ 0   2  −3 |  1 ]
    [ 0   0  −8 | −7 ]

Using backward substitution,

    x_3 = −7/−8 = 0.875,   x_2 = (1/2)(1 + 3x_3) = 1.8125   and   x_1 = 2 + x_2 − 3x_3 = 1.1875

(9) - Given the linear system,

    2x_1 − 6α x_2 = 3
    3α x_1 − x_2 = −3/2

(a) - Find values of α for which the system has no solution,

When α = 1/3 there is no solution since the equation describes parallel lines.

(b) - Find values of α for which the system has an infinite number of solutions,


When α = −1/3 there is an infinite number of solutions, with x_1 = 3/2 − x_2 and x_2 arbitrary, because the two equations describe the same line.

(c) - Assuming a unique solution exists for a given α, find the solution,

If α ≠ ±1/3 then the unique solution is,

    x_1 = −3 / (2(3α − 1))   and   x_2 = −3 / (2(3α − 1))

2.4.2 Gaussian Elimination with Partial Pivoting

The GE method fails if the pivot a_ii is zero, and performs poorly if it is small: division by zero or by a very small number increases the error in the computation and may lead to an unexpected solution. This problem is overcome by the use of partial pivoting, which also helps reduce roundoff errors.

To perform partial pivoting we ensure that at each step the diagonal element a_ii has the largest absolute value possible: search the i-th column, from the diagonal down, for the element with the largest magnitude. This element becomes the new pivot, and the row in which it is found is swapped with the current pivot row.

Gaussian elimination with partial pivoting:

1. Find the entry in the first column with the largest absolute value. This entry is
called the pivot.

2. Perform a row interchange, if necessary, so that the pivot is in the first row.
Remember, the pivot is the entry in the first column with the largest absolute
value.

3. Use elementary row operations to reduce the remaining entries in the first
column to zero.

Example 2.1: Using the example,

    [ 0.0030  59.14 | 59.17 ]            [ 5.291  −6.130 | 46.78 ]
    [ 5.291  −6.130 | 46.78 ]  becomes   [ 0.0030  59.14 | 59.17 ]

Pivoting on 5.291 and using the multiplier

    m = 0.0030 / 5.291 = 0.000567

yields

    [ 5.291  −6.130 | 46.78 ]
    [  0      59.14 | 58.91 ]

from which we obtain

    x_2 = 0.9961
    x_1 = (46.78 + 6.130(0.9961)) / 5.291 = 52.89 / 5.291 = 9.996


Although not exact, this solution is closer to the expected solution, than when par-
tial pivoting was not applied.
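The strategy amounts to one extra step before each elimination: choose the row whose pivot-column entry is largest in magnitude. A sketch (ours, not the notes'):

```python
def gauss_solve_pivot(A, b):
    # Gaussian elimination with partial pivoting, followed by
    # back substitution. A and b are modified in place.
    n = len(A)
    for k in range(n - 1):
        # Pick the row (from k down) with the largest |entry| in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

On the system of Example 2.1, whose exact solution is x_1 = 10, x_2 = 1, the pivoted elimination in double precision recovers the solution essentially to machine accuracy.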

Exercise set 6.2 Burden & Faires:

(9a) - Solve the system using Gaussian Elimination,

0.03x 1 + 58.9x 2 = 59.2


5.31x 1 − 6.1x 2 = 47

Solution:

Because the pivot a_11 = 0.03 is small relative to the other entries in its column, we swap the rows. So,

    [ 0.03  58.9  | 59.2 ]
    [ 5.31  −6.10 | 47   ]

becomes,

    [ 5.31  −6.10 | 47   ]
    [ 0.03  58.9  | 59.2 ]

Then, with the small multiplier m = 0.03/5.31 = 0.00565,

    R_2 → R_2 − m R_1

    [ 5.31  −6.10 | 47    ]
    [  0    58.93 | 58.93 ]

    x_2 = 58.93 / 58.93 = 1.000   ⇒   5.31 x_1 = 47 + 6.1 x_2
                                  ⇒   x_1 = (47 + 6.1) / 5.31 = 10.00

Note: The exact solutions are x_1 = 10 and x_2 = 1. Without the row interchange, the huge multiplier 5.31/0.03 = 177 amplifies the roundoff error and yields the less accurate values x_2 = 1.005 and x_1 = 10.005.

Scaled partial pivoting

Closely related to partial pivoting is scaled partial pivoting. This strategy involves
scaling the coefficients in the system by dividing each row by the largest absolute
coefficient in that respective row. Matrix reduction then proceeds as usual, applying
partial pivoting where necessary.

Example 2.2
   
3 2 100 105 0.03 0.02 1.00 1.05
 −1 3 100 102  becomes  −0.01 0.03 1.00 1.02 
1 2 −1 2 0.50 1.00 −0.50 1.00


2.4.3 LU decomposition

The Gauss elimination method has the disadvantage that the right hand side vec-
tor b must be known in advance for the elimination step to be carried out. The LU
decomposition method involves only the coefficient matrix A and can hence be per-
formed independent of the vector b. LU decomposition method is closely related to
Gauss elimination and is usually the method used in most applications.
Consider the n × n linear system of equations

Ax = b (2.5)

The general principle is to factorize (or decompose) the matrix A into two triangular matrices as

    A = L U,

where L is lower triangular and U is upper triangular. The system

Ax = LUx = b

can then be solved by letting


Ux = y
so that
Ax = Ly = b.
First we solve the system
Ly = b (2.6)
by forward substitution for y, and then the system

Ux = y (2.7)

by backward substitution for x.


But first we need to LU decompose the matrix A. We consider three specific ap-
proaches for decomposing A.
Consider a 3 × 3 matrix

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ]
        [ a_31  a_32  a_33 ]

to be factorised to LU form as

    [ a_11  a_12  a_13 ]   [ l_11   0     0   ] [ u_11  u_12  u_13 ]
    [ a_21  a_22  a_23 ] = [ l_21  l_22   0   ] [  0    u_22  u_23 ]
    [ a_31  a_32  a_33 ]   [ l_31  l_32  l_33 ] [  0     0    u_33 ]

Step 1: Start by letting u_11 = a_11.

Step 2: Multiplying Row 1 of L by each column of U and equating to the corresponding entry in A yields

    l_11 u_11 = l_11 a_11 = a_11  ⇒  l_11 = 1
    l_11 u_12 = a_12              ⇒  u_12 = a_12
    l_11 u_13 = a_13              ⇒  u_13 = a_13


Thus the first row of U is a copy of the first row of A.

Step 3: Now multiplying Row 2 of L by each column of U and equating to the corresponding entries in A yields

    l_21 u_11 = l_21 a_11 = a_21  ⇒  l_21 = a_21 / a_11
    l_21 u_12 + l_22 u_22 = a_22  ⇒  ?
    l_21 u_13 + l_22 u_23 = a_23  ⇒  ?

Hence we run into difficulty determining further coefficients of L and U. To overcome this we present specific approaches in which either l_ii = 1 or u_ii = 1.

Setting l_ii = 1 leads to Doolittle's decomposition method.
The case u_ii = 1 leads to Crout's method.

2.4.4 Doolittle decomposition

Setting l_22 = 1 in Step 3 above leads to

    l_21 u_12 + u_22 = a_22  ⇒  u_22 = a_22 − l_21 u_12 = a_22 − (a_21/a_11) a_12
    l_21 u_13 + u_23 = a_23  ⇒  u_23 = a_23 − (a_21/a_11) a_13

Thus Row 2 of U and of L is determined. Similarly, setting l_33 = 1, multiplying Row 3 of L with U, and equating yields

    l_31 u_11 = a_31                        ⇒  l_31 = a_31 / a_11
    l_31 u_12 + l_32 u_22 = a_32            ⇒  l_32 = (a_32 − l_31 u_12) / u_22 = ( a_32 − (a_31/a_11) a_12 ) / u_22
    l_31 u_13 + l_32 u_23 + 1 · u_33 = a_33 ⇒  u_33 = a_33 − l_31 u_13 − l_32 u_23 = a_33 − Σ_{k=1}^{2} l_3k u_k3

Hence the LU factorisation is complete. (Note the alternation between the computation of u_ij and l_ij.)

The general formula for Doolittle's factorisation of the general system

    [ a_11  a_12  ...  a_1n ]   [ 1     0    ...  0 ] [ u_11  u_12  ...  u_1n ]
    [ a_21  a_22  ...  a_2n ] = [ l_21  1    ...  0 ] [  0    u_22  ...  u_2n ]
    [  ...             ...  ]   [  ...          ... ] [  ...             ... ]
    [ a_n1  a_n2  ...  a_nn ]   [ l_n1  l_n2 ...  1 ] [  0     0    ...  u_nn ]

is

    l_ij = ( a_ij − Σ_{k=1}^{j−1} l_ik u_kj ) / u_jj,   i = j+1, ..., n   (2.8)

and

    u_ij = a_ij − Σ_{k=1}^{i−1} l_ik u_kj.   (2.9)


An alternative way to compute U is via GE, and then it is easy to show that

    l_ij = ( a_ij − Σ_{k=1}^{j−1} l_ik u_kj ) / u_jj,   i = j+1, ..., n   (2.10)

In fact, when no partial pivoting is used, the values of l_ij are just the multipliers used in GE.

Example 2.3 Use Doolittle’s LU decomposition to solve the system

2x 1 − 3x 2 + x 3 = 7
x 1 − x 2 − 2x 3 = −2
3x 1 + x 2 − x 3 = 0

Solution

    A = [ 2  −3   1 ]   [  1     0    0 ] [ 2  u_12  u_13 ]
        [ 1  −1  −2 ] = [ l_21   1    0 ] [ 0  u_22  u_23 ]
        [ 3   1  −1 ]   [ l_31  l_32  1 ] [ 0   0    u_33 ]

For Row 2:

    2 l_21 = 1           ⇒  l_21 = 1/2
    −3 l_21 + u_22 = −1  ⇒  u_22 = −1 + 3(1/2) = 1/2
    l_21 + u_23 = −2     ⇒  u_23 = −2 − 1/2 = −5/2

For Row 3 we have

    2 l_31 = 3                     ⇒  l_31 = 3/2
    −3 l_31 + (1/2) l_32 = 1       ⇒  l_32 = 2(1 + 3(3/2)) = 11
    l_31 − (5/2) l_32 + u_33 = −1  ⇒  u_33 = 25

Thus

    L = [  1    0   0 ] ,   U = [ 2  −3     1   ]
        [ 1/2   1   0 ]         [ 0  1/2  −5/2  ]
        [ 3/2  11   1 ]         [ 0   0    25   ]

Now letting y = Ux, we have

    Ly = [  1    0   0 ] [ y_1 ]   [  7 ]
         [ 1/2   1   0 ] [ y_2 ] = [ −2 ] ,
         [ 3/2  11   1 ] [ y_3 ]   [  0 ]

leading to

    y_1 = 7
    y_2 = −2 − 7/2 = −11/2
    y_3 = 0 − (3/2)(7) − 11(−11/2) = 50

and finally,

    [ 2  −3    1   ] [ x_1 ]   [   7   ]
    [ 0  1/2  −5/2 ] [ x_2 ] = [ −11/2 ] ,
    [ 0   0    25  ] [ x_3 ]   [  50   ]

yielding the required solution using back substitution

    x_3 = 50/25 = 2
    x_2 = 2( −11/2 + (5/2)(2) ) = −1
    x_1 = (1/2)( 7 − 2 + 3(−1) ) = 1
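The recurrences (2.8)–(2.9), followed by the forward and back substitutions on (2.6)–(2.7), can be sketched as follows (our own minimal implementation, with no pivoting; the names are ours):

```python
def doolittle(A):
    # Doolittle LU factorisation (l_ii = 1). Returns (L, U).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U, eq. (2.9)
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L, eq. (2.8)
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    # Ly = b by forward substitution, then Ux = y by back substitution.
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

For the system of Example 2.3 this reproduces u_22 = 1/2, u_33 = 25 and the solution x = (1, −1, 2); once A is factorised, lu_solve can be reused for any number of right-hand sides b.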

2.4.5 Crout decomposition

A similar procedure is due to Crout. In this case u_ii = 1, i = 1, 2, ..., n, and A is decomposed as

    [ a_11  a_12  ...  a_1n ]   [ l_11   0    ...   0   ] [ 1  u_12  ...  u_1n ]
    [ a_21  a_22  ...  a_2n ] = [ l_21  l_22  ...   0   ] [ 0   1    ...  u_2n ]
    [  ...             ...  ]   [  ...              ... ] [ ...            ... ]
    [ a_n1  a_n2  ...  a_nn ]   [ l_n1  l_n2  ...  l_nn ] [ 0   0    ...   1  ]

2.4.6 Cholesky Factorization

For symmetric, positive definite matrices, factorisation can be done by Cholesky’s


method.
Definition
A matrix is symmetric positive definite if

A = AT , and xT Ax > 0, for all x ̸= 0

Quick checks for positive definiteness

• A symmetric positive definite matrix has all positive (real) eigenvalues.

• A symmetric matrix A is positive definite if and only if each leading principal submatrix has a positive determinant.

Example

For the matrix

    A = [  2  −1   0 ]
        [ −1   2  −1 ] ,
        [  0  −1   2 ]

the submatrix A_1 = [2] has |A_1| = 2 > 0. Also,

    |A_2| = |  2  −1 | = 3 > 0,
            | −1   2 |

and |A_3| = |A| = 4 > 0. Therefore A is positive definite.

If A is symmetric positive definite, we may take U = L^T, and hence

    A = LU = L L^T


By following a procedure similar to that used in the previous methods, the elements of L can be obtained from solving

    [ l_11   0     0   ] [ l_11  l_21  l_31 ]   [ a_11  a_12  a_13 ]
    [ l_21  l_22   0   ] [  0    l_22  l_32 ] = [ a_21  a_22  a_23 ]
    [ l_31  l_32  l_33 ] [  0     0    l_33 ]   [ a_31  a_32  a_33 ]

This leads to

    l_11^2 = a_11     ⇒  l_11 = √a_11
    l_11 l_21 = a_12  ⇒  l_21 = a_12 / l_11  (= a_21 / l_11)
    l_11 l_31 = a_13  ⇒  l_31 = a_13 / l_11  (= a_31 / l_11)

Thus the first column of L (or first row of L^T) is found. Continuing in a similar manner we obtain the other two columns of L as

    l_21^2 + l_22^2 = a_22            ⇒  l_22 = √( a_22 − l_21^2 )
    l_31 l_21 + l_32 l_22 = a_32      ⇒  l_32 = ( a_32 − l_31 l_21 ) / l_22
    l_31^2 + l_32^2 + l_33^2 = a_33   ⇒  l_33 = √( a_33 − (l_31^2 + l_32^2) )

Thus in general, the recurrence relations are

    l_11 = √a_11
    l_i1 = a_i1 / l_11,   i = 2, ..., n
    l_ii = ( a_ii − Σ_{k=1}^{i−1} l_ik^2 )^{1/2},   i = 2, ..., n
    l_ij = ( a_ij − Σ_{k=1}^{j−1} l_jk l_ik ) / l_jj,   j = 2, ..., i−1,  i ≥ 2

Example 2.4

    A = [  4   2  14 ]        l_11 = √4 = 2
        [  2  17  −5 ]        l_21 = 2/2 = 1,   l_31 = 14/2 = 7
        [ 14  −5  83 ]        l_22 = √(17 − 1) = 4
                              l_32 = (−5 − 7(1))/4 = −3
                              l_33 = √(83 − 49 − 9) = 5

Therefore

    L = [ 2   0  0 ] ,   and   L^T = [ 2  1   7 ]
        [ 1   4  0 ]                 [ 0  4  −3 ] .
        [ 7  −3  5 ]                 [ 0  0   5 ]

For the i-th row,

    l_ij = ( a_ij − Σ_{k=1}^{j−1} l_jk l_ik ) / l_jj,   for j = 1, 2, ..., i−1   (2.11)

and

    l_ii = √( a_ii − Σ_{k=1}^{i−1} l_ik^2 )


Example 2.5: LU decompose the 3 × 3 symmetric matrix

    [  4   2  14 ]
    [  2  17  −5 ]
    [ 14  −5  83 ]

Solution

    [  4   2  14 ]   [ l_11   0     0   ] [ l_11  l_21  l_31 ]
    [  2  17  −5 ] = [ l_21  l_22   0   ] [  0    l_22  l_32 ]
    [ 14  −5  83 ]   [ l_31  l_32  l_33 ] [  0     0    l_33 ]

Solve to get l_11 = 2, l_21 = 1, l_31 = 7, l_22 = 4, l_32 = −3, l_33 = 5.
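The recurrence relations above translate directly into code (a sketch of ours, assuming A is symmetric positive definite so that every square root is real):

```python
import math

def cholesky(A):
    # A = L L^T for a symmetric positive definite matrix A.
    # Returns the lower triangular factor L.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L
```

For the matrix of Examples 2.4/2.5 this returns the factor with rows (2, 0, 0), (1, 4, 0) and (7, −3, 5).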

2.4.7 Tridiagonal systems

A tridiagonal system is one with a bandwidth of 3. For such a matrix the LU decomposition simplifies greatly and, in general, requires no pivoting. The coefficient matrix A of a tridiagonal system can be expressed generally as

    A = [ a_11  a_12                                   ]
        [ a_21  a_22  a_23                             ]
        [       a_32  a_33  a_34                       ]
        [              ...   ...   ...                 ]
        [         a_{n−1,n−2}  a_{n−1,n−1}  a_{n−1,n}  ]
        [                      a_{n,n−1}    a_nn       ]

where the blanks represent elements whose value is zero. We LU decompose A: set A = LU, where

    L = [ 1                               ]
        [ l_21  1                         ]
        [       l_32  1                   ]
        [             ...  ...            ]
        [          l_{n−1,n−2}  1         ]
        [                  l_{n,n−1}  1   ]

    U = [ u_11  u_12                              ]
        [       u_22  u_23                        ]
        [             u_33  u_34                  ]
        [                    ...   ...            ]
        [             u_{n−1,n−1}  u_{n−1,n}      ]
        [                          u_nn           ]

So if we multiply L and U and match elements we obtain

    a_11 = u_11
    a_{i,i+1} = u_{i,i+1},                  i = 1, 2, ..., n−1
    a_{i,i−1} = l_{i,i−1} u_{i−1,i−1},      i = 2, 3, ..., n
    a_ii = l_{i,i−1} u_{i−1,i} + u_ii,      i = 2, 3, ..., n


Thus the elements of L and U can be calculated from

    u_11 = a_11   (2.12)
    u_{i,i+1} = a_{i,i+1},   i = 1, ..., n−1   (2.13)
    l_{i,i−1} = a_{i,i−1} / u_{i−1,i−1},   i = 2, ..., n   (2.14)
    u_ii = a_ii − l_{i,i−1} u_{i−1,i},   i = 2, ..., n   (2.15)

Remark: If u i i = 0 for any i then the method fails.


Once the LU decomposition is complete we obtain the solution to the system
LU x = g by solving the two systems

L y = g, Ux=y

Due to the structure of L and U this simplifies to

    y_1 = g_1
    y_i = g_i − l_{i,i−1} y_{i−1},   i = 2, ..., n   (2.16)

and

    x_n = y_n / u_nn
    x_i = ( y_i − u_{i,i+1} x_{i+1} ) / u_ii,   i = n−1, ..., 1   (2.17)
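Storing only the three diagonals, the relations (2.12)–(2.17) give an O(n) solver (often called the Thomas algorithm). A sketch with our own argument names — sub for the sub-diagonal, diag for the main diagonal, sup for the super-diagonal, and g for the right-hand side:

```python
def solve_tridiagonal(sub, diag, sup, g):
    # LU decomposition of a tridiagonal matrix, eqs. (2.12)-(2.15),
    # followed by the substitutions (2.16)-(2.17).
    # sub: a_{i,i-1} (length n-1), diag: a_ii (n), sup: a_{i,i+1} (n-1).
    n = len(diag)
    u = [0.0] * n            # diagonal of U
    l = [0.0] * (n - 1)      # sub-diagonal of L
    u[0] = diag[0]
    for i in range(1, n):
        l[i - 1] = sub[i - 1] / u[i - 1]
        u[i] = diag[i] - l[i - 1] * sup[i - 1]
    # Forward substitution: Ly = g
    y = [0.0] * n
    y[0] = g[0]
    for i in range(1, n):
        y[i] = g[i] - l[i - 1] * y[i - 1]
    # Back substitution: Ux = y
    x = [0.0] * n
    x[n - 1] = y[n - 1] / u[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - sup[i] * x[i + 1]) / u[i]
    return x
```

For instance, the 3 × 3 system with diagonals (−1, 2, −1) and right-hand side (1, 0, 1) has the solution x = (1, 1, 1).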

2.5 Iterative methods for linear algebraic equations

For large linear systems, full matrix factorization becomes impractical. Iterative methods can often be used in such circumstances. These schemes are also called indirect because the solution is obtained from successive approximations. Here we consider several such schemes.
An iterative solution scheme for a systems of equations can always be written in
the form:
x(i +1) = Bx(i ) + c, i = 0, 1, 2, . . . (2.18)
where B is an iteration matrix, c is a constant vector and i is an iteration counter.
We start with an initial guess x(0) of the true solution x of the system A x = b. Using
the iterative scheme (2.18) we generate a sequence of vectors x(1) , x(2) , x(3) , . . . each
of which is a better approximation to the true solution than the previous one. This
is called iterative refinement.
The iterative refinement is stopped when two successive approximations are
found to differ, in some sense, by less than a given tolerance. We shall use the stopping
criterion:

    max_{1<=j<=n} |x_j^(i) - x_j^(i-1)| / |x_j^(i)| < ε,    i > 0    (2.19)
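As a small sketch, the stopping test (2.19) might be coded as follows (Python; the helper name is illustrative, and the test assumes no component of the current iterate is zero, since (2.19) divides by it):

```python
def converged(x_new, x_old, eps):
    # Relative change test (2.19): max_j |x_j^(i) - x_j^(i-1)| / |x_j^(i)| < eps.
    # Assumes every entry of x_new is non-zero.
    return max(abs(a - b) / abs(a) for a, b in zip(x_new, x_old)) < eps

stop_now = converged([3.0001, 4.0002], [3.0, 4.0], 1e-2)   # tiny relative change
keep_going = converged([1.0, 2.0], [0.5, 2.0], 1e-2)       # large relative change
```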

Consider an n × n system of equations A x = b where A is non-singular and the
diagonal elements of A are non-zero. Define

• L to be the strictly lower triangular part of A.


• U to be the strictly upper triangular part of A.

• D to be the diagonal part of A.

i.e.,

    A = D + L + U

where L, D and U are defined by

    L_ij = { a_ij, i > j      D_ij = { a_ij, i = j      U_ij = { a_ij, i < j
           { 0,    i <= j            { 0,    i != j            { 0,    i >= j

For example a 3 × 3 matrix can be represented as

    [ a_11 a_12 a_13 ]   [ 0    0    0 ]   [ a_11 0    0    ]   [ 0 a_12 a_13 ]
    [ a_21 a_22 a_23 ] = [ a_21 0    0 ] + [ 0    a_22 0    ] + [ 0 0    a_23 ]
    [ a_31 a_32 a_33 ]   [ a_31 a_32 0 ]   [ 0    0    a_33 ]   [ 0 0    0    ]
Hence substituting A = L + D + U in A x = b, we get
(L + D + U)x = b
We can then re-arrange the equation to get
D x = −(L + U) x + b (2.20)
This is the basis for Jacobi’s method.
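The splitting A = L + D + U can be sketched directly (Python; the function name and example matrix are illustrative only):

```python
def split_LDU(A):
    # Return strictly lower L, diagonal D, strictly upper U with A = L + D + U.
    n = len(A)
    L = [[A[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]
    D = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[A[i][j] if i < j else 0.0 for j in range(n)] for i in range(n)]
    return L, D, U

A3 = [[4.0, 3.0, 0.0],
      [3.0, 4.0, -1.0],
      [0.0, -1.0, 4.0]]
L3, D3, U3 = split_LDU(A3)
```

Summing the three parts element-wise recovers A3 exactly, which is the identity used to derive (2.20).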

2.5.1 Jacobi’s method

Consider a system of equations Ax = b where A is an n × n matrix. Solving the i-th
equation for x_i we get

    x_1 = [ b_1 - (a_12 x_2 + a_13 x_3 + ... + a_1n x_n) ] / a_11
    x_2 = [ b_2 - (a_21 x_1 + a_23 x_3 + ... + a_2n x_n) ] / a_22            (2.21)
      ...
    x_n = [ b_n - (a_n1 x_1 + a_n2 x_2 + ... + a_{n,n-1} x_{n-1}) ] / a_nn
In matrix form this is:
x = D−1 [b − (L + U)x] (2.22)
We can write equation (2.22) in iterative form as:
x(i +1) = D−1 [b − (L + U)x(i ) ] (2.23)
which is clearly in the standard form (2.18) with B_J = -D^{-1}(L + U) and c = D^{-1} b.
This iteration procedure is called Jacobi's method.
For computer purposes
It is easier to iterate (2.23) in the form (2.20) as

    D x^(i+1) = b - (L + U) x^(i)                                            (2.24)

By letting y = b - (L + U) x^(i), each iteration involves 2 steps:

• Computing y = b - (L + U) x^(i), then

• Solving D x^(i+1) = y (trivial, since D is diagonal: x_j^(i+1) = y_j / a_jj)
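The two steps above can be sketched as a single Jacobi sweep (Python sketch; names and the example system are illustrative). Note that the new vector is built entirely from the old one:

```python
def jacobi_step(A, b, x):
    # One sweep of (2.23): form y = b - (L+U)x, then divide by the diagonal.
    n = len(b)
    x_new = [0.0] * n
    for j in range(n):
        s = sum(A[j][k] * x[k] for k in range(n) if k != j)   # row j of (L+U)x
        x_new[j] = (b[j] - s) / A[j][j]                       # solve D x_new = y
    return x_new

# Small illustration: a system with exact solution (3, 4, -5)^T
A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x = [0.0, 0.0, 0.0]
for _ in range(60):
    x = jacobi_step(A, b, x)
```

Because `x_new` depends only on the previous iterate, the n component updates are independent of one another, which is why Jacobi parallelizes naturally.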


2.5.2 Gauss–Seidel method

The Gauss–Seidel iteration uses the most recent estimates at each step in the hope
of achieving faster convergence:

    x_1^(i+1) = [ b_1 - (a_12 x_2^(i) + a_13 x_3^(i) + ... + a_1n x_n^(i)) ] / a_11
    x_2^(i+1) = [ b_2 - (a_21 x_1^(i+1) + a_23 x_3^(i) + ... + a_2n x_n^(i)) ] / a_22    (2.25)
      ...
    x_n^(i+1) = [ b_n - (a_n1 x_1^(i+1) + a_n2 x_2^(i+1) + ... + a_{n,n-1} x_{n-1}^(i+1)) ] / a_nn

or in component form

    x_j^(i+1) = (1 / a_jj) [ b_j - Σ_{k<j} a_jk x_k^(i+1) - Σ_{k>j} a_jk x_k^(i) ]       (2.26)

In matrix form

    x^(i+1) = D^{-1} [ b - L x^(i+1) - U x^(i) ]                                         (2.27)

where the most recent estimates are used throughout. For this method the iteration
matrix is

    B_GS = -(D + L)^{-1} U   and   c = (D + L)^{-1} b.
For computer purposes
We can similarly use the form

    (D + L) x^(i+1) = b - U x^(i)                                                        (2.28)

and let y = b - U x^(i), then carry out each iteration in 2 steps:

• Computing y = b - U x^(i), then

• Solving (D + L) x^(i+1) = y by forward substitution.
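A corresponding Gauss–Seidel sweep can be sketched as follows (Python; names and the example system are illustrative). The in-place update is what distinguishes it from Jacobi:

```python
def gauss_seidel_step(A, b, x):
    # One sweep of (2.26): entries are overwritten as they are computed, so
    # x_1^(i+1), ..., x_{j-1}^(i+1) are already in use when x_j^(i+1) is formed.
    n = len(b)
    x = list(x)                  # copy so the caller's vector is untouched
    for j in range(n):
        s = sum(A[j][k] * x[k] for k in range(n) if k != j)
        x[j] = (b[j] - s) / A[j][j]
    return x

# Same illustrative system as before, exact solution (3, 4, -5)^T
A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x_gs = [0.0, 0.0, 0.0]
for _ in range(30):
    x_gs = gauss_seidel_step(A, b, x_gs)
```

Half as many sweeps as the Jacobi sketch suffice here for comparable accuracy, consistent with the convergence remark later in this section.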

Example 2.6 Approximate the solution to the system

4x 1 + 3x 2 = 24
3x 1 + 4x 2 − x 3 = 30
−x 2 + 4x 3 = −24

by performing 3 iterations of the (i) Jacobi method (ii) Gauss–Seidel method. (The
exact solution is x = (3, 4, −5)^T.)


2.5.3 Convergence criteria for Jacobi and Gauss-Seidel methods

Convergence of an iterative method means the successive approximations will tend
to a particular vector x as i → ∞.

Theorem 1

For any real x^(0), the sequence {x^(k)}_{k=0}^∞ defined by (2.18) converges to the
unique solution of x = Bx + c if and only if ρ(B) < 1, where ρ(B) is the spectral radius
of B; in particular, ∥B∥ < 1 for any matrix norm is a sufficient condition.

(See the Appendix for the definition of ∥·∥, the norm of a matrix.) A special condition
holds for diagonally dominant matrices:

Theorem 2

A sufficient condition for convergence of the Jacobi and the Gauss–Seidel meth-
ods is that the coefficient matrix is diagonally dominant:

    |a_ii| > Σ_{j≠i} |a_ij|,    for all i

Since diagonal dominance is only a sufficient condition, systems will sometimes
converge even if the coefficient matrix is not diagonally dominant. Occasionally, it is
possible to re-arrange a system of equations to give a diagonally dominant coefficient
matrix.
Example 2.7

    A = [ 1  3 -5 ]
        [ 1  4  1 ]
        [ 4 -1  2 ]

We have

    i = 1 :  |1| > |3| + |-5| = 8   (not true)
    i = 2 :  |4| > |1| + |1|  = 2   (true)
    i = 3 :  |2| > |4| + |-1| = 5   (not true)

Clearly the inequalities are not satisfied for i = 1 and i = 3, so this matrix is not
diagonally dominant. If we re-arrange A by swapping rows 1 and 3 to get

    A = [ 4 -1  2 ]
        [ 1  4  1 ]
        [ 1  3 -5 ]

then

    i = 1 :  |4| > |-1| + |2| = 3   (true)
    i = 2 :  |4| > |1| + |1|  = 2   (true)
    i = 3 :  |-5| > |1| + |3| = 4   (true)

i.e. the re-arranged A is diagonally dominant.
Note:
If both the Jacobi and the GS iterations converge, the GS method typically converges
about twice as fast as the Jacobi method.
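The row-by-row check of Example 2.7 can be sketched as a small helper (Python; the function name is illustrative):

```python
def is_diagonally_dominant(A):
    # Theorem 2's condition: |a_ii| > sum over j != i of |a_ij|, for every row i.
    return all(abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
               for i, row in enumerate(A))

before = is_diagonally_dominant([[1, 3, -5], [1, 4, 1], [4, -1, 2]])   # Example 2.7, as given
after = is_diagonally_dominant([[4, -1, 2], [1, 4, 1], [1, 3, -5]])    # rows 1 and 3 swapped
```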


2.5.4 Relaxation method

This is a method used to achieve faster convergence, or in some cases to obtain
convergence for systems that do not converge under Gauss–Seidel. The method takes
a weighted average of x^(i) and the Gauss–Seidel update x_GS^(i+1):

    x^(i+1) = (1 - ω) x^(i) + ω x_GS^(i+1),    0 < ω < 2

In component form

    x_j^(i+1) = (1 - ω) x_j^(i) + (ω / a_jj) [ b_j - Σ_{k<j} a_jk x_k^(i+1) - Σ_{k>j} a_jk x_k^(i) ]    (2.29)

where ω ∈ (0, 2) is a weight factor, called the relaxation coefficient. It can be
shown that the iteration diverges for ω ∉ (0, 2). ω is chosen to accelerate convergence:

• If ω = 1, we recover the Gauss–Seidel iteration.
• If 1 < ω < 2, the scheme is called Successive Over-relaxation (SOR).
• If 0 < ω < 1, the scheme is called Successive under-relaxation.
Equation (2.29) can be re-arranged as

    a_jj x_j^(i+1) + ω Σ_{k<j} a_jk x_k^(i+1) = ω b_j + (1 - ω) a_jj x_j^(i) - ω Σ_{k>j} a_jk x_k^(i)

which in matrix form is

    (D + ωL) x^(i+1) = ω b + [(1 - ω) D - ω U] x^(i),

or

    x^(i+1) = (D + ωL)^{-1} ( ω b + [(1 - ω) D - ω U] x^(i) )

Therefore the iteration matrix and the constant vector are

    B_ω = (D + ωL)^{-1} [(1 - ω) D - ω U],    c = (D + ωL)^{-1} ω b
To obtain an optimum value of ω it can be shown that, if λ is the largest eigenvalue
in magnitude of B_J = -D^{-1}(L + U), then

    ω_opt = 2 / (1 + sqrt(1 - λ^2)).

For large systems, determining λ may be complicated; however, techniques do exist
for its estimation.
With an optimal value of ω (usually ω > 1), the convergence rate of SOR can be
an order of magnitude higher than that of GS.
Example 2.8 For the same system used for the Jacobi and Gauss–Seidel methods,
(2.29) with ω = 1.25 gives

    x_1^(i+1) = (1 - 1.25) x_1^(i) - (3(1.25)/4) x_2^(i) + 24(1.25)/4
    x_2^(i+1) = -(3(1.25)/4) x_1^(i+1) + (1 - 1.25) x_2^(i) + (1.25/4) x_3^(i) + 30(1.25)/4
    x_3^(i+1) = (1.25/4) x_2^(i+1) + (1 - 1.25) x_3^(i) - 24(1.25)/4

etc.


If x^(0) = (1, 1, 1)^T, five iterations lead to

    [ x_1 ]   [  3.00037211 ]
    [ x_2 ] = [  4.0029250  ]
    [ x_3 ]   [ -5.0057135  ]
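The SOR sweep (2.29) can be sketched as follows (Python; names are illustrative, and the demo uses the system of Example 2.6 with ω = 1.25). With ω = 1 it reduces to Gauss–Seidel:

```python
def sor_step(A, b, x, w):
    # One sweep of (2.29): weighted average of the old entry and the
    # Gauss-Seidel update, with newer entries used as soon as available.
    n = len(b)
    x = list(x)
    for j in range(n):
        s = sum(A[j][k] * x[k] for k in range(n) if k != j)
        x[j] = (1.0 - w) * x[j] + w * (b[j] - s) / A[j][j]
    return x

# System of Example 2.6 with omega = 1.25, starting from (1, 1, 1)^T
A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x_sor = [1.0, 1.0, 1.0]
for _ in range(50):
    x_sor = sor_step(A, b, x_sor, 1.25)
```

After enough sweeps the iterate settles at the exact solution (3, 4, -5)^T; trying the same loop with w = 1.0 reproduces plain Gauss–Seidel for comparison.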

Computer exercise - Jacobi Method

1. Define the matrices A, L, D, and U for this example.

2. Define b. (It should be a column vector)

3. Compute the true solution as x = A\b.

4. Initialise x to be (1, 1, 1)T .

5. Decide how many iterations you want to perform, say N .


The following loop will compute N iterations:

for i=1:N
y=b-(L+U)*x(:,i);
x(:,i+1)=D\y
end

6. Check if the iterations have converged to a single solution. If not, note the last
approximation resulting from the computation.

7. Clear the variable x and initialise it as (0, 0, 0)T .

8. Repeat the loop for the same number of iterations.

9. Have the iterations converged to a single solution? Check the last approxima-
tion for this round of computations.

10. Which initial value has led to a better approximation of the true solution?

Computer exercise - Gauss-Seidel Method

1. Follow the steps used for the Jacobi method, except that the loop will now be:

for i=1:N
y=b-U*x(:,i);
x(:,i+1)=(L+D)\y
end

2. Compare the speed of convergence of the Jacobi and Gauss–Seidel methods.
(It should be evident that the Gauss–Seidel method is faster.)


2.6 Tutorial 1
1. Use Gauss elimination to solve the following systems of equations

(a) x − 3y + z = 4 (b) x 1 + x 2 + x 3 = 4
2x − 8y + 8z = −2 2x 1 + 3x 2 + x 3 = 9
−6x + 3y − 15z = 9 x 1 − x 2 − x 3 = −2
(Ans: z = −2, y = −1, x = 3) (Ans: x 3 = 1, x 2 = 2, x 1 = 1)

(c) (Use six significant figures in your computations.)

3x − 0.1y − 0.2z = 7.85


0.1x + 7y − 0.3z = −19.3
0.3x − 0.2y + 10z = 71.4
(Ans: z = 7.00003, y = −2.50000, x = 3.00000)

2. For what values of a and b does the system

u + 2v + 3w = 7
2u + 3v + 4w = 10
3u + 5v + a = b

have (i) no solution, (ii) infinitely many solutions, (iii) a unique solution.

3. Use GE with partial pivoting to solve the following

(a)

x2 − x3 = 1
x 1 − x 2 + 3x 3 = 2
2x 1 + x 2 − x 3 = 3

(Ans: x = [1 2 1]T )

(b)
    [ -0.002  4.000  4.000 ] [ x_1 ]   [  7.998 ]
    [ -2.000  2.906 -5.387 ] [ x_2 ] = [ -4.481 ]
    [  3.000 -4.031 -3.112 ] [ x_3 ]   [ -4.143 ]

(Ans: [x_1 x_2 x_3]^T = [1 1 1]^T)


(c)
    [ 7 35 1 ] [ x_1 ]   [ 10.6 ]
    [ 3 15 3 ] [ x_2 ] = [  4.8 ]
    [ 3 20 5 ] [ x_3 ]   [  5.5 ]

(Ans: x_3 = 0.1, x_2 = 0.1, x_1 = 1.0)

4. Solve the following systems of equations using LU decomposition of A.

(a)
    [ 6 -2 0 ] [ x_1 ]   [ 14 ]
    [ 9 -1 1 ] [ x_2 ] = [ 21 ]
    [ 3  7 5 ] [ x_3 ]   [  9 ]

(Ans: x = [2, −1, 2]T .)

(b)
    [  4 2 3 ] [ x_1 ]   [  78 ]
    [ 12 9 6 ] [ x_2 ] = [ 240 ]
    [  8 8 6 ] [ x_3 ]   [ 172 ]

(Ans: [x 1 x 2 x 3 ] = [16 4 2]T .)

5. Apply the LU tridiagonal method to the following systems

    [ 1 2 0 0 ] [ x_1 ]   [ c_1 ]
    [ 3 1 2 0 ] [ x_2 ] = [ c_2 ]
    [ 0 3 1 2 ] [ x_3 ]   [ c_3 ]
    [ 0 0 3 1 ] [ x_4 ]   [ c_4 ]

where (i) c = [3 6 6 4]^T, and (ii) c = [2 8 -7 -1]^T,


(Ans: (i) [1 1 1 1]T and (ii) [2 0 1 − 4]T )

6. Using Jacobi and GS methods perform 5 iterations on the system

3x 1 + 3x 2 − 7x 3 = 4
3x 1 − x 2 + x 3 = 1
3x 1 + 6x 2 + 2x 3 = 0

using the initial approximation [1 1 1]T .

(a) Are the results converging?


(b) Check to see if the matrix is diagonally dominant.
(c) If not diagonally dominant, re-arrange it to make it diagonally dominant
and repeat the iterations. Are the results convergent this time?
Exact solution: x = [0.4444, -0.0833, -0.4167]


7. Perform the first three Jacobi and GS iterations for the solution of the following
system starting from (0, 0, 0, 0, 0)

    [  8 -2  1  0  0 ] [ x_1 ]   [ 7.2 ]
    [ -2  8 -2  1  0 ] [ x_2 ]   [ 2.1 ]
    [  1 -2  8 -2  1 ] [ x_3 ] = [ 1.6 ]
    [  0  1 -2  8 -2 ] [ x_4 ]   [ 2.1 ]
    [  0  0  1 -2  8 ] [ x_5 ]   [ 7.2 ]

8. Perform the GS and SOR (ω = 1.25) iterations on the system

4x − 3y + 7z = 7
4x − 8y + z = 21
−2x + y + 5z = 15

starting with [1 1 1]^T. Compare your solutions with the true solution, which
you can find using the Gauss elimination method.

9. Use the Gauss–Seidel method to obtain the solution of the system.

3x 1 − 0.1x 2 − 0.2x 3 = 7.85


0.1x 1 + 7x 2 − 0.3x 3 = −19.3
0.3x 1 − 0.2x 2 + 10x 3 = 71.4

The true solution is x 1 = 3, x 2 = −2.5, x 3 = 7. Use the stopping criterion

    e_i = max_{1<=j<=3} |x_j^(i) - x_j^(i-1)| / |x_j^(i)| < 10^{-2}

10. Consider the system of equations


    
    [ 1 4 1 ] [ x_1 ]   [ 2 ]
    [ 4 1 0 ] [ x_2 ] = [ 1 ]
    [ 0 1 4 ] [ x_3 ]   [ 3 ]

Perform 5 iterations on this system using the Gauss–Seidel method. First ensure
that A is diagonally dominant; if not, re-arrange it to make it diagonally dominant.
