
HACETTEPE UNIVERSITY

DEPARTMENT OF ENVIRONMENTAL
ENGINEERING

CEV 206 – NUMERICAL ANALYSIS

LECTURE 7

Dr. Ece Kendir Çakmak


LU Decomposition
 Gauss Elimination: inefficient when solving multiple sets of equations with the same coefficients [A] but different b's.
 LU decomposition separates the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side {B}.
- Once [A] has been "decomposed," multiple right-hand-side vectors can be evaluated in an efficient manner.
Overview of LU Decomposition

 The system [A]{X} = {B} can be expressed as an equivalent upper triangular system [U]{X} = {D}.
 Assume that there is a lower triangular matrix [L] with 1's on the diagonal.
 If you multiply [L] with the upper triangular system, you recover the original system: [L][U] = [A] and [L]{D} = {B}.
Chapra and Canale (2010)

1. LU decomposition step. [A] is factored or "decomposed" into lower [L] and upper [U] triangular matrices.
2. Substitution step. [L] and [U] are used to determine a solution {X} for a right-hand side {B}:
- [L]{D} = {B} is used to generate an intermediate vector {D} by forward substitution.
- Then, the result is substituted into [U]{X} = {D}, which is solved by back substitution for {X}.
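The payoff of this two-step structure is that the decomposition is done once and reused for many right-hand sides. A minimal sketch in Python (not part of the lecture; it assumes NumPy and SciPy are available and borrows the 3×3 system solved later in these slides):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Example coefficient matrix (the 3x3 system used later in this lecture)
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])

lu, piv = lu_factor(A)          # 1. decomposition step: done only once

b1 = np.array([7.85, -19.3, 71.4])
b2 = np.array([1.0, 0.0, 0.0])  # a different right-hand side, e.g. a unit vector

x1 = lu_solve((lu, piv), b1)    # 2. substitution step: cheap, repeated per {B}
x2 = lu_solve((lu, piv), b2)
```

Each extra right-hand side costs only a forward and a back substitution, not a full elimination.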


LU Decomposition of Gauss Elimination
 Gauss elimination can be used to decompose [A] into [L] and [U].
 [U] is a direct product of the forward elimination: the upper triangular format.
 The matrix [L] is also produced during the elimination step:
- The first step in Gauss elimination is to multiply row 1 by the factor f21 = a21/a11 and subtract the result from the second row.
- Then, multiply row 1 by f31 = a31/a11 and subtract the result from the third row.
- Then multiply the modified second row by f32 = a'32/a'22 and subtract the result from the third row.
 We could save the f's and manipulate {B} later.
 [A] can therefore be written as [A] = [L][U]; storing the f's in the zero spaces below the diagonal gives an efficient storage of the LU decomposition of [A].
Example (LU Decomposition with Gauss Elimination)
 Derive an LU decomposition based on the Gauss elimination for:

SOLUTION:
1. Multiply the first equation by 0.1/3 and subtract the result from the second equation, giving the modified second row: 7.00333x2 − 0.293333x3
2. Multiply the first equation by 0.3/3 and subtract the result from the third equation, giving the modified third row: −0.190000x2 + 10.0200x3
3. Multiply the modified second equation by −0.190000/7.00333 and subtract the result from the modified third equation, giving: 10.0120x3
 After forward elimination:
And for [L], we can calculate the factors:
f21 = 0.1/3 = 0.0333333
f31 = 0.3/3 = 0.1
f32 = −0.190000/7.00333 = −0.0271300
 Finally,

You can verify the result by performing the multiplication [L][U] and checking that it matches the original [A].
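The elimination above can be sketched as code. This assumes the system is the 3×3 matrix whose factors f21, f31, f32 match the values on this slide (a sketch, not the lecture's own listing):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU decomposition via Gauss elimination (no pivoting)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i, k] / U[k, k]      # elimination factor, saved in [L]
            L[i, k] = f
            U[i, k:] -= f * U[k, k:]   # subtract f times the pivot row
    return L, U

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
L, U = lu_decompose(A)
# f21 = 0.0333333, f31 = 0.1, f32 = -0.0271300, and [L][U] recovers [A]
assert np.allclose(L @ U, A)
```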
 Now we can perform the substitution step:
- In the forward-elimination phase of conventional Gauss elimination, the right-hand side would be transformed along with [A]; with LU decomposition we do not change the b's during elimination.
- The forward-substitution phase solves [L]{D} = {B}; multiplying out the left-hand side gives:
d1 = 7.85
0.0333333·d1 + d2 = −19.3  →  d2 = −19.5617
0.1·d1 − 0.0271300·d2 + d3 = 71.4  →  d3 = 70.0843
 Then, with {D} known, the x's can be solved from [U]{X} = {D} by back substitution:
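The two substitution phases can be sketched in Python, using the 6-digit [L] and [U] factors from this example (an illustration, not the lecture's own code):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve [L]{D} = {B} for a lower triangular L with 1's on the diagonal."""
    d = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        d[i] = b[i] - L[i, :i] @ d[:i]
    return d

def back_substitution(U, d):
    """Solve [U]{X} = {D} for an upper triangular U."""
    n = len(d)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[1.0,       0.0,      0.0],
              [0.0333333, 1.0,      0.0],
              [0.1,      -0.0271300, 1.0]])
U = np.array([[3.0, -0.1,     -0.2],
              [0.0,  7.00333, -0.293333],
              [0.0,  0.0,      10.0120]])
b = np.array([7.85, -19.3, 71.4])

d = forward_substitution(L, b)  # d ≈ [7.85, -19.5617, 70.0843]
x = back_substitution(U, d)     # x ≈ [3, -2.5, 7.00003] (roundoff from 6-digit factors)
```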


Crout Decomposition
 Doolittle decomposition: the [L] matrix has 1's on the diagonal.
 Crout decomposition: the [U] matrix has 1's on the diagonal.
 This approach generates [U] and [L] by sweeping through the matrix by columns and rows.
 There is no need to store the 1's on the diagonal of [U] or the 0's of [L] or [U], because they are known a priori.
 The values of [U] can be stored in the zero space of [L]: as each element of [L] and [U] is computed, it can be substituted for the corresponding element of [A].
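The column-and-row sweep can be sketched as follows (a sketch assuming NumPy; the example matrix reuses the 3×3 system from earlier, and the factors are kept in separate arrays rather than overwriting [A] for clarity):

```python
import numpy as np

def crout(A):
    """Crout decomposition: [U] carries the 1's on its diagonal."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for k in range(n):
        # sweep column k of [L]
        for i in range(k, n):
            L[i, k] = A[i, k] - L[i, :k] @ U[:k, k]
        # sweep row k of [U] (its diagonal entry is already 1)
        for j in range(k + 1, n):
            U[k, j] = (A[k, j] - L[k, :k] @ U[:k, j]) / L[k, k]
    return L, U

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
L, U = crout(A)
assert np.allclose(L @ U, A)
```

In a storage-efficient implementation, the off-diagonal entries of [U] would be written into the zero space of [L] inside a single working copy of [A].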
Matrix Inverse
 Calculation of the inverse proceeds in a column-by-column fashion by generating solutions with unit vectors as the right-hand-side constants:
- With {B} = {1 0 0}T, the resulting solution will be the first column of the matrix inverse.
- With {B} = {0 1 0}T, the resulting solution will be the second column of the matrix inverse.
Example:
 Employ LU decomposition to determine the matrix
inverse:
 Solution:
 Initial step: apply the forward-substitution procedure [L]{D} = {B} with a unit vector (1 in the first row) as the right-hand-side vector; we can solve for the d's.
 Second step: substitute the d's into [U]{X} = {D} to find the x's.
 Third step: the calculated x values form the first column of the inverse matrix.
 Now we need two further calculations to determine the values in the second and the third columns…
 The result:

 You can verify your result by checking:
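The column-by-column procedure can be sketched in Python (assuming SciPy and, as before, that the example is the same 3×3 system; this is an illustration, not the lecture's worked numbers):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
lu, piv = lu_factor(A)                     # decompose [A] once

n = A.shape[0]
A_inv = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = 1.0                             # unit vector with 1 in row j
    A_inv[:, j] = lu_solve((lu, piv), e)   # column j of the inverse

assert np.allclose(A @ A_inv, np.eye(n))   # verify [A][A]^-1 = [I]
```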


Error Analysis and System Condition
 The inverse provides information on whether a system is ill-conditioned.
 How to check:
 1. Scale the matrix of coefficients [A] so that the largest element in each row is 1, then invert the scaled matrix. If elements of the inverse [A]−1 are several orders of magnitude greater than one, it is likely that the system is ill-conditioned.
 2. Multiply the inverse by the original coefficient matrix and assess whether the result is close to the identity matrix. If not, it indicates ill-conditioning.
 3. Invert the inverted matrix and assess whether the result is sufficiently close to the original coefficient matrix. If not, it again indicates that the system is ill-conditioned.
Matrix Condition Number
 Definition of norm: a real-valued function that provides a measure of the size or "length" of multi-component mathematical entities such as vectors and matrices.
 For a vector in three-dimensional Euclidean space, F = [a b c], the length of this vector is ‖F‖ = (a² + b² + c²)^1/2
Chapra and Canale (2010)


 The size of a vector: the Euclidean norm, ‖X‖e = (Σ xi²)^1/2
 The size of a matrix: the Frobenius norm, ‖A‖e = (Σi Σj aij²)^1/2
Uniform matrix norm
 Alternative norm, the uniform matrix norm ‖A‖∞: the sum of the absolute values of the elements is computed for each row, and the largest of these row sums is taken as the norm.
Matrix Condition Number
 The matrix condition number can be calculated as Cond[A] = ‖A‖ · ‖A⁻¹‖
 This number will be greater than or equal to 1.
 Matrices with condition numbers close to 1 are said to be well-conditioned.

Example:
 Use the row-sum norm to estimate the matrix condition number for the 3×3 Hilbert matrix (notoriously ill-conditioned).
Solution:
 First step: normalize the matrix so that the maximum element in each row is 1, then compute the absolute row sums:
Row 1 sum: 1 + 1/2 + 1/3 = 1.8333
Row 2 sum: 1 + 2/3 + 1/2 = 2.1667
Row 3 sum: 1 + 3/4 + 3/5 = 2.35 (uniform matrix norm)
‖A‖∞ = 2.35
 Now we need to find the inverse matrix; its absolute row sums are:
Row 1 sum: 9 + 18 + 10 = 37
Row 2 sum: 36 + 96 + 60 = 192 (uniform matrix norm, ‖A⁻¹‖∞)
Row 3 sum: 30 + 90 + 60 = 180
Cond[A] = 2.35 × 192 = 451.2 → ill-conditioned
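The same estimate can be reproduced numerically (a sketch assuming NumPy; the row-sum norm is implemented directly rather than via a library call):

```python
import numpy as np

H = np.array([[1, 1/2, 1/3],
              [1/2, 1/3, 1/4],
              [1/3, 1/4, 1/5]])                 # 3x3 Hilbert matrix
A = H / np.abs(H).max(axis=1, keepdims=True)    # scale: max element per row = 1

def row_sum_norm(M):
    """Uniform (row-sum) matrix norm: largest absolute row sum."""
    return np.abs(M).sum(axis=1).max()

norm_A = row_sum_norm(A)                        # 2.35
norm_Ainv = row_sum_norm(np.linalg.inv(A))      # 192
cond = norm_A * norm_Ainv                       # 451.2 -> ill-conditioned
```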
Special Matrices
 Banded matrix: a square matrix that has all elements
equal to zero, with the exception of a band centered on
the main diagonal

 A band matrix with k1 = k2 = 0 is a diagonal matrix


 A band matrix with k1 = k2 = 1 is a tridiagonal matrix

BW: band width


HBW: half band
width
 Gauss elimination or conventional LU decomposition is inefficient for banded systems: if pivoting is unnecessary, none of the elements outside the band ever changes from its original value of zero, yet time and storage are still wasted on them.
Tridiagonal Systems
 Band width: 3
Thomas algorithm:
1. Decomposition: ek = ek/fk−1,  fk = fk − ek·gk−1
2. Forward substitution: rk = rk − ek·rk−1
3. Back substitution: xk = (rk − gk·xk+1)/fk
Example:
 Solve the tridiagonal system with the Thomas algorithm

 Solution:

 Decomposition:
e2 = −1/2.04 = −0.49
f2 = 2.04 − (−0.49)(−1) = 1.550
e3 = −1/1.550 = −0.645
f3 = 2.04 − (−0.645)(−1) = 1.395
g2 = −1, g3 = −1 (unchanged)
Transformed system:
 Forward substitution:
r2 = r2 − e2·r1 = 0.8 − (−0.49)(40.8) = 20.8 …

Right-hand side:
 Finally, the T values can be found via back substitution with the [U] matrix and the results above:
T4 = 210.996/1.323 = 159.48
…
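The three steps of the Thomas algorithm can be sketched in Python and applied to this example (the diagonals e, f, g and right-hand side r below are taken from the example system; the code itself is an illustration):

```python
import numpy as np

def thomas(e, f, g, r):
    """Thomas algorithm for a tridiagonal system:
    e = subdiagonal, f = main diagonal, g = superdiagonal, r = right-hand side."""
    n = len(f)
    e, f, r = e.copy(), f.copy(), r.copy()
    # 1. Decomposition
    for k in range(1, n):
        e[k] /= f[k - 1]
        f[k] -= e[k] * g[k - 1]
    # 2. Forward substitution
    for k in range(1, n):
        r[k] -= e[k] * r[k - 1]
    # 3. Back substitution
    x = np.zeros(n)
    x[-1] = r[-1] / f[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

e = np.array([0.0, -1.0, -1.0, -1.0])    # subdiagonal (e1 unused)
f = np.array([2.04, 2.04, 2.04, 2.04])   # main diagonal
g = np.array([-1.0, -1.0, -1.0, 0.0])    # superdiagonal (g4 unused)
r = np.array([40.8, 0.8, 0.8, 200.8])

T = thomas(e, f, g, r)   # T4 = 159.48, matching the hand calculation
```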
Cholesky Decomposition
 For symmetric matrices.
 Decomposing as [A] = [L][L]T
Check the example in the reference book!
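A minimal sketch of the decomposition (the example matrix here is an assumption chosen to be symmetric and positive definite, not necessarily the book's):

```python
import numpy as np

def cholesky(A):
    """Cholesky decomposition [A] = [L][L]^T for a symmetric,
    positive-definite matrix."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = L[i, :j] @ L[j, :j]
            if i == j:
                L[i, j] = np.sqrt(A[i, i] - s)      # diagonal entry
            else:
                L[i, j] = (A[i, j] - s) / L[j, j]   # below-diagonal entry
    return L

A = np.array([[6.0,  15.0,  55.0],
              [15.0, 55.0, 225.0],
              [55.0, 225.0, 979.0]])   # symmetric, positive definite (assumed example)
L = cholesky(A)
assert np.allclose(L @ L.T, A)
```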


Gauss-Seidel
 Most commonly used iterative method!
 If we have a 3×3 set of equations and the diagonal elements are non-zero, each equation can be solved for the unknown on its diagonal.
 Solve by choosing guesses for the x's; SETTING THE INITIAL GUESS TO ZERO is a simple way.
 Set zero as the initial guess.
 Substitute the zeros into the first equation to calculate the new value x1 = b1/a11.
 Substitute the new x1 to calculate a new x2 (x3 is still zero) in the second equation.
 Use the new x1 and x2 to calculate x3 in the third equation.
 REPEAT THE ENTIRE PROCEDURE UNTIL the approximation errors fall below an acceptable tolerance.

Example:
 Perform Gauss-Seidel for:
Solution (first iteration):
 First step: assume x2 and x3 are zero.
x1 = 7.85/3 = 2.61667 (first equation)
 Second step: find x2 with x3 = 0 and x1 = 2.61667.
x2 = −2.794524 (from the second equation)
 Third step: find x3 with x2 = −2.794524 and x1 = 2.61667.
x3 = 7.005610 (from the third equation)
 Second iteration:
 Find x1 with x2 = −2.794524 and x3 = 7.005610 (from the first equation).
 Find x2 and x3 as previously described.
 CHECK the approximation errors and run further iterations if needed.
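The iteration can be sketched in Python for this system (the coefficient matrix below is inferred from the values worked out above; the first pass reproduces 2.61667, −2.794524, 7.005610):

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])

x = np.zeros(3)                      # initial guess: zeros
for _ in range(20):
    x_old = x.copy()
    for i in range(3):
        # use the newest available x's on the right-hand side
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    if np.max(np.abs((x - x_old) / x)) < 1e-6:   # approximate relative error
        break
# x converges to [3, -2.5, 7]
```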
Convergence Criterion for Gauss-Seidel
 In the case of two simultaneous equations, the Gauss-Seidel algorithm can be written as two functions u(x1, x2) and v(x1, x2).
 The partial derivatives of these equations can be inserted into the convergence criterion for two-variable iteration: the absolute values of the slopes must be less than 1 to ensure convergence.
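For a general system, this criterion corresponds to diagonal dominance: convergence is guaranteed when each diagonal element exceeds the sum of the absolute values of the other elements in its row. A minimal check (an added sketch, applied here to the example system from the previous slides):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij|, j != i, for every row."""
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
assert is_diagonally_dominant(A)   # Gauss-Seidel is guaranteed to converge here
```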
NEXT WEEK
 QUIZ 4 (from this chapter, next Friday, 08.04.2022)
 ASSIGNMENT 4 (from this chapter, uploaded on Monday, 04.04.2022)
