
Numerical Methods for Eng [ENGR 391]

[Lyes KADEM 2007]

III. LU decomposition Method


Gauss elimination becomes inefficient when solving several systems that share the same coefficient matrix [A] but have different right-hand-side vectors {b}.
LU decomposition separates the time-consuming elimination of [A] from the manipulation of {b}.
Hence, the decomposed [A] can be reused with several {b} vectors in an efficient manner.
LU decomposition is based on the fact that any square matrix [A] can be written as a product of
two matrices as:
[A]=[L][U]
where [L] is a lower triangular matrix and [U] is an upper triangular matrix.
III.1. Crout's method
To illustrate Crout's method for LU decomposition, let us start with an example; we consider
the 3×3 matrix:

\[
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
=
\begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix}
\begin{bmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{bmatrix}
\]
Hence

\[
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
=
\begin{bmatrix} l_{11} & l_{11}u_{12} & l_{11}u_{13} \\ l_{21} & l_{21}u_{12} + l_{22} & l_{21}u_{13} + l_{22}u_{23} \\ l_{31} & l_{31}u_{12} + l_{32} & l_{31}u_{13} + l_{32}u_{23} + l_{33} \end{bmatrix}
\]

We can find, therefore, the elements of the matrices [L] and [U] by equating the two above
matrices:

\[
l_{11} = a_{11}; \quad l_{21} = a_{21}; \quad l_{31} = a_{31}
\]
\[
l_{11}u_{12} = a_{12}, \text{ hence } u_{12} = \frac{a_{12}}{l_{11}} = \frac{a_{12}}{a_{11}}
\]
\[
l_{21}u_{12} + l_{22} = a_{22}, \text{ hence } l_{22} = a_{22} - l_{21}u_{12}
\]
\[
l_{31}u_{12} + l_{32} = a_{32}, \text{ hence } l_{32} = a_{32} - l_{31}u_{12}
\]
\[
l_{11}u_{13} = a_{13}, \text{ hence } u_{13} = \frac{a_{13}}{l_{11}} = \frac{a_{13}}{a_{11}}
\]
\[
l_{21}u_{13} + l_{22}u_{23} = a_{23}, \text{ hence } u_{23} = \frac{a_{23} - l_{21}u_{13}}{l_{22}}
\]
\[
l_{31}u_{13} + l_{32}u_{23} + l_{33} = a_{33}, \text{ hence } l_{33} = a_{33} - l_{31}u_{13} - l_{32}u_{23}
\]


Oooooooooouffffffffff !!!.
For a general n×n matrix, you have to apply the following expressions to find the LU
decomposition of a matrix [A]:

\[
l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj} ; \quad i \ge j; \quad i = 1, 2, \dots, n
\]
\[
u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} l_{ik}u_{kj}}{l_{ii}} ; \quad i < j; \quad j = 2, 3, \dots, n
\]
and \( u_{ii} = 1 \); i = 1, 2, ..., n
IMPORTANT NOTE
As for the 3×3 matrix (see above), it is better to follow a certain order when computing the
terms of the [L] and [U] matrices. This order is: l_{i1}, u_{1j}; l_{i2}, u_{2j}; ...; l_{i,n-1}, u_{n-1,j}; l_{nn}.
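To make the recipe concrete, here is a minimal Python sketch of Crout's decomposition following the formulas above. The function and variable names are illustrative assumptions, not the course's prescribed code, and no pivoting is performed (a zero pivot l_jj will fail):

import numpy as np

def crout_decomposition(A):
    """Decompose A into L (lower triangular) and U (unit upper triangular) with A = L @ U."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)                      # u_ii = 1 by construction
    for j in range(n):
        # Column j of L: l_ij = a_ij - sum_{k<j} l_ik * u_kj   (i >= j)
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Row j of U: u_ji = (a_ji - sum_{k<j} l_jk * u_ki) / l_jj   (i > j)
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    return L, U

The loop order (column j of [L], then row j of [U]) matches the ordering recommended in the note above, and the result can be checked with np.allclose(L @ U, A).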

Example
Find the LU decomposition of the following matrix using Crout's method:
\[
[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
= \begin{bmatrix} 2 & 1 & 1 \\ 4 & 3 & 1 \\ 3 & 2 & 2 \end{bmatrix}
\]

III.2. Solution of equations


Now, to solve our system of linear equations, we can express our initial system
\[
[A]\{x\} = \{b\}
\]
in the following form:
\[
[A]\{x\} = [L][U]\{x\} = \{b\}
\]
To find the solution {x}, we first define a vector {z}:
\[
\{z\} = [U]\{x\}
\]
Our initial system then becomes:

\[
[L]\{z\} = \{b\}
\]


Since [L] is a lower triangular matrix, the {z} can be computed by forward substitution, starting with z1 and proceeding to zn. Then the values
of {x} can be found using the equation:
\[
\{z\} = [U]\{x\}
\]
Since [U] is an upper triangular matrix, it is possible to compute {x} using a back-substitution
process, starting with xn and ending with x1. [You will understand this better with an example.]
The general formulas for solving a system of linear equations using LU decomposition are:
\[
z_1 = \frac{b_1}{l_{11}}
\]
\[
z_i = \frac{b_i - \sum_{k=1}^{i-1} l_{ik} z_k}{l_{ii}} ; \quad i = 2, 3, \dots, n
\]
and
\[
x_n = z_n
\]
\[
x_i = z_i - \sum_{k=i+1}^{n} u_{ik} x_k ; \quad i = n-1, n-2, \dots, 2, 1
\]
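Here is a minimal Python sketch of this two-stage solve, reusing the crout_decomposition sketch given earlier (names are illustrative, not a prescribed implementation):

import numpy as np

def lu_solve(L, U, b):
    """Solve L U x = b by forward substitution (L z = b) then back substitution (U x = z)."""
    n = len(b)
    z = np.zeros(n)
    x = np.zeros(n)
    # Forward substitution: z_1 = b_1 / l_11, z_i = (b_i - sum l_ik z_k) / l_ii
    for i in range(n):
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    # Back substitution: x_n = z_n, x_i = z_i - sum u_ik x_k   (u_ii = 1 for Crout)
    for i in range(n - 1, -1, -1):
        x[i] = z[i] - U[i, i+1:] @ x[i+1:]
    return x

For example, lu_solve(*crout_decomposition(A), b) solves [A]{x} = {b}, and the same L and U can be reused for a different {b} without repeating the decomposition.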

Example
Solve the following equations using the LU decomposition:

\[
2x_1 + x_2 + x_3 = 4
\]
\[
4x_1 + 3x_2 + x_3 = 6
\]
\[
3x_1 + 2x_2 + 2x_3 = 15
\]
Note on the storage of [A], [L], and [U]
1- In practice, the matrices [L] and [U] do not need to be stored separately. By omitting
the zeros in [L] and [U] and the ones on the diagonal of [U], it is possible to store the
elements of [L] and [U] in the same matrix.
2- Note also that, in the general formula for LU decomposition, once an element of the
matrix [A] is used, it is not needed in the subsequent computations. Hence, the
elements of the matrix generated in point (1) above can be stored in [A] (see the sketch below).
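As an illustration of this packed storage, the sketch below (an assumed layout, not the course's prescribed code) overwrites a float array [A] with [L] on and below the diagonal and with the off-diagonal part of [U] above it:

import numpy as np

def crout_inplace(A):
    """Overwrite float array A so A[i, j] holds l_ij for i >= j and u_ij for i < j (u_ii = 1 implied)."""
    n = A.shape[0]
    for j in range(n):
        for i in range(j, n):                      # column j of L
            A[i, j] -= A[i, :j] @ A[:j, j]
        for i in range(j + 1, n):                  # row j of U (diagonal ones not stored)
            A[j, i] = (A[j, i] - A[j, :j] @ A[:j, i]) / A[j, j]
    return A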

III.3. Cholesky's method for symmetric matrices


In many engineering applications the matrices involved are symmetric and positive definite. It
is then better to use Cholesky's method.
In Cholesky's method, our matrix [A] is decomposed into:
\[
[A] = [U]^{T}[U]
\]
where [U] is an upper triangular matrix.
The elements of [U] are given by:

\[
u_{11} = \left( a_{11} \right)^{1/2}
\]
\[
u_{1j} = \frac{a_{1j}}{u_{11}} ; \quad j = 2, 3, \dots, n
\]
\[
u_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} u_{ki}^{2} \right)^{1/2} ; \quad i = 2, 3, \dots, n
\]
\[
u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} u_{ki}u_{kj}}{u_{ii}} ; \quad i = 2, 3, \dots, n \text{ and } j = i+1, i+2, \dots, n
\]
\[
u_{ij} = 0 ; \quad i > j
\]
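A minimal Python sketch of these formulas (illustrative names; it assumes [A] is symmetric positive definite, so the square roots are real and the diagonal terms are nonzero):

import numpy as np

def cholesky_decomposition(A):
    """Return upper triangular U such that A = U.T @ U, following the formulas above."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U = np.zeros((n, n))
    for i in range(n):
        # Diagonal term: u_ii = (a_ii - sum_{k<i} u_ki^2)^(1/2)
        U[i, i] = np.sqrt(A[i, i] - U[:i, i] @ U[:i, i])
        # Off-diagonal terms of row i: u_ij = (a_ij - sum_{k<i} u_ki u_kj) / u_ii
        for j in range(i + 1, n):
            U[i, j] = (A[i, j] - U[:i, i] @ U[:i, j]) / U[i, i]
    return U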

III.4. Inverse of a symmetric matrix


If a matrix [A] is square and nonsingular, there is another matrix [A]^{-1}, called the inverse of [A], such that:
\[
[A][A]^{-1} = [A]^{-1}[A] = [I] \quad \text{(identity matrix)}
\]
To compute the inverse matrix, the first column of [A]^{-1} is obtained by solving (for a 3×3
matrix):
\[
[A]\{x\} = \{b\} = \begin{Bmatrix} 1 \\ 0 \\ 0 \end{Bmatrix};
\]
the second column by solving:
\[
[A]\{x\} = \{b\} = \begin{Bmatrix} 0 \\ 1 \\ 0 \end{Bmatrix};
\]
and the third column by solving:
\[
[A]\{x\} = \{b\} = \begin{Bmatrix} 0 \\ 0 \\ 1 \end{Bmatrix}
\]

The best way to implement such a calculation is to use LU decomposition.
In engineering, the inverse matrix is of particular interest, since its elements represent the
response of a single part of the system to a unit stimulus of any other part of the system.
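A minimal sketch of this column-by-column approach, reusing the crout_decomposition and lu_solve sketches above (the factorization is performed once and reused for every unit vector):

import numpy as np

def inverse_via_lu(A):
    """Build A^-1 one column at a time by solving A x = e_i with a single LU factorization."""
    n = A.shape[0]
    L, U = crout_decomposition(A)          # factor once
    A_inv = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0                          # unit stimulus in position i
        A_inv[:, i] = lu_solve(L, U, e)     # response = i-th column of the inverse
    return A_inv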
III.5. Matrix condition number


III.5.1. Vector and matrix norms


A norm is a real-valued function that provides a measure of the size or length of multicomponent
mathematical entities:

For a vector {x}:
\[
\{x\} = \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix}
\]
the Euclidean norm of this vector is defined as:
\[
\| x \| = \left( x_1^{2} + x_2^{2} + \dots + x_n^{2} \right)^{1/2}
\]

In general, the L_p norm of a vector {x} is defined as:
\[
\| x \|_{p} = \left( \sum_{i=1}^{n} \left| x_i \right|^{p} \right)^{1/p}
\]

Note
If the value of p is increased to infinity in the above expression, the value of the L_∞ norm tends
to the magnitude of the largest component of {x}:
\[
\| x \|_{\infty} = \max_{1 \le i \le n} \left| x_i \right|
\]

For a matrix, the first and infinity norms are defined as:
\[
\| A \|_{1} = \max_{1 \le j \le n} \sum_{i=1}^{n} \left| a_{ij} \right| = \text{maximum column sum}
\]
\[
\| A \|_{\infty} = \max_{1 \le i \le n} \sum_{j=1}^{n} \left| a_{ij} \right| = \text{maximum row sum}
\]
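A short Python sketch of these definitions (function names are illustrative; np.linalg.norm provides the same quantities directly):

import numpy as np

def vector_p_norm(x, p):
    """L_p norm of a vector: (sum |x_i|^p)^(1/p)."""
    return (np.sum(np.abs(x) ** p)) ** (1.0 / p)

def matrix_norm_1(A):
    """Maximum column sum of absolute values."""
    return np.max(np.sum(np.abs(A), axis=0))

def matrix_norm_inf(A):
    """Maximum row sum of absolute values."""
    return np.max(np.sum(np.abs(A), axis=1))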

III.5.2. Matrix condition number


The matrix condition number is defined as:


\[
\mathrm{Cond}[A] = \| A \| \cdot \| A^{-1} \|
\]
For a matrix [A], we have that: \( \mathrm{Cond}[A] \ge 1 \)


and
\[
\frac{\| \Delta x \|}{\| x \|} \le \mathrm{Cond}[A] \, \frac{\| \Delta A \|}{\| A \|}
\]
Therefore, the relative error in the solution can be as large as the relative error in the norm of [A]
multiplied by the condition number.
If the precision on [A] is t digits (10^{-t}) and Cond[A] = 10^{c}, the solution {x} may be valid to only t - c
digits (10^{c-t}).
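A small sketch of this rule of thumb using the infinity norm (matrix_norm_inf and inverse_via_lu are the earlier illustrative sketches; np.linalg.inv would serve equally well):

import numpy as np

def condition_number_inf(A):
    """Cond[A] = ||A||_inf * ||A^-1||_inf."""
    return matrix_norm_inf(A) * matrix_norm_inf(inverse_via_lu(A))

def expected_valid_digits(A, t):
    """If [A] is known to t digits, roughly t - log10(Cond[A]) digits of x can be trusted."""
    c = np.log10(condition_number_inf(A))
    return t - c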
III.6. Jacobi iteration Method
The Jacobi method is an iterative method to solve systems of linear algebraic equations.
Consider the following system:

\[
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1
\]
\[
\vdots
\]
\[
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n = b_n
\]
This system can be written in the following form:
\[
x_1 = \frac{1}{a_{11}} \left( b_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n \right)
\]
\[
\vdots
\]
\[
x_n = \frac{1}{a_{nn}} \left( b_n - a_{n1}x_1 - a_{n2}x_2 - \dots - a_{n,n-1}x_{n-1} \right)
\]

The general formulation is:

\[
x_i = \frac{1}{a_{ii}} \left( b_i - \sum_{j=1, j \ne i}^{n} a_{ij} x_j \right) ; \quad i = 1, 2, \dots, n
\]

Here we start with an initial guess for x1, x2, ..., xn and compute the new values for the next
iteration. If no good initial guess is available, we can assume each component to be zero.
We generate the solution at the next iteration using the following expression:

\[
x_i^{k+1} = \frac{1}{a_{ii}} \left( b_i - \sum_{j=1, j \ne i}^{n} a_{ij} x_j^{k} \right) ; \quad i = 1, 2, \dots, n; \quad k = 1, 2, \dots \text{ until convergence}
\]


The calculation must be stopped if:

\[
\frac{\left| x_i^{k+1} - x_i^{k} \right|}{\left| x_i^{k} \right|} \le \varepsilon
\]
where \( \varepsilon \) is the desired precision.

It is possible to show that a sufficient condition for the convergence of the Jacobi method is:

\[
\left| a_{ii} \right| > \sum_{j=1, j \ne i}^{n} \left| a_{ij} \right|
\]
i.e., the system is diagonally dominant.
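A minimal Python sketch of the Jacobi iteration with this stopping test (the function name, tolerance, and iteration cap are illustrative assumptions):

import numpy as np

def jacobi(A, b, x0=None, eps=1e-8, max_iter=500):
    """Jacobi iteration: every new component uses only values from the previous iteration."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]          # sum over j != i of a_ij * x_j (old values)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.all(np.abs(x_new - x) <= eps * np.abs(x)):   # relative stopping test from above
            return x_new
        x = x_new
    return x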

III.7. Gauss-Seidel iteration Method


It can be seen that, in the Jacobi iteration method, all the new values are computed using the values
from the previous iteration. This implies that both the present and the previous set of values have to
be stored. The Gauss-Seidel method improves the storage requirement as well as the
convergence.
In the Gauss-Seidel method, the values x_1^{k+1}, x_2^{k+1}, ..., x_i^{k+1} already computed in the current iteration, as
well as x_{i+2}^{k}, x_{i+3}^{k}, ..., x_n^{k}, are used in finding the value x_{i+1}^{k+1}. This implies that the
most recent approximations are always used during the computation. The general expression is:
\[
x_i^{k+1} = \frac{1}{a_{ii}} \left( b_i - \underbrace{\sum_{j=1}^{i-1} a_{ij} x_j^{k+1}}_{\text{NEW}} - \underbrace{\sum_{j=i+1}^{n} a_{ij} x_j^{k}}_{\text{OLD}} \right) ; \quad i = 1, 2, \dots, n; \quad k = 1, 2, 3, \dots
\]

Note
- The Gauss-Seidel method will converge to the correct solution irrespective of the initial
estimate if the system of equations is diagonally dominant. But, in many cases, the solution
will converge even if the system is only weakly diagonally dominant.

Example
Find the solution of the following equations using the Gauss-Seidel iteration method:

\[
5x_1 + x_2 + 2x_3 = 1
\]
\[
2x_1 + 6x_2 + 3x_3 = 2
\]
\[
2x_1 + x_2 + 7x_3 = 32
\]
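A minimal Python sketch of the Gauss-Seidel iteration that can be tried on this example (the array x is updated in place, so each component immediately uses the most recent values; names and tolerance are illustrative):

import numpy as np

def gauss_seidel(A, b, x0=None, eps=1e-8, max_iter=500):
    """Gauss-Seidel iteration: updates use the newest available components of x."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]   # NEW values for j < i, OLD for j > i
            x[i] = (b[i] - s) / A[i, i]
        if np.all(np.abs(x - x_old) <= eps * np.abs(x_old)):
            return x
    return x

# Example usage on the system above (as reconstructed here):
# A = np.array([[5.0, 1.0, 2.0], [2.0, 6.0, 3.0], [2.0, 1.0, 7.0]])
# b = np.array([1.0, 2.0, 32.0])
# print(gauss_seidel(A, b))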

III.7.1. Improvement of convergence using relaxation


Relaxation is used to enhance convergence; the new value is written in the form:


\[
x_i^{NEW} = \lambda x_i^{NEW} + (1 - \lambda) x_i^{OLD}
\]
where λ is the relaxation factor, and usually 0 < λ < 2.
If 0 < λ < 1, we are using under-relaxation, used to make a non-convergent system converge
or to damp the oscillations.
If 1 < λ < 2, we are using over-relaxation, to accelerate the convergence of an already
convergent system.

The choice of λ depends on the problem to be solved.
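A sketch of how the relaxation step can be folded into the Gauss-Seidel sweep above (lam plays the role of the relaxation factor; the only change from plain Gauss-Seidel is the weighted update):

import numpy as np

def gauss_seidel_relaxed(A, b, lam=1.0, x0=None, eps=1e-8, max_iter=500):
    """Gauss-Seidel with relaxation: x_i <- lam * x_i_new + (1 - lam) * x_i_old."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_new_i = (b[i] - s) / A[i, i]             # ordinary Gauss-Seidel value
            x[i] = lam * x_new_i + (1 - lam) * x[i]    # relaxed update
        if np.all(np.abs(x - x_old) <= eps * np.abs(x_old)):
            return x
    return x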

III.8. Choice of the method


1- If the equations are to be solved for different right-hand-side vectors, a direct method, like
LU decomposition, is preferred.
2- The Gauss-Seidel method will give an accurate solution even when the number of
equations is several thousand (if the system is diagonally dominant). It is usually twice
as fast as the Jacobi method.
