
Chapter 7

Numerical Methods for the Solution of Systems of Equations

Introduction
 This chapter covers techniques for solving linear and nonlinear systems of equations.
 Two important problems from linear algebra:
– The linear systems problem: given an n × n matrix A and a vector b, find x such that Ax = b.

– The nonlinear systems problem: given a function f, find x such that f(x) = 0.

7.1 Linear Algebra Review

Theorem 7.1 and Corollary 7.1
 Singular vs. nonsingular

Tridiagonal Matrices
 Upper triangular: a_ij = 0 whenever i > j.

 Lower triangular: a_ij = 0 whenever i < j.

 Symmetric matrices, positive definite matrices, etc.


 The concepts of independence/dependence, spanning, basis, vector space/subspace, dimension, and orthogonal/orthonormal sets should be reviewed.

7.2 Linear Systems and Gaussian
Elimination
 In Section 2.6, the linear system can be written as a
single augmented matrix:

 Elementary row operations to solve linear system problems:

 Row equivalent: if one matrix can be transformed into another using only elementary row operations, the two matrices are said to be row equivalent.
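As a rough sketch of how these row operations solve a system (illustrative only; the function name is mine, and partial pivoting is deliberately omitted here, as in the naive algorithm discussed later):

```python
def gauss_solve(A, b):
    """Solve Ax = b by naive Gaussian elimination on the augmented
    matrix [A | b], followed by back substitution.  No pivoting."""
    n = len(A)
    # Build the augmented matrix as lists of floats.
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]
    # Forward elimination: zero out entries below each pivot using
    # the elementary row operation  R_i <- R_i - m * R_k.
    for k in range(n):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]   # multiplier (assumes a nonzero pivot)
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # Back substitution on the resulting upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```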
Theorem 7.2

Example 7.1

Example 7.1 (cont.)

Partial Pivoting

The Problem of Naive Gaussian
Elimination
 The problem with naive Gaussian elimination is the potential for division by a zero (or very small) pivot.
 For example: consider the following system

 The exact solution:

 What happens when we solve this system using the naive algorithm and the pivoting algorithm?
Discussion
 Using the naive algorithm:

incorrect
 Using the pivoting algorithm:

correct
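The example system itself is not reproduced above, so here is a standard small-pivot system of the same flavor (my own choice, not the book's example) showing the failure of the naive algorithm and the fix from partial pivoting, in double precision:

```python
def solve2(a11, a12, b1, a21, a22, b2, pivot=False):
    """Solve a 2x2 system by elimination; optionally partial-pivot
    (swap rows so the largest first-column entry becomes the pivot)."""
    if pivot and abs(a21) > abs(a11):
        (a11, a12, b1), (a21, a22, b2) = (a21, a22, b2), (a11, a12, b1)
    m = a21 / a11               # elimination multiplier
    a22 -= m * a12
    b2 -= m * b1
    y = b2 / a22
    x = (b1 - a12 * y) / a11
    return x, y

eps = 1.0e-20
# System:  eps*x + y = 1,  x + y = 2;  exact solution is x ~ 1, y ~ 1.
x_naive, y_naive = solve2(eps, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=False)
x_piv,   y_piv   = solve2(eps, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=True)
# The huge multiplier 1/eps swamps the second row, so the naive
# algorithm returns x_naive == 0.0; pivoting recovers x ~ 1.
```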
7.3 Operation Counts
 You can trace through Algorithms 7.1 and 7.2 to count the operations and estimate the computational cost.

7.4 The LU Factorization
 Our goal in this section is to develop a matrix factorization that allows us to save and reuse the work from the elimination step.
 Why don’t we just compute A⁻¹ (to check if A is nonsingular)?
– The answer is that it is not cost-effective to do so.
– The total cost is (Exercise 7)
 What we will do is show that we can factor the matrix A
into the product of a lower triangular and an upper
triangular matrix:

The LU Factorization

Example 7.2

Example 7.2 (cont.)

The Computational Cost
 The total cost of the above process:

 If we have already done the factorization, then the cost of the two solution steps:

 Constructing the LU factorization is surprisingly easy.


– The LU factorization is nothing more than a very slight
reorganization of the same Gaussian elimination algorithm we
studied earlier in this chapter.
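To illustrate that the factorization is just a reorganization of elimination, here is a minimal Doolittle-style sketch in which the multipliers are simply stored as the entries of L (illustrative function names; this is not the book's Algorithm 7.5 verbatim):

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting: A = L U with
    unit lower triangular L and upper triangular U.  The multipliers
    from Gaussian elimination become the subdiagonal entries of L."""
    n = len(A)
    U = [list(map(float, row)) for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # elimination multiplier...
            L[i][k] = m             # ...saved into L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward solve L y = b, then back solve U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Once L and U are stored, each new right-hand side costs only the two triangular solves.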
The LU Factorization : Algorithms 7.5 and 7.6

Example 7.3

Example 7.3 (cont.)

Pivoting and the LU Decomposition
 Can we pivot in the LU decomposition without destroying the algorithm?
– Because of the triangular structure of the LU factors,
we can implement pivoting almost exactly as we did
before.
– The difference is that we must keep track of how the
rows are interchanged in order to properly apply the
forward and backward solution steps.

Example 7.4


Example 7.4 (cont.)

We need to keep
track of the row
interchanges.

Discussion
 How do we keep track of the row interchanges?
– Using an index array

– For example: in Example 7.4, the final version of J is

you can check that this is correct.
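A sketch of how an index array can record the interchanges during a pivoting LU factorization and then be applied to the right-hand side before the triangular solves (function names are illustrative, not Algorithms 7.5 and 7.6):

```python
def lu_factor_pivot(A):
    """LU factorization with partial pivoting.  The index array J
    records which original row ended up in each pivot position,
    so the interchanges can be replayed on any right-hand side."""
    n = len(A)
    M = [list(map(float, row)) for row in A]   # L and U stored in place
    J = list(range(n))                         # index array of interchanges
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # pivot row
        if p != k:
            M[k], M[p] = M[p], M[k]
            J[k], J[p] = J[p], J[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            M[i][k] = m                        # multiplier kept below diagonal
            for j in range(k + 1, n):
                M[i][j] -= m * M[k][j]
    return M, J

def lu_solve_pivot(M, J, b):
    """Permute b the same way the rows were permuted, then
    forward solve (unit lower part of M) and back solve (upper part)."""
    n = len(b)
    bp = [b[J[i]] for i in range(n)]
    y = [0.0] * n
    for i in range(n):
        y[i] = bp[i] - sum(M[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```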

7.5 Perturbation, Conditioning, and
Stability
Example 7.5

7.5.1 Vector and Matrix Norms

 For example:
– Infinity norm: ||x||∞ = max_i |x_i|

– Euclidean 2-norm: ||x||₂ = ( Σ_i x_i² )^(1/2)
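Both vector norms are straightforward to compute; a minimal sketch (function names are my own):

```python
from math import sqrt

def norm_inf(x):
    """Infinity norm: the largest entry in absolute value."""
    return max(abs(v) for v in x)

def norm_2(x):
    """Euclidean 2-norm: square root of the sum of squares."""
    return sqrt(sum(v * v for v in x))
```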

Matrix Norm

 The properties of a matrix norm: (1) ||Ax|| ≤ ||A|| ||x||; (2) ||AB|| ≤ ||A|| ||B||


 For example:
– The matrix infinity norm: ||A||∞ = max_i Σ_j |a_ij| (the maximum absolute row sum)

– The matrix 2-norm: ||A||₂ = the square root of the largest eigenvalue of AᵀA
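The matrix infinity norm is easy to compute directly; a sketch (the 2-norm is omitted here because it requires an eigenvalue computation):

```python
def matrix_norm_inf(A):
    """Matrix infinity norm: the maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)
```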

Example 7.6

A = [  17  −22   11
      −22   56  −20
       11  −20   14 ]

7.5.2 The Condition Number and
Perturbations

Condition number: κ(A) = ||A|| ||A⁻¹||
Note that κ(A) ≥ 1 in any induced norm, since 1 = ||I|| = ||AA⁻¹|| ≤ ||A|| ||A⁻¹||.

Definition 7.3 and Theorem 7.3

AA⁻¹ = I

Theorem 7.4

Theorems 7.5 and 7.6

Theorem 7.7

Definition 7.4

 An example: Example 7.7

Theorem 7.9

Discussion
 Is Gaussian elimination with partial pivoting a stable process?
– For a sufficiently accurate computer (u small enough) and a sufficiently small problem (n small enough), Gaussian elimination with partial pivoting will produce solutions that are stable and accurate.

7.5.3 Estimating the Condition Number
 Singular matrices are perhaps something of a rarity, and all
singular matrices are arbitrarily close to a nonsingular matrix.
 If the solution to a linear system changes a great deal when the
problem changes only very slightly, then we suspect that the matrix
is ill conditioned (nearly singular).
 The condition number is an important indicator for detecting ill-conditioned matrices.
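As an illustration, for a 2 × 2 matrix the condition number can be computed directly from the closed-form inverse (a sketch; the nearly singular test matrix below is my own choice, not one of the book's examples):

```python
def cond_inf_2x2(a, b, c, d):
    """Condition number in the infinity norm of [[a, b], [c, d]],
    using the closed-form 2x2 inverse: kappa(A) = ||A|| * ||A^-1||."""
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    norm_A = max(abs(a) + abs(b), abs(c) + abs(d))        # max row sum
    norm_inv = max(abs(inv[0][0]) + abs(inv[0][1]),
                   abs(inv[1][0]) + abs(inv[1][1]))
    return norm_A * norm_inv

# A nearly singular matrix has a huge condition number...
kappa_bad = cond_inf_2x2(1.0, 1.0, 1.0, 1.0001)
# ...while the identity is perfectly conditioned (kappa = 1).
kappa_id = cond_inf_2x2(1.0, 0.0, 0.0, 1.0)
```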

Estimating the Condition Number

Estimate the condition number


Example 7.8

7.5.4 Iterative Refinement
 Gaussian elimination can be adversely affected by rounding error, especially if the matrix is ill conditioned.
 The iterative refinement (iterative improvement) algorithm can be used to improve the accuracy of a computed solution.
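A sketch of the idea, using an ordinary pivoting solver for both the initial solve and the corrections (in practice the residual is often accumulated in higher precision; names here are illustrative):

```python
def gauss(A, b):
    """Helper: Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def refine(A, b, iters=3):
    """Iterative refinement: compute the residual r = b - A x,
    solve A d = r for a correction d, and update x <- x + d."""
    n = len(b)
    x = gauss(A, b)                      # initial computed solution
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        d = gauss(A, r)                  # correction from the residual
        x = [x[i] + d[i] for i in range(n)]
    return x
```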

Example 7.9

Example 7.9 (cont.)

compare

7.6 SPD Matrices and The Cholesky
Decomposition
 SPD matrices: symmetric, positive definite matrices

 You can prove this theorem by induction.

The Cholesky Decomposition
 There are a number of different ways of actually
constructing the Cholesky decomposition.
 All of these constructions are equivalent, because the
Cholesky factorization is unique.
 One common scheme uses the following formulas:

 This is a very efficient algorithm.
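A sketch of the standard column-oriented construction (this follows the usual Cholesky formulas; the function name is illustrative):

```python
from math import sqrt

def cholesky(A):
    """Cholesky factorization of an SPD matrix: A = G G^T with G
    lower triangular, built column by column."""
    n = len(A)
    G = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # Diagonal entry: g_kk = sqrt( a_kk - sum_{j<k} g_kj^2 )
        G[k][k] = sqrt(A[k][k] - sum(G[k][j] ** 2 for j in range(k)))
        # Entries below the diagonal in column k.
        for i in range(k + 1, n):
            s = sum(G[i][j] * G[k][j] for j in range(k))
            G[i][k] = (A[i][k] - s) / G[k][k]
    return G
```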


 You can read Section 9.22 to learn more about the Cholesky method.
7.7 Iterative Methods for Linear Systems: A Brief Survey
 If the coefficient matrix is very large and sparse, then Gaussian elimination may not be the best way to solve the linear system problem.
 Why?
– Even though A is sparse, the individual factors L and U in A = LU may not be as sparse as A.

Example 7.10

Example 7.10 (cont.)

Splitting Methods (see Chapter 9 for details)

Theorem 7.13

Definition 7.6

Theorem 7.14

 Conclusion:

Example of Splitting Methods: Jacobi Iteration

 Jacobi iteration: x_i(new) = ( b_i − Σ_{j≠i} a_ij x_j(old) ) / a_ii

 In this method, matrix M = D.
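A minimal componentwise sketch, assuming A is such that the iteration converges (for example, diagonally dominant); the function name is illustrative:

```python
def jacobi(A, b, x0, iters=50):
    """Jacobi iteration: with the splitting M = D (the diagonal of A),
    every component of the new iterate is computed entirely from the
    previous iterate."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```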

Example 7.12

Example 7.12 (cont.)

Example of Splitting Methods: Gauss-Seidel Iteration
 Gauss-Seidel iteration: x_i(new) = ( b_i − Σ_{j<i} a_ij x_j(new) − Σ_{j>i} a_ij x_j(old) ) / a_ii

 In this method, M is the lower triangular part of A (including the diagonal).
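A componentwise sketch, again assuming a matrix for which the iteration converges (function name illustrative); the only change from Jacobi is that each updated component is used immediately within the same sweep:

```python
def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel iteration: M is the lower triangular part of A,
    so updated components feed into later updates in the same sweep."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```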

Example 7.13

Theorem 7.15

Example of Splitting Methods: SOR Iteration
 SOR: successive over-relaxation iteration
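A sketch of one SOR sweep: a Gauss-Seidel-style update blended with the old value by a relaxation parameter ω (ω = 1 recovers Gauss-Seidel; the value 1.2 below is an illustrative choice, not an optimal one):

```python
def sor(A, b, x0, omega=1.2, iters=50):
    """SOR (successive over-relaxation): each Gauss-Seidel update is
    weighted by omega and blended with the previous value."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x
```

For SPD matrices, SOR converges for any 0 < ω < 2; choosing ω well can accelerate convergence substantially.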

Example 7.14

Theorem 7.16

