Numerical Analysis
Norms of Vectors and Matrices
Theorem
Distance
Theorem
Theorem
Matrix norm and distance
Theorem
Corollary
Theorem
Eigenvalues and Eigenvectors
Figure 7.6
Example
Theorem
Example
Convergent matrices
Theorem
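A standard characterization (used throughout this chapter) is that a matrix A is convergent, i.e. A^k tends to the zero matrix entrywise as k grows, exactly when its spectral radius satisfies rho(A) < 1. A quick pure-Python illustration; the 2x2 matrix, the power 50, and the tolerance are my own illustrative choices, not the slides' example:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A has both eigenvalues 1/2, so rho(A) = 1/2 < 1 and the
# characterization predicts A^k -> 0 entrywise.
A = [[0.5, 0.0], [0.25, 0.5]]
P = A
for _ in range(49):          # compute A^50
    P = mat_mul(P, A)

largest = max(abs(x) for row in P for x in row)
print(largest)  # every entry of A^50 is below 1e-6
```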
The Jacobi Iterative Techniques
- Iterative techniques are efficient for large linear systems.
- Idea: start with an initial approximation x(0) to the solution x,
and generate a sequence of vectors {x(k)} that converges to x.
Jacobi’s Method
- Consider the i-th equation of the linear system Ax = b
- For each k >= 1, generate the components x_i^{(k)} of x^{(k)} from x^{(k-1)} by
\[ x_i^{(k)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \ne i} a_{ij} x_j^{(k-1)} \Big), \quad i = 1, \dots, n. \]
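The componentwise update can be sketched in pure Python. The strictly diagonally dominant 3x3 system and the iteration count below are illustrative choices, not the slides' example:

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration: each component of x(k) uses only x(k-1)."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

# Strictly diagonally dominant system (exact solution [1, 2, -1]),
# so the Jacobi method is guaranteed to converge.
A = [[10.0, -1.0, 2.0],
     [-1.0, 11.0, -1.0],
     [2.0, -1.0, 10.0]]
b = [6.0, 22.0, -10.0]
x = jacobi(A, b, [0.0, 0.0, 0.0], 50)
print(x)  # very close to [1, 2, -1]
```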
Example
Analysis of the Jacobi method
- Split A = D - L - U, where D is the diagonal part of A and -L, -U are
its strictly lower- and upper-triangular parts. The Jacobi method can then
be written in the form
\[ x^{(k)} = T_j x^{(k-1)} + c_j, \qquad T_j = D^{-1}(L + U), \quad c_j = D^{-1} b. \]
- We have x^{(k)} - x = T_j (x^{(k-1)} - x), hence
\[ \| x^{(k)} - x \| \le \| T_j \|^k \, \| x^{(0)} - x \|. \]
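Writing D for the diagonal of A, the Jacobi iteration matrix D^{-1}(L+U) (here called T) and constant vector D^{-1}b can be formed explicitly; the 2x2 data below are an illustrative choice:

```python
def jacobi_splitting(A, b):
    """Return T = D^{-1}(L+U) and c = D^{-1} b for the Jacobi method,
    where D is the diagonal of A."""
    n = len(b)
    T = [[0.0 if i == j else -A[i][j] / A[i][i] for j in range(n)]
         for i in range(n)]
    c = [b[i] / A[i][i] for i in range(n)]
    return T, c

A = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 3.0]
T, c = jacobi_splitting(A, b)
print(T)  # [[0.0, -0.25], [-0.4, 0.0]]
print(c)  # [0.25, 0.6]

# The infinity norm of T is 0.4 < 1, so the iteration converges for this A.
norm_inf = max(sum(abs(t) for t in row) for row in T)
print(norm_inf)  # 0.4
```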
Example
The Gauss-Seidel Method
Improvement:
- When we compute x_i^{(k)}, the components x_1^{(k)}, ..., x_{i-1}^{(k)}
are already available.
- They are better approximations than x_1^{(k-1)}, ..., x_{i-1}^{(k-1)},
so the Gauss-Seidel method uses them instead:
\[ x_i^{(k)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Big), \quad i = 1, \dots, n. \]
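A minimal Gauss-Seidel sketch in pure Python, updating the components in place so that the newest values are used immediately; the system and iteration count are illustrative choices, not the slides' example:

```python
def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel: overwrite components in place, so x_1(k), ...,
    x_{i-1}(k) are already used when computing x_i(k)."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Strictly diagonally dominant system (exact solution [1, 2, -1]);
# Gauss-Seidel typically reaches a given accuracy in fewer iterations
# than Jacobi.
A = [[10.0, -1.0, 2.0],
     [-1.0, 11.0, -1.0],
     [2.0, -1.0, 10.0]]
b = [6.0, 22.0, -10.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0], 20)
print(x)  # very close to [1, 2, -1]
```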
Example
In matrix form:
\[ (D - L) x^{(k)} = U x^{(k-1)} + b, \quad \text{i.e.} \quad x^{(k)} = T_g x^{(k-1)} + c_g, \]
with T_g = (D - L)^{-1} U and c_g = (D - L)^{-1} b, using the splitting
A = D - L - U (D the diagonal of A, -L and -U its strictly lower- and
upper-triangular parts).
Remark:
The Gauss-Seidel method is almost always better than the Jacobi
method, but there are linear systems for which the Jacobi method
converges and the Gauss-Seidel method does not
(look at the eigenvalues of their iteration matrices).
General Iteration Methods
- To study the convergence of general iteration methods, we
need to analyze the formula
\[ x^{(k)} = T x^{(k-1)} + c, \qquad k = 1, 2, \dots, \]
where x^{(0)} is arbitrary.
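A generic driver for the iteration x(k) = T x(k-1) + c illustrates that, when the spectral radius of T is below 1, the limit does not depend on the arbitrary starting vector; the 2x2 data below are illustrative choices:

```python
def fixed_point_iteration(T, c, x0, iters):
    """Iterate x(k) = T x(k-1) + c from an arbitrary x(0)."""
    n = len(c)
    x = list(x0)
    for _ in range(iters):
        x = [sum(T[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
    return x

# rho(T) = sqrt(1/8) < 1 here, so the iteration converges to the unique
# fixed point x = Tx + c (which is [12/7, 10/7]) from ANY starting vector.
T = [[0.0, 0.5], [0.25, 0.0]]
c = [1.0, 1.0]
xa = fixed_point_iteration(T, c, [0.0, 0.0], 60)
xb = fixed_point_iteration(T, c, [100.0, -100.0], 60)
print(xa, xb)  # both very close to [12/7, 10/7]
```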
Lemma
Theorem
Corollary
Theorem
Remark:
Theorem
Interpretation
- When one method converges, the other also converges; the
Gauss-Seidel method converges faster (i)
- When one method diverges, so does the other (ii)
- (iii): both methods reach the exact solution in a finite number
of iterations.
The Conjugate Gradient Method
- Takes at most n steps (in exact arithmetic) to obtain the solution
to the equation Ax = b
- Efficient for large sparse systems
- Often gives good results after far fewer than n iterations
Theorem
Theorem
The set of nonzero vectors {v^{(1)}, ..., v^{(n)}} is called A-orthogonal (A
positive definite) if
\[ (v^{(i)})^T A v^{(j)} = 0 \quad \text{whenever } i \ne j. \]
Theorem
Residual: r^{(k)} = b - A x^{(k)}.
Example
Theorem
The Conjugate Gradient Algorithm
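A compact pure-Python sketch of the (unpreconditioned) conjugate gradient algorithm; the variable names and the 2x2 test system are my own illustrative choices, not the slides' notation:

```python
def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Conjugate gradient for symmetric positive definite A.
    The search directions v are constructed to be A-orthogonal."""
    n = len(b)
    max_iter = max_iter or n
    x = list(x0)
    # residual r = b - A x
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    v = list(r)                          # first direction: the residual
    rr = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rr ** 0.5 < tol:
            break
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        t = rr / sum(v[i] * Av[i] for i in range(n))   # optimal step size
        x = [x[i] + t * v[i] for i in range(n)]
        r = [r[i] - t * Av[i] for i in range(n)]
        rr_new = sum(ri * ri for ri in r)
        s = rr_new / rr       # makes the next direction A-orthogonal
        v = [r[i] + s * v[i] for i in range(n)]
        rr = rr_new
    return x

# SPD illustrative system; in exact arithmetic CG finishes in <= n steps.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
print(x)  # close to the exact solution [1/11, 7/11]
```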
Verify that this algorithm generates a set of A-orthogonal vectors:
the argument is by induction. One first checks directly that v^{(1)} and
v^{(2)} are A-orthogonal. Then, supposing v^{(1)}, ..., v^{(k)} are
A-orthogonal, one shows that (v^{(i)})^T A v^{(k+1)} = 0 for every i <= k:
for i < k this uses the induction hypothesis, and for i = k it follows
from the choice of the coefficient in the construction of v^{(k+1)}.
Example
The Gradient Descent Method
Recall: x* is the solution to the positive definite linear system Ax
= b if and only if x* produces the minimal value of
\[ g(x) = \langle x, Ax \rangle - 2 \langle x, b \rangle, \]
where \langle x, y \rangle = x^T y.
- For the linear system above, we can choose the optimal value
of the step size \alpha_k, similarly to the conjugate gradient method.
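A steepest-descent sketch that searches along the residual r = b - Ax with the exact line-search step alpha = <r, r> / <r, Ar>; the 2x2 system and the iteration budget are illustrative choices:

```python
def steepest_descent(A, b, x0, iters):
    """Minimize g(x) = <x, Ax> - 2<x, b> by moving along the residual
    r = b - Ax with the optimal step alpha = <r, r> / <r, Ar>."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        rr = sum(ri * ri for ri in r)
        if rr == 0.0:
            break
        rAr = sum(r[i] * sum(A[i][j] * r[j] for j in range(n))
                  for i in range(n))
        alpha = rr / rAr
        x = [x[i] + alpha * r[i] for i in range(n)]
    return x

# SPD illustrative system with exact solution [1/11, 7/11]; note how many
# more iterations this needs than conjugate gradient's n = 2 steps.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = steepest_descent(A, b, [0.0, 0.0], 100)
print(x)  # converges (slowly) toward [1/11, 7/11]
```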
Gradient Descent (Steepest Descent)
- However, the Gradient Descent method is not efficient for
solving linear systems because of its slow convergence rate.
- It can be used for optimization problems and non-linear
systems.