Definition of Dominant Eigenvalue and Dominant Eigenvector
Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of an $n \times n$ matrix $A$. $\lambda_1$ is called the dominant eigenvalue of $A$ if
\[ |\lambda_1| > |\lambda_i|, \qquad i = 2, \ldots, n. \]
The eigenvectors corresponding to $\lambda_1$ are called dominant eigenvectors of $A$.
Not every matrix has a dominant eigenvalue. For instance, the matrix
\[ A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \]
(with eigenvalues $\lambda_1 = 1$ and $\lambda_2 = -1$) has no dominant eigenvalue. Similarly, the matrix
\[ A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]
(with eigenvalues $\lambda_1 = 2$, $\lambda_2 = 2$, and $\lambda_3 = 1$) has no dominant eigenvalue.
the corresponding dominant eigenvectors are
\[ \mathbf{x} = t\begin{bmatrix} 3 \\ 1 \end{bmatrix}, \qquad t \neq 0. \]
SECTION 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES 551
...
\[ \mathbf{x}_k = A\mathbf{x}_{k-1} = A(A^{k-1}\mathbf{x}_0) = A^k\mathbf{x}_0. \]
For large values of $k$, and by properly scaling this sequence, we will see that we obtain a good approximation of the dominant eigenvector of $A$. This procedure is illustrated in Example 2.
We begin with an initial nonzero approximation of
\[ \mathbf{x}_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. \]
We then obtain the following approximations.
\[
\begin{aligned}
\mathbf{x}_1 &= A\mathbf{x}_0 = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -10 \\ -4 \end{bmatrix} = -4\begin{bmatrix} 2.50 \\ 1.00 \end{bmatrix} \\
\mathbf{x}_2 &= A\mathbf{x}_1 = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}\begin{bmatrix} -10 \\ -4 \end{bmatrix} = \begin{bmatrix} 28 \\ 10 \end{bmatrix} = 10\begin{bmatrix} 2.80 \\ 1.00 \end{bmatrix} \\
\mathbf{x}_3 &= A\mathbf{x}_2 = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}\begin{bmatrix} 28 \\ 10 \end{bmatrix} = \begin{bmatrix} -64 \\ -22 \end{bmatrix} = -22\begin{bmatrix} 2.91 \\ 1.00 \end{bmatrix} \\
\mathbf{x}_4 &= A\mathbf{x}_3 = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}\begin{bmatrix} -64 \\ -22 \end{bmatrix} = \begin{bmatrix} 136 \\ 46 \end{bmatrix} = 46\begin{bmatrix} 2.96 \\ 1.00 \end{bmatrix} \\
\mathbf{x}_5 &= A\mathbf{x}_4 = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}\begin{bmatrix} 136 \\ 46 \end{bmatrix} = \begin{bmatrix} -280 \\ -94 \end{bmatrix} = -94\begin{bmatrix} 2.98 \\ 1.00 \end{bmatrix} \\
\mathbf{x}_6 &= A\mathbf{x}_5 = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}\begin{bmatrix} -280 \\ -94 \end{bmatrix} = \begin{bmatrix} 568 \\ 190 \end{bmatrix} = 190\begin{bmatrix} 2.99 \\ 1.00 \end{bmatrix}
\end{aligned}
\]
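The six iterations above can be reproduced in a few lines of NumPy. This is a minimal sketch of the unscaled power method, using the matrix and starting vector of Example 2:

```python
import numpy as np

# Matrix and initial approximation from Example 2.
A = np.array([[2.0, -12.0],
              [1.0, -5.0]])
x = np.array([1.0, 1.0])

# Unscaled power method: x_k = A x_{k-1} = A^k x_0.
for k in range(1, 7):
    x = A @ x
    # Factor out the second component to show the scaled form used above.
    print(f"x{k} = {x} = {x[1]:g} * {np.round(x / x[1], 2)}")
```

The final iterate is $(568, 190)$, whose scaled form $(2.99, 1.00)$ matches the approximation carried into Example 3.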
552 CHAPTER 10 NUMERICAL METHODS
If $\mathbf{x}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda$, then $A\mathbf{x} = \lambda\mathbf{x}$ and
\[ \frac{A\mathbf{x}\cdot\mathbf{x}}{\mathbf{x}\cdot\mathbf{x}} = \frac{\lambda\mathbf{x}\cdot\mathbf{x}}{\mathbf{x}\cdot\mathbf{x}} = \frac{\lambda(\mathbf{x}\cdot\mathbf{x})}{\mathbf{x}\cdot\mathbf{x}} = \lambda. \]
This quotient is called the Rayleigh quotient of $\mathbf{x}$.
In cases for which the power method generates a good approximation of a dominant
eigenvector, the Rayleigh quotient provides a correspondingly good approximation of the
dominant eigenvalue. The use of the Rayleigh quotient is demonstrated in Example 3.
Use the result of Example 2 to approximate the dominant eigenvalue of the matrix
\[ A = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}. \]
Solution After the sixth iteration of the power method in Example 2, we obtained
\[ \mathbf{x}_6 = \begin{bmatrix} 568 \\ 190 \end{bmatrix} \approx 190\begin{bmatrix} 2.99 \\ 1.00 \end{bmatrix}. \]
Using $\mathbf{x} = (2.99, 1.00)$ as our approximation of a dominant eigenvector, we compute
\[ A\mathbf{x} = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}\begin{bmatrix} 2.99 \\ 1.00 \end{bmatrix} = \begin{bmatrix} -6.02 \\ -2.01 \end{bmatrix}. \]
Then, since
\[ A\mathbf{x}\cdot\mathbf{x} = (-6.02)(2.99) + (-2.01)(1) \approx -20.0 \]
and
\[ \mathbf{x}\cdot\mathbf{x} = (2.99)(2.99) + (1)(1) \approx 9.94, \]
we compute the Rayleigh quotient to be
\[ \lambda = \frac{A\mathbf{x}\cdot\mathbf{x}}{\mathbf{x}\cdot\mathbf{x}} \approx \frac{-20.0}{9.94} \approx -2.01, \]
which is a good approximation of the dominant eigenvalue $\lambda = -2$.
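As a quick numerical check, the Rayleigh quotient computation of Example 3 can be carried out directly; a sketch:

```python
import numpy as np

# Matrix from Example 3 and the scaled eigenvector estimate from Example 2.
A = np.array([[2.0, -12.0],
              [1.0, -5.0]])
x = np.array([2.99, 1.00])

# Rayleigh quotient: lambda = (Ax . x) / (x . x).
Ax = A @ x
lam = (Ax @ x) / (x @ x)
print(round(lam, 2))  # close to the dominant eigenvalue -2
```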
From Example 2 we can see that the power method tends to produce approximations
with large entries. In practice it is best to “scale down” each approximation before pro-
ceeding to the next iteration. One way to accomplish this scaling is to determine the com-
ponent of Axi that has the largest absolute value and multiply the vector Axi by the
reciprocal of this component. The resulting vector will then have components whose
absolute values are less than or equal to 1. (Other scaling techniques are possible. For examples, see Exercises 27 and 28.)
Calculate seven iterations of the power method with scaling to approximate a dominant
eigenvector of the matrix
\[ A = \begin{bmatrix} 1 & 2 & 0 \\ -2 & 1 & 2 \\ 1 & 3 & 1 \end{bmatrix}. \]
Use x0 5 (1, 1, 1) as the initial approximation.
Solution One iteration of the power method produces
\[ A\mathbf{x}_0 = \begin{bmatrix} 1 & 2 & 0 \\ -2 & 1 & 2 \\ 1 & 3 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 1 \\ 5 \end{bmatrix}, \]
and by scaling we obtain the approximation
\[ \mathbf{x}_1 = \frac{1}{5}\begin{bmatrix} 3 \\ 1 \\ 5 \end{bmatrix} = \begin{bmatrix} 0.60 \\ 0.20 \\ 1.00 \end{bmatrix}. \]
A second iteration produces
\[ A\mathbf{x}_1 = \begin{bmatrix} 1 & 2 & 0 \\ -2 & 1 & 2 \\ 1 & 3 & 1 \end{bmatrix}\begin{bmatrix} 0.60 \\ 0.20 \\ 1.00 \end{bmatrix} = \begin{bmatrix} 1.00 \\ 1.00 \\ 2.20 \end{bmatrix} \]
and
\[ \mathbf{x}_2 = \frac{1}{2.20}\begin{bmatrix} 1.00 \\ 1.00 \\ 2.20 \end{bmatrix} = \begin{bmatrix} 0.45 \\ 0.45 \\ 1.00 \end{bmatrix}. \]
Continuing this process, we obtain the sequence of approximations shown in Table 10.6.
TABLE 10.6 (each column is the approximation $\mathbf{x}_i$)

        x0      x1      x2      x3      x4      x5      x6      x7
      1.00    0.60    0.45    0.48    0.51    0.50    0.50    0.50
      1.00    0.20    0.45    0.55    0.51    0.49    0.50    0.50
      1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
From Table 10.6 we approximate a dominant eigenvector of $A$ to be
\[ \mathbf{x} = \begin{bmatrix} 0.50 \\ 0.50 \\ 1.00 \end{bmatrix}. \]
Using the Rayleigh quotient, we approximate the dominant eigenvalue of $A$ to be $\lambda = 3$.
(For this example you can check that the approximations of x and l are exact.)
REMARK: Note that the scaling factors used to obtain the vectors in Table 10.6,

     x1      x2      x3      x4      x5      x6      x7
     ↓       ↓       ↓       ↓       ↓       ↓       ↓
    5.00    2.20    2.80    3.13    3.04    2.97    3.00,

are approaching the dominant eigenvalue $\lambda = 3$.
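The scaled iteration of Example 4 can be sketched in NumPy as follows. The scaling step divides by the component of largest absolute value, so the printed scaling factors approach $\lambda = 3$; the values differ slightly from Table 10.6 because no intermediate two-decimal rounding is done here:

```python
import numpy as np

# Matrix and initial approximation from Example 4.
A = np.array([[1.0, 2.0, 0.0],
              [-2.0, 1.0, 2.0],
              [1.0, 3.0, 1.0]])
x = np.array([1.0, 1.0, 1.0])

for k in range(1, 8):
    y = A @ x
    factor = y[np.argmax(np.abs(y))]  # component of largest absolute value
    x = y / factor                    # every entry now lies in [-1, 1]
    print(f"x{k} = {np.round(x, 2)}   scaling factor = {factor:.2f}")
```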
In Example 4 the power method with scaling converges to a dominant eigenvector. The
following theorem tells us that a sufficient condition for convergence of the power method
is that the matrix A be diagonalizable (and have a dominant eigenvalue).
Theorem 10.3 (Convergence of the Power Method)
If $A$ is an $n \times n$ diagonalizable matrix with a dominant eigenvalue, then there exists a nonzero vector $\mathbf{x}_0$ such that the sequence of vectors
\[ A\mathbf{x}_0,\ A^2\mathbf{x}_0,\ A^3\mathbf{x}_0,\ A^4\mathbf{x}_0,\ \ldots,\ A^k\mathbf{x}_0,\ \ldots \]
approaches a multiple of the dominant eigenvector of $A$.
Proof Since $A$ is diagonalizable, we know from Theorem 7.5 that it has $n$ linearly independent eigenvectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ with corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. We assume that these eigenvalues are ordered so that $\lambda_1$ is the dominant eigenvalue (with corresponding eigenvector $\mathbf{x}_1$). Because the $n$ eigenvectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ are linearly independent, they must form a basis for $\mathbb{R}^n$. For the initial approximation $\mathbf{x}_0$, we choose a nonzero vector such that the linear combination
\[ \mathbf{x}_0 = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \cdots + c_n\mathbf{x}_n \]
has a nonzero leading coefficient $c_1$. (If $c_1 = 0$, the power method may not converge, and a different $\mathbf{x}_0$ must be used as the initial approximation. See Exercises 21 and 22.) Now, multiplying both sides of this equation by $A$ produces
\[ A\mathbf{x}_0 = c_1 A\mathbf{x}_1 + c_2 A\mathbf{x}_2 + \cdots + c_n A\mathbf{x}_n = c_1(\lambda_1\mathbf{x}_1) + c_2(\lambda_2\mathbf{x}_2) + \cdots + c_n(\lambda_n\mathbf{x}_n). \]
Repeated multiplication of both sides by $A$ produces
\[ A^k\mathbf{x}_0 = \lambda_1^k\left[ c_1\mathbf{x}_1 + c_2\left(\frac{\lambda_2}{\lambda_1}\right)^{k}\mathbf{x}_2 + \cdots + c_n\left(\frac{\lambda_n}{\lambda_1}\right)^{k}\mathbf{x}_n \right]. \]
Now, from our original assumption that $\lambda_1$ is larger in absolute value than the other eigenvalues, it follows that each of the fractions
\[ \frac{\lambda_2}{\lambda_1},\ \frac{\lambda_3}{\lambda_1},\ \ldots,\ \frac{\lambda_n}{\lambda_1} \]
is less than 1 in absolute value. Therefore each of the factors
\[ \left(\frac{\lambda_2}{\lambda_1}\right)^{k},\ \left(\frac{\lambda_3}{\lambda_1}\right)^{k},\ \ldots,\ \left(\frac{\lambda_n}{\lambda_1}\right)^{k} \]
must approach 0 as k approaches infinity. This implies that the approximation
\[ A^k\mathbf{x}_0 \approx \lambda_1^k c_1\mathbf{x}_1, \qquad c_1 \neq 0, \]
improves as $k$ increases. Since $\mathbf{x}_1$ is a dominant eigenvector, it follows that any nonzero scalar multiple of $\mathbf{x}_1$ is also a dominant eigenvector. Thus we have shown that $A^k\mathbf{x}_0$ approaches a multiple of the dominant eigenvector of $A$.
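The decay of the non-dominant terms can be watched numerically. This sketch (reusing the matrix of Example 2 purely for illustration) expands $\mathbf{x}_0$ in an eigenbasis and compares $A^k\mathbf{x}_0$ with the single dominant term $\lambda_1^k c_1 \mathbf{x}_1$:

```python
import numpy as np

A = np.array([[2.0, -12.0],
              [1.0, -5.0]])
lams, V = np.linalg.eig(A)         # eigenvalues and eigenvector matrix
order = np.argsort(-np.abs(lams))  # put the dominant eigenvalue first
lams, V = lams[order], V[:, order]

x0 = np.array([1.0, 1.0])
c = np.linalg.solve(V, x0)         # coefficients of x0 in the eigenbasis

for k in (1, 5, 10):
    exact = np.linalg.matrix_power(A, k) @ x0
    dominant_term = lams[0] ** k * c[0] * V[:, 0]
    print(k, exact, np.round(dominant_term, 3))
```

By $k = 10$ the one-term approximation agrees with $A^k\mathbf{x}_0$ to well under one percent, since $|\lambda_2/\lambda_1|^{10} = (1/2)^{10}$ is tiny.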
The proof of Theorem 10.3 provides some insight into the rate of convergence of the
power method. That is, if the eigenvalues of A are ordered so that
\[ |\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_n|, \]
then the power method will converge quickly if $|\lambda_2|/|\lambda_1|$ is small, and slowly if $|\lambda_2|/|\lambda_1|$ is close to 1. This principle is illustrated in Example 5.
EXAMPLE 5 The Rate of Convergence of the Power Method
The matrix
\[ A = \begin{bmatrix} 4 & 5 \\ 6 & 5 \end{bmatrix} \]
has eigenvalues $\lambda_1 = 10$ and $\lambda_2 = -1$. Thus the ratio $|\lambda_2|/|\lambda_1|$ is 0.1. For this matrix, only four iterations are required to obtain successive approximations that agree when rounded to three significant digits. (See Table 10.7.)
TABLE 10.7

        x0       x1       x2       x3       x4
     1.000    0.818    0.835    0.833    0.833
     1.000    1.000    1.000    1.000    1.000

On the other hand, the matrix
\[ A = \begin{bmatrix} -4 & 10 \\ 7 & 5 \end{bmatrix} \]
has eigenvalues $\lambda_1 = 10$ and $\lambda_2 = -9$, so the ratio $|\lambda_2|/|\lambda_1|$ is 0.9, and many more iterations are required to obtain successive approximations that agree to three significant digits. (See Table 10.8.)

TABLE 10.8

        x0       x1       x2     . . .      x66      x67      x68
     1.000    0.500    0.941    . . .    0.715    0.714    0.714
     1.000    1.000    1.000    . . .    1.000    1.000    1.000
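The effect of the ratio $|\lambda_2|/|\lambda_1|$ on speed can be sketched as follows. The first matrix has eigenvalues $10$ and $-1$ (ratio 0.1); a second matrix with eigenvalues $10$ and $-9$ (ratio 0.9) is used for contrast. The tolerance and iteration cap are arbitrary choices for this illustration:

```python
import numpy as np

def iterations_until_stable(A, x0, tol=5e-4, max_iter=500):
    """Scaled power method; count iterations until successive scaled
    approximations agree within tol in every component."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        y = A @ x
        y = y / y[np.argmax(np.abs(y))]   # scale by largest-magnitude entry
        if np.max(np.abs(y - x)) < tol:
            return k
        x = y
    return max_iter

fast = np.array([[4.0, 5.0], [6.0, 5.0]])    # eigenvalues 10, -1: ratio 0.1
slow = np.array([[-4.0, 10.0], [7.0, 5.0]])  # eigenvalues 10, -9: ratio 0.9
x0 = [1.0, 1.0]
print(iterations_until_stable(fast, x0))  # only a few iterations
print(iterations_until_stable(slow, x0))  # dozens of iterations
```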
In this section we have discussed the use of the power method to approximate the
dominant eigenvalue of a matrix. This method can be modified to approximate other eigen-
values through use of a procedure called deflation. Moreover, the power method is only
one of several techniques that can be used to approximate the eigenvalues of a matrix.
Another popular method is called the QR algorithm.
This is the method used in most computer programs and calculators for finding eigen-
values and eigenvectors. The algorithm uses the QR–factorization of the matrix, as pre-
sented in Chapter 5. Discussions of the deflation method and the QR algorithm can be
found in most texts on numerical methods.
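To give a flavor of the QR algorithm mentioned above, here is a bare-bones, unshifted QR iteration on a small symmetric matrix chosen for illustration. Practical implementations add Hessenberg reduction and shifts; in NumPy one would simply call `np.linalg.eig`:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric; eigenvalues 3 and 1

Ak = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(Ak)  # QR-factorization of the current iterate
    Ak = R @ Q               # similar to A; off-diagonal entries decay

print(np.round(np.diag(Ak), 6))  # diagonal approximates the eigenvalues
```

Each step replaces $A_k$ by $R_kQ_k = Q_k^{-1}A_kQ_k$, a similarity transformation, so the eigenvalues are preserved while the iterates approach triangular form.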
SECTION 10.3 EXERCISES 557
In Exercises 1–6, use the techniques presented in Chapter 7 to find the eigenvalues of the given matrix $A$. If $A$ has a dominant eigenvalue, find a corresponding dominant eigenvector.

1. $A = \begin{bmatrix} 2 & 1 \\ 0 & -4 \end{bmatrix}$   2. $A = \begin{bmatrix} 1 & 3 \\ 0 & -5 \end{bmatrix}$

3. $A = \begin{bmatrix} 1 & -5 \\ -3 & -1 \end{bmatrix}$   4. $A = \begin{bmatrix} 4 & -5 \\ 2 & -3 \end{bmatrix}$

5. $A = \begin{bmatrix} 2 & 3 & 1 \\ 0 & -1 & 2 \\ 0 & 0 & 3 \end{bmatrix}$   6. $A = \begin{bmatrix} 0 & 0 & 0 \\ 3 & 7 & 0 \\ 4 & -2 & 3 \end{bmatrix}$

In Exercises 7–10, use the Rayleigh quotient to compute the eigenvalue $\lambda$ of $A$ corresponding to the given eigenvector $\mathbf{x}$.

7. $A = \begin{bmatrix} 4 & 5 \\ 2 & 1 \end{bmatrix}$, $\mathbf{x} = \begin{bmatrix} 5 \\ 2 \end{bmatrix}$   8. $A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}$, $\mathbf{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$

9. $A = \begin{bmatrix} 1 & 2 & -2 \\ -2 & 5 & -2 \\ -6 & 6 & -3 \end{bmatrix}$, $\mathbf{x} = \begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix}$   10. $A = \begin{bmatrix} 3 & 2 & -3 \\ -3 & -4 & 9 \\ -1 & -2 & 5 \end{bmatrix}$, $\mathbf{x} = \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}$

C In Exercises 11–14, use the power method with scaling to approximate a dominant eigenvector of the matrix $A$. Start with $\mathbf{x}_0 = (1, 1)$ and calculate five iterations. Then use $\mathbf{x}_5$ to approximate the dominant eigenvalue of $A$.

11. $A = \begin{bmatrix} 2 & 1 \\ 0 & -7 \end{bmatrix}$   12. $A = \begin{bmatrix} 0 & 6 \\ 1 & 1 \end{bmatrix}$

13. $A = \begin{bmatrix} 1 & -4 \\ -2 & 8 \end{bmatrix}$   14. $A = \begin{bmatrix} 6 & -3 \\ -2 & 1 \end{bmatrix}$

C In Exercises 15–18, use the power method with scaling to approximate a dominant eigenvector of the matrix $A$. Start with $\mathbf{x}_0 = (1, 1, 1)$ and calculate four iterations. Then use $\mathbf{x}_4$ to approximate the dominant eigenvalue of $A$.

15. $A = \begin{bmatrix} 3 & 0 & 0 \\ 1 & -1 & 0 \\ 0 & 2 & 8 \end{bmatrix}$   16. $A = \begin{bmatrix} 1 & 2 & 0 \\ 0 & -7 & 1 \\ 0 & 0 & 0 \end{bmatrix}$

17. $A = \begin{bmatrix} 0 & 0 & 0 \\ 2 & 7 & 0 \\ 1 & 2 & -1 \end{bmatrix}$   18. $A = \begin{bmatrix} 6 & 0 & 0 \\ 0 & -4 & 0 \\ 2 & 1 & 1 \end{bmatrix}$

C In Exercises 19 and 20, the given matrix $A$ does not have a dominant eigenvalue. Apply the power method with scaling, starting with $\mathbf{x}_0 = (1, 1, 1)$, and observe the results of the first four iterations.

19. $A = \begin{bmatrix} 1 & 1 & 0 \\ 3 & -1 & 0 \\ 0 & 0 & -2 \end{bmatrix}$   20. $A = \begin{bmatrix} 1 & 2 & -2 \\ -2 & 5 & -2 \\ -6 & 6 & -3 \end{bmatrix}$

21. (a) Find the eigenvalues and corresponding eigenvectors of
$A = \begin{bmatrix} 3 & -1 \\ -2 & 4 \end{bmatrix}$.
(b) Calculate two iterations of the power method with scaling, starting with $\mathbf{x}_0 = (1, 1)$.
(c) Explain why the method does not seem to converge to a dominant eigenvector.

22. Repeat Exercise 21 using $\mathbf{x}_0 = (1, 1, 1)$, for the matrix
$A = \begin{bmatrix} -3 & 0 & 2 \\ 0 & -1 & 0 \\ 0 & 1 & -2 \end{bmatrix}$.

C 23. The matrix
$A = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}$
has a dominant eigenvalue of $\lambda = -2$. Observe that $A\mathbf{x} = \lambda\mathbf{x}$ implies that
\[ A^{-1}\mathbf{x} = \frac{1}{\lambda}\mathbf{x}. \]
Apply five iterations of the power method (with scaling) on $A^{-1}$ to compute the eigenvalue of $A$ with the smallest magnitude.

C 24. Repeat Exercise 23 for the matrix
$A = \begin{bmatrix} 2 & 3 & 1 \\ 0 & -1 & 2 \\ 0 & 0 & 3 \end{bmatrix}$.

C 25. (a) Compute the eigenvalues of
$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}$.
(b) Apply four iterations of the power method with scaling to each matrix in part (a), starting with $\mathbf{x}_0 = (-1, 2)$.
(c) Compute the ratios $\lambda_2/\lambda_1$ for $A$ and $B$. For which do you expect faster convergence?

26. Use the proof of Theorem 10.3 to show that
\[ A(A^k\mathbf{x}_0) \approx \lambda_1(A^k\mathbf{x}_0) \]
for large values of $k$. That is, show that the scale factors obtained in the power method approach the dominant eigenvalue.

C In Exercises 27 and 28, apply four iterations of the power method (with scaling) to approximate the dominant eigenvalue of the given matrix. After each iteration, scale the approximation by dividing by its length so that the resulting approximation will be a unit vector.

27. $A = \begin{bmatrix} 5 & 6 \\ 4 & 3 \end{bmatrix}$   28. $A = \begin{bmatrix} -4 & 7 & 2 \\ 16 & -9 & 6 \\ 8 & -4 & 5 \end{bmatrix}$
Regression Analysis for Polynomials
The least squares regression polynomial of degree $m$ for the points $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$ is given by
\[ y = a_m x^m + a_{m-1}x^{m-1} + \cdots + a_2 x^2 + a_1 x + a_0, \]
where the coefficients are determined by the following system of $m + 1$ linear equations.
\[
\begin{aligned}
n a_0 + (\Sigma x_i)a_1 + (\Sigma x_i^2)a_2 + \cdots + (\Sigma x_i^m)a_m &= \Sigma y_i \\
(\Sigma x_i)a_0 + (\Sigma x_i^2)a_1 + (\Sigma x_i^3)a_2 + \cdots + (\Sigma x_i^{m+1})a_m &= \Sigma x_i y_i \\
(\Sigma x_i^2)a_0 + (\Sigma x_i^3)a_1 + (\Sigma x_i^4)a_2 + \cdots + (\Sigma x_i^{m+2})a_m &= \Sigma x_i^2 y_i \\
&\ \ \vdots \\
(\Sigma x_i^m)a_0 + (\Sigma x_i^{m+1})a_1 + (\Sigma x_i^{m+2})a_2 + \cdots + (\Sigma x_i^{2m})a_m &= \Sigma x_i^m y_i
\end{aligned}
\]
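The normal equations above can be assembled and solved directly. A sketch (the helper name and the sample data are mine, chosen so the fit is exact):

```python
import numpy as np

def least_squares_poly(xs, ys, m):
    """Solve the (m+1)x(m+1) normal equations for the coefficients
    a_0, ..., a_m of the degree-m least squares polynomial."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # Entry (j, k) of the coefficient matrix is sum(x_i^(j+k));
    # the right-hand side entry j is sum(x_i^j * y_i).
    M = np.array([[np.sum(xs ** (j + k)) for k in range(m + 1)]
                  for j in range(m + 1)])
    b = np.array([np.sum((xs ** j) * ys) for j in range(m + 1)])
    return np.linalg.solve(M, b)   # a_0, a_1, ..., a_m

# Points lying exactly on y = 1 + 2x + 3x^2 should be recovered.
coeffs = least_squares_poly([0, 1, 2, 3, 4], [1, 6, 17, 34, 57], 2)
print(np.round(coeffs, 6))
```

Note that the first equation reduces to $n a_0 + \cdots$ because $\Sigma x_i^0 = n$, which is exactly the $(0,0)$ entry built by the double loop.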