Three Iterative Methods for Solving the Eigenproblem for Various Classes of Matrices

Nicholas Benthem, Kurt O'Hearn, Connor Scholten

December 9, 2011
1 Introduction
Our paper presents and discusses three iterative techniques for solving the eigenproblem: the Power method, the QR algorithm, and the Jacobi method. Of these methods, the Power method and the QR algorithm can be employed on any square matrix in order to determine its eigen-information. Furthermore, the Jacobi method can be used to determine eigen-information for symmetric matrices. In addition to providing the steps of and some examples using these methods, the paper also discusses indicators of convergence rates and variations upon these three methods.
1.1 Motivation
We first define the meaning of the terms direct method and iterative method. A direct method produces a result in a finite number of prescribed computations. An iterative method produces a sequence of approximations for a result that (hopefully) converges to the true solution. With this distinction between direct and iterative methods in mind, we now consider the following question: why are iterative methods necessary in solving the eigenproblem?

Recall that every n × n matrix has an associated characteristic equation, which is a degree-n polynomial. To determine the eigenvalues of a matrix, we must solve this polynomial. Hence, solving the eigenproblem is equivalent to solving polynomials. Now, also consider the following result established by Niels Henrik Abel in 1823.
Theorem 1 (Abel's Theorem). Let p(x) be a polynomial of degree n with complex coefficients. Then if n ≥ 5, p has no 'solution by radicals'. That is, there exists no direct method to determine the roots of p by means of a finite number of elementary operations.
Thus, Abel proved that there is no direct method for finding the roots of polynomials of degree 5 or higher. This astonishing historical result means we must employ iterative methods to approximate the solutions of polynomials, or equivalently to approximate the eigenvalues of n × n matrices where n ≥ 5.
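The equivalence between the eigenproblem and polynomial root-finding can be checked numerically. The sketch below uses NumPy (our choice for illustration; the paper itself presents no code) to compare the roots of a matrix's characteristic polynomial with its eigenvalues; the matrix is an arbitrary illustrative example.

```python
import numpy as np

# A small matrix whose eigen-information we want (illustrative choice).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Coefficients of the characteristic polynomial det(x*I - A);
# for this A they are [1, -7, 10], i.e. x^2 - 7x + 10.
coeffs = np.poly(A)

# Solving the polynomial gives exactly the eigenvalues of A.
poly_roots = np.sort(np.roots(coeffs))
eigvals = np.sort(np.linalg.eigvals(A))

print(poly_roots)  # [2. 5.]
print(eigvals)     # [2. 5.]
```

Fittingly, NumPy's `roots` itself works in the other direction: it finds polynomial roots by computing the eigenvalues of the companion matrix, underscoring that the two problems are the same.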
1.2 Definitions, Notations, and Remarks Concerning this Paper
For this paper, all matrices will be square with real entries. We will use the following notation:

Notation 1. Let A be an n × n matrix with real entries. The following expressions are equivalent:

1. A is an n × n real matrix
2. A ∈ ℝ^{n×n}
We also note that although we only consider real matrices within this paper, many of the ideas hold with complex matrices. Additionally, the following two definitions will be employed throughout the paper.
Definition 1. Let A ∈ ℝ^{n×n}. We say A is simple when A has n linearly independent eigenvectors. Otherwise, A is said to be defective.
Definition 2. Let A ∈ ℝ^{n×n}. We say A is nonsingular when A is invertible. Otherwise, A is said to be singular.
2 The Power Method
The Power method is used to find the strictly dominant eigenvector of any matrix A ∈ ℝ^{n×n}. By strictly dominant eigenvector v_1, we mean the eigenvector associated with the strictly dominant eigenvalue λ_1 of A. That is,

|λ_1| > |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n|

for the eigenvalues λ_1, λ_2, λ_3, ..., λ_n of A. To avoid complicating the discussion about the Power method, we require that A be simple. In the following subsections, we present the steps of the Power method, discuss why the Power method converges, present and discuss an indicator of the rate of convergence, and consider variations upon the Power method.
2.1 Steps of the Power Method
As mentioned, let A ∈ ℝ^{n×n} be a simple matrix, and let v_1 be the strictly dominant eigenvector associated with A. The following are the steps in the Power method.
Method 1 (The Power Method).

1. Choose an initial x_0 in ℝ^n
2. For k = 0, 1, ...:
   (a) Compute A x_k
   (b) Let μ_k be the largest entry in absolute value of A x_k
   (c) Compute x_{k+1} = (1/μ_k) A x_k

Then the sequence

{x_0, x_1 = A x_0, x_2 = A x_1, x_3 = A x_2, ..., x_m = A x_{m-1}, ...}

generated by the Power method converges to the dominant eigenvector of A, and the sequence

{μ_0, μ_1, μ_2, μ_3, ..., μ_m, ...}

converges to the dominant eigenvalue of A.
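The steps of Method 1 translate directly into code. The following sketch (NumPy-based, an illustrative rendering rather than the paper's own implementation) scales by the entry of largest absolute value at each step, exactly as in step (b); the example matrix is an assumption chosen so that the dominant eigenvalue is 5 with eigenvector [1, 1].

```python
import numpy as np

def power_method(A, x0, num_iters=100):
    """Power method: approximate the dominant eigenvalue mu and
    eigenvector x of A, scaling each iterate by its entry of
    largest absolute value."""
    x = np.asarray(x0, dtype=float)
    mu = 0.0
    for _ in range(num_iters):
        y = A @ x                        # step (a): compute A x_k
        mu = y[np.argmax(np.abs(y))]     # step (b): largest entry in absolute value
        x = y / mu                       # step (c): x_{k+1} = (1/mu_k) A x_k
    return mu, x

# Example: eigenvalues of this A are 5 and 2; dominant eigenvector is [1, 1].
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
mu, x = power_method(A, x0=[1.0, 0.0])
print(mu)  # ≈ 5.0
print(x)   # ≈ [1. 1.]
```

Because each iterate is rescaled so its largest entry is 1, the iterates stay bounded and the scale factors μ_k themselves converge to λ_1.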
2.2 Proof of the Convergence of the Power Method
To show why the sequence of approximations converges to the dominant eigenvector, we will first analyze a representation of our initial vector x_0 in another basis. Then we will consider some consequences of this representation, particularly as we progress far in the sequence. Recall that, since A is simple, the eigenvectors v_1, v_2, ..., v_n of A are linearly independent and hence form a basis for ℝ^n. In light of this fact, we can take an arbitrary q in ℝ^n and write this vector as a linear combination of these basis elements. That is, there exist real numbers c_1, c_2, ..., c_n such that

q = c_1 v_1 + c_2 v_2 + ... + c_n v_n.    (1)

Additionally, we stipulate that for the eigenvector representation of our arbitrary q in (1), c_1 is non-zero. That is, we assert that the component of q in the direction of the dominant eigenvector v_1 of A is nonzero. Now, observe that if we multiply both sides of (1) on the left by A, we obtain
A q = c_1 A v_1 + c_2 A v_2 + ... + c_n A v_n
    = c_1 λ_1 v_1 + c_2 λ_2 v_2 + ... + c_n λ_n v_n.    (2)

We note that the last line in the above equation is a consequence of v_1, v_2, ..., v_n being eigenvectors of A. Now, again left-multiplying both sides of (2) by A, we obtain
A^2 q = c_1 λ_1^2 v_1 + c_2 λ_2^2 v_2 + ... + c_n λ_n^2 v_n.
Hence, if we continue this left-hand multiplication by A, we have for any positive integer j,

A^j q = c_1 λ_1^j v_1 + c_2 λ_2^j v_2 + ... + c_n λ_n^j v_n.    (3)

Now, since |λ_1| > 0 because λ_1 is the dominant eigenvalue, we can conclude from (3) that
(1/λ_1^j) A^j q = c_1 v_1 + c_2 (λ_2^j / λ_1^j) v_2 + ... + c_n (λ_n^j / λ_1^j) v_n.    (4)

And again since |λ_1| > |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n|, we have
|λ_2^j / λ_1^j|, ..., |λ_n^j / λ_1^j| < 1.    (5)

Thus, as j → ∞, we can observe from (5) that

λ_2^j / λ_1^j → 0, ..., λ_n^j / λ_1^j → 0.
Thus, we see from the above equation that in (4), (1/λ_1^j) A^j q converges to a scalar multiple of the dominant eigenvector v_1 of A. With this result, we now consider the sequence of approximations generated from the Power method in the following form:

{x_0, x_1 = A x_0, x_2 = A^2 x_0, x_3 = A^3 x_0, ..., x_m = A^m x_0, ...}.

Hence, if we represent each of the above terms using our previously established eigenvector basis in (1), we now know from (3) that this sequence does indeed converge to a scalar multiple of the dominant eigenvector v_1, and the sequence {μ_0, μ_1, μ_2, ...} must indeed converge to the dominant eigenvalue λ_1.
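The decay of the ratio terms in (4) can be observed numerically. The sketch below (NumPy, with an illustrative 2×2 matrix and starting vector of our own choosing) confirms that (1/λ_1^j) A^j q approaches c_1 v_1, and that the error shrinks by a factor of |λ_2/λ_1| each step, as the (λ_2/λ_1)^j term predicts.

```python
import numpy as np

# Illustrative matrix: eigenvalues 5 and 2, with eigenvectors
# v1 = [1, 1] and v2 = [1, -2].
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam1 = 5.0

# For q = [1, 0] = (2/3) v1 + (1/3) v2, the limit in (4) is c1 v1 = (2/3)[1, 1].
q = np.array([1.0, 0.0])
limit = (2.0 / 3.0) * np.array([1.0, 1.0])

errs = []
x = q.copy()
for j in range(1, 9):
    x = A @ x                                      # x = A^j q
    errs.append(np.linalg.norm(x / lam1**j - limit))

# Successive error ratios approach |lambda_2 / lambda_1| = 2/5 = 0.4.
ratios = [errs[j + 1] / errs[j] for j in range(len(errs) - 1)]
print(ratios)  # each ≈ 0.4
```

This ratio |λ_2/λ_1| is exactly the convergence-rate indicator discussed later: the closer the second eigenvalue is to the dominant one, the slower the method converges.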
2.3 Discussion of the Convergence of the Power Method
Since we have now established that the sequences generated by the method do indeed converge, we now consider how these sequences behave as they converge. Observe that if |λ_1| = 1, then

(1/λ_1^j) A^j x_0 = c_1 v_1 + c_2 (λ_2^j / λ_1^j) v_2 + ... + c_n (λ_n^j / λ_1^j) v_n