Linalg 3: Eigenvalues and Eigenvectors
Contents
1 Eigenvalues and Eigenvectors
1.1 Eigenvalues, Eigenvectors, and Eigenspaces
1.2 Algebraic and Geometric Multiplicity
1.3 Similar Matrices and Diagonalization
1.4 How to Diagonalize a Matrix
1 Eigenvalues and Eigenvectors

We have discussed (perhaps excessively) the correspondence between solving a system of homogeneous linear equations and solving the matrix equation A~x = ~0, for A an n × n matrix and ~x and ~0 each n × 1 column vectors. For reasons that will become more apparent soon, a more general version of this question which is also of interest is to solve the matrix equation A~x = λ~x, where λ is a scalar. (The original problem corresponds to λ = 0.) In the language of linear transformations, this says the following: given a linear transformation T : V → V from a vector space V to itself, on which vectors ~x does T act as multiplication by a scalar λ?

1.1 Eigenvalues, Eigenvectors, and Eigenspaces
Definition: For A an n × n matrix, a nonzero vector ~x with A~x = λ~x is called an eigenvector of A, and the corresponding scalar λ is called an eigenvalue of A. Note that we do not consider the zero vector ~0 an eigenvector.
It turns out that it is fairly straightforward to find all of the eigenvalues: because A~x = λ~x is equivalent to (λI − A)~x = ~0, there is a nonzero vector ~x with A~x = λ~x precisely when det(λI − A) = 0. This motivates the following definition:

Definition: The characteristic polynomial p(t) of the matrix A is the polynomial p(t) = det(tI − A), in the variable t. For A an n × n matrix, p(t) has degree n, and the eigenvalues of A are precisely the roots of the characteristic polynomial of A.
Notation 1: Some authors instead define the characteristic polynomial as the determinant of the matrix A − tI rather than tI − A. We use tI − A because the coefficient of t^n is then always 1, rather than (−1)^n.
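This convention can be checked numerically. The sketch below assumes Python with numpy (as do the other numerical asides interspersed in these notes); numpy's np.poly returns the coefficients of det(tI − A) for a square array, so the leading coefficient is always 1. The matrix used here reappears in an example further on.

```python
import numpy as np

# Coefficients of the characteristic polynomial det(tI - A), highest degree
# first: for this matrix, t^2 - 3t - 4.
A = np.array([[2.0, 2.0],
              [3.0, 1.0]])
coeffs = np.poly(A)
print(coeffs)            # [ 1. -3. -4.]
print(np.roots(coeffs))  # the eigenvalues: 4 and -1
```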
Notation 2: It is often customary, when referring to the eigenvalues of a matrix, to include an eigenvalue the appropriate number of extra times if the eigenvalue is a multiple root of the characteristic polynomial. Thus, for the characteristic polynomial t^2 (t − 1)^3, we would say the eigenvalues are λ = 0, 0, 1, 1, 1, if we wished to emphasize the multiplicities.
Remark: The characteristic polynomial may have non-real numbers as roots. Non-real eigenvalues are absolutely acceptable; the only wrinkle is that the eigenvectors for these eigenvalues will also necessarily contain non-real entries. (If A has real number entries, then any non-real roots of the characteristic polynomial will come in complex conjugate pairs, and the eigenvectors for one root will be the complex conjugates of the eigenvectors for the other root.)
Proposition: If A is an upper-triangular matrix, then the eigenvalues of A are precisely its diagonal entries. This statement follows from the observation that the determinant of an upper-triangular matrix is the product of the diagonal entries, combined with the observation that if A is upper-triangular, then tI − A is also upper-triangular. (If diagonal entries are repeated, the eigenvalues are repeated the same number of times.)
Example: The eigenvalues of \begin{pmatrix} 1 & i & \sqrt{3} \\ 0 & 3 & 8 \\ 0 & 0 & \pi \end{pmatrix} are 1, 3, and π.

Example: The eigenvalues of \begin{pmatrix} 2 & 1 & 2 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} are 2, 2, and 3.
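The same fact is easy to observe numerically by comparing the computed eigenvalues with the diagonal (a sketch assuming numpy, as above):

```python
import numpy as np

# For a triangular matrix, the eigenvalues are the diagonal entries,
# repeated as often as they occur on the diagonal.
A = np.array([[2.0, 1.0, 2.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
print(np.linalg.eigvals(A))  # [2. 2. 3.] (possibly in another order)
print(np.diag(A))            # [2. 2. 3.]
```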
To find the eigenvalues of a matrix A: compute tI − A, take the determinant to obtain the characteristic polynomial p(t), set p(t) equal to zero, and solve. The roots are precisely the eigenvalues of A.

To find the eigenvectors corresponding to an eigenvalue λ of A: solve for the vectors ~x satisfying A~x = λ~x. The set of solutions ~x forms a subspace, the eigenspace associated to λ, and the nonzero vectors in the space are the eigenvectors corresponding to λ.
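Both steps can be carried out at once on a computer. Here is a minimal sketch assuming numpy, whose eig routine returns the eigenvalues together with one eigenvector per eigenvalue, stored as the columns of a matrix:

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [3.0, 1.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)         # [ 4. -1.] (order may vary)
print(eigenvectors[:, 0])  # eigenvector for 4, proportional to (1, 1)
print(eigenvectors[:, 1])  # eigenvector for -1, proportional to (-2/3, 1)
```

(Note that numpy scales each eigenvector to length 1, whereas the hand computations below scale for convenience; any nonzero multiple of an eigenvector is an equally good basis vector.)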
Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Step 1: We have tI − A = \begin{pmatrix} t-1 & 0 \\ 0 & t-1 \end{pmatrix}, so p(t) = (t−1)^2. The eigenvalues are therefore λ = 1, 1. (Alternatively, we could have used the fact that the matrix is upper-triangular.)

Step 2: For λ = 1 we want \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}, which holds for every a and b. So a basis for the eigenspace with λ = 1 is given by \begin{pmatrix} 1 \\ 0 \end{pmatrix} and \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.

Step 1: We have tI − A = \begin{pmatrix} t-1 & -1 \\ 0 & t-1 \end{pmatrix}, so p(t) = (t−1)^2. The eigenvalues are therefore λ = 1, 1. (Alternatively, we could have used the fact that the matrix is upper-triangular.)

Step 2: For λ = 1 we want \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}, so \begin{pmatrix} a+b \\ b \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}. This requires b = 0, and so a basis for the eigenspace with λ = 1 is given by \begin{pmatrix} 1 \\ 0 \end{pmatrix}.

Remark: Observe that \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} and the identity matrix \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} have the same characteristic polynomial and eigenvalues, but do not have the same eigenvectors. In fact, for λ = 1, the eigenspace for \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} is 1-dimensional, while the eigenspace for \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} is 2-dimensional.
Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix A = \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}.

Step 1: We have tI − A = \begin{pmatrix} t-2 & -2 \\ -3 & t-1 \end{pmatrix}, so p(t) = (t−2)(t−1) − 6 = t^2 − 3t − 4.

Step 2: Since p(t) = t^2 − 3t − 4 = (t − 4)(t + 1), the eigenvalues are λ = −1, 4.

Step 3: For λ = −1 we want \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = -\begin{pmatrix} a \\ b \end{pmatrix}, so we need \begin{pmatrix} 2a+2b \\ 3a+b \end{pmatrix} = \begin{pmatrix} -a \\ -b \end{pmatrix}, which reduces to a = -\frac{2}{3} b. So the vectors we want are those of the form \begin{pmatrix} -\frac{2}{3} b \\ b \end{pmatrix}, so a basis is given by \begin{pmatrix} -2/3 \\ 1 \end{pmatrix}.

For λ = 4 we want \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = 4 \begin{pmatrix} a \\ b \end{pmatrix}, so we need \begin{pmatrix} 2a+2b \\ 3a+b \end{pmatrix} = \begin{pmatrix} 4a \\ 4b \end{pmatrix}, which reduces to a = b. So the vectors we want are those of the form \begin{pmatrix} b \\ b \end{pmatrix}, so a basis is given by \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix A = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}.

Step 1: We have tI − A = \begin{pmatrix} t & 0 & 0 \\ -1 & t & 1 \\ 0 & -1 & t \end{pmatrix}, so p(t) = det(tI − A) = t(t^2 + 1).

Step 2: Since p(t) = t(t^2 + 1), the eigenvalues are λ = 0, i, −i.

Step 3: For λ = 0 we want \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = 0 \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so we need \begin{pmatrix} 0 \\ a-c \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, so a = c and b = 0. So the vectors we want are those of the form \begin{pmatrix} a \\ 0 \\ a \end{pmatrix}, so a basis is given by \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.

For λ = i we want \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = i \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so we need \begin{pmatrix} 0 \\ a-c \\ b \end{pmatrix} = \begin{pmatrix} ia \\ ib \\ ic \end{pmatrix}, so a = 0 and b = ic. So the vectors we want are those of the form \begin{pmatrix} 0 \\ ic \\ c \end{pmatrix}, so a basis is given by \begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix}.

For λ = −i we want \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = -i \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so we need \begin{pmatrix} 0 \\ a-c \\ b \end{pmatrix} = \begin{pmatrix} -ia \\ -ib \\ -ic \end{pmatrix}, so a = 0 and b = −ic. So the vectors we want are those of the form \begin{pmatrix} 0 \\ -ic \\ c \end{pmatrix}, so a basis is given by \begin{pmatrix} 0 \\ -i \\ 1 \end{pmatrix}.
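As remarked earlier, the non-real eigenvalues of this real matrix come in a conjugate pair; a quick numerical check (a sketch assuming numpy):

```python
import numpy as np

# A real matrix whose characteristic polynomial is t(t^2 + 1): the
# eigenvalues 0, i, -i include a complex-conjugate pair.
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
vals, vecs = np.linalg.eig(A)
print(vals)  # approximately [0, 1j, -1j], in some order
```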
Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix A = \begin{pmatrix} 1 & 0 & 1 \\ -1 & 1 & 3 \\ -1 & 0 & 3 \end{pmatrix}.

Step 1: We have tI − A = \begin{pmatrix} t-1 & 0 & -1 \\ 1 & t-1 & -3 \\ 1 & 0 & t-3 \end{pmatrix}, so expanding along the top row gives p(t) = (t-1) \det\begin{pmatrix} t-1 & -3 \\ 0 & t-3 \end{pmatrix} + (-1) \det\begin{pmatrix} 1 & t-1 \\ 1 & 0 \end{pmatrix} = (t-1)^2 (t-3) + (t-1) = (t-1)(t-2)^2.

Step 2: Since p(t) = (t−1)(t−2)^2, the eigenvalues are λ = 1, 2, 2.

Step 3: For λ = 1 we want \begin{pmatrix} 1 & 0 & 1 \\ -1 & 1 & 3 \\ -1 & 0 & 3 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so we need \begin{pmatrix} a+c \\ -a+b+3c \\ -a+3c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so c = 0 and a = 0. So the vectors we want are those of the form \begin{pmatrix} 0 \\ b \\ 0 \end{pmatrix}, so a basis is given by \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}.

For λ = 2 we want \begin{pmatrix} 1 & 0 & 1 \\ -1 & 1 & 3 \\ -1 & 0 & 3 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = 2 \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so we need \begin{pmatrix} a+c \\ -a+b+3c \\ -a+3c \end{pmatrix} = \begin{pmatrix} 2a \\ 2b \\ 2c \end{pmatrix}, so a = c and b = 2c. So the vectors we want are those of the form \begin{pmatrix} c \\ 2c \\ c \end{pmatrix}, so a basis is given by \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}.
1.2 Algebraic and Geometric Multiplicity

Theorem: If λ is an eigenvalue of the matrix A which appears exactly k times as a root of the characteristic polynomial, then the dimension of the eigenspace corresponding to λ is at least 1 and at most k.

Remark: The number of times λ appears as a root of the characteristic polynomial is called the algebraic multiplicity of λ, and the dimension of the eigenspace corresponding to λ is called its geometric multiplicity. So what the theorem says is that the geometric multiplicity is at most the algebraic multiplicity.

Example: If the characteristic polynomial of a matrix is (t − 1)^3 (t − 3)^2, then the eigenspace for λ = 1 is at most 3-dimensional, and the eigenspace for λ = 3 is at most 2-dimensional.
Proof: The statement that the eigenspace has dimension at least 1 is immediate, because (by assumption) λ is a root of the characteristic polynomial and therefore has at least one nonzero eigenvector associated to it. The eigenvectors for λ are the nonzero vectors ~x with (λI − A)~x = ~0. If λ appears k times as a root of the characteristic polynomial, then when we put the matrix λI − A into its reduced row-echelon form B, then B must have at most k rows of all zeroes. Otherwise, the matrix B (and hence λI − A too, although this requires a check) would have 0 as an eigenvalue more than k times, because B is in echelon form and therefore upper-triangular. But the number of rows of all zeroes in a square matrix is the same as the number of nonpivotal columns, which is the number of free variables, which is the dimension of the solution space. So, putting all the statements together, we see that the dimension of the eigenspace is at most k.
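The two multiplicities are easy to compare numerically. A sketch (assuming numpy) for the matrix \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} from the earlier example, whose eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity 1:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
n = A.shape[0]

# Geometric multiplicity: dimension of the null space of (lambda*I - A).
geometric = n - np.linalg.matrix_rank(lam * np.eye(n) - A)
# Algebraic multiplicity: how many computed eigenvalues equal lambda.
algebraic = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))
print(geometric, algebraic)  # 1 2
```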
Corollary: If A is an n × n matrix with n distinct eigenvalues λ1, ..., λn, and ~v1, ..., ~vn are (any) eigenvectors corresponding to those eigenvalues, then ~v1, ..., ~vn form a basis for the n-dimensional vector space R^n.

Theorem: The product of the eigenvalues of A (counted with multiplicity) is the determinant of A, and their sum is the trace of A. (Note: The trace of a matrix is defined to be the sum of its diagonal entries.)

1.3 Similar Matrices and Diagonalization

Definition: Two n × n matrices A and B are similar (or conjugate) if there exists an invertible matrix P such that B = P^{-1}AP.
Example: With A = \begin{pmatrix} -2 & -6 \\ 3 & 7 \end{pmatrix} and P = \begin{pmatrix} 1 & 2 \\ -1 & -1 \end{pmatrix}, we have P^{-1} = \begin{pmatrix} -1 & -2 \\ 1 & 1 \end{pmatrix}, so that P^{-1}AP = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}: thus A is similar to the diagonal matrix B = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}.

Remark: The matrix Q = \begin{pmatrix} -1 & -2 \\ 1 & 1 \end{pmatrix} also has B = Q^{-1}AQ. In general, if two matrices A and B are similar, then there can be many different matrices Q with Q^{-1}AQ = B.
Similar matrices have quite a few useful algebraic properties (which justify the name similar). If B = P^{-1}AP and D = P^{-1}CP, then:

Sums are respected: B + D = P^{-1}AP + P^{-1}CP = P^{-1}(A + C)P.

Products are respected: BD = (P^{-1}AP)(P^{-1}CP) = P^{-1}(AC)P.

Inverses are respected: if A^{-1} exists, then B^{-1} exists, and B^{-1} = P^{-1}A^{-1}P.

Determinants are equal: det(B) = det(P^{-1}AP) = det(P^{-1}) det(A) det(P) = det(A).

The conjugate has the same characteristic polynomial as the original matrix: det(tI − B) = det(P^{-1}(tI − A)P) = det(tI − A). In particular, a matrix and its conjugate have the same eigenvalues (with the same multiplicities). Also, by using the fact that the trace is equal both to the sum of the diagonal elements and to a coefficient in the characteristic polynomial, we see that a matrix and its conjugate have the same trace.

If ~y is an eigenvector of B, then P~y is an eigenvector of A (with the same eigenvalue).
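These invariants are easy to confirm numerically for the example above (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[-2.0, -6.0],
              [3.0, 7.0]])
P = np.array([[1.0, 2.0],
              [-1.0, -1.0]])
B = np.linalg.inv(P) @ A @ P   # a matrix similar to A

print(np.trace(A), np.trace(B))            # both 5.0
print(np.linalg.det(A), np.linalg.det(B))  # both 4.0 (up to rounding)
print(np.sort(np.linalg.eigvals(A)))       # [1. 4.]
print(np.sort(np.linalg.eigvals(B)))       # [1. 4.]
```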
Question: Given a matrix A, what is the simplest matrix that A is similar to? If A has eigenvalues λ1, ..., λn, the simplest form we could plausibly hope for would be a diagonal matrix \begin{pmatrix} λ_1 & & \\ & \ddots & \\ & & λ_n \end{pmatrix} whose diagonal elements are the eigenvalues of A.

Definition: A matrix A is diagonalizable if it is similar to a diagonal matrix.
If we know that A is diagonalizable, with D = P^{-1}AP, then we can easily compute powers of A. Since D is diagonal, it is easy to compute powers of D: D^k is simply the diagonal matrix whose entries are the k-th powers of the entries of D. Then D^k = (P^{-1}AP)^k = P^{-1}(A^k)P, so A^k = P D^k P^{-1}.
Example: With A = \begin{pmatrix} -2 & -6 \\ 3 & 7 \end{pmatrix} as above, we have D = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix} and P = \begin{pmatrix} 1 & 2 \\ -1 & -1 \end{pmatrix}. Then D^k = \begin{pmatrix} 4^k & 0 \\ 0 & 1 \end{pmatrix}, so that the k-th powers of A are given by A^k = P D^k P^{-1} = \begin{pmatrix} 2 - 4^k & 2 - 2 \cdot 4^k \\ -1 + 4^k & -1 + 2 \cdot 4^k \end{pmatrix}.

Observation: This formula also makes sense for values of k that are not positive integers. For example, if k = −1 we get the matrix \begin{pmatrix} 7/4 & 3/2 \\ -3/4 & -1/2 \end{pmatrix}, which is actually the inverse matrix A^{-1}. And if we set k = 1/2 we get the matrix B = \begin{pmatrix} 0 & -2 \\ 1 & 3 \end{pmatrix}, whose square satisfies B^2 = \begin{pmatrix} -2 & -6 \\ 3 & 7 \end{pmatrix} = A.
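Here is a short numerical sketch of the power formula (assuming numpy), including the fractional power k = 1/2:

```python
import numpy as np

A = np.array([[-2.0, -6.0],
              [3.0, 7.0]])
P = np.array([[1.0, 2.0],
              [-1.0, -1.0]])
D = np.diag([4.0, 1.0])
P_inv = np.linalg.inv(P)

def A_power(k):
    # A^k = P D^k P^{-1}; powers of a diagonal matrix are entrywise.
    return P @ np.diag(np.diag(D) ** k) @ P_inv

print(A_power(3) - A @ A @ A)           # zero matrix (up to rounding)
print(A_power(0.5) @ A_power(0.5) - A)  # zero matrix: a square root of A
```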
Theorem: An n × n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. In particular, a matrix with n distinct eigenvalues is diagonalizable.

Proof: If A has n linearly independent eigenvectors ~v1, ..., ~vn with respective eigenvalues λ1, ..., λn, let P = (~v1 | ⋯ | ~vn) be the matrix whose columns are those eigenvectors, and let D be the diagonal matrix with diagonal entries λ1, ..., λn. Because ~v1, ..., ~vn are eigenvectors, we have AP = (A~v1 | ⋯ | A~vn) = (λ1~v1 | ⋯ | λn~vn). But we also have (λ1~v1 | ⋯ | λn~vn) = (~v1 | ⋯ | ~vn) D = PD. Therefore AP = PD; since the columns of P are linearly independent, P is invertible, and D = P^{-1}AP, so A is diagonalizable.

Conversely, if P = (~v1 | ⋯ | ~vn) is invertible and D = P^{-1}AP is diagonal with diagonal entries λ1, ..., λn, then AP = PD says (A~v1 | ⋯ | A~vn) = (λ1~v1 | ⋯ | λn~vn), which (by comparing columns) says that A~v1 = λ1~v1, ..., A~vn = λn~vn. Thus the columns ~v1, ..., ~vn of P are eigenvectors, and (because P is invertible) they are linearly independent.

Finally, the last statement in the theorem follows because (as shown earlier) a matrix with n distinct eigenvalues has n linearly independent eigenvectors.
Advanced Remark: As the theorem demonstrates, if we are trying to diagonalize a matrix, we can run into trouble if the matrix has repeated eigenvalues. However, we might still like to know what the simplest form a non-diagonalizable matrix is similar to. The answer is given by what is called the Jordan Canonical Form (of a matrix): every matrix is similar to a block-diagonal matrix \begin{pmatrix} J_1 & & \\ & J_2 & \\ & & \ddots \end{pmatrix}, where each of the blocks J_1, ..., J_n is a square Jordan block matrix of the form J = \begin{pmatrix} λ & 1 & & \\ & λ & \ddots & \\ & & \ddots & 1 \\ & & & λ \end{pmatrix}, with an eigenvalue λ on the diagonal and 1s directly above the diagonal.

Example: The non-diagonalizable matrix \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} is in Jordan Canonical Form, with J_1 = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} and J_2 = (3).

The existence and uniqueness of the Jordan Canonical Form can be proven using a careful analysis of generalized eigenvectors: vectors ~x satisfying (λI − A)^k ~x = ~0 for some positive integer k. (Regular eigenvectors correspond to k = 1.) Roughly speaking, the idea is to use certain carefully-chosen generalized eigenvectors to fill in for the missing eigenvectors; doing this causes the appearance of the extra 1s above the diagonal in the Jordan blocks.
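For those experimenting on a computer, exact Jordan forms can be computed symbolically. A sketch assuming Python with sympy (floating-point libraries such as numpy cannot compute Jordan forms reliably, since rounding destroys the block structure):

```python
import sympy

# The example above is already in Jordan Canonical Form, so jordan_form()
# returns the same block structure (up to the ordering of the blocks).
A = sympy.Matrix([[2, 1, 0],
                  [0, 2, 0],
                  [0, 0, 3]])
P, J = A.jordan_form()  # A = P * J * P**-1
print(J)  # Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])
```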
Theorem (Cayley-Hamilton): If p(x) is the characteristic polynomial of the matrix A, then p(A) is the zero matrix. (In applying a polynomial to a matrix, we replace the constant term with that constant times the identity matrix; thus for p(x) = x^2 − 3x − 4 we take p(A) = A^2 − 3A − 4I.)
Example: For A = \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}, we have p(t) = det(tI − A) = \det\begin{pmatrix} t-2 & -2 \\ -3 & t-1 \end{pmatrix} = (t−1)(t−2) − 6 = t^2 − 3t − 4. We can compute A^2 = \begin{pmatrix} 10 & 6 \\ 9 & 7 \end{pmatrix}, and then indeed we have A^2 − 3A − 4I = \begin{pmatrix} 10 & 6 \\ 9 & 7 \end{pmatrix} − \begin{pmatrix} 6 & 6 \\ 9 & 3 \end{pmatrix} − \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
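The same check takes one line numerically (a sketch assuming numpy):

```python
import numpy as np

# p(A) = A^2 - 3A - 4I should be the zero matrix, by Cayley-Hamilton.
A = np.array([[2.0, 2.0],
              [3.0, 1.0]])
print(A @ A - 3 * A - 4 * np.eye(2))  # [[0. 0.] [0. 0.]]
```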
Proof (in the case where A is diagonalizable): If A is diagonalizable, then D = P^{-1}AP for some invertible P, with D diagonal, and let p(x) be the characteristic polynomial of A. The diagonal entries of D are the eigenvalues λ1, ..., λn of A, which are roots of the characteristic polynomial of A, so that p(D) = p\begin{pmatrix} λ_1 & & \\ & \ddots & \\ & & λ_n \end{pmatrix} = \begin{pmatrix} p(λ_1) & & \\ & \ddots & \\ & & p(λ_n) \end{pmatrix} = 0. Now by conjugating each term and adding the results, we see that 0 = p(D) = p(P^{-1}AP) = P^{-1} [p(A)] P. So by conjugating back, we see that p(A) = P 0 P^{-1} = 0. (For the general case, one can run the same argument with the Jordan Canonical Form J of A in place of D; the additional step is to check that p(J) = 0 for each Jordan block.)
1.4 How to Diagonalize a Matrix

To diagonalize a matrix A (that is, to find a diagonal matrix D and an invertible matrix P with D = P^{-1}AP, or equivalently A = P D P^{-1}), follow these steps:

Step 1: Compute the characteristic polynomial and eigenvalues of A.

Step 2: Find a basis for each eigenspace of A.

Step 3: If each eigenspace has the proper dimension (namely, the number of times the corresponding eigenvalue appears as a root of the characteristic polynomial), then the matrix is diagonalizable: D is the diagonal matrix whose entries are the eigenvalues (repeated with multiplicity), and P is the matrix whose columns are the corresponding basis eigenvectors. Otherwise, the matrix is not diagonalizable.
Example: For A = \begin{pmatrix} 0 & -2 \\ 3 & 5 \end{pmatrix}, determine whether there exists a diagonal matrix D and an invertible matrix P with D = P^{-1}AP, and if so, find them.

Step 1: We have tI − A = \begin{pmatrix} t & 2 \\ -3 & t-5 \end{pmatrix}, so det(tI − A) = t(t−5) + 6 = (t−2)(t−3). The eigenvalues are therefore λ = 2, 3.

Step 2: For λ = 2 we need to solve \begin{pmatrix} 0 & -2 \\ 3 & 5 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = 2 \begin{pmatrix} a \\ b \end{pmatrix}, so \begin{pmatrix} -2b \\ 3a+5b \end{pmatrix} = \begin{pmatrix} 2a \\ 2b \end{pmatrix} and thus a = −b. The eigenvectors are of the form \begin{pmatrix} -b \\ b \end{pmatrix}, so a basis for the λ = 2 eigenspace is \begin{pmatrix} -1 \\ 1 \end{pmatrix}.

For λ = 3 we need to solve \begin{pmatrix} 0 & -2 \\ 3 & 5 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = 3 \begin{pmatrix} a \\ b \end{pmatrix}, so \begin{pmatrix} -2b \\ 3a+5b \end{pmatrix} = \begin{pmatrix} 3a \\ 3b \end{pmatrix} and thus a = -\frac{2}{3} b. The eigenvectors are of the form \begin{pmatrix} -\frac{2}{3} b \\ b \end{pmatrix}, so a basis for the λ = 3 eigenspace is \begin{pmatrix} -2/3 \\ 1 \end{pmatrix}.

Step 3: Since the eigenvalues are distinct we know that A is diagonalizable, and D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}. We have two linearly independent eigenvectors, and so we can take P = \begin{pmatrix} -1 & -2/3 \\ 1 & 1 \end{pmatrix}.

To check: we have P^{-1} = \begin{pmatrix} -3 & -2 \\ 3 & 3 \end{pmatrix}, so P^{-1}AP = \begin{pmatrix} -3 & -2 \\ 3 & 3 \end{pmatrix} \begin{pmatrix} 0 & -2 \\ 3 & 5 \end{pmatrix} \begin{pmatrix} -1 & -2/3 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} = D.

Remark: We could instead have taken D = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix} (by reversing the order of the eigenvectors in P) if we wanted. There is no particular reason to care much about which diagonal matrix we get, as long as we make sure to arrange the eigenvectors in the correct order.
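A quick numerical check of this example (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[0.0, -2.0],
              [3.0, 5.0]])
P = np.array([[-1.0, -2.0 / 3.0],
              [1.0, 1.0]])
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # [[2. 0.] [0. 3.]]
```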
Example: For A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}, determine whether there exists a diagonal matrix D and an invertible matrix P with D = P^{-1}AP, and if so, find them.

Step 1: We have tI − A = \begin{pmatrix} t-1 & -1 & 0 \\ 0 & t-2 & 0 \\ 0 & -2 & t-1 \end{pmatrix}, so det(tI − A) = (t-1) \det\begin{pmatrix} t-2 & 0 \\ -2 & t-1 \end{pmatrix} = (t−1)^2 (t−2). The eigenvalues are therefore λ = 1, 1, 2.

Step 2: For λ = 1 we need to solve \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so \begin{pmatrix} a+b \\ 2b \\ 2b+c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so 2b = b and thus b = 0. The eigenvectors are of the form \begin{pmatrix} a \\ 0 \\ c \end{pmatrix}, so a basis for the λ = 1 eigenspace is \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.

For λ = 2 we need to solve \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = 2 \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so \begin{pmatrix} a+b \\ 2b \\ 2b+c \end{pmatrix} = \begin{pmatrix} 2a \\ 2b \\ 2c \end{pmatrix} and thus a = b and c = 2b. The eigenvectors are of the form \begin{pmatrix} b \\ b \\ 2b \end{pmatrix}, so a basis for the λ = 2 eigenspace is \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}.

Step 3: Since the eigenspace for λ = 1 is 2-dimensional, the matrix A is diagonalizable, and D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}. We have three linearly independent eigenvectors, so we can take P = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix}.

To check: we have P^{-1} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & -2 & 1 \\ 0 & 1 & 0 \end{pmatrix}, so P^{-1}AP = \begin{pmatrix} 1 & -1 & 0 \\ 0 & -2 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} = D.
Example: For A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}, determine whether there exists a diagonal matrix D and an invertible matrix P with D = P^{-1}AP, and if so, find them.

Step 1: We have tI − A = \begin{pmatrix} t-1 & -1 & -1 \\ 0 & t-1 & -1 \\ 0 & 0 & t-1 \end{pmatrix}, so det(tI − A) = (t−1)^3 since tI − A is upper-triangular. The eigenvalues are therefore λ = 1, 1, 1.

Step 2: For λ = 1 we need to solve \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so \begin{pmatrix} a+b+c \\ b+c \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so b + c = b and thus c = b = 0. The eigenvectors are of the form \begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}, so a basis for the λ = 1 eigenspace is \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.

Step 3: Since λ = 1 appears three times as a root of the characteristic polynomial but its eigenspace is only 1-dimensional, the matrix A is not diagonalizable.
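The failure can be detected numerically as well (a sketch assuming numpy), using the multiplicity comparison from earlier: the eigenvalue 1 has algebraic multiplicity 3 but geometric multiplicity only 1.

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
n = A.shape[0]

# Geometric multiplicity of lambda = 1: dimension of the null space of I - A.
geometric = n - np.linalg.matrix_rank(np.eye(n) - A)
print(geometric)  # 1 < 3, so A is not diagonalizable
```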