
Linear Algebra (part 3): Eigenvalues and Eigenvectors

(by Evan Dummit, 2012, v. 1.00)

Contents

1 Eigenvalues and Eigenvectors
  1.1 The Basic Setup
  1.2 Some Slightly More Advanced Results About Eigenvalues
  1.3 Theory of Similarity and Diagonalization
  1.4 How To Diagonalize A Matrix (if possible)

1 Eigenvalues and Eigenvectors

We have discussed (perhaps excessively) the correspondence between solving a system of homogeneous linear equations and solving the matrix equation A·~x = ~0, for A an n × n matrix and ~x and ~0 each n × 1 column vectors.

For reasons that will become more apparent soon, a more general version of this question which is also of interest is to solve the matrix equation A·~x = λ~x, where λ is a scalar. (The original homogeneous system problem corresponds to λ = 0.)

In the language of linear transformations, this says the following: given a linear transformation T : V → V from a vector space V to itself, on what vectors ~x does T act as multiplication by a constant λ?

1.1 The Basic Setup


Definition: For A an n × n matrix, a nonzero vector ~x with A·~x = λ~x is called an eigenvector of A, and the corresponding scalar λ is called an eigenvalue of A.

Important note: We do not consider the zero vector ~0 an eigenvector.

For a fixed value of λ, the set S whose elements are the eigenvectors ~x with A·~x = λ~x, together with the zero vector, is a subspace of V. (This set S is called the eigenspace associated to the eigenvalue λ.)

[S1]: S contains the zero vector.

[S2]: S is closed under addition, because if A·~x1 = λ~x1 and A·~x2 = λ~x2, then A·(~x1 + ~x2) = λ(~x1 + ~x2).

[S3]: S is closed under scalar multiplication, because for any scalar α, A·(α~x) = α(A·~x) = α(λ~x) = λ(α~x).

It turns out that it is fairly straightforward to find all of the eigenvalues: because λ~x = (λI)·~x, where I is the identity matrix, we can rewrite the eigenvalue equation A·~x = λ~x = (λI)·~x as (λI - A)·~x = ~0. But we know precisely when there will be a nonzero vector ~x with (λI - A)·~x = ~0: it is when the matrix (λI - A) is not invertible, or, in other words, when det(λI - A) = 0.

Definition: When we expand the determinant det(tI - A), we will obtain a polynomial p(t) of degree n in the variable t. This polynomial is called the characteristic polynomial of the matrix A, and its roots are precisely the eigenvalues of A.

Notation 1: Some authors instead define the characteristic polynomial as the determinant of the matrix A - tI rather than tI - A. I define it this way because then the coefficient of t^n will always be 1, rather than (-1)^n.

Notation 2: It is often customary, when referring to the eigenvalues of a matrix, to include an eigenvalue
the appropriate number of extra times if the eigenvalue is a multiple root of the characteristic polynomial.
Thus, for the characteristic polynomial t^2 (t - 1)^3, we could say the eigenvalues are λ = 0, 0, 1, 1, 1 if we wanted to emphasize that the eigenvalues occurred more than once.

Remark: The characteristic polynomial may have non-real numbers as roots. Non-real eigenvalues are absolutely acceptable; the only wrinkle is that the eigenvectors for these eigenvalues will also necessarily contain non-real entries. (If A has real number entries, then any non-real roots of the characteristic polynomial will come in complex conjugate pairs. The eigenvectors for one root will be complex conjugates of the eigenvectors for the other root.)

Proposition: The eigenvalues of an upper-triangular matrix are the diagonal entries.

This statement follows from the observation that the determinant of an upper-triangular matrix is the product of the diagonal entries, combined with the observation that if A is upper-triangular, then tI - A is also upper-triangular. (If diagonal entries are repeated, the eigenvalues are repeated the same number of times.)

Example: The eigenvalues of
    [ 1  3  i ]
    [ 0  3  8 ]
    [ 0  0  0 ]
are 1, 3, and 0, and the eigenvalues of
    [ 2  1  0 ]
    [ 0  3  2 ]
    [ 0  0  2 ]
are 2, 3, and 2.

To find all the eigenvalues (and eigenvectors) of a matrix A, follow these steps:

Step 1: Write down the matrix tI - A and compute its determinant (using any method) to obtain the characteristic polynomial p(t).

Step 2: Set p(t) equal to zero and solve. The roots are precisely the eigenvalues λ of A.

Step 3: For each eigenvalue λ, solve for all vectors ~x satisfying A·~x = λ~x. (Either do this directly, or by solving the homogeneous system (λI - A)·~x = ~0 via row-reduction.) The resulting solution vectors ~x form the eigenspace associated to λ, and the nonzero vectors in the space are the eigenvectors corresponding to λ.
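The three steps above are easy to mirror on a computer if you want to sanity-check a hand computation. Here is a minimal numpy sketch (the 2 x 2 matrix is just an arbitrary example, and numpy returns unit-length eigenvectors rather than the "nicest" basis vectors):

    import numpy as np

    A = np.array([[2.0, 2.0],
                  [3.0, 1.0]])

    # Steps 1-2: coefficients of the characteristic polynomial det(tI - A), and its roots
    coeffs = np.poly(A)              # here [1, -3, -4], i.e. t^2 - 3t - 4
    eigenvalues = np.roots(coeffs)   # the roots of p(t): 4 and -1

    # Step 3: np.linalg.eig returns the eigenvalues together with eigenvectors;
    # column k of V is an eigenvector for values[k]
    values, V = np.linalg.eig(A)
    for lam, v in zip(values, V.T):
        print(lam, np.allclose(A @ v, lam * v))   # prints True for each pair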

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix A = [1 0; 0 1].

Step 1: We have tI - A = [t-1, 0; 0, t-1], so p(t) = det(tI - A) = (t - 1)^2.

Step 2: The characteristic equation (t - 1)^2 = 0 has a double root t = 1. So the eigenvalues are λ = 1, 1. (Alternatively, we could have used the fact that the matrix is upper-triangular.)

Step 3: We want to find the vectors with [1 0; 0 1]·[a; b] = [a; b]. Clearly, all vectors [a; b] have this property. Therefore, a basis for the eigenspace with λ = 1 is given by [1; 0] and [0; 1].

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix A = [1 1; 0 1].

Step 1: We have tI - A = [t-1, -1; 0, t-1], so p(t) = det(tI - A) = (t - 1)^2.

Step 2: The characteristic equation (t - 1)^2 = 0 has a double root t = 1. So the eigenvalues are λ = 1, 1. (Alternatively, we could have used the fact that the matrix is upper-triangular.)

Step 3: We want to find the vectors with [1 1; 0 1]·[a; b] = [a; b]. This requires [a + b; b] = [a; b], which means a can be arbitrary and b = 0. So the vectors we want are those of the form [a; 0], and so a basis for the eigenspace with λ = 1 is given by [1; 0].

Remark: Note that the matrix [1 1; 0 1] and the identity matrix [1 0; 0 1] have the same characteristic polynomial and eigenvalues, but do not have the same eigenvectors. In fact, for λ = 1, the eigenspace for [1 1; 0 1] is 1-dimensional, while the eigenspace for [1 0; 0 1] is 2-dimensional.

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix A = [2 2; 3 1].

Step 1: We have tI - A = [t-2, -2; -3, t-1], so p(t) = det(tI - A) = (t - 2)(t - 1) - (-2)(-3) = t^2 - 3t - 4.

Step 2: Since p(t) = t^2 - 3t - 4 = (t - 4)(t + 1), the eigenvalues are λ = -1, 4.

Step 3:

For λ = -1 we want [2 2; 3 1]·[a; b] = -[a; b], so we need [2a + 2b; 3a + b] = [-a; -b], which reduces to a = -(2/3)b. So the vectors we want are those of the form [-(2/3)b; b], so a basis is given by [-2/3; 1].

For λ = 4 we want [2 2; 3 1]·[a; b] = 4·[a; b], so we need [2a + 2b; 3a + b] = [4a; 4b], which reduces to a = b. So the vectors we want are those of the form [b; b], so a basis is given by [1; 1].

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix
    A = [ 0  0  0 ]
        [ 1  0 -1 ]
        [ 0  1  0 ]

Step 1: We have
    tI - A = [  t  0  0 ]
             [ -1  t  1 ]
             [  0 -1  t ]
so p(t) = det(tI - A) = t·det([t, 1; -1, t]) = t(t^2 + 1).

Step 2: Since p(t) = t(t^2 + 1), the eigenvalues are λ = 0, i, -i.

Step 3:

For λ = 0 we want A·[a; b; c] = 0·[a; b; c], so we need [0; a - c; b] = [0; 0; 0], so a = c and b = 0. So the vectors we want are those of the form [a; 0; a], so a basis is given by [1; 0; 1].

For λ = i we want A·[a; b; c] = i·[a; b; c], so we need [0; a - c; b] = [ia; ib; ic], so a = 0 and b = ic. So the vectors we want are those of the form [0; ic; c], so a basis is given by [0; i; 1].

For λ = -i we want A·[a; b; c] = -i·[a; b; c], so we need [0; a - c; b] = [-ia; -ib; -ic], so a = 0 and b = -ic. So the vectors we want are those of the form [0; -ic; c], so a basis is given by [0; -i; 1].
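As a purely numerical aside (a small numpy sketch, separate from the hand computation above): even though this matrix has real entries, the computer likewise reports the complex eigenvalues 0, i, and -i, and the corresponding eigenvectors have non-real entries, just as the earlier remark about non-real eigenvalues predicted.

    import numpy as np

    A = np.array([[0.0, 0.0,  0.0],
                  [1.0, 0.0, -1.0],
                  [0.0, 1.0,  0.0]])

    values, vectors = np.linalg.eig(A)
    print(values)    # approximately [0, 1j, -1j] (in some order)
    for lam, v in zip(values, vectors.T):
        print(np.allclose(A @ v, lam * v))   # True for each eigenvalue/eigenvector pair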

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix
    A = [ 1  0 -1 ]
        [ 1  1  3 ]
        [ 1  0  3 ]

Step 1: We have
    tI - A = [ t-1   0    1  ]
             [ -1   t-1  -3  ]
             [ -1    0   t-3 ]
so expanding along the top row gives p(t) = det(tI - A) = (t-1)·det([t-1, -3; 0, t-3]) + (1)·det([-1, t-1; -1, 0]) = (t - 1)^2 (t - 3) + (t - 1).

Step 2: Since p(t) = (t - 1)·[(t - 1)(t - 3) + 1] = (t - 1)(t - 2)^2, the eigenvalues are λ = 1, 2, 2.

Step 3:

For λ = 1 we want A·[a; b; c] = 1·[a; b; c], so we need [a - c; a + b + 3c; a + 3c] = [a; b; c], so c = 0 and a = 0. So the vectors we want are those of the form [0; b; 0], so a basis is given by [0; 1; 0].

For λ = 2 we want A·[a; b; c] = 2·[a; b; c], so we need [a - c; a + b + 3c; a + 3c] = [2a; 2b; 2c], so a = -c and b = 2c. So the vectors we want are those of the form [-c; 2c; c], so a basis is given by [-1; 2; 1].

1.2 Some Slightly More Advanced Results About Eigenvalues


Theorem: If λ is an eigenvalue of the matrix A which appears exactly k times as a root of the characteristic polynomial, then the dimension of the eigenspace corresponding to λ is at least 1 and at most k.

Remark: The number of times that λ appears as a root of the characteristic polynomial is called the algebraic multiplicity of λ, and the dimension of the eigenspace corresponding to λ is called the geometric multiplicity of λ. So what the theorem says is that the geometric multiplicity is at most the algebraic multiplicity.

Example: If the characteristic polynomial is (t - 1)^3 (t - 3)^2, then the eigenspace for λ = 1 is at most 3-dimensional, and the eigenspace for λ = 3 is at most 2-dimensional.
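Since the geometric multiplicity is the dimension of the null space of λI - A, it can be computed as n minus the rank of λI - A. Here is a small numpy sketch of that computation, using the matrix [1 1; 0 1] from the earlier example, whose eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity only 1:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    lam = 1.0
    n = A.shape[0]

    # geometric multiplicity = dimension of the eigenspace = n - rank(lam*I - A)
    geometric = n - np.linalg.matrix_rank(lam * np.eye(n) - A)
    print(geometric)   # prints 1, even though (t - 1)^2 is the characteristic polynomial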

Proof: The statement that the eigenspace has dimension at least 1 is immediate, because (by assumption) λ is a root of the characteristic polynomial and therefore has at least one nonzero eigenvector associated to it.

For the statement that the dimension is at most k, the idea is to look at the homogeneous system (λI - A)·~x = ~0. If λ appears k times as a root of the characteristic polynomial, then when we put the matrix λI - A into its reduced row-echelon form B, then B must have at most k rows of all zeroes. Otherwise, the matrix B (and hence λI - A too, although this requires a check) would have 0 as an eigenvalue more than k times, because B is in echelon form and therefore upper-triangular.

But the number of rows of all zeroes in a square matrix is the same as the number of nonpivotal columns, which is the number of free variables, which is the dimension of the solution space. So, putting all the statements together, we see that the dimension of the eigenspace is at most k.

Theorem: If ~v1, ~v2, ..., ~vn are eigenvectors of A associated to distinct eigenvalues λ1, λ2, ..., λn, then ~v1, ~v2, ..., ~vn are linearly independent.

Proof: Suppose we had a nontrivial dependence relation between ~v1, ..., ~vn, say a1·~v1 + ... + an·~vn = ~0. (Note that at least two coefficients have to be nonzero, because none of ~v1, ..., ~vn is the zero vector.)

Multiply both sides by the matrix A: this gives A·(a1·~v1 + ... + an·~vn) = A·~0 = ~0. Now since ~v1, ..., ~vn are eigenvectors, this says a1(λ1·~v1) + ... + an(λn·~vn) = ~0.

But now if we scale the original equation by λ1 and subtract (to eliminate ~v1), we obtain a2(λ2 - λ1)~v2 + a3(λ3 - λ1)~v3 + ... + an(λn - λ1)~vn = ~0. Since by assumption all of the eigenvalues λ1, λ2, ..., λn were different, this dependence is still nontrivial, since each of λj - λ1 is nonzero, and at least one of a2, ..., an is nonzero.

But now we can repeat the process to eliminate each of ~v2, ~v3, ..., ~v(n-1) in turn. Eventually we are left with the equation b·~vn = ~0 for some nonzero b. But this is impossible, because it would say that ~vn = ~0, contradicting our definition saying that the zero vector is not an eigenvector.

So there cannot be a nontrivial dependence relation, meaning that ~v1, ..., ~vn are linearly independent.

Corollary: If A is an n × n matrix with n distinct eigenvalues λ1, λ2, ..., λn, and ~v1, ~v2, ..., ~vn are (any) eigenvectors associated to those eigenvalues, then ~v1, ~v2, ..., ~vn are a basis for R^n.

This result follows from the previous theorem: it guarantees that ~v1, ~v2, ..., ~vn are linearly independent, so since they are n vectors in the n-dimensional vector space R^n, they are a basis.

Theorem: The product of the eigenvalues of A is the determinant of A.

Proof: If we expand out the product p(t) = (t - λ1)(t - λ2)...(t - λn), we see that the constant term is equal to (-1)^n λ1 λ2 ... λn. But the constant term is also just p(0), and since p(t) = det(tI - A) we have p(0) = det(-A) = (-1)^n det(A). Thus, setting the two expressions equal shows that the product of the eigenvalues equals the determinant of A.

Theorem: The sum of the eigenvalues of A equals the trace of A.

Note: The trace of a matrix is defined to be the sum of its diagonal entries.

Proof: If we expand out the product p(t) = (t - λ1)(t - λ2)...(t - λn), we see that the coefficient of t^(n-1) is equal to -(λ1 + ... + λn). If we expand out the determinant det(tI - A) to find the coefficient of t^(n-1), we can show (with a little bit of effort) that the coefficient is the negative of the sum of the diagonal entries of A. Therefore, setting the two expressions equal shows that the sum of the eigenvalues equals the trace of A.
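Both of these facts are easy to illustrate numerically. A quick numpy sketch, using the 2 x 2 matrix [2 2; 3 1] from an earlier example (eigenvalues -1 and 4):

    import numpy as np

    A = np.array([[2.0, 2.0],
                  [3.0, 1.0]])
    eigenvalues = np.linalg.eigvals(A)   # -1 and 4

    print(np.isclose(np.prod(eigenvalues), np.linalg.det(A)))   # True: (-1)(4) = -4 = det(A)
    print(np.isclose(np.sum(eigenvalues), np.trace(A)))         # True: -1 + 4 = 3 = trace(A)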

1.3 Theory of Similarity and Diagonalization


Definition: We say two n × n matrices A and B are similar (or conjugate) if there exists an invertible n × n matrix P such that B = P^(-1)AP. (We refer to P^(-1)AP as the conjugation of A by P.)

Example: The matrices A = [3 -1; 2 -1] and B = [1 2; 1 1] are similar: with P = [2 3; 1 2], so that P^(-1) = [2 -3; -1 2], we can verify that [2 -3; -1 2]·[3 -1; 2 -1]·[2 3; 1 2] = [1 2; 1 1], so that P^(-1)AP = B.

Remark: The matrix Q = [0 1; -1 2] also has B = Q^(-1)AQ. In general, if two matrices A and B are similar, then there can be many different matrices Q with Q^(-1)AQ = B.
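The example above is easy to verify numerically as well; a short numpy sketch:

    import numpy as np

    A = np.array([[3.0, -1.0], [2.0, -1.0]])
    B = np.array([[1.0,  2.0], [1.0,  1.0]])
    P = np.array([[2.0,  3.0], [1.0,  2.0]])

    print(np.allclose(np.linalg.inv(P) @ A @ P, B))   # True: P^(-1) A P = B
    # similar matrices also share their eigenvalues:
    print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B))))   # True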

Similar matrices have quite a few useful algebraic properties (which justify the name "similar"). If B = P^(-1)AP and D = P^(-1)CP, then we have the following:

- The sum of the conjugates is the conjugate of the sum: B + D = P^(-1)AP + P^(-1)CP = P^(-1)(A + C)P.

- The product of the conjugates is the conjugate of the product: BD = P^(-1)AP · P^(-1)CP = P^(-1)(AC)P.

- The inverse of the conjugate is the conjugate of the inverse: A^(-1) exists if and only if B^(-1) exists, and B^(-1) = P^(-1)A^(-1)P.

- The determinant of the conjugate is equal to the original determinant: det(B) = det(P^(-1)AP) = det(P^(-1)) det(A) det(P) = det(A) det(P^(-1)P) = det(A).

- The conjugate has the same characteristic polynomial as the original matrix: det(tI - B) = det(P^(-1)(tI)P - P^(-1)AP) = det(P^(-1)(tI - A)P) = det(tI - A). In particular, a matrix and its conjugate have the same eigenvalues (with the same multiplicities). Also, by using the fact that the trace is equal both to the sum of the diagonal elements and to a coefficient in the characteristic polynomial, we see that a matrix and its conjugate have the same trace.

- If ~x is an eigenvector of A with eigenvalue λ, then P^(-1)~x is an eigenvector of B with eigenvalue λ: if A·~x = λ~x, then B·(P^(-1)~x) = P^(-1)A(P P^(-1))~x = P^(-1)A·~x = P^(-1)(λ~x) = λ(P^(-1)~x). This is also true in reverse: if ~y is an eigenvector of B, then P·~y is an eigenvector of A (with the same eigenvalue). In particular, the eigenspaces for B have the same dimensions as the eigenspaces for A.
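A small numpy sketch of the last property, using the matrices A and P from the example above: if ~x is an eigenvector of A, then P^(-1)~x is an eigenvector of B = P^(-1)AP for the same eigenvalue.

    import numpy as np

    A = np.array([[3.0, -1.0], [2.0, -1.0]])
    P = np.array([[2.0,  3.0], [1.0,  2.0]])
    B = np.linalg.inv(P) @ A @ P

    values, V = np.linalg.eig(A)
    x = V[:, 0]                    # an eigenvector of A with eigenvalue values[0]
    y = np.linalg.inv(P) @ x       # should be an eigenvector of B for the same eigenvalue
    print(np.allclose(B @ y, values[0] * y))   # True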

One question we might have about similarity is: given a matrix A, what is the simplest matrix that A is similar to? As observed above, any matrix similar to A has the same eigenvalues as A. So, if the eigenvalues are λ1, ..., λn, the simplest form we could plausibly hope for would be a diagonal matrix whose diagonal entries are the eigenvalues of A.

Definition: We say that a matrix A is diagonalizable if it is similar to a diagonal matrix D; that is, if there exists an invertible matrix P with D = P^(-1)AP.

Example: The matrix A = [-2 -6; 3 7] is diagonalizable. We can check that for P = [1 -2; -1 1] and P^(-1) = [-1 -2; -1 -1], we have P^(-1)AP = [4 0; 0 1] = D.

If we know that A is diagonalizable and have D = P^(-1)AP, then it is very easy to compute any power of A: since D is diagonal, D^k is the diagonal matrix whose diagonal entries are the k-th powers of the diagonal entries of D, and D^k = (P^(-1)AP)^k = P^(-1)(A^k)P, so A^k = P·D^k·P^(-1).

Example: With A = [-2 -6; 3 7] as above, we have D^k = [4^k 0; 0 1], so that A^k = [1 -2; -1 1]·[4^k 0; 0 1]·[-1 -2; -1 -1] = [2 - 4^k, 2 - 2·4^k; -1 + 4^k, -1 + 2·4^k].

Observation: This formula also makes sense for values of k which are not positive integers. For example, if k = -1 we get the matrix [7/4, 3/2; -3/4, -1/2], which is actually the inverse matrix A^(-1). And if we set k = 1/2 we get the matrix B = [0 -2; 1 3], whose square satisfies B^2 = [-2 -6; 3 7] = A.
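Here is a short numpy sketch of the same idea, computing A^k as P·D^k·P^(-1) for the matrix above, including the non-integer exponent k = 1/2 (the helper name "power" is just for illustration):

    import numpy as np

    A = np.array([[-2.0, -6.0], [3.0, 7.0]])
    P = np.array([[ 1.0, -2.0], [-1.0, 1.0]])
    D = np.linalg.inv(P) @ A @ P          # the diagonal matrix diag(4, 1)

    def power(k):
        Dk = np.diag(np.diag(D) ** k)     # raise the diagonal entries to the k-th power
        return P @ Dk @ np.linalg.inv(P)

    print(np.allclose(power(3), np.linalg.matrix_power(A, 3)))   # True
    B = power(0.5)
    print(np.allclose(B @ B, A))                                 # True: B is a square root of A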

Theorem: An n × n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. In particular, every matrix whose eigenvalues are distinct is diagonalizable.

Proof: If A has n linearly independent eigenvectors ~v1, ..., ~vn with respective eigenvalues λ1, ..., λn, then consider the matrix P = [~v1 | ~v2 | ... | ~vn] whose columns are the eigenvectors of A.

Because ~v1, ..., ~vn are eigenvectors, we have A·P = [A~v1 | ... | A~vn] = [λ1~v1 | ... | λn~vn]. But we also have [λ1~v1 | ... | λn~vn] = [~v1 | ... | ~vn]·D = P·D, where D is the diagonal matrix whose diagonal entries are λ1, ..., λn.

Therefore A·P = P·D. Now since the eigenvectors are linearly independent, P is invertible, and we can therefore write D = P^(-1)AP, as desired.

For the other direction, if D = P^(-1)AP then (like above) we can rewrite this to say AP = PD. If P = [~v1 | ... | ~vn], then AP = PD says [A~v1 | ... | A~vn] = [λ1~v1 | ... | λn~vn], which (by comparing columns) says that A~v1 = λ1~v1, ..., A~vn = λn~vn. Thus the columns ~v1, ..., ~vn of P are eigenvectors, and (because P is invertible) they are linearly independent.

Finally, the last statement in the theorem follows because (as shown earlier) a matrix with n distinct eigenvalues has n linearly independent eigenvectors.

Advanced Remark: As the theorem demonstrates, if we are trying to diagonalize a matrix, we can run into trouble if the matrix has repeated eigenvalues. However, we might still like to know the simplest form that a non-diagonalizable matrix is similar to.

The answer is given by what is called the Jordan Canonical Form (of a matrix): every matrix is similar to a matrix of the form
    [ J1            ]
    [     J2        ]
    [        ...    ]
    [            Jn ]
where each of J1, ..., Jn is a square Jordan block matrix of the form
    J = [ λ  1          ]
        [    λ  1       ]
        [       ...  1  ]
        [            λ  ]
with λ on the diagonal and 1s directly above the diagonal (and where blank entries are zeroes).

Example: The non-diagonalizable matrix
    [ 2  1  0 ]
    [ 0  2  0 ]
    [ 0  0  3 ]
is in Jordan Canonical Form, with J1 = [2 1; 0 2] and J2 = [3].

The existence and uniqueness of the Jordan Canonical Form can be proven using a careful analysis of generalized eigenvectors: vectors ~x satisfying (λI - A)^k · ~x = ~0 for some positive integer k. (Regular eigenvectors would correspond to k = 1.)

Roughly speaking, the idea is to use certain carefully-chosen generalized eigenvectors to fill in for the missing eigenvectors; doing this causes the extra 1s to appear above the diagonal in the Jordan blocks.

Theorem (Cayley-Hamilton): If p(x) is the characteristic polynomial of a matrix A, then p(A) is the zero matrix (where in applying a polynomial to a matrix, we replace the constant term with that constant times the identity matrix).

Example: For the matrix A = [2 2; 3 1], we have det(tI - A) = det([t-2, -2; -3, t-1]) = (t - 1)(t - 2) - 6 = t^2 - 3t - 4. We can compute A^2 = [10 6; 9 7], and then indeed we have A^2 - 3A - 4I = [10 6; 9 7] - [6 6; 9 3] - [4 0; 0 4] = [0 0; 0 0].
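A quick numerical check of the theorem for this matrix (a numpy sketch; np.poly returns the coefficients of the characteristic polynomial):

    import numpy as np

    A = np.array([[2.0, 2.0], [3.0, 1.0]])
    I = np.eye(2)

    c = np.poly(A)                             # [1, -3, -4], i.e. t^2 - 3t - 4
    pA = c[0] * (A @ A) + c[1] * A + c[2] * I
    print(np.allclose(pA, np.zeros((2, 2))))   # True: p(A) is the zero matrix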

Proof (if A is diagonalizable): If A is diagonalizable, then let D = P^(-1)AP with D diagonal, and let p(x) be the characteristic polynomial of A.

The diagonal entries of D are the eigenvalues λ1, ..., λn of A, hence are roots of the characteristic polynomial. So p(λ1) = ... = p(λn) = 0.

Then, because raising D to a power just raises all of its diagonal entries to that power, we can see that p(D) is the diagonal matrix whose diagonal entries are p(λ1), ..., p(λn), which is the zero matrix.

Now by conjugating each term and adding the results, we see that 0 = p(D) = p(P^(-1)AP) = P^(-1) [p(A)] P. So by conjugating back, we see that p(A) = P·0·P^(-1) = 0.

In the case where A is not diagonalizable, the proof is more difficult. One way is to use the Jordan Canonical Form J of A in place of the diagonal matrix D; then (one can verify) p(J) = 0, and then the remainder of the argument is the same.

1.4 How To Diagonalize A Matrix (if possible)


In order to determine whether a matrix A is diagonalizable (and if it is, how to find a diagonalization A = PDP^(-1)), follow these steps:

Step 1: Find the characteristic polynomial and eigenvalues of A.

Step 2: Find a basis for each eigenspace of A.

Step 3a: Determine whether A is diagonalizable: if each eigenspace has the proper dimension (namely, the number of times the corresponding eigenvalue appears as a root of the characteristic polynomial), then the matrix is diagonalizable. Otherwise, the matrix is not diagonalizable.

Step 3b: If the matrix is diagonalizable, then D is the diagonal matrix whose diagonal entries are the eigenvalues of A (with appropriate multiplicities), and P can be taken to be the matrix whose columns are linearly independent eigenvectors of A, in the same order as the eigenvalues appear in D.

Example: For A = [0 -2; 3 5], determine whether there exists a diagonal matrix D and an invertible matrix P with D = P^(-1)AP, and if so, find them.

Step 1: We have tI - A = [t, 2; -3, t-5], so det(tI - A) = t(t - 5) + 6 = t^2 - 5t + 6 = (t - 2)(t - 3). The eigenvalues are therefore λ = 2, 3.

Step 2:

For λ = 2 we need to solve [0 -2; 3 5]·[a; b] = 2·[a; b], so [-2b; 3a + 5b] = [2a; 2b] and thus a = -b. The eigenvectors are of the form [-b; b], so a basis for the λ = 2 eigenspace is [-1; 1].

For λ = 3 we need to solve [0 -2; 3 5]·[a; b] = 3·[a; b], so [-2b; 3a + 5b] = [3a; 3b] and thus a = -(2/3)b. The eigenvectors are of the form [-(2/3)b; b], so a basis for the λ = 3 eigenspace is [-2/3; 1].

Step 3: Since the eigenvalues are distinct we know that A is diagonalizable, and D = [2 0; 0 3]. We have two linearly independent eigenvectors, and so we can take P = [-1, -2/3; 1, 1].

To check: we have P^(-1) = [-3, -2; 3, 3], so P^(-1)AP = [-3, -2; 3, 3]·[0, -2; 3, 5]·[-1, -2/3; 1, 1] = [2 0; 0 3] = D.

Note: We could also take D = [3 0; 0 2] if we wanted. There is no particular reason to care much about which diagonal matrix we use, as long as we make sure to arrange the eigenvectors in the correct order.
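The whole procedure is also easy to mirror numerically. Here is a numpy sketch for the matrix above; np.linalg.eig returns eigenvectors as the columns of a matrix, so that matrix can serve directly as P (this works here because A is diagonalizable, with distinct eigenvalues 2 and 3):

    import numpy as np

    A = np.array([[0.0, -2.0], [3.0, 5.0]])

    eigenvalues, P = np.linalg.eig(A)     # columns of P are eigenvectors of A
    D = np.linalg.inv(P) @ A @ P
    print(eigenvalues)                    # 2 and 3 (up to ordering)
    print(np.round(D, 10))                # diagonal matrix with 2 and 3 on the diagonal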

Example: For
    A = [ 1 -1  0 ]
        [ 0  2  0 ]
        [ 0  2  1 ]
determine whether there exists a diagonal matrix D and an invertible matrix P with D = P^(-1)AP, and if so, find them.

Step 1: We have
    tI - A = [ t-1   1    0  ]
             [  0   t-2   0  ]
             [  0   -2   t-1 ]
so det(tI - A) = (t - 1)·det([t-2, 0; -2, t-1]) = (t - 1)^2 (t - 2). The eigenvalues are therefore λ = 1, 1, 2.

Step 2:

For λ = 1 we need to solve A·[a; b; c] = [a; b; c], so we need [a - b; 2b; 2b + c] = [a; b; c]; thus 2b = b and so b = 0. The eigenvectors are of the form [a; 0; c], so a basis for the λ = 1 eigenspace is [1; 0; 0], [0; 0; 1].

For λ = 2 we need to solve A·[a; b; c] = 2·[a; b; c], so we need [a - b; 2b; 2b + c] = [2a; 2b; 2c]; thus a = -b and c = 2b. The eigenvectors are of the form [-b; b; 2b], so a basis for the λ = 2 eigenspace is [-1; 1; 2].

Step 3: Since the eigenspace for λ = 1 is 2-dimensional, the matrix A is diagonalizable, and
    D = [ 1  0  0 ]
        [ 0  1  0 ]
        [ 0  0  2 ]
We have three linearly independent eigenvectors, so we can take
    P = [ 1  0 -1 ]
        [ 0  0  1 ]
        [ 0  1  2 ]
To check: we have
    P^(-1) = [ 1  1  0 ]
             [ 0 -2  1 ]
             [ 0  1  0 ]
and we can verify that P^(-1)AP = D.

Example: For
    A = [ 1  1  1 ]
        [ 0  1  1 ]
        [ 0  0  1 ]
determine whether there exists a diagonal matrix D and an invertible matrix P with D = P^(-1)AP, and if so, find them.

Step 1: We have
    tI - A = [ t-1  -1   -1  ]
             [  0   t-1  -1  ]
             [  0    0   t-1 ]
so det(tI - A) = (t - 1)^3 since tI - A is upper-triangular. The eigenvalues are therefore λ = 1, 1, 1.

Step 2: For λ = 1 we need to solve A·[a; b; c] = [a; b; c], so we need [a + b + c; b + c; c] = [a; b; c], and thus b = c = 0. The eigenvectors are of the form [a; 0; 0], so a basis for the λ = 1 eigenspace is [1; 0; 0].

Step 3: Since the eigenspace for λ = 1 is 1-dimensional but the eigenvalue appears 3 times as a root of the characteristic polynomial, the matrix A is not diagonalizable.

Well, you're at the end of my handout. Hope it was helpful.


Copyright notice: This material is copyright Evan Dummit, 2012. You may not reproduce or distribute this material
without my express permission.
