Topic E


Eigenvalues and Eigenvectors



Eigenvalues and eigenvectors are scalars and vectors associated with square matrices.
Example 1. Let
\[
A = \begin{pmatrix} -1 & 2 & 2 \\ -2 & 3 & 2 \\ -1 & 0 & 4 \end{pmatrix}, \quad
v_1 = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix}, \quad
v_2 = \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix}, \quad
v_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.
\]

We calculate
\[
A v_1 = \begin{pmatrix} -1 & 2 & 2 \\ -2 & 3 & 2 \\ -1 & 0 & 4 \end{pmatrix} \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} = 1 \cdot v_1,
\]
\[
A v_2 = \begin{pmatrix} -1 & 2 & 2 \\ -2 & 3 & 2 \\ -1 & 0 & 4 \end{pmatrix} \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 4 \\ 2 \end{pmatrix} = 2 v_2,
\]
\[
A v_3 = \begin{pmatrix} -1 & 2 & 2 \\ -2 & 3 & 2 \\ -1 & 0 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \\ 3 \end{pmatrix} = 3 v_3.
\]
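The three products above can be verified mechanically. A minimal sketch in plain Python (no libraries; matrices stored as lists of rows):

```python
# Verify Example 1: A v1 = 1*v1, A v2 = 2*v2, A v3 = 3*v3.
A = [[-1, 2, 2],
     [-2, 3, 2],
     [-1, 0, 4]]
v1, v2, v3 = [3, 2, 1], [2, 2, 1], [1, 1, 1]

def matvec(M, x):
    """Matrix-vector product for a matrix stored as a list of rows."""
    return [sum(a * b for a, b in zip(row, x)) for row in M]

print(matvec(A, v1))  # [3, 2, 1]  -> 1 * v1
print(matvec(A, v2))  # [4, 4, 2]  -> 2 * v2
print(matvec(A, v3))  # [3, 3, 3]  -> 3 * v3
```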

Definition 2. Let A be an n × n square matrix. An eigenvalue of A is a scalar λ for which the equation
\[
A x = \lambda x
\]
has a nontrivial solution x; such an x is called an eigenvector corresponding to λ.
Example 3. In our example, v 1 , v 2 , v 3 are eigenvectors corresponding to 1, 2, 3
respectively.
Note: x is an eigenvector corresponding to λ if and only if x is a nontrivial solution of (A − λI) x = 0. In particular, the zero vector is not an eigenvector.
Definition 4. {x : Ax = λx} = Null (A − λI) is called the eigenspace of A
corresponding to λ.
Question 5. Can 0 be an eigenvector of A?

No! An eigenvector must be nonzero (by definition) because A0 = 0 = λ0
for every scalar λ.
Question 6. Could λ = 0 be an eigenvalue of A?
Yes! For example,
\[
\begin{pmatrix} 1 & 2 & 0 \\ 3 & 4 & 0 \\ 5 & 6 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 13 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} = 0 \cdot \begin{pmatrix} 0 \\ 0 \\ 13 \end{pmatrix}.
\]
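A quick numerical check of this example (plain Python; the matrix has a zero third column, so any vector supported on the third coordinate is sent to 0):

```python
# The matrix has a zero third column, so (0, 0, 13) is mapped to the zero
# vector: A x = 0 = 0 * x, exhibiting 0 as an eigenvalue.
A = [[1, 2, 0],
     [3, 4, 0],
     [5, 6, 0]]
x = [0, 0, 13]

Ax = [sum(a * b for a, b in zip(row, x)) for row in A]
print(Ax)  # [0, 0, 0]
```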

Question 7. Given an n × n matrix A, how many linearly independent eigenvectors can A have?
This is an important question that we’ll have more to say about later. For
now, we have:
Theorem 8. If v 1 , . . . , v k are eigenvectors that correspond to pairwise distinct
eigenvalues λ1 , . . . , λk of an n × n matrix A, then v 1 , . . . , v k are linearly inde-
pendent (i.e. the set {v 1 , . . . , v k } is linearly independent).
Note: λ1 , . . . , λk pairwise distinct means λi ≠ λj whenever i ≠ j.
Proof. Suppose not (in order to reach a contradiction). That is, suppose v 1 , . . . , v k
are linearly dependent. Since v 1 , . . . , v k are all nonzero, v 1 ≠ 0 and there is a
minimal index p > 1 such that (Theorem 7 from Section 1.7)

v p = c1 v 1 + · · · + cp−1 v p−1 .

Since p is minimal, v 1 , . . . , v p−1 are linearly independent. Then,

Av p = c1 Av 1 + · · · + cp−1 Av p−1
(∗)
= c1 λ1 v 1 + · · · + cp−1 λp−1 v p−1 .

We also know
(∗∗) Av p = λp v p = c1 λp v 1 + · · · + cp−1 λp v p−1 .
Subtracting (∗∗) from (∗) we get

0 = c1 (λ1 − λp ) v 1 + · · · + cp−1 (λp−1 − λp ) v p−1 .

Since v 1 , . . . , v p−1 are linearly independent

c1 (λ1 − λp ) = · · · = cp−1 (λp−1 − λp ) = 0.

But since λi ≠ λp for i = 1, . . . , p − 1, we get

c1 = · · · = cp−1 = 0

which implies v p = 0. This contradicts v p being an eigenvector of A, since eigenvectors are nonzero.
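Theorem 8 can be illustrated on Example 1: v 1 , v 2 , v 3 correspond to the distinct eigenvalues 1, 2, 3, so they should be linearly independent. For three vectors in R³ this is equivalent to the matrix having those vectors as columns being invertible, i.e. having nonzero determinant (a plain-Python sketch):

```python
# Columns of V are the eigenvectors v1 = (3,2,1), v2 = (2,2,1), v3 = (1,1,1)
# of Example 1; distinct eigenvalues force linear independence.
V = [[3, 2, 1],
     [2, 2, 1],
     [1, 1, 1]]

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det3(V))  # 1 (nonzero, so the three eigenvectors are independent)
```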

Computation of Eigenvectors and Eigenvalues
Question 9. How do we find eigenvalues and eigenvectors?

Recall: A is an n × n square matrix, and λ is an eigenvalue of A if the equation
\[
A x = \lambda x
\]
has a nontrivial solution x (such an x is an eigenvector corresponding to λ). That is, λ is an eigenvalue of A precisely when A − λI is singular (non-invertible, i.e. has a nonzero null space), which holds if and only if det (A − λI) = 0.
Theorem 10. λ is an eigenvalue of A if and only if λ satisfies the character-
istic equation det (A − λI) = 0.
For this reason we study the function
\[
p : \mathbb{R} \to \mathbb{R}, \qquad \lambda \mapsto \det(A - \lambda I).
\]
Using the definition of the determinant, it is easy to see that p is a poly-
nomial in the variable λ. The polynomial p (λ) = det (A − λI) is called the
characteristic polynomial of A.
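As a sanity check, p(λ) = det(A − λI) can be evaluated directly for the matrix of Example 1; it vanishes exactly at the eigenvalues 1, 2, 3 (plain-Python sketch):

```python
# Evaluate the characteristic polynomial p(lam) = det(A - lam*I) for the
# 3x3 matrix of Example 1 and confirm its roots are the eigenvalues 1, 2, 3.
A = [[-1, 2, 2],
     [-2, 3, 2],
     [-1, 0, 4]]

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def p(lam):
    M = [[A[r][c] - (lam if r == c else 0) for c in range(3)] for r in range(3)]
    return det3(M)

print([p(lam) for lam in (0, 1, 2, 3)])  # [6, 0, 0, 0]
```

Note that p(0) = det A = 6 = 1 · 2 · 3, the product of the eigenvalues.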
Problem 11. Find the eigenvalues and corresponding eigenspaces of
\[
A = \begin{pmatrix} 6 & -1 & 1 \\ -2 & 7 & -2 \\ 3 & -3 & 8 \end{pmatrix}.
\]

1. Find the eigenvalues (using row and column operations to reduce the determinant to triangular form):



\[
\begin{aligned}
\det (A - \lambda I) &= \begin{vmatrix} 6-\lambda & -1 & 1 \\ -2 & 7-\lambda & -2 \\ 3 & -3 & 8-\lambda \end{vmatrix} \\
&\overset{C_1 \to C_1 + C_2}{=} \begin{vmatrix} 5-\lambda & -1 & 1 \\ 5-\lambda & 7-\lambda & -2 \\ 0 & -3 & 8-\lambda \end{vmatrix} \\
&\overset{R_2 \to R_2 - R_1}{=} \begin{vmatrix} 5-\lambda & -1 & 1 \\ 0 & 8-\lambda & -3 \\ 0 & -3 & 8-\lambda \end{vmatrix} \\
&\overset{C_2 \to C_2 + C_3}{=} \begin{vmatrix} 5-\lambda & 0 & 1 \\ 0 & 5-\lambda & -3 \\ 0 & 5-\lambda & 8-\lambda \end{vmatrix} \\
&\overset{R_3 \to R_3 - R_2}{=} \begin{vmatrix} 5-\lambda & 0 & 1 \\ 0 & 5-\lambda & -3 \\ 0 & 0 & 11-\lambda \end{vmatrix} = (5-\lambda)^2 (11-\lambda).
\end{aligned}
\]

The roots of
\[
\det (A - \lambda I) = (5 - \lambda)^2 (11 - \lambda) = 0
\]
are λ = 5, 11. Therefore λ = 5, 11 are the eigenvalues of A (we say that λ = 5 is an eigenvalue of algebraic multiplicity 2).
2. Find a basis for each eigenspace, Null (A − λI):
\[
A - 11I = \begin{pmatrix} -5 & -1 & 1 \\ -2 & -4 & -2 \\ 3 & -3 & -3 \end{pmatrix} \to \begin{pmatrix} 1 & -1 & -1 \\ 0 & 3 & 2 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Thus
\[
\operatorname{Null} (A - 11I) = \operatorname{span} \left\{ \begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix} \right\}
\]
is the eigenspace of A corresponding to 11.


   
\[
A - 5I = \begin{pmatrix} 1 & -1 & 1 \\ -2 & 2 & -2 \\ 3 & -3 & 3 \end{pmatrix} \to \begin{pmatrix} 1 & -1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Thus
\[
\operatorname{Null} (A - 5I) = \operatorname{span} \left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \right\}
\]
is the eigenspace of A corresponding to 5.


Check:
\[
\begin{pmatrix} 6 & -1 & 1 \\ -2 & 7 & -2 \\ 3 & -3 & 8 \end{pmatrix} \begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix} = \begin{pmatrix} 11 \\ -22 \\ 33 \end{pmatrix} = 11 \begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix} \checkmark
\]
\[
\begin{pmatrix} 6 & -1 & 1 \\ -2 & 7 & -2 \\ 3 & -3 & 8 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 5 \\ 5 \\ 0 \end{pmatrix} = 5 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \checkmark
\]
\[
\begin{pmatrix} 6 & -1 & 1 \\ -2 & 7 & -2 \\ 3 & -3 & 8 \end{pmatrix} \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -5 \\ 0 \\ 5 \end{pmatrix} = 5 \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \checkmark
\]

Note that
\[
\underbrace{\begin{pmatrix} 6 & -1 & 1 \\ -2 & 7 & -2 \\ 3 & -3 & 8 \end{pmatrix}}_{=A} \underbrace{\begin{pmatrix} 1 & 1 & -1 \\ -2 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}}_{=:P} = \begin{pmatrix} 11 & 5 & -5 \\ -22 & 5 & 0 \\ 33 & 0 & 5 \end{pmatrix}
\]
and
\[
\underbrace{\begin{pmatrix} 1 & 1 & -1 \\ -2 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}}_{=P} \underbrace{\begin{pmatrix} 11 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{pmatrix}}_{=:D} = \begin{pmatrix} 11 & 5 & -5 \\ -22 & 5 & 0 \\ 33 & 0 & 5 \end{pmatrix}.
\]
That is,
\[
AP = PD \Rightarrow P^{-1} A P = D
\]
(A is the original matrix, P is the matrix whose columns are the eigenvectors, and D is a diagonal matrix of the corresponding eigenvalues).
In our example
\[
\underbrace{\frac{1}{6} \begin{pmatrix} 1 & -1 & 1 \\ 2 & 4 & 2 \\ -3 & 3 & 3 \end{pmatrix}}_{=P^{-1}} \underbrace{\begin{pmatrix} 6 & -1 & 1 \\ -2 & 7 & -2 \\ 3 & -3 & 8 \end{pmatrix}}_{=A} \underbrace{\begin{pmatrix} 1 & 1 & -1 \\ -2 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}}_{=P} = \underbrace{\begin{pmatrix} 11 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{pmatrix}}_{=D}.
\]
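The identity AP = PD for Problem 11 can be confirmed numerically (plain-Python sketch, matrices as lists of rows):

```python
# Check AP = PD: the columns of P are the eigenvectors of A, and D carries
# the matching eigenvalues 11, 5, 5 on its diagonal.
A = [[6, -1, 1], [-2, 7, -2], [3, -3, 8]]
P = [[1, 1, -1], [-2, 1, 0], [3, 0, 1]]
D = [[11, 0, 0], [0, 5, 0], [0, 0, 5]]

def matmul(X, Y):
    """3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

print(matmul(A, P))                  # [[11, 5, -5], [-22, 5, 0], [33, 0, 5]]
print(matmul(A, P) == matmul(P, D))  # True
```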

Definition 12. Let A, B be n × n square matrices.

1. A is similar to B if B = P^{-1} AP for some invertible matrix P.

2. A is diagonalizable if A is similar to a diagonal matrix D; that is, D = P^{-1} AP for some invertible matrix P.
Example 13. For
\[
A = \begin{pmatrix} 6 & -1 & 1 \\ -2 & 7 & -2 \\ 3 & -3 & 8 \end{pmatrix}
\]
λ = 5, 11 are the eigenvalues and
\[
\operatorname{Null} (A - 11I) = \operatorname{span} \left\{ \begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix} \right\}, \qquad
\operatorname{Null} (A - 5I) = \operatorname{span} \left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \right\}
\]
are the eigenspaces of A corresponding to 11 and 5, respectively.


Theorem 15 (Diagonalization Theorem). An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors (a basis for R^n ).
Proof. ⇒: Suppose A is diagonalizable. There is an invertible matrix P such that P^{-1} AP = D, a diagonal matrix ⇒ AP = P D.
Assume
\[
D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{pmatrix}, \qquad
P = \begin{pmatrix} p_1 & p_2 & \cdots & p_n \end{pmatrix}.
\]
Since P is invertible, \{ p_1 , p_2 , . . . , p_n \} is linearly independent. We have
\[
AP = \begin{pmatrix} A p_1 & A p_2 & \cdots & A p_n \end{pmatrix} = P D = \begin{pmatrix} \lambda_1 p_1 & \lambda_2 p_2 & \cdots & \lambda_n p_n \end{pmatrix}.
\]
That is,
\[
A p_1 = \lambda_1 p_1 , \quad A p_2 = \lambda_2 p_2 , \quad \ldots , \quad A p_n = \lambda_n p_n .
\]
Hence, p_1 , . . . , p_n are n linearly independent eigenvectors of A.
⇐: Similar.
Corollary 16. If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.
Problem 17. Diagonalize
\[
A = \begin{pmatrix} 4 & 0 & -2 \\ 2 & 5 & 4 \\ 0 & 0 & 5 \end{pmatrix}.
\]
First, expanding the determinant down the second column,
\[
\det (A - \lambda I) = \begin{vmatrix} 4-\lambda & 0 & -2 \\ 2 & 5-\lambda & 4 \\ 0 & 0 & 5-\lambda \end{vmatrix}
= (5-\lambda) \begin{vmatrix} 4-\lambda & -2 \\ 0 & 5-\lambda \end{vmatrix}
= (5-\lambda)^2 (4-\lambda)
\]
⇒ λ = 4, 5 are the eigenvalues.
λ = 4:
\[
A - 4I = \begin{pmatrix} 0 & 0 & -2 \\ 2 & 1 & 4 \\ 0 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 2 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}
\Rightarrow \operatorname{Null} (A - 4I) = \operatorname{span} \left\{ \begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix} \right\}.
\]

λ = 5:
\[
A - 5I = \begin{pmatrix} -1 & 0 & -2 \\ 2 & 0 & 4 \\ 0 & 0 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\Rightarrow \operatorname{Null} (A - 5I) = \operatorname{span} \left\{ \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix} \right\}.
\]

So
\[
D = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{pmatrix}, \qquad
P = \begin{pmatrix} -1 & 0 & -2 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
and
\[
\underbrace{\begin{pmatrix} -1 & 0 & -2 \\ 2 & 1 & 4 \\ 0 & 0 & 1 \end{pmatrix}}_{=P^{-1}} \underbrace{\begin{pmatrix} 4 & 0 & -2 \\ 2 & 5 & 4 \\ 0 & 0 & 5 \end{pmatrix}}_{=A} \underbrace{\begin{pmatrix} -1 & 0 & -2 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{=P} = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{pmatrix}.
\]
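The diagonalization of Problem 17 can be verified end to end (plain-Python sketch; the matrix labelled Pinv is the hand-computed inverse of P):

```python
# Verify P^{-1} A P = D for Problem 17.
A = [[4, 0, -2], [2, 5, 4], [0, 0, 5]]
P = [[-1, 0, -2], [2, 1, 0], [0, 0, 1]]
Pinv = [[-1, 0, -2], [2, 1, 4], [0, 0, 1]]  # inverse of P, computed by hand
D = [[4, 0, 0], [0, 5, 0], [0, 0, 5]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def matmul(X, Y):
    """3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

print(matmul(Pinv, P) == I3)            # True: Pinv really inverts P
print(matmul(Pinv, matmul(A, P)) == D)  # True: P^{-1} A P = D
```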

Problem 18. Diagonalize
\[
B = \begin{pmatrix} 4 & 0 & -2 \\ 2 & 5 & 5 \\ 0 & 0 & 5 \end{pmatrix}.
\]
0 0 5


\[
\det (B - \lambda I) = \begin{vmatrix} 4-\lambda & 0 & -2 \\ 2 & 5-\lambda & 5 \\ 0 & 0 & 5-\lambda \end{vmatrix}
= (5-\lambda) \begin{vmatrix} 4-\lambda & -2 \\ 0 & 5-\lambda \end{vmatrix}
= (5-\lambda)^2 (4-\lambda).
\]

λ = 4:
\[
B - 4I = \begin{pmatrix} 0 & 0 & -2 \\ 2 & 1 & 5 \\ 0 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 2 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}
\Rightarrow \operatorname{Null} (B - 4I) = \operatorname{span} \left\{ \begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix} \right\}.
\]
 

λ = 5:
\[
B - 5I = \begin{pmatrix} -1 & 0 & -2 \\ 2 & 0 & 5 \\ 0 & 0 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}
\Rightarrow \operatorname{Null} (B - 5I) = \operatorname{span} \left\{ \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \right\}.
\]
 

Wait! We don't have 3 linearly independent eigenvectors! B is not diagonalizable (this sometimes happens with repeated eigenvalues).
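The failure can be quantified by computing the dimension of each eigenspace, dim Null(B − λI) = 3 − rank(B − λI). A sketch using exact rational Gaussian elimination (plain Python):

```python
# B has eigenvalue 5 with algebraic multiplicity 2 but only a 1-dimensional
# eigenspace, so the eigenspace dimensions sum to 2 < 3: not diagonalizable.
from fractions import Fraction

B = [[4, 0, -2], [2, 5, 5], [0, 0, 5]]

def matrix_rank(M):
    """Rank via Gaussian elimination over exact rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def eigenspace_dim(lam):
    """dim Null(B - lam*I) = 3 - rank(B - lam*I)."""
    shifted = [[B[i][j] - (lam if i == j else 0) for j in range(3)]
               for i in range(3)]
    return 3 - matrix_rank(shifted)

print(eigenspace_dim(4), eigenspace_dim(5))  # 1 1  -> only 2 < 3 in total
```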
Example 19. Recall that not every polynomial has real roots, so a matrix may have no real eigenvalues. For example, for
\[
A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad
\det(A - \lambda I) = \begin{vmatrix} -\lambda & -1 \\ 1 & -\lambda \end{vmatrix} = \lambda^2 + 1.
\]
The eigenvalues only exist as complex numbers, λ = ±i (not part of this lecture).
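Python's built-in complex numbers make the point concrete (a small sketch; nothing beyond the standard library is assumed):

```python
# The characteristic polynomial of the rotation matrix [[0,-1],[1,0]] is
# p(lam) = lam**2 + 1, which is positive for every real lam but vanishes
# at the complex numbers i and -i.
def p(lam):
    return lam * lam + 1

print(all(p(x / 10) > 0 for x in range(-1000, 1001)))  # True: no real root
print(p(1j), p(-1j))                                   # 0j 0j
```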

Review
1. Show that if A is invertible, then det A^{-1} = 1 / det A.
2. Find a formula for det (rA) when A is an n × n matrix.

   
3. Let A = \begin{pmatrix} -6 & 12 \\ -3 & 6 \end{pmatrix}, w = \begin{pmatrix} 2 \\ 1 \end{pmatrix}. Determine if w is in Col (A). Is w in Null (A)?
4. True/False: The range of a linear transformation is a vector space.
5. Given vectors u_1 , . . . , u_p , and w in V , show that w is a linear combination of u_1 , . . . , u_p if and only if [w]_B is a linear combination of [u_1]_B , . . . , [u_p]_B .

6. True/False: If Av = λv for some v ∈ R^n , then λ is an eigenvalue of A.
7. Show that λ is an eigenvalue of A iff λ is an eigenvalue of A^T .
8. True/False: If det A = 0, then 0 is an eigenvalue of A.
