
Eigenvalues and Eigenvectors and their Applications

By
Dr. P. K. Sharma
Sr. Lecturer in Mathematics
D.A.V. College, Jalandhar
Email Id: pksharma_davc@yahoo.co.in
The purpose of my lecture is to make you understand the following:

• What are eigenvectors and eigenvalues?
• What is the origin of eigenvectors and eigenvalues?
• Does every matrix have eigenvectors and eigenvalues?
• Are eigenvectors corresponding to a given eigenvalue unique?
• How many L.I. (linearly independent) eigenvectors exist corresponding to a given eigenvalue?
• What are the eigenvalues corresponding to special types of matrices, such as symmetric, skew-symmetric, orthogonal and unitary matrices?
• Some important theorems relating to eigenvalues
• Why are eigenvectors and eigenvalues important?
• What are the applications of eigenvectors and eigenvalues?
Linear algebra studies linear transformations,
which are represented by matrices acting on
vectors. Eigenvalues, eigenvectors and
eigenspaces are properties of a matrix.
• In general, a matrix acts on a vector by changing
both its magnitude and its direction. However, a
matrix may act on certain vectors by changing only
their magnitude, and leaving their direction
unchanged (or possibly reversing it). These vectors
are the eigenvectors of the matrix. A matrix acts
on an eigenvector by multiplying its magnitude by
a factor, which is positive if its direction is
unchanged and negative if its direction is reversed.
This factor is the eigenvalue associated with that
eigenvector.
Definition

• If A is an n × n matrix, then a nonzero vector x in Rn is called an eigenvector of A if Ax is a scalar multiple of x; that is,

        Ax = λx , for some scalar λ.

The scalar λ is called an eigenvalue of A, and x is called an eigenvector of A corresponding to the eigenvalue λ.

In short:
(1) An eigenvector is a vector that maintains its direction after undergoing a linear transformation.
(2) An eigenvalue is the scalar by which the eigenvector is multiplied during the linear transformation.
Example 1
Eigenvector of a 2×2 Matrix

• The vector x = [1, 2]T is an eigenvector of the matrix

        A = [ 3   0 ]
            [ 8  −1 ]

corresponding to the eigenvalue λ = 3, since

        Ax = [ 3   0 ] [ 1 ] = [ 3 ] = 3x
             [ 8  −1 ] [ 2 ]   [ 6 ]
Example 2
x Is Not an Eigenvector of a 2×2 Matrix

• The vector x = [2, 3]T is not an eigenvector of the matrix

        A = [ 3   0 ]
            [ 8  −1 ]

since there does not exist a scalar λ such that

        Ax = [ 3   0 ] [ 2 ] = [  6 ] = λx
             [ 8  −1 ] [ 3 ]   [ 13 ]

Hence the vector x is not an eigenvector of the matrix A.

Note: Not all matrices have eigenvalues. Only square matrices have eigenvalues and eigenvectors.
Origin of Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors have their origins in physics, in particular in problems where motion is involved, although their uses extend from solutions of stress and strain problems to differential equations and quantum mechanics.

Recall from the last class that we used matrices to deform a body – the concept of STRAIN.

Eigenvectors are vectors that point in directions where there is no rotation. Eigenvalues are the change in length of the eigenvector from the original length.

The basic equation in eigenvalue problems is Ax = λx.
How to Find the Eigenvalues of a Square Matrix of Order n

To find the eigenvalues of an n × n matrix A, we rewrite Ax = λx as Ax = λIx,
or equivalently,
        (λI − A)x = 0        (1)

Equation (1) has a nonzero solution if and only if
        det(λI − A) = 0        (2)

Equation (2) is called the characteristic equation of A; the scalars satisfying this equation are the eigenvalues of A. When expanded, det(λI − A) is a polynomial p in λ called the characteristic polynomial of A.

The set of all eigenvalues of A is called the Spectrum of A.
Example 3
Eigenvalues of a 3×3 Matrix

• Find the eigenvalues of
        A = [ 0    1   0 ]
            [ 0    0   1 ]
            [ 4  −17   8 ]

Solution.
The characteristic polynomial of A is

        det(λI − A) = det [  λ   −1     0   ] = λ³ − 8λ² + 17λ − 4
                          [  0    λ    −1   ]
                          [ −4   17   λ − 8 ]

The eigenvalues of A must therefore satisfy the characteristic equation
        λ³ − 8λ² + 17λ − 4 = 0        (2)

On solving, we find λ = 4 , 2 + √3 , 2 − √3.
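As a quick numerical check (a NumPy sketch, not part of the original lecture), the characteristic polynomial and the eigenvalues can be computed directly:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [4, -17, 8]])

# Coefficients of det(lambda*I - A): lambda^3 - 8 lambda^2 + 17 lambda - 4
coeffs = np.poly(A)

# Eigenvalues computed directly; sorted for easy comparison
eigs = np.sort(np.linalg.eigvals(A).real)
print(coeffs)   # ~ [1, -8, 17, -4]
print(eigs)     # ~ [2 - sqrt(3), 2 + sqrt(3), 4]
```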
Example 4
Eigenvalues of an Upper Triangular Matrix

• Find the eigenvalues of the upper triangular matrix

        A = [ a11  a12  a13  a14 ]
            [  0   a22  a23  a24 ]
            [  0    0   a33  a34 ]
            [  0    0    0   a44 ]

Solution.
Recalling that the determinant of a triangular matrix is the product of the entries on the main diagonal, we obtain

        det(λI − A) = det [ λ − a11   −a12      −a13      −a14   ]
                          [    0     λ − a22    −a23      −a24   ]
                          [    0        0      λ − a33    −a34   ]
                          [    0        0         0      λ − a44 ]

                    = (λ − a11)(λ − a22)(λ − a33)(λ − a44)

Thus, the characteristic equation is (λ − a11)(λ − a22)(λ − a33)(λ − a44) = 0
and the eigenvalues are λ = a11 , λ = a22 , λ = a33 , λ = a44,
which are precisely the diagonal entries of A.

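A one-line NumPy check of this fact (an illustrative sketch; the numeric entries are made up):

```python
import numpy as np

# An upper triangular matrix with made-up entries
A = np.array([[5.0, 2.0, 1.0, 7.0],
              [0.0, 3.0, 4.0, 2.0],
              [0.0, 0.0, -1.0, 6.0],
              [0.0, 0.0, 0.0, 2.5]])

# Eigenvalues of a triangular matrix are exactly its diagonal entries
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)  # ~ [-1, 2.5, 3, 5]
```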
Eigenvalues of Special Types of Matrices

Type of matrix                      Nature of eigenvalues
• Symmetric      ( AT = A )         Real
• Skew-symmetric ( AT = −A )        Purely imaginary or zero
• Orthogonal     ( ATA = I )        Unit modulus
• Hermitian      ( Aθ = A )         Real
• Skew-Hermitian ( Aθ = −A )        Purely imaginary or zero
• Unitary        ( AθA = I )        Unit modulus
Theorem 1
If A is an n×n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on the main diagonal of A.

Theorem 2
If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector, then λk is an eigenvalue of Ak and x is a corresponding eigenvector.

Theorem 3
A square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.

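Theorems 2 and 3 are easy to sanity-check numerically (a NumPy sketch using the matrix from Example 1):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [8.0, -1.0]])
x = np.array([1.0, 2.0])   # eigenvector of A with eigenvalue 3 (Example 1)

# Theorem 2: x is an eigenvector of A^k with eigenvalue lambda^k
A5 = np.linalg.matrix_power(A, 5)
assert np.allclose(A5 @ x, 3**5 * x)

# Theorem 3: A is invertible iff 0 is not an eigenvalue
eigs = np.linalg.eigvals(A)
assert not np.isclose(eigs, 0).any()
print("checks passed")
```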
Theorem 4
Equivalent Statements

• If A is an n × n matrix and λ is a real number, then the following are equivalent.
a) λ is an eigenvalue of A.
b) The system of equations (λI − A)x = 0 has nontrivial solutions.
c) There is a nonzero vector x in Rn such that Ax = λx.
d) λ is a solution of the characteristic equation det(λI − A) = 0.
Finding Eigenvectors Corresponding to a Given Eigenvalue
(or Bases for Eigenspaces)

• The eigenvectors of A corresponding to an eigenvalue λ are the nonzero vectors x that satisfy Ax = λx.

• Equivalently, the eigenvectors corresponding to λ are the nonzero vectors in the solution space of (λI − A)x = 0. We call this solution space the eigenspace of A corresponding to λ.

• Is the eigenvector x corresponding to an eigenvalue λ unique?
No; every scalar multiple of an eigenvector x is also an eigenvector corresponding to the eigenvalue λ, for
        A(kx) = k(Ax) = k(λx) = λ(kx).

• The set of L.I. eigenvectors forms a basis of the eigenspace.

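The scalar-multiple fact takes one line to verify (an illustrative NumPy sketch with the matrix and eigenvector from Example 1):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [8.0, -1.0]])
x = np.array([1.0, 2.0])      # eigenvector with eigenvalue 3
k = -2.5                      # any nonzero scalar

# A(kx) = lambda(kx): kx is still an eigenvector for lambda = 3
assert np.allclose(A @ (k * x), 3 * (k * x))
print("kx is still an eigenvector")
```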
Example 5
Finding Eigenvectors of a Square Matrix
(or Bases for Eigenspaces)

Find the eigenvectors, and hence bases for the eigenspaces, of

        A = [ 0  0  −2 ]        (1)
            [ 1  2   1 ]
            [ 1  0   3 ]

Solution.
The characteristic equation of the matrix A is
        λ³ − 5λ² + 8λ − 4 = 0 , or (λ − 1)(λ − 2)² = 0        (2)

Thus, the eigenvalues of A are λ = 1 and λ = 2, so we need to find the eigenvectors corresponding to these two distinct eigenvalues.
By definition,
        x = [ x1 ]
            [ x2 ]
            [ x3 ]

is an eigenvector of A corresponding to λ if and only if x is a nontrivial solution of (λI − A)x = 0, that is, of

        [  λ     0      2   ] [ x1 ]   [ 0 ]
        [ −1   λ − 2   −1   ] [ x2 ] = [ 0 ]        (3)
        [ −1     0    λ − 3 ] [ x3 ]   [ 0 ]

If λ = 2, then (3) becomes

        [  2   0   2 ] [ x1 ]   [ 0 ]
        [ −1   0  −1 ] [ x2 ] = [ 0 ]
        [ −1   0  −1 ] [ x3 ]   [ 0 ]

Solving this system yields x1 = −s , x2 = t , x3 = s.
Thus, the eigenvectors of A corresponding to λ = 2 are

        x = [ −s ]     [ −1 ]     [ 0 ]
            [  t ] = s [  0 ] + t [ 1 ]
            [  s ]     [  1 ]     [ 0 ]

Since [−1, 0, 1]T and [0, 1, 0]T are linearly independent eigenvectors, these vectors form a basis for the eigenspace corresponding to λ = 2.

Similarly, the eigenvectors corresponding to λ = 1 are the nonzero multiples of [−2, 1, 1]T, so this vector forms a basis for the eigenspace corresponding to λ = 1.

Note: The number of L.I. eigenvectors corresponding to an eigenvalue λ equals n − rank(λI − A), where n is the order of the square matrix A.
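This note, together with the bases found in Example 5, can be checked with NumPy (a sketch; the eigenspace dimension comes from the rank of λI − A):

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])
n = A.shape[0]

# Number of L.I. eigenvectors for each eigenvalue: n - rank(lambda*I - A)
dim_1 = n - np.linalg.matrix_rank(1 * np.eye(n) - A)   # lambda = 1
dim_2 = n - np.linalg.matrix_rank(2 * np.eye(n) - A)   # lambda = 2
print(dim_1, dim_2)  # 1 2

# The basis vectors found in Example 5 really are eigenvectors
for lam, v in [(2, [-1, 0, 1]), (2, [0, 1, 0]), (1, [-2, 1, 1])]:
    assert np.allclose(A @ np.array(v, float), lam * np.array(v, float))
```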


Theorem (5/1)
Equivalent Statements

• If A is an n × n matrix, and if TA: Rn → Rn is multiplication by A, then the following are equivalent.
a) A is invertible.
b) Ax = 0 has only the trivial solution.
c) The reduced row-echelon form of A is In.
d) A is expressible as a product of elementary matrices.
e) Ax = B is consistent for every n × 1 matrix B.
f) Ax = B has exactly one solution for every n × 1 matrix B.
g) det(A) ≠ 0.
Theorem ( 5/2 )
Equivalent Statements
h) The range of TA is Rn.

i) TA is one-to-one.

j) The column vectors of A are linearly independent.


k) The row vectors of A are linearly independent.
l) The column vectors of A span Rn.
m) The row vectors of A span Rn.
n) The column vectors of A form a basis for Rn.
o) The row vectors of A form a basis for Rn.
Theorem (5/3)
Equivalent Statements

p) A has rank n.
q) A has nullity 0.
r) The orthogonal complement of the nullspace of A is Rn.
s) The orthogonal complement of the row space of A is {0}.
t) ATA is invertible.
u) λ = 0 is not an eigenvalue of A.
Diagonalization

Definition: A square matrix A is called diagonalizable if there is an invertible matrix P such that P-1 AP is a diagonal matrix; the matrix P is said to diagonalize A.
Theorem 6
If v1, v2, … vk, are eigenvectors of A
corresponding to distinct eigenvalues λ1, λ2,
…, λk , then {v1, v2, … vk} is a linearly
independent set.
Theorem 7
If an n × n matrix A has n distinct
eigenvalues, then A is diagonalizable.

Theorem 8

• If A is an n × n matrix, then the following are equivalent.
a) A is diagonalizable.
b) A has n linearly independent eigenvectors.
Procedure for Diagonalizing a Matrix

• The preceding theorem guarantees that an n × n matrix A with n L.I. eigenvectors is diagonalizable, and the proof provides the following method for diagonalizing A.

Step 1. Find n L.I. eigenvectors of A, say, p1, p2, …, pn.
Step 2. Form the matrix P having p1, p2, …, pn as its column vectors.
Step 3. The matrix P-1 AP will then be diagonal with λ1, λ2, …, λn as its successive diagonal entries, where λi is the eigenvalue corresponding to pi, for i = 1, 2, …, n.
Example 6
Finding a Matrix P That Diagonalizes a Matrix A

• Find a matrix P that diagonalizes
        A = [ 0  0  −2 ]
            [ 1  2   1 ]
            [ 1  0   3 ]

Solution.
From Example 5 of the preceding section we found the characteristic equation of A to be (λ − 1)(λ − 2)² = 0, and we found the following bases for the eigenspaces:

        λ = 2 :  p1 = [ −1 ] , p2 = [ 0 ]        λ = 1 :  p3 = [ −2 ]
                      [  0 ]        [ 1 ]                      [  1 ]
                      [  1 ]        [ 0 ]                      [  1 ]
Example 6 (Cont.)
Finding a Matrix P That Diagonalizes a Matrix A

There are three basis vectors in total, so the matrix A is diagonalizable and

        P = [ −1  0  −2 ]
            [  0  1   1 ]
            [  1  0   1 ]

diagonalizes A. As a check, the reader should verify that

        P-1 AP = [  1  0   2 ] [ 0  0  −2 ] [ −1  0  −2 ]   [ 2  0  0 ]
                 [  1  1   1 ] [ 1  2   1 ] [  0  1   1 ] = [ 0  2  0 ]
                 [ −1  0  −1 ] [ 1  0   3 ] [  1  0   1 ]   [ 0  0  1 ]

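The check takes a couple of lines of NumPy (a sketch; P is the matrix just constructed):

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])
P = np.array([[-1.0, 0.0, -2.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# P^{-1} A P should be diagonal with the eigenvalues 2, 2, 1 on the diagonal
D = np.linalg.inv(P) @ A @ P
print(np.round(D))  # diag(2, 2, 1)
```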
Example 7
A Matrix That Is Not Diagonalizable

Find a matrix P (if possible) that diagonalizes the matrix

        A = [  1  0  0 ]
            [  1  2  0 ]
            [ −3  5  2 ]

Solution.
The characteristic polynomial of A is

        det(λI − A) = det [ λ − 1     0       0   ] = (λ − 1)(λ − 2)²
                          [  −1    λ − 2     0   ]
                          [   3     −5     λ − 2 ]
Example 7 (Cont.)
A Matrix That Is Not Diagonalizable

so the characteristic equation is
        (λ − 1)(λ − 2)² = 0

Thus, the eigenvalues of A are λ = 1 and λ = 2. We can easily show that bases for the eigenspaces are

        λ = 1 :  p1 = [  1/8 ]        λ = 2 :  p2 = [ 0 ]
                      [ −1/8 ]                      [ 0 ]
                      [   1  ]                      [ 1 ]

Since A is a 3×3 matrix and there are only two basis vectors in total, A is not diagonalizable.
Algebraic multiplicity of λ: the number of times the root λ occurs in the characteristic equation; it is denoted by Mλ.

Geometric multiplicity of λ: the number of L.I. eigenvectors associated with λ; it is denoted by mλ (= dimension of the eigenspace of λ).

In general, mλ ≤ Mλ.

Defect of λ: ∆λ = Mλ − mλ. An eigenvalue with ∆λ > 0 is called defective.

Remark: (1) Defective matrices are not diagonalizable.
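For the matrix of Example 7 these quantities can be computed directly (a NumPy sketch):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [-3.0, 5.0, 2.0]])
n = A.shape[0]

# lambda = 2: algebraic multiplicity 2 (double root of (lambda-1)(lambda-2)^2)
M = 2
# geometric multiplicity: n - rank(2I - A)
m = n - np.linalg.matrix_rank(2 * np.eye(n) - A)
defect = M - m
print(m, defect)  # 1 1  -> lambda = 2 is defective, so A is not diagonalizable
```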
Application of Eigenvalues and Eigenvectors

• Computing Powers of a Matrix:
There are numerous problems in applied mathematics that require the computation of high powers of a square matrix. We shall conclude this section by showing how diagonalization can be used to simplify such computations for diagonalizable matrices.

If A is an n × n matrix and P is an invertible matrix, then
        (P-1 AP)2 = (P-1 AP)(P-1 AP) = P-1 AIAP = P-1 A2P
More generally, for any positive integer k,
        (P-1 AP)k = P-1 Ak P
Computing Powers of a Matrix (cont.)

It follows from this equation that if A is diagonalizable and P-1 AP = D is a diagonal matrix, then
        P-1 Ak P = (P-1 AP)k = Dk

Solving this equation for Ak yields:
        Ak = P Dk P-1

This last equation expresses the kth power of A in terms of the kth power of the diagonal matrix D. But Dk is easy to compute; for example, if

        D = [ d1   0  ...   0 ]                [ d1^k    0   ...    0  ]
            [  0  d2  ...   0 ]  ,  then Dk =  [   0   d2^k  ...    0  ]
            [  :   :         : ]               [   :     :           :  ]
            [  0   0  ...  dn ]                [   0     0   ...  dn^k ]
Example 8
Power of a Matrix

• Find A13, where
        A = [ 0  0  −2 ]
            [ 1  2   1 ]
            [ 1  0   3 ]

Solution.
We showed in Example 5 that the matrix A has three L.I. eigenvectors, and so the matrix A is diagonalized by

        P = [ −1  0  −2 ]
            [  0  1   1 ]
            [  1  0   1 ]

and that
        D = P-1 AP = [ 2  0  0 ]
                     [ 0  2  0 ]
                     [ 0  0  1 ]
Example 8 (Cont.)
Power of a Matrix

Thus, we have

        A13 = P D13 P-1 = [ −1  0  −2 ] [ 2^13    0    0 ] [  1  0   2 ]
                          [  0  1   1 ] [   0   2^13   0 ] [  1  1   1 ]
                          [  1  0   1 ] [   0     0    1 ] [ −1  0  −1 ]

            = [ −8190     0    −16382 ]
              [  8191   8192    8191  ]
              [  8191     0    16383  ]
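The arithmetic above is easy to verify with NumPy (a sketch comparing the diagonalization route with direct matrix powering):

```python
import numpy as np

A = np.array([[0, 0, -2],
              [1, 2, 1],
              [1, 0, 3]])
P = np.array([[-1, 0, -2],
              [0, 1, 1],
              [1, 0, 1]])
D13 = np.diag([2**13, 2**13, 1])

# A^13 via diagonalization: P D^13 P^{-1}
A13 = P @ D13 @ np.linalg.inv(P)
print(np.round(A13).astype(int))
# Should match direct computation: np.linalg.matrix_power(A, 13)
```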
Cayley-Hamilton Theorem
Every square matrix satisfies its characteristic polynomial,
i.e. if p(λ) = det(A − λI) is the characteristic polynomial of an n × n matrix A, then p(A) = 0.

Application of the Cayley-Hamilton Theorem
The Cayley-Hamilton Theorem can be used to find:
• The power of a matrix, and
• The inverse of an n × n matrix A,
by expressing these as polynomials in A of degree < n.
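Both uses can be demonstrated in NumPy (a sketch; note that `np.poly` returns the coefficients of det(λI − A), which differs from the slide's det(A − λI) only by a factor (−1)ⁿ that does not affect p(A) = 0):

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])
n = A.shape[0]

# Monic coefficients of det(lambda*I - A): [1, -5, 8, -4] for this A
c = np.poly(A)

# Cayley-Hamilton: A^3 - 5A^2 + 8A - 4I = 0
p_A = sum(c[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
assert np.allclose(p_A, 0)

# Inverse from the theorem: multiply p(A) = 0 by A^{-1} and solve,
# giving A^{-1} = -(A^2 + c1*A + c2*I) / c3
A_inv = -(np.linalg.matrix_power(A, 2) + c[1] * A + c[2] * np.eye(n)) / c[3]
assert np.allclose(A_inv, np.linalg.inv(A))
print("Cayley-Hamilton verified")
```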
Use of Eigenvalues and Eigenvectors in Solving Linear Differential Equations

• The eigenvalue and eigenvector method of mathematical analysis is useful in many fields because it can be used to solve homogeneous linear systems of differential equations with constant coefficients. Furthermore, in chemical engineering many models are formed on the basis of systems of differential equations that are either linear or can be linearized and solved using the eigenvalue-eigenvector method. In general, many systems of ODEs can be linearized and therefore solved by this method.
How to Solve an Initial Value Problem Using Eigenvalues and Eigenvectors

• Solve the following initial value problem:

        du1/dt = a11 u1 + a12 u2 + ......... + a1n un
        du2/dt = a21 u1 + a22 u2 + ......... + a2n un
        .......................................................
        dun/dt = an1 u1 + an2 u2 + ......... + ann un

given that ui = bi when t = 0, for i = 1, 2, ...., n.

We write the above system in matrix form as:

        du/dt = Au ,  where u(t) = [ u1(t) ]       [ a11  a12  ...  a1n ]
                                   [ u2(t) ] , A = [ a21  a22  ...  a2n ]
                                   [   :   ]       [  :    :         :  ]
                                   [ un(t) ]       [ an1  an2  ...  ann ]

Let λ1, λ2, ......., λn be the eigenvalues of the matrix A and X1, X2, ............, Xn be the corresponding eigenvectors. Then solutions of this L.D.E. are given by:

        u = X e^(λt) , where X is the eigenvector corresponding to the eigenvalue λ.

The general solution is given by:

        u(t) = Σ (i = 1 to n) ci Xi e^(λi t)

Now, writing Xi = [xi1, xi2, ...., xin]T, the initial condition gives

        u(0) = [ b1 ]    n      [ xi1 ]
               [ b2 ] =  Σ  ci  [ xi2 ]
               [ :  ]   i=1     [  :  ]
               [ bn ]           [ xin ]

On solving these equations, we can find ci for i = 1, 2, ..., n.
So the solution of the given initial value problem is:

        u(t) = [ u1(t) ]    n      [ xi1 ]
               [ u2(t) ] =  Σ  ci  [ xi2 ]  e^(λi t)
               [   :   ]   i=1     [  :  ]
               [ un(t) ]           [ xin ]

On comparing, we get uk(t) = Σ (i = 1 to n) ci xik e^(λi t), for k = 1, 2, ....., n.
Example 9
Solve the Initial Value Problem

• Solve the initial value problem
        dv/dt = 3v        ; v = 6 at t = 0
        dw/dt = 8v − w    ; w = 5 at t = 0

• The problem is to find v(t) and w(t) for t > 0.
We write the system in matrix form as:

        du/dt = Au , where u(t) = [ v(t) ] , A = [ 3   0 ] , u(0) = [ 6 ]
                                  [ w(t) ]       [ 8  −1 ]          [ 5 ]

As in Example 1, we see that the eigenvalues of the matrix A are λ1 = 3 , λ2 = −1.
The corresponding eigenvectors are
        X1 = [ 1 ] , X2 = [ 0 ] .
             [ 2 ]        [ 1 ]

The solution to this L.D.E. is given by u = X e^(λt), where X is the eigenvector corresponding to the eigenvalue λ. The general solution is given by

        u(t) = C1 X1 e^(λ1 t) + C2 X2 e^(λ2 t) , where C1 and C2 are arbitrary constants.

So, u(0) = [ 6 ] = C1 [ 1 ] + C2 [ 0 ] , which implies that C1 = 6 and C2 = −7.
           [ 5 ]      [ 2 ]      [ 1 ]

So we get
        u(t) = [ v(t) ] = 6 [ 1 ] e^(3t) − 7 [ 0 ] e^(−t)
               [ w(t) ]     [ 2 ]            [ 1 ]

Thus, the solution of the given initial value problem is:
        v(t) = 6e^(3t)  and  w(t) = 12e^(3t) − 7e^(−t)
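The closed-form solution can be cross-checked against the eigen-decomposition of A (a NumPy sketch; t = 0.7 is an arbitrary test point):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [8.0, -1.0]])
u0 = np.array([6.0, 5.0])

# Eigen-decomposition route: u(t) = P exp(D t) P^{-1} u0
lam, P = np.linalg.eig(A)
t = 0.7
u_t = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P) @ u0

# Closed-form answer from Example 9
v = 6 * np.exp(3 * t)
w = 12 * np.exp(3 * t) - 7 * np.exp(-t)
assert np.allclose(u_t, [v, w])
print("solution verified at t =", t)
```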
Eigenvectors and eigenvalues are used in structural geology to determine the directions of principal strain: the directions where angles are not changing. In seismology, these are the directions of least compression (tension), the compression axis, and the intermediate axis (in three dimensions).

Some facts:
• The product of the eigenvalues = det(A)
• The sum of the eigenvalues = trace(A)

The (x, y) values of A can be thought of as representing points on an ellipse centered at (0, 0). The eigenvectors are then in the directions of the major and minor axes of the ellipse.
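Both facts are quick to confirm numerically (a NumPy sketch with an arbitrary matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigs = np.linalg.eigvals(A)

# Product of eigenvalues equals det(A); sum equals trace(A)
assert np.isclose(np.prod(eigs), np.linalg.det(A))
assert np.isclose(np.sum(eigs), np.trace(A))
print(np.prod(eigs), np.sum(eigs))  # ~ 10, 7 for this A
```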
Eigenvalues & Eigenvectors
• Examples of uses:
– Structural analysis (vibrations).
– Correlation analysis (statistical analysis and data mining).
• We use the Jacobi method, applicable to symmetric matrices only.
• A 2×2 rotation matrix is applied to the matrix to annihilate the largest off-diagonal element.
• This process is repeated until all off-diagonal elements are negligible.
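A minimal sketch of the Jacobi iteration described above (an illustrative implementation, not the slide author's code; it assumes a real symmetric input):

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-12, max_iter=100):
    """Classical Jacobi method: repeatedly rotate away the largest
    off-diagonal element of a real symmetric matrix."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(max_iter):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:          # all off-diagonal elements negligible
            break
        # Rotation angle that annihilates A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)                # 2x2 rotation embedded in the identity
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J
    return np.sort(np.diag(A))

print(jacobi_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # [1. 3.]
```

The method converges because each rotation strictly reduces the sum of squares of the off-diagonal elements.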

©DB Consulting, 1999, all rights reserved
Questions

Thanks
