
11
Eigen Values and Eigen Vectors

11.0. Polynomials of matrices and linear operators. Consider a polynomial
f(t) = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n.
Let A be a square matrix of any order. We define
f(A) = a_0 I + a_1 A + a_2 A^2 + ... + a_n A^n,
I being the identity matrix.
A is called a zero or root of the polynomial f(t) if f(A) = 0.
Let T : V -> V be a linear operator on a vector space V and
f(t) = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n.
Then we define
f(T) = a_0 I + a_1 T + a_2 T^2 + ... + a_n T^n.
T is called a zero or root of f(t) if f(T) = 0.
If A is a matrix representation of a linear operator T, then f(A) is the matrix
representation of f(T).
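As a small illustrative sketch (not from the text; the coefficients and the matrix
below are arbitrary choices), the evaluation of f(A) can be carried out term by term:

    import numpy as np

    coeffs = [2.0, 3.0, 1.0]                 # hypothetical a_0, a_1, a_2 for f(t) = 2 + 3t + t^2
    A = np.array([[1., 2.],
                  [0., 3.]])                 # arbitrary square matrix

    # f(A) = a_0 I + a_1 A + a_2 A^2, built term by term from matrix powers.
    f_A = sum(a * np.linalg.matrix_power(A, k) for k, a in enumerate(coeffs))
    print(f_A)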
11.1. Eigen value and eigen vector. Let T be a linear operator on a vector
space V over a field K.
A scalar λ ∈ K is called an eigen value of T if ∃ a non-zero vector v ∈ V s.t.
T(v) = λv.  ... (1)
If A is a matrix representation of T in any basis, we also write Av = λv.
Such a vector v is called an eigen vector of T corresponding to the eigen value λ.
The following have the same meaning:
characteristic vector, proper vector, latent vector, eigen vector.
Similarly the following have the same meaning:
characteristic value, proper value, eigen value, spectral value, latent value.
If v is an eigen vector of T belonging to the eigen value λ and a is any non-zero
scalar, then, from (1),
T(av) = aT(v) = a(λv) = λ(av).
Thus av is also an eigen vector of T belonging to the same eigen value λ. This
shows that any non-zero scalar multiple of an eigen vector belonging to the eigen
value λ is again an eigen vector belonging to the same eigen value λ.
The set of all such eigen vectors is called the eigen space of λ. Later on we shall
show that this eigen space is a subspace of V.
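A quick numerical illustration (assuming numpy; the symmetric 2 x 2 matrix is an
arbitrary example): a computed eigen vector and any non-zero multiple of it
satisfy Av = λv.

    import numpy as np

    A = np.array([[2., -1.],
                  [-1., 2.]])                       # arbitrary example
    vals, vecs = np.linalg.eig(A)                   # columns of vecs are eigen vectors

    lam, v = vals[0], vecs[:, 0]
    print(np.allclose(A @ v, lam * v))              # True: A v = lambda v
    print(np.allclose(A @ (5 * v), lam * (5 * v)))  # True: 5v belongs to the same eigen value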

11.2. Matric polynomials. An expression of the form
F(λ) = A_m λ^m + A_{m-1} λ^{m-1} + ... + A_1 λ + A_0
is called a matric polynomial of degree m if
(i) A_m, A_{m-1}, ..., A_1, A_0 are all square matrices of the same order n,
(ii) A_m ≠ 0.
This matric polynomial is called an n-rowed matric polynomial. The symbol λ is
called the indeterminate.
Two matric polynomials are said to be equal if the coefficients of like powers of
λ are the same, i.e., if
F(λ) = A_m λ^m + A_{m-1} λ^{m-1} + ... + A_1 λ + A_0,
G(λ) = B_m λ^m + B_{m-1} λ^{m-1} + ... + B_1 λ + B_0,
then we write F(λ) = G(λ) if A_r = B_r for r = 0, 1, 2, ..., m.
11.3. Definition. Let A be a square matrix of order n and λ an indeterminate.
Then the matric polynomial A - λI of the first degree is called the characteristic
matrix of A, I being the unit matrix of order n.
The determinant |A - λI| is called the characteristic polynomial of A. Clearly
this determinant is an ordinary polynomial of degree n.
The equation |A - λI| = 0 is called the characteristic equation of A.
The roots of this equation are called characteristic roots or latent roots or
eigen values.
The set of eigen values of A is called the spectrum of A.
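As a hedged numerical sketch (assuming numpy; the 2 x 2 matrix is an arbitrary
example), the spectrum and the coefficients of the characteristic polynomial can
be read off directly:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])           # arbitrary example

    print(np.linalg.eigvals(A))         # the spectrum of A: the eigen values 3 and 1
    print(np.poly(A))                   # [ 1. -4.  3.], i.e. det(xI - A) = x^2 - 4x + 3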
Remark. We have defined latent roots of a square matrix A in two ways:
(i) The roots of the equation |A - λI| = 0 are called latent roots of A.
(ii) If Au = λu, u ≠ 0, then λ is called a latent root of A corresponding to the eigen
vector u.
Now we shall show the equivalence of these two definitions in the following
theorem.
Theorem 1. The equation Au = λu has a non-trivial solution u iff λ is a latent
root of A.
Or, λ is a latent root of a matrix A iff it is a root of the characteristic equation of A.
Or, The eigen values of a linear transformation A are the scalars λ which satisfy the
equation |A - λI| = 0.
Or, The scalar λ is a characteristic root of the matrix A iff the matrix A - λI is
singular.
Proof. Let λ_1 be a characteristic root of a square matrix A.
To prove that λ_1 is a root of the characteristic equation of A, i.e., to prove that
λ_1 satisfies the equation |A - λI| = 0.
Our assumption ⇒ ∃ a latent vector u_1 of A s.t. Au_1 = λ_1 u_1
⇒ Au_1 - λ_1 I u_1 = 0
⇒ (A - λ_1 I) u_1 = 0, u_1 ≠ 0
⇒ A - λ_1 I is singular
⇒ |A - λ_1 I| = 0
⇒ λ_1 satisfies the equation |A - λI| = 0.
Conversely, suppose that λ_1 satisfies the equation
|A - λI| = 0.
To prove that λ_1 is a latent root of A.
By assumption, |A - λ_1 I| = 0.
This ⇒ A - λ_1 I is a singular matrix
⇒ (A - λ_1 I)u = 0 has a non-trivial solution u
⇒ ∃ a vector u_1 ≠ 0 s.t. (A - λ_1 I)u_1 = 0
⇒ Au_1 = λ_1 I u_1 = λ_1 (I u_1) = λ_1 u_1
⇒ λ_1 is a latent root of A.
Problem 1. If λ_1 is a latent root of A, then there exists more than one latent
vector belonging to the same eigen value λ_1.
Solution. Let λ_1 be a latent root of A corresponding to the latent vector u_1.
To prove that ∃ more than one latent vector belonging to the same latent value.
By assumption, Au_1 = λ_1 u_1.
Let a be any scalar; then
A(au_1) = aAu_1 = aλ_1 u_1 = λ_1 (au_1), or A(au_1) = λ_1 (au_1).
This proves that au_1 is an eigen vector of A belonging to the same eigen value λ_1
for every non-zero value of a.   Proved.
Ex. 2. Given any latent vector u of A, there exists only one latent root λ_1
corresponding to u.
Solution. Let u_1 be an eigen vector of A so that there exists a scalar λ_1 s.t.
Au_1 = λ_1 u_1.
To prove that λ_1 is unique.
Suppose not. Then ∃ another eigen value λ_2 s.t. Au_1 = λ_2 u_1 and λ_1 ≠ λ_2.
Now λ_1 u_1 = Au_1 = λ_2 u_1
or (λ_1 - λ_2) u_1 = 0.
But u_1, being an eigen vector, is a non-zero vector, so that λ_1 - λ_2 = 0.
This contradicts λ_1 ≠ λ_2. Hence our assumption λ_1 ≠ λ_2 is wrong, so that
λ_1 = λ_2.   Proved.
11.4. Characteristic subspace of a matrix. Let λ be an eigen value of a
square matrix A. Consider the matrix equation
(A - λI) u = 0.  ... (1)
Also, by definition of an eigen value,
|A - λI| = 0.
Hence (1) has non-trivial solutions u, and the number of linearly independent
solutions is n - r, r being the rank of A - λI. All these solutions are clearly latent
vectors of A corresponding to the latent value λ. The space spanned by these
vectors is a vector space, and this vector space is defined as the characteristic
subspace or eigen space of A corresponding to the eigen value λ.
11.5. Eigen space.
Definition. Let T be a linear operator on a vector space V(K). Let A be the
matrix representation of T relative to any basis of V. Let λ be an eigen value of T so
that ∃ v ≠ 0 ∈ V s.t. T(v) = λv.
The eigen space of λ, denoted by V_λ, is defined as
V_λ = {v ∈ V : T(v) = λv}.
V_λ is a subspace of V(K). (Refer Theorem 11.)
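A small computational sketch (assuming numpy; the matrix is an arbitrary
example): the eigen space for λ is the null space of A - λI, of dimension n - r, and
a basis for it can be read off from the SVD.

    import numpy as np

    A = np.array([[2., 1.],
                  [0., 2.]])
    lam = 2.0
    n = A.shape[0]

    M = A - lam * np.eye(n)
    r = np.linalg.matrix_rank(M)
    _, s, Vt = np.linalg.svd(M)
    basis = Vt[r:].T                    # columns span the solutions of (A - lam*I)u = 0
    print(n - r)                        # dimension of the eigen space (here 1)
    print(np.allclose(M @ basis, 0))    # True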
Theorem 2. (Cayley Hamilton Theorem) (Meerut 1997)
Every square matrix satisfies its own characteristic equation.
An alternate form of the theorem. Every matrix is a zero of its characteristic
polynomial.
Proof. Let A be a square matrix of order n, I an identity matrix of order n, and x
an indeterminate. Then |A - xI| = 0 is the characteristic equation of A.
Let A = [a_ij]_{n x n}. Then

|A - xI| =
| a_11 - x    a_12       a_13     ...   a_1n     |
| a_21        a_22 - x   a_23     ...   a_2n     |
| ...                                            |
| a_n1        a_n2       a_n3     ...   a_nn - x |

= (-1)^n {x^n + p_1 x^{n-1} + p_2 x^{n-2} + ... + p_n}, say
= a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + ... + a_0, say.
Thus |A - xI| is expressible as
|A - xI| = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n.  ... (1)
The characteristic equation of A is
|A - xI| = 0,
i.e., a_0 + a_1 x + a_2 x^2 + ... + a_n x^n = 0.
We have to prove that A satisfies this equation, i.e., to prove that
a_0 I + a_1 A + a_2 A^2 + ... + a_n A^n = 0.
Since each element of (A - xI) is a polynomial of degree 1 (at most), each cofactor
of |A - xI| is an ordinary polynomial in x of degree n - 1 at most. Hence each
element of adj (A - xI) is an ordinary polynomial in x of degree (n - 1) at most, and
so adj (A - xI) is expressible as
adj (A - xI) = B_0 + B_1 x + B_2 x^2 + ... + B_{n-1} x^{n-1},  ... (2)
where B_0, B_1, ..., B_{n-1} are all square matrices of the same order n.
We know that
(A - xI) adj (A - xI) = |A - xI| I.
Making use of (1) and (2),
(A - xI)(B_0 + B_1 x + B_2 x^2 + ... + B_{n-1} x^{n-1})
        = I (a_0 + a_1 x + a_2 x^2 + ... + a_n x^n).
Comparing the coefficients of the like powers of x on both sides,
A B_0 = a_0 I,
A B_1 - B_0 = a_1 I,
A B_2 - B_1 = a_2 I,
....................................
A B_{n-1} - B_{n-2} = a_{n-1} I,
- B_{n-1} = a_n I.
Premultiplying these equations successively by I, A, A^2, A^3, ..., A^n and then
adding, we get
0 = a_0 I + a_1 A + a_2 A^2 + ... + a_n A^n.   Proved.
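A hedged numerical check (assuming numpy; the 2 x 2 matrix is an arbitrary
example, not one from the text): substituting A into its own characteristic
polynomial gives the zero matrix.

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])             # arbitrary example
    c = np.poly(A)                       # monic characteristic polynomial, highest power first
    n = A.shape[0]

    # A^n + c_1 A^(n-1) + ... + c_n I should be the zero matrix by the Cayley Hamilton theorem.
    result = sum(coef * np.linalg.matrix_power(A, n - k) for k, coef in enumerate(c))
    print(np.allclose(result, np.zeros((n, n))))    # True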
11.6. Expression for the inverse of a non-singular matrix. Let A be a
non-singular matrix of order n so that |A| ≠ 0. Then
|A - xI| = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n = 0  ... (1)
is the characteristic equation of A.
By the Cayley Hamilton theorem,
a_0 I + a_1 A + a_2 A^2 + ... + a_n A^n = 0.  ... (2)
From (1), |A - xI| = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n.  ... (3)
This is true for every x and hence in particular for x = 0, so that (3) gives
|A - 0I| = a_0 or |A| = a_0. For |A| ≠ 0, a_0 ≠ 0.
So we can divide (2) by a_0. Then
I = -(1/a_0)(a_1 A + a_2 A^2 + ... + a_n A^n).
Post-multiplying by A^{-1} and noting that
I A^{-1} = A^{-1}, A^2 A^{-1} = A(A A^{-1}) = A I = A,
A^3 A^{-1} = A^2 (A A^{-1}) = A^2 I = A^2, and so on,
we obtain
A^{-1} = -(1/a_0)(a_1 I + a_2 A + a_3 A^2 + ... + a_n A^{n-1}).
This is the required expression for A^{-1}.
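A minimal numerical sketch of this formula (assuming numpy; the matrix is an
arbitrary non-singular example). The coefficients a_k of |A - xI| are obtained here
from np.poly, which returns det(xI - A), so a sign adjustment by (-1)^n is applied:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])                 # arbitrary non-singular example
    n = A.shape[0]

    p = np.poly(A)                           # det(xI - A) = x^n + c_1 x^(n-1) + ... + c_n
    a = (-1) ** n * p[::-1]                  # a_k with |A - xI| = a_0 + a_1 x + ... + a_n x^n
    A_inv = -(1.0 / a[0]) * sum(a[k] * np.linalg.matrix_power(A, k - 1) for k in range(1, n + 1))
    print(np.allclose(A_inv, np.linalg.inv(A)))   # True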
Positive matrix.
Definition. A square matrix A = [a_ij]_{n x n} over the field of complex numbers
is called positive if
(i) A = A*,
(ii) for every sequence (x_i) of complex numbers, not all zero, we have
Σ_i Σ_j a_ij x̄_i x_j > 0.
Here A* = conjugate transpose of A.
If the field is of reals, then the bar may be removed. If the field is complex,
then (ii) ⇒ (i), and therefore condition (i) may be dropped from the statement.

Example. Prove that every entry on the main diagonal of a positive matrix is
positive.
Solution. Let A = [a_ij]_{n x n} be a positive matrix. Then, by def.,
Σ_i Σ_j a_ij x̄_i x_j > 0,  ... (1)
where x_1, x_2, ..., x_n are scalars, not all zero.
Now suppose x_j = 0 for every j except j = i, and x_i = 1. Then (1) becomes
a_ii x̄_i x_i > 0 or a_ii |x_i|^2 > 0
or a_ii > 0, as |x_i|^2 > 0.
From this we conclude that
a_ii > 0 for all i = 1, 2, ..., n.
Hence each entry on the main diagonal of a positive matrix is positive.
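An illustrative numerical sketch (assuming numpy; the real symmetric positive
matrix below is an arbitrary example): choosing the sequence with x_i = 1 and all
other entries 0 recovers the diagonal entry a_ii, which is positive.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])          # real symmetric, positive: x^T A x > 0 for x != 0
    n = A.shape[0]

    for i in range(n):
        x = np.zeros(n)
        x[i] = 1.0                    # the choice x_i = 1, all other x_j = 0
        print(x @ A @ x)              # equals a_ii; prints 2.0 and 3.0, both positive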
Ex. 3. Find the characteristic equation of the matrix

A =
|  2  -1   1 |
| -1   2  -1 |
|  1  -1   2 |

and verify that it is satisfied by A.
Solution. (i) We have

|A - xI| =
| 2-x   -1     1  |
| -1    2-x   -1  |
|  1    -1    2-x |
= -x^3 + 6x^2 - 9x + 4.

The characteristic equation of A is
|A - xI| = 0,
i.e., -x^3 + 6x^2 - 9x + 4 = 0
or x^3 - 6x^2 + 9x - 4 = 0.  ... (1)
This is the required characteristic equation of A.
(ii) To verify that A satisfies (1).
For this we must show that
A^3 - 6A^2 + 9A - 4I = 0.  ... (2)
Now

A^2 = A·A =
|  6  -5   5 |
| -5   6  -5 |
|  5  -5   6 |

A^3 = A·A^2 =
|  22  -21   21 |
| -21   22  -21 |
|  21  -21   22 |

L.H.S. of (2) = A^3 - 6A^2 + 9A - 4I
=
| 22-36+18-4    -21+30-9        21-30+9      |
| -21+30-9       22-36+18-4    -21+30-9      |
| 21-30+9       -21+30-9        22-36+18-4   |
=
| 0  0  0 |
| 0  0  0 |  = O = R.H.S. of (2).   Proved.
| 0  0  0 |
Problem 4. Determine A^{-1} for the matrix A given in Ex. 3.
Solution. By Ex. 3,
A^3 - 6A^2 + 9A - 4I = 0,
so that A(A^2 - 6A + 9I) = 4I and hence
A^{-1} = (1/4)(A^2 - 6A + 9I).   Ans.
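A quick numerical cross-check of Ex. 3 and Problem 4 (assuming numpy):

    import numpy as np

    A = np.array([[2., -1., 1.],
                  [-1., 2., -1.],
                  [1., -1., 2.]])
    I = np.eye(3)

    print(np.poly(A))                             # [ 1. -6.  9. -4.], i.e. x^3 - 6x^2 + 9x - 4
    A_inv = (np.linalg.matrix_power(A, 2) - 6 * A + 9 * I) / 4.0
    print(np.allclose(A_inv, np.linalg.inv(A)))   # True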

Theorem 3. The eigen vectors corresponding to distinct eigen roots of a matrix
are linearly independent.
Proof. Let u_1, u_2, ..., u_n be eigen vectors of A corresponding to the distinct
roots λ_1, λ_2, ..., λ_n. Then
A u_r = λ_r u_r, r = 1, 2, ..., n,  ... (1)
λ_r ≠ λ_s for r, s s.t. r ≠ s.
To prove that u_1, u_2, ..., u_n form a linearly independent set.
Let a_1 u_1 + a_2 u_2 + ... + a_n u_n = 0, a_1, a_2, ..., a_n being scalars.  ... (2)
Premultiplying (2) by A and then using (1),
a_1 λ_1 u_1 + a_2 λ_2 u_2 + ... + a_n λ_n u_n = 0.  ... (3)
Again premultiplying this by A and using (1), we obtain
a_1 λ_1^2 u_1 + a_2 λ_2^2 u_2 + ... + a_n λ_n^2 u_n = 0.  ... (4)
Repeating this process,
a_1 λ_1^{n-1} u_1 + a_2 λ_2^{n-1} u_2 + ... + a_n λ_n^{n-1} u_n = 0.  ... (5)
The set of equations (2), (3), (4), ..., (5) is equivalent to the single matrix
equation
[a_1 u_1   a_2 u_2   ...   a_n u_n] M = 0,
where M is the n x n matrix whose (r, s)-th element is λ_r^{s-1}, i.e.,

M =
| 1   λ_1   λ_1^2   ...   λ_1^{n-1} |
| 1   λ_2   λ_2^2   ...   λ_2^{n-1} |
| ...                               |
| 1   λ_n   λ_n^2   ...   λ_n^{n-1} |

Since λ_r ≠ λ_s for r ≠ s, this (Vandermonde) coefficient matrix M is non-singular.
Post-multiplying the last equation by M^{-1}, it says that
a_r u_r = 0;  r = 1, 2, ..., n.
For u_r ≠ 0 for any r, this gives
a_r = 0;  r = 1, 2, ..., n.
Thus we have shown that every relation of the type (2) implies that a_r = 0;
r = 1, 2, ..., n. Consequently the required result follows.
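An illustrative check (assuming numpy; the upper triangular matrix is an
arbitrary example with distinct eigen values): the matrix whose columns are the
corresponding eigen vectors has full rank, i.e. the eigen vectors are linearly
independent.

    import numpy as np

    A = np.array([[2., 0., 1.],
                  [0., 3., 0.],
                  [0., 0., 5.]])         # distinct eigen values 2, 3, 5

    vals, vecs = np.linalg.eig(A)
    print(vals)                          # the distinct eigen values
    print(np.linalg.matrix_rank(vecs))   # 3: the eigen vectors are linearly independent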
11.7. Annihilating polynomials.
Definition. Suppose T is a linear operator on a finite dimensional vector
space V(F). Let f(x) be a polynomial over a field F. Let A be an n x n matrix over F.
If f(T) = 0, then we say that the polynomial f(x) annihilates the linear operator
T.
If f(A) = 0, then we say that the polynomial f(x) annihilates the matrix A.
Example. Every linear operator T on an n-dimensional vector space satisfies
its characteristic equation.
11.8. Monic Polynomial.
Definition. A polynomial f(x) over a field F is called a monic polynomial if the
coefficient of the highest degree term in it is unity.
Examples. (i) x^3 - 6x^2 + 7x - 11 is a monic polynomial of degree 3 over the set
of integers.
(ii) x^4 - (3/2)x^2 + 5 is a monic polynomial of degree 4 over the set of rationals.
11.9. Minimal polynomial of a linear operator.
Definition. A monic polynomial f(x) of lowest degree over a field F is called the
minimal polynomial of T if f(x) annihilates T, i.e., if f(T) = 0. Also the equation
f(x) = 0 is called the minimal equation of the linear operator T. Similarly we define
the minimal polynomial of a matrix.
Theorem 4. The minimal polynomial of a matrix or linear operator is unique.
Proof. Suppose the minimal polynomial of a matrix A is of degree n. Then no
non-zero polynomial of degree less than n can annihilate A.
Let f(x) = x^n + a_1 x^{n-1} + a_2 x^{n-2} + ... + a_n
and g(x) = x^n + b_1 x^{n-1} + b_2 x^{n-2} + ... + b_n
be two minimal polynomials that annihilate A. Then
f(A) = 0, g(A) = 0,
i.e., A^n + a_1 A^{n-1} + a_2 A^{n-2} + ... + a_n I = 0,
A^n + b_1 A^{n-1} + b_2 A^{n-2} + ... + b_n I = 0.
Subtracting and writing a_m - b_m = c_m, we get
c_1 A^{n-1} + c_2 A^{n-2} + ... + c_n I = 0.
This shows that the polynomial h(x) = c_1 x^{n-1} + c_2 x^{n-2} + ... + c_n annihilates
A. Also the degree of h is n - 1 < n, which is not possible unless h is the zero
polynomial. Consequently
c_m = 0 or a_m - b_m = 0 or a_m = b_m for m = 1, 2, ..., n.
This ⇒ f = g. Hence the uniqueness.
Theorem 5. Let T be a diagonalizable linear operator and let c_1, c_2, ..., c_k be
the distinct characteristic values of T. Then the minimal polynomial for T is the
polynomial
p(x) = (x - c_1)(x - c_2)...(x - c_k).  ... (1)
Proof. c_1, c_2, ..., c_k are the distinct characteristic roots of T, and each
characteristic root of T is a root of the minimal polynomial for T. Hence x - c_h is a
factor of the minimal polynomial for T for each h. Consequently p(x) will be the
minimal polynomial for T if p(T) = 0.
Let u be a characteristic vector of T. Then, by def., Tu = c_h u for some h s.t.
1 ≤ h ≤ k.
This ⇒ (T - c_h I)u = 0
⇒ (T - c_1 I)(T - c_2 I)...(T - c_h I)...(T - c_k I)u = 0, the factors being commutative,
⇒ p(T)u = 0, by (1).
Since T is diagonalizable, its characteristic vectors span the whole space, so
p(T)u = 0 for every vector u, i.e., p(T) = 0.
Hence the result follows.
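A numerical sketch of Theorem 5 (assuming numpy; the diagonal matrix below is
an arbitrary diagonalizable example with a repeated eigen value): the product of
(x - c_i) over the distinct eigen values already annihilates the matrix, even though
the characteristic polynomial has higher degree.

    import numpy as np

    A = np.diag([2., 2., 5.])                 # diagonalizable; distinct eigen values 2 and 5
    distinct = np.unique(np.linalg.eigvals(A))
    n = A.shape[0]

    # Evaluate p(A) = (A - 2I)(A - 5I); Theorem 5 says this is the zero matrix.
    p_A = np.eye(n)
    for c in distinct:
        p_A = p_A @ (A - c * np.eye(n))
    print(np.allclose(p_A, 0))                # True, although the characteristic polynomial has degree 3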
Problem 5. Let V be a finite dimensional vector space. What is the minimal
polynomial for the identity operator on V? What is the minimal polynomial for the
zero operator?
Solution. Since I - 1·I = 0, the monic polynomial x - 1 annihilates the identity
operator I. Also x - 1 is a polynomial of lowest degree that annihilates I.
Consequently x - 1 is the minimal polynomial for I.
Second Part. Again, the monic polynomial x annihilates the zero operator O,
and it is the monic polynomial of lowest degree that annihilates O. Hence we see
that x is the minimal polynomial for O.
Problem 6. Let V be an n-dimensional vector space over F. What is the
characteristic polynomial of (i) the identity operator on V, (ii) the zero operator
on V?
Solution. Let B be any ordered basis for V.
(i) Then [I]_B = I = unit matrix.
The characteristic polynomial of I is
det (I - xI) = |diag [1-x, 1-x, ..., 1-x]| = (1 - x)^n.   Ans.
(ii) [O]_B = O = zero matrix.
The characteristic polynomial of O is
det (O - xI) = (-x)^n = (-1)^n x^n.   Ans.
Theorem 6. Let T be a linear operator on an n-dimensional vector space V and
let A be the matrix of T w.r.t. an ordered basis B. Then a vector u ∈ V is an eigen
vector of T corresponding to its eigen value c iff its co-ordinate vector X relative to B
is an eigen vector of A corresponding to the eigen value c.
Proof. We are given
(i) [T]_B = A, where B = {α_1, α_2, ..., α_n} is an ordered basis of V(F),
(ii) T(u) = cu,
(iii) X = [u]_B, so that u = Σ b_i α_i, where X = (b_1, b_2, ..., b_n).
Aim. T(u) = cu iff AX = cX.
[T - cI]_B = [T]_B - c[I]_B = A - cI.  ... (1)
Using the fact that [T(u)]_B = [T]_B [u]_B (Refer Theorem 2, Chapter 6), we get
(A - cI) X = [T - cI]_B [u]_B = [(T - cI)(u)]_B = [T(u) - cI(u)]_B = [T(u) - cu]_B
or AX - cX = [T(u) - cu]_B
or AX - cX = 0 iff T(u) - cu = 0.
This proves that AX = cX iff T(u) = cu.
Theorem 7. Similar matrices A and B have the same characteristic polynomial
and hence the same eigen values. If X is an eigen vector of A corresponding to the
eigen value c, then P^{-1}X is an eigen vector of B corresponding to the eigen value c,
where B = P^{-1}AP.
Proof. (i) A and B are similar matrices
⇒ ∃ a non-singular matrix P s.t. B = P^{-1}AP
⇒ B - xI = P^{-1}AP - xP^{-1}P = P^{-1}AP - P^{-1}(xI)P = P^{-1}(A - xI)P
⇒ det (B - xI) = det [P^{-1}(A - xI)P]
             = (det P^{-1}) det (A - xI) (det P)
             = (det P^{-1})(det P) det (A - xI).
But (det P^{-1})(det P) = det (P^{-1}P) = det I = 1.
∴ det (B - xI) = det (A - xI).
This shows that A and B have the same characteristic polynomials.
(ii) X is an eigen vector of A relative to the eigen value c
⇒ AX = cX.
Observe that B(P^{-1}X) = (P^{-1}AP)(P^{-1}X)
                      = (P^{-1}A)(PP^{-1})X = (P^{-1}A)IX
                      = P^{-1}(AX) = P^{-1}(cX) = cP^{-1}X,
or B(P^{-1}X) = c(P^{-1}X)
⇒ P^{-1}X is an eigen vector of B relative to the eigen value c.
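A hedged numerical illustration of Theorem 7 (assuming numpy; A and P below
are arbitrary choices):

    import numpy as np

    A = np.array([[4., 1.],
                  [2., 3.]])
    P = np.array([[1., 1.],
                  [0., 1.]])
    B = np.linalg.inv(P) @ A @ P                  # B = P^{-1} A P, similar to A

    print(np.allclose(np.poly(A), np.poly(B)))    # True: same characteristic polynomial

    vals, vecs = np.linalg.eig(A)
    c, X = vals[0], vecs[:, 0]
    Y = np.linalg.inv(P) @ X
    print(np.allclose(B @ Y, c * Y))              # True: P^{-1} X is an eigen vector of B for c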
Theorem 8. Let T be a linear operator on a finite dimensional vector space V.
Then the following are equivalent:
(i) c is a characteristic value of T;
(ii) the operator T - cI is singular (non-invertible);
(iii) det (T - cI) = 0.
Proof. To prove (i) ⇒ (ii).
c is a characteristic value of T
⇒ ∃ a non-zero vector (characteristic vector) u ∈ V s.t. T(u) = cu
⇒ T(u) - cI(u) = 0
⇒ (T - cI)(u) = 0 with u ≠ 0
⇒ T - cI is singular, by def.
To prove (ii) ⇒ (iii).
(ii) ⇒ T - cI is singular
⇒ ∃ a non-zero vector u ∈ V s.t. (T - cI)(u) = 0
⇒ det (T - cI) = 0.
[Condition for a non-trivial solution of the system (T - cI)(u) = 0.]
To prove (iii) ⇒ (i).
det (T - cI) = 0 ⇒ the system (T - cI)(u) = 0 has a non-trivial solution u ≠ 0.
Now (T - cI)(u) = 0 with u ≠ 0 ⇒ T(u) = cu
⇒ c is an eigen value of T.
This completes the proof.
Problem 7. Find the eigen values and the corresponding eigen spaces for the
matrix

|  8  -6   2 |
| -6   7  -4 |
|  2  -4   3 |

Solution. Let A denote the given matrix. Then

|A - xI| =
| 8-x   -6     2  |
| -6    7-x   -4  |
|  2    -4    3-x |
= -x^3 + 18x^2 - 45x.

Then |A - xI| = 0 gives
-x^3 + 18x^2 - 45x = 0
or x^3 - 18x^2 + 45x = 0
or x(x^2 - 18x + 45) = 0
or x(x - 15)(x - 3) = 0, giving x = 0, 3, 15.
∴ Eigen values are 0, 3, 15.
Consider the matrix equation Au = λu, u ≠ 0,
i.e., (A - λI) u = 0, where u = (x, y, z)^T.  ... (1)
Consider the case in which λ = 0.
Then (1) becomes (A - 0I) u = 0 or Au = 0.
This ⇒ 8x - 6y + 2z = 0, -6x + 7y - 4z = 0, 2x - 4y + 3z = 0.
On solving any two, we get x/1 = y/2 = z/2 = k_1 (say).

Then
u = k_1 (1, 2, 2)^T
is an eigen vector corresponding to the eigen value λ = 0, k_1 being any non-zero
real number.
Next consider the case in which λ = 3.
Then (1) becomes (A - 3I) u = 0.
This is equivalent to 5x - 6y + 2z = 0,
-6x + 4y - 4z = 0,
2x - 4y + 0z = 0.
This gives x/2 = y/1 = z/(-2) = b (say).
Then
u = b (2, 1, -2)^T
is an eigen vector corresponding to the eigen value λ = 3, b being any non-zero
real number.
Similarly
u = c (2, -2, 1)^T
is an eigen vector corresponding to the eigen value λ = 15.
Thus the subspaces spanned by the vectors (1, 2, 2)^T, (2, 1, -2)^T and (2, -2, 1)^T
separately are the three eigen spaces of A corresponding to the eigen values
λ = 0, 3, 15.
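A numerical cross-check of Problem 7 (assuming numpy):

    import numpy as np

    A = np.array([[8., -6., 2.],
                  [-6., 7., -4.],
                  [2., -4., 3.]])

    vals, vecs = np.linalg.eig(A)
    print(np.sort(np.round(vals, 6)))       # approximately [ 0.  3. 15.]

    # Each computed eigen vector is proportional to (1,2,2), (2,1,-2) or (2,-2,1).
    for lam, v in zip(vals, vecs.T):
        print(np.round(lam, 6), np.round(v / v[np.argmax(np.abs(v))], 6))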
Problem 8. Find the eigen values and eigen spaces of the matrix A given by

A =
| 1  2  3 |
| 0  2  3 |
| 0  0  2 |

Solution. We have

|A - xI| =
| 1-x   2    3  |
|  0   2-x   3  |
|  0    0   2-x |
= (1 - x)(2 - x)(2 - x).

|A - xI| = 0 gives (1 - x)(2 - x)(2 - x) = 0, giving x = 1, 2, 2.
Consider the matrix equation
Au = λu, i.e., (A - λI) u = 0, where u = (x, y, z)^T.  ... (1)
Consider the case in which λ = 1.
Then (1) is reduced to (A - I) u = 0,
which is equivalent to 2y + 3z = 0, y + 3z = 0, z = 0.
Solving, we get z = 0, y = 0, x = a.
∴ u = a (1, 0, 0)^T
is an eigen vector belonging to the eigen value λ = 1, a being any non-zero number.
Next consider the case in which λ = 2.
Then (1) is expressible as (A - 2I) u = 0.
This is equivalent to -x + 2y + 3z = 0, 3z = 0.
On solving, we get z = 0, x = 2y, so that
u = b (2, 1, 0)^T
is an eigen vector belonging to the eigen value λ = 2, b being any non-zero number.
Thus the two subspaces spanned by the vectors (1, 0, 0)^T and (2, 1, 0)^T
separately are the eigen spaces of A corresponding to the two distinct eigen values
λ = 1, 2.
Remark. In case λ = 2, if (A - 2I)u = 0 gives a system of equations of the type
x + 2y + z = 0, x + 2y + z = 0, x + 2y + z = 0,  ... (2)
or of the type
4x - 2y + 2z = 0, 2x - y + z = 0, -2x + y - z = 0,  ... (3)
then (2) is equivalent to the single equation
x + 2y + z = 0.
In this case there are two linearly independent solutions, viz.
x = 1, y = -1, z = 1;  x = 1, y = 0, z = -1, if λ = 2, 2.
Hence each of the vectors (1, -1, 1)^T and (1, 0, -1)^T is an eigen vector belonging
to the eigen value λ = 2, and the eigen space corresponding to λ = 2 is
two-dimensional.
Similarly (3) is equivalent to the single equation
2x - y + z = 0.
This determines two linearly independent solutions
x = 1, y = 2, z = 0;  x = -1, y = 0, z = 2, if λ = 2, 2.
In this case again the eigen space corresponding to λ = 2 is two-dimensional,
being spanned by the vectors (1, 2, 0)^T and (-1, 0, 2)^T.
The study of this remark is of vital importance for further study.


Remark. For the definitions of kernel, null space, singular and
non-singular transformation, refer to Chapter 5.
11.10. Diagonalization of a linear operator. Let T : V -> V be a linear
operator and V a finite dimensional vector space. Let the matrix representation of
T be the diagonal matrix

| a_1   0   ...   0  |
|  0   a_2  ...   0  |
| ...                |
|  0    0   ...  a_n |

This is possible only if there exists a basis {v_1, v_2, ..., v_n} of V s.t.
T(v_1) = a_1 v_1, T(v_2) = a_2 v_2, ..., T(v_n) = a_n v_n.
This implies that v_1, v_2, ..., v_n are eigen vectors of T corresponding to the eigen
values a_1, a_2, ..., a_n.
Under such circumstances, T is said to be a diagonalizable operator. Hence a
diagonalizable operator is defined as follows:
"If T is a linear operator on a finite dimensional vector space V, then T is said
to be a diagonalizable operator if ∃ a basis B for V s.t. each vector of B is an eigen
vector of T."
If V is n-dimensional, then T is diagonalizable iff T has n L.I. eigen vectors.
11.11. Diagonalizable Matrix.
Definition. A matrix A over a field F is said to be diagonalizable if it is similar
to a diagonal matrix D over F. It means that A is diagonalizable if ∃ a non-singular
matrix S such that
S^{-1} A S = diagonal matrix D.
Also S is then said to diagonalize A, or to transform A to diagonal form.
Theorem 9. A necessary and sufficient condition that an n x n matrix A over F
be diagonalizable is that A has n linearly independent eigen vectors in V_n(F).
Proof. I. Suppose A is a diagonalizable n x n matrix over F.
To prove that A has n linearly independent (L.I.) eigen vectors belonging to
V_n(F).
Our assumption ⇒ A is similar to a diagonal matrix D over F
⇒ ∃ a non-singular matrix P s.t. P^{-1}AP = D
⇒ P(P^{-1}AP) = PD ⇒ AP = PD.  ... (1)
We can write D as D = diagonal [d_1, d_2, ..., d_n].
Then it can be verified that the eigen values of D are precisely its diagonal
elements d_1, d_2, ..., d_n. But A is similar to D. Hence A also has the same eigen
values d_1, d_2, ..., d_n.
P is non-singular ⇒ |P| ≠ 0 ⇒ the columns of P are L.I.
Take P_1, P_2, ..., P_n as the columns of P. Then P_1, P_2, ..., P_n are L.I. Also (1) ⇒
A [P_1  P_2  ...  P_n] = [P_1  P_2  ...  P_n] diagonal [d_1, d_2, ..., d_n].
Equating columns on both sides,
A P_i = d_i P_i (for i = 1, 2, ..., n).  ... (2)
Now (2) declares that P_i is an eigen vector of A corresponding to the eigen value
d_i. But we have already shown that {P_1, P_2, ..., P_n} is L.I. Thus A has n L.I. eigen
vectors P_1, P_2, ..., P_n belonging to V_n(F).
II. Conversely, suppose that A has n L.I. eigen vectors X_1, X_2, ..., X_n
corresponding to the eigen values λ_1, λ_2, ..., λ_n, so that
A X_i = λ_i X_i.  ... (3)
Take X = [X_1  X_2  ...  X_n], i.e., X is the matrix formed by the column vectors
X_1, X_2, ..., X_n. Moreover these vectors are L.I. Hence X is non-singular.
(3) ⇒ AX = XB, where B = diagonal [λ_1, λ_2, ..., λ_n]
⇒ X^{-1}AX = B
⇒ A is similar to a diagonal matrix B
⇒ A is diagonalizable.
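A small computational sketch of Theorem 9 (assuming numpy; the matrix is an
arbitrary example): the matrix P whose columns are n linearly independent eigen
vectors transforms A to diagonal form.

    import numpy as np

    A = np.array([[4., 1.],
                  [2., 3.]])               # arbitrary example with distinct eigen values

    vals, P = np.linalg.eig(A)             # columns of P are eigen vectors of A
    D = np.linalg.inv(P) @ A @ P           # P^{-1} A P
    print(np.allclose(D, np.diag(vals)))   # True: A is diagonalizable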
Theorem 10. A linear operator T on an n-dimensional vector space V(F) is
diagonalizable iff its matrix A relative to an ordered basis B of V is diagonalizable.
Proof. Suppose T is a linear operator on an n-dimensional vector space V(F).
Also suppose that B is an ordered basis for V_n(F). Further suppose that A is the
matrix representation of the transformation T relative to the basis B.
Step (i). Let A be diagonalizable.
To prove that T is diagonalizable.
A is diagonalizable
⇒ A has n L.I. eigen vectors X_1, X_2, ..., X_n. (Refer Theorem 9.)
All these vectors belong to V_n(F).
Suppose α_1, α_2, ..., α_n are the n vectors of V having X_1, X_2, ..., X_n as their
co-ordinate vectors. Linear independence of {X_i} ⇒ {α_i} is L.I.
Hence α_1, α_2, ..., α_n are n linearly independent eigen vectors* of T. It means
that T, by def., is diagonalizable.
Step (ii). Let T be diagonalizable.
To prove that A is diagonalizable.
T is diagonalizable
⇒ T has n L.I. eigen vectors α_1, α_2, ..., α_n ∈ V.
Suppose X_i is the co-ordinate vector ∈ V_n(F) corresponding to the eigen vector
α_i ∈ V relative to B.
Linear independence of {α_i} ⇒ X_1, X_2, ..., X_n are L.I.
Now X_1, X_2, ..., X_n are n L.I. eigen vectors* of the matrix A. Hence, by the
previous theorem, A is diagonalizable.
*Suppose T is a linear operator on an n-dimensional vector space V and A is the
matrix representation of T relative to an ordered basis B of V. Then a vector α ∈ V
is an eigen vector of T corresponding to an eigen value c iff its co-ordinate vector X
relative to B is an eigen vector of A corresponding to the eigen value c. (Refer
Theorem 6.)
Problem 9. Prove that the characteristic roots of a diagonal matrix are the
diagonal elements. Hence show that if a matrix A is similar to a diagonal matrix D,
then the diagonal elements of D are the characteristic roots of A.
Solution. (i) Let D = diagonal [a_11, a_22, ..., a_nn], a diagonal matrix of
order n x n. Then
D - xI = diagonal [a_11 - x, a_22 - x, ..., a_nn - x],
so that |D - xI| = (a_11 - x)(a_22 - x) ... (a_nn - x).
The characteristic equation of D is |D - xI| = 0,
i.e., (a_11 - x)(a_22 - x) ... (a_nn - x) = 0.
This ⇒ x = a_11, a_22, ..., a_nn,
i.e., x = diagonal elements of D.
It means that the characteristic values of a diagonal matrix are its diagonal
elements.
(ii) Let a matrix A be similar to a diagonal matrix D.
To prove that the characteristic values of A are the diagonal elements of D.
A and D are similar
⇒ |A - xI| = |D - xI|
⇒ the characteristic roots of A and D are the same.
But the characteristic values of D are its diagonal elements. Hence the
characteristic values of A are the diagonal elements of D.
Problem 10. Let T be a linear operator on R^3 (R) which is represented in the
standard ordered basis by the matrix

|  -9   4   4 |
|  -8   3   4 |
| -16   8   7 |

Prove that T is diagonalizable.  (Meerut 1998, 97)
Solution. Let

A =
|  -9   4   4 |
|  -8   3   4 |
| -16   8   7 |

The characteristic equation of A is

|A - xI| =
| -9-x    4     4  |
| -8     3-x    4  |  = 0.
| -16     8    7-x |

Applying C_1 -> C_1 + C_2 + C_3 and then taking (-1 - x) common from C_1, we get

           | 1    4     4  |
(-1 - x) · | 1   3-x    4  |  = 0
           | 1    8    7-x |

or, by R_2 -> R_2 - R_1 and R_3 -> R_3 - R_1,

           | 1    4      4  |
(-1 - x) · | 0  -1-x     0  |  = 0.
           | 0    4    3-x  |

Expanding along C_1, -(1 + x)(1)(-1 - x)(3 - x) = 0, i.e., (1 + x)^2 (3 - x) = 0.
This ⇒ x = -1, -1, 3.
Hence the eigen values of A are -1, -1, 3.
If X is an eigen vector of A relative to the eigen value λ, then
AX = λX.
This ⇒ (A - λI) X = 0,  ... (1)
where X = (x, y, z)^T.
Case I. When λ = -1. Then (1) becomes

|  -8   4   4 |
|  -8   4   4 |  X = 0.
| -16   8   8 |

Applying R_2 -> R_2 - R_1 and R_3 -> R_3 - 2R_1, this reduces to the single equation
-8x + 4y + 4z = 0  ... (2)
or 2x - y - z = 0.  ... (3)
The rank of the coefficient matrix of (3) [i.e., of (2)] is 1. Hence the number of
linearly independent solutions is (number of columns of the coefficient matrix)
minus its rank = 3 - 1 = 2.
Two independent solutions of (3) are
X_1 = (1, 2, 0)^T and X_2 = (1, 0, 2)^T.
Case II. When λ = 3. Then (1) becomes

| -12   4   4 |
|  -8   0   4 |  X = 0.
| -16   8   4 |

Since R_1 - R_2 gives -4x + 4y = 0 and R_3 = 2R_1 - R_2, the system reduces to
-4x + 4y = 0, -8x + 4z = 0, i.e., x = y, z = 2x.
Taking x = 1, we have x = 1, y = 1, z = 2, so X_3 = (1, 1, 2)^T.
Let P be the matrix formed by the eigen vectors X_1, X_2, X_3 corresponding to
the eigen values λ = -1, -1, 3. Then

P =
| 1  1  1 |
| 2  0  1 |
| 0  2  2 |

and det P = |P| = -2 ≠ 0.
This ⇒ P is non-singular
⇒ the column vectors X_1, X_2, X_3 of P are L.I., and all these belong to R^3.
Finally, the matrix A has 3 L.I. eigen vectors and so A is diagonalizable (Refer
Theorem 9). Consequently T is diagonalizable, for A is the matrix representation
of T.
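A numerical cross-check of Problem 10 (assuming numpy):

    import numpy as np

    A = np.array([[-9., 4., 4.],
                  [-8., 3., 4.],
                  [-16., 8., 7.]])

    vals, vecs = np.linalg.eig(A)
    print(np.sort(np.round(vals, 6)))              # [-1. -1.  3.]
    print(np.linalg.matrix_rank(vecs))             # 3: three L.I. eigen vectors, so A is diagonalizable

    P = np.array([[1., 1., 1.],                    # the eigen vectors chosen in the solution above
                  [2., 0., 1.],
                  [0., 2., 2.]])
    print(np.round(np.linalg.inv(P) @ A @ P, 6))   # diag(-1, -1, 3)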
