

Suppose m is a natural number. According to Problem 8 in Section 13.6,

A^m = P D^m P^{-1}    [2]

Because D is a diagonal matrix, it is easy to calculate D^m, and [2] then yields a
simple way of finding A^m. So the remaining problem is to find the matrices P and
D needed in [1]. We note first the following result:

A and P^{-1}AP have the same eigenvalues    [14.11]

This is because these two matrices have the same characteristic polynomial, as

|P^{-1}AP - λI| = |P^{-1}AP - P^{-1}λIP| = |P^{-1}(A - λI)P| = |P^{-1}||A - λI||P|
                = |A - λI|
(This chain of equalities uses the facts that the determinant of a product is the
product of the determinants and that the determinant of an inverse matrix is the re-
ciprocal of the determinant of the matrix.) Now the eigenvalues of a diagonal matrix
are equal to the diagonal elements (see Example 14.13 of Section 14.4). It follows
that if A is diagonalizable so that [14.10] holds, then P^{-1}AP = diag(λ_1, ..., λ_n),
where λ_1, ..., λ_n are the eigenvalues of A, written in the appropriate order. Here
the eigenvalues are not necessarily all distinct.
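To make [2] and [14.11] concrete, here is a minimal numerical sketch in Python with NumPy (an illustrative addition, not part of the original text). It uses the matrix from Example 14.15 below and lets np.linalg.eig supply a candidate P whose columns are eigenvectors:

```python
import numpy as np

# A is the matrix of Example 14.15; its eigenvalues are -2 and 3.
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are corresponding eigenvectors, i.e. a candidate P.
eigvals, P = np.linalg.eig(A)

# [14.11]: A and P^{-1}AP have the same eigenvalues.
similar = np.linalg.inv(P) @ A @ P
assert np.allclose(np.sort(np.linalg.eigvals(similar)), np.sort(eigvals))

# [2]: A^m = P D^m P^{-1}; D^m just raises the diagonal entries to the m-th power.
m = 5
Am = P @ np.diag(eigvals**m) @ np.linalg.inv(P)
assert np.allclose(Am, np.linalg.matrix_power(A, m))
print(Am.round(6))
```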
These useful properties prompt two questions:

1. Which square matrices are diagonalizable?


2. If A is diagonalizable, how do we find the matrix P in [14.10]?

The answers are given in the next theorem.

Theorem 14.6  An n × n matrix A is diagonalizable iff A has a set of n
linearly independent eigenvectors x_1, ..., x_n. In this case,

P^{-1}AP = diag(λ_1, λ_2, ..., λ_n)    [14.12]

where P is the matrix with x_1, ..., x_n as columns, and λ_1, ..., λ_n are the
associated eigenvalues.
Proof  The n × n matrix A is diagonalizable iff there exists an invertible
matrix P = (p_ij)_{n×n} such that [14.10] holds, or equivalently, such that
AP = P diag(λ_1, ..., λ_n). This last equation can be written as

     | p_11  p_12  ...  p_1n | | λ_1   0   ...   0  |
AP = | p_21  p_22  ...  p_2n | |  0   λ_2  ...   0  |
     | ..................... | | .................. |
     | p_n1  p_n2  ...  p_nn | |  0    0   ...  λ_n |

     | λ_1 p_11   λ_2 p_12  ...  λ_n p_1n |
   = | λ_1 p_21   λ_2 p_22  ...  λ_n p_2n | = (λ_1 x_1, λ_2 x_2, ..., λ_n x_n)    [1]
     | .................................. |
     | λ_1 p_n1   λ_2 p_n2  ...  λ_n p_nn |

where the last equality follows from the fact that the columns of P are x_1,
x_2, ..., x_n. Moreover, AP = (Ax_1, Ax_2, ..., Ax_n). Thus, [1] is equivalent
to the n equations

Ax_k = λ_k x_k    (k = 1, 2, ..., n)    [2]

But these equations say that x_1, ..., x_n are eigenvectors of the matrix A
with λ_1, ..., λ_n as the corresponding eigenvalues. Because P has an inverse
iff |P| ≠ 0, and so iff x_1, ..., x_n are linearly independent, the proof of
Theorem 14.6 is complete.

Example 14.15
Verify Theorem 14.6 for

A = | 1  2 |
    | 3  0 |

(See Example 14.11(a) of Section 14.4.)

Solution  The eigenvalues are λ_1 = -2 and λ_2 = 3, and as corresponding
eigenvectors, we can choose the vectors

|  2 |       | 1 |
| -3 |  and  | 1 |

So we take

P = |  2  1 |    for which    P^{-1} = | 1/5  -1/5 |
    | -3  1 |                          | 3/5   2/5 |

With some routine algebra, we find that P^{-1}AP is the matrix diag(-2, 3),
which confirms Theorem 14.6.
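As a quick machine check of this example (again an illustrative addition, not from the original text), the following NumPy snippet builds P from the two chosen eigenvectors and confirms that P^{-1}AP = diag(-2, 3):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])

# Columns of P are the chosen eigenvectors (2, -3) and (1, 1).
P = np.array([[2.0, 1.0],
              [-3.0, 1.0]])

D = np.linalg.inv(P) @ A @ P
print(D.round(10))                      # [[-2.  0.] [ 0.  3.]]
assert np.allclose(D, np.diag([-2.0, 3.0]))
```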
Example 14.16
Verify Theorem 14.6 for the matrix B in Example 14.12 of Section 14.4.

Solution  The eigenvalues were λ_1 = 1 and λ_2 = λ_3 = 2. Moreover, we found
the three eigenvectors

|  3 |   | 2 |        | 2 |
| -1 | , | 1 | ,  and | 0 |
|  3 |   | 0 |        | 1 |

These vectors are linearly independent, so B is diagonalizable. If we choose

P = |  3  2  2 |                          | -1   2   2 |
    | -1  1  0 | , we find that  P^{-1} = | -1   3   2 |
    |  3  0  1 |                          |  3  -6  -5 |

and a routine calculation confirms that P^{-1}BP = diag(1, 2, 2).
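A short NumPy check of the matrices as reconstructed in this example (an illustrative addition; the entries of P are those quoted above): the determinant test confirms that the eigenvectors are linearly independent, and inverting P reproduces the stated P^{-1}:

```python
import numpy as np

# Columns of P are the three eigenvectors from Example 14.16.
P = np.array([[3.0, 2.0, 2.0],
              [-1.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])

# |P| != 0 confirms the eigenvectors are linearly independent.
print(np.linalg.det(P))                 # about -1.0

# Recover P^{-1} and compare with the matrix quoted in the text.
P_inv = np.linalg.inv(P)
expected = np.array([[-1.0, 2.0, 2.0],
                     [-1.0, 3.0, 2.0],
                     [3.0, -6.0, -5.0]])
assert np.allclose(P_inv, expected)
```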

Not all matrices are diagonalizable. Nor is it always easy to verify the necessary
and sufficient conditions in Theorem 14.6 for a matrix to be diagonalizable.
Indeed, easy necessary and sufficient conditions for a matrix to be diagonalizable
do not exist. One can prove that if A is an n × n matrix with n different eigenvalues,
then A is diagonalizable. Yet Example 14.16 shows that this condition is
not necessary, because the matrix there is diagonalizable even though its eigenvalues
are not all distinct.
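As a contrast (an illustration not in the original text), a standard non-diagonalizable example is the matrix with rows (1, 1) and (0, 1): it has the single eigenvalue 1 but only a one-dimensional space of eigenvectors, so no set of n = 2 linearly independent eigenvectors exists and Theorem 14.6 fails. A short NumPy sketch makes this visible:

```python
import numpy as np

# A classic non-diagonalizable (defective) matrix: eigenvalue 1 twice,
# but only a one-dimensional space of eigenvectors.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigvals, V = np.linalg.eig(A)
print(eigvals)                  # [1. 1.]

# The two eigenvector columns returned are (numerically) parallel,
# so |V| is ~0 and no invertible P of eigenvectors exists.
print(np.linalg.det(V))         # ~0: columns are linearly dependent
```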

Problems
1. If D = diag(1/2, 1/3, 1/4), compute D^2 and D^n for any natural number
   n ≥ 3. What is the limit of D^n as n → ∞?
2. Are the following matrices diagonalizable?
   a. The matrix A in Example 14.12(a) of Section 14.4.
   b. The matrices in Problems 1(e) and (f) of Section 14.4.
3. Show that the following matrices are diagonalizable, find a suitable
   matrix P (this matrix is not unique), and then verify [14.12]:

   a. (…)     b. (…)     c. (…)


14.6 The Spectral Theorem for Symmetric Matrices

Many of the square matrices encountered in economic applications are symmetric.
In particular, second-order conditions for extrema of functions of many variables
involve quadratic forms, which can be expressed in terms of symmetric matrices.
So we turn now to a special study of eigenvalues of symmetric matrices. In
Example 14.14 of Section 14.4, we have already seen that a symmetric 2 × 2 matrix
has real eigenvalues. In fact, this is a general property of symmetric matrices,
because of the following:
.

.7 Suppose A is a symmetric n xn ma . .
fbeOreJII -14A'. Then: . tnx (with real entries),
sotbatA-
Tbe characteristic polynomial of A has only real roots. th .
(a) . al es of A are real. · at is. all the
e1uenv u . . .
" and y are eigenvectors associated with two diffier .
If
andµ, then x and y are O.L ogonal (in the sense that x'y ... O).

Proof

(a) Proving (a) is quite advanced, so we refer to Hadley (1973).

(b) Suppose that Ax = λx and Ay = μy. Multiplying these two equalities
    on the left by y' and x', respectively, yields:

    [1] y'Ax = λy'x        [2] x'Ay = μx'y

    Because A is symmetric, transposing each side of [1] yields x'Ay = λx'y.
    Hence, [2] implies that λx'y = μx'y, or that (λ - μ)x'y = 0. In case
    λ ≠ μ, it follows that x'y = 0, so x and y are orthogonal.

The Spectral Theorem


The following theorem is a very important result in linear algebra.

Theorem 14.8 (The Spectral Theorem)  Suppose that A is a symmetric
n × n matrix. Then there exists an orthogonal matrix U (that is, with
U^{-1} = U') such that

U^{-1}AU = diag(λ_1, λ_2, ..., λ_n)    [14.13]

where λ_1, λ_2, ..., λ_n are the eigenvalues of A.

Proof (in a special case)  According to Theorem 14.7, all the eigenvalues
are real. If they also happen to be all different, then Theorem 14.6 shows
that A is diagonalizable and that the matrix P in [14.12] of Theorem 14.6 is
P = (x_1, ..., x_n), where x_1, ..., x_n are the eigenvectors corresponding
to λ_1, ..., λ_n. We can make all these eigenvectors have length 1 by
replacing them with x_1/‖x_1‖, ..., x_n/‖x_n‖. According to Theorem 14.7,
the vectors x_1, ..., x_n are mutually orthogonal. Then the matrix P is also
orthogonal (see Problem 3). If we let U = P, then we have proved Theorem 14.8
for the case when all the eigenvalues are distinct. For the proof of the
general case, we refer to Hadley (1973).
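Numerically, this is a brief sketch of the theorem in action (an illustrative addition, not from the text): np.linalg.eigh is NumPy's routine for symmetric matrices and returns orthonormal eigenvectors, so its eigenvector matrix can serve directly as U in [14.13].

```python
import numpy as np

# An arbitrary symmetric matrix (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is specialized for symmetric matrices; the columns of U
# are orthonormal eigenvectors.
eigvals, U = np.linalg.eigh(A)

# U is orthogonal: U^{-1} = U' ...
assert np.allclose(U.T @ U, np.eye(2))
# ... and [14.13] holds: U'AU = diag(lambda_1, lambda_2).
assert np.allclose(U.T @ A @ U, np.diag(eigvals))
print(eigvals)   # [1. 3.]
```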

Problems
1. By finding U explicitly, verify [14.13] for the following matrices:

   a. A = (…)     b. A = | 1  1  0 |     c. A = | 1  3  4 |
                         | 1  1  0 |            | 3  1  0 |
                         | 0  0  2 |            | 4  0  1 |
2. Verify Theorems 14.6 and 14.7 for the following matrices:

   a. A = | 1   2 |     b. A = (…)
          | 2  -2 |
3. Prove that if P is an n × n matrix whose column vectors are all of length 1
   and mutually orthogonal, then P is orthogonal.
