
Chapter 8

Jordan Normal Form


8.1 Minimal Polynomials
Recall that $p_A(x) = \det(xI - A)$ is called the characteristic polynomial of the matrix $A$.
Theorem 8.1.1. Let $A \in M_n$. Then there exists a unique monic polynomial $q_A(x)$ of minimum degree for which $q_A(A) = 0$. If $p(x)$ is any polynomial such that $p(A) = 0$, then $q_A(x)$ divides $p(x)$.
Proof. Since there is a polynomial $p_A(x)$ for which $p_A(A) = 0$, there is one of minimal degree, which we can assume is monic. By the Euclidean algorithm,
$$p_A(x) = q_A(x)h(x) + r(x)$$
where $\deg r(x) < \deg q_A(x)$. We know
$$p_A(A) = q_A(A)h(A) + r(A).$$
Hence $r(A) = 0$, and by the minimality assumption $r(x) \equiv 0$. Thus $q_A(x)$ divides $p_A(x)$, and by the same argument it divides any polynomial $p(x)$ for which $p(A) = 0$. To establish that $q_A$ is unique, suppose $q(x)$ is another monic polynomial of the same degree for which $q(A) = 0$. Then
$$r(x) = q(x) - q_A(x)$$
is a polynomial of degree less than that of $q_A(x)$ for which $r(A) = q(A) - q_A(A) = 0$. This cannot be unless $r(x) \equiv 0$, that is, $q = q_A$.
Definition 8.1.1. The polynomial $q_A(x)$ in the theorem above is called the minimal polynomial.
Corollary 8.1.1. If $A, B \in M_n$ are similar, then they have the same minimal polynomial.
Proof. Write $B = S^{-1}AS$. Then
$$q_A(B) = q_A(S^{-1}AS) = S^{-1}\, q_A(A)\, S = 0.$$
If there were a minimal polynomial for $B$ of smaller degree, say $q_B(x)$, then $q_B(A) = 0$ by the same argument. This contradicts the minimality of $q_A(x)$.
Now that we have a minimal polynomial for any matrix, can we find a matrix with a given polynomial as its minimal polynomial? Can the degree of the polynomial and the size of the matrix match? The answers to both questions are affirmative and presented below in one theorem.
Theorem 8.1.2. For any $n^{\text{th}}$ degree monic polynomial
$$p(x) = x^n + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \cdots + a_1 x + a_0$$
there is a matrix $A \in M_n(\mathbb{C})$ for which it is the minimal polynomial.
Proof. Consider the matrix given by
$$A = \begin{bmatrix}
0 & 0 & \cdots & \cdots & -a_0 \\
1 & 0 & & & -a_1 \\
0 & 1 & 0 & & -a_2 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & 1 & -a_{n-1}
\end{bmatrix}.$$
Observe that
$$\begin{aligned}
Ie_1 &= e_1 = A^0 e_1 \\
Ae_1 &= e_2 = A^1 e_1 \\
Ae_2 &= e_3 = A^2 e_1 \\
&\ \ \vdots \\
Ae_{n-1} &= e_n = A^{n-1} e_1
\end{aligned}$$
and
$$Ae_n = -a_{n-1}e_n - a_{n-2}e_{n-1} - \cdots - a_1 e_2 - a_0 e_1.$$
Since $Ae_n = A^n e_1$, it follows that
$$p(A)e_1 = A^n e_1 + a_{n-1}A^{n-1}e_1 + a_{n-2}A^{n-2}e_1 + \cdots + a_1 Ae_1 + a_0 Ie_1 = 0.$$
Also,
$$p(A)e_k = p(A)A^{k-1}e_1 = A^{k-1}p(A)e_1 = A^{k-1}(0) = 0, \qquad k = 2, \ldots, n.$$
Hence $p(A)e_j = 0$ for $j = 1, \ldots, n$, and thus $p(A) = 0$. We know also that $p(x)$ is monic. Suppose now that
$$q(x) = x^m + b_{m-1}x^{m-1} + \cdots + b_1 x + b_0$$
where $m < n$ and $q(A) = 0$. Then
$$\begin{aligned}
q(A)e_1 &= A^m e_1 + b_{m-1}A^{m-1}e_1 + \cdots + b_1 Ae_1 + b_0 e_1 \\
&= e_{m+1} + b_{m-1}e_m + \cdots + b_1 e_2 + b_0 e_1 = 0.
\end{aligned}$$
But the vectors $e_{m+1}, \ldots, e_1$ are linearly independent, from which we conclude that $q(A) = 0$ is impossible. Thus $p(x)$ is minimal.
Definition 8.1.2. For a given monic polynomial $p(x)$, the matrix $A$ constructed above is called the companion matrix to $p$.
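To make the construction concrete, here is a small numerical sketch (our own illustration, not from the text, assuming numpy is available) that builds the companion matrix of $p(x) = x^3 - 2x^2 - 5x + 6 = (x-1)(x+2)(x-3)$ in the convention above and checks both that $p(A) = 0$ and that the eigenvalues of $A$ are the roots of $p$:

```python
# Sketch: companion matrix of p(x) = x^3 - 2x^2 - 5x + 6, built with ones on
# the sub-diagonal and the negated coefficients -a_0, -a_1, -a_2 in the last
# column, as in the proof above.
import numpy as np

a = np.array([6.0, -5.0, -2.0])        # [a_0, a_1, a_2]
n = a.size

A = np.zeros((n, n))
A[1:, :-1] = np.eye(n - 1)             # sub-diagonal of ones: A e_1 = e_2, etc.
A[:, -1] = -a                          # last column: -a_0, ..., -a_{n-1}

# p(A) = A^3 - 2A^2 - 5A + 6I should vanish, and the spectrum should be
# exactly the roots of p.
pA = A @ A @ A - 2 * (A @ A) - 5 * A + 6 * np.eye(n)
print(np.allclose(pA, 0))                    # True
print(np.sort(np.linalg.eigvals(A).real))    # [-2.  1.  3.]
```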
The transpose of the companion matrix can also be used to generate a linear differential system which has the same characteristic polynomial as a given $n^{\text{th}}$ order differential equation. Consider the linear differential equation
$$y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0.$$
This $n^{\text{th}}$ order ODE can be converted to a first order system as follows:
$$u_1 = y, \qquad u_2 = u_1' = y', \qquad u_3 = u_2' = y'', \qquad \ldots, \qquad u_n = u_{n-1}' = y^{(n-1)}.$$
Then we have
$$\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}' =
\begin{bmatrix}
0 & 1 & & & \\
& 0 & 1 & & \\
& & \ddots & \ddots & \\
& & & 0 & 1 \\
-a_0 & -a_1 & \cdots & \cdots & -a_{n-1}
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}.$$
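As a quick sanity check (our own illustration, not part of the text), one can assemble this coefficient matrix for a concrete equation, say $y''' + 2y'' - y' - 2y = 0$ with characteristic roots $1, -1, -2$, and confirm that the matrix has exactly those eigenvalues:

```python
# Sketch: first-order system matrix for y''' + 2y'' - y' - 2y = 0, whose
# characteristic polynomial r^3 + 2r^2 - r - 2 = (r - 1)(r + 1)(r + 2).
import numpy as np

a = np.array([-2.0, -1.0, 2.0])    # [a_0, a_1, a_2]
n = a.size

C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)         # super-diagonal of ones: u_i' = u_{i+1}
C[-1, :] = -a                      # last row: u_n' = -a_0 u_1 - ... - a_{n-1} u_n

print(np.sort(np.linalg.eigvals(C).real))   # [-2. -1.  1.]
```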
8.2 Invariant subspaces
There seems to be no truly simple route to the Jordan normal form. The approach taken here is intended to reveal a number of features of a matrix, interesting in their own right. In particular, we will construct generalized eigenspaces that envelop the entire connection of a matrix with its eigenvalues. We have in various ways considered subspaces $V$ of $\mathbb{C}^n$ that are invariant under the matrix $A \in M_n(\mathbb{C})$. Recall this means that $AV \subseteq V$. For example, eigenvectors can be used to create invariant subspaces. Null spaces, the eigenspace of the zero eigenvalue, are invariant as well. Triangular matrices furnish an easily recognizable sequence of invariant subspaces. Assuming $T \in M_n(\mathbb{C})$ is upper triangular, it is easy to see that the subspaces generated by the coordinate vectors $\{e_1, \ldots, e_m\}$ for $m = 1, \ldots, n$ are invariant under $T$.

We now consider a specific type of invariant subspace that will lead to the so-called Jordan normal form of a matrix, the closest matrix similar to $A$ that resembles a diagonal matrix.
Definition 8.2.1 (Generalized Eigenspace). Let $A \in M_n(\mathbb{C})$ with spectrum $\sigma(A) = \{\lambda_1, \ldots, \lambda_k\}$. Define the generalized eigenspace pertaining to $\lambda_i$ by
$$V_{\lambda_i} = \{x \in \mathbb{C}^n \mid (A - \lambda_i I)^n x = 0\}.$$
Observe that all the eigenvectors pertaining to $\lambda_i$ are contained in $V_{\lambda_i}$. If the span of the eigenvectors pertaining to $\lambda_i$ is not equal to $V_{\lambda_i}$, then there must be a positive power $p$ and a vector $x$ such that $(A - \lambda_i I)^p x = 0$ but $y = (A - \lambda_i I)^{p-1} x \neq 0$. Thus $y$ is an eigenvector pertaining to $\lambda_i$. For this reason we will call $V_{\lambda_i}$ the space of generalized eigenvectors pertaining to $\lambda_i$. Our first result, that $V_{\lambda_i}$ is invariant under $A$, is simple to prove, noting that only closure under vector addition and scalar multiplication need be established.
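The definition is directly computable as a null space. In the following sketch (our own illustration, using sympy) the matrix has the defective eigenvalue 2: the ordinary eigenspace is one dimensional, while the generalized eigenspace $V_2$ is two dimensional:

```python
# Sketch: V_lambda is the null space of (A - lambda I)^n.
import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 2]])
lam, n = 2, A.rows

eigenspace  = (A - lam * sp.eye(n)).nullspace()          # dimension 1
generalized = ((A - lam * sp.eye(n)) ** n).nullspace()   # dimension 2
print(len(eigenspace), len(generalized))                 # 1 2
```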
Theorem 8.2.1. Let $A \in M_n(\mathbb{C})$ with spectrum $\sigma(A) = \{\lambda_1, \ldots, \lambda_k\}$. Then for each $i = 1, \ldots, k$, $V_{\lambda_i}$ is an invariant subspace of $A$.
One might question whether $V_{\lambda_i}$ could be enlarged by allowing higher powers than $n$ in the definition. The negative answer is most simply expressed by invoking the Hamilton-Cayley theorem. We write the characteristic polynomial $p_A(\lambda) = \prod (\lambda - \lambda_i)^{m_A(\lambda_i)}$, where $m_A(\lambda_i)$ is the algebraic multiplicity of $\lambda_i$. Since $p_A(A) = \prod (A - \lambda_i I)^{m_A(\lambda_i)} = 0$, it is an easy matter to see that we exhaust all of $\mathbb{C}^n$ with the spaces $V_{\lambda_i}$. This is to say that allowing higher powers in the definition will not increase the subspaces $V_{\lambda_i}$. Indeed, as we shall see (Theorem 8.2.3), the power $n$ can be decreased to a smaller exponent particular to each $\lambda_i$; for now the general power $n$ will suffice. One very important result, and an essential first step in deriving the Jordan form, is to establish that any square matrix $A$ is similar to a block diagonal matrix, with each block carrying a single eigenvalue.
Theorem 8.2.2. Let $A \in M_n(\mathbb{C})$ with spectrum $\sigma(A) = \{\lambda_1, \ldots, \lambda_k\}$ and with invariant subspaces $V_{\lambda_i}$, $i = 1, 2, \ldots, k$. Then
(i) The spaces $V_{\lambda_i}$, $i = 1, \ldots, k$, are mutually linearly independent.
(ii) $\bigoplus_{i=1}^k V_{\lambda_i} = \mathbb{C}^n$ (alternatively, $\mathbb{C}^n = S(V_{\lambda_1}, \ldots, V_{\lambda_k})$).
(iii) $\dim V_{\lambda_i} = m_A(\lambda_i)$.
(iv) $A$ is similar to a block diagonal matrix with $k$ blocks $A_1, \ldots, A_k$. Moreover, $\sigma(A_i) = \{\lambda_i\}$ and $\dim A_i = m_A(\lambda_i)$.
Proof. (i) It should be clear that the subspaces $V_{\lambda_i}$ are linearly independent of each other. For suppose there is a nonzero vector $x$ in both $V_{\lambda_i}$ and $V_{\lambda_j}$, $i \neq j$. Then for some integer $q$ it must be true that $(A - \lambda_j I)^{q-1} x \neq 0$ but $(A - \lambda_j I)^q x = 0$. This means that $y = (A - \lambda_j I)^{q-1} x$ is an eigenvector pertaining to $\lambda_j$. Since $(A - \lambda_i I)^n x = 0$ we must also have
$$\begin{aligned}
(A - \lambda_j I)^{q-1}(A - \lambda_i I)^n x &= (A - \lambda_i I)^n (A - \lambda_j I)^{q-1} x \\
&= (A - \lambda_i I)^n y = 0.
\end{aligned}$$
Expanding by the binomial theorem and using $Ay = \lambda_j y$,
$$0 = \sum_{k=0}^n \binom{n}{k} (-\lambda_i)^{n-k} A^k y = \sum_{k=0}^n \binom{n}{k} (-\lambda_i)^{n-k} \lambda_j^k\, y = (\lambda_j - \lambda_i)^n y.$$
This is impossible unless $\lambda_j = \lambda_i$.

(ii) The key part of the proof is to block diagonalize $A$ with respect to these invariant subspaces. To that end, let $S$ be the matrix with columns generated from bases of the individual $V_{\lambda_i}$, taken in the order of the indices. Supposing there are more linearly independent vectors in $\mathbb{C}^n$ than those already selected, fill out the matrix $S$ with vectors linearly independent of the subspaces $V_{\lambda_i}$, $i = 1, \ldots, k$. Now define $\tilde A = S^{-1}AS$. We conclude by the invariance of the subspaces and their mutual linear independence that $\tilde A$ has the following block structure:
$$\tilde A = S^{-1}AS = \begin{bmatrix}
A_1 & 0 & \cdots & 0 & * \\
0 & A_2 & & & * \\
\vdots & & \ddots & & \vdots \\
& & & A_k & * \\
0 & \cdots & & 0 & B
\end{bmatrix}.$$
It follows that
$$p_A(\lambda) = p_{\tilde A}(\lambda) = \Bigl( \prod p_{A_i}(\lambda) \Bigr)\, p_B(\lambda).$$
Any root $r$ of $p_B(\lambda)$ must be an eigenvalue of $A$, say $\lambda_j$, and there must be an eigenvector of $\tilde A$ pertaining to $\lambda_j$. Moreover, due to the block structure we can assume that this eigenvector has the form $w = [0, \ldots, 0, \hat x]^T$, where there are $k$ blocked zeros of the sizes of the $A_i$ respectively. Then it is easy to see that $ASw = \lambda_j Sw$, and this implies that $Sw \in V_{\lambda_j}$. Thus there is another vector in $V_{\lambda_j}$, outside the span of the basis vectors already chosen for it, which contradicts its definition. Therefore $B$ is null, or what is the same thing, $\bigoplus_{i=1}^k V_{\lambda_i} = \mathbb{C}^n$.

(iii) Let $d_i = \dim V_{\lambda_i}$. From (ii) we know $\sum d_i = n$. Suppose that $\lambda_i \in \sigma(A_j)$ for some $j \neq i$. Then there is another eigenvector $\hat x$ pertaining to $\lambda_i$ and for which $A_j \hat x = \lambda_i \hat x$. Moreover, the corresponding vector has the form $w = [0, \ldots, 0, \hat x, 0, \ldots, 0]^T$, analogous to the argument above. By construction $Sw \notin V_{\lambda_i}$, but $ASw = \lambda_i Sw$, and this contradicts the definition of $V_{\lambda_i}$. We thus have that $p_{A_i}(\lambda) = (\lambda - \lambda_i)^{d_i}$. Since $p_A(\lambda) = \prod p_{A_i}(\lambda) = \prod (\lambda - \lambda_i)^{m_A(\lambda_i)}$, it follows that $d_i = m_A(\lambda_i)$.

(iv) Putting (ii) and (iii) together gives the block diagonal structure as required.
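A small computation illustrates step (ii); the matrix here is our own example. Stacking bases of the generalized eigenspaces into $S$ produces the predicted block diagonal form, one eigenvalue per block:

```python
# Sketch: S assembled from bases of V_4 and V_1 block diagonalizes A.
import sympy as sp

A = sp.Matrix([[4, 1, 1],
               [0, 4, 0],
               [0, 0, 1]])
I = sp.eye(3)

cols = ((A - 4*I)**3).nullspace() + ((A - 1*I)**3).nullspace()
S = sp.Matrix.hstack(*cols)
print(S.inv() * A * S)    # Matrix([[4, 1, 0], [0, 4, 0], [0, 0, 1]])
```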
On account of the mutual linear independence of the invariant subspaces $V_{\lambda_i}$ and the fact that they exhaust $\mathbb{C}^n$, the following corollary is immediate.
Corollary 8.2.1. Let $A \in M_n(\mathbb{C})$ with spectrum $\sigma(A) = \{\lambda_1, \ldots, \lambda_k\}$ and with generalized eigenspaces $V_{\lambda_i}$, $i = 1, 2, \ldots, k$. Then each $x \in \mathbb{C}^n$ has a unique representation $x = \sum_{i=1}^k x_i$ where $x_i \in V_{\lambda_i}$.
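The corollary can be exercised numerically; the matrix and vector below are our own choices. Solving for coordinates in a basis assembled from the generalized eigenspaces recovers the components $x_i$:

```python
# Sketch: the unique decomposition x = x_1 + x_2 with x_1 in V_2, x_2 in V_3.
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
I = sp.eye(3)

b2 = ((A - 2*I)**3).nullspace()      # basis of V_2 (dimension 2)
b3 = ((A - 3*I)**3).nullspace()      # basis of V_3 (dimension 1)
S = sp.Matrix.hstack(*(b2 + b3))

x = sp.Matrix([1, 1, 1])
c = S.solve(x)                       # coordinates of x in the combined basis
x1 = S[:, :2] * c[:2, :]             # component in V_2
x2 = S[:, 2:] * c[2:, :]             # component in V_3
print(x1.T, x2.T)                    # Matrix([[1, 1, 0]]) Matrix([[0, 0, 1]])
```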
Another interesting result, which reveals how the matrix works as a linear transformation, is to decompose it into components with respect to the generalized eigenspaces. In particular, viewing the block diagonal form
$$\tilde A = S^{-1}AS = \begin{bmatrix}
A_1 & 0 & \cdots & 0 \\
0 & A_2 & & \\
\vdots & & \ddots & \\
0 & & & A_k
\end{bmatrix},$$
the space $\mathbb{C}^n$ can be split into a direct sum of subspaces $E_1, \ldots, E_k$ based on coordinate blocks. This is accomplished in such a way that any vector $y \in \mathbb{C}^n$ can be written uniquely as $y = \sum_{i=1}^k y_i$ where $y_i \in E_i$. (Keep in mind that each $y_i \in \mathbb{C}^n$; its coordinates are zero outside the coordinate block pertaining to $E_i$.) Then $\tilde A y = \tilde A \sum_{i=1}^k y_i = \sum_{i=1}^k \tilde A y_i = \sum_{i=1}^k A_i y_i$. This provides a computational tool when this block diagonal form is known. Note that the blocks correspond directly to the invariant subspaces by $SE_i = V_{\lambda_i}$. We can use these invariant subspaces to get at the minimal polynomial. For each $i = 1, \ldots, k$ define
$$m_i = \min \{\, j \mid (A - \lambda_i I)^j x = 0 \ \text{for all}\ x \in V_{\lambda_i} \,\}.$$
Theorem 8.2.3. Let $A \in M_n(\mathbb{C})$ with spectrum $\sigma(A) = \{\lambda_1, \ldots, \lambda_k\}$ and with invariant subspaces $V_{\lambda_i}$, $i = 1, 2, \ldots, k$. Then the minimal polynomial of $A$ is given by
$$q(\lambda) = \prod_{i=1}^k (\lambda - \lambda_i)^{m_i}.$$
Proof. Certainly we see that for any vector $x \in V_{\lambda_j}$,
$$q(A)x = \left( \prod_{i=1}^k (A - \lambda_i I)^{m_i} \right) x = 0.$$
Hence the minimal polynomial $q_A(\lambda)$ divides $q(\lambda)$. To see that they are in fact equal, suppose that the minimal polynomial has the form
$$q_A(\lambda) = \prod_{i=1}^k (\lambda - \lambda_i)^{\hat m_i}$$
where $\hat m_i \le m_i$ for $i = 1, \ldots, k$, and in particular $\hat m_j < m_j$ for some $j$; it suffices to consider $\hat m_j = m_j - 1$. By construction there must exist a vector $x \in V_{\lambda_j}$ such that $(A - \lambda_j I)^{m_j} x = 0$ but $y = (A - \lambda_j I)^{m_j - 1} x \neq 0$. Then
$$0 = q_A(A)x = \left( \prod_{i=1}^k (A - \lambda_i I)^{\hat m_i} \right) x = \left( \prod_{\substack{i=1 \\ i \neq j}}^k (A - \lambda_i I)^{\hat m_i} \right) y.$$
This cannot be, because the contrary implies that there is another vector in one of the invariant subspaces $V_{\lambda_i}$, $i \neq j$.
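In computations the exponent $m_i$ can be found as the power at which the null spaces of $(A - \lambda_i I)^j$ stop growing. A sketch of ours, in sympy:

```python
# Sketch: recover the minimal polynomial prod (x - lam_i)^{m_i} by finding,
# for each eigenvalue, the power at which rank((A - lam I)^j) stabilizes.
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2, 1, 0, 0],
               [0, 2, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 3]])        # blocks J_2(2), J_1(2), J_1(3)

q = sp.Integer(1)
for lam in A.eigenvals():            # eigenvalues 2 and 3
    B = A - lam * sp.eye(A.rows)
    m, Bm = 1, B
    while Bm.rank() != (Bm * B).rank():   # null space still growing
        m, Bm = m + 1, Bm * B
    q *= (x - lam)**m

print(sp.factor(q))    # (x - 2)**2*(x - 3), of degree 3 < n = 4
```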
Just one more step is needed before the Jordan normal form can be derived. For a given $V_{\lambda_i}$ we can interpret the spaces from a hierarchical viewpoint. We know that $V_{\lambda_i}$ contains all the eigenvectors pertaining to $\lambda_i$. Call these eigenvectors the first order generalized eigenvectors. If the span of these is not equal to $V_{\lambda_i}$, then there must be a vector $x \in V_{\lambda_i}$ for which $(A - \lambda_i I)^2 x = 0$ but $y = (A - \lambda_i I)x \neq 0$. That is to say, $y$ is an eigenvector of $A$ pertaining to $\lambda_i$. Call such vectors $x$ second order generalized eigenvectors. In general, we call $x \in V_{\lambda_i}$ a generalized eigenvector of order $j$ if $(A - \lambda_i I)^j x = 0$ but $(A - \lambda_i I)^{j-1} x \neq 0$. In light of our previous discussion, $V_{\lambda_i}$ contains generalized eigenvectors of order up to but not greater than $m_i$.
Theorem 8.2.4. Let $A \in M_n(\mathbb{C})$ with spectrum $\sigma(A) = \{\lambda_1, \ldots, \lambda_k\}$ and with invariant subspaces $V_{\lambda_i}$, $i = 1, 2, \ldots, k$.
(i) Let $x \in V_{\lambda_i}$ be a generalized eigenvector of order $p$. Then the vectors
$$x,\ (A - \lambda_i I)x,\ (A - \lambda_i I)^2 x,\ \ldots,\ (A - \lambda_i I)^{p-1}x \tag{1}$$
are linearly independent.
(ii) The subspace of $\mathbb{C}^n$ generated by the vectors in (1) is an invariant subspace of $A$.
Proof. (i) To prove linear independence of a set of vectors we suppose linear dependence. That is, there is a smallest integer $k$ and constants $b_j$, with $b_k \neq 0$, such that
$$\sum_{j=0}^k x_j = \sum_{j=0}^k b_j (A - \lambda_i I)^j x = 0.$$
Solving, we obtain $b_k (A - \lambda_i I)^k x = -\sum_{j=0}^{k-1} b_j (A - \lambda_i I)^j x$. Now apply $(A - \lambda_i I)^{p-k}$ to both sides and obtain a new linearly dependent set, as the following calculation shows:
$$0 = b_k (A - \lambda_i I)^{p-k+k} x = -\sum_{j=0}^{k-1} b_j (A - \lambda_i I)^{j+p-k} x = -\sum_{j=p-k}^{p-1} b_{j-(p-k)} (A - \lambda_i I)^j x.$$
The key point to note here is that the lower limit of the sum has increased. This new linearly dependent set, which we write as
$$\sum_{j=p-k}^{p-1} c_j (A - \lambda_i I)^j x = 0,$$
can be split in the same way as before, where we assume with no loss of generality that $c_{p-1} \neq 0$. Then
$$c_{p-1} (A - \lambda_i I)^{p-1} x = -\sum_{j=p-k}^{p-2} c_j (A - \lambda_i I)^j x.$$
Apply $(A - \lambda_i I)$ to both sides to get
$$0 = c_{p-1} (A - \lambda_i I)^p x = -\sum_{j=p-k}^{p-2} c_j (A - \lambda_i I)^{j+1} x = -\sum_{j=p-k+1}^{p-1} c_{j-1} (A - \lambda_i I)^j x.$$
Thus we have obtained another linearly dependent set with the lower limit of powers increased by one. Continue this process until the linear dependence of $(A - \lambda_i I)^{p-1} x$ and $(A - \lambda_i I)^{p-2} x$ is achieved. Thus we have
$$c (A - \lambda_i I)^{p-1} x = d (A - \lambda_i I)^{p-2} x,$$
that is,
$$(A - \lambda_i I)\, y = \frac{d}{c}\, y,$$
where $y = (A - \lambda_i I)^{p-2} x$. This implies that $\lambda_i + \frac{d}{c}$ is a new eigenvalue with eigenvector $y \in V_{\lambda_i}$, and of course this is a contradiction.

(ii) The invariance under $A$ is more straightforward. First note that with $x_1 = x$ and $x_2 = (A - \lambda_i I)x = Ax - \lambda_i x$, we have $Ax = x_2 + \lambda_i x_1$. Consider any vector $y$ defined by $y = \sum_{j=0}^{p-1} b_j (A - \lambda_i I)^j x$. It follows that
$$\begin{aligned}
Ay &= A \sum_{j=0}^{p-1} b_j (A - \lambda_i I)^j x = \sum_{j=0}^{p-1} b_j (A - \lambda_i I)^j Ax \\
&= \sum_{j=0}^{p-1} b_j (A - \lambda_i I)^j (x_2 + \lambda_i x_1) \\
&= \sum_{j=0}^{p-1} b_j (A - \lambda_i I)^j \left[ (A - \lambda_i I) x_1 + \lambda_i x_1 \right] \\
&= \sum_{j=0}^{p-1} c_j (A - \lambda_i I)^j x,
\end{aligned}$$
where $c_j = b_{j-1} + \lambda_i b_j$ for $j > 0$ and $c_0 = \lambda_i b_0$ (the term involving $(A - \lambda_i I)^p x$ vanishes), which proves the result.
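Part (i) is easy to see in a concrete case (our own example): for a generalized eigenvector of order 3, the chain $x, (A - \lambda I)x, (A - \lambda I)^2 x$ has full rank.

```python
# Sketch: the chain x, Nx, N^2 x with N = A - 2I is linearly independent.
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [0, 0, 2]])
N = A - 2 * sp.eye(3)

x = sp.Matrix([0, 0, 1])      # order 3: N**2 * x != 0 but N**3 * x == 0
chain = sp.Matrix.hstack(x, N * x, N**2 * x)
print(chain.rank())           # 3
```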
8.3 The Jordan Normal Form
We need a lemma that points in the direction we are headed, that being the use of invariant subspaces as a basis for the (Jordan) block diagonalization of any matrix. These results were discussed in detail in Section 8.2. The restatement here illustrates the invariant subspace nature of the result, irrespective of generalized eigenspaces. Its proof is elementary and is left to the reader.
Lemma 8.3.1. Let $A \in M_n(\mathbb{C})$ with invariant subspace $V \subseteq \mathbb{C}^n$.
(i) Suppose $v_1, \ldots, v_k$ is a basis for $V$ and $S$ is an invertible matrix whose first $k$ columns are given by $v_1, \ldots, v_k$. Then
$$S^{-1}AS = \begin{bmatrix} * & * \\ 0 & * \end{bmatrix},$$
where the upper left block is $k \times k$.
(ii) Suppose that $V_1, V_2 \subseteq \mathbb{C}^n$ are two invariant subspaces of $A$ and $\mathbb{C}^n = V_1 \oplus V_2$. Let the (invertible) matrix $S$ consist respectively of bases from $V_1$ and $V_2$ as its columns. Then
$$S^{-1}AS = \begin{bmatrix} * & 0 \\ 0 & * \end{bmatrix}.$$
Definition 8.3.1. Let $\lambda \in \mathbb{C}$. A Jordan block $J_k(\lambda)$ is a $k \times k$ upper triangular matrix of the form
$$J_k(\lambda) = \begin{bmatrix}
\lambda & 1 & & 0 \\
& \lambda & 1 & \\
& & \ddots & 1 \\
0 & & & \lambda
\end{bmatrix}.$$
A Jordan matrix is any matrix of the form
$$J = \begin{bmatrix}
J_{n_1}(\lambda_1) & & 0 \\
& \ddots & \\
0 & & J_{n_k}(\lambda_k)
\end{bmatrix},$$
where the matrices $J_{n_i}(\lambda_i)$ are Jordan blocks. If $J \in M_n(\mathbb{C})$, then $n_1 + n_2 + \cdots + n_k = n$.
Theorem 8.3.1 (Jordan normal form). Let $A \in M_n(\mathbb{C})$. Then there is a nonsingular matrix $S \in M_n$ such that
$$A = S \begin{bmatrix}
J_{n_1}(\lambda_1) & & 0 \\
& \ddots & \\
0 & & J_{n_k}(\lambda_k)
\end{bmatrix} S^{-1} = SJS^{-1},$$
where $J_{n_i}(\lambda_i)$ is a Jordan block and $n_1 + n_2 + \cdots + n_k = n$. $J$ is unique up to permutations of the blocks. The eigenvalues $\lambda_1, \ldots, \lambda_k$ are not necessarily distinct. If $A$ is real with real eigenvalues, then $S$ can be taken to be real.
Proof. This result is proved in four steps.

(1) Block diagonalize $A$ (by similarity) into invariant subspaces pertaining to $\sigma(A)$. This is accomplished as follows. First block diagonalize the matrix according to the generalized eigenspaces $V_{\lambda_i} = \{x \in \mathbb{C}^n \mid (A - \lambda_i I)^n x = 0\}$, as discussed in the previous section. Beginning with the highest order generalized eigenvector $x \in V_{\lambda_i}$, construct the invariant subspace as in (1) of Section 8.2. Repeat this process until all generalized eigenvectors have been included in an invariant subspace. This includes of course first order eigenvectors that are not associated with higher order eigenvectors; these invariant subspaces have dimension one. Each of these invariant subspaces is linearly independent of the others. Continue this process for all the generalized eigenspaces. This exhausts $\mathbb{C}^n$. Each of the blocks contains exactly one eigenvector. The dimensions of these invariant subspaces can range from one to $m_i$, there being at least one subspace of dimension $m_i$.
(2) Triangularize each block by Schur's theorem, so that each block has the form
$$K(\lambda) = \begin{bmatrix}
\lambda & & * \\
& \ddots & \\
0 & & \lambda
\end{bmatrix}.$$
You will note that
$$K(\lambda) = \lambda I + N,$$
where $N$ is nilpotent, or else $K(\lambda)$ is $1 \times 1$.
(3) Jordanize each triangular block. Assume that $K_1(\lambda)$ is $m \times m$, where $m > 1$. By construction $K_1(\lambda)$ pertains to an invariant subspace for which there is a unique vector $x$ (up to scalar multiples) for which
$$N^{m-1}x \neq 0 \quad \text{and} \quad N^m x = 0.$$
Thus $N^{m-1}x$ is an eigenvector of $K_1(\lambda)$, the unique eigenvector. Define
$$y_i = N^{i-1}x, \qquad i = 1, 2, \ldots, m.$$
The set $\{y_i\}_{i=1}^m$ is a basis of $\mathbb{C}^m$. Define
$$S_1 = \begin{bmatrix} y_m & y_{m-1} & \cdots & y_1 \end{bmatrix}.$$
Then
$$NS_1 = \begin{bmatrix} 0 & y_m & y_{m-1} & \cdots & y_2 \end{bmatrix}.$$
So
$$S_1^{-1} N S_1 = \begin{bmatrix}
0 & 1 & & 0 \\
& 0 & 1 & \\
& & \ddots & 1 \\
0 & & & 0
\end{bmatrix}.$$
We conclude that
$$S_1^{-1} K_1(\lambda) S_1 = \begin{bmatrix}
\lambda & 1 & & 0 \\
& \lambda & 1 & \\
& & \ddots & 1 \\
0 & & & \lambda
\end{bmatrix}.$$
(4) Assemble all of the blocks to form the Jordan form. For example, the block $K_1(\lambda)$ and the corresponding similarity transformation $S_1$ studied above can be treated in the assembly process as follows. Define the $n \times n$ matrix
$$\tilde S_1 = \begin{bmatrix}
I & 0 & 0 \\
0 & S_1 & 0 \\
0 & 0 & I
\end{bmatrix},$$
where $S_1$ is the block constructed above, placed in the $n \times n$ matrix in the position from which $K_1(\lambda)$ was extracted in the block triangular form of $A$. Repeat this for each of the blocks pertaining to minimally invariant subspaces. This gives a sequence of block diagonal matrices $\tilde S_1, \tilde S_2, \ldots, \tilde S_k$. Define $T = \tilde S_1 \tilde S_2 \cdots \tilde S_k$. It has the form
$$T = \begin{bmatrix}
S_1 & & & 0 \\
& S_2 & & \\
& & \ddots & \\
0 & & & S_k
\end{bmatrix}.$$
Together with the original matrix $P$ that transformed the matrix to the minimal invariant subspace blocked form and the unitary matrix $V$ used to triangularize $A$, it follows that
$$A = PVT \begin{bmatrix}
J_{n_1}(\lambda_1) & & 0 \\
& \ddots & \\
0 & & J_{n_k}(\lambda_k)
\end{bmatrix} (PVT)^{-1} = SJS^{-1}$$
with $S = PVT$.
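None of this machinery needs to be carried out by hand in practice; sympy, for instance, implements the theorem directly. A sketch with a $2 \times 2$ defective matrix (our own example):

```python
# Sketch: jordan_form returns (S, J) with A = S J S^{-1}.
import sympy as sp

A = sp.Matrix([[3, 1],
               [-1, 1]])      # characteristic polynomial (x - 2)^2,
                              # but only one eigenvector
S, J = A.jordan_form()
print(J)                      # Matrix([[2, 1], [0, 2]])
print(S * J * S.inv() == A)   # True
```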
Example 8.3.1. Let
$$J = \begin{bmatrix}
2 & 1 & & & & & \\
0 & 2 & & & & & \\
& & 2 & & & & \\
& & & 3 & 1 & 0 & \\
& & & 0 & 3 & 1 & \\
& & & 0 & 0 & 3 & \\
& & & & & & -1
\end{bmatrix}.$$
In this example there are four blocks, with two of the blocks pertaining to the single eigenvalue 2. For the first block there is the single eigenvector $e_1$, but the invariant subspace is $S(e_1, e_2)$. For the second block, the eigenvector $e_3$ generates the one dimensional invariant subspace. The block pertaining to the eigenvalue 3 has the single eigenvector $e_4$, while the minimal invariant subspace is $S(e_4, e_5, e_6)$. Finally, the one dimensional subspace pertaining to the eigenvalue $-1$ is spanned by $e_7$. The minimal polynomial is $q(\lambda) = (\lambda - 2)^2 (\lambda - 3)^3 (\lambda + 1)$.
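The example's minimal polynomial can be checked directly (a sketch of ours, assuming numpy and scipy are available): $q(J) = (J - 2I)^2 (J - 3I)^3 (J + I)$ vanishes, while lowering the exponent on $(J - 2I)$ does not.

```python
# Sketch: verify q(J) = 0 for the example and that no smaller power works.
import numpy as np
from scipy.linalg import block_diag

def jb(lam, k):
    # k x k Jordan block: lam on the diagonal, ones on the super-diagonal
    return lam * np.eye(k) + np.eye(k, k=1)

J = block_diag(jb(2, 2), jb(2, 1), jb(3, 3), jb(-1, 1))
I = np.eye(7)
mp = np.linalg.matrix_power

q  = mp(J - 2*I, 2) @ mp(J - 3*I, 3) @ (J + I)   # the claimed minimal polynomial
q1 = mp(J - 2*I, 1) @ mp(J - 3*I, 3) @ (J + I)   # exponent on (J - 2I) lowered
print(np.allclose(q, 0), np.allclose(q1, 0))     # True False
```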
8.4 Convergent matrices
Using the Jordan normal form, the study of convergent matrices becomes relatively straightforward.
Theorem 8.4.1. If $A \in M_n$ and $\rho(A) < 1$, then
$$\lim_{k \to \infty} A^k = 0.$$
Proof. We may assume $A$ is a Jordan matrix, since $A^k \to 0$ if and only if $(S^{-1}AS)^k = S^{-1}A^k S \to 0$. Each Jordan block $J_k(\lambda)$ can be written as
$$J_k(\lambda) = \lambda I_k + N_k,$$
where
$$N_k = \begin{bmatrix}
0 & 1 & & 0 \\
& 0 & 1 & \\
& & \ddots & 1 \\
0 & & & 0
\end{bmatrix}$$
is nilpotent. Now
$$A = \begin{bmatrix}
J_{n_1}(\lambda_1) & & \\
& \ddots & \\
& & J_{n_k}(\lambda_k)
\end{bmatrix}.$$
We compute, for $m > n_k$,
$$\bigl(J_{n_k}(\lambda_k)\bigr)^m = (\lambda I + N)^m = \lambda^m I + \sum_{j=1}^{n_k - 1} \binom{m}{j} \lambda^{m-j} N^j,$$
because $N^j = 0$ for $j \ge n_k$. We have
$$\binom{m}{j} \lambda^{m-j} \to 0 \quad \text{as } m \to \infty$$
since $|\lambda| < 1$. The result follows.
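The competition between binomial growth and geometric decay is visible numerically (our own illustration): for a Jordan block with $\rho(A) = 0.9$, the powers first grow before the factor $\lambda^{m-j}$ wins.

```python
# Sketch: ||A^m|| for a nondiagonalizable A with spectral radius 0.9.
import numpy as np

A = np.array([[0.9, 1.0],
              [0.0, 0.9]])    # the Jordan block J_2(0.9)
for m in (1, 10, 50, 400):
    print(m, np.linalg.norm(np.linalg.matrix_power(A, m)))
# the norm rises to about 4 near m = 10, then decays toward 0
```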
8.5 Exercises
1. Prove Theorem 8.2.1.
2. Find a $3 \times 3$ matrix whose eigenvalues are the squares of the roots of the equation $\lambda^3 - 3\lambda^2 + 4\lambda - 5 = 0$.
3. Suppose that $A$ is a square matrix with $\sigma(A) = \{3\}$, $m_a(3) = 6$, and $m_g(3) = 3$. Up to permutations of the blocks, show all possible Jordan normal forms for $A$.
4. Let $A \in M_n(\mathbb{C})$ and let $x_1 \in \mathbb{C}^n$. Define $x_{i+1} = Ax_i$ for $i = 1, \ldots, n-1$. Show that $V = S(\{x_1, \ldots, x_n\})$ is an invariant subspace of $A$. Show that $V$ contains an eigenvector of $A$.
5. Referring to the previous problem, let $A \in M_n(\mathbb{R})$ be a permutation matrix. (i) Find starting vectors so that $\dim V = n$. (ii) Find starting vectors so that $\dim V = 1$. (iii) Show that if $\lambda = 1$ is a simple eigenvalue of $A$, then $\dim V = 1$ or $\dim V = n$.
Chapter 9

Hermitian and Symmetric Matrices
Example 9.0.1. Let $f : D \to \mathbb{R}$, $D \subseteq \mathbb{R}^n$. The Hessian is defined by
$$H(x) = [h_{ij}(x)] = \left[ \frac{\partial^2 f}{\partial x_i \, \partial x_j} \right] \in M_n.$$
Since for functions $f \in C^2$ it is known that
$$\frac{\partial^2 f}{\partial x_i \, \partial x_j} = \frac{\partial^2 f}{\partial x_j \, \partial x_i},$$
it follows that $H(x)$ is symmetric.
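A quick symbolic confirmation for a particular $f$ (our own sketch, using sympy's hessian helper):

```python
# Sketch: the Hessian of a C^2 function is symmetric.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * sp.sin(y) + sp.exp(x * y)
H = sp.hessian(f, (x, y))
print(sp.simplify(H - H.T) == sp.zeros(2, 2))   # True
```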
Definition 9.0.1. A function $f : \mathbb{R} \to \mathbb{R}$ is convex if
$$f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda) f(y)$$
for all $x, y \in D$ (the domain) and $0 \le \lambda \le 1$.
Proposition 9.0.1. If $f \in C^2(D)$ and $f''(x) \ge 0$ on $D$, then $f(x)$ is convex.

Proof. Because $f'' \ge 0$, $f'(x)$ is increasing. Therefore if $x < x_m < y$ we must have
$$f(x_m) \le f(x) + f'(x_m)(x_m - x)$$
and
$$f(x_m) \le f(y) + f'(x_m)(x_m - y).$$