
Solutions to Axler, Linear Algebra Done Right 2nd Ed.

Edvard Fagerholm
edvard.fagerholm@{helsinki.fi|gmail.com}
Beware of errors. I read the book and solved the exercises during spring break (one
week), so the problems were solved in a hurry. However, if you do find better or more
interesting solutions to the problems, I'd still like to hear about them. Also, please don't
put this on the Internet, so as not to encourage the copying of homework solutions...
1 Vector Spaces
1. Assuming that C is a field, write z = a + bi. Then we have 1/z = z̄/(zz̄) = z̄/|z|².
Plugging in the numbers we get 1/(a + bi) = a/(a² + b²) − bi/(a² + b²) = c + di. A
straightforward calculation of (c + di)(a + bi) = 1 shows that this is indeed an inverse.
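As a quick numeric sanity check of the formula (a Python sketch; the values a = 3, b = −2 are an arbitrary illustration, any non-zero pair works):

    # Check that (c + di)(a + bi) = 1 for c = a/(a^2+b^2), d = -b/(a^2+b^2).
    a, b = 3.0, -2.0
    c, d = a / (a**2 + b**2), -b / (a**2 + b**2)
    print((c + d*1j) * (a + b*1j))  # (1+0j) up to rounding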
2. Just calculate ((−1 + √3 i)/2)³ and observe that it equals 1.
3. We have v + (−v) = 0, so by the uniqueness of the additive inverse (prop. 1.3)
−(−v) = v.
4. Choose a ≠ 0 and v ≠ 0. Then assuming av = 0 we get v = a⁻¹av = a⁻¹0 = 0.
Contradiction.
5. Denote the set in question by A in each part.
(a) Let v, w ∈ A, v = (x₁, x₂, x₃), w = (y₁, y₂, y₃). Then x₁ + 2x₂ + 3x₃ = 0 and
y₁ + 2y₂ + 3y₃ = 0, so that 0 = x₁ + 2x₂ + 3x₃ + y₁ + 2y₂ + 3y₃ = (x₁ + y₁) +
2(x₂ + y₂) + 3(x₃ + y₃), so v + w ∈ A. Similarly 0 = a0 = ax₁ + 2ax₂ + 3ax₃, so
av ∈ A. Thus A is a subspace.
(b) This is not a subspace, as 0 ∉ A.
(c) We have that (1, 1, 0) ∈ A and (0, 0, 1) ∈ A, but (1, 1, 0) + (0, 0, 1) = (1, 1, 1) ∉ A,
so A is not a subspace.
(d) Let (x₁, x₂, x₃), (y₁, y₂, y₃) ∈ A. If x₁ = 5x₃ and y₁ = 5y₃, then ax₁ = 5ax₃,
so a(x₁, x₂, x₃) ∈ A. Similarly x₁ + y₁ = 5(x₃ + y₃), so that (x₁, x₂, x₃) +
(y₁, y₂, y₃) ∈ A. Thus A is a subspace.
6. Set U = Z² ⊂ R²: it is closed under addition and under taking additive inverses, but
not under scalar multiplication, so it is not a subspace.
7. The set {(x, x) ∈ R² | x ∈ R} ∪ {(x, −x) ∈ R² | x ∈ R} is closed under scalar multiplication
but is trivially not a subspace ((x, x) + (x, −x) = (2x, 0) doesn't belong to it unless
x = 0).
8. Let {Vᵢ} be a collection of subspaces of V. Set U = ∩ᵢ Vᵢ and let u, v ∈ U. Then
u, v ∈ Vᵢ for all i. Because each Vᵢ is a subspace, au ∈ Vᵢ for all i, so that au ∈ U.
Similarly u + v ∈ Vᵢ for all i, so u + v ∈ U.
9. Let U, W ⊂ V be subspaces. If U ⊂ W or W ⊂ U, then U ∪ W equals W or U, hence
is a subspace. Assume then that U ⊄ W and W ⊄ U. Then we can choose u ∈ U \ W
and w ∈ W \ U. Assuming that U ∪ W is a subspace we have u + w ∈ U ∪ W.
If u + w ∈ U we get w = (u + w) − u ∈ U. Contradiction. Similarly for
u + w ∈ W. Thus U ∪ W is not a subspace.
10. Clearly U + U ⊂ U as U is closed under addition, and U ⊂ U + U since u = u + 0. Thus U + U = U.
11. Yes and yes. Follows directly from commutativity and associativity of vector addition.
12. The zero subspace, {0}, is clearly an additive identity. Assuming that we have
inverses, then the whole space V should have an inverse U such that U + V = {0}.
Since U + V = V this is clearly impossible unless V is the trivial vector space.
13. Let W = R². Then for any two subspaces U₁, U₂ of W we have U₁ + W = U₂ + W = W,
so the statement is clearly false in general.
14. Let W = {p ∈ 𝒫(F) | p = a₀ + a₁x + … + aₙxⁿ with a₂ = a₅ = 0}.
15. Let V = R². Let W = {(x, 0) ∈ R² | x ∈ R}. Set U₁ = {(x, x) ∈ R² | x ∈ R},
U₂ = {(x, −x) ∈ R² | x ∈ R}. Then it's easy to see that U₁ + W = U₂ + W = R²,
but U₁ ≠ U₂, so the statement is false.
2 Finite-Dimensional Vector Spaces
1. Let uₙ = vₙ and uᵢ = vᵢ − vᵢ₊₁, i = 1, …, n − 1. Now we see that vᵢ = uᵢ + uᵢ₊₁ + … + uₙ,
as the sum telescopes. Thus vᵢ ∈ span(u₁, …, uₙ), so V = span(v₁, …, vₙ) ⊂ span(u₁, …, uₙ).
2. From the previous exercise we know that the span of the new list is V. As (v₁, …, vₙ) is a linearly
independent spanning list of vectors we know that dim V = n. The claim now follows
from proposition 2.16.
3. If (v₁ + w, …, vₙ + w) is linearly dependent, then we can write
0 = a₁(v₁ + w) + … + aₙ(vₙ + w) = (a₁v₁ + … + aₙvₙ) + (a₁ + … + aₙ)w,
where aᵢ ≠ 0 for some i. Now a₁ + … + aₙ ≠ 0, because otherwise we would get
0 = (a₁v₁ + … + aₙvₙ) + (a₁ + … + aₙ)w = a₁v₁ + … + aₙvₙ
and by the linear independence of (vᵢ) we would get aᵢ = 0 for all i, contradicting our
assumption. Thus
w = −(a₁ + … + aₙ)⁻¹(a₁v₁ + … + aₙvₙ),
so w ∈ span(v₁, …, vₙ).
4. No. Multiplying by a nonzero constant doesn't change the degree of a polynomial,
but the set is not closed under addition: xᵐ and −xᵐ + 1 both have degree m, while
their sum 1 has degree 0, so it is neither the zero polynomial nor of degree m.
5. (1, 0, …), (0, 1, 0, …), (0, 0, 1, 0, …), … is trivially linearly independent (any finite
sublist is). Thus F^∞ isn't finite-dimensional.
6. Clearly 𝒫(F) is an infinite-dimensional subspace (1, x, x², … is linearly independent),
and a space containing an infinite-dimensional subspace is itself infinite-dimensional.
7. Choose v₁ ∈ V, v₁ ≠ 0. Assuming that V is infinite-dimensional, span(v₁) ≠ V. Thus we can
choose v₂ ∈ V \ span(v₁). Now continue inductively.
8. (3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1) is clearly linearly independent. Let (x₁, x₂, x₃, x₄, x₅) ∈
U. Then (x₁, x₂, x₃, x₄, x₅) = (3x₂, x₂, 7x₄, x₄, x₅) = x₂(3, 1, 0, 0, 0) + x₄(0, 0, 7, 1, 0) +
x₅(0, 0, 0, 0, 1), so the vectors span U. Hence, they form a basis.
9. Choose the polynomials (1, x, x² − x³, x³); none of them has degree 2. They clearly span 𝒫₃(F) and form a basis
by proposition 2.16.
10. Choose a basis (v₁, …, vₙ). Let Uᵢ = span(vᵢ). Now clearly V = U₁ ⊕ … ⊕ Uₙ by
proposition 2.19.
11. Let dim U = n = dim V. Choose a basis (u₁, …, uₙ) for U and extend it to a
basis (u₁, …, uₙ, v₁, …, vₖ) for V. By assumption a basis for V has length n, so k = 0 and
(u₁, …, uₙ) spans V. Hence U = V.
12. Let U = span(p₀, …, pₘ). As U ≠ 𝒫ₘ(F) (every element of U vanishes at the given point,
but the constant polynomial 1 doesn't), we have by the previous exercise that
dim U < dim 𝒫ₘ(F) = m + 1. As (p₀, …, pₘ) is a spanning list of U having
length m + 1 > dim U, it is not linearly independent.
13. By theorem 2.18 we have 8 = dim R⁸ = dim(U + W) = dim U + dim W −
dim(U ∩ W) = 4 + 4 − dim(U ∩ W), where we used U + W = R⁸. Hence dim(U ∩ W) = 0 and the claim follows.
14. Assuming that U ∩ W = {0} we would have by theorem 2.18 that
9 = dim R⁹ ≥ dim(U + W) = dim U + dim W − dim(U ∩ W) = 5 + 5 − 0 = 10.
Contradiction.
15. Let U₁ = {(x, 0) ∈ R² | x ∈ R}, U₂ = {(0, x) ∈ R² | x ∈ R}, U₃ = {(x, x) ∈ R² | x ∈
R}. Then U₁ ∩ U₂ = {0}, U₁ ∩ U₃ = {0} and U₂ ∩ U₃ = {0}. Thus the left-hand side
of the equation in the problem is 2 while the right-hand side is 3. Thus we have a
counterexample.
16. Choose a basis (u_{i,1}, …, u_{i,n_i}) for each Uᵢ. Then the combined list of vectors
(u_{1,1}, …, u_{m,n_m}) has length dim U₁ + … + dim Uₘ and clearly spans U₁ + … + Uₘ,
proving the claim.
17. By assumption the list of vectors (u_{1,1}, …, u_{m,n_m}) from the proof of the previous exercise
is linearly independent. Since V = U₁ + … + Uₘ = span(u_{1,1}, …, u_{m,n_m}), the claim follows.
3 Linear Maps
1. Any v ≠ 0 spans V. Now T(v) = av for some a ∈ F, so an arbitrary w = bv ∈ V satisfies
T(w) = T(bv) = bT(v) = bav = aw. Thus T is multiplication by a scalar.
2. Define f by, e.g.,
f(x, y) = x if x = y, and f(x, y) = 0 if x ≠ y.
Then f(ax, ay) = af(x, y) for all a ∈ R, but f is non-linear: f(1, 0) + f(0, 1) = 0 ≠ 1 = f(1, 1).
3. To define a linear map it's enough to define the images of the elements of a basis.
Choose a basis (u₁, …, uₘ) for U and extend it to a basis (u₁, …, uₘ, v₁, …, vₙ) of
V. Let S ∈ L(U, W) and choose some vectors w₁, …, wₙ ∈ W. Now define T ∈ L(V, W)
by T(uᵢ) = S(uᵢ), i = 1, …, m and T(vᵢ) = wᵢ, i = 1, …, n. These relations define
the linear map and clearly T|U = S.
4. By theorem 3.4, dim V = dim null T + dim range T. If u ∉ null T, then dim range T >
0, but as dim range T ≤ dim F = 1 we have dim range T = 1, and it follows that
dim null T = dim V − 1. Now choose a basis (u₁, …, uₙ) for null T. By our assumption
(u₁, …, uₙ, u) is linearly independent and has length dim V. Hence, it's a basis for
V. This implies that
V = span(u₁, …, uₙ, u) = span(u₁, …, uₙ) ⊕ span(u) = null T ⊕ {au | a ∈ F}.
5. Suppose 0 = a₁T(v₁) + … + aₙT(vₙ). By linearity 0 = T(a₁v₁ + … + aₙvₙ). By injectivity we
have a₁v₁ + … + aₙvₙ = 0, so that aᵢ = 0 for all i. Hence (T(v₁), …, T(vₙ)) is linearly
independent.
6. If n = 1 the claim is trivially true. Assume that the claim is true for n = k.
If S₁, …, S_{k+1} satisfy the assumptions, then T = S₁ ⋯ S_k is injective by the induction
hypothesis. If u ≠ 0, then by injectivity of S_{k+1} we have S_{k+1}u ≠ 0,
and by injectivity of T we have TS_{k+1}u ≠ 0. Hence TS_{k+1} = S₁ ⋯ S_{k+1} is injective and the claim
follows.
7. Let w ∈ W. By surjectivity of T we can find a vector v ∈ V such that T(v) = w.
Writing v = a₁v₁ + … + aₙvₙ we get w = T(v) = T(a₁v₁ + … + aₙvₙ) = a₁T(v₁) +
… + aₙT(vₙ), proving the claim.
8. Let (u₁, …, uₙ) be a basis for null T and extend it to a basis (u₁, …, uₙ, v₁, …, vₖ)
of V. Let U := span(v₁, …, vₖ). Then by construction null T ∩ U = {0} and for an
arbitrary v ∈ V we have that v = a₁u₁ + … + aₙuₙ + b₁v₁ + … + bₖvₖ, so that
T(v) = b₁T(v₁) + … + bₖT(vₖ).
Hence range T = T(U).
9. It's easy to see that ((5, 1, 0, 0), (0, 0, 7, 1)) is a basis for null T (see exercise 2.8). Hence
dim range T = dim F⁴ − dim null T = 4 − 2 = 2. Thus range T = F², so that T is
surjective.
10. It's again easy to see that ((3, 1, 0, 0, 0), (0, 0, 1, 1, 1)) is a basis of null T. Hence, we would get
dim range T = dim F⁵ − dim null T = 5 − 2 = 3, which is impossible for a map into F².
11. This follows trivially from dimV = dimnull T + dimrange T.
12. From dim V = dim null T + dim range T it follows trivially that if we have a surjective
linear map T ∈ L(V, W), then dim V ≥ dim W. Assume then that dim V ≥ dim W.
Choose a basis (v₁, …, vₙ) for V and a basis (w₁, …, wₘ) for W. We can define
a linear map T ∈ L(V, W) by letting T(vᵢ) = wᵢ, i = 1, …, m, and T(vᵢ) = 0, i =
m + 1, …, n. Clearly T is surjective.
13. We have that dim range T ≤ dim W. Thus we get dim null T = dim V − dim range T ≥
dim V − dim W. Conversely, choose an arbitrary subspace U of V such that dim U ≥ dim V −
dim W. Let (u₁, …, uₙ) be a basis of U and extend it to a basis (u₁, …, uₙ, v₁, …, vₖ)
of V. Now we know that k ≤ dim W, so let (w₁, …, wₘ) be a basis for W. Define
a linear map T ∈ L(V, W) by T(uᵢ) = 0, i = 1, …, n, and T(vᵢ) = wᵢ, i = 1, …, k.
Clearly null T = U.
14. Clearly if we can find such an S, then T is injective. Assume then that T is injective.
Let (v₁, …, vₙ) be a basis of V. By exercise 5, (T(v₁), …, T(vₙ)) is linearly independent,
so we can extend it to a basis (T(v₁), …, T(vₙ), w₁, …, wₘ) of W. Define
S ∈ L(W, V) by S(T(vᵢ)) = vᵢ, i = 1, …, n, and S(wᵢ) = 0, i = 1, …, m. Clearly
ST = I_V.
15. Clearly if we can find such an S, then T is surjective. For the other direction, let (v₁, …, vₙ) be a
basis of V. By exercise 7, (T(v₁), …, T(vₙ)) spans W. By the linear dependence
lemma we can make the list of vectors (T(v₁), …, T(vₙ)) a basis by removing some
vectors. Without loss of generality we can assume that the first m vectors form the
basis (just permute the indices). Thus (T(v₁), …, T(vₘ)) is a basis of W. Define
the map S ∈ L(W, V) by S(T(vᵢ)) = vᵢ, i = 1, …, m. Now clearly TS = I_W.
16. Rank-nullity for ST ∈ L(U, W) and for T ∈ L(U, V) gives
dim null ST + dim range ST = dim U = dim null T + dim range T.
Applying rank-nullity to the restriction of S to range T, whose range is range ST, gives
dim range T = dim(null S ∩ range T) + dim range ST ≤ dim null S + dim range ST.
Combining the two,
dim null ST = dim null T + dim range T − dim range ST ≤ dim null T + dim null S.
17. This is nothing but pencil pushing. Just take arbitrary matrices satisfying the required
dimensions and calculate each expression; the equalities easily fall out.
18. Ditto.
19. Let v = (x₁, …, xₙ). From proposition 3.14 we have that
ℳ(Tv) = ℳ(T)ℳ(v) =
( a₁,₁ ⋯ a₁,ₙ ) ( x₁ )   ( a₁,₁x₁ + … + a₁,ₙxₙ )
(  ⋮        ⋮  ) (  ⋮  ) = (          ⋮           )
( aₘ,₁ ⋯ aₘ,ₙ ) ( xₙ )   ( aₘ,₁x₁ + … + aₘ,ₙxₙ )
which shows that Tv = (a₁,₁x₁ + … + a₁,ₙxₙ, …, aₘ,₁x₁ + … + aₘ,ₙxₙ).
20. Clearly dim Mat(n, 1, F) = n = dim V. We have that Tv = 0 if and only if v =
0v₁ + … + 0vₙ = 0, so null T = {0}. Thus T is injective and hence invertible.
21. Let eᵢ denote the n × 1 matrix with a 1 in the ith row and 0 everywhere else. Let T
be the linear map. Define a matrix A := [T(e₁) ⋯ T(eₙ)] whose columns are the T(eᵢ). Now it's trivial to verify
that
Aeᵢ = T(eᵢ).
By distributivity of matrix multiplication (exercise 17) we get for an arbitrary v =
a₁e₁ + … + aₙeₙ that
Av = A(a₁e₁ + … + aₙeₙ) = a₁Ae₁ + … + aₙAeₙ
= a₁T(e₁) + … + aₙT(eₙ) = T(a₁e₁ + … + aₙeₙ) = T(v),
so the claim follows.
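The construction can be checked numerically; the following Python/numpy sketch uses an arbitrary illustrative linear map T on R³ (not one from the text) to show that A = [T(e₁) ⋯ T(eₙ)] reproduces T:

    import numpy as np

    def T(v):
        # an arbitrary illustrative linear map on R^3
        x, y, z = v
        return np.array([2*x + y, y - z, 3*z])

    e = np.eye(3)
    A = np.column_stack([T(e[:, i]) for i in range(3)])  # columns are T(e_i)
    v = np.array([1.0, -2.0, 5.0])
    print(np.allclose(A @ v, T(v)))  # True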
22. From theorem 3.21 we have ST invertible ⇔ ST bijective ⇔ S and T bijective ⇔ S
and T invertible.
23. By symmetry it's sufficient to prove this in one direction only. Thus assume TS = I. Then
S is injective (Su = 0 gives u = TSu = 0), hence bijective. Now ST(Su) = S(TSu) = Su for all u. As S is bijective, Su goes through the whole space
V as u varies, so ST = I.
24. Clearly TS = ST if T is a scalar multiple of the identity. For the other direction
assume that TS = ST for every linear map S ∈ L(V). Let v ≠ 0 and let S be a projection
onto span(v): extend v to a basis, and set S(v) = v and S = 0 on the other basis vectors.
Then Tv = TSv = STv ∈ range S = span(v), so every non-zero vector of V is an eigenvector
of T. By the argument of exercise 5.12 below, this forces T to be a scalar multiple of the identity.
25. Let T ∈ L(F²) be the operator T(a, b) = (a, 0) and S ∈ L(F²) the operator S(a, b) =
(0, b). Then neither one is injective and hence neither is invertible by 3.21. However, T + S is
the identity operator which is trivially invertible. This can be trivially generalized
to arbitrary spaces of dimension ≥ 2.
26. Write
A := ( a₁,₁ ⋯ a₁,ₙ ; ⋮ ; aₙ,₁ ⋯ aₙ,ₙ ),  x = ( x₁ ; ⋮ ; xₙ ).
Then the system of equations in (a) reduces to Ax = 0. Now A defines a linear map
from Mat(n, 1, F) to Mat(n, 1, F). What (a) states is now that the map is injective
while (b) states that it is surjective. By theorem 3.21 these are equivalent.
4 Polynomials
1. Let λ₁, …, λₘ be m distinct numbers and k₁, …, kₘ > 0 integers such that their sum is n.
Then (x − λ₁)^{k₁} ⋯ (x − λₘ)^{kₘ} is clearly such a polynomial.
2. Let pᵢ(x) = ∏_{j≠i}(x − zⱼ), so that deg pᵢ = m. Now we have that pᵢ(zⱼ) ≠ 0 if and
only if i = j. Let cᵢ = pᵢ(zᵢ). Define
p(x) = Σ_{i=1}^{m+1} wᵢcᵢ⁻¹pᵢ(x).
Now deg p ≤ m and clearly p(zᵢ) = wᵢ.
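This is just Lagrange interpolation; a small sympy sketch with made-up sample points z and values w illustrates the construction:

    import sympy as sp

    x = sp.symbols('x')
    z = [0, 1, 2]    # distinct points z_1, ..., z_{m+1} (illustration data)
    w = [5, -1, 4]   # desired values w_1, ..., w_{m+1} (illustration data)

    def p_i(i):
        # p_i(x) = prod over j != i of (x - z_j): non-zero at z_i, zero at the other z_j
        return sp.Mul(*[x - z[j] for j in range(len(z)) if j != i])

    p = sum(w[i] * p_i(i) / p_i(i).subs(x, z[i]) for i in range(len(z)))
    print(sp.expand(p))                 # the interpolating polynomial
    print([p.subs(x, zi) for zi in z])  # [5, -1, 4]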
3. We only need to prove uniqueness, as existence is theorem 4.5. Let s′, r′ be other such
polynomials. Then we get that
0 = (s − s′)p + (r − r′) ⇔ r′ − r = (s − s′)p.
We know that for any polynomials p ≠ 0 ≠ q we have deg pq = deg p + deg q.
Assuming that s ≠ s′ we have deg(r′ − r) < deg p ≤ deg((s − s′)p), which is impossible. Thus
s = s′, implying r = r′.
4. Let λ be a root of p. Thus we can write p(x) = (x − λ)q(x). Now we get
p′(x) = q(x) + (x − λ)q′(x).
Thus λ is a root of p′ if and only if λ is a root of q, i.e. λ is a multiple root of p. The
statement follows.
5. Let p(x) = a₀ + a₁x + … + aₙxⁿ, where aᵢ ∈ R and n is odd. By the fundamental theorem of algebra p
has a complex root z. By proposition 4.10, z̄ is also a root of p, and z and z̄ are
roots of equal multiplicity (divide by (x − z)(x − z̄), which is real). Thus if we had
no real roots, the number of roots counting multiplicity would be even, but the number
of roots counting multiplicity is deg p, hence odd. Thus p has a real root.
5 Eigenvalues and Eigenvectors
1. An arbitrary element of U₁ + … + Uₙ is of the form u₁ + … + uₙ where uᵢ ∈ Uᵢ. Thus
we get T(u₁ + … + uₙ) = T(u₁) + … + T(uₙ) ∈ U₁ + … + Uₙ by the assumption
that T(uᵢ) ∈ Uᵢ for all i.
2. Let V = ∩ᵢ Uᵢ where each Uᵢ is invariant under T. Let v ∈ V, so that v ∈ Uᵢ for all
i. Now T(v) ∈ Uᵢ for all i by assumption, so T(v) ∈ ∩ᵢ Uᵢ = V. Thus V is invariant
under T.
3. The claim is clearly true for U = {0} or U = V. Assume that {0} ≠ U ≠ V. Let
(u₁, …, uₙ) be a basis for U and extend it to a basis (u₁, …, uₙ, v₁, …, vₘ). By
our assumption m ≥ 1. Define a linear operator by T(uᵢ) = v₁, i = 1, …, n, and
T(vᵢ) = v₁, i = 1, …, m. Then T(u₁) = v₁ ∉ U, so U is not invariant under T.
4. Let u ∈ null(T − λI), so that T(u) − λu = 0. ST = TS gives us
0 = S(Tu − λu) = STu − λSu = TSu − λSu = (T − λI)Su,
so that Su ∈ null(T − λI).
5. Clearly T(1, 1) = (1, 1), so that 1 is an eigenvalue. Also T(−1, 1) = (1, −1) =
−(−1, 1), so −1 is another eigenvalue. By corollary 5.9 these are all the eigenvalues.
6. We easily see that T(0, 0, 1) = (0, 0, 5), so that 5 is an eigenvalue. Also T(1, 0, 0) =
(0, 0, 0), so 0 is an eigenvalue. Assume that λ ≠ 0 and T(z₁, z₂, z₃) = (2z₂, 0, 5z₃) =
λ(z₁, z₂, z₃). From λz₂ = 0 and λ ≠ 0 we get z₂ = 0, so the equation is of the
form (0, 0, 5z₃) = λ(z₁, 0, z₃). Again we see that z₁ = 0, so we get the equation
(0, 0, 5z₃) = λ(0, 0, z₃); as an eigenvector is non-zero, z₃ ≠ 0 and λ = 5. Thus 5 is the only non-zero eigenvalue.
7. Notice that the range of T is the subspace {(x, …, x) ∈ Fⁿ | x ∈ F}, which has dimension
1. Thus dim range T = 1, so dim null T = n − 1. Assume that T has two distinct
non-zero eigenvalues λ₁ ≠ 0 ≠ λ₂ with corresponding eigenvectors v₁, v₂; by theorem 5.6 they are
linearly independent. But vᵢ = λᵢ⁻¹T(vᵢ) ∈ range T, so v₁, v₂ would be two linearly
independent vectors in the one-dimensional space range T, which is impossible. Hence T has at most one non-zero
eigenvalue, hence at most two eigenvalues.
Because T is not injective we know that 0 is an eigenvalue. We also see that
T(1, …, 1) = n(1, …, 1), so n is another eigenvalue. By the previous paragraph,
these are all eigenvalues of T.
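A quick numpy sanity check of the eigenvalues (here for n = 5; in the standard basis the matrix of T is the all-ones matrix):

    import numpy as np

    n = 5
    A = np.ones((n, n))   # matrix of T(x_1,...,x_n) = (x_1+...+x_n, ..., x_1+...+x_n)
    print(np.round(np.linalg.eigvals(A), 10))  # n-1 zeroes and one eigenvalue n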
8. Let a ∈ F, a ≠ 0. Now we have T(a, a², a³, …) = (a², a³, …) = a(a, a², …), so every non-zero a ∈ F
is an eigenvalue. Also T(1, 0, 0, …) = 0, so 0 is an eigenvalue as well. Hence every a ∈ F is an eigenvalue.
9. Assume that T has k + 2 distinct eigenvalues λ₁, …, λ_{k+2} with corresponding eigenvectors
v₁, …, v_{k+2}. By theorem 5.6 these eigenvectors are linearly independent. Now
Tvᵢ = λᵢvᵢ and dim span(Tv₁, …, Tv_{k+2}) = dim span(λ₁v₁, …, λ_{k+2}v_{k+2}) ≥ k + 1
(it's k + 2 if all λᵢ are non-zero, otherwise k + 1). This is a contradiction as
dim range T = k and span(λ₁v₁, …, λ_{k+2}v_{k+2}) ⊂ range T.
10. As T = (T⁻¹)⁻¹ we only need to show this in one direction. If T is invertible, then
0 is not an eigenvalue. Now let λ be an eigenvalue of T and v a corresponding
eigenvector. From Tv = λv we get that T⁻¹λv = λT⁻¹v = v, so that T⁻¹v = λ⁻¹v.
11. Let λ be an eigenvalue of TS and v a corresponding eigenvector. Then we get
STSv = Sλv = λSv, so if Sv ≠ 0, then Sv is an eigenvector of ST for the eigenvalue λ. If
Sv = 0, then TSv = 0, so λ = 0. As Sv = 0 with v ≠ 0, we know that S is not injective, so ST
is not injective and it has eigenvalue 0. Thus if λ is an eigenvalue of TS, then it's an
eigenvalue of ST. The other implication follows by symmetry.
12. Let (v₁, …, vₙ) be a basis of V. By assumption (Tv₁, …, Tvₙ) = (λ₁v₁, …, λₙvₙ).
We need to show that λᵢ = λⱼ for all i, j. Choose i ≠ j; then by our assumption
vᵢ + vⱼ is an eigenvector, so T(vᵢ + vⱼ) = λᵢvᵢ + λⱼvⱼ = λ(vᵢ + vⱼ) = λvᵢ + λvⱼ for some λ. This
means that (λᵢ − λ)vᵢ + (λⱼ − λ)vⱼ = 0. Because (vᵢ, vⱼ) is linearly independent we
get that λᵢ − λ = 0 = λⱼ − λ, i.e. λᵢ = λⱼ.
13. Let v₁ ∈ V, v₁ ≠ 0, and extend it to a basis (v₁, …, vₙ) of V. Let Uᵢ be the subspace generated
by the vectors vⱼ, j ≠ i; by our assumption each Uᵢ is an invariant subspace. Write Tv₁ = a₁v₁ + … + aₙvₙ. Now v₁ ∈ Uⱼ for j > 1, so
for j > 1 we have Tv₁ ∈ Uⱼ, which implies aⱼ = 0. Thus Tv₁ = a₁v₁, i.e. v₁ is an
eigenvector. Since v₁ was arbitrary, every non-zero vector is an eigenvector, and the result now follows from the previous exercise.
14. Clearly (STS⁻¹)ⁿ = STⁿS⁻¹, so for p(x) = a₀ + a₁x + … + aₙxⁿ we get
p(STS⁻¹) = Σᵢ aᵢ(STS⁻¹)ⁱ = Σᵢ aᵢSTⁱS⁻¹ = S(Σᵢ aᵢTⁱ)S⁻¹ = S p(T) S⁻¹.
15. Let λ be an eigenvalue of T and v ∈ V a corresponding eigenvector. Let p(x) =
a₀ + a₁x + … + aₙxⁿ. Then we have
p(T)v = Σᵢ aᵢTⁱv = Σᵢ aᵢλⁱv = p(λ)v.
Thus p(λ) is an eigenvalue of p(T). Then let a be an eigenvalue of p(T) and v ∈ V
the corresponding eigenvector. Let q(x) = p(x) − a. By the fundamental theorem of
algebra we can write q(x) = c(x − λ₁) ⋯ (x − λₙ). Now q(T)v = 0 and q(T) = c(T −
λ₁I) ⋯ (T − λₙI). As q(T) is non-injective we have that T − λᵢI is non-injective for some i. Hence
λᵢ is an eigenvalue of T. Thus we get 0 = q(λᵢ) = p(λᵢ) − a, so that a = p(λᵢ).
16. Let T ∈ L(R²) be the map T(x, y) = (−y, x). On page 78 it was shown that it has
no eigenvalue. However, T²(x, y) = T(−y, x) = (−x, −y), so −1 is an eigenvalue of T².
17. By theorem 5.13, T has an upper-triangular matrix with respect to some basis (v₁, …, vₙ).
The claim now follows from proposition 5.12.
18. Let T ∈ L(F²) be the operator T(a, b) = (b, a) with respect to the standard basis.
Now T² = I, so T is clearly invertible. However, T has the matrix
( 0 1 )
( 1 0 )
with only zeroes on the diagonal.
19. Take the operator T in exercise 7. It is clearly not invertible, but its matrix
( 1 … 1 )
( ⋮ ⋱ ⋮ )
( 1 … 1 )
has no zeroes on the diagonal.
20. By theorem 5.6 we can choose a basis (v₁, …, vₙ) for V where vᵢ is an eigenvector of
T corresponding to the eigenvalue λᵢ. Let λ be the eigenvalue of S corresponding to
vᵢ. Then we get
STvᵢ = Sλᵢvᵢ = λᵢSvᵢ = λᵢλvᵢ = λλᵢvᵢ = λTvᵢ = Tλvᵢ = TSvᵢ,
so ST and TS agree on a basis of V. Hence they are equal.
21. Clearly 0 is an eigenvalue and the corresponding eigenvectors are the elements of null P_{U,W} \ {0} = W \ {0}.
Now assume λ ≠ 0 is an eigenvalue and v = u + w (u ∈ U, w ∈ W) is a corresponding eigenvector.
Then P_{U,W}(u + w) = u = λu + λw ⇔ (1 − λ)u = λw. Thus λw ∈ U ∩ W = {0}, so λw = 0, but
λ ≠ 0, so we have w = 0, which implies (1 − λ)u = 0. Because v is an eigenvector we
have v = u + w = u ≠ 0, so that 1 − λ = 0. Hence λ = 1. As we can choose
our u ∈ U freely so that it's non-zero, we see that the eigenvectors corresponding to 1 are
the elements of U \ {0}.
22. As dim V = dim null P + dim range P, it's clearly sufficient to prove that null P ∩
range P = {0}. Let v ∈ null P ∩ range P. As v ∈ range P, we can find a u ∈ V such
that Pu = v. Thus we have that v = Pu = P²u = Pv, but v ∈ null P, so Pv = 0.
Thus v = 0.
23. Let T(a, b, c, d) = (−b, a, −d, c); then T is clearly injective, so 0 is not an eigenvalue.
Assume λ ≠ 0 is an eigenvalue with λ(a, b, c, d) = (−b, a, −d, c). We clearly see that a =
0 ⇔ b = 0 and similarly with c, d. By symmetry we can assume that a ≠ 0 (an
eigenvector is non-zero). Then we have λa = −b and λb = a. Substituting we get
λ²b = −b ⇔ (λ² + 1)b = 0. As b ≠ 0 we have λ² + 1 = 0, but this equation has no
solutions in R. Hence T has no eigenvalue.
24. If U is an invariant subspace of odd dimension, then by theorem 5.26, T|U has an eigenvalue
λ with eigenvector v. Then λ is an eigenvalue of T with eigenvector v, against our
assumption. Thus T has no invariant subspace of odd dimension.
6 Inner-Product Spaces
1. From the law of cosines we get ‖x − y‖² = ‖x‖² + ‖y‖² − 2‖x‖‖y‖ cos θ. Solving we
get
‖x‖‖y‖ cos θ = (‖x‖² + ‖y‖² − ‖x − y‖²)/2.
Now let x = (x₁, x₂), y = (y₁, y₂). A straight calculation shows that
‖x‖² + ‖y‖² − ‖x − y‖² = x₁² + x₂² + y₁² + y₂² − (x₁ − y₁)² − (x₂ − y₂)² = 2(x₁y₁ + x₂y₂),
so that
‖x‖‖y‖ cos θ = (‖x‖² + ‖y‖² − ‖x − y‖²)/2 = 2(x₁y₁ + x₂y₂)/2 = ⟨x, y⟩.
2. If ⟨u, v⟩ = 0, then ⟨u, av⟩ = 0, so by the Pythagorean theorem ‖u‖² ≤ ‖u‖² + ‖av‖² =
‖u + av‖². Then assume that ‖u‖ ≤ ‖u + av‖ for all a ∈ F. We have ‖u‖² ≤ ‖u + av‖², i.e.
⟨u, u⟩ ≤ ⟨u + av, u + av⟩. Thus we get
⟨u, u⟩ ≤ ⟨u, u⟩ + ⟨u, av⟩ + ⟨av, u⟩ + ⟨av, av⟩,
so that
−2 Re ā⟨u, v⟩ ≤ |a|²‖v‖².
Choose a = −t⟨u, v⟩ with t > 0, so that
2t|⟨u, v⟩|² ≤ t²|⟨u, v⟩|²‖v‖² ⇔ 2|⟨u, v⟩|² ≤ t|⟨u, v⟩|²‖v‖².
If v = 0, then clearly ⟨u, v⟩ = 0. If not, choose t = 1/‖v‖², so that we get
2|⟨u, v⟩|² ≤ |⟨u, v⟩|².
Thus ⟨u, v⟩ = 0.
3. Let a = (a₁, √2 a₂, …, √n aₙ) ∈ Rⁿ and b = (b₁, b₂/√2, …, bₙ/√n) ∈ Rⁿ. The
inequality is then simply ⟨a, b⟩² ≤ ‖a‖²‖b‖², which follows directly from the Cauchy-
Schwarz inequality.
4. From the parallelogram equality we have ‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²).
Solving for ‖v‖ we get ‖v‖ = √17.
5. Set e.g. u = (1, 0), v = (0, 1). Then ‖u‖ = 1, ‖v‖ = 1, ‖u + v‖ = 2, ‖u − v‖ = 2.
Assuming that the norm were induced by an inner product, we would have by the
parallelogram equality
8 = 2² + 2² = 2(1² + 1²) = 4,
which is clearly false.
6. Just use ‖u‖² = ⟨u, u⟩ and simplify.
7. See previous exercise.
8. This exercise is a lot trickier than it might seem. I'll prove it for R; the proof for C
is almost identical except that the calculations are longer and more tedious. All
norms on a finite-dimensional real vector space are equivalent. This gives us
lim_{n→∞} ‖rₙx + y‖ = ‖λx + y‖
when rₙ → λ. If this doesn't make any sense, check a book on topology
or functional analysis, or just assume the result.
Define ⟨u, v⟩ by
⟨u, v⟩ = (‖u + v‖² − ‖u − v‖²)/4.
Trivially we have ‖u‖² = ⟨u, u⟩, so positive definiteness follows. Now
4(⟨u + v, w⟩ − ⟨u, w⟩ − ⟨v, w⟩) = ‖u + v + w‖² − ‖u + v − w‖² − (‖u + w‖² − ‖u − w‖²) − (‖v + w‖² − ‖v − w‖²)
= ‖u + v + w‖² − ‖u + v − w‖² + (‖u − w‖² + ‖v − w‖²) − (‖u + w‖² + ‖v + w‖²).
We can apply the parallelogram equality to the two parentheses, getting
4(⟨u + v, w⟩ − ⟨u, w⟩ − ⟨v, w⟩) = ‖u + v + w‖² − ‖u + v − w‖² + ½(‖u + v − 2w‖² + ‖u − v‖²) − ½(‖u + v + 2w‖² + ‖u − v‖²)
= ‖u + v + w‖² − ‖u + v − w‖² + ½‖u + v − 2w‖² − ½‖u + v + 2w‖²
= (‖u + v + w‖² + ‖w‖²) + ½‖u + v − 2w‖² − (‖u + v − w‖² + ‖w‖²) − ½‖u + v + 2w‖².
Applying the parallelogram equality to the two parentheses and simplifying, we
are left with 0. Hence we get additivity in the first slot. It's easy to see that
⟨−u, v⟩ = −⟨u, v⟩, so we get ⟨nu, v⟩ = n⟨u, v⟩ for all n ∈ Z. Now we get
⟨u, v⟩ = ⟨n(u/n), v⟩ = n⟨u/n, v⟩,
so that ⟨u/n, v⟩ = (1/n)⟨u, v⟩. Thus we have ⟨(n/m)u, v⟩ = (n/m)⟨u, v⟩. It follows that
we have homogeneity in the first slot when the scalar is rational. Now let λ ∈ R and
choose a sequence (rₙ) of rational numbers such that rₙ → λ. This gives us
λ⟨u, v⟩ = lim_{n→∞} rₙ⟨u, v⟩ = lim_{n→∞} ⟨rₙu, v⟩
= lim_{n→∞} ¼(‖rₙu + v‖² − ‖rₙu − v‖²)
= ¼(‖λu + v‖² − ‖λu − v‖²)
= ⟨λu, v⟩.
Thus we have homogeneity in the first slot. We trivially also have symmetry, so we
have an inner product. The proof for F = C can be done by defining the inner product
from the complex polarization identity: with
⟨u, v⟩₀ = ¼(‖u + v‖² − ‖u − v‖²), set ⟨u, v⟩ = ⟨u, v⟩₀ + ⟨u, iv⟩₀ i,
and use the properties just proved for ⟨·, ·⟩₀.
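A numeric sanity check that the polarization formula recovers the usual dot product on R³ (a Python/numpy sketch with random vectors):

    import numpy as np

    rng = np.random.default_rng(0)
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    lhs = (np.linalg.norm(u + v)**2 - np.linalg.norm(u - v)**2) / 4
    print(np.isclose(lhs, u @ v))  # True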
9. This is really an exercise in calculus. We have integrals of the types
⟨sin nx, sin mx⟩ = ∫_{−π}^{π} sin nx sin mx dx,
⟨cos nx, cos mx⟩ = ∫_{−π}^{π} cos nx cos mx dx,
⟨sin nx, cos mx⟩ = ∫_{−π}^{π} sin nx cos mx dx,
and they can be evaluated using the trigonometric identities
sin nx sin mx = (cos((n − m)x) − cos((n + m)x))/2,
cos nx cos mx = (cos((n − m)x) + cos((n + m)x))/2,
sin nx cos mx = (sin((n + m)x) + sin((n − m)x))/2.
10. e₁ = 1; then e₂ = (x − ⟨x, 1⟩1)/‖x − ⟨x, 1⟩1‖, where
⟨x, 1⟩ = ∫₀¹ x dx = 1/2,
so that
‖x − 1/2‖² = ⟨x − 1/2, x − 1/2⟩ = ∫₀¹ (x − 1/2)² dx = 1/12,
which gives e₂ = √12(x − 1/2) = √3(2x − 1). Then to continue pencil pushing we
have
⟨x², 1⟩ = 1/3,  ⟨x², √3(2x − 1)⟩ = √3/6,
so that x² − ⟨x², 1⟩1 − ⟨x², √3(2x − 1)⟩√3(2x − 1) = x² − 1/3 − (1/2)(2x − 1). Now
‖x² − 1/3 − (1/2)(2x − 1)‖ = 1/(6√5),
giving e₃ = √5(6x² − 6x + 1). The orthonormal basis is thus (1, √3(2x − 1), √5(6x² −
6x + 1)).
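The orthonormality can be verified with a short sympy sketch, computing the Gram matrix of the three polynomials under ⟨p, q⟩ = ∫₀¹ pq dx:

    import sympy as sp

    x = sp.symbols('x')
    e = [sp.Integer(1), sp.sqrt(3)*(2*x - 1), sp.sqrt(5)*(6*x**2 - 6*x + 1)]
    gram = [[sp.integrate(p*q, (x, 0, 1)) for q in e] for p in e]
    print(gram)  # the 3x3 identity matrix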
11. Let (v₁, …, vₙ) be a linearly dependent list. We can assume that (v₁, …, v_{n−1}) is
linearly independent and vₙ ∈ span(v₁, …, v_{n−1}). Let P denote the projection onto the
subspace spanned by (v₁, …, v_{n−1}). Then calculating eₙ with the Gram-Schmidt
algorithm gives us
eₙ = (vₙ − P(vₙ))/‖vₙ − P(vₙ)‖,
but vₙ = P(vₙ) by our assumption, so the numerator is the zero vector and the formula
divides by zero. It's easy to see that we can extend the algorithm to work for
linearly dependent lists by simply tossing away the vectors vₙ for which vₙ − P(vₙ) = 0.
12. Let (e₁, …, e_{n−1}) be an orthonormal list of vectors. Assume that (e₁, …, eₙ) and
(e₁, …, e_{n−1}, eₙ′) are orthonormal and span the same subspace. Then we can write
eₙ′ = a₁e₁ + … + aₙeₙ. Now we have ⟨eₙ′, eᵢ⟩ = 0 for all i < n, so that aᵢ = 0 for
i < n. Thus we have eₙ′ = aₙeₙ and from ‖eₙ′‖ = 1 we have aₙ = ±1.
Let (e₁, …, eₙ) be the orthonormal basis produced from (v₁, …, vₙ) by Gram-Schmidt.
Then if (e₁′, …, eₙ′) satisfies the hypothesis of the problem, we have by the previous
paragraph that eᵢ′ = ±eᵢ. Thus we have 2ⁿ possible such orthonormal lists.
13. Extend (e₁, …, eₘ) to a basis (e₁, …, eₙ) of V. By theorem 6.17
v = ⟨v, e₁⟩e₁ + … + ⟨v, eₙ⟩eₙ,
so that by the Pythagorean theorem
‖v‖² = ‖⟨v, e₁⟩e₁‖² + … + ‖⟨v, eₙ⟩eₙ‖² = |⟨v, e₁⟩|² + … + |⟨v, eₙ⟩|².
Thus we have ‖v‖² = |⟨v, e₁⟩|² + … + |⟨v, eₘ⟩|² if and only if ⟨v, eᵢ⟩ = 0 for i > m, i.e.
v ∈ span(e₁, …, eₘ).
14. It's easy to see that the differentiation operator has an upper-triangular matrix in the
orthonormal basis calculated in exercise 10.
15. This follows directly from V = U ⊕ U⊥.
16. This follows directly from the previous exercise.
17. From exercise 5.22 we have that V = range P ⊕ null P. Let U := range P; then by
assumption null P ⊂ U⊥. From exercise 15, we have dim U⊥ = dim V − dim U =
dim null P, so that null P = U⊥. An arbitrary v ∈ V can be written as v = Pv + (v −
Pv). From P² = P we have that P(v − Pv) = 0, so v − Pv ∈ null P = U⊥. Hence
the decomposition v = Pv + (v − Pv) is the unique decomposition in U ⊕ U⊥. Now
Pv = P(Pv + (v − Pv)) = Pv, so that P is the identity on U. By definition P = P_U.
18. Let u ∈ range P; then u = Pv for some v ∈ V, hence Pu = P²v = Pv = u, so P is
the identity on range P. Let w ∈ null P. Then for u ∈ range P and a ∈ F,
‖u‖² = ‖P(u + aw)‖² ≤ ‖u + aw‖².
By exercise 2 we have ⟨u, w⟩ = 0. Thus null P ⊂ (range P)⊥ and from the dimension
equality null P = (range P)⊥. Hence the claim follows.
19. If TP_U = P_U T P_U, then for u ∈ U we have Tu = TP_U u = P_U TP_U u ∈ U, so U is
invariant. Now assume that U is invariant. Then for any v ∈ V we have P_U v ∈ U, so
TP_U v ∈ U and hence P_U TP_U v = TP_U v.
20. Let u ∈ U; then we have Tu = TP_U u = P_U Tu ∈ U. Thus U is invariant. Then let
w ∈ U⊥. Now we can write Tw = u + u′ where u ∈ U and u′ ∈ U⊥. Now P_U Tw = u,
but u = P_U Tw = TP_U w = T0 = 0, so Tw ∈ U⊥. Thus U⊥ is also invariant.
21. First we need to find an orthonormal basis for U. With Gram-Schmidt we get e₁ =
(1/√2, 1/√2, 0, 0), e₂ = (0, 0, 1/√5, 2/√5), and U = span(e₁, e₂). Then we have
u = P_U(1, 2, 3, 4) = ⟨(1, 2, 3, 4), e₁⟩e₁ + ⟨(1, 2, 3, 4), e₂⟩e₂ = (3/2, 3/2, 11/5, 22/5).
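The arithmetic can be checked with a few lines of numpy:

    import numpy as np

    e1 = np.array([1, 1, 0, 0]) / np.sqrt(2)
    e2 = np.array([0, 0, 1, 2]) / np.sqrt(5)
    v = np.array([1.0, 2.0, 3.0, 4.0])
    u = (v @ e1) * e1 + (v @ e2) * e2   # P_U v
    print(u)  # [1.5 1.5 2.2 4.4], i.e. (3/2, 3/2, 11/5, 22/5)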
22. If p(0) = 0 and p′(0) = 0, then p(x) = ax² + bx³. Thus we want to find the
projection of 2 + 3x onto the subspace U := span(x², x³). With Gram-Schmidt we get
the orthonormal basis (√5 x², √7(6x³ − 5x²)): indeed ‖x²‖² = ∫₀¹ x⁴ dx = 1/5, and
x³ − ⟨x³, √5x²⟩√5x² = x³ − (5/6)x², whose norm is 1/(6√7). We get
P_U(2 + 3x) = ⟨2 + 3x, √5x²⟩√5x² + ⟨2 + 3x, √7(6x³ − 5x²)⟩√7(6x³ − 5x²).
Here
⟨2 + 3x, √5x²⟩ = (17/12)√5,  ⟨2 + 3x, √7(6x³ − 5x²)⟩ = −(29/60)√7,
so that
P_U(2 + 3x) = 24x² − (203/10)x³.
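The result can be double-checked by solving the normal equations for p = ax² + bx³ directly, an equivalent least-squares formulation (a sympy sketch):

    import sympy as sp

    x, a, b = sp.symbols('x a b')
    r = 2 + 3*x - a*x**2 - b*x**3            # residual 2 + 3x - p(x)
    eqs = [sp.integrate(r*x**2, (x, 0, 1)),  # <r, x^2> = 0
           sp.integrate(r*x**3, (x, 0, 1))]  # <r, x^3> = 0
    print(sp.solve(eqs, [a, b]))  # {a: 24, b: -203/10}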
23. There is nothing special about this exercise compared to the previous two, except
that it takes ages to calculate.
24. We see that the map T : 𝒫₂(R) → R defined by p ↦ p(1/2) is linear. From exercise
10 we get that (1, √3(2x − 1), √5(6x² − 6x + 1)) is an orthonormal basis for 𝒫₂(R).
From theorem 6.45 we see that
q(x) = 1 + √3(2·(1/2) − 1)·√3(2x − 1) + √5(6·(1/2)² − 6·(1/2) + 1)·√5(6x² − 6x + 1)
= −3/2 + 15x − 15x².
25. The map T : 𝒫₂(R) → R defined by p ↦ ∫₀¹ p(x) cos(πx) dx is clearly linear. Again
we have the orthonormal basis (1, √3(2x − 1), √5(6x² − 6x + 1)), so that
q(x) = ∫₀¹ cos(πx) dx + (∫₀¹ √3(2x − 1) cos(πx) dx)·√3(2x − 1)
+ (∫₀¹ √5(6x² − 6x + 1) cos(πx) dx)·√5(6x² − 6x + 1)
= 0 − (4√3/π²)·√3(2x − 1) + 0 = −(12/π²)(2x − 1).
26. Choose an orthonormal basis (e₁, …, eₙ) for V and let F have the usual basis. Then
by proposition 6.47 we have
ℳ(T, (e₁, …, eₙ)) = [⟨e₁, v⟩ ⋯ ⟨eₙ, v⟩],
so that ℳ(T*, (e₁, …, eₙ)) is the conjugate transpose, the column with entries
⟨v, e₁⟩, …, ⟨v, eₙ⟩, and finally T*a = (a⟨v, e₁⟩, …, a⟨v, eₙ⟩) = av.
27. With the usual basis for Fⁿ, the matrix of T (where T(z₁, …, zₙ) = (0, z₁, …, z_{n−1}))
has ones on the subdiagonal and zeroes elsewhere. So by proposition 6.47, ℳ(T*) is the
conjugate transpose, with ones on the superdiagonal. Clearly T*(z₁, …, zₙ) = (z₂, …, zₙ, 0).
28. By additivity and conjugate homogeneity we have (T − λI)* = T* − λ̄I. From exercise
31 we see that T* − λ̄I is non-injective if and only if T − λI is non-injective. That is,
λ̄ is an eigenvalue of T* if and only if λ is an eigenvalue of T.
29. As (U⊥)⊥ = U and (T*)* = T, it's enough to show only one of the implications. Let
U be invariant under T and choose w ∈ U⊥. Now we can write T*w = u + v, where
u ∈ U and v ∈ U⊥, so that
0 = ⟨Tu, w⟩ = ⟨u, T*w⟩ = ⟨u, u + v⟩ = ⟨u, u⟩ = ‖u‖².
Thus u = 0, so T*w = v ∈ U⊥, which completes the proof.
30. As (T*)* = T, it's sufficient to prove the first part. Assume that T is injective; then
by exercise 31 we have
dim V = dim range T = dim range T*,
but range T* is a subspace of V, so that range T* = V. Conversely, if T* is surjective,
the same computation run backwards gives dim range T = dim V, so T is injective.
31. From proposition 6.46 and exercise 15 we get
dim null T* = dim(range T)⊥ = dim W − dim range T
= dim null T + dim W − dim V.
For the second part we have
dim range T* = dim W − dim null T*
= dim V − dim null T
= dim range T,
where the second equality follows from the first part.
32. Let T be the operator induced by A. The columns are the images of the basis vectors
under T, so they generate range T. Hence the dimension of the span of the column
vectors equals dim range T. By proposition 6.47 the span of the (conjugated) row vectors equals
range T*, and conjugation doesn't change the dimension of the span. By the previous exercise
dim range T = dim range T*, so the claim follows.
7 Operators on Inner-Product Spaces
1. For the first part, let p(x) = x and q(x) = 1. Then clearly 1/2 = ⟨Tp, q⟩ ≠ ⟨p, Tq⟩ =
⟨p, 0⟩ = 0. For the second part it's not a contradiction, since the basis is not orthonormal.
2. Choose the standard basis for R². Let T and S be the operators defined by the
matrices
( 0 1 )   ( 0 0 )
( 1 0 ),  ( 0 1 ).
Then T and S are clearly self-adjoint, but TS has the matrix
( 0 1 )
( 0 0 ),
so TS is not self-adjoint.
3. If T and S are self-adjoint, then (aT + bS)* = (aT)* + (bS)* = aT + bS for any a, b ∈ R,
so self-adjoint operators form a subspace. The identity operator I is self-adjoint, but
clearly (iI)* = −iI, so iI is not self-adjoint.
4. Assume that P is self-adjoint. Then from proposition 6.46 we have null P = null P* =
(range P)⊥. Now P is clearly an orthogonal projection onto range P. Then assume that P is such a
projection. Let (u₁, …, uₙ) be an orthonormal basis for range P and extend it to an orthonormal basis
of V. Then clearly P has a real diagonal matrix with respect to this basis, so P is
self-adjoint.
5. Choose the operators corresponding to the matrices
A = ( 0 −1 ; 1 0 ),  B = ( 1 1 ; 1 0 ).
Then we easily see that A*A = AA* and B*B = BB*. However, A + B doesn't
define a normal operator, which is easy to check.
6. From proposition 7.6 we get null T = null T*. It follows from proposition 6.46 that
range T = (null T*)⊥ = (null T)⊥ = range T*.
7. Clearly null T ⊂ null Tᵏ. Let v ∈ null Tᵏ. Using normality (TT* = T*T) we have
⟨T*Tᵏ⁻¹v, T*Tᵏ⁻¹v⟩ = ⟨TT*Tᵏ⁻¹v, Tᵏ⁻¹v⟩ = ⟨T*Tᵏv, Tᵏ⁻¹v⟩ = 0,
so that T*Tᵏ⁻¹v = 0. Now
⟨Tᵏ⁻¹v, Tᵏ⁻¹v⟩ = ⟨T*Tᵏ⁻¹v, Tᵏ⁻²v⟩ = 0,
so that v ∈ null Tᵏ⁻¹. Thus null Tᵏ ⊂ null Tᵏ⁻¹, and continuing we get null Tᵏ ⊂
null T.
Now let u ∈ range Tᵏ; then we can find a v ∈ V such that u = Tᵏv = T(Tᵏ⁻¹v), so
that range Tᵏ ⊂ range T. From the first part we get dim range Tᵏ = dim range T, so
range Tᵏ = range T.
8. The vectors u = (1, 2, 3) and v = (2, 5, 7) are both eigenvectors corresponding to
different eigenvalues. A self-adjoint operator is normal, and by corollary 7.8 normality
would imply orthogonality of u and v. Clearly ⟨u, v⟩ = 33 ≠ 0, so T can't be
even normal, much less self-adjoint.
9. If T is normal, we can choose a basis of V consisting of eigenvectors of T. Let A be
the (diagonal) matrix of T corresponding to the basis. Now T is self-adjoint if and only if the
conjugate transpose of A equals A, that is, if and only if the eigenvalues of T are real.
10. For any v ∈ V we get 0 = T⁹v − T⁸v = T⁸(Tv − v). Thus Tv − v ∈ null T⁸ = null T
by the normality of T. Hence T(Tv − v) = T²v − Tv = 0, so T² = T. By the spectral
theorem we can choose a basis of eigenvectors for T such that T has a diagonal matrix
with λ₁, …, λₙ on the diagonal. Now T² has the diagonal matrix with λ₁², …, λₙ² on
the diagonal, and from T² = T we must have λᵢ² = λᵢ for all i. Hence λᵢ = 0 or λᵢ = 1,
so the matrix of T equals its conjugate transpose. Hence T is self-adjoint.
11. By the spectral theorem we can choose a basis of eigenvectors of T such that the matrix of T
corresponding to the basis is a diagonal matrix. Let λ₁, …, λₙ be the diagonal
elements. Let S be the operator corresponding to the diagonal matrix having the
elements √λ₁, …, √λₙ on the diagonal. Clearly S² = T.
12. Let T be the operator corresponding to the matrix
( 0 −1 )
( 1  0 ).
Then T² + I = 0.
13. By the spectral theorem we can choose a basis consisting of eigenvectors of T. Then
T has a diagonal matrix with respect to the basis. Let λ₁, …, λₙ be the diagonal
elements. Let S be the operator corresponding to the diagonal matrix having
∛λ₁, …, ∛λₙ on the diagonal. Then clearly S³ = T.
14. By the spectral theorem we can choose an orthonormal basis (v₁, …, vₙ) consisting of eigenvectors
of T, with Tvᵢ = λᵢvᵢ. Let v ∈ V be such that ‖v‖ = 1. If v = a₁v₁ + … + aₙvₙ, this implies that
|a₁|² + … + |aₙ|² = 1. Assume that ‖Tv − λv‖ < ε. If |λᵢ − λ| ≥ ε for all i, then
‖Tv − λv‖² = ‖Σᵢ aᵢ(λᵢ − λ)vᵢ‖² = Σᵢ |aᵢ|²|λᵢ − λ|² ≥ ε² Σᵢ |aᵢ|² = ε²,
which is a contradiction. Thus we can find an eigenvalue λᵢ such that |λ − λᵢ| < ε.
15. If such an inner product exists, we have a basis consisting of eigenvectors by the
spectral theorem. Assume then that (v₁, …, vₙ) is a basis consisting of eigenvectors of T. Define an inner product by
⟨vᵢ, vⱼ⟩ = 0 if i ≠ j, and ⟨vᵢ, vⱼ⟩ = 1 if i = j,
and extend this to an inner product by additivity and (conjugate) homogeneity. Then we have an
inner product, and clearly T is self-adjoint, as (v₁, …, vₙ) is an orthonormal basis of
V and the matrix of T in this basis is real and diagonal.
16. Let T ∈ L(R²) be the operator T(a, b) = (a, a + b). Then span((1, 0)) is invariant
under T, but its orthogonal complement span((0, 1)) is clearly not.
17. Let T, S be two positive operators. Then they are self-adjoint and (S + T)* =
S* + T* = S + T, so S + T is self-adjoint. Also ⟨(S + T)v, v⟩ = ⟨Sv, v⟩ + ⟨Tv, v⟩ ≥ 0.
Thus, S + T is positive.
18. Clearly Tᵏ is self-adjoint for every positive integer k. For k = 2 we have ⟨T²v, v⟩ =
⟨Tv, Tv⟩ ≥ 0. Assume that the result is true for all positive k < n, where n ≥ 2. Then
⟨Tⁿv, v⟩ = ⟨Tⁿ⁻²(Tv), Tv⟩ ≥ 0,
by hypothesis and self-adjointness. Hence the result follows by induction.
19. Clearly ⟨Tv, v⟩ > 0 for all v ∈ V \ {0} implies that T is injective, hence invertible.
So assume that T is invertible. Since T is self-adjoint we can choose an orthonormal
basis (v₁, …, vₙ) of eigenvectors such that T has a diagonal matrix with respect to
the basis. Let the elements on the diagonal be λ₁, …, λₙ; since T is positive and invertible, λᵢ > 0
for all i. Let v = a₁v₁ + … + aₙvₙ ∈ V \ {0}, so that
⟨Tv, v⟩ = |a₁|²λ₁ + … + |aₙ|²λₙ > 0.
20. The matrix
( sin θ  cos θ )
( cos θ −sin θ )
is a square root of the identity for every θ ∈ R, which shows that the identity has infinitely
many square roots.
21. Let S ∈ L(R²) be defined by S(a, b) = (a + b, 0). Then ‖S(1, 0)‖ = ‖(1, 0)‖ = 1 and
‖S(0, 1)‖ = 1, but S is clearly not an isometry: ‖S(1, 1)‖ = 2 ≠ √2 = ‖(1, 1)‖.
22. R³ is an odd-dimensional real vector space. Hence S has an eigenvalue λ and a
corresponding eigenvector v. Since ‖Sv‖ = |λ|‖v‖ = ‖v‖ we have λ = ±1, so λ² = 1. Hence
S²v = λ²v = v.
23. The matrices corresponding to T and T* are
ℳ(T) = ( 0 0 1 ; 2 0 0 ; 0 3 0 ),  ℳ(T*) = ( 0 2 0 ; 0 0 3 ; 1 0 0 ).
Thus we get
ℳ(T*T) = ( 4 0 0 ; 0 9 0 ; 0 0 1 ),
so that T*T is the operator (z₁, z₂, z₃) ↦ (4z₁, 9z₂, z₃). Hence √(T*T)(z₁, z₂, z₃) =
(2z₁, 3z₂, z₃). Now we just need to permute the coordinates, which corresponds to the
isometry S(z₁, z₂, z₃) = (z₃, z₁, z₂).
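A numpy check of this polar decomposition T = S√(T*T), using the matrices above:

    import numpy as np

    T = np.array([[0, 0, 1],
                  [2, 0, 0],
                  [0, 3, 0]])
    R = np.diag([2, 3, 1])               # sqrt(T*T)
    S = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [0, 1, 0]])            # the isometry (z1,z2,z3) -> (z3,z1,z2)
    print(np.array_equal(T, S @ R))                       # True
    print(np.array_equal(S.T @ S, np.eye(3, dtype=int)))  # True: S is an isometry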
24. Clearly T*T is positive, so by proposition 7.26 it has a unique positive square root.
Thus it's sufficient to show that R² = T*T. Now we have T* = (SR)* = R*S* = RS*
by self-adjointness of R. Thus R² = RIR = RS*SR = T*T.
25. Assume that T is invertible. Then √(T*T) must be invertible (polar decomposition),
hence S = T(√(T*T))⁻¹, so S is uniquely determined.
Now assume that T is not invertible, so that √(T*T) is not invertible. By the spectral
theorem we can choose an orthonormal basis (v₁, …, vₙ) of eigenvectors of √(T*T), and we can
assume that v₁ corresponds to the eigenvalue 0. Now let T = S√(T*T) be a polar decomposition. Define U
by Uv₁ = −Sv₁ and Uvᵢ = Svᵢ, i > 1. Clearly U ≠ S and T = U√(T*T), since √(T*T)v₁ = 0. Now choose
u, v ∈ V. Then it's easy to verify that ⟨Uu, Uv⟩ = ⟨Su, Sv⟩ = ⟨u, v⟩, so U is an
isometry.
26. Choose a basis of eigenvectors for T such that T has a diagonal matrix with eigenvalues
λ₁, …, λₙ on the diagonal. Now T*T = T² corresponds to the diagonal matrix
with λ₁², …, λₙ² on the diagonal, and the square root √(T*T) corresponds to the diagonal
matrix having |λ₁|, …, |λₙ| on the diagonal. These are the singular values, so
the claim follows.
27. Let T ∈ L(R²) be the map T(a, b) = (0, a). Then from the matrix representation
we see that T*T(a, b) = (a, 0), hence √(T*T) = T*T, and 1 is clearly a singular value.
However, T² = 0, so √((T²)*T²) = 0, so 1² = 1 is not a singular value of T².
28. The composition of two bijections is a bijection. If T = S√(T*T), then since S is an
isometry we have that T is bijective (hence invertible) if and only if √(T*T) is bijective.
But √(T*T) is injective (hence bijective) if and only if 0 is not one of its eigenvalues. Thus T is
bijective if and only if 0 is not a singular value.
29. From the polar decomposition theorem we see that dim range T = dim range √(T*T).
Since √(T*T) is self-adjoint we can choose a basis of eigenvectors of √(T*T) such that
the matrix of √(T*T) is a diagonal matrix. Clearly dim range √(T*T) equals the number
of non-zero elements on the diagonal, i.e. the number of non-zero singular values.
30. If S is an isometry, then clearly √(S*S) = I, so all singular values are 1. Now assume
that all singular values are 1. Then √(S*S) is a self-adjoint (positive) operator with
all eigenvalues equal to 1. By the spectral theorem we can choose a basis of V such
that √(S*S) has a diagonal matrix. As all eigenvalues are one, this means that the
matrix of √(S*S) is the identity matrix. Thus S*S = I, so S is an isometry.
31. Let T₁ = S₁√(T₁*T₁) and T₂ = S₂√(T₂*T₂) be polar decompositions. Assume that T₁ and T₂ have the same
singular values s₁, …, sₙ. Then we can choose orthonormal bases of eigenvectors (v₁, …, vₙ) of √(T₁*T₁) and
(w₁, …, wₙ) of √(T₂*T₂) such that the two square roots have the same diagonal matrix. Let S be the operator defined by S(wᵢ) = vᵢ; then clearly S is an isometry
and √(T₂*T₂) = S⁻¹√(T₁*T₁)S. Thus we get T₂ = S₂√(T₂*T₂) = S₂S⁻¹√(T₁*T₁)S. Writing
S₃ = S₂S⁻¹S₁⁻¹, which is clearly an isometry, we get
T₂ = S₃S₁√(T₁*T₁)S = S₃T₁S.
Now let T₂ = S₁T₁S₂ with S₁, S₂ isometries. Then we have T₂*T₂ = (S₁T₁S₂)*(S₁T₁S₂) = S₂*T₁*S₁*S₁T₁S₂ =
S₂⁻¹T₁*T₁S₂. Let λ be an eigenvalue of T₁*T₁ and v a corresponding eigenvector.
Because S₂ is bijective we have a u ∈ V such that v = S₂u. This gives us
T₂*T₂u = S₂⁻¹T₁*T₁S₂u = S₂⁻¹T₁*T₁v = S₂⁻¹λv = λu,
so that λ is an eigenvalue of T₂*T₂. Moreover, if (v₁, …, vₙ) is an orthonormal basis of eigenvectors
of T₁*T₁, then (S₂⁻¹v₁, …, S₂⁻¹vₙ) is an orthonormal basis for V of eigenvectors
of T₂*T₂. Hence T₂*T₂ and T₁*T₁ have the same eigenvalues. The singular values are
simply the positive square roots of these eigenvalues, so the claim follows.
32. For the first part, denote by S the map defined by the formula. Set
A := ℳ(S, (f₁, …, fₙ), (e₁, …, eₙ)),  B := ℳ(T, (e₁, …, eₙ), (f₁, …, fₙ)).
Clearly A = B, and B is a diagonal matrix with s₁, …, sₙ on the diagonal. All the sᵢ
are positive real numbers, so that B* = B = A. It follows from proposition 6.47
that S = T*.
For the second part observe that a linear map S defined by the formula maps fᵢ
to sᵢ⁻¹eᵢ, which is mapped by T to fᵢ. Thus TS = I. The claim now follows from
exercise 3.23.
33. By definition √(T*T) is positive, and here all singular values of T are positive. From the
singular value decomposition theorem we can choose orthonormal bases (e₁, …, eₙ) and (f₁, …, fₙ) of V such that
Tv = Σᵢ sᵢ⟨v, eᵢ⟩fᵢ, so that
‖Tv‖² = ‖Σᵢ sᵢ⟨v, eᵢ⟩fᵢ‖² = Σᵢ sᵢ²|⟨v, eᵢ⟩|²,
since (fᵢ) is orthonormal. Writing ŝ and s for the smallest and largest singular value, we clearly get
ŝ²‖v‖² = ŝ² Σᵢ |⟨v, eᵢ⟩|² ≤ Σᵢ sᵢ²|⟨v, eᵢ⟩|² = ‖Tv‖² ≤ s² Σᵢ |⟨v, eᵢ⟩|² = s²‖v‖²,
and taking square roots gives ŝ‖v‖ ≤ ‖Tv‖ ≤ s‖v‖.
34. From the triangle inequality and the previous exercise we get
‖(T′ + T″)v‖ ≤ ‖T′v‖ + ‖T″v‖ ≤ (s′ + s″)‖v‖,
where s′ and s″ denote the largest singular values of T′ and T″. Now let v be an eigenvector of
√((T′ + T″)*(T′ + T″)) corresponding to the singular value s, and let
T′ + T″ = S√((T′ + T″)*(T′ + T″)) be a polar decomposition, so that
‖(T′ + T″)v‖ = ‖S√((T′ + T″)*(T′ + T″))v‖ = ‖S(sv)‖ = s‖v‖ ≤ (s′ + s″)‖v‖,
so that s ≤ s′ + s″.
8 Operators on Complex Vector Spaces
1. T is not injective, so 0 is an eigenvalue of T. We see that T² = 0, so any v ∈ V
is a generalized eigenvector corresponding to the eigenvalue 0.
2. On page 78 it's shown that ±i are the eigenvalues of T. We have dim null(T − iI)² +
dim null(T + iI)² = dim C² = 2, so that dim null(T − iI)² = dim null(T + iI)² = 1.
Now (1, −i) is an eigenvector corresponding to the eigenvalue i and (1, i) is an
eigenvector corresponding to the eigenvalue −i, so the sets of generalized eigenvectors
are simply the spans of these corresponding eigenvectors.
3. Suppose a₀v + a₁Tv + … + a_{m−1}Tᵐ⁻¹v = 0. Applying Tᵐ⁻¹ to both sides gives
0 = a₀Tᵐ⁻¹v, so that a₀ = 0, since Tᵐ⁻¹v ≠ 0. Applying
Tᵐ⁻² and so on in turn, we see that aᵢ = 0 for all i. Hence (v, Tv, …, Tᵐ⁻¹v)
is linearly independent.
4. We see that T³ = 0, but T² ≠ 0. Assume that S is a square root of T; then S⁶ = T³ = 0, so S
is nilpotent, and by corollary 8.8 we have S^{dim V} = S³ = 0, so that 0 = S⁴ = T².
Contradiction.
5. If ST is nilpotent, then there exists n ∈ N such that (ST)ⁿ = 0, so (TS)ⁿ⁺¹ = T(ST)ⁿS = 0.
6. Assume that λ ≠ 0 is an eigenvalue of N with corresponding eigenvector v. Then
N^{dim V}v = λ^{dim V}v ≠ 0. This contradicts corollary 8.8.
7. By lemma 8.26 we can find a basis of V such that the matrix of N is upper-triangular
with zeroes on the diagonal. Now the matrix of N* is the conjugate transpose, which is
lower-triangular with zeroes on the diagonal. Since N* = N, the matrix of N must be zero, so the claim follows.
8. If null N^{dim V − 1} ≠ null N^{dim V}, then dim null Nⁱ⁻¹ < dim null Nⁱ for all i ≤ dim V by proposition
8.5. By corollary 8.8, dim null N^{dim V} = dim V, so this forces dim null Nⁱ = i for each i, and the claim clearly follows.
9. range Tᵐ = range Tᵐ⁺¹ implies dim null Tᵐ = dim null Tᵐ⁺¹, i.e. null Tᵐ = null Tᵐ⁺¹.
From proposition 8.5 we get dim null Tᵐ = dim null Tᵐ⁺ᵏ for all k ≥ 1. Again this
implies that dim range Tᵐ = dim range Tᵐ⁺ᵏ for all k ≥ 1, and since range Tᵐ⁺ᵏ ⊂ range Tᵐ,
the claim follows.
10. Let T be the operator defined in exercise 1. Clearly null T ∩ range T ≠ {0}, so the
claim is false.
11. We have dim V = dim null Tⁿ + dim range Tⁿ, so it's sufficient to prove that null Tⁿ ∩
range Tⁿ = {0}. Let v ∈ null Tⁿ ∩ range Tⁿ. Then we can find a u ∈ V such that
Tⁿu = v. From 0 = Tⁿv = T²ⁿu we see that u ∈ null T²ⁿ = null Tⁿ, which implies
that v = Tⁿu = 0.
12. From theorem 8.23 we have V = null T^{dim V}, so that T^{dim V} = 0. Then let T ∈ L(R³)
be the operator T(a, b, c) = (−b, a, 0). Then clearly 0 is the only eigenvalue, but T is
not nilpotent.
13. From null Tⁿ⁻² ≠ null Tⁿ⁻¹ and from proposition 8.5 we see that dim null Tⁿ ≥ n − 1.
Assume that T has three different eigenvalues 0, λ₁, λ₂. Then dim null(T − λᵢI)ⁿ ≥ 1,
so from theorem 8.23
n ≥ dim null Tⁿ + dim null(T − λ₁I)ⁿ + dim null(T − λ₂I)ⁿ ≥ n − 1 + 1 + 1 = n + 1,
which is impossible, so T has at most two different eigenvalues.
14. Let T ∈ L(C⁴) be defined by T(a, b, c, d) = (7a, 7b, 8c, 8d). From the matrix of T it's
easy to see that the characteristic polynomial is (z − 7)²(z − 8)².
15. Let d₁ be the multiplicity of the eigenvalue 5 and d₂ the multiplicity of the eigenvalue
6. Then d₁, d₂ ≥ 1 and d₁ + d₂ = n. It follows that d₁, d₂ ≤ n − 1, so that
(z − 5)^{d₁}(z − 6)^{d₂} divides (z − 5)ⁿ⁻¹(z − 6)ⁿ⁻¹. By the Cayley-Hamilton theorem
(T − 5I)^{d₁}(T − 6I)^{d₂} = 0, so that (T − 5I)ⁿ⁻¹(T − 6I)ⁿ⁻¹ = 0.
16. If every generalized eigenvector is an eigenvector, then by theorem 8.23 T has a basis
of eigenvectors. If there's a generalized eigenvector that is not an eigenvector, then
we have an eigenvalue λ such that dim null(T − λI)^{dim V} ≠ dim null(T − λI). Thus if
λ₁, …, λₘ are the eigenvalues of T, then Σᵢ dim null(T − λᵢI) < Σᵢ dim null(T − λᵢI)^{dim V} = dim V, so there
doesn't exist a basis of eigenvectors.
17. By lemma 8.26, choose a basis (v₁, …, vₙ) such that N has an upper-triangular
matrix. Apply Gram-Schmidt orthogonalization to the basis to get (e₁, …, eₙ). Then Ne₁ = 0, and
assuming that Neᵢ ∈ span(e₁, …, eᵢ), we have
Ne_{i+1} = N((v_{i+1} − ⟨v_{i+1}, e₁⟩e₁ − … − ⟨v_{i+1}, eᵢ⟩eᵢ)/‖v_{i+1} − ⟨v_{i+1}, e₁⟩e₁ − … − ⟨v_{i+1}, eᵢ⟩eᵢ‖) ∈ span(e₁, …, e_{i+1}).
Thus N has an upper-triangular matrix in the basis (e₁, …, eₙ).
18. Continuing as in the proof of lemma 8.30 up to j = 4 we see that a₄ = −5/128, so that
√(I + N) = I + (1/2)N − (1/8)N² + (1/16)N³ − (5/128)N⁴.
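A numpy sanity check on a 5×5 nilpotent Jordan block (so N⁵ = 0 and the truncated series is an exact square root):

    import numpy as np

    N = np.diag(np.ones(4), k=1)   # 5x5 nilpotent block, N^5 = 0
    N2, N3, N4 = N @ N, N @ N @ N, N @ N @ N @ N
    S = np.eye(5) + N/2 - N2/8 + N3/16 - 5*N4/128
    print(np.allclose(S @ S, np.eye(5) + N))  # True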
19. Just replace the Taylor polynomial in lemma 8.30 with the Taylor polynomial of
∛(1 + x) and copy the proof of the lemma and theorem 8.32.
20. Let p(x) = a₀ + a₁x + … + aₘxᵐ be the minimal polynomial of T. If a₀ = 0, then p(x) =
x(a₁ + a₂x + … + aₘxᵐ⁻¹). Since T is invertible we must have a₁I + a₂T + … + aₘTᵐ⁻¹ = 0, which contradicts
the minimality of p. Hence a₀ ≠ 0, so solving for I in p(T) = 0 we get
−a₀⁻¹(a₁ + a₂T + … + aₘTᵐ⁻¹)T = I.
Hence setting q(x) = −a₀⁻¹(a₁ + a₂x + … + aₘxᵐ⁻¹) we have q(T) = T⁻¹.
21. The operator defined by the matrix
( 0 0 1 )
( 0 0 0 )
( 0 0 0 )
is clearly an example.
22. Choose the matrix
A = ( 1 0 0 0 ; 0 1 1 0 ; 0 0 1 0 ; 0 0 0 0 ).
It's easy to see that A(A − I)² = 0. Thus, the minimal polynomial divides z(z − 1)².
However, no proper factor of z(z − 1)² annihilates A, so z(z − 1)² is
the minimal polynomial.
23. Let λ₁, …, λₘ be the distinct eigenvalues of T. Assume that V
has a basis of eigenvectors (v₁, …, vₙ). Let p(z) = (z − λ₁) ⋯ (z − λₘ). Then, if λₖ is the eigenvalue corresponding to vᵢ, we get
p(T)vᵢ = (∏_{j≠k}(T − λⱼI))(T − λₖI)vᵢ = 0,
so that the minimal polynomial divides p and hence has no double root.
Now assume that the minimal polynomial has no double roots. Let the minimal
polynomial be p and let (v₁, …, vₙ) be a Jordan basis of T. Let A be the largest
Jordan block of T, with eigenvalue λ. Now clearly p(A) = 0. However, by exercise 29, the minimal
polynomial of A − λI is z^{m+1}, where m is the length of the longest consecutive
string of 1's that appear just above the diagonal, so the minimal polynomial of A
is (z − λ)^{m+1}. Hence m = 0, so that A is a 1×1 matrix and the basis vector
corresponding to A is an eigenvector. Since A was the largest Jordan block, it follows
that all basis vectors are eigenvectors (each corresponding to a 1×1 block).
24. If V is a complex vector space, then V has a basis of eigenvectors (T is normal, so the spectral theorem applies). Now let λ₁, …, λₘ
be the distinct eigenvalues. Now clearly p(z) = (z − λ₁) ⋯ (z − λₘ) annihilates T, so the
minimal polynomial, being a factor of p, doesn't have a double root.
Now assume that V is a real vector space. By theorem 7.25 we can find a basis
such that T has a block diagonal matrix where each block is a 1×1 or 2×2 matrix.
Let λ₁, …, λₘ be the distinct eigenvalues corresponding to the 1×1 blocks. Then
p(z) = (z − λ₁) ⋯ (z − λₘ) annihilates all but the 2×2 blocks of the matrix. Now it's
sufficient to show that each 2×2 block is annihilated by a quadratic polynomial which doesn't
have real roots.
By theorem 7.25 we can choose the 2×2 blocks to be of the form
M = ( a −b )
    ( b  a )
where b > 0, and it's easy to see that
M² − 2aM + (a² + b²)I = 0.
Clearly the polynomial z² − 2az + (a² + b²) has a negative discriminant, so the claim follows.
25. Let q be the minimal polynomial of T. Write q = sp + r, where deg r < deg p, so that
0 = q(T)v = s(T)p(T)v + r(T)v = r(T)v. Assuming r ≠ 0, we can multiply both
sides by the inverse of the leading coefficient of r, yielding a monic polynomial r₂
of degree less than deg p such that r₂(T)v = 0, contradicting the minimality of p. Hence
r = 0 and p divides q.
26. It's easy to see that no proper factor of z(z − 1)²(z − 3) annihilates the matrix
A = ( 3 1 1 1 ; 0 1 1 0 ; 0 0 1 0 ; 0 0 0 0 ),
so its minimal polynomial is z(z − 1)²(z − 3), which by definition is also the
characteristic polynomial.
27. It's easy to see that no proper factor of z(z − 1)(z − 3) annihilates the matrix
A = ( 3 1 1 1 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 0 0 ),
but A(A − I)(A − 3I) = 0, so the minimal polynomial is z(z − 1)(z − 3), while the
characteristic polynomial is z(z − 1)²(z − 3). The claim follows.
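The claimed annihilation can be confirmed numerically with the matrix above (a numpy sketch):

    import numpy as np

    A = np.array([[3, 1, 1, 1],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0]])
    I = np.eye(4, dtype=int)
    print(not np.any(A @ (A - I) @ (A - 3*I)))  # True: z(z-1)(z-3) annihilates A
    print([bool(np.any(M)) for M in (A @ (A - I), A @ (A - 3*I), (A - I) @ (A - 3*I))])  # all True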
28. We see that T(eᵢ) = eᵢ₊₁ for i < n. Hence (e₁, Te₁, …, Tⁿ⁻¹e₁) = (e₁, …, eₙ) is
linearly independent. Thus for any non-zero polynomial p of degree less than n we
have p(T)e₁ ≠ 0. Hence the minimal polynomial has degree n, so the minimal
polynomial equals the characteristic polynomial.
Now from the matrix of T we see that Tⁿe₁ = Teₙ = −a₀e₁ − a₁Te₁ − … − a_{n−1}Tⁿ⁻¹e₁. Set
p(z) = a₀ + a₁z + … + a_{n−1}zⁿ⁻¹ + zⁿ. Now p(T)e₁ = 0 and no non-zero polynomial of degree
less than n annihilates e₁, so by exercise 25 p divides
the minimal polynomial. However, the minimal polynomial is monic and has degree
n, so p is the minimal polynomial.
29. The biggest Jordan block of N is of the form
( 0 1       )
(   0 1     )
(     ⋱ ⋱  )
(       0 1 )
(         0 )
of size (m + 1) × (m + 1). Now clearly N^{m+1} = 0, so the minimal polynomial divides z^{m+1}. Let vᵢ, …, v_{i+m} be
the basis elements corresponding to the biggest Jordan block. Then Nᵐv_{i+m} = vᵢ ≠
0, so the minimal polynomial is z^{m+1}.
30. Assume that V can't be decomposed into two proper invariant subspaces. Then T has only
one eigenvalue λ (otherwise the generalized eigenspaces would give such a decomposition). If there were more than one Jordan block, we could let (v₁, …, vₘ) be
the vectors corresponding to all but the last Jordan block and (v_{m+1}, …, vₙ) the
vectors corresponding to the last Jordan block; then clearly V = span(v₁, …, vₘ) ⊕
span(v_{m+1}, …, vₙ) would be such a decomposition. Thus, we have only one Jordan block, so T − λI is nilpotent
and has minimal polynomial z^{dim V} by the previous exercise. Hence T has minimal
polynomial (z − λ)^{dim V}.
Now assume that T has minimal polynomial (z − λ)^{dim V}. If V = U ⊕ W with U, W proper invariant subspaces, let p₁ be the
minimal polynomial of T|U and p₂ the minimal polynomial of T|W. Now (p₁p₂)(T) =
0, so that (z − λ)^{dim V} divides p₁p₂. Hence deg p₁p₂ = deg p₁ + deg p₂ ≥ dim V, but
deg p₁ + deg p₂ ≤ dim U + dim W = dim V. Thus deg p₁p₂ = dim V. This means that
p₁(z) = (z − λ)^{dim U} and p₂(z) = (z − λ)^{dim W}. Now if n = max{dim U, dim W}, we
have (T|U − λI)ⁿ = (T|W − λI)ⁿ = 0. This means that (T − λI)ⁿ = 0 with n < dim V, contradicting
the fact that (z − λ)^{dim V} is the minimal polynomial.
31. Reversing the Jordan-basis simply reverses the order of the Jordan-blocks and each
block needs to be replaced by its transpose.
9 Operators on Real Vector Spaces
1. Let a be the value in the upper-left corner. Then the matrix must have the form
( a     1 − a )
( 1 − a     a )
and it's easy to see that (1, 1) is an eigenvector corresponding to the eigenvalue 1.
2. The characteristic polynomial of the matrix is p(x) = (x − a)(x − d) − bc. Now the
discriminant equals (a + d)² − 4(ad − bc) = (a − d)² + 4bc. Hence the characteristic
polynomial has a root if and only if (a − d)² + 4bc ≥ 0. If p(x) = (x − λ₁)(x − λ₂),
then p(A) = 0 implies that either A − λ₁I or A − λ₂I is not invertible, hence A has
an eigenvalue. If p doesn't have a root, then assuming that Av = λv we get
0 = p(A)v = p(λ)v,
and since p(λ) ≠ 0 this forces v = 0. Hence A has no eigenvalues. The claim follows.
3. See the proof of the next problem, though in this special case the proof is trivial.
4. First let λ be an eigenvalue of A. Assume that λ is not an eigenvalue of A₁, …, Aₘ,
so that Aᵢ − λI is invertible for each i. Now let B be the block diagonal matrix
( (A₁ − λI)⁻¹              )
(            ⋱            )
(            (Aₘ − λI)⁻¹ )
with zeroes below the diagonal blocks. It's now easy to verify that B(A − λI) is a block upper-triangular matrix with
identity blocks on the diagonal, hence it's invertible. However, this is impossible, since A − λI is
not. Thus, λ is an eigenvalue of some Aᵢ.
Now let λ be an eigenvalue of Aᵢ. Let T be the operator corresponding to A − λI
in L(Fⁿ). Let Aᵢ correspond to the columns j, …, j + k and let (e₁, …, eₙ) be the
standard basis. Let (a₁, …, a_{k+1}) be an eigenvector of Aᵢ corresponding to λ. Set v =
a₁eⱼ + … + a_{k+1}e_{j+k}. Then it's easy to see that T maps the space span(e₁, …, e_{j−1}, v)
into the space span(e₁, …, e_{j−1}). Hence T is not injective, so that λ is an eigenvalue
of A.
5. The proof is identical to the argument used in the solution of exercise 2.
6. Let $(v_1, \dots, v_n)$ be the basis with respect to which $T$ has the given matrix. Applying Gram-Schmidt to this basis yields an orthonormal basis $(e_1, \dots, e_n)$ with $\operatorname{span}(e_1, \dots, e_j) = \operatorname{span}(v_1, \dots, v_j)$ for each $j$, so the matrix of $T$ with respect to it trivially has the same form.
7. Choose a basis $(v_1, \dots, v_n)$ such that $T$ has a matrix of the form in theorem 9.10. Now if $v_j$ corresponds to the second vector in a pair corresponding to a $2 \times 2$ matrix, or corresponds to a $1 \times 1$ matrix, then $\operatorname{span}(v_1, \dots, v_j)$ is an invariant subspace. The only remaining possibility is that $v_j$ corresponds to the first vector in a pair corresponding to a $2 \times 2$ matrix, in which case $\operatorname{span}(v_1, \dots, v_{j+1})$ is an invariant subspace.
8. Assuming that such an operator existed, we would have a basis $(v_1, \dots, v_7)$ with respect to which the matrix of $T$ has the form of theorem 9.10, with the eigenpair $(1, 1)$ appearing
$$\frac{\dim \operatorname{null}(T^2 + T + I)^{\dim V}}{2} = \frac{7}{2}$$
times on the diagonal. This would contradict theorem 9.9, as $7/2$ is not a whole number. It follows that such an operator doesn't exist.
9. The equation $x^2 + x + 1 = 0$ has a solution in $\mathbf{C}$. Let $\lambda$ be a solution. Then the operator corresponding to the matrix $\lambda I$ clearly is an example.
10. Let $T$ be the operator in $\mathcal{L}(\mathbf{R}^{2k})$ corresponding to the block diagonal matrix where each $2 \times 2$ block has characteristic polynomial $x^2 + \alpha x + \beta$. Then by theorem 9.9 we have
$$k = \frac{\dim \operatorname{null}(T^2 + \alpha T + \beta I)^{2k}}{2},$$
so that $\dim \operatorname{null}(T^2 + \alpha T + \beta I)^{2k} = 2k$. However, the minimal polynomial of $T$ divides $(x^2 + \alpha x + \beta)^{2k}$ and has degree at most $2k$, so that $(T^2 + \alpha T + \beta I)^k = 0$. It follows that $\dim \operatorname{null}(T^2 + \alpha T + \beta I)^{2k} = \dim \operatorname{null}(T^2 + \alpha T + \beta I)^k$ and the claim follows.
11. We see that $(\alpha, \beta)$ is an eigenpair, and from the nilpotency of $T^2 + \alpha T + \beta I$ and theorem 9.9 its multiplicity is
$$\frac{\dim \operatorname{null}(T^2 + \alpha T + \beta I)^{\dim V}}{2} = \frac{\dim V}{2}.$$
Since the multiplicity is an integer, it follows that $\dim V$ is even. For the second part, we know from the Cayley-Hamilton theorem that the minimal polynomial $p(x)$ of $T$ has degree at most $\dim V$. Thus we have $p(x) = (x^2 + \alpha x + \beta)^k$, and from $\deg p \leq \dim V$ we get $k \leq (\dim V)/2$. Hence
$$(T^2 + \alpha T + \beta I)^{(\dim V)/2} = 0.$$
12. By theorem 9.9 we can choose a basis such that the matrix of $T$ has the form of 9.10. Now the matrices $[5]$ and $[7]$ each appear at least once on the diagonal. If $T$ had an eigenpair, we would also have at least one $2 \times 2$ matrix on the diagonal. This is impossible, as the diagonal only has length 3.
13. By proposition 8.5, $\dim \operatorname{null} T^{n-1} \geq n - 1$. As in the previous exercise, we know that the matrix $[0]$ appears at least $n - 1$ times on the diagonal, and it's impossible to fit a $2 \times 2$ matrix in the only place left.
14. We see that $A$ satisfies the polynomial $p(z) = (z - a)(z - d) - bc$ no matter whether $\mathbf{F}$ is $\mathbf{R}$ or $\mathbf{C}$. If $\mathbf{F} = \mathbf{C}$, then because $p$ is monic, has degree 2 and annihilates $A$, it follows that $p$ is the characteristic polynomial.

Assume then that $\mathbf{F} = \mathbf{R}$, and let $T$ be the operator corresponding to $A$. Now if $A$ has no eigenvalues, then $p$ is the characteristic polynomial by definition. Assume then that $p(z) = (z - \lambda_1)(z - \lambda_2)$, which implies that $p(T) = (T - \lambda_1 I)(T - \lambda_2 I) = 0$. If neither $T - \lambda_1 I$ nor $T - \lambda_2 I$ is injective, then both $\lambda_1$ and $\lambda_2$ are eigenvalues, so by definition $p$ is the characteristic polynomial. Assume then that $T - \lambda_2 I$ is invertible. Then $\dim \operatorname{null}(T - \lambda_1 I) = 2$, so that $T - \lambda_1 I = 0$. Hence we get $c = b = 0$ and $a = d = \lambda_1$. Hence $p(z) = (z - \lambda_1)^2$, so that $p$ is the characteristic polynomial by definition.
15. $S$ is normal, so there's an orthonormal basis of $V$ with respect to which $S$ has a block diagonal matrix, where each block is a $1 \times 1$ or $2 \times 2$ matrix and the $2 \times 2$ blocks have no eigenvalue (theorem 7.25). From $S^*S = I$ we see that each block $A_i$ satisfies $A_i^*A_i = I$, so that the blocks are isometries. Hence a $2 \times 2$ block is of the form
$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
We see that $S^2 + \alpha S + \beta I$ is not injective if and only if $A_i^2 + \alpha A_i + \beta I$ is not injective for some $2 \times 2$ matrix $A_i$. Hence $x^2 + \alpha x + \beta$ must equal the characteristic polynomial of some $A_i$, so by the previous exercise it's of the form
$$(x - \cos\theta)^2 + \sin^2\theta = x^2 - 2x\cos\theta + \cos^2\theta + \sin^2\theta,$$
so that $\beta = \cos^2\theta + \sin^2\theta = 1$.
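For example (mine): with $\theta = \pi/2$ the block is the quarter-turn rotation, whose characteristic polynomial has $\alpha = 0$ and $\beta = 1$:
$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad x^2 + 0 \cdot x + 1, \qquad \beta = 1.$$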
10 Trace and Determinant
1. The map $T \mapsto \mathcal{M}(T, (v_1, \dots, v_n))$ is bijective and satisfies $\mathcal{M}(ST, (v_1, \dots, v_n)) = \mathcal{M}(S, (v_1, \dots, v_n))\,\mathcal{M}(T, (v_1, \dots, v_n))$. Hence
$$ST = I \iff \mathcal{M}(S, (v_1, \dots, v_n))\,\mathcal{M}(T, (v_1, \dots, v_n)) = I.$$
The claim now follows trivially.
2. Both matrices represent an operator in $\mathcal{L}(\mathbf{F}^n)$. The claim now follows from exercise 3.23.
3. Choose a basis $(v_1, \dots, v_n)$ and let $A = (a_{ij})$ be the matrix of $T$ corresponding to the basis. Then $Tv_1 = a_{11}v_1 + \dots + a_{n1}v_n$. Now $(v_1, 2v_2, \dots, 2v_n)$ is also a basis, and by our assumption the matrix of $T$ with respect to it is also $A$, so $Tv_1 = a_{11}v_1 + 2(a_{21}v_2 + \dots + a_{n1}v_n)$. We thus get
$$a_{21}v_2 + \dots + a_{n1}v_n = 2(a_{21}v_2 + \dots + a_{n1}v_n),$$
which implies $a_{21}v_2 + \dots + a_{n1}v_n = 0$. By linear independence we get $a_{21} = \dots = a_{n1} = 0$, so that $Tv_1 = a_{11}v_1$. The claim clearly follows.
4. Follows directly from the definition of $\mathcal{M}(T, (u_1, \dots, u_n), (v_1, \dots, v_n))$.
5. Let $(e_1, \dots, e_n)$ be the standard basis of $\mathbf{C}^n$. Then we can find an operator $T \in \mathcal{L}(\mathbf{C}^n)$ such that $\mathcal{M}(T, (e_1, \dots, e_n)) = B$. Now $T$ has an upper-triangular matrix with respect to some basis $(v_1, \dots, v_n)$ of $\mathbf{C}^n$. Let $A = \mathcal{M}((v_1, \dots, v_n), (e_1, \dots, e_n))$. Then
$$A^{-1}BA = \mathcal{M}(T, (v_1, \dots, v_n)),$$
which is upper-triangular. Clearly $A$ is an invertible square matrix.
6. Let $T$ be the operator corresponding to the matrix
$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}.$$
By theorem 10.11 we have $\operatorname{trace}(T^2) = -2 < 0$.
7. Let $(v_1, \dots, v_n)$ be a basis of eigenvectors of $T$ and $\lambda_1, \dots, \lambda_n$ the corresponding eigenvalues. Then $T$ has the matrix
$$\begin{pmatrix}
\lambda_1 & & 0 \\
 & \ddots & \\
0 & & \lambda_n
\end{pmatrix}.$$
Clearly $\operatorname{trace}(T^2) = \lambda_1^2 + \dots + \lambda_n^2 \geq 0$ by theorem 10.11.
8. Extend $v/\|v\|$ to an orthonormal basis $(v/\|v\|, e_1, \dots, e_n)$ of $V$ and let $A = \mathcal{M}(T, (v/\|v\|, e_1, \dots, e_n))$. Let $a, a_1, \dots, a_n$ denote the diagonal elements of $A$. Then we have
$$a_i = \langle Te_i, e_i \rangle = \langle \langle e_i, v \rangle w, e_i \rangle = \langle 0, e_i \rangle = 0,$$
so that
$$\operatorname{trace}(T) = a = \langle T(v/\|v\|), v/\|v\| \rangle = \langle \langle v/\|v\|, v \rangle w, v/\|v\| \rangle = \langle w, v \rangle.$$
9. From exercise 5.21 we have $V = \operatorname{null} P \oplus \operatorname{range} P$. Now let $v \in \operatorname{range} P$. Then $v = Pu$ for some $u \in V$. Hence $Pv = P^2u = Pu = v$, so that $P$ is the identity on $\operatorname{range} P$. Now choose a basis $(v_1, \dots, v_m)$ for $\operatorname{range} P$ and extend it with a basis $(u_1, \dots, u_n)$ of $\operatorname{null} P$ to get a basis for $V$. Then clearly the matrix of $P$ in this basis is diagonal with 1s in the positions corresponding to the vectors $(v_1, \dots, v_m)$ and 0s on the rest of the diagonal. Hence $\operatorname{trace} P = \dim \operatorname{range} P \geq 0$.
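A minimal instance (example mine): the projection of $\mathbf{F}^2$ onto its first coordinate,
$$P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \operatorname{trace} P = 1 = \dim \operatorname{range} P.$$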
10. Let $(v_1, \dots, v_n)$ be an orthonormal basis of $V$ and let $A = \mathcal{M}(T, (v_1, \dots, v_n))$. Let $a_1, \dots, a_n$ be the diagonal elements of $A$. By proposition 6.47, $T^*$ has the matrix $A^*$, so that the diagonal elements of $A^*$ are $\overline{a_1}, \dots, \overline{a_n}$. By theorem 10.11 we have
$$\operatorname{trace}(T^*) = \overline{a_1} + \dots + \overline{a_n} = \overline{a_1 + \dots + a_n} = \overline{\operatorname{trace}(T)}.$$
11. A positive operator is self-adjoint, so by the spectral theorem we can find a basis $(v_1, \dots, v_n)$ of eigenvectors of $T$. Then $A = \mathcal{M}(T, (v_1, \dots, v_n))$ is a diagonal matrix, and by positivity of $T$ all the diagonal elements $a_1, \dots, a_n$ are non-negative. Hence $a_1 + \dots + a_n = 0$ implies $a_1 = \dots = a_n = 0$, so that $A = 0$ and hence $T = 0$.
12. The trace of T is the sum of the eigenvalues. Hence λ − 48 + 24 = 51 − 40 + 1, so
that λ = 36.
13. Choose a basis $(v_1, \dots, v_n)$ of $V$ and let $A = \mathcal{M}(T, (v_1, \dots, v_n))$. Then the matrix of $cT$ is $cA$. Let $a_1, \dots, a_n$ be the diagonal elements of $A$. Then we have
$$\operatorname{trace}(cT) = ca_1 + \dots + ca_n = c(a_1 + \dots + a_n) = c\operatorname{trace}(T).$$
14. The example in exercise 6 shows that this is false. For another example take S =
T = I.
15. Choose a basis $(v_1, \dots, v_n)$ for $V$ and let $A = \mathcal{M}(T, (v_1, \dots, v_n))$. It's sufficient to prove that $A = 0$. Let $a_{i,j}$ be the element in row $i$, column $j$ of $A$. Let $B$ be the matrix with 1 in row $j$, column $i$ and 0 elsewhere. Then it's easy to see that the only non-zero diagonal element of $BA$ is $a_{i,j}$, so $\operatorname{trace}(BA) = a_{i,j}$. Let $S$ be the operator corresponding to $B$. It follows that $a_{i,j} = \operatorname{trace}(ST) = 0$, so that $A = 0$.
16. Let $T^*Te_i = a_1e_1 + \dots + a_ne_n$. Then we have $a_i = \langle T^*Te_i, e_i \rangle = \|Te_i\|^2$ by the orthonormality of $(e_1, \dots, e_n)$. Clearly $a_i$ is the $i$th diagonal element of the matrix $\mathcal{M}(T^*T, (e_1, \dots, e_n))$, so that
$$\operatorname{trace}(T^*T) = \|Te_1\|^2 + \dots + \|Te_n\|^2.$$
The second assertion follows immediately.
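As a numerical check (example mine), take $T$ on $\mathbf{R}^2$ with matrix $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ in the standard basis:
$$\|Te_1\|^2 + \|Te_2\|^2 = (1 + 9) + (4 + 16) = 30 = \operatorname{trace}\!\left(\begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\right).$$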
17. Choose an orthonormal basis $(v_1, \dots, v_n)$ such that $T$ has an upper-triangular matrix $B = (b_{ij})$ with the eigenvalues $\lambda_1, \dots, \lambda_n$ on the diagonal. Now the $i$th element on the diagonal of $B^*B$ is $\sum_{j=1}^n \overline{b_{ji}}b_{ji} = \sum_{j=1}^n |b_{ji}|^2$, where $|b_{ii}|^2 = |\lambda_i|^2$. Hence
$$\sum_{i=1}^n |\lambda_i|^2 \leq \sum_{i=1}^n \sum_{j=1}^n |b_{ji}|^2 = \operatorname{trace}(B^*B) = \operatorname{trace}(A^*A) = \sum_{k=1}^n \sum_{j=1}^n |a_{jk}|^2.$$
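For instance (example mine), the matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ has both eigenvalues equal to 1, and indeed
$$|\lambda_1|^2 + |\lambda_2|^2 = 2 \leq 1 + 1 + 0 + 1 = 3 = \sum_{j,k} |a_{jk}|^2,$$
with strict inequality here.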
18. Positivity and definiteness follow from exercise 16 and additivity in the first slot from corollary 10.12. Homogeneity in the first slot is exercise 13, and by exercise 10
$$\langle S, T \rangle = \operatorname{trace}(ST^*) = \overline{\operatorname{trace}((ST^*)^*)} = \overline{\operatorname{trace}(TS^*)} = \overline{\langle T, S \rangle}.$$
It follows that the formula defines an inner product.
19. We have $0 \leq \langle Tv, Tv \rangle - \langle T^*v, T^*v \rangle = \langle (T^*T - TT^*)v, v \rangle$. Hence $T^*T - TT^*$ is a positive operator (it's clearly self-adjoint), but $\operatorname{trace}(T^*T - TT^*) = 0$, so all its eigenvalues are 0. Hence $T^*T - TT^* = 0$ by the spectral theorem, so $T$ is normal.
20. Let $\dim V = n$ and write $A = \mathcal{M}(T)$. Then we have
$$\det cT = \det cA = \sum_\sigma (\operatorname{sign}\sigma)(ca_{\sigma(1),1}) \cdots (ca_{\sigma(n),n}) = c^n \det A = c^n \det T.$$
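Concretely (example mine), for $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and $c = 3$:
$$\det(3A) = \det\begin{pmatrix} 3 & 6 \\ 9 & 12 \end{pmatrix} = 36 - 54 = -18 = 3^2(4 - 6) = 3^2 \det A.$$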
21. Let $S = I$ and $T = -I$ on $\mathbf{F}^2$. Then we have $\det(S + T) = \det 0 = 0$, but $\det S + \det T = 1 + 1 = 2$, so the statement is false.
22. We can use the result of the next exercise, which means that we only need to prove this for the complex case. Let $T \in \mathcal{L}(\mathbf{C}^n)$ be the operator corresponding to the matrix $A$. Now each block $A_j$ on the diagonal corresponds to some basis vectors $(e_i, \dots, e_{i+k})$. These can be replaced with another set of vectors, $(v_i, \dots, v_{i+k})$, spanning the same subspace, such that the matrix of the block in this new basis is upper-triangular. We have $T(\operatorname{span}(v_i, \dots, v_{i+k})) \subset \operatorname{span}(v_i, \dots, v_{i+k})$, so after doing this for all blocks we can assume that $A$ is upper-triangular. The claim follows immediately.
23. This follows trivially from the formulas for the trace and determinant of a matrix, because the formulas only depend on the entries of the matrix.
24. Let $\dim V = n$ and let $A = \mathcal{M}(T)$, so that $B = \mathcal{M}(T^*)$ equals the conjugate transpose of $A$, i.e. $b_{ij} = \overline{a_{ji}}$. Then we have
$$\det T = \det A = \sum_\sigma (\operatorname{sign}\sigma)\, a_{\sigma(1),1} \cdots a_{\sigma(n),n} = \sum_\sigma (\operatorname{sign}\sigma)\, a_{1,\sigma(1)} \cdots a_{n,\sigma(n)}$$
$$= \overline{\sum_\sigma (\operatorname{sign}\sigma)\, \overline{a_{1,\sigma(1)}} \cdots \overline{a_{n,\sigma(n)}}} = \overline{\sum_\sigma (\operatorname{sign}\sigma)\, b_{\sigma(1),1} \cdots b_{\sigma(n),n}} = \overline{\det B} = \overline{\det T^*}.$$
The third equality follows easily, since every term in the one sum is represented by a term in the other and vice versa (reindex by $\sigma^{-1}$, noting $\operatorname{sign}\sigma = \operatorname{sign}\sigma^{-1}$). The second claim follows immediately from theorem 10.31.
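A one-line check (example mine): for $A = \begin{pmatrix} i & 0 \\ 0 & 2 \end{pmatrix}$ we have
$$\det A = 2i, \qquad \det A^* = \det\begin{pmatrix} -i & 0 \\ 0 & 2 \end{pmatrix} = -2i = \overline{\det A}.$$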
25. Let $\Omega = \{(x, y, z) \in \mathbf{R}^3 \mid x^2 + y^2 + z^2 < 1\}$ and let $T$ be the operator $T(x, y, z) = (ax, by, cz)$. It's easy to see that $T(\Omega)$ is the ellipsoid. Now $\Omega$ is a ball of radius 1. Hence
$$|\det T|\operatorname{volume}(\Omega) = abc \cdot \frac{4}{3}\pi = \frac{4}{3}\pi abc,$$
which is the volume of the ellipsoid.
35

6. Set U = Z2 . 7. The set {(x, x) ∈ R2 | x ∈ R} ∪ {(x, −x) ∈ R2 | x ∈ R} is closed under multiplication but is trivially not a subspace ((x, x) + (x, −x) = (2x, 0) doesn’t belong to it unless x = 0). 8. Let {Vi } be a collection of subspaces of V . Set U = ∩i Vi . Then if u, v ∈ U . We have that u, v ∈ Vi for all i because Vi is a subspace. Thus au ∈ Vi for all i, so that av ∈ U . Similarly u + v ∈ Vi for all i, so u + v ∈ U . 9. Let U, W ⊂ V be subspaces. Clearly if U ⊂ W or W ⊂ U , then U ∪ W is clearly a subspace. Assume then that U ⊂ W and W ⊂ U . Then we can choose u ∈ U \ W and w ∈ W \ U . Assuming that U ∪ W is a subspace we have u + w ∈ U ∪ W . Assuming that u + w ⊂ U we get w = u + w − u ⊂ U . Contradiction. Similarly for u + w ∈ W . Thus U ∪ W is not a subspace. 10. Clearly U = U + U as U is closed under addition. 11. Yes and yes. Follows directly from commutativity and associativity of vector addition. 12. The zero subspace, {0}, is clearly an additive identity. Assuming that we have inverses, then the whole space V should have an inverse U such that U + V = {0}. Since V + U = V this is clearly impossible unless V is the trivial vector space. 13. Let W = R2 . Then for any two subspaces U1 , U2 of W we have U1 + W = U2 + W , so the statement is clearly false in general. 14. Let W = {p ∈ P(F) | p =
n i i=0 ai x ,

a2 = a5 = 0}.

15. Let V = R2 . Let W = {(x, 0) ∈ R2 | x ∈ R}. Set U1 = {(x, x) ∈ R2 | x ∈ R}, U2 = {(x, −x) ∈ R2 | x ∈ R}. Then it’s easy to see that U1 + W = U2 + W = R2 , but U1 = U2 , so the statement is false.

2

Finite-Dimensional Vector Spaces
1. Let un = vn and ui = vi + vi+1 , i = 1, . . . , n − 1. Now we see that vi = vi ∈ span(u1 , . . . , un ), so V = span(v1 , . . . , vn ) ⊂ span(u1 , . . . , un ).
n j=i ui .

Thus

2. From the previous exercise we know that the span is V . As (v1 , . . . , vn ) is a linearly independent spanning list of vectors we know that dim V = n. The claim now follows from proposition 2.16.

2

3. If (v1 + w, . . . , vn + w) is linearly dependent, then we can write
n n

0 = a1 (v1 + w) + . . . + an (vn + w) =
i=1

ai vi + w
i=1

ai ,

where ai = 0 for some i. Now
n

n i=1 ai

= 0 because otherwise we would get
n n

0=
i=1

ai vi + w
i=1

ai =
i=1

ai vi

and by the linear independence of (vi ) we would get ai = 0, ∀i contradicting our assumption. Thus
n −1 n

w=−
i=1

ai
i=1

ai vi ,

so w ∈ span(v1 , . . . , vn ). 4. Yes, multiplying with a nonzero constant doesn’t change the degree of a polynomial and adding two polynomials either keeps the degree constant or makes it the zero polynomial. 5. (1, 0, . . .), (0, 1, 0, . . .), (0, 0, 1, 0, . . .), . . . is trivially linearly independent. Thus F∞ isn’t finite-dimensional. 6. Clearly P(F) is a subspace which is infinite-dimensional. 7. Choose v1 ∈ V . Assuming that V is infinite-dimensional span(v1 ) = V . Thus we can choose v2 ∈ V \ span(v1 ). Now continue inductively. 8. (3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1) is clearly linearly independent. Let (x1 , x2 , x3 , x4 , x5 ) ∈ U . Then (x1 , x2 , x3 , x4 , x5 ) = (3x2 , x2 , 7x4 , x4 , x5 ) = x2 (3, 1, 0, 0, 0)+x4 (0, 0, 7, 1, 0)+ x5 (0, 0, 0, 0, 1), so the vectors span U . Hence, they form a basis. 9. Choose the polynomials (1, x, x2 − x3 , x3 ). They clearly span P3 (F) and form a basis by proposition 2.16. 10. Choose a basis (v1 , . . . , vn ). Let Ui = span(vi ). Now clearly V = U1 ⊕ . . . ⊕ Un by proposition 2.19. 11. Let dim U = n = dim V . Choose a basis (u1 , . . . , un ) for U and extend it to a basis (u1 , . . . , un , v1 , . . . , vk ) for V . By assumption a basis for V has length n, so (u1 , . . . , un ) spans V .

3

. . . Choose some vectors w1 . . . i = 1. . .g. f (x. U3 = {(x. n. . 0) ∈ R2 | x ∈ R}. . . W ). These relations define the linear map and clearly T|U = S. . . 2. but is non-linear. . um ) has n nm 1 1 length dim U1 + . Then the list of vectors (u1 . . Choose a basis (u1 . . um . . . Thus we have a counterexample. . x = y then clearly f satisfies the condition. ui i ) for each Ui . W ) by T (ui ) = S(ui ). Now T (v) = av for some a ∈ F. 0. If u ∈ null T . pm ) is a spanning list of vectors having length m + 1 it is not linearly independent. U1 ∩ U3 = {0} and U2 ∩ U3 = {0}. x) ∈ R2 | x ∈ R}. U2 = {(0. .4 dim V = dim null T + dim range T . we have w = av for some a ∈ F. um ) from the proof of the previous exercise nm 1 is linearly independent. x) ∈ R2 | x ∈ R}. .12. so for w = bv ∈ V we have T (w) = T (bv) = bT (v) = bav = aw. . um ) the claim follows. i = 1. . Thus T is multiplication by a scalar. um ). By theorem 3. Since V = U1 +. v1 . By theorem 2. . + Um proving the claim. . . Let U = span(u0 . Thus the left-hand side of the equation in the problem is 2 while the right-hand side is 3. m and T (vi ) = wi . Let U1 = {(x. . . .18 we have 8 = dim R8 = dim U + dim W − dim(U ∩ W ) = 4 + 4 − dim(U ∩ W ). Now define T ∈ L(V. 13. 17. 4.18 that 9 = dim R9 = dim U + dim W − dim(U ∩ W ) = 5 + 5 − 0 = 10. . . . To define a linear map it’s enough to define the image of the elements of a basis. Contradiction. we have that dim(U ∩ W ) = 0 and the claim follows 14. As (p0 . um ) for U and extend it to a basis (u1 . . . Then U1 ∩ U2 = {0}. wn ∈ W . x = y . . Define f by e. . if w ∈ V . 15. As U = Pm (F) we have by the previous exercise that dim U < dim Pm (F) = m + 1. .+Un = span(u1 . . . . . Any v = 0 spans V . but as dim range T ≤ dim F = 1 we have dim range T = 1 and it follows that 4 . . nm 1 3 Linear Maps 1. vn ) of V . . Thus. . If S ∈ L(U. 3. . then dim range T > 0. . . . 16. + dim Um and clearly spans U1 + . y) = x. . Since U + W = R8 . . Assuming that U ∩ W = {0} we have by theorem 2. . . . Choose a basis (ui . By assumption the list of vectors (u1 . . .

. then by injectivity of Sk+1 we Sk+1 u = 0 and by injectivity of T we have T Sk+1 u = 0. + an un + b1 vn + . i = 1. Writing v = a1 v1 + . . By surjectivity of T we can find a vector v ∈ V such that T (v) = w. . u) is linearly independent and has length dim V . 0. un . . . . Hence. vk ) of V . 10. . . un ) be a basis for null T and extend it to a basis (u1 . This follows trivially from dim V = dim null T + dim range T . 1) is a basis for null T (see exercise 2. . Let (u1 . Choose a basis (v1 . . . . W ) by letting T (vi ) = wi . . . If S1 . 5. vk ). 0. + bk T (vk ). . . It’s easy to see that (5. . then S1 · · · Sk is injective by the induction hypothesis. Hence. (0. un ) for null T . . 7. Sk+1 satisfy the assumptions. Let U := span(v1 . . We can define a linear map T ∈ L(V. 1. . . 1. n. . . . From dim V = dim null T + dim range T it follows trivially that if we have a surjective linear map T ∈ L(V. 1) is a basis of null T . Hence range T = T (U ). .8). By linearity 0 = n ai T (vi ) = n T (ai vi ) = T ( n ai vi ). . 0). . . 0. . Let w ∈ W . . . . T (vi ) = 0. v1 . . . . Hence (T (v1 ). it’s a basis for V . . 12. 1. . . Thus range T = F2 . . 6. . T (vn )) is linearly i=1 independent. (0. + bk vk . . . . Hence T Sk+1 is injective and the claim follows. By injectivity we i=1 i=1 i=1 have n ai vi = 0. . 0. . . . If u = 0. 0). i = m + 1. . m. Now choose a basis (u1 . . . Then by construction null T ∩ U = {0} and for an arbitrary v ∈ V we have that v = a1 u1 + . + an vn ) = a1 T (v1 ) + . we get dim range T = dim F5 − dim null T = 5 − 2 = 3 which is impossible. so that T is surjective. un . un ) ⊕ span(u) = null T ⊕ {au | a ∈ F}. . This implies that V = span(u1 . W ). 9. . It’s again easy to see that (3. . . + an T (vn ) proving the claim. u) = span(u1 . 7. . . . Let T = S1 · · · Sk . . . Clearly T is surjective. . wm ) for W . . vn ) for V and a basis (w1 . Assume then that dim V ≥ dim W . . Hence dim range T = dim F4 − dim null T = 4 − 2 = 2. so that ai = 0 for all i.dim null T = dim V −1. If n = 1 the claim is trivially true. 11. . . . + an vn we get w = T (v) = T (a1 v1 + . . 0. By our assumption (u1 . so that T (v) = b1 T (v1 ) + . then dim V ≥ dim W . un . Assume that the claim is true for n = k. . 1. . 5 . 8.

13. This is nothing but pencil pushing. 18. . n and T (vi ) = wi . v1 . V ) by S(T (wi )) = wi . We have that T v = 0 if and only if v = 0v1 + . Clearly if we can find such a S.  . .  =  . Let (u1 . Clearly dim range ST ≤ dim range S. M(T v) = M(T )M(v) =  . Thus we get dim null T = dim V −dim range T ≥ dim V − dim W .n xn ). From proposition 3. . . 16. . . Substituting dim V = dim null S + dim range S and dim U = dim null ST + dim range ST we get dim null ST + dim range ST ≤ dim null T + dim null S + dim range S. Clearly null T = U .n xn am. . . Otherwise let (v1 . . Define S ∈ L(W. .n xn . . . . . un . . . . . . .n xn which shows that T v = (a1. am. . 1. Clearly dim Mat(n.n xn  . . . . V ) by S(T (vi )) = vi . . . .n ). Let V = (x1 .1 x1 + . Just take arbitrary matrices satisfying the required dimensions and calculate each expression and the equalities easily fall out. i = 1. Let (v1 .. We have that dim U = dim null T + dim range T ≤ dim null T + dim V . . . W ) by T (ui ) = 0. 14. . . . . so our claim follows. . . 19. . so null T = {0}. . T (vn )) is linearly independent so we can extend it to a basis (T (v1 ).1 · · · am. Assume then that T is injective. . . Clearly if we can find such a S. . . . . . . . . . . .  . . i = 1. Ditto. then T is injective. . 15. Clearly ST = IV . i = 1. . . . . F) = n = dim V . Define a linear map T ∈ L(V. Now clearly T S = IW . n and S(wi ) = 0. Without loss of generality we can assume that the first m vectors form the basis (just permute the indices). wm ) be a basis for W .   .1 x1 + . . w1 .n x1 a1. . . + am. . . . . .14 we have that      a1. Thus T is injective and hence invertible. By exercise 5 (T (v1 ). By assumption (T (w1 ). .1 · · · a1. . We have that dim range T ≤ dim W . . . . i = 1. T (wn )) spans W . Define the map S ∈ L(W. . . 20. . m. Now we know that k ≤ dim W . . 17. k. . . . . T (vn ).  . . un ) be a basis of U and extend it to a basis (u1 . vn ) be a basis of V .1 x1 + . + a1. . T (wm )) is a basis of W . . T (wn )) a basis by removing some vectors. . Thus T (w1 ). . . .1 x1 + . . then T is surjective. . Choose an arbitrary subspace U of V of such that dim U ≥ dim V − dim W . 6 . am. vk ) of V . . so let (w1 . . . wm ) of W . 0vn = 0. m. . . vn ) be a basis of V . + am. i = 1. .  . + a1. By the linear dependence lemma we can make the list of vectors (T (w1 ). .

. 25. . Now it’s trivial to verify that Aei = T (ei ). . b) = (0.. . Let λ1 . 4 Polynomials 1. However. . . Let T ∈ L(F2 ) be the operator T (a. Let T be the linear map. 24. 26. km > 0 such that their sum is n. so ST = I. 23.21 these are equivalent. F). Define a matrix A := [T (e1 ). As S is bijective Su goes through the whole space V as u varies. 1. What a) states is now that the map is injective while b) states that it is surjective. T + S is the identity operator which is trivially invertible. . . .1 · · ·    a1. By distributivity of matrix multiplication (exercise 17) we get for an arbitrary v = a1 e1 + . Then m (x − λi )ki is clearly such a polynomial. an. 0) and S ∈ L(F2 ) the operator S(a. 22. . + an Aen = a1 T (e1 ) + . + an T (en ) = T (a1 e1 + .   . . . .1 · · ·  . . . x =  . Then ST (Su) = S(T Su) = Su for all u. . A :=  . so the claim follows. Then neither one is injective and hence invertible by 3. F) to Mat(n. By theorem 3. Thus if T S = I. b). + an en that Av = A(a1 e1 + .n . . + an en ) = a1 Ae1 + . i=1 7 . . 1. This can be trivially generalized to spaces arbitrary spaces of dimension ≥ 2. . . xn Then the system of equations in a) reduces to Ax = 0.  . From theorem 3. . Now A defines a linear map from Mat(n. . Write a1. . . + an en ).21. b) = (a. . Clearly T S = ST if T is a scalar multiple of the identity. λm be m distinct numbers and k1 . . T (en )]. an. For the other direction assume that T S = ST for every linear map S ∈ L(V ). . . Let ei denote the n × 1 matrix with a 1 in the ith row and 0 everywhere else.n  x1 .21.21 we have ST invertible ⇔ ST bijective ⇔ S and T bijective ⇔ S and T invertible. By symmetry it’s sufficient to prove this in one direction only.

The clame is clearly true for U = {0} or U = V . Thus V is invariant under T . . An arbitrary element of U1 + . Now we have that pi (zj ) = 0 if and only if i = j. . . Assume that {0} = U = V . . . Now T (v) ∈ Ui for all i by assumption. + un ) = T (u1 ) + . . 4. .10 z is also a root of p. Let p(x) = n ai xi . Thus p has a real root. Let λ be a root of p. . Let pi (x) = j=i (x − zj ). Thus if we have no real roots we have an even number of roots counting multiplicity. vm ). . . v1 . un . Now we get p (x) = q(x) + (x − λ)q (x). λ is a multiple root. . . . . where ai ∈ R. . . . un ) be a basis for U and extend it to a basis (u1 . . Thus λ is a root of p if and only if λ is a root of q i. By proposition 4. . i = 1. so that deg pi = m. . . Thus we get T (u1 + .2. . but the number of roots counting multiplicity is deg p hence odd. i = 1. m. . Define m+1 p(x) = i=1 wi c−1 pi (x). n and T (vi ) = v1 . The statement follows.5. Thus s = s implying r = r . Let v ∈ V . 5 Eigenvalues and Eigenvectors 1. By the fundamental theorem of calculus we i=0 have a complex root z. Let s . . Let V = ∩i Ui where Ui is invariant under T for all i. 3. i Now deg p = m and p clearly p(zi ) = wi . + un where ui ∈ Ui . so T (v) ∈ ∩Ui = V . Thus we can write p(x) = (x − λ)q(x). Let ci = pi (zi ). . + Un by the assumption that T (ui ) ∈ Ui for all i. + Un is of the form u1 + . Assuming that s = s we have deg(r − r) < deg(s − s )p which is impossible. . so that v ∈ Ui for all i. Define a linear operator by T (ui ) = v1 . 8 . Then we get that 0 = (s − s )p + (r − r ) ⇔ r − r = (s − s )p We know that for any polynomials p = 0 = q we have deg pq = deg p + deg q. Then clearly U is not invariant under T . 5. Then z and z are roots of equal multiplicity (divide by (x − z)(x − z) which is real). Let (u1 . 3.e. . By our assumption m ≥ 1. . r be other such polynomials. . 2. We only need to prove uniqueness as existence is theorem 4. + T (un ) ∈ U1 + . .

. From the assumption λ = 0 we get z2 = 0. . . . Hence T has at most one non-zero eigenvalue hence at most two eigenvalues. 5z3 ) = λ(z1 . 9 . If T is invertible. Also T (−1. . so λ = 0. 10. λ2 and assume that λ1 = 0 = λ2 . . z2 . . We easily see that T(0. As T = (T −1 )−1 we only need to show this in one direction. Now T vi = λi vi and dim span(T v1 . . a3 . . . 0) so 0 is an eigenvalue. then it’s an eigenvalue of ST . Thus 5 is the only non-zero eigenvalue. 1). If Sv = 0. . 9. . Clearly T (1. . . 0. T vk+2 ) = dim span(λ1 v1 . 0. so n is another eigenvalue. then T Sv = 0. Then v1 . λk+2 vk+2 ) ⊂ range T . 1) = n(1. so that 5 is an eigenvalue.6 these eigenvectors are linearly independent.1)=(0. . 1) = (1. 7. then it is is an eigenvector for the eigenvalue λ. We also see that T (1. z3 ). 8. v2 be the corresponding eigenvectors. . so dim null T = n − 1. a2 . Let λ be an eigenvalue of T S and v the corresponding eigenvector. Then we get ST Sv = Sλv = λSv. so that Su ∈ null(T − λI). so by theorem 5. ST = T S gives us 0 = S(T u − λu) = ST u − λSu = T Su − λSu = (T − λI)Su.0. . . .) = (a2 . . Now we have T (a. 5z3 ) = λ(z1 . 0. Notice that the range of T is the subspace {(x. . . 6.9 these are all eigenvalues. Let a ∈ F. . but dim null T = n − 1. a3 . . Let u ∈ null(T − λI). so that 1 is an eigenvalue. Thus dim range T = 1. 0) = (0. v2 ∈ null T . Assume that λ = 0 and T (z1 . 0. Assume that T has two distinct eigenvalues λ1 . so if Sv = 0. these are all eigenvalues of T . The other implication follows by symmetry. By the previous paragraph. Also T (1. By theorem 5. . As Sv = 0 we know that S is not injective. . . Now let λ be an eigenvalue of T and v the corresponding eigenvector. 11. 0. vk+2 .) = a(a. . 5. 1) = (1.6 they are linearly independent. From T v = λv we get that T −1 λv = λT −1 v = v. −1) = −(−1. so the equation is of the form (0. a2 . . z3 ) = (2z2 .0. . . x) ∈ Fn | x ∈ F} and has dimension 1. 0. otherwise k + 1). . so this is impossible. Because T is not injective we know that 0 is an eigenvalue. Again we see that z1 = 0 so we get the equation (0.4. z2 . . . so every a ∈ F is an eigenvalue. This is a contradiction as dim range T = k and span(λ1 v1 . 1) so −1 is another eigenvalue. z3 ). By corollary 5. 0. . .). so that T −1 v = λ−1 v. so that T (u) − λu = 0. . .5). 1). 5z3 ) = λ(0. λk+2 vk+2 ) ≥ k + 1 (it’s k + 2 if all λi are non-zero. . then 0 is not an eigenvalue. Assume that T has k + 2 distinct eigenvalues λ1 . z3 ). λk+2 with corresponding eivenvectors v1 . Thus if λ is an eigenvalue of T S. . so ST is not injective and it has eigenvalue 0. Let v1 . .

On page 78 it was shown that it has no eigenvalue. . vn ) of V . + an vn . . Let T ∈ L(R2 ) be the map T (x. Choose i = j. By our assumption each Ui is an invariant subspace. . . 16.12. Now q(T )v = 0 and q(T ) = c n (T − i=1 i=1 λi I). . vn ). The claim now follows from proposition 5. Let v1 ∈ V and extend it to a basis (v1 . 10 . Let (v1 . 18. . 15. x) = (−x. However. . Let T ∈ L(F2 ) be the operator T (a.12. Then let a be an eigenvalue of p(T ) and v ∈ V the corresponding eigenvector. Let q(x) = p(x) − a.e. Let Ui be the subspace generated by the vectors vj . + an vn . so T is clearly invertible. However. . Let T v1 = a1 v1 + . Let λ be an eigenvalue of T and v ∈ V a corresponding eigenvector. λn vn ). a) with respect to the standard basis. We need to show that λi = λj for all i. vn ) be a basis of V . so −1 it has an eigenvalue. so n n n ai (ST S i=0 −1 i ) = i=0 ai ST S i −1 =S i=0 ai T i S −1 . x). . By assumption (T v1 . vj ) is linearly independent we get that λi − λ = 0 = λj − λ i.13 T has an upper-triangular matrix with respect to some basis (v1 . . 14. Thus p(λ) is an eigenvalue of p(T ). By the fundamental theorem of algebra we can write q(x) = c n (x − λi ). This means that (λi − λ)vi + (λj − λ)vj = 0. Now T 2 = I. . . so that a = p(λi ). . then by our assumption vi + vj is an eigenvector. . Now T v1 = a1 v1 + . Now v1 ∈ Ui for i > 1. Clearly (ST S −1 )n = ST n S −1 . Then we have n n p(T )v = i=0 ai T i v = i=0 ai λi v = p(λ)v. . . Thus we get 0 = q(λi ) = p(λi ) − a. λi = λj . T has the matrix 0 1 1 0 . −y). then T v ∈ Uj imples aj = 0. y) = (−y. The result now follows from the previous exercise. j. . . . . . Hence λi is an eigenvalue of T . We see that v1 was an eigenvector. . Because (vi . b) = (b. j = i. By theorem 5. 13. Let p(x) = n i i=0 ai x . T vn ) = (λ1 v. 17. . so T (vi + vj ) = λi vi + λj vj = λ(vi + vj ) = λvi + λvj . Thus T v1 = a1 v1 . T 2 (x. As q(T ) is non-injective we have that T − λi I is non-injective for some i. . y) = T (−y. So let j > 1.

 . we can find a u ∈ V such that P u = v. clearly not invertible. c). so ST and T S agree on a basis of V . a. As b = 0 we have λ2 + 1 = 0. If U is an invariant subspace of odd degree. d) = (−b. As dim V = dim null P + dim range P . but v ∈ null P .W (u + w) = u = λu + λw ⇔ (1 − λ)u = λw. Then we have λa = −b and λb = a. It is  1  . By symmetry we can assume that a = 0 (an eigenvector is non-zero). . 21. 22. then by theorem 5. Take the operator T in exercise 7. If λ = 0 is an eigenvalue and λ(a. . . b. so we have w = 0 which implies (1 − λ)u = 0. By theorem 5. then T is clearly injective. so that it’s non-zero we see that the eigenvectors corresponding to 1 are u = U \ {0}. 1 1 ··· 20.  . vn ) for V where vi is an eigenvector of T corresponding to the eigenvalue λi . 24. −d. Clearly 0 is an eigenvalue and the corresponding eigenvectors are null T = W \ {0}. −d. b. Because v is an eigenvector we have v = u + w = u = 0. Thus λw ∈ U . 23. Let λ be the eigenvalue of S corresponding to vi .26 T|U has an eigenvalue λ with eigenvector v. .. but this equation has no solutions in R. Thus T has no subspace of odd degree. Let T (a. d) = (−b. Thus we have that v = P u = P 2 u = P v. . a. We clearly see that a = 0 ⇔ b = 0 and similarly with c. Then PU. . As v ∈ range P . . . it’s clearly sufficient to prove that null P ∩ range P = {0}. d. Now assume λ = 0 is an eigenvalue and v = u + w is a corresponding eigenvector. Thus v = 0. 6 Inner-Product Spaces 1. c. .6 we can choose basis (v1 . so λw = 0. so P v = 0. Let v ∈ null P ∩ range P . Substituting we get λ2 b = −b ⇔ (λ2 + 1)b = 0. Hence T has no eigenvalue. so that 1 − λ = 0.19. . so 0 is not an eigenvalue. As we can choose freely our u ∈ U . Hence λ = 1. Solving we x + y − x−y 2 11 . From the law of cosines we get x − y get x y cos θ = 2 = x 2 2 + y 2 2 −2 x 2 y cos θ. Then λ is an eigenvalue of T with eigenvector v against our assumption. c). Then we get ST vi = Sλi vi = λi Svi = λi λvi = λλi vi = λT vi = T λvi = T Svi . but has the matrix  ··· 1 . c. Hence they are equal. but λ = 0.

y = (y1 . y2 ). 5. If v = 0. Then assume that u ≤ u+av for all a ∈ F. From the parallelogram equality we have u + v √ Solving for v we get v = 17. b2 / 2. av . 0). u + av . v = (0. so by the Pythagorean theorem u ≤ u + av = u+av . . . Thus we get u. √ √ √ √ 3. b 2 ≤ a 2 b 2 which follows directly from the CauchySchwarz inequality. so that 2t u.Now let x = (x1 . 2 + u−v 2 = 2( u 2 + v 2 ). 1 2 so that x y cos θ = x 2 + y 2 − x−y 2 2 = 2(x1 y1 + x2 y2 ) = x. . v 2 ≤ t u. av + av. u ≤ u. v = 0. Choose a = −t u. nan ) ∈ Rn and b = (b1 . v with t > 0. so that −2Re a u. v = 0. . v 2 ≤ t2 u. See previous exercise. A straight calculation shows that x 2 + y 2 − x−y 2 2 2 = x2 + x2 + y1 + y2 − (x1 − y1 )2 − (x2 − y2 )2 = 2(x1 y1 + x2 y2 ). u − v = 2. u = (1. u and simplify. Assuming that the norm is induced by an inner-product.g. Just use u = u. 4. . Set e. then u. 1). u + av. u ≤ u + av. bn / n) ∈ Rn . This equality is then simply a. v = 0. v 2 ≤ u. we would have by the parallelogram inequality 8 = 22 + 22 = 2(12 + 12 ) = 4. so that we get 2 u. v = 1. 6. av = 0. 2a2 . u + u. If not choose t = 1/ v 2 . which is clearly false. v 2 v 2. . If u. x2). 7. 2 2. Let a = (a1 . . . then clearly u. Thus u. Then u = 1. v ≤ |a|2 v 2 . y . 12 . v 2 . We have u 2 ≤ u+av 2 ⇔ u. u + v = 2. v 2 v 2 ⇔ 2 u.

so positivity definiteness follows. This gives us n→∞ lim rn x + y = λx + y when rn → λ. v = Trivially we have u 2 u+v 2 − u−v 4 2 . w − v. v by u. v = n u. This exercise is a lot trickier than it might seem. Now u+v+w −( u + w −( v + w = +( u − w −( u + w 2 2 2 4( u + v. Hence we get additivity in the first slot. Now we get u. All norms on a finite-dimensional real vector space are equivalent. v = nu/n. v = n u/n. = u. This probably doesn’t make any sense. w ) = − u+v−w 2 − u − w 2) − v − w 2) 2 u+v+w 2 2 − u+v−w 2 + v − w 2) + v + w 2) We can apply the parallelogram equality to the two parenthesis getting 4( u + v. 13 . the proof for C is almost identical except for the calculations that are longer and more tedious. w − v. v . Define u. u . w ) = + − = + = − u+v+w 2− u+v−w 2 1 ( u + w − 2w 2 + |u − v|2 ) 2 1 ( u + v + 2w 2 + u − v 2 ) 2 u+v+w 2− u+v−w 2 1 1 u + w − 2w 2 − u + v + 2w 2 2 2 1 2 ( u + v + w + w 2 ) + u + v − 2w 2 1 ( u + v − w 2 + w 2 ) − u + v + 2w 2 2 2 Applying the parallelogram equality to the two parenthesis we and simplifying we are left with 0. w − u. v . It’s easy to see that −u. so we get nu. I’ll prove it for R. v = − u. w − u.8. v for all n ∈ Z. so check a book on topology or functional analysis or just assume the result.

9. It follows that we have homogeneity in the first slot when the scalar is rational. v = u. We have integrals of the types π sin nx. And by using the identities u. v n→∞ 1 = lim ( rn u + v 2 − rn u − v 2 ) n→∞ 4 1 = ( λu + v 2 − λu − v 2 ) 4 = λu. cos mx sin nx. The proof for F = C can be done by defining the inner-product from the complex polarizing identity. sin mx cos nx.so that u/n. so we have an inner-product. v 0 + u. v = n→∞ lim rn u.1 1 cos((n − m)x) − cos((n + m)x) 2 cos((n − m)x) + cos((n + m)x) 2 sin((n − m)x) − sin((n + m)x) . v 0 1 = ( u+v 4 2 − u − v 2 ) ⇒ u. 2 . 14 . iv 0 i. 1 = 0 x = 1/2. Thus we have nu/m. This gives us λ u. Now let λ ∈ R and choose a sequence (rn ) of rational numbers such that rn → λ. v . v . This is really an exercise in calculus. and using the properties just proved for ·.1 1 x− x. v = n/m u. Where 1 x. e1 = 1. We trivially also have symmetry. v = 1/n u. cos mx = −π π sin nx sin mxdx cos nx cos mxdx −π π = = −π sin nx cos mxdx and they can be evaluated using the trigonometric identities sin nx sin mx = cos nx cos mx = cos nx sin mx = 10. v = lim rn u. · 0 . v Thus we have homogeneity in the first slot. then e2 = x− x.

em ) to a basis (e1 . e1 e1 + . . so that ai = 0 for i < n. Then to continue pencil pushing we √ √ x2 . which gives e2 = have √ 12(x − 1/2) = √ 3(2x − 1). Let (e1 . e1 e1 + . e1 e1 + . en ) and (e1 . . vn ) by Gram-Schmidt. . en−1 ) be an orthogonal list of vectors. . . ei = 0 for all i < n. vn−1 ). x2 .17 v = v. en ) satisfies the hypothesis from the problem we have by the previous paragraph that ei = ±ei . . . em en v. . 13.so that 1 x − 1/2 = x − 1/2. 15 . . . . 5(6x2 − 6x + 1). . Assume that (e1 . Thus we have en = an en and from en = 1 we have an = ±1.e. . ei = 0 for i > m i. . 3(2x − 1) = 3/6. en−1 . em en = | v. Extend (e1 . . It’s easy to see that we can extend the algorithm to work for linearly dependent lists by tossing away resulting zero vectors. + v. en ) be the orthonormal base produced from (v1 . en ) of V . en = vn − P vn but vn = P vn by our assumption. . . 3(2x − 1) 3(2x − 1) = x2 − 1/3 − 1/2(2x − 1). Then we can write en = a1 e1 + . Then calculating en with the Gram-Schmidt algorithm gives us vn − P (vn ) . vn ). . Let (v1 . . . + | v. 3(2x − 1). . 1 1 − x2 . Let (e1 . em | if and only if v. . Now we have en . By theorem 6. em ). + | v. . Thus we have v = | v. . . . Thus we have 2n possible such orthonormal lists. vn−1 ) is linearly independent and vn ∈ span(v1 . . . . √ √ so that x2 − x2 . . + an en . v ∈ span(e1 . so that v = = v. . . . 1 = 1/3. 12. . . x − 1/2 = 0 (x − 1/2)2 dx = 1/12. . The orthonormal basis is thus (1. We can assume that (v1 . Let P denote the projection on the subspace spanned by (v1 . e1 | + . Thus the resulting list will simply contain zero vectors. . . . . . . . . . en ) are orthogonal and span the same subspace. . en |. . . . . en en . . . . . e1 | + . . Now √ x2 − 1/3 − 1/2(2x − 1) = 1/(6 5) √ √ √ giving e3 = 5(6x2 − 6x + 1). . 11. + v. so en = 0. . . . + v. Then if (e1 . . . . . vn ) be a linearly dependent list. .

19. 4).14. Let U = span(e1 . Let u ∈ range P . Thus we want to find the projection of 2 + 3x to the subspace U := span(x2 .21 we have that V = range P ⊕ null P . 21. so v − P v ∈ null P = U ⊥ . e2 ). It’s easy to see that the differentiation operator has a upper-triangular matrix in the orthonormal basis calculated in exercise 10. Hence the claim follows. w = 0. Here √ 17 √ 2 + 3x. If T PU = PU T PU clearly U is invariant. Then let w ∈ U ⊥ . 3x2 = 3. 22 22 = 47 √ 1155. If p(0) = 0 and p (0) = 0. From exercise 5. 2. Let w ∈ null P . 420/11(x3 − 2 x2 )). 2. e2 e2 = (3/2. 1/ 2. An arbitrary v ∈ V can be written as v = P v + (v − P v). Let U := range P . e2 = (0. By definition P = PU . 0). 2. Thus null P ⊂ (range P )⊥ and from the dimension equality null P = (range P )⊥ . 1 420/11(x3 − x2 ) 2 71 2 329 3 x + x . 2/ 5). Now P v = P (P v + (v − P v)) = P v. 4). so that null P = U ⊥ . 2 . 3/2. so that P is the identity on U . 12 so that PU (2 + 3x) = − 16 2 + 3x. 20. Hence the decomposition v = P v + (v − P v) is the unique decomposition in U ⊕ U ⊥ . With Gram-Schmidt we get e1 = √ √ √ (1/ 2. Let u ∈ U . 1/ 5. 0. Then for a ∈ F u 2 = P (u + aw) 2 ≤ u + aw 2 . This follows directly from V = U ⊕ U ⊥ . 22/5). 4) = (1. 3. x3 ). From exercise 15. 17. 3x2 3x + 2 + 3x. From P 2 = P we have that P (v − P v) = 0. 18. By exercise 2 we have u. 660 1 420/11(x3 − x2 ) 2 1 420/11(x3 − x2 ). Thus U is invariant. With Gram-Schmidt we get √ 1 the orthonormal basis ( 3x2 . 11/5. 15. We get √ √ 2 PU (2+3x) = 2 + 3x. then p(x) = ax2 + bx3 . 3. but u = PU T w = T PU w = T 0 = 0. 22. we have dim U = dim V − dim U = dim null P . then by assumption null P ⊂ U ⊥ . Now we can write T w = u + u where u ∈ U and u ∈ U ⊥ . First we need to find an orthonormal √ basis for U . 3. then we have T u = T PU u = PU T u ∈ U . Then we have u = PU (1. then u = P v for some v ∈ V hence P u = P 2 v = P v = u. Now PU T w = u. e1 e1 + (1. Thus U ⊥ is also invariant. 0. Now assume that U is invariant. 16. so P is the identity on range P . Then for u ∈ U we have PU T PU u = PU T u = T u = T PU u. This follows directly from the previous exercise. so T w ∈ U ⊥ .

45 we see that √ √ √ √ q(x) = 1 + 3(2 · 1/2 − 1) 3(2x − 1) + 5(6 · (1/2)2 − 6 · 1/2 + 1) 5(6x2 − 6x + 1) = −3/2 + 15x − 15x2 .. M(T ∗ . v . From theorem 6.. 0 17 . Again √ √ we have the orthonormal basis (1. (e1 . From exercise √ √ 10 we get that (1.  . 1  0 0 .. . except that it takes ages to calculate. Choose an orthonormal basis (e1 . en )) = [ e1 . . π π 26. .. 3(2x − 1). . 25. .47 we have M(T. compared to the previous two. We see that the map T : P2 (R) → R defined by p → p(1/2) is linear. M(T ) =   .. 27. en )) =   . 5(6x2 − 6x + 1). The map T : P2 (R) → R defined by p → 0 p(x) cos(πx)dx is clearly linear.. There’s is nothing special with this exercise. v ).      . . v   . . . a en . . en ) for V and let F have the usual basis. . (e1 . . . . v · · · en . v  and finally T ∗ a = (a e1 . Then by proposition 6. 24. .23. with the usual basis for Fn we get    M(T ) =    0 so by proposition 6. . 0 . . . 5(6x2 − 6x + 1) is an orthonormal basis for P2 (R). so that  e1 .47  ∗ 0 1 . 1    .. 3(2x − 1). so that 1 1 q(x) = 0 cos(πx)dx + 0 1√ 1√ 3(2x − 1) cos(πx)dx × √ 3(2x − 1) + √ 5(6x2 − 6x + 1) cos(πx)dx × 5(6x2 − 6x + 1) 0 √ 4 3√ 12 = 0 − 2 3(2x − 1) + 0 = − 2 (2x − 1). v ] .   0 0   1 . . en . . . .

For the first part. Now we can write T ∗ w = u + v. . but range T is a subspace of V .Clearly T ∗ (z1 . 7 Operators on Inner-Product Spaces 1. .46 and exercise 15 we get dim null T = dim(range T )⊥ = dim W − dim range T = dim null T + dim W − dim V. . 28. 0). Hence the dimension of the span of the column vectors equal dim range T . let p(x) = x and q(x) = 1. From exercise 31 we see that T ∗ − λI is non-injective if and only if T − λI is non-injective. 30. so that range T = V . As (T ∗ )∗ = T it’s sufficient to prove the first part. For the second part it’s not a contradiction since the basis is not orthogonal. By the previous exercise dim range T = dim range T ∗ . T q = p. 0 = 0. By additivity and conjugate homogeneity we have (T −λI)∗ = T ∗ −λI. . As (U ⊥ )⊥ = U and (T ∗ )∗ = T it’s enough to show only one of the implications. Let T be the operator induced by A. q = p. where the second equality follows from the first part. Thus u = 0 which completes the proof. 18 . u = u 2 . T ∗ w = u. then by exercise 31 we have dim V = dim range T = dim range T ∗ . 32. . where u ∈ U and v ∈ U ⊥ . u + v = u. By proposition 6. so the claim follows.47 the span of the row vectors equals range T ∗ . zn . . . From proposition 6. For the second part we have dim rangeT ∗ = dim W − dim nullT ∗ = dim V − dim null T = dim range T. Then clearly 1/2 = T p. That is λ is an eigenvalue of T ∗ if and only if λ is an eigenvalue of T . . so that 0 = T u. Assume that T is injective. Let U be invariant under T and choose w ∈ U ⊥ . so the generate range T . zn ) = (z2 . 31. The columns are the images of the basis vectors under T . 29. w = u.

un ) an orthogonal basis for range P and extend it to a basis of V . Then clearly P has a diagonal matrix with respect to this basis. T k−1 v = 0. so that range T k ⊂ range T . Thus null T k ⊂ null T k−1 and continuing we get null T k ⊂ null T . 7. Now let u ∈ range T k . Then we easily see that A∗ A = AA∗ and B ∗ B = BB ∗ . The identity operator I is self-adjoint. 4. Let v ∈ null T k . . so iI is not self-adjoint. . but clearly (iI)∗ = −iI. Let (u1 . . Then we have T ∗ T k−1 v. Choose the standard basis for R2 . 5. A + B doesn’t define a normal operator which is easy to check.46 we have null P = null P ∗ = (range P )⊥ . From the first part we get dim range T k = dim range T . so self-adjoint operators form a subspace. From proposition 7. Then from proposition 6. then we can find a v ∈ V such that u = T k v = T (T k−1 v). 6. B= 1 1 1 0 . Then assume that P is a projection. so range T k = range T . Assume that P is self-adjoint. 1 0 0 1 Then T and S as clearly self-adjoint. However. Now T k−1 v. so that T ∗ T k−1 v = 0. then (aT +bS)∗ = (aT )∗ +(bS)∗ = aT +bS for any a. Now P is clearly a projection to range P . T k−1 v = T ∗ T k−1 v. b ∈ R. so P is self-adjoint. . If T and S are self-adjoint. so that v ∈ null T k−1 . T ∗ T k−1 v = T ∗ T k v. Choose the operators corresponding to the matrices A= 0 −1 1 0 . Let T and S the the operators defined by the matrices 0 1 0 0 . 3. It follows from proposition 6.6 we get null T = null T ∗ .2. . T k−2 v = 0. 19 . Clearly null T ⊂ null T k .46 that range T = (null T ∗ )⊥ = (null T )⊥ = range T ∗ . . but T S has the matrix 0 1 0 0 so T S is not self-adjoint.

so by corollary 7. If T is normal. . . 10. . Hence T (T v − v) = T 2 v − T v = 0. . v = 0. . λn be the diagonal elements. λ2 on n 1 the diagonal and from T 2 = T we must have λ2 = λi for all i. 3 λn on the diagonal.8. . . If |λi − λ| ≥ ε for all i. then n n 0 −1 . By the spectral theorem we can choose a basis of eigenvectors for T such that T has a diagonal matrix with λ1 . + an vn this implies that n i=1 |ai | = 1. λn on the diagonal. Assume that T v − λv < ε. . Let S be the operator corresponding to the diagonal matrix having √ √ 3 λ1 . . Thus we can find an eigenvalue λi such that |λ − λi | < ε. i so the matrix of T equals its conjugate transpose. 20 . . Then T has a diagonal matrix with respect to the basis. . For any v ∈ V we get 0 = T 9 v − T 8 v = T 8 (T v − v). . Let λ1 . Hence T is self-adjoint. Let A be the matrix of T corresponding to the basis. Let v ∈ V be such that v = 1. A self-adjoint operator is normal. √Let S be the operator corresponding to the diagonal matrix having the √ elements λ1 . . 3) and v = (2. Now T is self-adjoint if and only if the conjugate transpose of A equals A that is the eigenvalues of T are real. λn be the diagonal elements. . By the spectral theorem we can choose a basis consisting of eigenvectors of T . . 5. λn on the diagonal. . 13. so T 2 = T . Clearly S 2 = T . By the spectral theorem we can choose a basis of T such that the matrix of T corresponding to the basis is a diagonal matrix. . By the spectral theorem we can choose a basis (v1 . . . .8 if T is normal that would imply orthogonality of u and v. Let λ1 . which is a contradiction. . . Let T the operator corresponding to the matrix T 2 + I = 0. 7) are both eigenvectors corresponding to different eigenvalues. . Hence λi = 0 or λi = 1. vn ) consisting of eigenvectors of T . Thus T v − v ∈ null T 8 = null T by the normality of T . Then clearly S 3 = T . Then 1 0 T v − λv = i=1 n (ai T vi − λvi ) = i=1 n (ai λi vi − λvi ) |ai ||λi − λ| v1 = i=1 n ai (λi − λ)vi = i=1 n = i=1 |ai ||λi − λ| ≥ ε i=1 |ai | = ε. . we can choose a basis of V consisting of eigenvectors of T . 2. The vectors u = (1. . Now T 2 has the diagonal matrix with λ2 . . Clearly u. . . 12. If v = a1 v1 + . so T can’t be even normal much less self-adjoint. 14. . . 9. 11.

T v ≥ 0. 19. Then we have an inner-product and clearly T is self-adjoint as (v1 . by hypothesis and self-adjointness. . Also (S + T )v. i = j Extend this to an inner-product by bilinearity and homogeneity. λn and by assumption λi > 0 for all i. 17. Hence S has an eigenvalue λ and a corresponding eigenvector v. S + T is positive. Then T n v. sin θ cos θ is a square root for every θ ∈ R. 0) = 1 and S(0. Clearly T v. Clearly T k is self-adjoint for every positive integer. . 20. Hence S 2 v = λ2 v = v. b) = (a + b. + an vn ∈ V \ {0}. . . v = |a1 |2 λ1 + . Thus. Let v = a1 v1 + . Then span((1. v > 0 for all v ∈ V \ {0} implies that T is injective hence invertible. Hence the result follows by induction. 21 . 16. S be two positive operators. Define an inner-product by vi . so that T v. vn ) is basis such that the matrix of T is a matrix consisting of eigenvectors of T . . . v + T v. 21. . Since T is self-adjoint we can choose an orthonormal basis (v1 . . R3 is an odd-dimensional real vector space. If such an inner-product exists we have a basis consisting of eigenvectors by the spectral theorem. vj = 0. v = T n−2 T v. 0). vn ) of eigenvectors such that T has a diagonal matrix with respect to the basis. . + |an |2 λn > 0. but S is clearly not an isometry. T v ≥ 0. 18. Assume then that (v1 . but it’s orthogonal complement span((0. which shows that it has infinitely cos θ − sin θ many square roots. . For k = 2 we have T 2 v. v = Sv. vn ) is an orthonormal basis of U. 1)) is clearly not. . v ≥ 0. 0)) is invariant under T . . . i = j . Let T ∈ L(R2 ) be the operator T (a. Let the elements on the diagonal be λ1 . Assume that the result is true for all positive k < n and n ≥ 2. . a + b). 1) = 1. 22. Then S(1. Then they are self-adjoint and (S + T )∗ = S ∗ + T ∗ = S + T . .15. 0) = (1. so S + T is self-adjoint. . Let S ∈ L(R2 ) be defined by S(a. So assume that T is invertible. . . Since Sv = |λ| v = v we have λ2 = 1. b) = (a. 1. . Let T. . v = T v.

vn ) of eigenvectors of √∗ T and we can T assume that v1 corresponds to the eigenvalue 0. i > 1. 0 3 0 1 0 0 Thus we get  4 0 0 M(T ∗ T ) =  0 9 0  . .23. Hence S = T T ∗ T √so S is uniquely determined. √ −1 . Now we have T ∗ = (SR)∗ = R∗ S ∗ = RS ∗ by self-adjointness of R. These are the singular values. Now T ∗ T = T 2 corresponds to the diagonal matrix √ with λ2 . 9z2 . so (T 2 )∗ T 2 = 0. so U is an isometry. λn on the diagonal. Assume that T is invertible. Let T ∈ L(R2 ) be the map T (a. . Now we just need to permute the indices which corresponds to the isometry S(z1 . z2 . z1 . By the spectral √ theorem we can choose a basis (v1 . v ∈ V . . Now let T =√ T ∗ T . a).26 it has a unique positive square root. z3 ) = (2z1 . z2 . √ 28. so by proposition 7. Thus T is bijective if and only if 0 is not a singular value. Hence T ∗ T (z1 . Then from the matrix representation √ we see that T ∗ T (a. Define U S by U v1 = −Sv1 . . √ 25. 3z2 . . Thus it’s sufficient to show that R2 = T ∗ T . λ2 on the diagonal and the square root T ∗ T corresponds to the diagn 1 onal matrix having |λ1 |. . 26. . then since S is an √ isometry we have that T is bijective hence invertible if and only if T ∗ T is bijective. Then T ∗ T must be invertible (polar decomposition). Thus R2 = RIR = RS ∗ SR = T ∗ T . v . Then it’s easy to verify that U u. U v = Su. Clearly T ∗ T is positive. The composition of two bijections is a bijection. Sv = u. z3 ). . z2 . . 0). U vi = Svi . . so that T ∗ T is not invertible. The matrices corresponding to T and T ∗ are     0 0 1 0 2 0 M(T ) =  2 0 0  . so 12 = 1 is not a singular value of T 2 . However. Clearly U = S and T = U T ∗ T . |λn | on the diagonal. z3 ) → (4z1 . . . 0) = (a. . M(T ∗ ) =  0 0 3  . 27. hence T ∗ T = T ∗ T and 1 is clearly a singular value. 24. 0 0 1 √ so that T ∗ T is the operator (z1 . b) = (0. so the claim follows. Then range T ∗ T is not invertible (hence not surjective). T 2 = 0. . Choose a basis of eigenvectors for T such that T has a diagonal matrix with eigenvalues λ1 . 22  . z2 ). √ Now assume that T is not invertible. If T=S T ∗ T . z3 ) = (z3 . . Now assume that T is not invertible. Now choose u. √ But T ∗ T is injective hence bijective if and only if 0 is not an eigenvalue. . z3 ).

. this means that the √ √ matrix of S ∗ S is the identity matrix. Assume that T1 and T2 have the same ∗ singular values s1 . . . It follows from proposition 6. denote by S the map defined by the formula. Then we can choose bases of eigenvectors of T1 T1 and ∗ T2 T2 such that they have the same matrix. so S is an isometry. Then S ∗ S is a self-adjoint (positive) operator with all eigenvalues equal to 1. . . then (S −1 v . vn ) and (w1 . . . . . (f1 . Set A := M(S. For the second part observe that a linear map S defined by the formula maps fi to s−1 ei which is mapped by T to fi . Let T1 = S1 T1 T1 and T2 = S2 T2 T2 . so all singular values are 1. . the √ Since T ∗ T is self-adjoint we can choose a basis of eigenvectors of T ∗ T such that √ √ the matrix of T ∗ T is a diagonal matrix. Let S the operator defined by S(wi ) = vi . . . Clearly A = B and is a diagonal matrix with s1 . . S −1 v ) is an orthonormal basis for V of eigenvectors vectors of T1 1 1 n 1 ∗ ∗ ∗ of T2 T2 . For the first part. . The claim now follows from i exercise 3. Let these bases be (v1 . ∗ so that λ is an eigenvalue of T2 T2 . . B := M(T. . (e1 . we get S3 = S2 S 1 T2 = S3 S1 ∗ T1 T1 S = S3 T1 S. . Let (v1 . then clearly S is an isometry ∗ ∗ ∗ ∗ and T2 T2 = S −1 T1 T1 S. . so the claim follows. Thus S ∗ S = I. Writing −1 S −1 . This gives us −1 −1 ∗ −1 ∗ ∗ T2 T2 u = S1 T1 T1 S1 u = S1 T1 T1 v = S1 λv = λu. en )). which is clearly an isometry. the number of non-zero singular values. Then we have T2 T2 = (S1 T1 S2 )∗ (S1 T1 S2 ) = S1 T1 T1 S1 = −1 ∗ ∗ T and v a corresponding eigenvector. . sn on the diagonal. . . . fn )). then clearly S ∗ S = I. . As all eigenvalues are one. . . . (e1 . . All the si are positive real numberse. Hence T2 T2 and T1 T1 have the same eigenvalues.e. . Now assume √ that all singular values are 1. If S is an isometry. 23 . Let λ be an eigenvalue of T1 1 Because S1 is bijective we have a u ∈ V such that v = S1 u. . From √ polar decomposition theorem we see that dim range T = dim range T ∗ T . fn ). Thus we get T2 = S2 T2 T2 = S2 S −1 T1 T1 S. S1 T1 T1 S1 . . .47 that S = T ∗ . so that B ∗ = B = A. sn . . 32. The singular values are simply the positive square roots of these eigenvalues. By the spectral theorem we can choose a basis of V such √ that S ∗ S has a diagonal matrix. .√ 29. . . . wn ). ∗ ∗ 31. (f1 . ∗ ∗ ∗ Now let T2 = S1 T1 S2 . vn ) be an orthonormal basis of eigen∗ T . √ 30. . Thus T S = I. en ). .23. . . Clearly dim range T ∗ T equals the number of non-zero elements on the diagonal i.

so that 0 = S 4 = T 2 . . If ST is nilpotent. . so that a0 = 0. ei | ≤ i=1 si | v. On page 78 it’s shown that ±i are the eigenvalues of T . then ∃n ∈ N such that (ST )n = 0. From the triangle equality and the previous exercise we get (T + T )v ≤ T v + T v ≤ (s + s ) v . 8 Operators on Complex Vector Spaces 1. . Now clearly n n s v =s ˆ ˆ i=1 | v. then S is nilpotent. . T is not injective. 4. We see that T 2 = 0. . −i) is an eigenvector corresponding to the eigenvalue i and (1. then 0 = T m−1 ( m−1 ai T i v = a0 T m−1 v. so that (T + T )v = S so that s ≤ s + s . 3. Contradiction. Let T + T = S (T + T )∗ (T + T ). 2. Because (1. i) is an eigenvector corresponding to the eigenvalue −i. but T 2 = 0. Api=0 i=0 plying repeatedly T m−i for i = 2. so by corollary 8.√ 33. so that dim null(T − iI)2 = dim null(T + iI)2 = 1. so (T S)n+1 = T (ST )n S = 0. From the singular value decomposition theorem we can choose bases of V such that n n Tv = i=1 si v. T m−1 v) is linearly independent. ei | = T v and similarly for s v . Now let v be the eigenvector of (T + T )∗ (T + T ) corresponding to the singular value s. Assume that S is a square root of T . We have dim null(T − iI)2 + dim null(T + iI)2 = dim C2 = 2. 34. Let m−1 ai T i v = 0. . so 0 is an eigenvalue of T . we see that ai = 0 for all i. . (T + T )∗ (T + T )v = Ssv = s v ≤ (s + s ) v . ei fi = i=1 si | v. 5. 24 . We see that T 3 = 0. ei | since all si are positive.8 we have S dim V = S 3 = 0. The set of all generalized eigenvectors are simply the spans of these corresponding eigenvectors. so that any v ∈ V is a generalized eigenvector corresponding to the eigenvalue 0. Hence (v. By definition T ∗ T is positive. Thus all singular values of T are positive.

If there’s a generalized eigenvector that is not an eigenvector. so the claim clearly follows. b. It follows that d1 . d) = (7a. range T m = range T m+1 implies dim null T m = dim null T m+1 i. but T is not nilpotent. so the claim is false. Then d1 . 16. Let T be the operator defined in exercise 1.26 we can find a basis of V such that the matrix of N is upper-triangular with zeroes on the diagonal. 7. Let v ∈ null T n ∩ range T n . Now the matrix of N ∗ is the conjugate transpose. Then let T ∈ L(R3 ) be the operator T (a.6. which is impossible. then we have an eigenvalue λ such that dim null(T − λI)dim V = dim null(T − λI). so it’s sufficient to prove that null T n ∩ range T n = {0}. 7b. c. Let d1 be the multiplicity of the eigenvalue 5 and d2 the multiplicity of the eigenvalue 6. Assume that λ = 0 is an eigenvalue of N with corresponding eigenvector v. c) = (−b. 10.23 T has a basis of eigenvectors.8. From proposition 8. By lemma 8. From 0 = T v = T n+1 u we see that u ∈ null T n+1 = null T n which implies that v = T n u = 0. If N dim V −1 = N dim V . b. 15.5. Thus if 25 . If every generalized eigenvector is an eigenvector. Then clearly 0 is the only eigenvalue. so T has at most two different eigenvalues.23 n ≥ dim null T n + dim null (T − λ1 I)n + dim null (T − λ2 I)n ≥ n − 1 + 1 + 1 = n + 1. so that T dim V = 0. so that (T − 5I)n−1 (T − 6I)n−1 = 0. so from theorem 8. then by theorem 8. so that (z − 5)d1 (z − 6)d2 divides (z − 5)n−1 (z − 6)n−1 . 0). the matrix of N must be zero. Since N ∗ = N . From null T n−2 = null T n−1 and from proposition 8. By corollary dim null N dim V = dim V . 11. so the claim follows. null T m = null T m+1 . 13. Then we can find a u ∈ V such that T n u = v. 12. 8. Then N dim V v = λdim V v = 0. 9. By the Cayley-Hamilton theorem (T − 5I)d1 (T − 6I)d2 = 0. This contradicts corollary 8. then dim null N i−1 < dim null N i for all i ≤ dim V by proposition 8.5 we get dim null T m = dim null T m+k for all k ≥ 1. d2 ≤ n − 1. We have dim V = dim null T n + dim range T n . λ2 .5 we see that dim null T n ≥ n−1. d2 ≥ 1 and d1 + d2 = n. so the claim follows. 8d) from the matrix of T it’s easy to see that the characteristic polynomial is (z − 7)2 (z − 8)2 . Again this implies that dim range T m = dim range T m+k for all k ≥ 1. Let T ∈ L(C4 ) be defined by T (a. 8c. Assume that T has three different eigenvalues 0.e.23 we have V = null T dim V . 14. Clearly null T ∩ range T = {0}. From theorem 8. Then dim null (T −λi I)n ≥ 1. a. λ1 .

17. Let p(x) = Σ_{i=0}^m ai x^i be the minimal polynomial of T. If a0 = 0, then p(x) = x Σ_{i=1}^m ai x^(i−1), and since T is invertible, p(T) = 0 forces Σ_{i=1}^m ai T^(i−1) = 0, which contradicts the minimality of p. Hence a0 ≠ 0, so solving for I in p(T) = 0 we get

I = −a0^(−1)(a1 + a2 T + ... + am T^(m−1))T.

Hence setting q(x) = −a0^(−1)(a1 + a2 x + ... + am x^(m−1)) we have q(T) = T^(−1).

18. Choose the matrix

A = [ 1 0 0 0
      0 1 1 0
      0 0 1 0
      0 0 0 0 ].

It's easy to see that A(A − I)² = 0, while no proper factor of z(z − 1)² annihilates A. Hence z(z − 1)² is the minimal polynomial.

19. Continuing as in the proof of lemma 8.30 up to j = 4 we see that a4 = −5/128, so that

√(I + N) = I + (1/2)N − (1/8)N² + (1/16)N³ − (5/128)N⁴.

20. Just replace the Taylor polynomial in lemma 8.30 with the Taylor polynomial of (1 + x)^(1/3) and copy the proofs of the lemma and of theorem 8.32.

21. The operator defined by the matrix

[ 0 0 1
  0 0 0
  0 0 0 ]

is clearly an example.

22. By lemma 8.26 choose a basis (v1, ..., vn) such that N has an upper-triangular matrix with zeroes on the diagonal, and apply Gram-Schmidt orthogonalization to obtain an orthonormal basis (e1, ..., en) with span(e1, ..., ei) = span(v1, ..., vi) for each i. Then Ne1 = 0; assume that Nei ∈ span(e1, ..., ei). Since ei+1 is a scalar multiple of vi+1 − ⟨vi+1, e1⟩e1 − ... − ⟨vi+1, ei⟩ei, we get

N ei+1 = c(N vi+1 − ⟨vi+1, e1⟩N e1 − ... − ⟨vi+1, ei⟩N ei) ∈ span(e1, ..., ei+1),

because N vi+1 ∈ span(v1, ..., vi) = span(e1, ..., ei). Thus N has an upper-triangular matrix in the orthonormal basis (e1, ..., en).
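The formula in exercise 19 is easy to verify mechanically. A small sketch (my addition, assuming NumPy): for a nilpotent N with N⁵ = 0, squaring the polynomial reproduces I + N exactly, since all terms of degree at least five vanish:

    import numpy as np

    N = np.diag(np.ones(4), k=1)     # 5x5 nilpotent shift, N^5 = 0
    I = np.eye(5)
    P = np.linalg.matrix_power
    S = I + N/2 - P(N, 2)/8 + P(N, 3)/16 - 5*P(N, 4)/128
    print(np.allclose(S @ S, I + N))  # True: S is a square root of I + N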

23. Let q be the minimal polynomial of T and write q = sp + r, where deg r < deg p. Then 0 = q(T)v = s(T)p(T)v + r(T)v = r(T)v. Assuming r ≠ 0, we can multiply both sides by the inverse of the highest coefficient of r, yielding a monic polynomial r2 of degree less than deg p such that r2(T)v = 0, contradicting the minimality of p. Hence r = 0 and p divides q.

24. Assume that V has a basis of eigenvectors (v1, ..., vn), let λ1, ..., λm be the distinct eigenvalues of T and set p(z) = Π_{i=1}^m (z − λi). Then for each basis vector vi we get

p(T)vi = (Π_{j≠i} (T − λj I))(T − λi I)vi = 0,

so p annihilates T, and the minimal polynomial, being a factor of p, doesn't have a double root. Now assume that the minimal polynomial has no double roots, and let (v1, ..., vn) be a Jordan basis of T. Let A be the largest Jordan block, with eigenvalue λ. Since A − λI is a nilpotent shift, the minimal polynomial of A is (z − λ)^(m+1), where m is the length of the longest consecutive string of 1's that appears just above the diagonal. A minimal polynomial without double roots forces m = 0, so that A is a 1 × 1 matrix; since A was the largest Jordan block (the same argument applies to each eigenvalue), all blocks are 1 × 1 and all basis vectors are eigenvectors. Hence V has a basis of eigenvectors.

25. If V is a complex vector space, then by the spectral theorem T has a basis of eigenvectors, so the claim follows from the previous exercise. Now assume that V is a real vector space. By theorem 7.25 we can find a basis such that T has a block diagonal matrix where each block is a 1 × 1 or 2 × 2 matrix, and the 2 × 2 blocks can be chosen to be of the form

A = [ a −b
      b  a ]

with b > 0. Let λ1, ..., λm be the distinct eigenvalues corresponding to the 1 × 1 blocks. Then p(z) = Π_{i=1}^m (z − λi) annihilates all but the 2 × 2 blocks of the matrix, so it's sufficient to show that each 2 × 2 block is annihilated by a quadratic which doesn't have real roots. It's easy to see that

A² − 2aA + (a² + b²)I = 0,

and the polynomial z² − 2az + (a² + b²) clearly has a negative discriminant (it equals −4b²), so it has no real roots. The minimal polynomial therefore divides the product of p with the distinct quadratics arising from the 2 × 2 blocks, and this product has no double root, so the claim follows.
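A quick check of the computation in exercise 25 (my addition, using NumPy): the block [[a, −b], [b, a]] is annihilated by z² − 2az + (a² + b²), whose discriminant is −4b²:

    import numpy as np

    a, b = 2.0, 3.0
    A = np.array([[a, -b], [b, a]])
    print(np.allclose(A @ A - 2*a*A + (a*a + b*b)*np.eye(2), 0))  # True
    print((2*a)**2 - 4*(a*a + b*b))                               # -4 b^2 < 0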

26. It's easy to see that no proper factor of z(z − 1)²(z − 3) annihilates the matrix

A = [ 3 1 1 1
      0 1 1 0
      0 0 1 0
      0 0 0 0 ],

so its minimal polynomial is z(z − 1)²(z − 3), which by definition is also the characteristic polynomial.

27. It's easy to see that no proper factor of z(z − 1)(z − 3) annihilates the matrix

A = [ 3 1 1 1
      0 1 0 0
      0 0 1 0
      0 0 0 0 ]

(the previous matrix with the extra 1 above the diagonal removed), but A(A − I)(A − 3I) = 0. Hence its minimal polynomial is z(z − 1)(z − 3), while its characteristic polynomial is z(z − 1)²(z − 3).

28. Set p(z) = a0 + a1 z + ... + a_{n−1} z^(n−1) + z^n. We see that T(ei) = e_{i+1} for i < n, so that (e1, Te1, ..., T^(n−1)e1) = (e1, ..., en) is linearly independent. Thus q(T)(e1) ≠ 0 for any non-zero polynomial q of degree less than n, so the minimal polynomial has degree n. Now from the matrix of T we see that

T^n(e1) = −a0 e1 − a1 T(e1) − ... − a_{n−1} T^(n−1)(e1),

so that p(T)(e1) = 0. Since each ei equals T^(i−1)(e1) and p(T) commutes with T, we get p(T) = 0. Hence p, being monic of degree n, is the minimal polynomial, and it also equals the characteristic polynomial.

29. Let vi, ..., v_{i+m} be the basis elements corresponding to the biggest Jordan block of N, which has the form

[ 0 1       ]
[   0 1     ]
[     .  .  ]
[        0 1]
[          0],

an (m + 1) × (m + 1) shift. Then N^m v_{i+m} = vi ≠ 0, while clearly N^(m+1) = 0. Thus the minimal polynomial divides z^(m+1) but no smaller power annihilates N, so the minimal polynomial is z^(m+1).
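A numerical illustration of exercise 28 (my addition, with an arbitrarily chosen cubic; assumes NumPy): the companion matrix of p(z) = z³ − 2z² − 5z + 6 satisfies p(C) = 0, and its eigenvalues are exactly the roots of p:

    import numpy as np

    a = [6., -5., -2.]                  # a0, a1, a2 of p(z) = z^3 - 2z^2 - 5z + 6
    C = np.array([[0., 0., -a[0]],
                  [1., 0., -a[1]],
                  [0., 1., -a[2]]])     # C e_i = e_{i+1}, C e_3 = -a0 e1 - a1 e2 - a2 e3
    P = np.linalg.matrix_power
    print(np.allclose(P(C, 3) - 2*P(C, 2) - 5*C + 6*np.eye(3), 0))  # True: p(C) = 0
    print(np.sort(np.linalg.eigvals(C).real))                       # -2, 1, 3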

30. Assume that V can't be decomposed into two proper invariant subspaces. Then T has only one eigenvalue λ, since otherwise the generalized eigenspaces of theorem 8.23 would give such a decomposition. Moreover, there can be only one Jordan block: if there were more, we could let (v1, ..., vm) be the vectors corresponding to all but the last Jordan block and (vm+1, ..., vn) the vectors corresponding to the last Jordan block, and then clearly V = span(v1, ..., vm) ⊕ span(vm+1, ..., vn) would be a decomposition into proper invariant subspaces. Thus there is only one Jordan block, T − λI is nilpotent with minimal polynomial z^(dim V) by the previous exercise, and T has minimal polynomial (z − λ)^(dim V).

Now assume that T has minimal polynomial (z − λ)^(dim V), and suppose V = U ⊕ W with U and W proper invariant subspaces. Let p1 be the minimal polynomial of T|U and p2 the minimal polynomial of T|W. Then (p1 p2)(T) = 0, so that (z − λ)^(dim V) divides p1 p2 and deg p1 + deg p2 = deg p1 p2 ≥ dim V; on the other hand deg p1 + deg p2 ≤ dim U + dim W = dim V. Thus deg p1 p2 = dim V, which forces p1(z) = (z − λ)^(dim U) and p2(z) = (z − λ)^(dim W). Now if n = max{dim U, dim W}, we have (T|U − λI)^n = (T|W − λI)^n = 0, so that (T − λI)^n = 0 with n < dim V, contradicting the fact that (z − λ)^(dim V) is the minimal polynomial. The claim follows.

31. Reversing the Jordan basis simply reverses the order of the Jordan blocks, and each block needs to be replaced by its transpose.

9 Operators on Real Vector Spaces

1. The characteristic polynomial of the matrix is p(x) = (x − a)(x − d) − bc. Its discriminant equals

(a + d)² − 4(ad − bc) = (a − d)² + 4bc.

Hence the characteristic polynomial has a real root, i.e. the operator has an eigenvalue, if and only if (a − d)² + 4bc ≥ 0.

2. Let a be the value in the upper-left corner. Then the matrix must have the form

[ a      1 − a
  1 − a  a     ]

and it's easy to see that (1, 1) is an eigenvector corresponding to the eigenvalue 1.
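The criterion of exercise 1 above can be probed numerically. This sketch (my addition, using NumPy) checks for random 2 × 2 matrices that real eigenvalues occur exactly when (a − d)² + 4bc ≥ 0:

    import numpy as np

    rng = np.random.default_rng(1)
    ok = True
    for _ in range(1000):
        a, b, c, d = rng.standard_normal(4)
        disc = (a - d)**2 + 4*b*c
        eig = np.linalg.eigvals([[a, b], [c, d]])
        ok &= (disc >= 0) == (np.max(np.abs(eig.imag)) < 1e-9)
    print(ok)  # True: real eigenvalues iff (a-d)^2 + 4bc >= 0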

3. See the proof of the next problem, though in this special case the proof is trivial.

4. First let λ be an eigenvalue of A, and assume that λ is not an eigenvalue of any of the blocks A1, ..., Am, so that Ai − λI is invertible for each i. Let B be the block diagonal matrix with blocks (A1 − λI)^(−1), ..., (Am − λI)^(−1). Then B(A − λI) is block upper-triangular with identity matrices on the diagonal, hence it's invertible; but then A − λI is invertible as well, contradicting the fact that λ is an eigenvalue of A. Thus λ is an eigenvalue of some Ai.

Conversely, let λ be an eigenvalue of Ai, let Ai correspond to the rows and columns j, ..., j + k, and let (e1, ..., en) be the standard basis. Let (a0, ..., ak) be an eigenvector of Ai corresponding to λ and set v = a0 ej + ... + ak ej+k. Let T be the operator corresponding to A − λI in L(F^n). Then it's easy to see that T maps the space span(e1, ..., ej−1, v) into the space span(e1, ..., ej−1), i.e. a j-dimensional space into a (j − 1)-dimensional one. Hence T is not injective, so that λ is an eigenvalue of A.

5. The proof is identical to the argument used in the solution of exercise 2.

6. Let (v1, ..., vn) be a basis with respect to which T has a block upper-triangular matrix as in theorem 9.9, with 1 × 1 and 2 × 2 blocks on the diagonal. If vj corresponds to a 1 × 1 block, or to the second vector of a pair corresponding to a 2 × 2 block, then span(v1, ..., vj) is an invariant subspace of dimension j. The remaining possibility is that vj corresponds to the first vector of a pair corresponding to a 2 × 2 block, in which case span(v1, ..., vj+1) is an invariant subspace of dimension j + 1.

7. The equation x² + x + 1 = 0 has a solution λ in C, and the operator corresponding to the matrix λI is clearly an example.

8. Assuming that such an operator existed, we would have a basis (v1, ..., v7) giving T a matrix of the form in theorem 9.10, and since T² + T + I is nilpotent, dim null(T² + T + I)^(dim V) = 7. By theorem 9.9 this dimension is twice the multiplicity of the eigenpair (1, 1), so the pair would appear 7/2 times on the diagonal. This is impossible, as 7/2 is not a whole number, so such an operator doesn't exist.

9. Let T be the operator in L(R^(2k)) corresponding to the block diagonal matrix with k blocks of size 2 × 2, each having characteristic polynomial x² + αx + β. Each block satisfies its characteristic polynomial (a direct calculation, cf. exercise 13), so T² + αT + βI = 0 and dim null(T² + αT + βI)^(2k) = 2k. By theorem 9.9 the eigenpair (α, β) then has multiplicity dim null(T² + αT + βI)^(2k)/2 = k.

10. Let U = null(T² + αT + βI)^k, an invariant subspace of T, and set S = T|U. Then (S² + αS + βI)^k = 0, so S² + αS + βI is nilpotent and null(S² + αS + βI)^(dim U) = U by corollary 8.8. By theorem 9.9 applied to S, dim U = dim null(S² + αS + βI)^(dim U) equals twice the multiplicity of the eigenpair (α, β) of S. Hence dim null(T² + αT + βI)^k is even, and the claim follows.
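A quick check of the construction in exercise 9 (my addition, with α = β = 1; assumes NumPy): the 2 × 2 companion matrix of x² + αx + β satisfies A² + αA + βI = 0, so the block diagonal operator built from k such blocks satisfies T² + αT + βI = 0 on R^(2k):

    import numpy as np

    alpha, beta = 1.0, 1.0           # x^2 + x + 1, note alpha^2 < 4*beta
    A = np.array([[0., -beta],
                  [1., -alpha]])     # companion matrix of x^2 + alpha x + beta
    print(np.allclose(A @ A + alpha*A + beta*np.eye(2), 0))  # True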

11. By theorem 9.9 we can choose a basis such that the matrix of T has the form of theorem 9.10. Since 5 and 7 are eigenvalues, the 1 × 1 matrices [5] and [7] each appear at least once on the diagonal. Assuming that T has an eigenpair, we would also have at least one 2 × 2 matrix on the diagonal. This is impossible, as the diagonal has only length 3.

12. By proposition 8.5 we have dim null T^(n−1) ≥ n − 1, so, like in the previous exercise, the matrix [0] appears at least n − 1 times on the diagonal, and it's impossible to fit a 2 × 2 matrix into the only place left. Hence T has no eigenpair.

13. We see that A satisfies the polynomial p(z) = (z − a)(z − d) − bc no matter whether F is R or C (a direct calculation). If A has no eigenvalues, then, because p is monic, has degree 2 and annihilates A, p is the characteristic polynomial by definition. Assume then that p(z) = (z − λ1)(z − λ2) with λ1, λ2 real, which implies that p(T) = (T − λ1 I)(T − λ2 I) = 0, so that either T − λ1 I or T − λ2 I is not invertible and A has an eigenvalue. If neither T − λ1 I nor T − λ2 I is injective, then λ1 and λ2 are the eigenvalues and p is the characteristic polynomial by definition. Assume then that T − λ2 I is invertible. Then p(T) = 0 forces T − λ1 I = 0, so we get c = b = 0 and a = d = λ1. Then dim null(T − λ1 I) = 2, so λ1 has multiplicity 2 and p(z) = (z − λ1)² is again the characteristic polynomial by definition.

14. We see that (α, β) is an eigenpair, and from the nilpotency of T² + αT + βI and theorem 9.9 we have

dim null(T² + αT + βI)^(dim V) = dim V = 2 · (multiplicity of (α, β)),

so it follows that dim V is even. For the second part, we know from the Cayley-Hamilton theorem that the minimal polynomial p(x) of T has degree at most dim V. Since T² + αT + βI is nilpotent and x² + αx + β is irreducible (α² < 4β), we have p(x) = (x² + αx + β)^k for some k, and from deg p ≤ dim V we get k ≤ (dim V)/2. Hence (T² + αT + βI)^((dim V)/2) = 0.

15. From S*S = I we see that S is normal, so there's an orthonormal basis of V such that S has a block diagonal matrix with respect to the basis, where each block is a 1 × 1 or 2 × 2 matrix and the 2 × 2 blocks have no eigenvalues (theorem 7.25). From S*S = I we see that each block Ai satisfies Ai*Ai = I, so that the blocks are isometries. Hence each 2 × 2 block is of the form

[ cos θ  −sin θ
  sin θ   cos θ ].

Now S² + αS + βI is not injective if and only if Ai² + αAi + βI is not injective for some block Ai, and since x² + αx + β has no real roots, this block must be a 2 × 2 block whose characteristic polynomial equals x² + αx + β (exercise 13). That polynomial is

(x − cos θ)² + sin²θ = x² − 2x cos θ + cos²θ + sin²θ,

so that β = cos²θ + sin²θ = 1.
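The description of the isometry blocks in exercise 15 can be illustrated numerically (my addition, using NumPy; np.poly returns the characteristic polynomial coefficients of a square matrix):

    import numpy as np

    theta = 0.7
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(np.allclose(A.T @ A, np.eye(2)))                  # True: the block is an isometry
    print(np.allclose(np.poly(A), [1., -2*np.cos(theta), 1.]))  # x^2 - 2x cos(theta) + 1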

10 Trace and Determinant

1. Choose a basis (v1, ..., vn) of V and let A = (aij) be the matrix of T with respect to it. Then Tv1 = a11 v1 + a21 v2 + ... + an1 vn. Now (v1, 2v2, ..., 2vn) is also a basis, and by our assumption T has the same matrix with respect to it, so that Tv1 = a11 v1 + 2(a21 v2 + ... + an1 vn). We thus get a21 v2 + ... + an1 vn = 2(a21 v2 + ... + an1 vn), which implies a21 v2 + ... + an1 vn = 0. By linear independence a21 = ... = an1 = 0, so that Tv1 = a11 v1. The same argument shows that every basis vector, and hence (since any non-zero vector can be extended to a basis) every non-zero vector, is an eigenvector of T. The claim now follows, since an operator for which every non-zero vector is an eigenvector is a scalar multiple of the identity.

2. Follows directly from the definition of M(T, (v1, ..., vn)).

3. Let (e1, ..., en) be the standard basis of C^n. We can find an operator T ∈ L(C^n) such that M(T, (e1, ..., en)) = B. Now T has an upper-triangular matrix with respect to some basis (v1, ..., vn), and setting A = M((v1, ..., vn), (e1, ..., en)), clearly an invertible square matrix, we get that A^(−1)BA = M(T, (v1, ..., vn)) is upper-triangular.

4. Both matrices represent operators in L(F^n): the map T → M(T, (v1, ..., vn)) is bijective and satisfies M(ST, (v1, ..., vn)) = M(S, (v1, ..., vn)) M(T, (v1, ..., vn)). Hence ST = I if and only if M(S, (v1, ..., vn)) M(T, (v1, ..., vn)) = I, and the claim follows from the corresponding result for operators.

5. Let (v1, ..., vn) be a basis of eigenvectors of T and λ1, ..., λn the corresponding eigenvalues. Then T has the matrix

[ λ1       ]
[    .  .  ]
[        λn ]

with respect to this basis, so clearly trace(T²) = λ1² + ... + λn² ≥ 0 by theorem 10.11.

6. Let T be the operator corresponding to the matrix

[ 0 −1
  1  0 ].

Then T² corresponds to

[ −1  0
   0 −1 ],

so by theorem 10.11 we have trace(T²) = −2 < 0. The claim now follows trivially.

7. If (u1, ..., un) and (v1, ..., vn) are two bases of V, then M(T, (u1, ..., un)) = A^(−1) M(T, (v1, ..., vn)) A, where A = M((u1, ..., un), (v1, ..., vn)). Since trace(A^(−1)BA) = trace(B) for square matrices, the claim clearly follows.
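A numerical illustration of the basis-independence argument in exercise 7 (my addition, using NumPy): similar matrices have the same trace:

    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.standard_normal((4, 4))
    A = rng.standard_normal((4, 4))           # generically invertible
    sim = np.linalg.inv(A) @ B @ A
    print(np.isclose(np.trace(sim), np.trace(B)))  # True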

8. The example in exercise 6 shows that this is false: there trace(T · T) = −2 while trace(T) · trace(T) = 0. For another example take S = T = I on a space of dimension n ≥ 2; then trace(ST) = n ≠ n² = trace(S) trace(T).

9. The trace of T is the sum of the diagonal entries of any matrix of T, and also the sum of the eigenvalues. Hence λ − 48 + 24 = 51 − 40 + 1, so that λ = 36.

10. Choose a basis (v1, ..., vn) of V and let A = M(T, (v1, ..., vn)) with diagonal elements a1, ..., an. Then the matrix of cT is cA, so that trace(cT) = ca1 + ... + can = c(a1 + ... + an) = c · trace(T).

11. Let (v1, ..., vn) be some basis of V and let A = M(T, (v1, ..., vn)). It's sufficient to prove that A = 0. Let ai,j be the element in row i, column j of A, let B be the matrix with 1 in row j, column i and 0 elsewhere, and let S be the operator corresponding to B. Then the only non-zero diagonal element of BA is the (j, j) element, which equals ai,j, so that ai,j = trace(BA) = trace(ST) = 0. It follows that A = 0 and hence T = 0.

12. Choose an orthonormal basis (e1, ..., en) of V and let A = M(T, (e1, ..., en)) with diagonal elements a1, ..., an. By proposition 6.47, T* has the matrix A*, the conjugate transpose of A, so that the diagonal elements of A* are conj(a1), ..., conj(an). By theorem 10.11 we have trace(T*) = conj(a1) + ... + conj(an) = conj(a1 + ... + an) = conj(trace(T)).

13. A positive operator is self-adjoint, so by the spectral theorem we can find a basis (v1, ..., vn) of eigenvectors of T. Then A = M(T, (v1, ..., vn)) is a diagonal matrix, and by the positivity of T all the diagonal elements a1, ..., an are non-negative. Hence trace(T) = a1 + ... + an = 0 implies a1 = ... = an = 0, so that A = 0 and hence T = 0.

14. Extend v/||v|| to an orthonormal basis (v/||v||, e1, ..., e_{n−1}) of V. By theorem 10.11,

trace(T) = ⟨T(v/||v||), v/||v||⟩ + ⟨Te1, e1⟩ + ... + ⟨Te_{n−1}, e_{n−1}⟩.

Here Tei = ⟨ei, v⟩w = 0, since ei is orthogonal to v, and ⟨T(v/||v||), v/||v||⟩ = ⟨||v||w, v/||v||⟩ = ⟨w, v⟩. Hence trace(T) = ⟨w, v⟩.

15. We have V = null P ⊕ range P. Now let v ∈ range P. Then v = Pu for some u ∈ V, and Pv = P²u = Pu = v, so that P is the identity on range P. Choose a basis (v1, ..., vm) for range P and extend it with a basis (u1, ..., u_{n−m}) of null P to get a basis for V. Then clearly the matrix of P in this basis is a diagonal matrix with 1s on the part of the diagonal corresponding to (v1, ..., vm) and 0s on the rest of the diagonal. Hence trace P = dim range P ≥ 0.
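Exercise 15 is easy to check numerically. In this sketch (my addition, using NumPy), P is the orthogonal projection onto the column space of a random 5 × 3 matrix, so its trace should be 3:

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((5, 3))
    P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection onto a 3-dimensional subspace
    print(np.allclose(P @ P, P))           # True: P is idempotent
    print(round(np.trace(P)))              # 3 = dim range P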

16. We have 0 ≤ ⟨Tv, Tv⟩ − ⟨T*v, T*v⟩ = ⟨(T*T − TT*)v, v⟩ for every v ∈ V. Hence T*T − TT* is a positive operator (it's clearly self-adjoint), but trace(T*T − TT*) = trace(T*T) − trace(TT*) = 0. Hence T*T − TT* = 0 by the spectral theorem, so T is normal.

17. The ith diagonal element ai of M(T*T, (e1, ..., en)) satisfies ai = ⟨T*Tei, ei⟩ = ⟨Tei, Tei⟩ = ||Tei||² by the orthonormality of (e1, ..., en), so that trace(T*T) = ||Te1||² + ... + ||Ten||². The second assertion follows immediately.

18. Positivity and definiteness follow from exercise 17, additivity in the first slot from the additivity of the trace, and homogeneity in the first slot from exercise 10. For conjugate symmetry, exercise 12 gives

⟨T, S⟩ = trace(TS*) = trace((ST*)*) = conj(trace(ST*)) = conj(⟨S, T⟩).

It follows that the formula defines an inner product.

19. We can use the result of the next exercise, which means that we only need to prove this for the complex case. Let T ∈ L(C^n) be the operator corresponding to the matrix A with respect to the standard (orthonormal) basis. Choose an orthonormal basis (v1, ..., vn) such that T has an upper-triangular matrix B = (bij), with the eigenvalues λ1, ..., λn of T on the diagonal, so that |bii|² = |λi|². The ith diagonal element of B*B is Σ_{j=1}^n conj(bji) bji = Σ_{j=1}^n |bji|². Hence

Σ_{i=1}^n |λi|² ≤ Σ_{i=1}^n Σ_{j=1}^n |bji|² = trace(B*B) = trace(A*A) = Σ_{k=1}^n Σ_{j=1}^n |ajk|²,

where trace(B*B) = trace(A*A) because both equal trace(T*T).

20. This follows trivially from the formulas for the trace and the determinant of a matrix, because the formulas only depend on the elements of the matrix.

21. Let dim V = n and write A = M(T). Then we have

det(cT) = det(cA) = Σ_σ (c a_{σ(1),1}) ··· (c a_{σ(n),n}) = c^n det A = c^n det T.

22. Let S = I and T = −I on an even-dimensional space. Then we have 0 = det(S + T) ≠ det S + det T = 1 + 1 = 2.

23. Each block Aj on the diagonal corresponds to some basis vectors (ei, ..., ei+k), and we have T(span(ei, ..., ei+k)) ⊂ span(ei, ..., ei+k). These basis vectors can be replaced with another set of vectors spanning the same subspace such that Aj in the new basis is upper-triangular; after doing this for all the blocks we can assume that A is upper-triangular, and the claim follows immediately.
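A random-matrix check of the inequality in exercise 19 (my addition, using NumPy):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 5)) + 1j*rng.standard_normal((5, 5))
    eig = np.linalg.eigvals(A)
    print(np.sum(np.abs(eig)**2) <= np.sum(np.abs(A)**2) + 1e-9)  # True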

24. Let dim V = n and let A = M(T) with respect to an orthonormal basis, so that B = M(T*) equals the conjugate transpose of A, i.e. bij = conj(aji). Then we have

det T* = det B = Σ_σ b_{σ(1),1} ··· b_{σ(n),n} = Σ_σ conj(a_{1,σ(1)} ··· a_{n,σ(n)}) = conj(Σ_σ a_{σ(1),1} ··· a_{σ(n),n}) = conj(det A) = conj(det T).

The equality of the two sums follows easily, because every term in the one sum is represented by a term in the other and vice versa (reindex by σ^(−1)). The second claim follows immediately from theorem 10.31.

25. Let Ω = {(x, y, z) ∈ R³ | x² + y² + z² < 1} and let T be the operator T(x, y, z) = (ax, by, cz). It's easy to see that T(Ω) is the ellipsoid {x²/a² + y²/b² + z²/c² < 1}. Now Ω is a ball of radius 1, with volume (4/3)π, hence

|det T| · volume(Ω) = abc · (4/3)π = (4/3)π abc,

which is the volume of the ellipsoid.
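A rough Monte Carlo check of exercise 25 (my addition, with arbitrarily chosen semi-axes; assumes NumPy): sampling the bounding box of the ellipsoid and counting hits estimates the volume (4/3)πabc:

    import numpy as np

    a, b, c = 1.5, 2.0, 0.5
    rng = np.random.default_rng(5)
    pts = rng.uniform(-1, 1, size=(1_000_000, 3)) * [a, b, c]  # sample the bounding box
    inside = (pts[:, 0]/a)**2 + (pts[:, 1]/b)**2 + (pts[:, 2]/c)**2 < 1
    est = inside.mean() * 8*a*b*c        # box volume times hit fraction
    print(est, 4/3*np.pi*a*b*c)          # both approximately 6.283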