Book 4
Linear algebra
T. S. BLYTH o E. F. ROBERTSON
University of St Andrews
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521272896
© Cambridge University Press 1985
A catalogue record for this publication is available from the British Library
Preface
Background reference material
1: Direct sums and Jordan forms
2: Duality and normal transformations
Solutions to Chapter 1
Solutions to Chapter 2
Test paper 1
Test paper 2
Test paper 3
Test paper 4
Preface
TSB, EFR
St Andrews
Background reference material
[12] I. D. Macdonald, The Theory of Groups, Oxford University Press, 1968.
[13] S. MacLane and G. Birkhoff, Algebra, Macmillan, 1968.
[14] N. H. McCoy, Introduction to Modern Algebra, Allyn and Bacon, 1975.
[15] J. J. Rotman, The Theory of Groups: An Introduction, Allyn and Bacon, 1973.
[16] I. Stewart, Galois Theory, Chapman and Hall, 1975.
[17] I. Stewart and D. Tall, The Foundations of Mathematics, Oxford University Press, 1977.
1: Direct sums and Jordan forms
In this chapter we take as a central theme the notion of the direct sum
A ⊕ B of subspaces A, B of a vector space V. Recall that V = A ⊕ B
if and only if every x ∈ V can be expressed uniquely in the form a + b
where a ∈ A and b ∈ B; equivalently, if V = A + B and A ∩ B = {0}. For
every subspace A of V there is a subspace B of V such that V = A ⊕ B.
In the case where V is of finite dimension, this is easily seen: take a basis
{v₁, ..., vₖ} of A, extend it to a basis {v₁, ..., vₙ} of V, then note that
{vₖ₊₁, ..., vₙ} spans a subspace B such that V = A ⊕ B.
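The construction just described can be carried out mechanically. The following sketch (an editorial addition, not part of the original text; the vectors chosen are illustrative) extends a basis of a subspace A of ℝ⁴ to a basis of the whole space by greedily adjoining standard basis vectors, and checks that the adjoined vectors span a complement B.

```python
import numpy as np

# Basis of an illustrative subspace A of R^4 (an assumption of this sketch).
A_basis = [np.array([1., 0., 1., 0.]), np.array([0., 1., 0., 1.])]

basis = list(A_basis)
B_basis = []
for e in np.eye(4):
    # adjoin a standard basis vector only if it keeps the set independent
    candidate = basis + [e]
    if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
        basis.append(e)
        B_basis.append(e)   # the adjoined vectors span a complement B

# dim A + dim B = dim V and the combined set has full rank, so V = A + B;
# the rank count also forces A and B to intersect trivially.
assert len(A_basis) + len(B_basis) == 4
assert np.linalg.matrix_rank(np.array(basis)) == 4
```

The rank test plays the role of the "extend to a basis" step in the text: a vector is kept exactly when it lies outside the span of the vectors already chosen.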
If f : V → V is a linear transformation then a subspace W of V is
said to be f-invariant if f maps W into itself. If W is f-invariant then
there is an ordered basis of V with respect to which the matrix of f is
of the form

    ( M  N )
    ( 0  P )

More generally, V = W₁ ⊕ ⋯ ⊕ Wₖ if and only if there are projections p₁, ..., pₖ on V with

    p₁ + ⋯ + pₖ = id_V,   pᵢ ∘ pᵢ = pᵢ,   pᵢ ∘ pⱼ = 0 for i ≠ j.

In this case Wᵢ = Im pᵢ for each i, and relative to given ordered bases of
the Wᵢ (each Wᵢ being f-invariant) the matrix of f is the block diagonal matrix

    ( M₁          )
    (    M₂       )
    (       ⋱     )
    (          Mₖ )
Of particular importance is the situation where each Mᵢ is of the form

         ( λ 1 0 ... 0 0 )
         ( 0 λ 1 ... 0 0 )
    Mᵢ = ( 0 0 λ ... 0 0 )
         (      ...      )
         ( 0 0 0 ... λ 1 )
         ( 0 0 0 ... 0 λ )

in which case the block diagonal matrix is called a Jordan matrix.
The Cayley-Hamilton theorem says that a linear transformation f is
a zero of its characteristic polynomial. The minimum polynomial of f
is the monic polynomial of least degree of which f is a zero. When the
minimum polynomial of f factorises into a product of linear polynomials
then there is a basis of V with respect to which the matrix of f is a
Jordan matrix. This matrix is unique (up to the sequence of the diagonal
blocks), the diagonal entries λ above are the eigenvalues of f, and the
number of blocks Mᵢ associated with a given λ is the geometric multiplicity of
λ. The corresponding basis is called a Jordan basis.
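These notions can be checked by machine. The sketch below (an editorial addition, not from the original text; the matrix is illustrative) computes a characteristic polynomial and a Jordan decomposition with SymPy, and confirms that the number of Jordan blocks for an eigenvalue equals its geometric multiplicity.

```python
from sympy import Matrix, symbols, eye

# Illustrative matrix: one 2x2 Jordan block for eigenvalue 2 and a 1x1 block for 3.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

lam = symbols('lam')
char_poly = A.charpoly(lam).as_expr()   # (lam - 2)^2 (lam - 3) up to expansion

P, J = A.jordan_form()                  # A = P J P^{-1}; columns of P are a Jordan basis
assert P * J * P.inv() == A

# geometric multiplicity of 2 = dim Ker(A - 2I) = number of blocks for 2
geo_mult_2 = len((A - 2 * eye(3)).nullspace())
assert geo_mult_2 == 1
```

Here the single eigenvector for λ = 2 reflects the single 2 × 2 block, exactly as the text describes.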
We mention here that, for space considerations in the solutions, we
shall often write an eigenvector (x₁, x₂, ..., xₙ) as a row [x₁, x₂, ..., xₙ].
1.1 Which of the following statements are true? For those that are false,
give a counter-example.
(i) If {a₁, a₂, a₃} is a basis for ℝ³ and b is a non-zero vector in ℝ³ then
{b + a₁, a₂, a₃} is also a basis for ℝ³.
t₁(a, b, c) = (a + b, b + c, c + a);
Find Ker tᵢ and Im tᵢ for i = 1, 2, 3, 4. Is it true that ℝ³ = Ker tᵢ ⊕ Im tᵢ
for any of i = 1, 2, 3, 4?
Is Im t₂ t₃-invariant? Is Ker t₂ t₃-invariant?
Find t₃ ∘ t₄ and t₄ ∘ t₃. Compute the images and kernels of these
composites.
1.3 Let V be a vector space of dimension 3 over a field F and let t ∈ L(V, V)
be represented by the matrix

    ( 3 -1  1 )
    (-1  5 -1 )
    ( 1 -1  3 )

Do the same results hold if the ground field ℝ is replaced by the field
ℤ₂ or by the field ℤ₃?
1.6 Let V be a finite-dimensional vector space and let t ∈ L(V, V). Establish
the chains

    V ⊇ Im t ⊇ Im t² ⊇ ⋯ ⊇ Im tⁿ ⊇ Im tⁿ⁺¹ ⊇ ⋯ ;
    {0} ⊆ Ker t ⊆ Ker t² ⊆ ⋯ ⊆ Ker tⁿ ⊆ Ker tⁿ⁺¹ ⊆ ⋯ .
Show that there is a positive integer p such that Im tᵖ = Im tᵖ⁺¹ and
deduce that
is a basis of V.
Hence show that a non-zero n × n matrix A over F is such that A² = 0
if and only if A is similar to a matrix of the form

    ( 0ᵣ 0 )
    ( Iᵣ 0 )
    ( 0  0 )
V₁ = {x ∈ V | x₃ = x₂ and x₄ = x₁},
V₂ = {x ∈ V | x₃ = -x₂ and x₄ = -x₁}.
Show that
(1) V₁ and V₂ are subspaces of V;
(2) {b₁ + b₄, b₂ + b₃} is a basis of V₁ and {b₁ - b₄, b₂ - b₃} is a basis
of V₂;
V = Im(id_V + f) ⊕ Im(id_V - f).
    ( Iₚ  0      )
    ( 0  -Iₙ₋ₚ   )
xeKerg = f{x),
    ( Iₙ₋₂ₚ           )
    (       1 1       )
    (       0 1       )
    (           ⋱     )
    (             1 1 )
    (             0 1 )
[Hint. Observe that Im g ⊆ Ker g. Let {g(c₁), ..., g(cₚ)} be a basis of
Im g and extend this to a basis {b₁, ..., bₙ₋₂ₚ, g(c₁), ..., g(cₚ)} of Ker g.
Show that
{b₁, ..., bₙ₋₂ₚ, g(c₁), ..., g(cₚ), c₁, ..., cₚ}
is a basis of V.]
1.10 Let V be a finite-dimensional vector space and let t ∈ L(V, V) be such
that t ≠ id_V and t ≠ 0. Is it possible to have Im t ∩ Ker t ≠ {0}? Is
it possible to have Im t = Ker t? Is it possible to have Im t ⊂ Ker t? Is
it possible to have Ker t ⊂ Im t? Which of these are possible if t is a
projection?
1.11 Is it possible to have projections e, f ∈ L(V, V) with Ker e = Ker f and
Im e ≠ Im f? Is it possible to have Im e = Im f and Ker e ≠ Ker f? Is it
possible to have projections e, f with e ∘ f = 0 but f ∘ e ≠ 0?
1.12 Let V be a vector space over a field of characteristic not equal to 2. Let
e₁, e₂ ∈ L(V, V) be projections. Prove that e₁ + e₂ is a projection if and
only if e₁ ∘ e₂ = e₂ ∘ e₁ = 0.
If e₁ + e₂ is a projection, find Im(e₁ + e₂) and Ker(e₁ + e₂) in terms
of the images and kernels of e₁, e₂.
1.13 Let V be the subspace of ℝ³ given by
V = {(a, a, 0) | a ∈ ℝ}.
1.18 Let V be a vector space over a field F and let t ∈ L(V, V). Let λ₁ and
λ₂ be distinct eigenvalues of t with associated eigenvectors v₁ and v₂. Is
it possible for λ₁ + λ₂ to be an eigenvalue of t? What about λ₁λ₂?
t(a, b) = (a + 2b, b - a).
1.18 Suppose that t ∈ L(V, V) has zero as an eigenvalue. Prove that t is not
invertible. Is it true that t is invertible if and only if all the eigenvalues
of t are non-zero? If t is invertible, how are the eigenvalues of t related
to those of t⁻¹?
1.19 Let V be a vector space of finite dimension over ℚ and let t ∈ L(V, V)
be such that tᵐ = 0 for some m > 0. Prove that all the eigenvalues of t
are zero. Deduce that if t ≠ 0 then t is not diagonalisable.
    (a) ( 3 -1 )    (b) ( 3 -1  1 )
        (-1  3 )        (-1  5 -1 )
                        ( 1 -1  3 )

(c)-(e) [three further 3 × 3 matrices, illegible in this copy]
C_λ = {v ∈ V | t(v) = λv}
Show that in this way V becomes a vector space over ℂ. Use the identity

    ( 0  -Iₙ )
    ( Iₙ  0  )

    ( A₁       )
    (    ⋱     )
    (       Aₖ )

where each Aᵢ is real and of the form
1.33 Let V be a vector space of dimension 3 over ℝ and let t ∈ L(V, V) have
eigenvalues -2, 1, 2. Use the Cayley-Hamilton theorem to express t²ⁿ
as a real quadratic polynomial in t.
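The reduction asked for here can be verified computationally. The sketch below (an editorial addition; the diagonal matrix and the closed-form coefficients are the editor's worked answer, not the book's) checks that for eigenvalues -2, 1, 2 one has t²ⁿ = ((4ⁿ-1)/3)t² - ((4ⁿ-4)/3)id, a quadratic in t as required.

```python
from sympy import diag, eye, Rational

# A diagonal representative with the stated eigenvalues -2, 1, 2.
T = diag(-2, 1, 2)

for n in range(1, 6):
    coeff = Rational(4**n - 1, 3)    # coefficient of t^2
    const = Rational(4**n - 4, 3)    # coefficient of id
    # Cayley-Hamilton reduction: t^{2n} = coeff*t^2 - const*id
    assert T**(2 * n) == coeff * T**2 - const * eye(3)
```

Checking the identity on each eigenvalue (λ²ⁿ = coeff·λ² - const for λ = -2, 1, 2) is exactly what the diagonal test performs.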
1.34 Let V be a vector space of dimension n over a field F and let f ∈ L(V, V)
be such that all the zeros of the characteristic polynomial of f lie in F.
Let λ₁ be an eigenvalue of f and let b₁ be an associated eigenvector.
Let W be such that V = Fb₁ ⊕ W and let (bᵢ′)₂≤ᵢ≤ₙ be an ordered basis
of W. Show that the matrix of f relative to the basis {b₁, b₂′, ..., bₙ′} is
of the form

    ( λ₁  β₂ ... βₙ )
    ( 0       M     )
    ( 1 3 -2 )       ( 3 0 1 )
    ( 0 7 -4 )   (d) ( 0 3 0 )
    ( 0 9 -5 )       ( 0 0 3 )
1.38 Find a Jordan normal form J of the matrix

        ( 2  1  1  1  0 )
        ( 0  2  0  0  0 )
    A = ( 0  0  2  1  0 )
        ( 0  0  0  1  1 )
        ( 0 -1 -1 -1  0 )

Find also a Jordan basis and hence an invertible matrix P such that
P⁻¹AP = J.
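A decomposition of this kind can be checked by machine. The sketch below (an editorial addition, not part of the original text; the entries are taken as printed above) asks SymPy for a Jordan form and a transforming matrix, then verifies the similarity.

```python
from sympy import Matrix

A = Matrix([[2, 1, 1, 1, 0],
            [0, 2, 0, 0, 0],
            [0, 0, 2, 1, 0],
            [0, 0, 0, 1, 1],
            [0, -1, -1, -1, 0]])

# jordan_form returns P and J with A = P J P^{-1};
# the columns of P form a Jordan basis.
P, J = A.jordan_form()
assert P * J * P.inv() == A
```

Any Jordan basis found by hand can be validated the same way: place its vectors as the columns of P and check that P⁻¹AP reproduces J.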
1.39 For each of the following matrices A find a Jordan normal form J, a
Jordan basis, and an invertible matrix P such that P⁻¹AP = J.
        ( 22 -2 -12 )        ( -13  8 1  2 )
    (a) ( 20  0 -12 )    (b) ( -22 13 0  3 )
        ( 30 -3 -16 )        (   8 -5 0 -1 )
                             ( -22 13 5  5 )
1.40 Find a Jordan normal form and a Jordan basis for the matrix

    ( 5 -1 -3 2 -5 )
    ( 0  2  0 0  0 )
    ( 1  0  1 1 -2 )
    ( 0 -1  0 3  1 )
    ( 1 -1 -1 1  1 )
        (  1  0 -1 1 0 )
        ( -4  1 -3 2 1 )
    A = ( -2 -1  0 1 1 )
        ( -3 -1 -3 4 1 )
        ( -8 -2 -7 5 4 )
From only the information given by the minimum polynomial, how many
essentially different Jordan normal forms are possible? How many linearly
independent eigenvectors are there? Does the number of linearly
independent eigenvectors determine the Jordan normal form J? If not,
does the information given by the minimum polynomial together with
the number of linearly independent eigenvectors determine J?
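The point of the question, that the minimum polynomial alone does not pin down J, can be illustrated concretely. The sketch below (an editorial addition with matrices of the editor's choosing, not the book's) exhibits two nilpotent matrices with the same minimum polynomial X² but different Jordan forms, detected by their differing numbers of independent eigenvectors.

```python
import numpy as np

def nilpotent_blocks(sizes):
    """Block-diagonal nilpotent matrix with Jordan blocks of the given sizes."""
    n = sum(sizes)
    A = np.zeros((n, n))
    pos = 0
    for s in sizes:
        for i in range(s - 1):
            A[pos + i, pos + i + 1] = 1.0   # ones on the superdiagonal
        pos += s
    return A

N1 = nilpotent_blocks([2, 2])      # J2 + J2
N2 = nilpotent_blocks([2, 1, 1])   # J2 + J1 + J1

# both are non-zero with square zero, so both have minimum polynomial X^2 ...
assert np.allclose(N1 @ N1, 0) and not np.allclose(N1, 0)
assert np.allclose(N2 @ N2, 0) and not np.allclose(N2, 0)

# ... but their nullities (numbers of independent eigenvectors) differ
nullity = lambda A: A.shape[0] - np.linalg.matrix_rank(A)
assert nullity(N1) == 2 and nullity(N2) == 3
```

So neither invariant alone suffices; as the question suggests, one must combine them (and, in general, the ranks of higher powers).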
1.42 Find a Jordan normal form of the differentiation map D on the vector
space ℝ₄[X] of polynomials of degree less than or equal to 3 with real
coefficients. Find also a Jordan basis for D on ℝ₄[X].
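The answer can be confirmed mechanically. Relative to the basis {1, X, X², X³} the map D sends Xᵏ to kXᵏ⁻¹, so its matrix is the one below; since D is nilpotent of index 4, its Jordan form is a single 4 × 4 nilpotent block. (This sketch is an editorial addition, not part of the original text.)

```python
from sympy import Matrix, zeros

# Matrix of D on {1, X, X^2, X^3}: column k holds the coordinates of D(X^k).
D = Matrix([[0, 1, 0, 0],
            [0, 0, 2, 0],
            [0, 0, 0, 3],
            [0, 0, 0, 0]])

P, J = D.jordan_form()
assert J == Matrix([[0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])

# nilpotency of index 4 forces the single full-size block
assert D**4 == zeros(4, 4) and D**3 != zeros(4, 4)
```

The columns of P returned by `jordan_form` give (the coordinates of) one possible Jordan basis for D.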
1.43 If a 3 × 3 real matrix has eigenvalues 3, 3, 3 what are the possible Jordan
normal forms? Which of these are similar?
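A quick computational illustration (an editorial addition, not from the text): the three candidate forms are distinguished by the rank of J - 3I, which counts the superdiagonal 1's, so no two of them are similar.

```python
import numpy as np

forms = [
    np.diag([3., 3., 3.]),                            # three 1x1 blocks
    np.array([[3., 1, 0], [0, 3, 0], [0, 0, 3]]),     # one 2x2 and one 1x1 block
    np.array([[3., 1, 0], [0, 3, 1], [0, 0, 3]]),     # a single 3x3 block
]

# rank(J - 3I) is a similarity invariant and differs for the three forms
ranks = [np.linalg.matrix_rank(J - 3 * np.eye(3)) for J in forms]
assert ranks == [0, 1, 2]
```

Since similar matrices give A - 3I matrices of equal rank, the three forms are pairwise non-similar.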
1.44 Which of the following are true? If A, B ∈ Mat_{n×n}(ℂ) then AB and BA
have the same Jordan normal form
(i) if A and B are both invertible;
(ii) if one of A, B is invertible;
(iii) if and only if A and B are invertible;
(iv) if and only if one of A, B is invertible.
1.45 Let V be a vector space of dimension n over ℂ. Let t ∈ L(V, V) and let
λ be an eigenvalue of t. Let J be a matrix that represents t relative to
some Jordan basis of V. Show that there are
dim Ker(t - λ id_V)
Jordan blocks of J associated with λ.
Solve the following systems of differential equations:

    dx₁/dt - 5x₁ + 6x₂ + 6x₃ = 0        dx₁/dt = x₁ + 3x₂ - 2x₃
    dx₂/dt + x₁ - 4x₂ - 2x₃ = 0         dx₂/dt = 7x₂ - 4x₃
    dx₃/dt - 3x₁ + 6x₂ + 4x₃ = 0        dx₃/dt = 9x₂ - 5x₃
1.48 Solve the differential equation

    d³x/dt³ - 2 d²x/dt² - 4 dx/dt + 8x = 0.
2: Duality and normal transformations
Relative to an ordered basis (vᵢ)ₙ of V and its dual basis (vᵢᵈ)ₙ, every x ∈ V can be written

    x = v₁ᵈ(x)v₁ + ⋯ + vₙᵈ(x)vₙ,

and for every f ∈ Vᵈ we have

    f = f(v₁)v₁ᵈ + ⋯ + f(vₙ)vₙᵈ.

If (vᵢ)ₙ, (wᵢ)ₙ are ordered bases of V and (vᵢᵈ)ₙ, (wᵢᵈ)ₙ the corresponding
dual bases then the transition matrix from (vᵢᵈ)ₙ to (wᵢᵈ)ₙ is (P⁻¹)ᵗ
where P is the transition matrix from (vᵢ)ₙ to (wᵢ)ₙ. In particular,
consider V = ℝⁿ. Note that if

    B = {a₁, ..., aₙ},   aᵢ = (aᵢ₁, ..., aᵢₙ),

is a basis of ℝⁿ then the transition matrix from B to the canonical basis
(eᵢ)ₙ of ℝⁿ is M = [mᵢⱼ]ₙₓₙ where mᵢⱼ = aⱼᵢ. The transition matrix
from Bᵈ to (eᵢᵈ)ₙ is given by (M⁻¹)ᵗ. We can therefore usefully denote
the dual basis by

    Bᵈ = {[a₁₁, ..., a₁ₙ], ..., [aₙ₁, ..., aₙₙ]}

where [aᵢ₁, ..., aᵢₙ] denotes the ith row of M⁻¹, so that

    aᵢᵈ(x₁, ..., xₙ) = aᵢ₁x₁ + ⋯ + aᵢₙxₙ.
The bidual of an element x is x̂ : Vᵈ → F where x̂(yᵈ) = yᵈ(x). It
is common practice to write yᵈ(x) as ⟨x, yᵈ⟩ and say that yᵈ annihilates
x if ⟨x, yᵈ⟩ = 0. For every subspace W of V the set

    W⊥ = {f ∈ Vᵈ | (∀x ∈ W) ⟨x, f⟩ = 0}

is a subspace of Vᵈ and

    dim W + dim W⊥ = dim V.
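The dimension formula for annihilators can be seen numerically. In the sketch below (an editorial addition, not part of the original text; the subspace is illustrative) W is spanned by the rows of a matrix, functionals are written as rows, and the annihilator W⊥ is realised as the null space of that matrix, read off from the SVD.

```python
import numpy as np

# W is a subspace of R^4 spanned by the rows of this matrix.
W = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.]])

r = np.linalg.matrix_rank(W)            # dim W

# f annihilates W  <=>  W f^T = 0, i.e. f lies in the null space of W;
# the trailing rows of V^t from the SVD span that null space.
_, _, Vt = np.linalg.svd(W)
ann_basis = Vt[r:]                      # rows spanning W-perp

assert np.allclose(W @ ann_basis.T, 0)          # each row annihilates W
assert r + ann_basis.shape[0] == W.shape[1]     # dim W + dim W-perp = dim V
```

Identifying (ℝⁿ)ᵈ with row vectors, as the chapter introduction does, makes the annihilator literally a null space computation.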
2.1 Determine which of the following mappings are linear functionals on the
vector space ℝ₃[X] of all real polynomials of degree less than or equal
to 2:
2.2 Let C[0,1] be the vector space of continuous functions f : [0,1] → ℝ. If
f₀ is a fixed element of C[0,1], prove that φ : C[0,1] → ℝ given by

    φ(f) = ∫₀¹ f₀(t)f(t) dt

is a linear functional.
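The linearity check can be carried out symbolically. The sketch below (an editorial addition; the choice f₀(t) = t² and the two test functions are the editor's, any fixed f₀ would do) verifies φ(af + bg) = aφ(f) + bφ(g) with SymPy.

```python
from sympy import symbols, integrate, simplify

t, a, b = symbols('t a b')
f0 = t**2                 # an illustrative fixed element of C[0,1]
f = 1 + t                 # two illustrative "vectors"
g = t**3

# phi(h) = integral_0^1 f0(t) h(t) dt
phi = lambda h: integrate(f0 * h, (t, 0, 1))

# linearity: phi(a f + b g) = a phi(f) + b phi(g)
assert simplify(phi(a*f + b*g) - (a*phi(f) + b*phi(g))) == 0
```

The identity holds because integration itself is linear in the integrand, which is the substance of the requested proof.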
2.3 Determine the basis of (ℝ³)ᵈ that is dual to the basis
{(1, 0, -1), (-1, 1, 0), (0, 1, 1)}
of ℝ³.
2.4 Let A = {x₁, x₂} be a basis of a vector space V of dimension 2 and let
Aᵈ = {φ₁, φ₂} be the corresponding dual basis of Vᵈ. Find, in terms of
φ₁, φ₂, the basis of Vᵈ that is dual to the basis A′ = {x₁ + 2x₂, 3x₁ + 4x₂}
of V.
2.5 Which of the following bases of (ℝ²)ᵈ is dual to the basis {(-1, 2), (0, 1)}
of ℝ²?
(a) {[-1, 2], [0, 1]};  (b) {[-1, 0], [2, 1]};
(c) {[-1, 0], [-2, 1]};  (d) {[1, 0], [2, -1]}.
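Questions of this kind reduce to a matrix inversion, by the (M⁻¹)ᵗ rule from the chapter introduction. The sketch below (an editorial addition, not part of the original text) computes the dual basis directly: if the rows of V hold the basis vectors, the dual-basis rows D must satisfy D Vᵗ = I, so D = (Vᵗ)⁻¹.

```python
import numpy as np

V = np.array([[-1., 2.],
              [0., 1.]])        # the basis {(-1,2), (0,1)} as rows

D = np.linalg.inv(V.T)          # dual basis, one functional per row
assert np.allclose(D @ V.T, np.eye(2))            # f_i(v_j) = delta_ij
assert np.allclose(D, [[-1., 0.], [2., 1.]])      # i.e. {[-1,0], [2,1]}
```

The computed rows are [-1, 0] and [2, 1], which picks out option (b).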
2.6 (i) Find a basis that is dual to the basis
of IR4.
(ii) Find a basis of IR4 whose dual basis is
Show that Bᵈ = {f₁, f₂, f₃} is a basis for the dual space (ℝ₃[X])ᵈ and
determine a basis B = {p₁(X), p₂(X), p₃(X)} of ℝ₃[X] of which Bᵈ is
the dual.
2.9 Let α = (1, 2) and β = (5, 6) be elements of ℝ² and let φ = [3, 4] be an
element of (ℝ²)ᵈ. Determine
3x + y = 2
x + 2y = 1
-x + 3y = 1.
t(a, b, c) = (2a + b, a + b + c, -c).
Show that f is an isomorphism that does not satisfy (∗). [Hint. Take
x = x₁, y = t.] If, on the other hand, t ∉ Ker Φ(t) for all t ≠ 0 let
then f is an isomorphism that does not satisfy (∗). Conclude from these
observations that we must have n = 2.
Suppose now that F has more than two elements and let λ ∈ F be such
that λ ≠ 0, 1. If there exists t ≠ 0 such that t ∈ Ker Φ(t) observe that {t}
is a basis of Ker Φ(t) and extend this to a basis {t, z} of V. If f : V → V
is the (unique) linear transformation such that f(t) = t, f(z) = λz,
show that f is an isomorphism that does not satisfy (∗). [Hint. Take
x = z, y = t.] If, on the other hand, t ∉ Ker Φ(t) for all t ≠ 0 let {z} be
a basis for Ker Φ(t) so that {t, z} is a basis for V. If f : V → V is the
(unique) linear transformation such that f(z) = λz, f(t) = t, show that
f is an isomorphism that does not satisfy (∗). [Hint. Take x = y = z.]
Conclude from these observations that F must have two elements.
Now examine the vector space F2 where F = {0,1}.
[Hint. (F²)ᵈ is the set of linear mappings f : F × F → F. Since
F × F has four elements there are 2⁴ = 16 laws of composition on F. Only
2² = 4 of these are linear.]
U = {x ∈ V | (n|x) = 0}.
Given v ∈ V, define

    v′ = v - 2(n|v)n.

Show that v - v′ is orthogonal to U and that ½(v + v′) ∈ U, so that v′
is the reflection of v in the hyperplane U. Show also that the mapping
t : V → V defined by

    t(v) = v′

is linear and orthogonal. What can you say about its eigenvalues and
eigenvectors?
If s : ℝ³ → ℝ³ and t : ℝ⁴ → ℝ⁴ are respectively reflections in the
plane 3x - y + z = 0 and in the hyperplane 2x - y + 2z - t = 0, show
that the matrices of s and t are respectively

         ( -7 6 -6 )             (  1  2 -4  2 )
    1/11 (  6 9  2 )   and   1/5 (  2  4  2 -1 )
         ( -6 2  9 )             ( -4  2  1  2 )
                                 (  2 -1  2  4 )
-x² + 6xy - y² = 1.
fM(A) =
2.20 Let V be a finite-dimensional inner product space. Show that for every
f ∈ Vᵈ there is a unique β ∈ V such that

    (∀x ∈ V)  f(x) = (x|β).
[Hint. Let {α₁, ..., αₙ} be an orthonormal basis of V and consider
β = f(α₁)α₁ + ⋯ + f(αₙ)αₙ.]
Show as follows that this result does not necessarily hold for inner
product spaces of infinite dimension. Let V be the vector space of
polynomials over ℂ. Show that the mapping

    0 = ∫₀¹ r(t)p(t)q(t) dt.
2.21 Let C[0,1] be the inner product space of real continuous functions on
[0,1] with the integral inner product. Let K : C[0,1] -> C[0,1] be the
integral operator defined by

    K(f)(x) = ∫₀¹ xy f(y) dy.
Prove that K is self-adjoint.
For every positive integer n let fₙ be given by

    fₙ(x) = xⁿ - 2/(n + 2).
Show that fn is an eigenfunction of K with associated eigenvalue 0. Use
the Gram-Schmidt orthonormalisation process to find two orthogonal
eigenfunctions of K with associated eigenvalue 0.
Prove that K has only one non-zero eigenvalue. Find this eigenvalue
and an associated eigenfunction.
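The eigenfunction claims can be verified symbolically. The sketch below (an editorial addition; it assumes the reconstruction fₙ(x) = xⁿ - 2/(n+2) given above) checks that each fₙ lies in the kernel of K and that x is an eigenfunction with eigenvalue 1/3.

```python
from sympy import symbols, integrate, Rational, simplify

x, y = symbols('x y')

def K(f):
    # K(f)(x) = integral_0^1 x*y*f(y) dy
    return integrate(x * y * f.subs(x, y), (y, 0, 1))

# each f_n is an eigenfunction with eigenvalue 0
for n in range(1, 5):
    f_n = x**n - Rational(2, n + 2)
    assert simplify(K(f_n)) == 0

# the only non-zero eigenvalue is 1/3, with eigenfunction x
assert simplify(K(x) - Rational(1, 3) * x) == 0
```

Since K(f) is always a scalar multiple of x, every eigenfunction with non-zero eigenvalue must itself be a multiple of x, which is why 1/3 is the only non-zero eigenvalue.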
2.22 Let t be a skew-adjoint transformation on a unitary space V. Prove that
id ± t is a bijection and that the transformation (id - t)(id + t)⁻¹ is unitary.
det(I - T - iS) ≠ 0.

    U = (I + T + iS)(I - T - iS)⁻¹

is unitary.
2.24 Let A be a real symmetric matrix and let S be a real skew-symmetric
matrix of the same order. Suppose that A and S commute and that
det(A - S) ≠ 0. Prove that

    (A + S)(A - S)⁻¹

is orthogonal.
2.25 A complex matrix A is such that ĀA = -A. Show that the eigenvalues
of A are either 0 or -1.
2.26 Let A and B be orthogonal n × n matrices with det A = -det B. Prove
that A + B is singular.
2.27 Let A be an orthogonal n × n matrix. Prove that
(1) if det A = 1 and n is odd, or if det A = -1 and n is even, then 1 is
an eigenvalue of A;
(2) if det A = -1 then -1 is an eigenvalue of A.
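A numerical illustration of both parts (an editorial addition; the rotation and reflection below are the editor's examples, not the book's):

```python
import numpy as np

theta = 0.7
R3 = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])          # orthogonal, det = 1, n = 3 odd
refl = np.diag([1., 1., -1.])          # orthogonal, det = -1

assert np.isclose(np.linalg.det(R3), 1)
assert np.isclose(np.linalg.det(refl), -1)

# part (1): 1 is an eigenvalue of R3; part (2): -1 is an eigenvalue of refl
assert np.any(np.isclose(np.linalg.eigvals(R3), 1))
assert np.any(np.isclose(np.linalg.eigvals(refl), -1))
```

Geometrically the asserted eigenvalue 1 of the rotation is its fixed axis, and the eigenvalue -1 of the reflection is its reflected direction.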
2.28 If A is a skew-symmetric matrix and g(X) is a polynomial such that
g(A) = 0, prove that g{—A) = 0. Deduce that the minimum polynomial
of A contains only terms of even degree.
Deduce that if A is skew-symmetric and f(X), g(X) are polynomials
whose terms are respectively odd and even then f(A), g(A) are respectively
skew-symmetric and symmetric.
2.29 For every complex nx n matrix A let
N(A) = ∑ᵢ,ⱼ |aᵢⱼ|².

Q(x₁, ..., xₙ) =
2.41 Show that the rank and signature of the quadratic form

    ∑ᵣ,ₛ (λrs + r + s) zᵣzₛ

are independent of λ.
2.42 Let A be the matrix associated with the quadratic form Q(z₁, ..., zₙ)
and let λ be an eigenvalue of A. Show that there exist a₁, ..., aₙ not all
zero such that

    Q(a₁, ..., aₙ) = λ(a₁² + ⋯ + aₙ²).
2.43 If the real square matrix A is such that det A ≠ 0, show that the quadratic
form zᵗAᵗAz is positive definite.
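The key identity behind this question is zᵗAᵗAz = |Az|², which is positive whenever z ≠ 0 and A is invertible. The sketch below (an editorial addition with an illustrative matrix) checks this numerically via the eigenvalues of AᵗA.

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 1., 3.],
              [1., 0., 1.]])
assert abs(np.linalg.det(A)) > 1e-12       # A is invertible

# A^t A is symmetric; positive definiteness <=> all eigenvalues positive
eigs = np.linalg.eigvalsh(A.T @ A)
assert np.all(eigs > 0)

# direct check on one vector: z^t A^t A z = |Az|^2 > 0
z = np.array([1., -2., 0.5])
assert z @ (A.T @ A) @ z > 0
```

If A were singular, any non-zero z in its kernel would give zᵗAᵗAz = 0, so invertibility is exactly what is needed.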
2.44 Let f : ℝⁿ × ℝⁿ → ℝ be a symmetric bilinear form and let Q_f be the
associated quadratic form. Suppose that Q_f is positive definite and let
g : ℝⁿ × ℝⁿ → ℝ be a symmetric bilinear form with associated quadratic
form Q_g. Prove that there is a basis of ℝⁿ with respect to which Q_f
and Q_g are each represented by sums of squares.
For every x ∈ ℝⁿ let fₓ ∈ (ℝⁿ)ᵈ be given by fₓ(y) = f(x, y). Call
f degenerate if there exists x ≠ 0 in ℝⁿ with fₓ = 0. Determine the scalars
λ ∈ ℝ such that g - λf is degenerate. Show that such scalars are the
Solutions to Chapter 1
1.1
(i) False. For example, take b = -a₁.
(ii) True.
(iii) False. {(1,1,1)} is a basis, so the dimension is 1.
(iv) False. For example, take A = {0} or A = {v, 2v}.
(v) True.
(vi) False. For example, take A = ℝⁿ.
(vii) True.
(viii) False. {(x, λx) | x ∈ ℝ} is a subspace of ℝ² for every λ ∈ ℝ.
(ix) True.
(x) True.
(xi) False. An isomorphism is always represented by a non-singular
matrix.
(xii) False. Consider, for example, ℝ² and ℂ². The statement is true,
however, if the vector spaces have the same ground field.
(xiii) False.
(xiv) False.
(xv) True.
(xvi) True.
(xvii) False. Take, for example, f, g : ℝ² → ℝ² given by f(x, y) = (0, 0)
and g(x, y) = (x, y). Relative to the standard basis of ℝ² we see
and so Ker t₁ = {0}. It follows from the dimension theorem that Im t₁ =
ℝ³.
As for t₂, we have
and so Ker tᵢ + Im tᵢ = ℝ³. Now for i = 1, 2, 3, 4 we have from the above
that Ker tᵢ ∩ Im tᵢ = {0}. Thus we see that ℝ³ = Ker tᵢ ⊕ Im tᵢ holds in
all cases.
Im t₂ is t₃-invariant. For, if v ∈ Im t₂ then v = (a, b, 0) and so
For the last part, we have that (t₃ ∘ t₄)(a, b, c) = (-b, a, b) and that
(t₄ ∘ t₃)(a, b, c) = (-b, a, a). Consequently,
a ∈ ℝ};
    ( 3 -1  1 )      ( 1 -1  3 )      ( 1 -1  3 )
    (-1  5 -1 ) ---> (-1  5 -1 ) ---> ( 0  4  2 )
    ( 1 -1  3 )      ( 3 -1  1 )      ( 0  2 -8 )

         ( 1 -1  3 )      ( 1 -1  3 )
    ---> ( 0  2 -8 ) ---> ( 0  2 -8 )
         ( 0  4  2 )      ( 0  0 18 )
Note that we have been careful not to divide by any number that is
divisible by either 2 or 3 (since these will be zero in ℤ₂ and ℤ₃ respectively).
(i) When F = ℝ the rank of the row echelon matrix is 3, in which case
dim Im t = 3 and hence dim Ker t = 0.
(ii) When F = ℤ₂ we have that 2, 18, -8 are zero so that the rank is 1,
in which case dim Im t = 1 and dim Ker t = 2.
(iii) When F = ℤ₃ we have that 18 is zero so that the rank is 2, in
which case dim Im t = 2 and dim Ker t = 1.
V = Ker t ⊕ Im t holds in cases (i) and (ii), but not in case (iii); for in
case (iii) we have that (1, 1, 1) belongs to both Ker t and Im t.
1.4 If s ∘ t = id_V then s is surjective, hence bijective (since V is of finite
dimension). Then t = s⁻¹ and so t ∘ s = id_V.
Suppose that W is t-invariant, so that t(W) ⊆ W. Since t is an
isomorphism we must have dim t(W) = dim W and so t(W) = W. Hence
W = s[t(W)] = s(W) and W is s-invariant.
The result is false for infinite-dimensional spaces. For example, consider
the real vector space ℝ[X] of polynomials over ℝ. Let s be the
differentiation map and t the integration map. We have s ∘ t = id but
t ∘ s ≠ id.
is a basis for V.
Using the fact that f ∘ f = 0 and each xᵢ ∈ Ker f it is readily seen
that the matrix of f relative to this basis is of the form

    ( 0ᵣ 0 )
    ( Iᵣ 0 )
    ( 0  0 )
1.8 (1) Sums and scalar multiples of elements of V₁, V₂ are clearly elements
of V₁, V₂ respectively.
(2) If x ∈ V₁ then x = x₁(b₁ + b₄) + x₂(b₂ + b₃) shows that V₁ is
generated by {b₁ + b₄, b₂ + b₃}. Also, if x₁(b₁ + b₄) + x₂(b₂ + b₃) = 0
then, since {b₁, b₂, b₃, b₄} is a basis of V, we have x₁ = x₂ = 0. Thus
{b₁ + b₄, b₂ + b₃} is a basis of V₁. Similarly, {b₁ - b₄, b₂ - b₃} is a basis
of V₂.
(3) It is clear from the definitions of V₁ and V₂ that we have V₁ ∩
V₂ = {0}. Consequently, the sum V₁ + V₂ is direct. Since V₁, V₂ are of
dimension 2 and V is of dimension 4, it follows that V = V₁ ⊕ V₂.
(4) To find the matrix of id_V relative to the bases B = {b₁, b₂, b₃, b₄}
and C = {b₁ + b₄, b₂ + b₃, b₂ - b₃, b₁ - b₄} we observe that

        ( ½ 0  0  ½ )
    A = ( 0 ½  ½  0 )
        ( 0 ½ -½  0 )
        ( ½ 0  0 -½ )

and

               ( 1 0  0  1 )
    A⁻¹ = 2A = ( 0 1  1  0 )
               ( 0 1 -1  0 )
               ( 1 0  0 -1 )
b d"
f h
9 e
c a
x = (id_V + f)(½x) + (id_V - f)(½x)

    ( Iₚ  0     )
    ( 0  -Iₙ₋ₚ  )
and A is then similar to this matrix. Conversely, if A is similar to a
matrix of the form Aₚ then there is an invertible matrix Q such that
Q⁻¹AQ = Aₚ. Then
1
0
1 1
0 1
i.e. the function that agrees with the given function on [aᵢ, aᵢ₊₁[ and is
zero elsewhere,
Since the functions eₖ clearly form an independent set, they therefore
form a basis of E.
It is likewise clear that F is a vector space and that G is a subspace
of F. Consider now an element of F, as depicted in the diagram
Given f, consider the function fᵢ that agrees with f on [aᵢ, aᵢ₊₁[ and is
zero elsewhere. Let the gradient in the interval [aᵢ, aᵢ₊₁[ be bᵢ, so that
dᵢ = bᵢ(aᵢ₊₁ - aᵢ). Then it can be seen that
    ( 1 + i√2  0       )
    ( 0        1 - i√2 )
t⁻¹ with the same associated eigenvector. (Remark. Note that we have
assumed that V is finite-dimensional (where?); in fact the result is false
for infinite-dimensional spaces.)
1.19 Suppose that λ is a non-zero eigenvalue of t. Then t(v) = λv for some
non-zero v ∈ V and
where λ₁, ..., λₙ are the eigenvalues of t. But we have just seen that
all the eigenvalues are zero. Thus, for some invertible matrix P we have
P⁻¹AP = 0 which gives A = 0 and hence the contradiction t = 0.
1.20 The matrix of t with respect to the canonical basis {(1, 0), (0, 1)} is
The characteristic equation is (λ - √3)(λ + √3) = 0 and, since t - √3 id ≠ 0
and t + √3 id ≠ 0, the minimum polynomial is (X - √3)(X + √3).
1.21 That t is linear follows from
1 1 1 1 ... 1
0 1 2 3 ... n
0 0 1 ... 2
0 0 0 0 ... 1
The eigenvalues of t are all 1. The characteristic polynomial is
Consequently, λᵢ = ±1 for each i and hence the sum of the eigenvalues
of t is an integer.
1.23 (a) We have

    | 3-λ  -1  |
    | -1   3-λ | = λ² - 6λ + 8 = (λ - 4)(λ - 2)

so the eigenvalues are 2 and 4, each of geometric multiplicity 1. For the
eigenvectors associated with the eigenvalue 2, solve

    (  1 -1 )
    ( -1  1 )

to obtain the eigenspace {[x, x] | x ∈ ℝ}. For the eigenvectors associated
with the eigenvalue 4, solve
to obtain the eigenspace {[x, -x] | x ∈ ℝ}. Since A has distinct eigenvalues
it is diagonalisable. A suitable matrix P is
1 1 1
P= 0 1 -2
-1 1 1
1 2 -1
1 0 1
0 1 2
1 -1 X 0
0 1 y = 0
0 -1 z 0
'y/2 -V2]
P=
_ \/2-l
Pi = 2
2N/2 ' " 2V^2 '
We then have
an = P i A ^ " 1 v 2 • On = P l ^ l
n~1
    lim (aₙ/bₙ) = √2.
    n→∞
1.25 Since f(X) and g(X) are coprime there are polynomials p(X) and q(X)
such that f(X)p(X) + g(X)q(X) = 1. Let c = p(t) and d = q(t). Then
ac + bd = id_V.
Suppose now that v is an eigenvector of ab associated with the eigenvalue
0. (Note that ab has 0 as an eigenvalue since t is singular.) Let
u = a(cv) and w = b(dv). Then since a, b, c, d commute we have

    bu = bacv = cabv = 0,
    aw = abdv = dabv = 0.
Continuing in this way we see that {u, t(u), ..., tʳ⁻¹(u)} spans U and
so is a basis. By the above argument we have t[tⁱ(u)] = ∑ⱼ aⱼtʲ(u),
so U is t-invariant.
Let f(X) = -a₀ - a₁X - ⋯ - aᵣ₋₁Xʳ⁻¹ + Xʳ. Then f(X) has degree
r and [f(t)](u) = 0. Since U is t-invariant the restriction t_U of t to U
induces a linear transformation from U to itself. Now [f(t_U)](u) = 0 so
[f(t_U)](v) = 0 for all v ∈ U. Hence f(t_U) is the zero transformation on
U. If now g(X) is a polynomial of degree less than r with g(t_U) = 0 then
[g(t_U)](u) = 0 implies that {u, t(u), ..., tʳ⁻¹(u)} is dependent. Hence
f(X) is the (monic) polynomial of least degree such that f(t_U) = 0, so
f(X) is the minimum polynomial of t_U.
When u = (1, 1, 0) ∈ ℝ³ and t(x, y, z) = (x + y, x - y, z) we have
that U is spanned by {(1, 1, 0), (2, 0, 0)}. Also, t²(u) = (2, 2, 0) = 2u and
so f(X) = X² - 2.
1.28 (1) Proceed by induction. The result clearly holds for n = 1. In this
example it is necessary, in order to apply the second principle of induction,
to include a proof for n = 2:

    q² = r(s + t)r(s + t)
       = (rs + rt)²
       = rsrs + rtrs + rsrt + rtrt
       = rsr(s + t)
       = pq.

Suppose now that it is true for all k < n where n > 2. Then
1.29 Clearly, adding together the elements in the middle row, the middle
column, and both diagonals, we obtain

    ∑ᵢ,ⱼ mᵢⱼ + 3m₂₂ = 40,
                ( α+β    α-β+γ  α-γ   )
    M(α, β, γ) = ( α-β-γ  α      α+β+γ )
                ( α+γ    α+β-γ  α-β   )

and

                ( 1 )   ( 3α )      ( 1 )
    M(α, β, γ)  ( 1 ) = ( 3α ) = 3α ( 1 )
                ( 1 )   ( 3α )      ( 1 )

so that f(e₁ + e₂ + e₃) = 3α(e₁ + e₂ + e₃),
and that

    f(e₂) = (α-β+γ)e₁ + αe₂ + (α+β-γ)e₃
          = (α-β+γ)(e₁+e₂+e₃) + (β-γ)e₂ + (2β-2γ)e₃,
    f(e₃) = (α-γ)e₁ + (α+β+γ)e₂ + (α-β)e₃
          = (α-γ)(e₁+e₂+e₃) + (β+2γ)e₂ + (γ-β)e₃.

Relative to the ordered basis {e₁+e₂+e₃, e₂, e₃} the matrix of f is therefore

        ( 3α  α-β+γ  α-γ  )
    L = ( 0   β-γ    β+2γ )
        ( 0   2β-2γ  γ-β  )

Since L and M(α, β, γ) represent the same linear mapping they are similar
and therefore have the same eigenvalues. It is readily seen that
Suppose that

    (∗)  λ₀x + λ₁f(x) + ⋯ + λₚ₋₁fᵖ⁻¹(x) = 0.

On applying fᵖ⁻¹ to (∗) and using the fact that fᵖ = 0, we see that
λ₀fᵖ⁻¹(x) = 0 whence we deduce that λ₀ = 0. Deleting the first term
in (∗) and applying fᵖ⁻² to the remainder, we obtain similarly λ₁ = 0.
Repeating this argument, we see that each λᵢ = 0 and hence that Bₚ is
linearly independent.
It follows from the above that if f is nilpotent of index n = dim V
then

    Bₙ = {x, f(x), ..., fⁿ⁻¹(x)}
is a basis of V, and the matrix of f relative to this basis is

    ( 0     0 )
    ( Iₙ₋₁  0 )
Using the given identity, we can rewrite this as the following equation
in the ℂ-vector space V:

    ∑ⱼ (αⱼ - iβⱼ)vⱼ = 0.

It follows that αⱼ - iβⱼ = 0 for every j, so that αⱼ = 0 = βⱼ. Consequently,
we deduce immediately from the fact that f ∘ f = -id_V that the matrix
of f relative to this basis is
Observe that

    [a - ib ... x - iy][a + ib ... x + iy]ᵗ = a² + b² + ⋯ + x² + y²,

and a sum of squares is zero if and only if each summand is zero. Hence
we see that Y = 0.
The minimum polynomial of A cannot have repeated roots. For, if this
were of the form m(X) = (X - a)²p(X) then from (A - aI)²p(A) = 0
we would have, by the above applied to each column of p(A) in turn,
(A - aI)p(A) = 0 and m(X) would not be the minimum polynomial.
(t - id)(t + 2id)(t - 2id) = 0,
It is easy to see by induction that this is indeed the case. Thus we see
that

    t²ⁿ = t² + 4(1 + 4 + ⋯ + 4ⁿ⁻²)(t² - id)
Since f(b₁) = λ₁b₁, the matrix of f relative to the basis {b₁, b₂′, ..., bₙ′}
is of the form

    ( λ₁  β₂ ... βₙ )
    ( 0       M     )
    (λ₁ - X) det(M - XIₙ₋₁) = 0.

So the eigenvalues of M are precisely those of f with the algebraic multiplicity
of λ₁ reduced by 1. Since all the eigenvalues of f belong to F
by hypothesis, so then do all those of M.
The last part follows from the above by a simple inductive argument:
if the result holds for (n-1) × (n-1) matrices then it holds for M and
hence for A.
1.35 The eigenvalues of t are 0, 1, 1. The minimum polynomial is either
X(X - 1) or X(X - 1)². But t² - t ≠ 0 so the minimum polynomial is
X(X - 1)². We have that
40 -64
25 -40
[•:.:}
A Jordan basis satisfies
1 0
0 -1
0 -c 0
(Any Jordan basis is of the form {[c, 0], [d, -c]} with c ≠ 0 and corresponding P =
(c) The characteristic polynomial is (X - 1)³, so the only eigenvalue is
1. It has geometric multiplicity 2 with {[1,0,0], [0,2,3]} as a basis for
the eigenspace. The Jordan normal form is then
    ( 1 1 0 )
    ( 0 1 0 )
    ( 0 0 1 )
        ( 3 0 1 )
    P = ( 6 1 0 )
        ( 9 0 0 )
    ( 3 1 0 )
    ( 0 3 0 )
    ( 0 0 3 )
2 0
0 2
    ( 1  1  1  1  0 ) ( x )   ( 0 )
    ( 0  1  0  0  0 ) ( y )   ( 0 )
    ( 0  0  1  1  0 ) ( z ) = ( 0 )
    ( 0  0  0  0  1 ) ( t )   ( 0 )
    ( 0 -1 -1 -1 -1 ) ( w )   ( 0 )
1.39 (a) The Jordan form and a suitable (non-unique) matrix P are

        ( 2 1 0 )        ( 2 -5  5 )
    J = ( 0 2 0 ) ,  P = ( 2 -3  8 )
        ( 0 0 2 )        ( 3  8 -7 )
    ( 3 0 0 )   ( 3 1 0 )   ( 3 1 0 )   ( 3 0 0 )
    ( 0 3 0 ) , ( 0 3 1 ) , ( 0 3 0 ) , ( 0 3 1 )
    ( 0 0 3 )   ( 0 0 3 )   ( 0 0 3 )   ( 0 0 3 )
0 0 1 0 1 0 0 0
and
0 0 0 0 0 0 0 0
    (t - λ id_V)vᵢ = vᵢ₋₁, ..., (t - λ id_V)v₁ = 0.

Thus there is one eigenvector associated with each block, and so there
are
dim Ker(t - λ id_V)
blocks.
Consider Ker(t - λ id_V)ʲ. For every 1 × 1 block there corresponds
a single basis element which is an eigenvector in Ker(t - λ id_V)ʲ. For
every 2 × 2 block there correspond two basis elements in Ker(t - λ id_V)ʲ
if j ≥ 2 and 1 basis element if j < 2. In general, to each i × i block
there correspond i basis elements in Ker(t - λ id_V)ʲ if j ≥ i and j basis
elements if j < i.
It follows that

        ( -2 1 0 0 )
    P = ( -2 0 1 0 )
        ( -2 0 0 1 )
        (  2 0 1 0 )
y₄ = c₄e²ᵗ,  y₃ = c₃e²ᵗ,  y₂ = c₂e²ᵗ,  y₁ = c₁e²ᵗ + c₂te²ᵗ.
Since now

    ( x₁ )   ( -2 1 0 0 ) ( c₁e²ᵗ + c₂te²ᵗ )
    ( x₂ ) = ( -2 0 1 0 ) ( c₂e²ᵗ          )
    ( x₃ )   ( -2 0 0 1 ) ( c₃e²ᵗ          )
    ( x₄ )   (  2 0 1 0 ) ( c₄e²ᵗ          )

we deduce that

    x₁ = -2c₁e²ᵗ - 2c₂te²ᵗ + c₂e²ᵗ
    x₂ = -2c₁e²ᵗ - 2c₂te²ᵗ + c₃e²ᵗ
    x₃ = -2c₁e²ᵗ - 2c₂te²ᵗ + c₄e²ᵗ
    x₄ = 2c₁e²ᵗ + 2c₂te²ᵗ + c₃e²ᵗ
        (  5 4 )
    A = ( -1 0 )

    x₁ = aeᵗ + 4be⁴ᵗ
    x₂ = -aeᵗ - be⁴ᵗ.
4 -1 -1
A= 1 2 -1
1 -1 2
so that

    x₃ = ae²ᵗ + ce³ᵗ.
5 -6 -6
A= - 1 4 2
3 -6 -4
1 3 -2
A= 0 7 - 4
0 9 - 5
        ( 1 1 )
    A = ( 2 3 )

which has eigenvalues 2 ± √3, with associated eigenvectors [1, 1 + √3]
and [-1, -1 + √3].
Let x = x₁, x₁′ = x₂, x₂′ = x₃, so that x₃′ = x‴ = 2x₃ + 4x₂ - 8x₁. Then
the system can be written in the form X′ = AX where

        (  0 1 0 )           ( 2 1  0 )
    A = (  0 0 1 )  and  J = ( 0 2  0 )
        ( -8 4 2 )           ( 0 0 -2 )
so that y₂ = c₂e²ᵗ, y₃ = c₃e⁻²ᵗ and hence y₁′ = 2y₁ + c₂e²ᵗ which gives
y₁ = c₁e²ᵗ + c₂te²ᵗ.
Now observe that
1 0 1 • c2te2
_2 2t
2 1 c2e
4 4 4
Hence x = x₁ = c₁e²ᵗ + c₂te²ᵗ + c₃e⁻²ᵗ. Now apply the initial conditions
to obtain
Solutions to Chapter 2
2.1 (a) f ↦ f′ does not define a linear functional since f′ ∉ ℝ in general.
(b), (c), (d) These are linear functionals.
(e) Φ : f ↦ ∫₀¹ f² is not a linear mapping; for example, we have
0 = Φ[f + (-f)] whereas in general Φ(f) + Φ(-f) = 2Φ(f) ≠ 0.
2.2 We have

    φ(αf + βg) = ∫₀¹ f₀(t)[αf(t) + βg(t)] dt
               = α ∫₀¹ f₀(t)f(t) dt + β ∫₀¹ f₀(t)g(t) dt
               = αφ(f) + βφ(g).
2.3 The transition matrix from the given basis to the standard basis is

        (  1 -1 0 )
    P = (  0  1 1 )
        ( -1  0 1 )

and its inverse is

            (  1 1 -1 )
    P⁻¹ = ½ ( -1 1 -1 )
            (  1 1  1 )

so the dual basis is {½[1, 1, -1], ½[-1, 1, -1], ½[1, 1, 1]}.
2.4 The transition matrix from A′ to A is

        ( 1 3 )
    M = ( 2 4 )

Its inverse is

          ( -2   3/2 )
    M⁻¹ = (  1  -1/2 )

Consequently, (A′)ᵈ = {-2φ₁ + (3/2)φ₂, φ₁ - ½φ₂}.
2.5 The transition matrix from the given basis to the standard basis is

        ( -1 0 )
    M = (  2 1 )

Its inverse is

          ( -1 0 )
    M⁻¹ = (  2 1 )

so the dual basis is {[-1, 0], [2, 1]}, i.e. (b).
    A⊥ + B⊥ = (A ∩ B)⊥ = {0}⊥ = Vᵈ
    A⊥ ∩ B⊥ = (A + B)⊥ = V⊥ = {0}.

Consequently, Vᵈ = A⊥ ⊕ B⊥.
The answer to the question is 'no': Aᵈ is the set of linear functionals
f : A → F so if A ≠ V we have that Aᵈ is not a subset of Vᵈ. What is
true is: if V = A ⊕ B then Vᵈ = A′ ⊕ B′ where A′, B′ are subspaces
of Vᵈ with A′ ≅ Aᵈ and B′ ≅ Bᵈ. To see this, let f ∈ Aᵈ and define
f⁺ : V → F by f⁺(v) = f(a) where v = a + b. Then φ : Aᵈ → Vᵈ
given by φ(f) = f⁺ is an injective linear transformation and φ(Aᵈ) is a
Since the coefficient matrix is the Vandermonde matrix and since the tᵢ
are given to be distinct, the only solution is λ₁ = λ₂ = λ₃ = 0. Hence
{f₁, f₂, f₃} is linearly independent and so forms a basis for (ℝ₃[X])ᵈ.
If {p₁, p₂, p₃} is a basis of V of which {f₁, f₂, f₃} is the dual then we
must have fᵢ(pⱼ) = δᵢⱼ; i.e. pⱼ(tᵢ) = δᵢⱼ, so the pⱼ are the Lagrange
polynomials; for example

    p₁(X) = (X - t₂)(X - t₃) / (t₁ - t₂)(t₁ - t₃).
=ll;
= 3 4
+3
+3 = 17a+ 226.
Since φ(vᵢ) = aᵢ we see that φ(v) = 0 for every v ∈ S if and only if
a₁ = ⋯ = aₖ = 0. Thus

    φ = aₖ₊₁φₖ₊₁ + ⋯ + aₙφₙ

and so {φₖ₊₁, ..., φₙ} is a basis for S⊥. Hence dim S + dim S⊥ = n.
If ψ ∈ Ker tᵈ then tᵈ(ψ) = 0 and so [tᵈ(ψ)](u) = 0 for all u ∈ U,
from which the result follows. The final statements are immediate from
the fact that Im t = (Ker tᵈ)⊥ (see question 2.10).
2.12 To find Y we find the dual of {[1,0,0], [1,1,0], [1,1,1]}. The transition
matrix and its inverse are

        ( 1 1 1 )          ( 1 -1  0 )
    P = ( 0 1 1 ) ,  P⁻¹ = ( 0  1 -1 )
        ( 0 0 1 )          ( 0  0  1 )
Thus Y = {[1, -1, 0], [0, 1, -1], [0, 0, 1]}.
The matrix of t with respect to the standard basis is
2 1 0
1 1 1
0 0 - 1
1 0 0 1 1 1
1 1 0 5 0 1 1
0 -1 1 0 0 1
    ( 1 0 0 ) ( 2 1  0 ) ( 1 1 1 )   ( 2 3 3 )
    ( 1 1 0 ) ( 1 1  1 ) ( 0 1 1 ) = ( 3 5 6 )
    ( 1 1 1 ) ( 0 0 -1 ) ( 0 0 1 )   ( 3 5 5 )
    Lg(λ₁f₁ + λ₂f₂) = λ₁Lg(f₁) + λ₂Lg(f₂)

and so Lg is linear.
To show that Fₓ is a linear functional, we must check that

    Fₓ(λ₁f₁ + λ₂f₂) = λ₁Fₓ(f₁) + λ₂Fₓ(f₂),

    (∀f ∈ C[0,1])  ∫₀¹ f(t)g(t) dt = f(x).

Now, by continuity, the left hand side depends on the values of f at
points other than x whereas the right hand side does not. Thus Fₓ ≠ Lg
for any g.
$\tfrac15\big(a + 2b - 4c + 2d,\; 2a + 4b + 2c - d,\; -4a + 2b + c + 2d,\; 2a - b + 2c + 4d\big)$,
which gives
$$\text{mat } t = \frac15\begin{pmatrix}1&2&-4&2\\2&4&2&-1\\-4&2&1&2\\2&-1&2&4\end{pmatrix}.$$
2.17 The hyperbola is represented by the equation
$$\begin{pmatrix}x&y\end{pmatrix}\begin{pmatrix}-1&3\\3&-1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} = 1.$$
The coefficient matrix $A$ has eigenvalues $2$ and $-4$, with normalised eigenvectors
$\frac{1}{\sqrt2}(1,1)^t$ and $\frac{1}{\sqrt2}(1,-1)^t$. Thus $P^tAP = \operatorname{diag}\{2,-4\}$ where
$$P = \frac{1}{\sqrt2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}.$$
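A floating-point check of this reduction (illustrative; the coefficient matrix used here is the one consistent with the diagonal form $\operatorname{diag}\{2,-4\}$):

```python
import math

A = [[-1, 3], [3, -1]]
s = 1 / math.sqrt(2)
P = [[s, s], [s, -s]]              # columns are the normalised eigenvectors

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pt = [[P[j][i] for j in range(2)] for i in range(2)]
D = matmul(matmul(Pt, A), P)       # should be diag{2, -4} up to rounding
print([[round(v, 9) for v in row] for row in D])
```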
$$= \mathbf{x}_1^t P^tAP\,\mathbf{x}_1 = \begin{pmatrix}X&Y\end{pmatrix}\begin{pmatrix}2&0\\0&-4\end{pmatrix}\begin{pmatrix}X\\Y\end{pmatrix} = 2X^2 - 4Y^2.$$
"7 2 0"
The eigenvalues of A — 2 6 - 2 are 3,6
3,6,9 with associated nor-
0 -2 5
malised eigenvectors
- 1 2 2
2 - 1 2
2 2 - 1
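Each eigenpair can be verified directly; the check below multiplies the matrix by each (unnormalised) eigenvector and compares with the scaled vector (illustrative):

```python
A = [[7, 2, 0], [2, 6, -2], [0, -2, 5]]
pairs = [(3, [-1, 2, 2]), (6, [2, -1, 2]), (9, [2, 2, -1])]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

checks = [matvec(A, v) == [lam * x for x in v] for lam, v in pairs]
print(checks)  # [True, True, True]
```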
Now change coordinates by defining
Since $y_1 = \tfrac13(2x - \cdots)$
2.19 The first part is a routine check of the axioms. Using the fact that
tr(AB) = tr(BA) we have
2.20 Let $\beta$ be as stated and let $f_\beta \in V^d$ be given by $f_\beta(\alpha) = (\alpha \mid \beta)$. Then
since $\{\alpha_1, \dots, \alpha_n\}$ is an orthonormal basis we have
$$f_\beta(\alpha_j) = (\alpha_j \mid \beta) = f(\alpha_j)\quad (j = 1, \dots, n).$$
Since $f_\beta$ and $f$ coincide on this basis, it follows that $f = f_\beta$.
For the next part suppose that such a $q$ exists. Then we have
$$\int_0^1 p(t)q(t)\,dt = p(x)$$
for every $p$. Writing $rp$ for $p$ we then have
$$\int_0^1 r(t)p(t)q(t)\,dt = 0.$$
In particular, this holds when $p = rq$. So
$$\int_0^1 r(t)^2 q(t)^2\,dt = 0,$$
whence we have $rq = 0$. Since $r \neq 0$ we must therefore have $q = 0$, a
contradiction.
For the next part of the question note that
$$(f_p(q) \mid r) = \int_0^1 p(t)q(t)r(t)\,dt = \int_0^1 q(t)\,p(t)r(t)\,dt = (q \mid f_p(r)),$$
so $(f_p)^* = f_p$.
Integration by parts gives
$$(D(p) \mid q) = p(1)q(1) - p(0)q(0) - (p \mid D(q)).$$
If $D^*$ exists then we have
$$(p \mid D^*(q)) = p(1)q(1) - p(0)q(0) - (p \mid D(q))$$
so that
$$(p \mid D(q) + D^*(q)) = p(1)q(1) - p(0)q(0).$$
Now fix $q$ such that $q(0) = 0$, $q(1) = 1$. Then we have
$$(p \mid D(q) + D^*(q)) = p(1),$$
which is impossible (take $x = 1$ in the previous part of the question).
Thus $D^*$ does not exist.
$$K(f_n)(x) = \int_0^1 xy\,f_n(y)\,dy = \int_0^1 xy^{n+1}\,dy - \frac{2}{n+2}\int_0^1 xy\,dy
= \left[\frac{xy^{n+2}}{n+2}\right]_0^1 - \frac{2}{n+2}\left[\frac{xy^2}{2}\right]_0^1 = \frac{x}{n+2} - \frac{x}{n+2} = 0,$$
so $K(f_n) = 0f_n$ as required.
Apply the Gram--Schmidt process to $\{f_1, f_2\}$. Let $e_1(x) = f_1(x) = x - \tfrac23$ and define
$$e_2(x) = f_2(x) + \alpha f_1(x) = (x^2 - \tfrac12) + \alpha(x - \tfrac23),$$
where $\alpha$ is chosen so that $(e_2 \mid e_1) = 0$. Since this gives $\alpha = -1$,
$$e_2(x) = x^2 - x + \tfrac16.$$
$$\lambda f(x) = x\int_0^1 yf(y)\,dy.$$
If $\lambda \neq 0$ then $f$ must be of the form $x \mapsto \alpha x$ for some constant $\alpha$. Substituting
$\alpha x$ for $f(x)$, it clearly follows that $\lambda = \frac13$. An associated
eigenfunction is given by $f(x) = x$.
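Both facts — $K(f_n)=0$ and the eigenvalue $\frac13$ — reduce to exact integrals of polynomials, which can be checked with rational arithmetic (a sketch; the coefficient-list encoding of polynomials is an assumption of this snippet):

```python
from fractions import Fraction as F

def moment(coeffs):
    # ∫_0^1 y·f(y) dy for f(y) = Σ c_k y^k equals Σ c_k/(k+2).
    return sum(F(c) / (k + 2) for k, c in enumerate(coeffs))

# f_n(y) = y^n - 2/(n+2) has ∫_0^1 y f_n(y) dy = 0, so K(f_n) = 0
zeros = []
for n in range(1, 6):
    fn = [F(-2, n + 2)] + [0] * (n - 1) + [1]   # coefficients of f_n
    zeros.append(moment(fn))
print(zeros)                  # all zero

# f(y) = y gives ∫_0^1 y·y dy = 1/3: the non-zero eigenvalue
print(moment([0, 1]))         # 1/3
```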
We have that
$$\det(T + iS - I) \neq 0.$$
The fact that $U$ is unitary now follows from the previous question.
$$B^tB = (A^t - S^t)^{-1}(A^t + S^t)(A + S)(A - S)^{-1} = (A + S)^{-1}(A - S)(A + S)(A - S)^{-1} = (A + S)^{-1}(A + S)(A - S)(A - S)^{-1} = I,$$
the last equality following from the fact that since $A, S$ commute so do
$A + S$ and $A - S$. Hence $B^tB = I$ and $B$ is orthogonal.
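A concrete instance (illustrative: here $A = I$, which certainly commutes with the skew-symmetric $S$, and $B = (A+S)(A-S)^{-1}$ is the convention assumed):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [0.0, 1.0]]        # symmetric; commutes with everything
S = [[0.0, 1.0], [-1.0, 0.0]]       # skew-symmetric
AplusS  = [[A[i][j] + S[i][j] for j in range(2)] for i in range(2)]
AminusS = [[A[i][j] - S[i][j] for j in range(2)] for i in range(2)]
B = matmul(AplusS, inv2(AminusS))
Bt = [[B[j][i] for j in range(2)] for i in range(2)]
print(matmul(Bt, B))                # the identity matrix
```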
Since
$$A^*A + A = 0 \tag{1}$$
we have, taking transposes,
$$A^t\overline{A} + A^t = 0,$$
and hence, taking complex conjugates,
$$A^*A + A^* = 0.$$
$$= \operatorname{tr}(A^*A) = N(A).$$
Similarly, using the fact that $\operatorname{tr}(XY) = \operatorname{tr}(YX)$,
$$N(AU) = \operatorname{tr}[(AU)^*AU] = \operatorname{tr}(U^*A^*AU) = \operatorname{tr}(A^*A) = N(A).$$
Finally, by the above,
$$N(I_n - U^{-1}A) = N[U(I_n - U^{-1}A)] = N(U - A) = N(A - U).$$
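For real matrices $N(A)$ is just the sum of squared entries, and its invariance under multiplication by an orthogonal (hence unitary) matrix can be spot-checked (the sample values are illustrative):

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def N(M):
    # N(M) = tr(M* M) = sum of squared absolute values of the entries.
    return sum(abs(x) ** 2 for row in M for x in row)

A = [[1.0, 2.0], [3.0, 4.0]]
c, s = math.cos(0.7), math.sin(0.7)
U = [[c, -s], [s, c]]          # a rotation: real orthogonal, so unitary
gap = abs(N(matmul(A, U)) - N(A))
print(N(A), gap < 1e-9)
```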
2.30 Since $A^*A = AA^*$ we have that $A^{-1}(A^*)^{-1} = (A^*)^{-1}A^{-1}$. But
2.31 Suppose that $A$ is normal and let $B = g(A)$. There is a unitary matrix
$P$ and a diagonal matrix $D$ such that $A = P^*DP$. Consequently we have
$B = g(A) = P^*g(D)P$, and so
$$B^*B = P^*\overline{g(D)}PP^*g(D)P = P^*\overline{g(D)}g(D)P$$
and similarly
$$BB^* = P^*g(D)\overline{g(D)}P.$$
and so det A = 0.
and so $y^ty = z^tz$. Also, $\mu y^tz = -y^tAy = 0$ (by the first part of the
question). If, therefore, $Au = 0$ then
$$\mu u^ty = u^tAz = -(Au)^tz = 0$$
and similarly
$$\mu u^tz = -u^tAy = (Au)^ty = 0.$$
For the last part, we have
$$\det(A - \lambda I) = \det\begin{pmatrix}-\lambda&2&-2\\-2&-\lambda&-1\\2&1&-\lambda\end{pmatrix} = -\lambda(\lambda^2 + 9),$$
and
$$u = \frac13\begin{pmatrix}-1\\2\\2\end{pmatrix}.$$
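The characteristic polynomial and the kernel vector can both be confirmed numerically: a cubic that agrees with $-\lambda(\lambda^2+9)$ at more than three points must equal it (illustrative check):

```python
A = [[0, 2, -2], [-2, 0, -1], [2, 1, 0]]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# det(A - x I) agrees with -x(x^2 + 9) at six points, so the cubics coincide.
agree = all(
    det3([[A[i][j] - (x if i == j else 0) for j in range(3)] for i in range(3)])
    == -x * (x * x + 9)
    for x in [-2, -1, 0, 1, 2, 5]
)
print(agree)  # True

u = [-1, 2, 2]                 # (un-normalised) kernel vector
print([sum(a * b for a, b in zip(row, u)) for row in A])  # [0, 0, 0]
```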
Then we have $z = 3\sqrt2$, which gives
$$P^tAP = \begin{pmatrix}0&0&0\\0&0&3\\0&-3&0\end{pmatrix}$$
for a suitable orthogonal matrix $P$ whose first column is $u$.
2.34 Let $Q$ be the matrix that represents the change to a new basis with
respect to which $q$ is in normal form. Then $x^tAx$ becomes $y^tBy$ where
$x = Qy$ and $B = Q^tAQ$. Now
$$y^tBy = y_1^2 + \dots + y_p^2 - y_{p+1}^2 - \dots - y_{p+m}^2.$$
Now if $q$ has the same rank and signature then clearly $m = 0$. Hence
$y^tBy \geq 0$ for all $y \in \mathbb{R}^n$ since it is a sum of squares. Consequently
$x^tAx \geq 0$ for all $x \in \mathbb{R}^n$.
Conversely, if $x^tAx \geq 0$ for all $x \in \mathbb{R}^n$ then $y^tBy \geq 0$ for all $y \in \mathbb{R}^n$.
Choose $y = (0, \dots, 0, y_i, 0, \dots, 0)$. Now the coefficient of $y_i^2$ must be $0$ or
$1$, but not $-1$. Therefore there are no terms of the form $-y_i^2$, so $m = 0$
and $q$ has the same rank and signature.
$$q(x) = x_1^2 + x_2^2 - x_3^2 + 2x_1x_2 - 2x_1x_3 = (x_1 + x_2)^2 + x_1^2 - (x_1 + x_3)^2,$$
so the normal form is
$$\begin{pmatrix}1&0&0\\0&1&0\\0&0&-1\end{pmatrix},$$
the change of basis being described by
$$\begin{pmatrix}1&1&0\\1&0&0\\1&0&1\end{pmatrix}.$$
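A completion of squares is a polynomial identity and can be verified pointwise on a grid; here $q(x)=x_1^2+x_2^2-x_3^2+2x_1x_2-2x_1x_3$ is used as the illustrative form (an assumption of this snippet):

```python
from itertools import product

def q(x1, x2, x3):
    return x1*x1 + x2*x2 - x3*x3 + 2*x1*x2 - 2*x1*x3

def completed(x1, x2, x3):
    # Two positive squares and one negative square: rank 3, signature 1.
    return (x1 + x2)**2 + x1**2 - (x1 + x3)**2

ok = all(q(*p) == completed(*p) for p in product(range(-3, 4), repeat=3))
print(ok)  # True
```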
2.36 Take
$$g\big((x_1,x_2),(y_1,y_2)\big) = \tfrac12\Big[f\big((x_1,x_2),(y_1,y_2)\big) + f\big((y_1,y_2),(x_1,x_2)\big)\Big]$$
and
$$h\big((x_1,x_2),(y_1,y_2)\big) = \tfrac12\Big[f\big((x_1,x_2),(y_1,y_2)\big) - f\big((y_1,y_2),(x_1,x_2)\big)\Big].$$
$$\mathbf{x}^tA\mathbf{x} = \begin{pmatrix}x&y&z\end{pmatrix}\begin{pmatrix}4&-1&1\\-1&4&-1\\1&-1&4\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix},$$
and the normalised eigenvectors of $A$ form the orthogonal matrix
$$P = \begin{pmatrix}1/\sqrt6&1/\sqrt2&1/\sqrt3\\2/\sqrt6&0&-1/\sqrt3\\1/\sqrt6&-1/\sqrt2&1/\sqrt3\end{pmatrix}.$$
Changing coordinates by setting
$$\begin{pmatrix}x\\y\\z\end{pmatrix} = P\begin{pmatrix}u\\v\\w\end{pmatrix}$$
transforms the original quadratic form to
$$3u^2 + 3v^2 + 6w^2,$$
which is positive definite.
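An orthogonal-diagonalisation check for this form (floating-point, compared within rounding; illustrative):

```python
import math

A = [[4, -1, 1], [-1, 4, -1], [1, -1, 4]]
r2, r3, r6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
P = [[1/r6, 1/r2, 1/r3],
     [2/r6, 0.0, -1/r3],
     [1/r6, -1/r2, 1/r3]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Pt = [[P[j][i] for j in range(3)] for i in range(3)]
D = matmul(matmul(Pt, A), P)        # should be diag{3, 3, 6} up to rounding
print([[round(v, 9) for v in row] for row in D])
```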
= T.
Then we obtain
$$A = \begin{pmatrix}1&-1&2\\-1&2&-3\\2&-3&9\end{pmatrix}.$$
Now
$$x^2 + 2y^2 + 9z^2 - 2xy + 4xz - 6yz = (x - y + 2z)^2 + y^2 + 5z^2 - 2yz = (x - y + 2z)^2 + (y - z)^2 + (2z)^2,$$
so if we let
$$P = \begin{pmatrix}1&-1&2\\0&1&-1\\0&0&2\end{pmatrix}$$
then we have
$$\begin{pmatrix}X\\Y\\Z\end{pmatrix} = P\begin{pmatrix}x\\y\\z\end{pmatrix}.$$
(2) Here we have
$$A = \begin{pmatrix}0&2&0\\2&0&1\\0&1&0\end{pmatrix}.$$
Now, with $X = x + y$ and $Y = x - y$,
$$4xy + 2yz = (x + y)^2 - (x - y)^2 + 2yz = X^2 - Y^2 + (X - Y)z = (X + \tfrac12z)^2 - Y^2 - Yz - \tfrac14z^2 = (X + \tfrac12z)^2 - (Y + \tfrac12z)^2,$$
$$x = \tfrac12\xi + \tfrac12\eta - \tfrac12\zeta,\qquad y = \tfrac12\xi - \tfrac12\eta,\qquad z = \zeta,$$
so if we let
$$P = \begin{pmatrix}\tfrac12&\tfrac12&-\tfrac12\\ \tfrac12&-\tfrac12&0\\ 0&0&1\end{pmatrix}$$
then we have
$$\begin{pmatrix}x\\y\\z\end{pmatrix} = P\begin{pmatrix}\xi\\\eta\\\zeta\end{pmatrix}.$$
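The identity $4xy + 2yz = \xi^2 - \eta^2$ with $\xi = x + y + \frac12z$ and $\eta = x - y + \frac12z$ can be verified exactly with rational arithmetic (illustrative):

```python
from fractions import Fraction as F
from itertools import product

def form(x, y, z):
    return 4*x*y + 2*y*z

def diff_of_squares(x, y, z):
    xi, eta = x + y + F(z, 2), x - y + F(z, 2)
    return xi**2 - eta**2

ok = all(form(*p) == diff_of_squares(*p)
         for p in product(range(-3, 4), repeat=3))
print(ok)  # True
```

The check succeeds because $\xi^2-\eta^2 = (\xi+\eta)(\xi-\eta) = (2x+z)(2y)$.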
(3) Here we have
$$A = \begin{pmatrix}1&1&0&-1\\1&4&3&-4\\0&3&1&-7\\-1&-4&-7&-4\end{pmatrix}$$
and so
$$P = \begin{pmatrix}1&-1/\sqrt3&1/\sqrt2&-2\\0&1/\sqrt3&-1/\sqrt2&3\\0&0&1/\sqrt2&-2\\0&0&0&1\end{pmatrix}$$
gives
$$\begin{pmatrix}x\\y\\z\\t\end{pmatrix} = P\begin{pmatrix}\xi\\\eta\\\zeta\\\tau\end{pmatrix}$$
and the normal form $\operatorname{diag}\{1, 1, -1, 0\}$.
$$(n-1)\sum_{r=1}^{n}x_r^2 - 2\sum_{r<s}x_rx_s = x^tAx,$$
where
$$A = \begin{pmatrix}n-1&-1&-1&\dots&-1\\-1&n-1&-1&\dots&-1\\-1&-1&n-1&\dots&-1\\\vdots&\vdots&\vdots&\ddots&\vdots\\-1&-1&-1&\dots&n-1\end{pmatrix};$$
after the first step of the reduction the last row becomes $(0, -1, -1, \dots, n-2)$.
Repeating this process we can show that $A$ is congruent to the matrix
$$\begin{pmatrix}n-1&0&0&0&\dots&0\\0&n-2&0&0&\dots&0\\0&0&n-3&-1&\dots&-1\\0&0&-1&n-3&\dots&-1\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&-1&-1&\dots&n-3\end{pmatrix}.$$
s)xrx8
nxn).
Now let
2.44 Let $\{u_1, \dots, u_n\}$ be an orthonormal basis of the real inner product space
$\mathbb{R}^n$ under the inner product given by $(x \mid y) = f(x, y)$. Let $x = \sum_{i=1}^n x_iu_i$
and $y = \sum_{i=1}^n y_iu_i$. Then
$$f(x, y) = (x \mid y) = \sum_{i=1}^n x_iy_i.$$
Also
$$g(x, y) = \mathbf{x}^tB\mathbf{y},\quad\text{where}\quad \mathbf{x} = (x_1, \dots, x_n)^t,\; \mathbf{y} = (y_1, \dots, y_n)^t;$$
i.e. that there is an ordered basis $\{v_1, \dots, v_n\}$ that is orthonormal (relative
to the inner product determined by $f$) and consists of eigenvectors
of $B$. Let $x = \sum_{i=1}^n \xi_iv_i$ and $y = \sum_{i=1}^n \eta_iv_i$. Then, relative to this
orthonormal basis, we have
$$g(x, y) = \sum_{i=1}^n \lambda_i\xi_i\eta_i.$$
Consequently,
Since, from the above expressions for $f(x,y)$ and $g(x,y)$ computed relative
to the orthonormal basis $\{v_1, \dots, v_n\}$,
$$f(x, y) = \sum_{i=1}^n \xi_i\eta_i,\qquad g(x, y) = \sum_{i=1}^n \lambda_i\xi_i\eta_i,$$
$$A = \begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix},\qquad B = \begin{pmatrix}1&0&1\\0&-1&0\\1&0&0\end{pmatrix}.$$
Since
$$\det(B - \lambda A) = \det\begin{pmatrix}1&-\lambda&1\\-\lambda&-1&-\lambda\\1&-\lambda&0\end{pmatrix} = \lambda^2 + 1,$$
the equation $\det(B - \lambda A) = 0$ has no real solutions. But, as observed above, if
a simultaneous reduction to sums of squares were possible, such solutions
would be the coefficients in one of these sums of squares. Since neither
of the given forms is positive definite, the conclusion follows.
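That $\det(B-\lambda A)$ equals $\lambda^2+1$ can be pinned down by evaluation: a polynomial of degree at most three agreeing with $\lambda^2+1$ at more than three points must coincide with it (illustrative check, with $A$ and $B$ as reconstructed here):

```python
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
B = [[1, 0, 1], [0, -1, 0], [1, 0, 0]]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

matches = all(
    det3([[B[i][j] - lam * A[i][j] for j in range(3)] for i in range(3)])
    == lam * lam + 1
    for lam in [-2, -1, 0, 1, 2, 3]
)
print(matches)  # True
```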
2.45 The exponent is $-\mathbf{x}^tA\mathbf{x}$ where
Test paper 1
by the matrix
$$\begin{pmatrix}1&8&6&4&1\\0&1&0&0&0\\0&1&2&1&0\\0&-1&-1&0&1\\0&-5&-4&-3&-2\end{pmatrix}.$$
$$a_1x_1x_2 + a_2x_2x_3 + \dots + a_{n-1}x_{n-1}x_n$$
in which each $a_i \neq 0$, show that $p = q$; and for the form
$$a_1x_1x_2 + a_2x_2x_3 + \dots + a_{n-1}x_{n-1}x_n + a_nx_nx_1$$
in which each $a_i \neq 0$, show that $p = q$ if $n$ is odd.
Test paper 2
$$a_{rs} = \begin{cases}1 & \text{if } s = r + 1;\\0 & \text{otherwise,}\end{cases}\qquad
b_{rs} = \begin{cases}\;\cdots\\0 & \text{otherwise,}\end{cases}$$
$$\begin{pmatrix}0&4&4\\2&2&1\\-3&-6&-5\end{pmatrix}.$$
Let V be a vector space of dimension n over a field F. Suppose that W
is a subspace of V with dim W = m. Show that
(a) $\dim W^\perp = n - m$;
(b) $(W^\perp)^\perp = W$.
If $f, g \in V^*$ are such that there is no $\lambda \in F \setminus \{0\}$ with $f = \lambda g$, show
that $\operatorname{Ker} f \cap \operatorname{Ker} g$ is of dimension $n - 2$.
Let V be a finite-dimensional complex inner product space and let / :
V —• V be a normal transformation. Prove that
$$f^2(x) = 0 \Rightarrow f(x) = 0$$
and deduce that the minimum polynomial of / has no repeated roots.
If e : V —> V is a projection, show that the following statements are
equivalent :
(a) e is normal;
(b) e is self-adjoint;
(c) e is the orthogonal projection of V onto Im e.
Show finally that a linear transformation $h : V \to V$ is normal if and
only if there are complex scalars $\lambda_1, \dots, \lambda_k$ and self-adjoint projections
$e_1, \dots, e_k$ on $V$ such that
(1) $h = \lambda_1e_1 + \dots + \lambda_ke_k$;
(2) $\mathrm{id}_V = e_1 + \dots + e_k$;
(3) $(i \neq j)\;\; e_i \circ e_j = 0$.
(a) Show that the quadratic form xlAx is positive definite if and
only if there exists a real non-singular matrix P such that A = PPl.
Show also that if $\sum_{i,j=1}^n b_{ij}x_ix_j > 0$ for all non-zero vectors $x$ then
$\sum_{i,j=1}^n b_{ij}p_ix_i\,p_jx_j > 0$ for all such $x$. Hence show that if $x^tAx$ and $x^tBx$ are
both positive definite then so is $\sum_{i,j=1}^n a_{ij}b_{ij}x_ix_j$.
Test paper 3
$$C = \begin{pmatrix}x_{11}&x_{12}&x_{13}\\0&x_{22}&x_{23}\\0&0&x_{33}\end{pmatrix},\qquad \begin{pmatrix}\lambda&1&0\\0&\lambda&1\\0&0&\lambda\end{pmatrix}$$
{W and
A=
$$k(x_1^2 + x_2^2 + \dots + x_r^2) - (x_1 + x_2 + \dots + x_r)^2.$$
Show that
Test paper 4
Show that
$$A^n = \frac{\lambda_1^n(A - \lambda_2I) - \lambda_2^n(A - \lambda_1I)}{\lambda_1 - \lambda_2}\quad\text{if } \lambda_1 \neq \lambda_2,$$
and that
$$A^n = (1 - n)\lambda^nI + n\lambda^{n-1}A\quad\text{if } \lambda_1 = \lambda_2 = \lambda.$$
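Both cases of the $A^n$ formula can be spot-checked with small integer matrices (the sample matrices are illustrative, not from the paper):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(M, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, M)
    return R

def lincomb(c1, X, c2, Y):
    # c1*X + c2*Y for 2x2 matrices.
    return [[c1 * X[i][j] + c2 * Y[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]

# distinct eigenvalues: A = [[2,1],[1,2]] has eigenvalues 3 and 1
A, l1, l2, n = [[2, 1], [1, 2]], 3, 1, 5
distinct = lincomb(F(l1**n, l1 - l2), lincomb(1, A, -l2, I),
                   F(-l2**n, l1 - l2), lincomb(1, A, -l1, I))
print(distinct == matpow(A, n))  # True

# repeated eigenvalue: B = [[2,1],[0,2]] has eigenvalue 2 (twice)
B, l, m = [[2, 1], [0, 2]], 2, 4
repeated = lincomb((1 - m) * l**m, I, m * l**(m - 1), B)
print(repeated == matpow(B, m))  # True
```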
Hence solve the system of difference equations
$$y_{n+1} = 2x_n + y_n$$
where $x_1 = 0$ and $y_1 = 1$.
Suppose that $f \in \mathcal{L}(\mathbb{C}^n, \mathbb{C}^n)$ and that every eigenvalue of $f$ is $0$. Show
that $f$ is nilpotent and explain how to find $\dim \operatorname{Ker} f$ from the Jordan
normal form of $f$.
Let $f, g \in \mathcal{L}(\mathbb{C}^6, \mathbb{C}^6)$ be nilpotent with the same minimum polynomial
and $\dim \operatorname{Ker} f = \dim \operatorname{Ker} g$. Show that $f, g$ have the same Jordan normal
form. By means of an example show that this fails in general for $f, g \in \mathcal{L}(\mathbb{C}^7, \mathbb{C}^7)$.
$$(j = 1, \dots, n)\qquad t(x_j) = \sum_{i=1}^{n} \alpha_{ij}x_{i+j-1}$$
5 Show that each of the quadratic forms
$$a_1x_1^2 + a_2x_2^2 + a_3x_3^2$$