


Algebra through practice
A collection of problems in algebra with solutions

Book 4
Linear algebra

T. S. BLYTH · E. F. ROBERTSON
University of St Andrews

The right of the
University of Cambridge
to print and sell
all manner of books
was granted by
Henry VIII in 1534.
The University has printed
and published continuously
since 1584.

CAMBRIDGE UNIVERSITY PRESS


Cambridge
London New York New Rochelle
Melbourne Sydney
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press

The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521272896
© Cambridge University Press 1985

This publication is in copyright. Subject to statutory exception


and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.

First published 1985

Re-issued in this digitally printed version 2008

A catalogue record for this publication is available from the British Library

Library of Congress Catalogue Card Number: 83-24013

ISBN 978-0-521-27289-6 paperback


Contents

Preface
Background reference material
1: Direct sums and Jordan forms
2: Duality and normal transformations
Solutions to Chapter 1
Solutions to Chapter 2
Test paper 1
Test paper 2
Test paper 3
Test paper 4
Preface

The aim of this series of problem-solvers is to provide a selection of


worked examples in algebra designed to supplement undergraduate
algebra courses. We have attempted, mainly with the average student
in mind, to produce a varied selection of exercises while incorporating
a few of a more challenging nature. Although complete solutions are
included, it is intended that these should be consulted by readers only
after they have attempted the questions. In this way, it is hoped that
the student will gain confidence in his or her approach to the art of
problem-solving which, after all, is what mathematics is all about.
The problems, although arranged in chapters, have not been
'graded' within each chapter so that, if readers cannot do problem n,
this should not discourage them from attempting problem n + 1. A
great many of the ideas involved in these problems have been used in
examination papers of one sort or another. Some test papers (without
solutions) are included at the end of each book; these contain questions
based on the topics covered.

TSB, EFR
St Andrews
Background reference material

Courses on abstract algebra can be very different in style and content.


Likewise, textbooks recommended for these courses can vary enorm-
ously, not only in notation and exposition but also in their level of
sophistication. Here is a list of some major texts that are widely used
and to which the reader may refer for background material. The
subject matter of these texts covers all six of the present volumes, and
in some cases a great deal more. For the convenience of the reader there
is given overleaf an indication of which parts of which of these texts
are most relevant to the appropriate sections of this volume.
[1] I. T. Adamson, Introduction to Field Theory, Cambridge
University Press, 1982.
[2] F. Ayres, Jr, Modern Algebra, Schaum's Outline Series,
McGraw-Hill, 1965.
[3] D. Burton, A first course in rings and ideals, Addison-Wesley,
1970.
[4] P. M. Cohn, Algebra Vol. I, Wiley, 1982.
[5] D. T. Finkbeiner II, Introduction to Matrices and Linear
Transformations, Freeman, 1978.
[6] R. Godement, Algebra, Kershaw, 1983.
[7] J. A. Green, Sets and Groups, Routledge and Kegan Paul,
1965.
[8] I. N. Herstein, Topics in Algebra, Wiley, 1977.
[9] K. Hoffman and R. Kunze, Linear Algebra, Prentice Hall,
1971.
[10] S. Lang, Introduction to Linear Algebra, Addison-Wesley, 1970.
[11] S. Lipschutz, Linear Algebra, Schaum's Outline Series,
McGraw-Hill, 1974.

[12] I. D. Macdonald, The Theory of Groups, Oxford University
Press, 1968.
[13] S. MacLane and G. Birkhoff, Algebra, Macmillan, 1968.
[14] N. H. McCoy, Introduction to Modern Algebra, Allyn and
Bacon, 1975.
[15] J. J. Rotman, The Theory of Groups: An Introduction, Allyn
and Bacon, 1973.
[16] I. Stewart, Galois Theory, Chapman and Hall, 1975.
[17] I. Stewart and D. Tall, The Foundations of Mathematics,
Oxford University Press, 1977.

References useful for Book 4


1: Direct sums and Jordan forms [4, Sections 11.1-11.4],
[5, Chapter 7], [8, Sections 6.1-6.6], [9, Chapters 6, 7],
[11, Chapter 10].
2: Duality and normal transformations [4, Chapter 8,
Section 11.4], [5, Chapter 9], [8, Sections 4.3, 6.8, 6.10],
[9, Chapters 8, 9], [11, Chapters 11, 12].
In [4] and [6] some ring theory is assumed, and some
elementary results are proved for modules. In [5] the author
uses 'characteristic value' where we use 'eigenvalue'.

1: Direct sums and Jordan forms

In this chapter we take as a central theme the notion of the direct sum
A ⊕ B of subspaces A, B of a vector space V. Recall that V = A ⊕ B
if and only if every x ∈ V can be expressed uniquely in the form a + b
where a ∈ A and b ∈ B; equivalently, if V = A + B and A ∩ B = {0}. For
every subspace A of V there is a subspace B of V such that V = A ⊕ B.
In the case where V is of finite dimension, this is easily seen: take a basis
{v₁, ..., v_k} of A, extend it to a basis {v₁, ..., v_n} of V, then note that
{v_{k+1}, ..., v_n} spans a subspace B such that V = A ⊕ B.
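The basis-extension argument above is easy to carry out mechanically. The following sketch (my own illustration, not from the text; the function names are invented) extends a basis of a subspace of ℚ³ to a basis of the whole space by greedily adjoining standard basis vectors, and reads off a complement, using exact rational arithmetic.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a list of vectors over Q and return the rank."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def extend_to_basis(basis_of_A, n):
    """Adjoin standard basis vectors of F^n that enlarge the span;
    the added vectors span a complement B with V = A (+) B."""
    current = list(basis_of_A)
    complement = []
    for j in range(n):
        e = [Fraction(int(i == j)) for i in range(n)]
        if rank(current + [e]) > rank(current):
            current.append(e)
            complement.append(e)
    return complement

# A = span{(1, 2, 3)} in Q^3; the two added vectors span a complement.
comp = extend_to_basis([[1, 2, 3]], 3)
```

The greedy choice works because adjoining a vector either leaves the span unchanged or raises its dimension by exactly one.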
If f : V → V is a linear transformation then a subspace W of V is
said to be f-invariant if f maps W into itself. If W is f-invariant then
there is an ordered basis of V with respect to which the matrix of f is
of the form

    ( M  N )
    ( 0  P )

where M is of size dim W × dim W.


If f : V → V is such that f ∘ f = f then f is called a projection.
For such a linear transformation we have V = Im f ⊕ Ker f where the
subspace Im f is f-invariant (and the subspace Ker f is trivially so). A
vector space V is the direct sum of subspaces W₁, ..., W_k if and only if
there are non-zero projections p₁, ..., p_k : V → V such that

    id_V = p₁ + p₂ + ⋯ + p_k  and  p_i ∘ p_j = 0 for i ≠ j.

In this case W_i = Im p_i for each i, and relative to given ordered bases of

W₁, ..., W_k the matrix of f is of the diagonal block form

    ( M₁            )
    (     M₂        )
    (         ⋱     )
    (           M_k )

Of particular importance is the situation where each M_i is of the form

          ( λ 1 0 ... 0 0 )
          ( 0 λ 1 ... 0 0 )
    M_i = ( 0 0 λ ... 0 0 )
          ( . . .     . . )
          ( 0 0 0 ... λ 1 )
          ( 0 0 0 ... 0 λ )
in which case the diagonal block matrix is called a Jordan matrix.
The Cayley-Hamilton theorem says that a linear transformation f is
a zero of its characteristic polynomial. The minimum polynomial of f
is the monic polynomial of least degree of which f is a zero. When the
minimum polynomial of f factorises into a product of linear polynomials
then there is a basis of V with respect to which the matrix of f is a
Jordan matrix. This matrix is unique (up to the sequence of the diagonal
blocks), the diagonal entries λ above are the eigenvalues of f, and the
number of M_i associated with a given λ is the geometric multiplicity of
λ. The corresponding basis is called a Jordan basis.
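The Cayley-Hamilton theorem is easy to test numerically. The sketch below (my own illustration, not part of the text) computes the characteristic polynomial of a sample matrix by the Faddeev–LeVerrier recurrence and then checks, in exact arithmetic, that the matrix is a zero of it.

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(A):
    """Coefficients [1, c1, ..., cn] of det(xI - A), Faddeev-LeVerrier."""
    n = len(A)
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs, N = [Fraction(1)], I
    for k in range(1, n + 1):
        AN = mat_mul(A, N)
        c = -sum(AN[i][i] for i in range(n)) / k
        coeffs.append(c)
        N = [[AN[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]
    return coeffs

A = [[Fraction(v) for v in row] for row in [[2, 1, 0], [0, 1, -1], [1, 0, 3]]]
c = char_poly(A)

# Evaluate p(A) by Horner's scheme: the result should be the zero matrix.
n = len(A)
P = [[c[0] if i == j else Fraction(0) for j in range(n)] for i in range(n)]
for coef in c[1:]:
    P = mat_mul(P, A)
    for i in range(n):
        P[i][i] += coef
```

The sample matrix is arbitrary; the same check runs unchanged on any square matrix with rational entries.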
We mention here that, for space considerations in the solutions, we
shall often write an eigenvector

    ( x₁ )
    ( x₂ )
    (  ⋮ )
    ( x_n )

as (x₁, x₂, ..., x_n).

1.1 Which of the following statements are true? For those that are false,
give a counter-example.
(i) If {a₁, a₂, a₃} is a basis for ℝ³ and b is a non-zero vector in ℝ³ then
{b + a₁, a₂, a₃} is also a basis for ℝ³.

(ii) If A is a finite set of linearly independent vectors then the dimension
of the subspace spanned by A is equal to the number of vectors in A.
(iii) The subspace {(x, x, x) | x ∈ ℝ} of ℝ³ has dimension 3.
(iv) If A is a linearly dependent set of vectors in ℝⁿ then there are more
than n vectors in A.
(v) If A is a linearly dependent subset of ℝⁿ then the dimension of the
subspace spanned by A is strictly less than the number of vectors in A.
(vi) If A is a subset of ℝⁿ and the subspace spanned by A is ℝⁿ itself
then A contains exactly n vectors.
(vii) If A and B are subspaces of ℝⁿ then we can find a basis of ℝⁿ
which contains a basis of A and a basis of B.
(viii) An n-dimensional vector space contains only finitely many subspaces.
(ix) If A is an n × n matrix over ℚ with A³ = I then A is non-singular.
(x) If A is an n × n matrix over ℂ with A³ = I then A is non-singular.
(xi) An isomorphism between two vector spaces can always be repre-
sented by a square singular matrix.
(xii) Any two n-dimensional vector spaces are isomorphic.
(xiii) If A is an n × n matrix such that A² = I then A = I.
(xiv) If A, B and C are non-zero matrices such that AC = BC then
A = B.
(xv) The identity map on ℝⁿ is represented by the identity matrix with
respect to any basis of ℝⁿ.
(xvi) Given any two bases of ℝⁿ there is an isomorphism from ℝⁿ to
itself that maps one basis onto the other.
(xvii) If A and B represent linear transformations f, g : ℝⁿ → ℝⁿ with
respect to the same basis then there is a non-singular matrix P such
that P⁻¹AP = B.
(xviii) There is a bijection between the set of linear transformations from
ℝⁿ to itself and the set of n × n matrices over ℝ.
(xix) The map t : ℝ² → ℝ² given by t(x, y) = (y, x + y) can be represented
by the matrix

    ( 1 2 )
    ( 1 2 )

with respect to some basis of ℝ².
(xx) There is a non-singular matrix P such that P⁻¹AP is diagonal for
any non-singular matrix A.

1.2 Let t₁, t₂, t₃, t₄ ∈ L(ℝ³, ℝ³) be given by

    t₁(a, b, c) = (a + b, b + c, c + a);
    [prescriptions of t₂ and t₃ illegible in this copy]
    t₄(a, b, c) = (a, b, b).

Find Ker t_i and Im t_i for i = 1, 2, 3, 4. Is it true that ℝ³ = Ker t_i ⊕ Im t_i
for any of i = 1, 2, 3, 4?
Is Im t₂ t₁-invariant? Is Ker t₂ t₁-invariant?
Find t₃ ∘ t₄ and t₄ ∘ t₃. Compute the images and kernels of these
composites.
1.3 Let V be a vector space of dimension 3 over a field F and let t ∈ L(V, V)
be represented by the matrix

    (  3 −1  1 )
    ( −1  5 −1 )
    (  1 −1  3 )

with respect to some basis of V. Find dim Ker t and dim Im t when

(i) F = ℝ;
(ii) F = ℤ₂;
(iii) F = ℤ₃.
Is V = Ker t ⊕ Im t in any of cases (i), (ii) or (iii)?
1.4 Let V be a finite-dimensional vector space and let s, t ∈ L(V, V) be such
that s ∘ t = id_V. Prove that t ∘ s = id_V. Prove also that a subspace of V
is t-invariant if and only if it is s-invariant. Are these results true when
V is infinite-dimensional?
1.5 Let V_n be the vector space of polynomials of degree less than n over
the field ℝ. If D ∈ L(V_n, V_n) is the differentiation map, find Im D and
Ker D. Prove that Im D ≅ V_{n−1} and that Ker D ≅ ℝ. Is it true that

    V_n ≅ Im D ⊕ Ker D?

Do the same results hold if the ground field ℝ is replaced by the field ℤ_p?

1.6 Let V be a finite-dimensional vector space and let t ∈ L(V, V). Establish
the chains

    V ⊇ Im t ⊇ Im t² ⊇ ⋯ ⊇ Im tⁿ ⊇ Im t^{n+1} ⊇ ⋯ ;
    {0} ⊆ Ker t ⊆ Ker t² ⊆ ⋯ ⊆ Ker tⁿ ⊆ Ker t^{n+1} ⊆ ⋯ .

Show that there is a positive integer p such that Im t^p = Im t^{p+1} and
deduce that

    (∀k ≥ 1)  Im t^p = Im t^{p+k} and Ker t^p = Ker t^{p+k}.

Show also that

    V = Im t^p ⊕ Ker t^p

and that the subspaces Im t^p and Ker t^p are t-invariant.


1.7 Let V be a vector space of dimension n over a field F and let f : V → V
be a non-zero linear transformation such that f ∘ f = 0. Show that if
Im f is of dimension r then 2r ≤ n. Suppose now that W is a subspace
of V such that V = Ker f ⊕ W. Show that W is of dimension r and
that if {w₁, ..., w_r} is a basis of W then {f(w₁), ..., f(w_r)} is a linearly
independent subset of Ker f. Deduce that n − 2r elements x₁, ..., x_{n−2r}
can be chosen in Ker f such that

    {w₁, ..., w_r, f(w₁), ..., f(w_r), x₁, ..., x_{n−2r}}

is a basis of V.
Hence show that a non-zero n × n matrix A over F is such that A² = 0
if and only if A is similar to a matrix of the form

    (  0  0  0 )
    ( I_r 0  0 )
    (  0  0  0 )

1.8 Let V be a vector space of dimension 4 over ℝ. Let a basis of V be
B = {b₁, b₂, b₃, b₄}. Writing each x ∈ V as x = x₁b₁ + x₂b₂ + x₃b₃ + x₄b₄, let

    V₁ = {x ∈ V | x₃ = x₂ and x₄ = x₁},
    V₂ = {x ∈ V | x₃ = −x₂ and x₄ = −x₁}.

Show that
(1) V₁ and V₂ are subspaces of V;

(2) {b₁ + b₄, b₂ + b₃} is a basis of V₁ and {b₁ − b₄, b₂ − b₃} is a basis
of V₂;
(3) V = V₁ ⊕ V₂;
(4) with respect to the basis B and the basis

    C = {b₁ + b₄, b₂ + b₃, b₂ − b₃, b₁ − b₄}

the matrix of id_V is

    ( 1/2   0    0   1/2 )
    (  0   1/2  1/2   0  )
    (  0   1/2 −1/2   0  )
    ( 1/2   0    0  −1/2 )
A 4 × 4 matrix M over ℝ is said to be centro-symmetric if

    m_{ij} = m_{5−i,5−j}

for all i, j. If M is centro-symmetric, show that M is similar to a matrix
of the form

    ( α β 0 0 )
    ( γ δ 0 0 )
    ( 0 0 ε ζ )
    ( 0 0 η θ )
1.9 Let V be a vector space of dimension n over a field F. Suppose first
that F is not of characteristic 2 (i.e. that 1_F + 1_F ≠ 0_F). If f : V → V
is a linear transformation such that f ∘ f = id_V prove that

    V = Im(id_V + f) ⊕ Im(id_V − f).

Deduce that an n × n matrix A over F is such that A² = I_n if and only
if A is similar to a matrix of the form

    ( I_p   0       )
    (  0  −I_{n−p} )

Suppose now that F is of characteristic 2 and that f ∘ f = id_V. If
g = id_V + f show that

    x ∈ Ker g ⟺ x = f(x),

and that g ∘ g = 0. Deduce that an n × n matrix A over F is such that
A² = I_n if and only if A is similar to a matrix of the form

    ( I_{n−2p}               )
    (          1 1           )
    (          0 1           )
    (              ⋱         )
    (                  1 1   )
    (                  0 1   )

[Hint. Observe that Im g ⊆ Ker g. Let {g(c₁), ..., g(c_p)} be a basis of
Im g and extend this to a basis {b₁, ..., b_{n−2p}, g(c₁), ..., g(c_p)} of Ker g.
Show that

    {b₁, ..., b_{n−2p}, g(c₁), c₁, ..., g(c_p), c_p}

is a basis of V.]
1.10 Let V be a finite-dimensional vector space and let t ∈ L(V, V) be such
that t ≠ id_V and t ≠ 0. Is it possible to have Im t ∩ Ker t ≠ {0}? Is
it possible to have Im t = Ker t? Is it possible to have Im t ⊂ Ker t? Is
it possible to have Ker t ⊂ Im t? Which of these are possible if t is a
projection?
1.11 Is it possible to have projections e, f ∈ L(V, V) with Ker e = Ker f and
Im e ≠ Im f? Is it possible to have Im e = Im f and Ker e ≠ Ker f? Is it
possible to have projections e, f with e ∘ f = 0 but f ∘ e ≠ 0?
1.12 Let V be a vector space over a field of characteristic not equal to 2. Let
e₁, e₂ ∈ L(V, V) be projections. Prove that e₁ + e₂ is a projection if and
only if e₁ ∘ e₂ = e₂ ∘ e₁ = 0.
If e₁ + e₂ is a projection, find Im(e₁ + e₂) and Ker(e₁ + e₂) in terms
of the images and kernels of e₁, e₂.
1.13 Let V be the subspace of ℝ³ given by

    V = {(a, a, 0) | a ∈ ℝ}.

Find a subspace U of ℝ³ such that ℝ³ = V ⊕ U. Is U unique? Find a
projection e ∈ L(ℝ³, ℝ³) such that Im e = V and Ker e = U. Find also
a projection f ∈ L(ℝ³, ℝ³) such that Im f = U and Ker f = V.

1.14 If V is a finite-dimensional vector space over a field F and e, f ∈ L(V, V)
are projections prove that Im e = Im f if and only if e ∘ f = f and f ∘ e = e.
Suppose that e₁, ..., e_k ∈ L(V, V) are projections with

    Im e₁ = Im e₂ = ⋯ = Im e_k.

Let λ₁, λ₂, ..., λ_k ∈ F be such that λ₁ + λ₂ + ⋯ + λ_k = 1. Prove that

    e = λ₁e₁ + λ₂e₂ + ⋯ + λ_k e_k

is a projection with Im e = Im e_i.
Is it necessarily true that if f₁, ..., f_k ∈ L(V, V) are projections and
λ₁ + ⋯ + λ_k = 1 then λ₁f₁ + ⋯ + λ_k f_k is also a projection?
1.15 A net over the interval [0,1] of ℝ is a finite sequence (a_i)_{0≤i≤n+1} such
that

    0 = a₀ < a₁ < ⋯ < a_n < a_{n+1} = 1.

A step function on [0,1[ is a mapping f : [0,1[ → ℝ for which there exists
a net (a_i)_{0≤i≤n+1} over [0,1] and a finite sequence (b_i)_{0≤i≤n} of elements
of ℝ such that

    (∀x ∈ [a_i, a_{i+1}[)  f(x) = b_i.

Show that the set E of step functions on [0,1[ is a vector space over ℝ
and that a basis of E is the set {e_k | k ∈ [0,1[} of functions e_k : [0,1[ → ℝ
given by

    e_k(x) = { 0 if 0 ≤ x < k;
             { 1 if k ≤ x < 1.

A piecewise linear function on [0,1[ is a mapping f : [0,1[ → ℝ for
which there exists a net (a_i)_{0≤i≤n+1} and sequences (b_i)_{0≤i≤n}, (c_i)_{0≤i≤n}
of elements of ℝ such that

    (∀x ∈ [a_i, a_{i+1}[)  f(x) = b_i x + c_i.

Let F be the set of piecewise linear functions on [0,1[ and let G be
the subset of F consisting of the piecewise linear functions g that are
continuous with g(0) = 0. Show that F, G are vector spaces over ℝ and
that F = E ⊕ G.
Show that a basis of G is the set {g_k | k ∈ [0,1[} of functions given
by

    g_k(x) = { 0 if 0 ≤ x ≤ k;
             { x − k if k ≤ x < 1.

Finally, show that the assignment

    e_k ↦ g_k

describes an isomorphism from E to G.

1.16 Let V be a vector space over a field F and let t ∈ L(V, V). Let λ₁ and
λ₂ be distinct eigenvalues of t with associated eigenvectors v₁ and v₂. Is
it possible for λ₁ + λ₂ to be an eigenvalue of t? What about λ₁λ₂?

1.17 Let t ∈ L(ℂ², ℂ²) be given by

    t(a, b) = (a + 2b, b − a).

Find the eigenvalues of t and show that there is a basis of ℂ² consisting


of eigenvectors of t. Find such a basis, and the matrix of t with respect
to this basis.

1.18 Suppose that t ∈ L(V, V) has zero as an eigenvalue. Prove that t is not
invertible. Is it true that t is invertible if and only if all the eigenvalues
of t are non-zero? If t is invertible, how are the eigenvalues of t related
to those of t⁻¹?

1.19 Let V be a vector space of finite dimension over ℚ and let t ∈ L(V, V)
be such that t^m = 0 for some m > 0. Prove that all the eigenvalues of t
are zero. Deduce that if t ≠ 0 then t is not diagonalisable.

1.20 Consider t ∈ L(ℝ², ℝ²) given by

    [prescription illegible in this copy]

Find the minimum polynomial of t.


1.21 Let F be a field and let F_{n+1}[X] be the vector space of polynomials of
degree less than or equal to n over F. Define t : F_{n+1}[X] → F_{n+1}[X]
by t(f(X)) = f(X + 1). Show that t is linear.
Find the matrix of t relative to the basis {1, X, ..., Xⁿ} of F_{n+1}[X].
Find also the eigenvalues of t. If g(X) = (X − 1)^{n+1} show that g(t) = 0.
Hence find the minimum polynomial of t.
1.22 If V is a finite-dimensional vector space and t ∈ L(V, V) is such that
t² = id_V prove that the sum of the eigenvalues of t is an integer.

1.23 For each of the following real matrices, determine


(i) the eigenvalues;
(ii) the geometric multiplicity of each eigenvalue;
(iii) whether it is diagonalisable.

For those matrices A that are diagonalisable, find an invertible matrix
P such that P⁻¹AP is diagonal.

    (a) [illegible in this copy]

    (b) (  3 −1  1 )
        ( −1  5 −1 )
        (  1 −1  3 )

    (c) [only fragments legible in this copy]

    (d) ( 2 1 −1 )
        ( 0 5  1 )
        ( 0 0  1 )

    (e) (  1 0 1 )
        (  0 2 1 )
        ( −1 0 3 )
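A quick computational check on matrix (b) above (a sketch of my own, not part of the text): its characteristic polynomial turns out to have three distinct real roots, and a matrix whose eigenvalues are all distinct is always diagonalisable.

```python
# Matrix (b) of 1.23; search for integer t with det(A - t*I) = 0.
A = [[3, -1, 1], [-1, 5, -1], [1, -1, 3]]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def shifted(M, t):
    return [[M[i][j] - (t if i == j else 0) for j in range(3)]
            for i in range(3)]

candidates = [t for t in range(-10, 11) if det3(shifted(A, t)) == 0]
```

The three roots found sum to the trace 11, so no eigenvalue has been missed.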

1.24 Consider the sequence described by

    1/1, 3/2, 7/5, ..., a_n/b_n, ...

where a_{n+1} = a_n + 2b_n and b_{n+1} = a_n + b_n.
Find a matrix A such that

    ( a_{n+1} )     ( a_n )
    ( b_{n+1} ) = A ( b_n ).

By diagonalising A, obtain explicit formulae for a_n and b_n and hence
show that

    lim_{n→∞} a_n/b_n = √2.
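One way to see where the problem is heading (an illustrative sketch of mine, not the book's worked solution): the recurrence is x_{n+1} = Ax_n with A = (1 2; 1 1), whose eigenvalues are 1 ± √2. The closed forms below are the candidates that diagonalisation with a₁ = b₁ = 1 produces, and they can be checked against direct iteration.

```python
import math

# Iterate a_{n+1} = a_n + 2*b_n, b_{n+1} = a_n + b_n from a_1 = b_1 = 1.
a, b = 1, 1
seq = [(a, b)]
for _ in range(24):
    a, b = a + 2 * b, a + b
    seq.append((a, b))

s = 1 + math.sqrt(2)          # dominant eigenvalue of [[1, 2], [1, 1]]
t = 1 - math.sqrt(2)          # the other eigenvalue

def closed(n):
    """Candidate closed forms obtained by diagonalisation (n >= 1)."""
    an = (s ** n + t ** n) / 2
    bn = (s ** n - t ** n) / (2 * math.sqrt(2))
    return an, bn

ratio = seq[-1][0] / seq[-1][1]
```

Because |1 − √2| < 1, the second eigenvalue's contribution dies out and the ratio a_n/b_n is driven to √2.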

1.25 Let t be a singular transformation on a real vector space V. Let f(X)
and g(X) be real polynomials whose highest common factor is 1. Let
a = f(t) and b = g(t).
Prove that every eigenvector of a ∘ b that is associated with the
eigenvalue 0 is the sum of an eigenvector of a associated with the eigenvalue
0 and an eigenvector of b associated with the eigenvalue 0.
1.26 Suppose that s, t ∈ L(V, V) each have n = dim V distinct eigenvalues
and that s ∘ t = t ∘ s. Prove that, for every λ in the ground field F,

    C_λ = {v ∈ V | t(v) = λv}

is a subspace of V. Show that C_λ is s-invariant and that, when λ_i is an
eigenvalue of t, the subspace C_{λ_i} has dimension 1.
Hence show that the matrix of s with respect to the basis of eigenvec-
tors of t is diagonal.

1.27 Let u be a non-zero vector in an n-dimensional vector space V over a
field F, let t ∈ L(V, V), and let U be the subspace spanned by

    {u, t(u), t²(u), t³(u), ...}.

Show that there is a greatest integer r such that the set

    {u, t(u), t²(u), ..., t^{r−1}(u)}

is linearly independent and deduce that this set is a basis for U. Show
also that U is t-invariant.
Show that there is a non-zero monic polynomial f(X) ∈ F[X] of de-
gree r such that [f(t)](u) = 0. If t_U : U → U is the linear transformation
induced by t, show that its minimum polynomial is f(X).
In the case where u = (1, 1, 0) ∈ ℝ³ and t is given by

    t(x, y, z) = (x + y, z − y, z),

find the minimum polynomial of t_U.
1.28 Let r, s, t be non-zero linear transformations on a finite-dimensional vec-
tor space V such that r ∘ t ∘ r = 0. Let p = r ∘ s and q = r ∘ (s + t)
and suppose that the minimum polynomials of p, q are p(X), q(X) re-
spectively. Prove that (with composites written as products)
(1) qⁿ = p^{n−1}q and pⁿ = q^{n−1}p for all n > 1;
(2) p(X) and q(X) are divisible by X;
(3) q satisfies Xp(X) = 0, and p satisfies Xq(X) = 0.
Deduce that one of the following holds:
(i) p(X) = q(X);
(ii) p(X) = Xq(X);
(iii) q(X) = Xp(X).
1.29 A 3 × 3 complex matrix M is said to be magic if every row sum, every
column sum, and both diagonal sums are equal to some R ∈ ℂ.
If M is magic, prove that R = 3m₂₂. Deduce that, given α, β, γ ∈ ℂ,
there is a unique magic matrix M(α, β, γ) such that

    m₂₂ = α,  m₁₁ = α + β,  m₃₁ = α + γ.

Show that {M(α, β, γ) | α, β, γ ∈ ℂ} is a subspace of Mat_{3×3}(ℂ) and
that

    B = {M(1, 0, 0), M(0, 1, 0), M(0, 0, 1)}

is a basis of this subspace.
If f : ℂ³ → ℂ³ represents M(α, β, γ) relative to the canonical basis
{e₁, e₂, e₃}, show that e₁ + e₂ + e₃ is an eigenvector of f. Determine the
matrix of f relative to the basis {e₁ + e₂ + e₃, e₂, e₃}. Hence find the
eigenvalues of M(α, β, γ).

1.30 Let V be a vector space of dimension n over a field F. A linear trans-
formation f : V → V (respectively, an n × n matrix A over F) is said to
be nilpotent of index p if there is an integer p ≥ 1 such that f^{p−1} ≠ 0
and f^p = 0 (respectively, A^{p−1} ≠ 0 and A^p = 0).
Show that if f is nilpotent of index p and x ∈ V \ {0} is such that
f^{p−1}(x) ≠ 0 then

    {x, f(x), f²(x), ..., f^{p−1}(x)}

is a linearly independent subset of V. Hence show that f is nilpotent
of index n if and only if there is an ordered basis of V with respect to
which the matrix of f is

    (    0    0 )
    ( I_{n−1} 0 )

Deduce that an n × n matrix A over F is nilpotent of index n if and
only if A is similar to this matrix.
1.31 Let V be a finite-dimensional vector space over ℝ and let f : V → V be
a linear transformation such that f ∘ f = −id_V. Extend the external
law ℝ × V → V to an external law ℂ × V → V by defining, for all x ∈ V
and all α + iβ ∈ ℂ,

    (α + iβ)x = αx + βf(x).

Show that in this way V becomes a vector space over ℂ. Use the identity

    Σ_j (α_j + iβ_j)v_j = Σ_j (α_j v_j + β_j f(v_j))

to show that if {v₁, ..., v_r} is a linearly independent subset of the ℂ-
vector space V then {v₁, ..., v_r, f(v₁), ..., f(v_r)} is a linearly indepen-
dent subset of the ℝ-vector space V. Deduce that the dimension of V
as a vector space over ℂ is finite, n say, and that dim_ℝ V = 2n.
Hence show that a 2n × 2n matrix A over ℝ is such that A² = −I_{2n}
if and only if A is similar to the matrix

    (  0  −I_n )
    ( I_n   0  )

1.32 Let A be a real skew-symmetric matrix with eigenvalue λ. Prove that
the real part of λ is zero, and that λ̄ is also an eigenvalue.
If (A − λI)²Z = 0 and Y = (A − λI)Z show, by evaluating

    Ȳᵗ(A − λI)Z,

that Y = 0. Hence prove that A satisfies a polynomial equation without
repeated roots, and deduce that A is similar to a diagonal matrix.
If x is an eigenvector corresponding to the eigenvalue λ = iα and if
u = x + x̄, v = i(x − x̄) show that

    Au = αv,  Av = −αu.

Hence show that A is similar to a diagonal block matrix

    ( 0             )
    (    A₁         )
    (        ⋱      )
    (           A_k )

where each A_i is real and of the form

    (  0   α_i )
    ( −α_i  0  )
1.33 Let V be a vector space of dimension 3 over ℝ and let t ∈ L(V, V) have
eigenvalues −2, 1, 2. Use the Cayley-Hamilton theorem to express t^{2n}
as a real quadratic polynomial in t.
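Interpolating at the three eigenvalues produces candidate coefficients; the sketch below (my own check, not the book's solution) verifies on the scalars −2, 1, 2 — which is all that matters on a basis of eigenvectors — that t^{2n} = ((4ⁿ − 1)/3)t² + ((4 − 4ⁿ)/3)id holds for several n.

```python
from fractions import Fraction

# Any t with eigenvalues -2, 1, 2 acts diagonally on a basis of
# eigenvectors, so checking the three scalars suffices.
eigs = [-2, 1, 2]

def check(n):
    a = Fraction(4 ** n - 1, 3)   # candidate coefficient of t^2
    c = Fraction(4 - 4 ** n, 3)   # candidate coefficient of id
    return all(l ** (2 * n) == a * l ** 2 + c for l in eigs)

results = [check(n) for n in range(8)]
```

The coefficient of t itself vanishes because (−2)^{2n} = 2^{2n}, so the odd part of the interpolation drops out.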
1.34 Let V be a vector space of dimension n over a field F and let f ∈ L(V, V)
be such that all the zeros of the characteristic polynomial of f lie in F.
Let λ₁ be an eigenvalue of f and let b₁ be an associated eigenvector.
Let W be such that V = Fb₁ ⊕ W and let (b′_i)_{2≤i≤n} be an ordered basis
of W. Show that the matrix of f relative to the basis {b₁, b′₂, ..., b′_n} is
of the form

    ( λ₁ β₁₂ ⋯ β₁ₙ )
    (  0           )
    (  ⋮     M     )
    (  0           )

Observe that in general β₁₂, ..., β₁ₙ are non-zero, so that W is not f-
invariant. Let π be the projection of V onto W and let g = π ∘ f. Show
that W is g-invariant and that if g′ is the linear transformation induced
on W by g then Mat(g′, (b′_i)) = M. Show also that all the zeros of the
characteristic polynomial of g′ lie in F.
Deduce that f is triangularisable in the sense that there is a basis B
of V relative to which the matrix of f is upper triangular with diagonal
entries the eigenvalues of f.


1.35 Suppose that t ∈ L(ℝ³, ℝ³) is given by

    t(a, b, c) = (2a + b − c, −2a − b + 3c, c).

Find the eigenvalues and the minimum polynomial of t. Show that t is
not diagonalisable. Find a basis of ℝ³ with respect to which the matrix
of t is upper triangular.
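With respect to the canonical basis t has matrix A = (2 1 −1; −2 −1 3; 0 0 1). A computational check of mine (not the text's argument): A(A − I) is non-zero while A(A − I)² = 0, so the minimum polynomial is X(X − 1)², and its repeated root already shows that t is not diagonalisable.

```python
def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[2, 1, -1], [-2, -1, 3], [0, 0, 1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
B = [[A[i][j] - I[i][j] for j in range(3)] for i in range(3)]   # A - I

AB = mul(A, B)       # A(A - I): should be non-zero
ABB = mul(AB, B)     # A(A - I)^2: should be the zero matrix

def is_zero(M):
    return all(x == 0 for row in M for x in row)
```

Since neither A nor A − I alone annihilates, no proper divisor of X(X − 1)² can be the minimum polynomial.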
1.36 Let V = ℚ₃[X] be the vector space of polynomials of degree less than
or equal to 2 over the field ℚ. If t ∈ L(V, V) is given by

    [prescription illegible in this copy]

show that t is nilpotent. Find a basis of V with respect to which the
matrix of t is upper triangular.
1.37 For each of the following matrices A find a Jordan normal form and an
invertible matrix P such that P⁻¹AP is in Jordan normal form.

    (a) ( 39 −64 )
        ( 25 −41 )

    (b) ( −1 −1 )
        [second row illegible in this copy]

    (c) ( 1 3 −2 )
        ( 0 7 −4 )
        ( 0 9 −5 )

    (d) ( 3 0 1 )
        ( 0 3 0 )
        ( 0 0 3 )
1.38 Find a Jordan normal form J of the matrix

        ( 2  1  1  1 0 )
        ( 0  2  0  0 0 )
    A = ( 0  0  2  1 0 )
        ( 0  0  0  1 1 )
        ( 0 −1 −1 −1 0 )

Find also a Jordan basis and hence an invertible matrix P such that
P⁻¹AP = J.
1.39 For each of the following matrices A find a Jordan normal form J, a
Jordan basis, and an invertible matrix P such that P⁻¹AP = J.

    (a) ( 22 −2 −12 )
        ( 20  0 −12 )
        ( 30 −3 −16 )

    (b) ( −13  8 1  2 )
        ( −22 13 0  3 )
        (   8 −5 0 −1 )
        ( −22 13 5  5 )

1.40 Find a Jordan normal form and a Jordan basis for the matrix

    ( 5 −1 −3 2 −5 )
    ( 0  2  0 0  0 )
    ( 1  0  1 1 −2 )
    ( 0 −1  0 3  1 )
    ( 1 −1 −1 1  1 )

1.41 Find the minimum polynomial of the matrix

        (  1  0 −1 1 0 )
        ( −4  1 −3 2 1 )
    A = ( −2 −1  0 1 1 )
        ( −3 −1 −3 4 1 )
        ( −8 −2 −7 5 4 )

From only the information given by the minimum polynomial, how many
essentially different Jordan normal forms are possible? How many lin-
early independent eigenvectors are there? Does the number of linearly
independent eigenvectors determine the Jordan normal form J? If not,
does the information given by the minimum polynomial together with
the number of linearly independent eigenvectors determine J?
1.42 Find a Jordan normal form of the differentiation map D on the vector
space ℝ₄[X] of polynomials of degree less than or equal to 3 with real
coefficients. Find also a Jordan basis for D on ℝ₄[X].
1.43 If a 3 × 3 real matrix has eigenvalues 3, 3, 3 what are the possible Jordan
normal forms? Which of these are similar?
1.44 Which of the following are true? If A, B ∈ Mat_{n×n}(ℂ) then AB and BA
have the same Jordan normal form
(i) if A and B are both invertible;
(ii) if one of A, B is invertible;
(iii) if and only if A and B are invertible;
(iv) if and only if one of A, B is invertible.
1.45 Let V be a vector space of dimension n over ℂ. Let t ∈ L(V, V) and let
λ be an eigenvalue of t. Let J be a matrix that represents t relative to
some Jordan basis of V. Show that there are

    dim Ker(t − λ id_V)

blocks in J with diagonal entries λ.
More generally, if n_i is the number of i × i blocks with diagonal entries
λ and d_i = dim Ker(t − λ id_V)^i, show that

    d_i = n₁ + 2n₂ + ⋯ + (i − 1)n_{i−1} + i(n_i + n_{i+1} + ⋯).

Deduce that n_i = 2d_i − d_{i−1} − d_{i+1}.
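The formula n_i = 2d_i − d_{i−1} − d_{i+1} can be sanity-checked on a Jordan matrix assembled from blocks of chosen sizes (an illustration of mine, not part of the text): build J from blocks of sizes 1, 2, 2, 3 with one eigenvalue, compute d_i = dim Ker(J − λI)^i by row reduction, and recover the block counts.

```python
from fractions import Fraction

def jordan(lam, sizes):
    """Assemble a Jordan matrix with the given block sizes."""
    n = sum(sizes)
    J = [[0] * n for _ in range(n)]
    pos = 0
    for s in sizes:
        for k in range(s):
            J[pos + k][pos + k] = lam
            if k + 1 < s:
                J[pos + k][pos + k + 1] = 1
        pos += s
    return J

def mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def nullity(M):
    m = [[Fraction(x) for x in row] for row in M]
    rank, n = 0, len(m)
    for c in range(n):
        piv = next((i for i in range(rank, n) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for i in range(n):
            if i != rank and m[i][c] != 0:
                f = m[i][c] / m[rank][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[rank])]
        rank += 1
    return n - rank

lam, sizes = 3, [1, 2, 2, 3]
J = jordan(lam, sizes)
n = len(J)
N = [[J[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]

d = [0]                       # d_0 = 0
P = [[int(i == j) for j in range(n)] for i in range(n)]
for _ in range(n + 1):
    P = mul(P, N)
    d.append(nullity(P))

n_i = [2 * d[i] - d[i - 1] - d[i + 1] for i in range(1, n + 1)]
```

Here d₁ counts the blocks (4), and the sequence stabilises once the power exceeds the largest block size.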
1.46 Find a Jordan normal form J of the matrix

        (  0  1 0 −1 )
    A = ( −2  3 0 −1 )
        ( −2  1 2 −1 )
        (  2 −1 0  3 )

Find also a Jordan basis and an invertible matrix P such that P⁻¹AP = J.
Hence solve the system of differential equations

    x′ = Ax

where x = (x₁, x₂, x₃, x₄)ᵗ and A is the matrix above.
1.47 Solve each of the following systems of differential equations:

    dx₁/dt − 5x₁ + 6x₂ + 6x₃ = 0          dx₁/dt = x₁ + 3x₂ − 2x₃
    dx₂/dt + x₁ − 4x₂ = 0                 dx₂/dt = 7x₂ − 4x₃
    dx₃/dt [remainder illegible] = 0      dx₃/dt = [illegible]
1.48 Solve the system of differential equations

    [system illegible in this copy]

1.49 Solve the system of differential equations

    [system illegible in this copy]

given that x₁(0) = 0 and x₂(0) = 1.


1.50 Show how the differential equation

    x‴ − 2x″ − 4x′ + 8x = 0

can be written as a first-order matrix system X′ = AX. By using the
method of the Jordan normal form, solve the equation given the initial
conditions

    x(0) = 0,  x′(0) = 0,  x″(0) = 16.
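Writing X = (x, x′, x″)ᵗ turns the equation into X′ = AX with a companion matrix, and the characteristic polynomial factorises as (λ − 2)²(λ + 2), so solutions have the shape (a + bt)e^{2t} + ce^{−2t}. The sketch below (my own check, not the book's worked solution) verifies that the candidate x(t) = (4t − 1)e^{2t} + e^{−2t} fits both the equation and the initial conditions.

```python
import math

# Companion matrix of x''' = 2x'' + 4x' - 8x.
A = [[0, 1, 0], [0, 0, 1], [-8, 4, 2]]

def p(l):
    """Characteristic polynomial l^3 - 2l^2 - 4l + 8 of A."""
    return l ** 3 - 2 * l ** 2 - 4 * l + 8

def x(t):
    return (4 * t - 1) * math.exp(2 * t) + math.exp(-2 * t)

def dx(t):    # derivatives of x, computed by hand
    return (8 * t + 2) * math.exp(2 * t) - 2 * math.exp(-2 * t)

def d2x(t):
    return (16 * t + 12) * math.exp(2 * t) + 4 * math.exp(-2 * t)

def d3x(t):
    return (32 * t + 40) * math.exp(2 * t) - 8 * math.exp(-2 * t)

residual = max(abs(d3x(t) - 2 * d2x(t) - 4 * dx(t) + 8 * x(t))
               for t in [0.0, 0.3, 1.0, 2.5])
```

The repeated root 2 is what produces the factor (a + bt)e^{2t} rather than two independent pure exponentials.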

2: Duality and normal transformations

The dual of a vector space V over a field F is the vector space V^d =
L(V, F) of linear functionals f : V → F. If V is of finite dimension
and B = {v₁, ..., v_n} is a basis of V then the basis that is dual to B is
B^d = {v₁^d, ..., v_n^d} where each v_i^d : V → F is given by

    v_i^d(v_j) = δ_{ij}  (that is, 1 if i = j and 0 otherwise).

For every x ∈ V we have

    x = v₁^d(x)v₁ + ⋯ + v_n^d(x)v_n,

and for every f ∈ V^d we have

    f = f(v₁)v₁^d + ⋯ + f(v_n)v_n^d.

If (v_i)_n, (w_i)_n are ordered bases of V and (v_i^d)_n, (w_i^d)_n the correspond-
ing dual bases then the transition matrix from (v_i^d)_n to (w_i^d)_n is (P⁻¹)ᵗ
where P is the transition matrix from (v_i)_n to (w_i)_n. In particular,
consider V = ℝⁿ. Note that if

    B = {a₁, ..., a_n},  where a_i = (a_{i1}, ..., a_{in}),

is a basis of ℝⁿ then the transition matrix from B to the canonical basis
(e_i)_n of ℝⁿ is M = [m_{ij}]_{n×n} where m_{ij} = a_{ji}. The transition matrix
from B^d to (e_i^d)_n is given by (M⁻¹)ᵗ. We can therefore usefully denote
the dual basis by

    B^d = {[a_{11}, ..., a_{1n}], ..., [a_{n1}, ..., a_{nn}]}

where [a_{i1}, ..., a_{in}] denotes the ith row of M⁻¹, so that

    [a_{i1}, ..., a_{in}](x₁, ..., x_n) = a_{i1}x₁ + ⋯ + a_{in}x_n.
The bidual of an element x is x^∧ : V^d → F where x^∧(y^d) = y^d(x). It
is common practice to write y^d(x) as ⟨x, y^d⟩ and say that y^d annihilates
x if ⟨x, y^d⟩ = 0. For every subspace W of V the set

    W^⊥ = {f ∈ V^d | (∀x ∈ W) ⟨x, f⟩ = 0}

is a subspace of V^d and

    dim W + dim W^⊥ = dim V.

The transpose of a linear transformation f : V → W is the linear
mapping fᵗ : W^d → V^d described by y^d ↦ y^d ∘ f. When V is of finite
dimension we can identify V and its bidual (V^d)^d, in which case we have
that (fᵗ)ᵗ = f. Moreover, if f : V → W is represented relative to fixed
ordered bases by the matrix A then fᵗ : W^d → V^d is represented relative
to the corresponding dual bases by the transpose Aᵗ of A.
If V is a finite-dimensional inner product space then the mapping
x ↦ x^d describes a conjugate isomorphism from V to V^d, by which
we mean that

    (x + y)^d = x^d + y^d  and  (λx)^d = λ̄x^d.

The adjoint f* : W → V of f : V → W is the unique linear transformation
such that

    (∀x ∈ V)(∀y ∈ W)  (f(x) | y) = (x | f*(y)).

We say that f is normal if it commutes with its adjoint. If the matrix
of f relative to a given ordered basis is A then that of f* is Āᵗ. We say
that A is normal if it commutes with Āᵗ. A matrix is normal if and only
if it is unitarily similar to a diagonal matrix, i.e. if there is a matrix U
with U⁻¹ = Ūᵗ such that U⁻¹AU is diagonal. A particularly important
type of normal transformation occurs when the vector space in question
is a real inner product space, and topics dealt with in this section reach
as far as the orthogonal reduction of real symmetric matrices and its
application to finding the rank and signature of quadratic forms.
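Normality is a finite check on matrix entries. The toy sketch below (mine, not from the text) confirms that a real skew-symmetric matrix commutes with its adjoint — which for a real matrix is just the transpose — while a shear does not.

```python
def mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(P):
    n = len(P)
    return [[P[j][i] for j in range(n)] for i in range(n)]

def is_normal(P):
    """For a real matrix the adjoint is the transpose: test P P* = P* P."""
    Pt = transpose(P)
    return mul(P, Pt) == mul(Pt, P)

skew = [[0, -1], [1, 0]]     # skew-symmetric, hence normal
shear = [[1, 1], [0, 1]]     # a Jordan block, not normal
```

The non-normal shear is exactly the kind of matrix that forces a genuine Jordan block rather than a unitary diagonalisation.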

2.1 Determine which of the following mappings are linear functionals on the
vector space ℝ₃[X] of all real polynomials of degree less than or equal
to 2:

    [list of mappings illegible in this copy]
2.2 Let C[0,1] be the vector space of continuous functions f : [0,1] → ℝ. If
f₀ is a fixed element of C[0,1], prove that φ : C[0,1] → ℝ given by

    φ(f) = ∫₀¹ f₀(t)f(t) dt

is a linear functional.
2.3 Determine the basis of (ℝ³)^d that is dual to the basis

    {(1, 0, −1), (−1, 1, 0), (0, 1, 1)}

of ℝ³.
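Following the recipe in the chapter introduction, the dual basis consists of the rows of M⁻¹, where the columns of M are the given basis vectors. The sketch below (my own computational check, best consulted after attempting the problem) inverts M exactly and verifies the defining relations φ_i(v_j) = δ_{ij}.

```python
from fractions import Fraction

vectors = [(1, 0, -1), (-1, 1, 0), (0, 1, 1)]

def inverse(M):
    """Gauss-Jordan inversion over Q."""
    n = len(M)
    a = [[Fraction(M[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if a[i][c] != 0)
        a[c], a[piv] = a[piv], a[c]
        a[c] = [x / a[c][c] for x in a[c]]
        for i in range(n):
            if i != c:
                a[i] = [x - a[i][c] * y for x, y in zip(a[i], a[c])]
    return [row[n:] for row in a]

# Columns of M are the basis vectors.
M = [[vectors[j][i] for j in range(3)] for i in range(3)]
dual = inverse(M)      # row i is the functional dual to vectors[i]

def apply(phi, v):
    return sum(p * x for p, x in zip(phi, v))
```

Exact rational arithmetic matters here: the answer involves halves, which floating point would only approximate.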
2.4 Let A = {x₁, x₂} be a basis of a vector space V of dimension 2 and let
A^d = {φ₁, φ₂} be the corresponding dual basis of V^d. Find, in terms of
φ₁, φ₂ the basis of V^d that is dual to the basis A′ = {x₁ + 2x₂, 3x₁ + 4x₂}
of V.
2.5 Which of the following bases of (ℝ²)^d is dual to the basis {(−1, 2), (0, 1)}
of ℝ²?
(a) {[−1, 2], [0, 1]};  (b) {[−1, 0], [2, 1]};
(c) {[−1, 0], [−2, 1]};  (d) {[1, 0], [2, −1]}.
2.6 (i) Find a basis that is dual to the basis

    {(4, 5, −2, 11), (3, 4, −2, 6), (2, 3, −1, 4), (1, 1, −1, 3)}

of ℝ⁴.
(ii) Find a basis of ℝ⁴ whose dual basis is

    {[2, −1, 1, 0], [−1, 0, −2, 0], [−2, 2, 1, 0], [−8, 3, −3, 1]}.

2.7 Show that if V is a finite-dimensional vector space over a field F and if
A, B are subspaces of V such that V = A ⊕ B then V^d = A^⊥ ⊕ B^⊥.
Is it true that if V = A ⊕ B then V^d = A^d ⊕ B^d?
2.8 Let ℝ₃[X] be the vector space of polynomials over ℝ of degree less
than or equal to 2. Let t₁, t₂, t₃ be three distinct real numbers and for
i = 1, 2, 3 define mappings f_i : ℝ₃[X] → ℝ by

    f_i(p(X)) = p(t_i).

Show that B^d = {f₁, f₂, f₃} is a basis for the dual space (ℝ₃[X])^d and
determine a basis B = {p₁(X), p₂(X), p₃(X)} of ℝ₃[X] of which B^d is
the dual.
2.9 Let α = (1,2) and β = (5,6) be elements of IR2 and let φ = [3,4] be an
element of (IR2)d. Writing x̂ for the image of x under the canonical
embedding of IR2 into its bidual, determine

(a) α̂(φ);   (b) β̂(φ);
(c) (2α + 3β)̂(φ);   (d) (2α + 3β)̂([a, b]).

2.10 Prove that if S is a subspace of a finite-dimensional vector space V then

dim S + dim S⊥ = dim V.

If t ∈ L(U, V) and td ∈ L(Vd, Ud) is the dual of t, prove that

(Im t)⊥ = Ker td.

Deduce that if v ∈ V then one of the following holds:

(i) there exists u ∈ U such that t(u) = v;
(ii) there exists φ ∈ Vd such that td(φ) = 0 and φ(v) = 1.
Translate these results into a theorem on solving systems of linear
equations.
Show that (i) is not satisfied by the system

3x + y = 2
x + 2y = 1
−x + 3y = 1.

Find the linear functional φ whose existence is guaranteed by (ii).


2.11 If t : U → V and s : V → W are linear transformations, show that

(s ∘ t)d = td ∘ sd.

Prove that the dual of an injective linear transformation is surjective,
and that the dual of a surjective linear transformation is injective.

2.12 Let t ∈ L(IR3, IR3) be given by the prescription

t(a, b, c) = (2a + b, a + b + c, −c).

If X = {(1,0,0), (1,1,0), (1,1,1)} and Yd = {[1,0,0], [1,1,0], [1,1,1]},
find the matrix of td with respect to the bases Yd and Xd.
2.13 Let {α1, α2, α3} and {α1, α2, α3'} be bases of IR3 that differ only in the
third basis element. Suppose that {φ1, φ2, φ3} and {φ1', φ2', φ3'} are the
corresponding dual bases. Prove that φ3' is a scalar multiple of φ3.
2.14 Let C[0,1] denote the space of continuous functions on the interval [0,1].
Given g ∈ C[0,1], define Lg : C[0,1] → IR by

Lg(f) = ∫₀¹ f(t)g(t) dt.

Show that Lg is a linear functional.
Let x be a fixed element of [0,1] and define Fx : C[0,1] → IR by
Fx(f) = f(x). Show that Fx is a linear functional. Show also that there
is no g ∈ C[0,1] such that Fx = Lg.
2.15 By a canonical isomorphism ξ : V → Vd we mean an isomorphism ξ
such that, for all x, y ∈ V and all isomorphisms f : V → V, we have

(*) ⟨f(x), ξ[f(y)]⟩ = ⟨x, ξ(y)⟩

where the notation ⟨x, ξ(y)⟩ means [ξ(y)](x).

In this exercise we indicate a proof of the fact that if V is of dimension
n ≥ 1 over F then there is no canonical isomorphism ξ : V → Vd except
when n = 2 and F has two elements.
If ξ is such an isomorphism show that, for y ≠ 0, the subspace
Ker ξ(y) is of dimension n − 1.
Suppose first that n ≥ 3. If there exists t ≠ 0 with t ∈ Ker ξ(t),
let {t, x1, ..., x_{n-2}} be a basis of Ker ξ(t) and extend this to a basis
{t, x1, ..., x_{n-2}, z} of V. Let f : V → V be the (unique) linear trans-
formation such that

f(t) = t,  f(x1) = z,  f(z) = x1,  and f(xi) = xi for i ≠ 1.

Show that f is an isomorphism that does not satisfy (*). [Hint. Take
x = x1, y = t.] If, on the other hand, t ∉ Ker ξ(t) for all t ≠ 0 let

{x1, ..., x_{n-1}} be a basis of Ker ξ(t) so that {x1, ..., x_{n-1}, t} is a basis
of V. Show that

is also a basis of V. Show also that x2 ∈ Ker ξ(x1). Now show that if
f : V → V is the (unique) linear transformation such that

f(x1) = x2,  f(x2) = x1 + x2,  f(t) = t,  f(xi) = xi (i ≠ 1, 2)

then f is an isomorphism that does not satisfy (*). Conclude from these
observations that we must have n = 2.
Suppose now that F has more than two elements and let λ ∈ F be such
that λ ≠ 0, 1. If there exists t ≠ 0 such that t ∈ Ker ξ(t), observe that {t}
is a basis of Ker ξ(t) and extend this to a basis {t, z} of V. If f : V → V
is the (unique) linear transformation such that f(t) = t, f(z) = λz,
show that f is an isomorphism that does not satisfy (*). [Hint. Take
x = z, y = t.] If, on the other hand, t ∉ Ker ξ(t) for all t ≠ 0 let {z} be
a basis for Ker ξ(t) so that {z, t} is a basis for V. If f : V → V is the
(unique) linear transformation such that f(z) = λz, f(t) = t, show that
f is an isomorphism that does not satisfy (*). [Hint. Take x = y = z.]
Conclude from these observations that F must have two elements.
Now examine the vector space F2 where F = {0,1}.
[Hint. (F2)d is the set of linear mappings f : F × F → F. Since F2
has four elements there are 2⁴ = 16 mappings from F2 to F. Only
four of these are linear transformations from F2 to F, and each of these
is determined by its action on the natural basis of F2. Compute (F2)d
and determine a canonical isomorphism from F2 onto (F2)d.]
2.16 Let V be an inner product space of dimension k and let U be a subspace
of V of dimension k − 1 (a hyperplane). Show that there exists a unit
vector n in V such that

U = {x ∈ V | (n|x) = 0}.

Given v ∈ V, define
v' = v − 2(n|v)n.
Show that v − v' is orthogonal to U and that ½(v + v') ∈ U, so that v'
is the reflection of v in the hyperplane U. Show also that the mapping
t : V → V defined by
t(v) = v'
is linear and orthogonal. What can you say about its eigenvalues and
eigenvectors?

If s : IR3 → IR3 and t : IR4 → IR4 are respectively reflections in the
plane 3x − y + z = 0 and in the hyperplane 2x − y + 2z − t = 0, show
that the matrices of s and t are respectively

(1/11) [ -7  6 -6 ]          (1/5) [  1  2 -4  2 ]
       [  6  9  2 ]    and         [  2  4  2 -1 ]
       [ -6  2  9 ]                [ -4  2  1  2 ]
                                   [  2 -1  2  4 ]

2.17 Find the equations of the principal axes of the hyperbola

−x² + 6xy − y² = 1.

Find also the equations of the principal axes of the ellipsoid

7x² + 6y² + 5z² − 4xy − 4yz = 1.
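For the hyperbola the principal axes point along the orthonormal eigenvectors of the symmetric matrix of the form; a numerical sketch for −x² + 6xy − y² = 1 (the ellipsoid case is handled the same way):

```python
import numpy as np

# Symmetric matrix of -x^2 + 6xy - y^2 (cross term split as 3xy + 3yx).
A = np.array([[-1.0, 3.0],
              [3.0, -1.0]])

# Orthonormal eigenvectors of A point along the principal axes;
# eigh returns eigenvalues ascending (-4 then 2), so the form
# becomes 2u^2 - 4v^2 = 1 in the rotated coordinates.
vals, vecs = np.linalg.eigh(A)
print(vals)        # [-4.  2.]
print(vecs)        # columns proportional to (1, -1) and (1, 1)
```

The axes are therefore the lines y = x and y = −x.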

2.18 Let V be a finite-dimensional inner product space and let f : V → V
be linear. Show that if A is the matrix of f relative to an orthonormal
basis B of V then the matrix of the adjoint f* of f relative to B is the
transpose of the complex conjugate of A.
2.19 For every A ∈ Mat n×n(C) define the trace of A by tr(A) = Σ_{i=1}^{n} aii.
Show that if V is the vector space of n × n matrices over C then the
mapping
(A, B) ↦ (A|B) = tr(B*A),
where B* denotes the transpose of the complex conjugate of B, is an
inner product on V.
Consider, for every M ∈ V, the mapping fM : V → V defined by

fM(A) = MA.

Show that, relative to the above inner product, (fM)* = fM*.
2.20 Let V be a finite-dimensional inner product space. Show that for every
f ∈ Vd there is a unique β ∈ V such that

(∀x ∈ V)  f(x) = (x|β).
[Hint. Let {a1, ..., an} be an orthonormal basis of V and consider

Show as follows that this result does not necessarily hold for inner
product spaces of infinite dimension. Let V be the vector space of poly-
nomials over C. Show that the mapping

(p|q) = ∫₀¹ p(t)q̄(t) dt

is an inner product on V. Let z be a fixed element of C and let f ∈ Vd
be the 'evaluation at z' map given by f(p) = p(z).

Show that there is no q ∈ V such that (∀p ∈ V) f(p) = (p|q).


[Hint. Suppose that such a q exists. Let r ∈ V be given by r(t) = t − z
and show that, for every p ∈ V,

0 = ∫₀¹ r(t)p(t)q̄(t) dt.

Now let p be given by p(t) = r̄(t)q(t) and deduce the contradiction
q = 0.]
For the rest of this question let V continue to be the vector space of
polynomials over C with the above inner product. If p ∈ V is given by
p(t) = Σ ak tᵏ, define p̄ ∈ V by p̄(t) = Σ āk tᵏ, and let fp : V → V be
given by
(∀q ∈ V)  fp(q) = pq
where, as usual, (pq)(t) = p(t)q(t). Show that (fp)* exists and is fp̄.
Now let D : V → V be the differentiation map. Show that D does
not admit an adjoint.
[Hint. Suppose that D* exists and show that, for all p, q ∈ V,

(p | D(q) + D*(q)) = p(1)q̄(1) − p(0)q̄(0).

Suppose now that q is a fixed element of V such that q(0) = 0 and
q(1) = 1. Use the previous part of the question (with z = 1) to obtain
the required contradiction.]

2.21 Let C[0,1] be the inner product space of real continuous functions on
[0,1] with the integral inner product. Let K : C[0,1] → C[0,1] be the
integral operator defined by

(Kf)(x) = ∫₀¹ xy f(y) dy.

Prove that K is self-adjoint.
For every positive integer n let fn be given by

fn(x) = xⁿ − ((n+3)/(n+2)) xⁿ⁺¹.

Show that fn is an eigenfunction of K with associated eigenvalue 0. Use
the Gram-Schmidt orthonormalisation process to find two orthogonal
eigenfunctions of K with associated eigenvalue 0.
Prove that K has only one non-zero eigenvalue. Find this eigenvalue
and an associated eigenfunction.
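Since (Kf)(x) = x ∫₀¹ y f(y) dy, K sends every function to a multiple of x, and a polynomial is an eigenfunction for the eigenvalue 0 exactly when ∫₀¹ y f(y) dy = 0. A sketch with exact rational arithmetic (the particular zero-integral polynomial below is one such choice, not necessarily the book's):

```python
from fractions import Fraction

def K(coeffs):
    """Apply (Kf)(x) = x * integral_0^1 y f(y) dy to the polynomial
    f(y) = sum c_k y^k given by its coefficient list [c0, c1, ...]."""
    integral = sum(Fraction(c) * Fraction(1, k + 2) for k, c in enumerate(coeffs))
    return [Fraction(0), integral]   # coefficients of (integral) * x

# f(x) = x is an eigenfunction: Kf = (1/3) x, so the eigenvalue is 1/3.
print(K([0, 1]))                      # second coefficient is 1/3

# f(x) = x - (4/3) x^2 has integral_0^1 y f(y) dy = 0,
# so it lies in the eigenspace for the eigenvalue 0.
print(K([0, 1, Fraction(-4, 3)]))    # zero polynomial
```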
2.22 Let t be a skew-adjoint transformation on a unitary space V. Prove that
id + t is a bijection and that the transformation

s = (id − t)(id + t)⁻¹

is unitary. Show also that s cannot have −1 as an eigenvalue.


2.23 If S is a real symmetric matrix and T is a real skew-symmetric matrix
of the same order, show that

det(I − T − iS) ≠ 0.

Show also that the matrix

U = (I + T + iS)(I − T − iS)⁻¹

is unitary.
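Here T + iS is skew-Hermitian, so U is its Cayley transform. A numerical spot-check with randomly generated S and T:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
S = X + X.T          # real symmetric
T = X - X.T          # real skew-symmetric
I = np.eye(4)

# T + iS is skew-Hermitian, so I - T - iS is invertible and
# U = (I + T + iS)(I - T - iS)^{-1} is unitary.
U = (I + T + 1j * S) @ np.linalg.inv(I - T - 1j * S)
print(np.allclose(U.conj().T @ U, I))   # True
```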
2.24 Let A be a real symmetric matrix and let S be a real skew-symmetric
matrix of the same order. Suppose that A and S commute and that
det(A − S) ≠ 0. Prove that

(A + S)(A − S)⁻¹

is orthogonal.
2.25 A complex matrix A is such that A*A = −A. Show that the eigenvalues
of A are either 0 or −1.
2.26 Let A and B be orthogonal n × n matrices with det A = −det B. Prove
that A + B is singular.
2.27 Let A be an orthogonal n × n matrix. Prove that
(1) if det A = 1 and n is odd, or if det A = −1 and n is even, then 1 is
an eigenvalue of A;
(2) if det A = −1 then −1 is an eigenvalue of A.
2.28 If A is a skew-symmetric matrix and g(X) is a polynomial such that
g(A) = 0, prove that g(−A) = 0. Deduce that the minimum polynomial
of A contains only terms of even degree.
Deduce that if A is skew-symmetric and f(X), g(X) are polynomials
whose terms are respectively of odd and even degree then f(A), g(A) are
respectively skew-symmetric and symmetric.
2.29 For every complex n × n matrix A let

N(A) = Σ_{i,j=1}^{n} |aij|².

Prove that, for every unitary n × n matrix U,

N(UA) = N(AU) = N(A)  and  N(A − U) = N(In − U⁻¹A).

2.30 If the matrix A is normal and non-singular, prove that so is A⁻¹.
Prove that A* = p(A) for some polynomial p(X) if and only if A is
normal.
2.31 Prove that if A is a normal matrix and g(X) is any polynomial then
g(A) is normal.
2.32 If A and B are real symmetric matrices, prove that A + iB is normal if
and only if A, B commute.
2.33 Let A be a real skew-symmetric n × n matrix. Show that det(−A) =
(−1)ⁿ det A and deduce that if n is odd then det A = 0. Show also that
every quadratic form xᵗAx is identically zero.
Prove that the non-zero eigenvalues of A are of the form iμ where
μ ∈ IR. If x = y + iz, where y, z ∈ IRⁿ, is an eigenvector associated with
the eigenvalue iμ, show that Ay = −μz and Az = μy. Show also that
yᵗy = zᵗz and that yᵗz = 0. If Au = 0 show also that uᵗy = uᵗz = 0.

Find the eigenvalues of the matrix

A = [ 0  2 -2 ]
    [-2  0 -1 ]
    [ 2  1  0 ]

and an orthogonal matrix P such that

        [ 0  0  0 ]
PᵗAP =  [ 0  0  3 ]
        [ 0 -3  0 ]
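A numerical check of the eigenvalues, which here are 0 and ±3i, consistent with the iμ form proved above:

```python
import numpy as np

A = np.array([[0, 2, -2],
              [-2, 0, -1],
              [2, 1, 0]], dtype=float)

assert np.allclose(A.T, -A)          # skew-symmetric
vals = np.linalg.eigvals(A)
# Eigenvalues are purely imaginary or zero; here 0 and +-3i since
# the sum of squares of the independent entries is 2^2 + 2^2 + 1^2 = 9.
print(sorted(vals.imag))             # approximately [-3, 0, 3]
```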
2.34 Consider the quadratic form q(x) = xᵗAx on IRⁿ. Prove that q(x) ≥ 0
for all x ∈ IRⁿ if and only if the rank of q equals the signature of q.
Prove also that q(x) ≥ 0 for all x ∈ IRⁿ, with q(x) = 0 only when x = 0,
if and only if the rank and signature of q are each n.
2.35 With respect to the standard basis for IR3, a quadratic form q is repre-
sented by the matrix

    [ 1  1 -1 ]
A = [ 1  1  0 ]
    [-1  0 -1 ]

Is q positive definite? Is q positive semi-definite? Find a basis of IR3
with respect to which the matrix representing q is in normal form.
2.36 Let f be the bilinear form on IR2 × IR2 given by

Find a symmetric bilinear form g and a skew-symmetric bilinear form h
such that f = g + h.
Let q be the quadratic form given by q(x) = f(x, x) where x ∈ IR2.
Find the matrix of q with respect to the standard basis. Find also the
rank and signature of q. Is q positive definite? Is q positive semi-definite?
2.37 Write the quadratic form
4x² + 4y² + 4z² − 2yz + 2xz − 2xy
in matrix notation and show that there is an orthogonal transformation
(x, y, z) ↦ (u, v, w) which transforms the quadratic form to
3u² + 3v² + 6w².
Deduce that the original form is positive definite.
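With the matrix and diagonal form as reconstructed above, the orthogonal reduction can be checked numerically:

```python
import numpy as np

# Matrix of 4x^2 + 4y^2 + 4z^2 - 2yz + 2xz - 2xy (cross terms halved).
A = np.array([[4, -1, 1],
              [-1, 4, -1],
              [1, -1, 4]], dtype=float)

# An orthogonal change of variables diagonalises the form; the new
# coefficients are the eigenvalues 3, 3, 6, all positive, so the
# form is positive definite.
vals, P = np.linalg.eigh(A)
print(vals)                                      # [3. 3. 6.]
print(np.allclose(P.T @ A @ P, np.diag(vals)))   # True
```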
2.38 By completing squares, find the rank and signature of the following
quadratic forms:
(1) 2y² − z² + xy + xz;
(2) 2xy − xz − yz;
(3) yz + xz + xy + xt + yt + zt.
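By Sylvester's law of inertia, completing squares and counting eigenvalue signs give the same rank and signature; a numerical cross-check for form (2):

```python
import numpy as np

# Symmetric matrix of 2xy - xz - yz (cross terms halved).
A = np.array([[0, 1, -0.5],
              [1, 0, -0.5],
              [-0.5, -0.5, 0]])

vals = np.linalg.eigvalsh(A)
pos = int(np.sum(vals > 1e-9))   # number of positive squares
neg = int(np.sum(vals < -1e-9))  # number of negative squares
print("rank =", pos + neg, " signature =", pos - neg)
```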
2.39 For each of the following quadratic forms write down the symmetric
matrix A for which the form is expressible as xᵗAx. Diagonalise each of
the forms and in each case find a real non-singular matrix P for which
the matrix PᵗAP is diagonal with entries in {1, −1, 0}.
(1) x² + 2y² + 9z² − 2xy + 4xz − 6yz;
(2) 4xy + 2yz;
(3) x² + 4y² + z² − 4t² + 2xy − 2xt + 6yz − 8yt − 14zt.
2.40 Find the rank and signature of the quadratic form

Q(x1, ..., xn) = Σ_{r<s} xr xs.

2.41 Show that the rank and signature of the quadratic form

Σ_{r,s=1}^{n} (λrs + r + s) zr zs

are independent of λ.
2.42 Let A be the matrix associated with the quadratic form Q(z1, ..., zn)
and let λ be an eigenvalue of A. Show that there exist a1, ..., an, not all
zero, such that

Q(a1, ..., an) = λ(a1² + ⋯ + an²).
2.43 If the real square matrix A is such that det A ≠ 0, show that the quadratic
form zᵗAᵗAz is positive definite.
2.44 Let f : IRn × IRn → IR be a symmetric bilinear form and let Qf be the
associated quadratic form. Suppose that Qf is positive definite and let
g : IRn × IRn → IR be a symmetric bilinear form with associated quadratic
form Qg. Prove that there is a basis of IRn with respect to which Qf
and Qg are each represented by sums of squares.
For every x ∈ IRn let fx ∈ (IRn)d be given by fx(y) = f(x, y). Call
f degenerate if there exists x ≠ 0 in IRn with fx = 0. Determine the scalars
λ ∈ IR such that g − λf is degenerate. Show that such scalars are the

roots of the equation det(B − λA) = 0 where A, B represent f, g relative
to some basis of IRn.
By considering the quadratic forms 2xy + 2yz and x² − y² + 2xz, show
that the result in the first paragraph fails if neither f nor g is positive
definite.
2.45 Evaluate

∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞}

Solutions to Chapter 1

1.1
(i) False. For example, take b = −a1.
(ii) True.
(iii) False. {(1,1,1)} is a basis, so the dimension is 1.
(iv) False. For example, take A = {0} or A = {v, 2v}.
(v) True.
(vi) False. For example, take A = IRn.
(vii) True.
(viii) False. {(x, λx) | x ∈ IR} is a subspace of IR2 for every λ ∈ IR.
(ix) True.
(x) True.
(xi) False. An isomorphism is always represented by a non-singular
matrix.
(xii) False. Consider, for example, IR2 and C2. The statement is true,
however, if the vector spaces have the same ground field.
(xiii) False. is a counter-example.

1 0 1 0 1 3" 0
0 0 1 0 0 4 0

(xv) True.
(xvi) True.
(xvii) False. Take, for example, f, g : IR2 → IR2 given by f(x, y) = (0, 0)
and g(x, y) = (x, y). Relative to the standard basis of IR2 we see
that f is represented by the zero matrix and g is represented by
the identity matrix; and there is no invertible matrix P such that
P⁻¹IP = 0.
(xviii) True.
(xix) False. The transformation t is non-singular (an isomorphism), but

[1 2]
[1 2]

is singular.
(xx) False. The matrix is not diagonalisable.

1.2 We have that

(a, b, c) ∈ Ker t1 ⟺ (a + b, b + c, c + a) = (0, 0, 0)

and so Ker t1 = {0}. It follows from the dimension theorem that Im t1 =
IR3.
As for t2, we have

(a, b, c) ∈ Ker t2 ⟺ a − b = 0, b − c = 0
⟺ a = b = c

and so Ker t2 = {(a, a, a) | a ∈ IR}. It is clear from the definition of t2
that Im t2 = {(a, b, 0) | a, b ∈ IR}.
Likewise, it is readily seen that
Ker t3 = {0},  Im t3 = IR3,
Ker t4 = {(0, 0, a) | a ∈ IR},  Im t4 = {(a, b, b) | a, b ∈ IR}.
If Ker ti ∩ Im ti = {0} then by the dimension theorem we have

dim(Ker ti + Im ti) = dim IR3

and so Ker ti + Im ti = IR3. Now for i = 1, 2, 3, 4 we have from the above
that Ker ti ∩ Im ti = {0}. Thus we see that IR3 = Ker ti ⊕ Im ti holds in
all cases.
Im t2 is t3-invariant. For, if v ∈ Im t2 then v = (a, b, 0) and so

t3(v) = t3(a, b, 0) = (−b, a, 0) ∈ Im t2.

However, Ker t2 is not t3-invariant. For (1, 1, 1) ∈ Ker t2 but t3(1, 1, 1) ∉ Ker t2.

For the last part, we have that (t3 ∘ t4)(a, b, c) = (−b, a, b) and that
(t4 ∘ t3)(a, b, c) = (−b, a, a). Consequently,

Ker(t3 ∘ t4) = {(0, 0, a) | a ∈ IR};  Im(t3 ∘ t4) = {(−b, a, b) | a, b ∈ IR};
Ker(t4 ∘ t3) = {(0, 0, a) | a ∈ IR};  Im(t4 ∘ t3) = {(−b, a, a) | a, b ∈ IR}.
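Assuming t2 has the prescription t2(a, b, c) = (a − b, b − c, 0) suggested by the computations above (the problem statement itself is not reproduced in this chapter), the kernel and image dimensions can be confirmed numerically:

```python
import numpy as np

# Matrix of t2(a, b, c) = (a - b, b - c, 0) in the standard basis.
T2 = np.array([[1, -1, 0],
               [0, 1, -1],
               [0, 0, 0]], dtype=float)

# rank = dim Im t2 = 2, so dim Ker t2 = 3 - 2 = 1, and (1, 1, 1)
# spans the kernel, matching Ker t2 = {(a, a, a)}.
print(np.linalg.matrix_rank(T2))        # 2
print(T2 @ np.array([1.0, 1.0, 1.0]))   # [0. 0. 0.]
```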

1.3 Reducing the matrix to row-echelon form we obtain

[ 3 -1  1]    [ 1 -1  3]    [1 -1  3]    [1 -1  3]    [1 -1  3]
[-1  5 -1] → [-1  5 -1] → [0  4  2] → [0  2 -8] → [0  2 -8]
[ 1 -1  3]    [ 3 -1  1]    [0  2 -8]    [0  4  2]    [0  0 18]

Note that we have been careful not to divide by any number that is
divisible by either 2 or 3 (since these will be zero in Z2 and Z3 respec-
tively).
(i) When F = IR the rank of the row-echelon matrix is 3, in which case
dim Im t = 3 and hence dim Ker t = 0.
(ii) When F = Z2 we have that 2, 18, −8 are zero, so that the rank is 1,
in which case dim Im t = 1 and dim Ker t = 2.
(iii) When F = Z3 we have that 18 is zero, so that the rank is 2, in
which case dim Im t = 2 and dim Ker t = 1.
V = Ker t ⊕ Im t holds in cases (i) and (ii), but not in case (iii); for in
case (iii) we have that (1, 1, 1) belongs to both Ker t and Im t.
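The ranks over Z2 and Z3 can also be checked by Gaussian elimination carried out modulo p; a small pure-Python sketch (p assumed prime):

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over the field Z_p, by Gaussian
    elimination with all arithmetic done modulo p."""
    m = [[x % p for x in row] for row in rows]
    rank, col = 0, 0
    n_rows, n_cols = len(m), len(m[0])
    while rank < n_rows and col < n_cols:
        pivot = next((r for r in range(rank, n_rows) if m[r][col]), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], p - 2, p)      # inverse mod p (p prime)
        m[rank] = [x * inv % p for x in m[rank]]
        for r in range(n_rows):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

A = [[3, -1, 1], [-1, 5, -1], [1, -1, 3]]
print(rank_mod_p(A, 2), rank_mod_p(A, 3))   # 1 2
```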
1.4 If s ∘ t = idV then s is surjective, hence bijective (since V is of finite
dimension). Then t = s⁻¹ and so t ∘ s = idV.
Suppose that W is t-invariant, so that t(W) ⊆ W. Since t is an
isomorphism we must have dim t(W) = dim W and so t(W) = W. Hence
W = s[t(W)] = s(W) and W is s-invariant.
The result is false for infinite-dimensional spaces. For example, con-
sider the real vector space IR[X] of polynomials over IR. Let s be the
differentiation map and t the integration map. We have s ∘ t = id but
t ∘ s ≠ id.

1.5 Ker D = {a | a ∈ F} and Im D = {p(X) | deg p(X) ≤ n − 2}. Clearly,
Im D is isomorphic to Vn−1 and Ker D is isomorphic to F. Now Ker D ∩
Im D ≠ {0} since if a ∈ F with a ≠ 0 then the constant polynomial a
belongs to both.
The same results do not hold when the ground field is Z2. For exam-
ple, in this case we see that the polynomial X² belongs to the kernel of
D.
1.6 Let s, t ∈ L(V, V). Then if w ∈ Im(s ∘ t) we have w = s[t(u)] for some
u ∈ V, which shows that w ∈ Im s. Thus Im(s ∘ t) ⊆ Im s. The first
chain now follows by taking s = tⁿ.
Similarly, if u ∈ Ker tⁿ then s[tⁿ(u)] = s(0) = 0 gives u ∈ Ker(s ∘ tⁿ)
and so Ker tⁿ ⊆ Ker(s ∘ tⁿ). The second chain now follows by taking
s = t.
Now we cannot have an infinite number of strict inclusions in the first
chain since X ⊂ Y implies that dim X < dim Y, and the dimension of V
is finite. Hence the chain is finite. It follows that there exists a positive
integer p such that Im tᵖ = Im tᵖ⁺ᵏ for all positive integers k. Since
dim Im tᵖ + dim Ker tᵖ = dim V, the corresponding results for the kernel
chain are easily deduced.
To show that V = Im tᵖ ⊕ Ker tᵖ it suffices, by the dimension argument,
to prove that Im tᵖ ∩ Ker tᵖ = {0}. Now if x ∈ Im tᵖ ∩ Ker tᵖ then
tᵖ(x) = 0 and there exists v ∈ V such that x = tᵖ(v). Consequently

t²ᵖ(v) = tᵖ(x) = 0

and so v ∈ Ker t²ᵖ = Ker tᵖ, whence x = tᵖ(v) = 0.
For the last part, observe that if x ∈ Im tᵖ then x = tᵖ(v) gives
t(x) = tᵖ⁺¹(v) ∈ Im tᵖ⁺¹ ⊆ Im tᵖ and so Im tᵖ is t-invariant. Also, if
x ∈ Ker tᵖ then tᵖ(x) = 0 gives tᵖ⁺¹(x) = 0 so tᵖ[t(x)] = 0, whence
t(x) ∈ Ker tᵖ and so Ker tᵖ is t-invariant.
1.7 If f ∘ f = 0 then (∀x ∈ V) f(x) ∈ Ker f and so Im f ⊆ Ker f.
We know that

n = dim V = dim Im f + dim Ker f = r + dim Ker f

and, by the above, dim Ker f ≥ dim Im f = r. Hence 2r ≤ n.
If W is a subspace such that V = Ker f ⊕ W then we have that
dim V = dim Ker f + dim W and so

dim W = dim V − dim Ker f = dim Im f = r.

If {w1, ..., wr} is a basis of W then for i = 1, ..., r we have f(wi) ∈
Im f ⊆ Ker f. Moreover, {f(w1), ..., f(wr)} is linearly independent
since

Σ λi f(wi) = 0 ⟹ f(Σ λi wi) = 0
⟹ Σ λi wi ∈ Ker f
⟹ Σ λi wi ∈ Ker f ∩ W = {0}
⟹ λ1 = ⋯ = λr = 0.

Every linearly independent subset of a vector space can be enlarged to
form a basis so, since dim Ker f = n − r, we can enlarge the independent
subset {f(w1), ..., f(wr)} of Ker f to form a basis of Ker f. Thus we
may choose n − 2r elements x1, ..., x_{n-2r} of Ker f such that

{f(w1), ..., f(wr), x1, ..., x_{n-2r}}

is a basis for Ker f. Since V = W ⊕ Ker f it follows that

{w1, ..., wr, f(w1), ..., f(wr), x1, ..., x_{n-2r}}

is a basis for V.
Using the fact that f ∘ f = 0 and each xi ∈ Ker f, it is readily seen
that the matrix of f relative to this basis is of the form

    [ 0  0  0]
M = [Ir  0  0]
    [ 0  0  0]

Suppose now that A is a non-zero n × n matrix over F. If A² = 0
and if f : V → V is represented by A relative to some fixed ordered
basis then f ∘ f = 0 and, from the above, there is a basis of V with
respect to which the matrix of f is of the above form. Thus A is similar
to this matrix. Conversely, if M denotes the above matrix then clearly
M² = 0. So if A is similar to M there is an invertible matrix P such
that A = P⁻¹MP, whence A² = P⁻¹M²P = P⁻¹0P = 0.

1.8 (1) Sums and scalar multiples of elements of V1, V2 are clearly elements
of V1, V2 respectively.
(2) If x ∈ V1 then x = x1(b1 + b4) + x2(b2 + b3) shows that V1 is
generated by {b1 + b4, b2 + b3}. Also, if x1(b1 + b4) + x2(b2 + b3) = 0
then, since {b1, b2, b3, b4} is a basis of V, we have x1 = x2 = 0. Thus
{b1 + b4, b2 + b3} is a basis of V1. Similarly, {b1 − b4, b2 − b3} is a basis
of V2.
(3) It is clear from the definitions of V1 and V2 that we have V1 ∩
V2 = {0}. Consequently, the sum V1 + V2 is direct. Since V1, V2 are of
dimension 2 and V is of dimension 4, it follows that V = V1 ⊕ V2.
(4) To find the matrix of idV relative to the bases B = {b1, b2, b3, b4}
and C = {b1 + b4, b2 + b3, b2 − b3, b1 − b4} we observe that

b1 = ½(b1 + b4) + 0(b2 + b3) + 0(b2 − b3) + ½(b1 − b4)
b2 = 0(b1 + b4) + ½(b2 + b3) + ½(b2 − b3) + 0(b1 − b4)
b3 = 0(b1 + b4) + ½(b2 + b3) − ½(b2 − b3) + 0(b1 − b4)
b4 = ½(b1 + b4) + 0(b2 + b3) + 0(b2 − b3) − ½(b1 − b4).

The matrix in question is therefore

    [ ½  0  0  ½ ]
A = [ 0  ½  ½  0 ]
    [ 0  ½ -½  0 ]
    [ ½  0  0 -½ ]

It is readily seen that

A⁻¹ = 2A = [1 0  0  1]
           [0 1  1  0]
           [0 1 -1  0]
           [1 0  0 -1]

Suppose now that M is centro-symmetric, i.e. of the form

    [a b c d]
M = [e f g h]
    [h g f e]
    [d c b a]


Let f represent M relative to the basis B. Then the matrix of f relative
to the basis C is given by AMA⁻¹, which is readily seen to be of the
form

    [α β 0 0]
K = [γ δ 0 0]
    [0 0 ε ζ]
    [0 0 η θ]

Thus if M is centro-symmetric it is similar to a matrix of the form K.
1.9 If F is not of characteristic 2 then 1F + 1F ≠ 0F. Writing 2 = 1F + 1F,
we have that ½ ∈ F. Given x ∈ V we then observe that

x = (idV + f)(½x) + (idV − f)(½x)

so V = Im(idV + f) + Im(idV − f). Also, if x ∈ Im(idV + f) ∩ Im(idV − f)
then x = y + f(y) = z − f(z) for some y, z ∈ V and hence, since
f ∘ f = idV,

f(x) = f(y) + y = x  and  f(x) = f(z) − z = −x,

whence x = 0. Thus V = Im(idV + f) ⊕ Im(idV − f).
If A² = In, let f represent A relative to some fixed ordered basis. Then
f ∘ f = idV. Let {a1, ..., ap} be a basis of Im(idV + f) and {ap+1, ..., an}
be a basis of Im(idV − f). Then {a1, ..., an} is a basis of V. Now since
a1 = b + f(b) for some b ∈ V we have f(a1) = f(b) + f[f(b)] = f(b) + b =
a1, and similarly for a2, ..., ap. Likewise, ap+1 = c − f(c) for some
c ∈ V so f(ap+1) = f(c) − f[f(c)] = f(c) − c = −ap+1, and similarly for
ap+2, ..., an. Hence the matrix of f relative to the basis {a1, ..., an} is

Ap = [Ip    0   ]
     [0  −In−p ]

and A is then similar to this matrix. Conversely, if A is similar to a
matrix of the form Ap then there is an invertible matrix Q such that
Q⁻¹AQ = Ap. Then

A² = (QApQ⁻¹)² = QAp²Q⁻¹ = QInQ⁻¹ = In.

Suppose now that F is of characteristic 2, so that x + x = 0 and hence
x = −x for every x ∈ F. Let f ∘ f = idV and let g = idV + f. Then

(*) g(x) = 0 ⟺ x + f(x) = 0 ⟺ x = −f(x) = f(x).

Moreover, for every x ∈ V we have g[g(x)] = g[x + f(x)] = x + f(x) +
f[x + f(x)] = x + f(x) + f(x) + f[f(x)] = x + f(x) + f(x) + x = 0 and
hence g ∘ g = 0.
Suppose now that A² = In and let f represent A relative to some
fixed ordered basis. Let g = idV + f and note from the above that
Im g ⊆ Ker g. Let {g(c1), ..., g(cp)} be a basis of Im g and extend this
to a basis

{b1, ..., b_{n-2p}, g(c1), ..., g(cp)}

of Ker g (which is of dimension n − dim Im g = n − p). Consider now the
set

B = {b1, ..., b_{n-2p}, g(c1), c1, ..., g(cp), cp}.

This set has n elements; for ci = bj gives the contradiction g(ci) =
g(bj) = 0, and ci = g(cj) gives the contradiction g(ci) = g[g(cj)] = 0. It
is also linearly independent; for if we had

Σ λi bi + Σ μj g(cj) + Σ νj cj = 0

then, applying g and using the fact that bi, g(cj) ∈ Ker g, we deduce that
Σ νj g(cj) = 0, whence each νj = 0, and then from Σ λi bi + Σ μj g(cj) =
0 we obtain λi = 0 = μj for all i, j. Thus B is a basis of V. To compute
the matrix of f relative to the basis B we observe that since bi ∈ Ker g we
have, by (*), that f(bi) = bi for every i. Also, f[g(ci)] = g[g(ci)] + g(ci) =
g(ci), so that we have

f(ci) = 1·g(ci) + 1·ci.

It follows from these observations that the matrix of f relative to B is
the block-diagonal matrix Vp consisting of the identity block I_{n-2p}
followed by p blocks

[1 1]
[0 1]

down the diagonal.

Consequently A is similar to this matrix. Conversely, if A is similar to
a matrix of the form Vp then there is an invertible matrix Q such that
Q⁻¹AQ = Vp, and so

A² = (QVpQ⁻¹)² = QVp²Q⁻¹ = QInQ⁻¹ = In.
1.10 If t ∈ L(IR2, IR2) is given by t(a, b) = (b, 0) then clearly Im t = Ker t ≠
{0}.
If t ∈ L(IR3, IR3) is given by t(a, b, c) = (c, 0, 0) then Im t ⊂ Ker t.
If t ∈ L(IR3, IR3) is given by t(a, b, c) = (b, c, 0) then Ker t ⊂ Im t.
If t is a projection then Im t ∩ Ker t = {0}, and none of the above are
possible.
1.11 Consider the elements of L(IR3, IR3) given by
t1(a, b, c) = (a, a, 0);
t2(a, b, c) = (0, b, 0);
t3(a, b, c) = (0, b, c);
t4(a, b, c) = (0, b − a, c);
t5(a, b, c) = (a, 0, 0).
Each of these transformations is a projection. We have
Ker t5 = Ker t1 but Im t5 ≠ Im t1;
Im t3 = Im t4 but Ker t3 ≠ Ker t4.
Also, t1 ∘ t2 = 0 but t2 ∘ t1 ≠ 0. (Note that t2 ∘ t1 is not a projection.)
1.12 Clearly, e1 + e2 is a projection if and only if (denoting composites by
juxtaposition) e1e2 + e2e1 = 0. Thus if e1e2 = 0 and e2e1 = 0 then
the property holds. Conversely, suppose that e1 + e2 is a projection.
Then multiplying each side of e1e2 + e2e1 = 0 on the left by e1 we obtain
e1e2 + e1e2e1 = 0, and multiplying each side on the right by e1 we obtain
e1e2e1 + e2e1 = 0. It follows that e1e2 = e2e1. But e1e2 + e2e1 = 0 also
gives e1e2 = −e2e1. Hence we have that each composite is zero.
When e1 + e2 is a projection, we have that
Ker(e1 + e2) = Ker e1 ∩ Ker e2,
Im(e1 + e2) = Im e1 ⊕ Im e2.
1.13 Take U = {(0, a, b) | a, b ∈ IR}. Then IR3 = V ⊕ U since it is clear that
V ∩ U = {0} and that
(a, b, c) = (a, a, 0) + (0, b − a, c).
For the last part refer to question 1.11; take e = t1 and f = t4.

1.14 Suppose that Im e = Im f. Then for every v ∈ V we have f(v) ∈ Im f =
Im e so, since e acts as the identity map on its image, e[f(v)] = f(v)
and hence e ∘ f = f. Likewise, f ∘ e = e. Conversely, if e ∘ f = f and
f ∘ e = e, let x ∈ Im e. Then e(x) = x gives x = e(x) = f[e(x)] = f(x) ∈ Im f
and so Im e ⊆ Im f. The reverse inclusion is obtained similarly.
Since Im e1 = ⋯ = Im ek we have ei ∘ ej = ej for all i, j. Now

and so e is also a projection. To show that Im e = Im e1 it suffices to
prove that e ∘ e1 = e1 and e1 ∘ e = e. Now

gives the first of these, and the second is similar.
For the last part, consider e, f ∈ L(IR2, IR2) given by
e(a, b) = (a, 0),  f(a, b) = (0, b).
Then e and f are projections but clearly ½e + ½f is not.
1.15 Since sums and scalar multiples of step functions are step functions, it is
clear that E is a subspace of the real vector space of all mappings from
IR to IR. Given θ ∈ E, the step function θi that agrees with θ on
[ai, ai+1[ and is zero elsewhere is given by the prescription

θi(x) = θ(ai)[e_{ai}(x) − e_{ai+1}(x)].

It follows that {ek | k ∈ [0,1[} generates E since then θ = Σi θi.
Since the functions ek clearly form an independent set, they therefore
form a basis of E.
It is likewise clear that F is a vector space and that G is a subspace
of F. Consider now an element of F, as depicted in the diagram

From geometric considerations it is clear that every element of F can be
written uniquely as the sum of a function e ∈ E and a function g ∈ G.
[It helps to think of the above strips as pieces of wood that can slide up
and down.] Thus it is clear that F = E ⊕ G.
To show that {gk | k ∈ [0,1[} is a basis for G, observe first the form of
the graph of g_{ai} − g_{ai+1}.
Given μ ∈ F, consider the function μi that agrees with μ on [ai, ai+1[
and is zero elsewhere. Let the gradient of μ in the interval [ai, ai+1[ be
bi, so that di = bi(ai+1 − ai). Then μi can be written in terms of the
functions gk and ek. Consequently μ = Σi μi gives an expression for μ as
a sum of g ∈ G and e ∈ E. It now follows that {gk | k ∈ [0,1[} must be
a basis for G.
Finally, observe that f carries a basis to a basis and therefore extends
to an isomorphism from E to G.
1.16 Let t1, t2 ∈ L(IR3, IR3) be given by

t1(a, b, c) = (a, 2b, 3c),  t2(a, b, c) = (2a, 3b, 6c).

Then t1 has eigenvalues 1, 2, 3 and t2 has eigenvalues 2, 3, 6. So both
questions can be answered in the affirmative.
1.17 The eigenvalues of t are 1 + i√2 and 1 − i√2. Associated eigenvectors
of any matrix representing t are [1, i√2/2] and [1, −i√2/2]. Since the
eigenvalues are distinct, the eigenvectors of t form a basis of C2. The
matrix of t with respect to this basis is

[1 + i√2     0    ]
[   0     1 − i√2 ]

1.18 Since t has 0 as an eigenvalue we have t(v) = 0 for some non-zero v ∈ V,
and hence t is not injective, so not invertible. Thus if t is invertible
then all its eigenvalues are non-zero. For the converse, suppose that t is
not invertible and hence not injective. Then there is a non-zero vector
v ∈ Ker t, and t(v) = 0 shows that 0 is an eigenvalue of t.
If now t is invertible and t(v) = λv with λ ≠ 0 then v = t⁻¹[t(v)] =
t⁻¹(λv) = λt⁻¹(v) gives t⁻¹(v) = λ⁻¹v, and so λ⁻¹ is an eigenvalue of
t⁻¹ with the same associated eigenvector. (Remark. Note that we have
assumed that V is finite-dimensional (where?); in fact the result is false
for infinite-dimensional spaces.)
1.19 Suppose that λ is a non-zero eigenvalue of t. Then t(v) = λv for some
non-zero v ∈ V and

0 = tᵐ(v) = tᵐ⁻¹[t(v)] = tᵐ⁻¹(λv) = ⋯ = λᵐv,

and we have the contradiction λᵐ = 0. Hence all the eigenvalues of t
are zero.
If t is diagonalisable then the matrix A of t is similar to the diagonal
matrix

diag{λ1, ..., λn}

where λ1, ..., λn are the eigenvalues of t. But we have just seen that
all the eigenvalues are zero. Thus, for some invertible matrix P, we have
P⁻¹AP = 0, which gives A = 0 and hence the contradiction t = 0.
1.20 The matrix of t with respect to the canonical basis {(1,0), (0,1)} is

The characteristic equation is (λ − √3)(λ + √3) = 0 and, since t − √3 id ≠ 0
and t + √3 id ≠ 0, the minimum polynomial is (X − √3)(X + √3).
1.21 That t is linear follows from

t(f(X) + g(X)) = f(X + 1) + g(X + 1) = t(f(X)) + t(g(X))

and

t((λf)(X)) = λf(X + 1) = λt(f(X)).

The matrix of t relative to {1, X, ..., Xⁿ} is

[1 1 1 1 ... 1]
[0 1 2 3 ... n]
[0 0 1 3 ... C(n,2)]
[. . . .     .]
[0 0 0 0 ... 1]

the (i, j) entry being the binomial coefficient C(j, i). The eigenvalues of t
are all 1. The characteristic polynomial is

g(X) = (X − 1)ⁿ⁺¹.

Hence, by the Cayley-Hamilton theorem, g(t) = 0. The minimum poly-
nomial of t is then m(X) = (X − 1)ʳ for some r with 1 ≤ r ≤ n + 1.
A simple check using the above matrix shows that (t − idV)ʳ ≠ 0 for
1 ≤ r ≤ n. Consequently we have that m(X) = (X − 1)ⁿ⁺¹.
1.22 Let λ1, ..., λn be the eigenvalues of t. Then λ1², ..., λn² are the eigenval-
ues of t² = idV and so

λi² = 1 (i = 1, ..., n).

Consequently, λi = ±1 for each i and hence the sum of the eigenvalues
of t is an integer.
1.23 (a) We have

|3 − λ   −1  |
| −1    3 − λ| = λ² − 6λ + 8 = (λ − 4)(λ − 2)

so the eigenvalues are 2 and 4, each of geometric multiplicity 1. For the
eigenvectors associated with the eigenvalue 2, solve

[ 1 -1] [x]   [0]
[-1  1] [y] = [0]

to obtain the eigenspace {[x, x] | x ∈ IR}. For the eigenvectors associ-
ated with the eigenvalue 4, solve

[-1 -1] [x]   [0]
[-1 -1] [y] = [0]

to obtain the eigenspace {[x, −x] | x ∈ IR}. Since A has distinct eigen-
values it is diagonalisable. A suitable matrix P is

P = [1  1]
    [1 -1]
(b) The eigenvalues are 2, 3, 6 and are all of geometric multiplicity 1.
Associated eigenvectors are [1, 0, −1], [1, 1, 1], [1, −2, 1]. A is diagonalis-
able; take, for example,

    [ 1 1  1]
P = [ 0 1 -2]
    [-1 1  1]

(c) The eigenvalues of A are 6, 6, and 12. For the eigenvalue 6 we
consider

[ 1 -1 -2] [x]   [0]
[-1  1  2] [y] = [0]
[-2  2  4] [z]   [0]

We obtain −x + y + 2z = 0. We can therefore find two linearly indepen-
dent eigenvectors associated with the eigenvalue 6, for example [1, 1, 0]
and [2, 0, 1]. Hence this eigenvalue has geometric multiplicity 2. The
eigenvalue 12 has geometric multiplicity 1 and an associated eigenvector
is [−1, 1, 2]. Hence A is diagonalisable and a suitable matrix P is

    [1 2 -1]
P = [1 0  1]
    [0 1  2]

(d) The eigenvalues are 2, 2, 1. For the eigenvalue 2 we consider

1 -1 X 0
0 1 y = 0
0 -1 z 0

from which we see that the corresponding eigenspace is spanned by


[1,0,0]. Hence the eigenvalue 2 has geometric multiplicity 1. The eigen-
value 1 also has geometric multiplicity 1, the corresponding eigenspace
being spanned by [-2,1,-1]. In this case A is not diagonalisable.

(e) The eigenvalues are 2, 2, 2. From

[ −1 0 1 ] [x]   [0]
[  0 0 1 ] [y] = [0]
[ −1 0 1 ] [z]   [0]
we see that the corresponding eigenspace is spanned by [0,1,0] and has
dimension 1. Thus the geometric multiplicity of the eigenvalue 2 is 1,
and A is not diagonalisable.
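As a quick numerical sanity check of part (a) (a Python sketch, not part of the original solution), one can verify the eigenvalues 2 and 4 and their eigenvectors for A = [[3, −1], [−1, 3]] directly:

```python
# Verify the eigen-analysis of A = [[3, -1], [-1, 3]] from part (a):
# eigenvalues 2 and 4, with eigenvectors [1, 1] and [1, -1].

def mat_vec(A, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[3, -1], [-1, 3]]

v2 = [1, 1]                 # eigenvector for eigenvalue 2
result2 = mat_vec(A, v2)    # expect 2 * v2 = [2, 2]

v4 = [1, -1]                # eigenvector for eigenvalue 4
result4 = mat_vec(A, v4)    # expect 4 * v4 = [4, -4]

# the characteristic polynomial lambda^2 - 6 lambda + 8 vanishes at 2 and 4
char = lambda lam: lam * lam - 6 * lam + 8
print(result2, result4, char(2), char(4))
```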
1.24 The matrix in question is

A = [ 1 2 ]
    [ 1 1 ]

and we have

[ a_(n+1) ]   [ 1 2 ] [ a_n ]
[ b_(n+1) ] = [ 1 1 ] [ b_n ].

The characteristic polynomial of A is X² − 2X − 1 and its eigenvalues are λ₁ = 1 + √2 and λ₂ = 1 − √2. Corresponding eigenvectors are [√2, 1] and [−√2, 1]. The matrix

P = [ √2 −√2 ]
    [  1   1 ]

is such that P⁻¹AP = diag{λ₁, λ₂}.
In the new coordinate system, [a₁, b₁] = [1, 1] becomes P⁻¹[a₁, b₁] = [p₁, p₂] where

p₁ = (√2 + 1)/(2√2),   p₂ = (√2 − 1)/(2√2).

We then have

a_n = p₁λ₁^(n−1)√2 − p₂λ₂^(n−1)√2,   b_n = p₁λ₁^(n−1) + p₂λ₂^(n−1),

from which we see that

a_n / b_n = √2 (p₁λ₁^(n−1) − p₂λ₂^(n−1)) / (p₁λ₁^(n−1) + p₂λ₂^(n−1)).

Now since 0 < |λ₂/λ₁| < 1 we deduce that

lim_(n→∞) a_n / b_n = √2.
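This limit can also be checked numerically. The sketch below (Python; it assumes the recurrence a_(n+1) = a_n + 2b_n, b_(n+1) = a_n + b_n with a₁ = b₁ = 1, as read off from the matrix in the solution) iterates the recurrence and compares a_n/b_n with √2:

```python
import math

# Iterate a_{n+1} = a_n + 2 b_n, b_{n+1} = a_n + b_n starting from (1, 1);
# the ratio a_n / b_n should converge to sqrt(2).
a, b = 1, 1
for _ in range(30):
    a, b = a + 2 * b, a + b

ratio = a / b
print(ratio, math.sqrt(2))
```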
1.25 Since f(X) and g(X) are coprime there are polynomials p(X) and q(X) such that f(X)p(X) + g(X)q(X) = 1. Let c = p(t) and d = q(t). Then ac + bd = id.
Suppose now that v is an eigenvector of ab associated with the eigenvalue 0. (Note that ab has 0 as an eigenvalue since t is singular.) Let u = a(cv) and w = b(dv). Then since a, b, c commute we have

bu = bacv = cabv = 0,

and since a, b, d commute we have

aw = abdv = dabv = 0.

Also, u + w = (ac + bd)v = v since ac + bd = id.


1.26 If u, v ∈ C_λ then t(u) = λu and t(v) = λv and so

t(au + bv) = at(u) + bt(v) = aλu + bλv = λ(au + bv)

and hence C_λ is a subspace of V.
Let v ∈ C_λ. Then, since s and t commute, we have

t[s(v)] = s[t(v)] = s(λv) = λs(v)

from which it follows that C_λ is s-invariant.
If λᵢ is an eigenvalue then C_λᵢ ≠ {0}. If dim C_λᵢ > 1 then since t has n distinct eigenvalues this would give more than n linearly independent vectors, which is impossible. Hence C_λᵢ is spanned by a single vector, vᵢ say. Since C_λᵢ is s-invariant we have s(vᵢ) = μᵢvᵢ for some μᵢ ∈ F. Hence the matrix of s with respect to the basis {v₁, ..., vₙ} is diagonal.
1.27 There is an integer r ≤ n with {u, t(u), ..., t^(r−1)(u)} linearly independent and {u, t(u), ..., t^r(u)} linearly dependent. Then

t^r(u) = a₀u + a₁t(u) + ··· + a_(r−1)t^(r−1)(u),

so that, applying t,

t^(r+1)(u) = a₀t(u) + a₁t²(u) + ··· + a_(r−1)t^r(u).

Continuing in this way we see that {u, t(u), ..., t^(r−1)(u)} spans U and so is a basis. By the above argument each t[tⁱ(u)] is a linear combination of u, t(u), ..., t^(r−1)(u), so U is t-invariant.
Let f(X) = −a₀ − a₁X − ··· − a_(r−1)X^(r−1) + X^r. Then f(X) has degree r and [f(t)](u) = 0. Since U is t-invariant the restriction t_U of t to U induces a linear transformation from U to itself. Now [f(t_U)](u) = 0 so [f(t_U)](v) = 0 for all v ∈ U. Hence f(t_U) is the zero transformation on U. If now g(X) is a polynomial of degree less than r with g(t_U) = 0 then [g(t_U)](u) = 0 implies that {u, t(u), ..., t^(r−1)(u)} is dependent. Hence f(X) is the (monic) polynomial of least degree such that f(t_U) = 0, so f(X) is the minimum polynomial of t_U.
When u = (1,1,0) ∈ IR³ and t(x, y, z) = (x + y, x − y, z) we have that U = span{(1,1,0), (2,0,0)}. Also, t²(u) = (2,2,0) = 2u and so f(X) = X² − 2.
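The concluding computation t²(u) = 2u can be checked directly (a small Python sketch, using the t and u given in the solution):

```python
# t(x, y, z) = (x + y, x - y, z) on R^3, with u = (1, 1, 0).
def t(v):
    x, y, z = v
    return (x + y, x - y, z)

u = (1, 1, 0)
tu = t(u)      # expect (2, 0, 0)
ttu = t(tu)    # expect (2, 2, 0) = 2u, so the minimum polynomial is X^2 - 2
print(tu, ttu)
```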
1.28 (1) Proceed by induction. The result clearly holds for n = 1. In this ex-
ample it is necessary, in order to apply the second principle of induction,
to include a proof for n = 2 :

q² = r(s + t)r(s + t)
   = (rs + rt)²
   = rsrs + rtrs + rsrt + rtrt
   = rsr(s + t)      (since rtr = 0 gives rtrs = rtrt = 0)
   = pq.

Suppose now that it is true for all r < n where n > 2. Then

which shows that it holds for n + 1. The second equality is established


in a similar way.
(2) If r is non-singular then r~l exists and consequently from rtr = 0
we obtain the contradiction t = 0. Hence r is singular, so both p and
q are singular and hence have 0 as an eigenvalue. Consequently we see
that p(X) and q(X) are divisible by X.
(3) Let q(X) = a₁X + a₂X² + ··· + a_rX^r. Then a₁q + a₂q² + ··· + a_rq^r = 0 and so (a₁q + ··· + a_rq^r)p = 0, i.e. a₁p² + ··· + a_rp^(r+1) = 0, which shows that p satisfies Xq(X). Similarly, q satisfies Xp(X).
By (3), p(X) divides Xq(X), and q(X) divides Xp(X), so we have

Xq(X) = p₁(X)p(X),   Xp(X) = q₁(X)q(X)

for monic polynomials p₁(X), q₁(X). Now X²q(X) = Xp₁(X)p(X) = p₁(X)q₁(X)q(X) so, since q(X) ≠ 0, we have p₁(X)q₁(X) = X². Consequently, either (i) p₁(X) = q₁(X) = X, or (ii) q₁(X) = X², or (iii) p₁(X) = X².

1.29 Clearly, adding together the elements in the middle row, the middle column, and both diagonals, we obtain

Σᵢⱼ mᵢⱼ + 3m₂₂ = 4σ,

so that 3σ + 3m₂₂ = 4σ and hence σ = 3m₂₂.


If m₂₂ = α, m₁₁ = α + β and m₃₁ = α + γ then

m₂₁ = 3α − m₁₁ − m₃₁ = 3α − α − β − α − γ = α − β − γ,
m₃₃ = 3α − m₁₁ − m₂₂ = 3α − α − β − α = α − β,

and so on, and we obtain

             α+β    α−β+γ   α−γ
M(α,β,γ) =   α−β−γ    α    α+β+γ
             α+γ    α+β−γ   α−β

It is readily seen that sums and scalar multiples of magic matrices


are also magic. Hence the magic matrices constitute a subspace of Mat 3×3(C). Also,

M(α, β, γ) = αM(1,0,0) + βM(0,1,0) + γM(0,0,1),

so that B generates this subspace. Since M(α, β, γ) = 0 if and only if α = β = γ = 0, it follows that B is a basis for this subspace.
That e₁ + e₂ + e₃ is an eigenvector of f follows from the fact that

M(α, β, γ)[1, 1, 1]ᵗ = [3α, 3α, 3α]ᵗ = 3α[1, 1, 1]ᵗ.

To compute the matrix of f relative to the basis {e₁ + e₂ + e₃, e₂, e₃} we observe that, by the above,

f(e₁ + e₂ + e₃) = 3α(e₁ + e₂ + e₃)

and that

f(e₂) = (α − β + γ)e₁ + αe₂ + (α + β − γ)e₃
      = (α − β + γ)(e₁ + e₂ + e₃ − e₂ − e₃) + αe₂ + (α + β − γ)e₃
      = (α − β + γ)(e₁ + e₂ + e₃) + (β − γ)e₂ + (2β − 2γ)e₃,
f(e₃) = (α − γ)e₁ + (α + β + γ)e₂ + (α − β)e₃
      = (α − γ)(e₁ + e₂ + e₃ − e₂ − e₃) + (α + β + γ)e₂ + (α − β)e₃
      = (α − γ)(e₁ + e₂ + e₃) + (β + 2γ)e₂ + (γ − β)e₃.

The matrix of f relative to {e₁ + e₂ + e₃, e₂, e₃} is then

     3α   α−β+γ    α−γ
L =   0    β−γ    β+2γ
      0   2β−2γ    γ−β

Since L and M(α, β, γ) represent the same linear mapping they are similar and therefore have the same eigenvalues. It is readily seen that

det(L − λI₃) = (3α − λ)(λ² − 3β² + 3γ²),

so the eigenvalues are 3α and ±√(3(β² − γ²)).


1.30 If f is nilpotent of index p then f^p = 0 and f^(p−1) ≠ 0. Let x ∈ V be such that f^(p−1)(x) ≠ 0 and consider the set

B_p = {x, f(x), ..., f^(p−1)(x)}.

Suppose that

(∗)  λ₀x + λ₁f(x) + ··· + λ_(p−1)f^(p−1)(x) = 0.

On applying f^(p−1) to (∗) and using the fact that f^p = 0, we see that λ₀f^(p−1)(x) = 0 whence we deduce that λ₀ = 0. Deleting the first term in (∗) and applying f^(p−2) to the remainder, we obtain similarly λ₁ = 0. Repeating this argument, we see that each λⱼ = 0 and hence that B_p is linearly independent.
It follows from the above that if f is nilpotent of index n = dim V then

B_n = {x, f(x), ..., f^(n−1)(x)}

is a basis of V. The matrix of f relative to B_n is readily seen to be

       [   0     0 ]
I* =   [ I_(n−1) 0 ]

(the n × n matrix with 1's on the subdiagonal and 0's elsewhere). Consequently, if A is an n × n matrix over F that is nilpotent of index n then A is similar to I*. Conversely, if A is similar to I* then there is an invertible matrix P such that P⁻¹AP = I*, so that A = PI*P⁻¹. Computing the powers we see that
(i) (I*)^n = 0, so A^n = 0;
(ii) [(I*)^(n−1)]ₙ₁ = 1, so A^(n−1) ≠ 0.
Hence A is nilpotent of index n.
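The two computed properties of I* can be confirmed numerically; the following Python sketch (not part of the original text) builds the n × n subdiagonal matrix I* for n = 4 and checks that its nth power vanishes while its (n−1)th does not:

```python
def mat_mul(A, B):
    """Naive matrix product for square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
# I* : 1's on the subdiagonal, 0 elsewhere
I_star = [[1 if i == j + 1 else 0 for j in range(n)] for i in range(n)]

P = I_star
for _ in range(n - 2):          # after the loop, P = I*^(n-1)
    P = mat_mul(P, I_star)
power_n_minus_1 = P
power_n = mat_mul(P, I_star)    # I*^n

# I*^(n-1) has a single 1 in position (n, 1); I*^n is the zero matrix
print(power_n_minus_1[n - 1][0], power_n)
```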
1.31 To see that V is a C-vector space it suffices to check the axioms concerning the external law. For example,
(α + iβ)[(γ + iδ)x] = (α + iβ)[γx − δf(x)]
                    = α[γx − δf(x)] − βf[γx − δf(x)]
                    = (αγ − βδ)x − (αδ + βγ)f(x)
                    = [(α + iβ)(γ + iδ)]x.

Suppose now that {v₁, ..., v_r} is a linearly independent subset of the C-vector space V and that in the IR-vector space V we have

Σⱼ αⱼvⱼ + Σⱼ βⱼf(vⱼ) = 0.

Using the given identity, we can rewrite this as the following equation in the C-vector space V:

Σⱼ (αⱼ − iβⱼ)vⱼ = 0.

It follows that αⱼ − iβⱼ = 0 for every j, so that αⱼ = 0 = βⱼ. Consequently,

{v₁, ..., v_r, f(v₁), ..., f(v_r)}

is linearly independent in the IR-vector space V. Since V is of finite dimension over IR it must then be so over C. The given identity shows that every complex linear combination of {v₁, ..., vₙ} can be written as a real linear combination of {v₁, ..., vₙ, f(v₁), ..., f(vₙ)}.

If dim_C V = n it then follows that dim_IR V = 2n.
By considering a basis of V (over IR) of the form {v₁, ..., vₙ, f(v₁), ..., f(vₙ)} we deduce immediately from the fact that f ∘ f = −id that the matrix of f relative to this basis is

F = [ 0  −Iₙ ]
    [ Iₙ   0 ]

Clearly, it follows from the above that if A is a 2n × 2n matrix over IR such that A² = −I₂ₙ then A is similar to F. Conversely, if A is similar to F then there is an invertible matrix P such that P⁻¹AP = F and hence A² = (PFP⁻¹)² = PF²P⁻¹ = P(−I₂ₙ)P⁻¹ = −I₂ₙ.
1.32 Let x be an eigenvector corresponding to λ. Then from Ax = λx we have that x̄ᵗĀᵗ = λ̄x̄ᵗ and hence x̄ᵗAᵗ = λ̄x̄ᵗ. Since Ā = A and Aᵗ = −A we deduce that −x̄ᵗA = λ̄x̄ᵗ. Thus x̄ᵗAx = −λ̄x̄ᵗx. But we also have x̄ᵗAx = λx̄ᵗx. It follows that λ = −λ̄, so the real part of λ is zero. We also deduce from Ax = λx that Āx̄ = λ̄x̄, i.e. that Ax̄ = λ̄x̄, so λ̄ is also an eigenvalue.
Y = (A − λI)Z gives Yᵗ = Zᵗ(Aᵗ − λI) = −Zᵗ(A + λI) and hence Ȳᵗ = −Z̄ᵗ(A + λ̄I) = −Z̄ᵗ(A − λI). Consequently,

ȲᵗY = −Z̄ᵗ(A − λI)(A − λI)Z = 0

since it is given that (A − λI)²Z = 0. Now the diagonal elements of ȲᵗY are of the form

[a − ib ... x − iy][a + ib ... x + iy]ᵗ = a² + b² + ··· + x² + y²

and a sum of squares is zero if and only if each summand is zero. Hence we see that Y = 0.
The minimum polynomial of A cannot have repeated roots. For, if this were of the form m(X) = (X − a)²p(X) then from (A − aI)²p(A) = 0 we would have, by the above applied to each column of p(A) in turn, (A − aI)p(A) = 0 and m(X) would not be the minimum polynomial.

Thus the minimum polynomial has simple roots and so A is similar to


a diagonal matrix.
Suppose now that Ax = iαx. Then Ax̄ = −iαx̄ and, with u = x + x̄ and v = i(x − x̄),

Au = A(x + x̄) = iαx − iαx̄ = iα(x − x̄) = αv,
Av = Ai(x − x̄) = i(iαx + iαx̄) = −α(x + x̄) = −αu.

These equalities can be written in the form

[Au]   [  0  α ] [u]
[Av] = [ −α  0 ] [v].

The last part follows by choosing iα₁, ..., iα_k to be the non-zero eigenvalues of A.
1.33 Since t satisfies its characteristic equation we have

(t − id)(t + 2 id)(t − 2 id) = 0,

which gives t³ = t² + 4t − 4 id. It is now readily seen that

t⁴ = t² + 4(t² − id).

This suggests that in general

t^(2p) = t² + 4(1 + 4 + 4² + ··· + 4^(p−2))(t² − id).

It is easy to see by induction that this is indeed the case. Thus we see that

t^(2n) = t² + 4(1 + 4 + ··· + 4^(n−2))(t² − id) = t² + (4/3)(4^(n−1) − 1)(t² − id).
1.34 We have that


/(6i) = A

(t>2) /(6{)=/J l


Thus the matrix of f relative to the basis {b₁, b₂, ..., bₘ} is of the form

A = [ λ₁  * ]
    [ 0   M ].

If w ∈ W, say w = Σ_(i≥2) wᵢbᵢ, then

g(w) = π[f(w)] ∈ W

since b₁ ∈ Ker π and π acts as the identity on Im π = W. Thus W is g-invariant.
It is clear that Mat(g, (bᵢ)_(i≥2)) = M. Also, the characteristic equation of f is given by det(A − XIₙ) = 0, i.e. by

(λ₁ − X) det(M − XI_(n−1)) = 0.

So the eigenvalues of g are precisely those of f with the algebraic multiplicity of λ₁ reduced by 1. Since all the eigenvalues of f belong to F by hypothesis, so then do all those of g.
The last part follows from the above by a simple inductive argument; if the result holds for (n − 1) × (n − 1) matrices then it holds for M and hence for A.
1.35 The eigenvalues of t are 0, 1, 1. The minimum polynomial is either X(X − 1) or X(X − 1)². But t² − t ≠ 0 so the minimum polynomial is X(X − 1)².
We must find a basis {w₁, w₂, w₃} with

t(w₁) = 0,  (t − id)(w₂) = 0,  (t − id)(w₃) = w₂.

A suitable basis is {(−1,2,0), (1,−1,0), (1,1,1)}, with respect to which the matrix of t is
the matrix of t is
0 0 0
0 1 1
0 0 1


1.36 We have that

t²(1) = −5(−5 − 8X − 5X²) − 8(1 + X + X²) − 5(4 + 7X + 4X²)

and hence t³(1) = 0. Similarly we have that t³(X) = 0 and t³(X²) = 0. Consequently t³ = 0 and so t is nilpotent.
Take v₁ = 1 + X + X². Then we have t(v₁) = 0. Now take v₂ = 5 + 8X + 5X². Then we have

t(v₂) = 3(1 + X + X²) ∈ span{v₁}.

Finally, take v₃ = 1 and observe that

t(1) = −5 − 8X − 5X² ∈ span{v₁, v₂}.

It is now clear that {1 + X + X², 5 + 8X + 5X², 1} is a basis with respect to which the matrix of t is upper triangular.
1.37 (a) The characteristic polynomial is X² + 2X + 1 so the eigenvalues are −1 (twice). The corresponding eigenvector satisfies

40 -64
25 -40

so —1 has geometric multiplicity 1 with [8,5] as an associated eigenvec-


tor. Hence the Jordan normal form is

[ −1  1 ]
[  0 −1 ]
A Jordan basis can be found by solving

(A + I2)vi = 0 , (A + I2)v2 = vi.

Take v₁ = [8, 5]. Then a possible solution for v₂ is [5, 3], giving

P = [ 8 5 ]
    [ 5 3 ]
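The Jordan chain in part (a) can be verified with a small Python sketch (not in the original text), using only the displayed matrix A + I₂:

```python
# A + I = [[40, -64], [25, -40]]; check (A + I)v1 = 0 and (A + I)v2 = v1.
def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A_plus_I = [[40, -64], [25, -40]]
v1, v2 = [8, 5], [5, 3]

chain1 = mat_vec(A_plus_I, v1)   # should be [0, 0]
chain2 = mat_vec(A_plus_I, v2)   # should be v1 = [8, 5]
print(chain1, chain2)
```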

(b) The characteristic polynomial is (X + 1)². The eigenvalues are −1 (twice) with geometric multiplicity 1, and a corresponding eigenvector is [1,0]. The Jordan normal form is

[ −1  1 ]
[  0 −1 ]
A Jordan basis satisfies

(A + 72)vi = 0 , (A + I2)v2 = vi.

Take vx = [1,0] and v2 = [0, -1]; then

1 0
0 -1

(Any Jordan basis is of the form {[c, 0], [d, −c]} with P = [ c d ; 0 −c ].)
(c) The characteristic polynomial is (X − 1)³, so the only eigenvalue is
1. It has geometric multiplicity 2 with {[1,0,0], [0,2,3]} as a basis for
the eigenspace. The Jordan normal form is then

1 1 o"
0 1 0
0 0 1

A Jordan basis satisfies

{A - I3)vi =0, (A - h)v2 = vi, {A - h)vz = 0.

Now (A —13)2 = 0 so choose v2 to be any vector not in ([1,0,0], [0,2,3]),


for example v2 = [0,1,0]. Then vx = (A-I3)v2 = [3,6,9]. For vz choose
any vector in ([1,0,0], [0,2,3]) that is independent of [3,6,9], for example
V3 = [1,0,0]. This gives

3 0 1
P = 6 1 0
9 0 0


(d) The Jordan normal form is

3 1 o"
0 3 0
0 0 3

A Jordan basis satisfies

{A - 3/3) vi = 0 , {A- 3/3)v2 = t/i, {A - 3I3)v3 = 0.

Choose v₂ = [0, 0, 1]. Then v₁ = (A − 3I₃)v₂ = [1, 0, 0] and a suitable choice for v₃ is [0, 1, 0]. Thus
1 0 0
P = 0 0 1
0 1 0

1.38 The characteristic polynomial of A is (X − 1)³(X − 2)². For the eigenvalue


2 we solve
0 1 1 1 0 X 0
0 0 0 0 0 y 0
0 0 0 1 0 z = 0
0 0 0 -1 1 t 0
0 -1 -1 -1 -2 w 0

to obtain w = t = 0, y + z = 0. Thus the general eigenvector associated with the eigenvalue 2 is [x, y, −y, 0, 0] with x, y not both zero. The Jordan block associated with the eigenvalue 2 is

2 0
0 2

For the eigenvalue 1 we solve

1 1 1 1 0 X "0"
0 1 0 0 0 y 0
0 0 1 1 0 z = 0
0 0 0 0 1 t 0
0 -1 -1 -1 -1 w 0


to obtain w = y = x = 0, z + t = 0. Thus the general eigenvector


associated with the eigenvalue 1 is [0,0, z, —z, 0] with z ^ 0. The Jordan
block associated with the eigenvalue 1 is
1 1 0
0 1 1
0 0 1
The Jordan normal form of A is therefore
2 0 0 0 0
0 2 0 0 0
0 0 1 1 0
0 0 0 1 1
0 0 0 0 1
Take [0,0,1, —1,0] as an eigenvector associated with the eigenvalue 1.
Then we solve
1 1 1 1 0 X 0
0 1 0 0 0 y 0
0 0 1 1 0 z = 1
0 0 0 0 1 t -1
0 -1 -1 -1 -1 w 0
to obtain y = 0, w = −1, z + t = 1, x = −1, so we take [−1, 0, 0, 1, −1].
Next we solve
1  1  1  1  0    x     −1
0  1  0  0  0    y      0
0  0  1  1  0    z  =   0
0  0  0  0  1    t      1
0 −1 −1 −1 −1    w     −1
to obtain y = 0, t + z = 0, w = 1, x = −1, so we consider [−1, 0, 0, 0, 1].
A Jordan basis is therefore
{[1,0,0,0,0], [0,1,-1,0,0], [0,0,1,-1,0], [-1,0,0,1,-1], [-1,0,0,0,1]}
and a suitable matrix is
    1  0  0 −1 −1
    0  1  0  0  0
P = 0 −1  1  0  0
    0  0 −1  1  0
    0  0  0 −1  1

1.39 (a) The Jordan form and a suitable (non-unique) matrix P are

    2 1 0        2 −5  5
J = 0 2 0 ,  P = 2 −3  8
    0 0 2        3  8 −7

(b) The Jordan form and a suitable P are


    2 0 0 0         4  3  2 0
J = 0 1 1 0 ,  P =  5  4  3 0
    0 0 1 1        −2 −2 −1 0
    0 0 0 1        11  6  4 1
1.40 The Jordan normal form is

2 0 0 0 0
0 2 1 0 0
0 0 2 0 0
0 0 0 3 1
0 0 0 0 3
A Jordan basis is
{[2,1,0,0,1], [1,0,1,0,0], [0,1,0,1,0], [-1,0,0,1,0], [2,0,0,0,1]}.
1.41 The minimum polynomial is (X — 2) 3 . There are two possibilities for
the Jordan normal form, namely
     2 1 0 0 0         2 1 0 0 0
     0 2 1 0 0         0 2 0 0 0
J₁ = 0 0 2 0 0 ,  J₂ = 0 0 2 1 0
     0 0 0 2 0         0 0 0 2 1
     0 0 0 0 2         0 0 0 0 2

Each of these has (X — 2) 3 as minimum polynomial. There are two


linearly independent eigenvectors; e.g., [0,-1,1,1,0] and [0,1,0,0,1].
The number of linearly independent eigenvectors does not determine
the Jordan form. For example, the matrix J2 above and the matrix
2 0 0 0 0
0 2 1 0 0
0 0 2 1 0
0 0 0 2 1
0 0 0 0 2


have two linearly independent eigenvectors. Both pieces of information


are required in order to determine the Jordan form. For the given matrix
this is J 2 .
1.42 A basis for IR₄[X] is {1, X, X², X³} and D(1) = 0, D(X) = 1, D(X²) = 2X, D(X³) = 3X². Hence, relative to the above basis, D is represented by the matrix

0 1 0 0
0 0 2 0
0 0 0 3
0 0 0 0

The characteristic polynomial of this matrix is X⁴, the only (quadruple) eigenvalue is 0, and the eigenspace of 0 is of dimension 1 with basis {1}. So the Jordan normal form is

0 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0

A Jordan basis is {f₁, f₂, f₃, f₄} where

Df₁ = 0, Df₂ = f₁, Df₃ = f₂, Df₄ = f₃.

Choose f₁ = 1; then f₂ = X, f₃ = ½X², f₄ = ⅙X³, so a Jordan basis is {1, X, ½X², ⅙X³}.
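The Jordan basis relations for D can be checked with a small Python sketch (not part of the original text), representing a polynomial by its coefficient list [c₀, c₁, c₂, c₃] and using exact arithmetic:

```python
from fractions import Fraction

def deriv(p):
    """Formal derivative of a polynomial given as coefficients [c0, c1, ...],
    padded with a trailing 0 to keep a fixed length."""
    return [Fraction(k) * p[k] for k in range(1, len(p))] + [Fraction(0)]

# f1 = 1, f2 = X, f3 = X^2/2, f4 = X^3/6 as coefficient lists
f1 = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]
f2 = [Fraction(0), Fraction(1), Fraction(0), Fraction(0)]
f3 = [Fraction(0), Fraction(0), Fraction(1, 2), Fraction(0)]
f4 = [Fraction(0), Fraction(0), Fraction(0), Fraction(1, 6)]

# D f1 = 0, D f2 = f1, D f3 = f2, D f4 = f3
print(deriv(f2) == f1, deriv(f3) == f2, deriv(f4) == f3)
```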
1.43 The possible Jordan forms are

3 0 0    3 1 0    3 1 0    3 0 0
0 3 0    0 3 1    0 3 0    0 3 1
0 0 3    0 0 3    0 0 3    0 0 3

The last two are similar.


1.44 (i) and (ii) are true: use the fact that AB and BA = A⁻¹(AB)A are similar.
(iii) and (iv) are false; for example,

0 0 1 0 1 0 0 0
and
0 0 0 0 0 0 0 0

clearly have the same Jordan normal form.


1.45 V decomposes into a direct sum of t-invariant subspaces, say V = V₁ ⊕ ··· ⊕ V_r, and each summand is associated with one and only one eigenvalue of t. Without loss of generality we can assume that t has a single eigenvalue λ. Consider an i × i Jordan block. Corresponding to this block there are i basis elements of V, say v₁, ..., vᵢ, with

(t − λ id)v₁ = 0;
(t − λ id)²v₂ = (t − λ id)v₁ = 0;
...
(t − λ id)ⁱvᵢ = (t − λ id)^(i−1)v_(i−1) = ··· = 0.

Thus there is one eigenvector associated with each block, and so there are

dim Ker(t − λ id)

blocks.
Consider Ker(t − λ id)ʲ. For every 1 × 1 block there corresponds a single basis element which is an eigenvector in Ker(t − λ id)ʲ. For every 2 × 2 block there correspond two basis elements in Ker(t − λ id)ʲ if j ≥ 2 and 1 basis element if j < 2. In general, to each i × i block there correspond i basis elements in Ker(t − λ id)ʲ if j ≥ i and j basis elements if j < i.
It follows that

dⱼ = n₁ + 2n₂ + ··· + (j − 1)n_(j−1) + j(nⱼ + n_(j+1) + ···)

and a simple calculation shows that 2dᵢ − d_(i−1) − d_(i+1) = nᵢ.
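The formula nᵢ = 2dᵢ − d_(i−1) − d_(i+1) can be tested on any multiset of block sizes, using dⱼ = Σ over blocks of min(j, block size). A Python sketch (the particular block sizes below are an arbitrary test case, not from the text):

```python
def kernel_dim(block_sizes, j):
    """dim Ker (t - lambda id)^j for a nilpotent part with the given Jordan block sizes."""
    return sum(min(j, s) for s in block_sizes)

block_sizes = [3, 3, 2, 1, 1, 1]   # arbitrary test case
max_size = max(block_sizes)

recovered = []
for i in range(1, max_size + 1):
    d_prev = kernel_dim(block_sizes, i - 1)
    d_i = kernel_dim(block_sizes, i)
    d_next = kernel_dim(block_sizes, i + 1)
    recovered.append(2 * d_i - d_prev - d_next)   # n_i

expected = [block_sizes.count(i) for i in range(1, max_size + 1)]
print(recovered, expected)
```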


1.46 The characteristic polynomial of A is (X − 2)⁴, and the minimum polynomial is (X − 2)². A has a single eigenvalue and is not diagonalisable. The possible Jordan normal forms are

2 1 0 0      2 1 0 0
0 2 0 0      0 2 0 0
0 0 2 0      0 0 2 1
0 0 0 2      0 0 0 2

Now dim Im(A − 2I₄) = 1 so dim Ker(A − 2I₄) = 3 and so the Jordan form is

2 1 0 0
0 2 0 0
0 0 2 0
0 0 0 2

Now Ker(A − 2I₄) = {[x, y, z, t] | 2x − y + t = 0}, and we must choose v₂ such that (A − 2I₄)²v₂ = 0 but v₂ ∉ Ker(A − 2I₄). So we take v₂ = [1, 0, 0, 0], and then v₁ = (A − 2I₄)v₂ = [−2, −2, −2, 2]. We now wish to choose v₃ and v₄ such that {v₁, v₃, v₄} is a basis for Ker(A − 2I₄). So we take v₃ = [0, 1, 0, 1] and v₄ = [0, 0, 1, 0]. Then we have

- 2 1 0 0
- 2 0 1 0
P=
- 2 0 0 1
2 0 10

To solve the system X' = AX we first solve the system Y' = JY, namely

y₁' = 2y₁ + y₂
y₂' = 2y₂
y₃' = 2y₃
y₄' = 2y₄.

The solution to this is clearly

y₁ = c₁e^(2t) + c₂te^(2t)
y₂ = c₂e^(2t)
y₃ = c₃e^(2t)
y₄ = c₄e^(2t).

Since X = PY we have

x₁   −2 1 0 0   c₁e^(2t) + c₂te^(2t)
x₂ = −2 0 1 0   c₂e^(2t)
x₃   −2 0 0 1   c₃e^(2t)
x₄    2 0 1 0   c₄e^(2t)

and we deduce that

x₁ = −2c₂te^(2t) − 2c₁e^(2t) + c₂e^(2t)
x₂ = −2c₂te^(2t) − 2c₁e^(2t) + c₃e^(2t)
x₃ = −2c₂te^(2t) − 2c₁e^(2t) + c₄e^(2t)
x₄ = 2c₂te^(2t) + 2c₁e^(2t) + c₃e^(2t).


1.47 (a) The system is X' = AX where

A = [  5 4 ]
    [ −1 0 ]

The characteristic polynomial is (X − 1)(X − 4). The eigenvalues are therefore 1 and 4, and associated eigenvectors are E₁ = [1, −1] and E₄ = [4, −1]. The solution is aeᵗE₁ + be⁴ᵗE₄, i.e.

x₁ = aeᵗ + 4be⁴ᵗ
x₂ = −aeᵗ − be⁴ᵗ.

(b) The system is X' = AX where

4 -1 -1
A= 1 2 -1
1 -1 2

The characteristic polynomial is (X — S)2(X — 2). The eigenvalues


are therefore 3 and 2. An eigenvector associated with 2 is [1,1,1] so take E₂ = [1,1,1]. The eigenvalue 3 has geometric multiplicity 2 and E₃ = [1,1,0], E₃' = [1,0,1] are linearly independent vectors in the eigenspace of 3. The general solution vector is therefore ae²ᵗE₂ + be³ᵗE₃ + ce³ᵗE₃', so that

x₁ = ae²ᵗ + be³ᵗ + ce³ᵗ
x₂ = ae²ᵗ + be³ᵗ
x₃ = ae²ᵗ + ce³ᵗ.

(c) The system is X' = AX where

5 -6 -6
A= - 1 4 2
3 -6 -4

The characteristic polynomial is (X — l)(X — 2) 2 . The eigenvalues are


therefore 1 and 2. An eigenvector associated with 1 is E\ = [3,-1,3],

and independent eigenvectors associated with 2 are E₂ = [2,1,0] and E₂' = [2,0,1]. The solution space is then spanned by {eᵗE₁, e²ᵗE₂, e²ᵗE₂'}.
(d) The system is X' = AX where

1 3 -2
A= 0 7 - 4
0 9 - 5

Now A has Jordan normal form


1 1 0
0 1 0
0 0 1

and an invertible matrix P such that P~l AP = J is


3 0 l"
6 1 0
9 0 0
=
First we solve Y' = JY to obtain y[ = r/i + 2/2)2/2 V^^ — 2/3
hence
2/3 = ce*
2/2 = be*
yx = 6iee -f ae*.
Thus X' = AX has the general solution

148 The system is AY' = F where


1 -3 2
A= 0-5 4
0 - 9 7
Now A is invertible with
1 3 -2
0 7 - 4
0 9 - 5

and the system Y1 = A~XY is that of question 1.47(d).



1.49 The system is X' = AX where

A = [ 1 1 ]
    [ 2 3 ]

The eigenvalues are 2 + √3 and 2 − √3, with associated eigenvectors [−1, −1−√3] and [−1, −1+√3]. The general solution is

ae^((2+√3)t)[−1, −1−√3] + be^((2−√3)t)[−1, −1+√3].

Since x₁(0) = 0 and x₂(0) = 1 we have a + b = 0 and −a − √3a − b + √3b = 1, giving a = −1/(2√3) and b = 1/(2√3), so the solution is

(1/(2√3)) ([1, 1+√3]e^((2+√3)t) + [−1, −1+√3]e^((2−√3)t)).
Let x = x₁, x₁' = x₂, x₂' = x₃; then x₃' = 2x₃ + 4x₂ − 8x₁ and the system can be written in the form X' = AX where

     0 1 0
A =  0 0 1
    −8 4 2

The characteristic polynomial is (X − 2)(X² − 4) so the eigenvalues are 2 (twice) and −2. The Jordan normal form is

    2 1  0
J = 0 2  0
    0 0 −2

A Jordan basis {v₁, v₂, v₃} satisfies

(A − 2I₃)v₁ = 0
(A − 2I₃)v₂ = v₁
(A + 2I₃)v₃ = 0.
Take v₁ = [1, 2, 4] and v₃ = [1, −2, 4]. Then v₂ = [0, 1, 4]. Hence an invertible matrix P such that P⁻¹AP = J is

    1 0  1
P = 2 1 −2
    4 4  4
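The Jordan basis claimed here can be checked directly (a Python sketch, not part of the original text):

```python
# A is the companion-type matrix of x''' = 2x'' + 4x' - 8x.
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

A = [[0, 1, 0],
     [0, 0, 1],
     [-8, 4, 2]]

v1, v2, v3 = [1, 2, 4], [0, 1, 4], [1, -2, 4]

Av1 = mat_vec(A, v1)   # expect 2 v1 = [2, 4, 8]
Av2 = mat_vec(A, v2)   # expect v1 + 2 v2 = [1, 4, 12]
Av3 = mat_vec(A, v3)   # expect -2 v3 = [-2, 4, -8]
print(Av1, Av2, Av3)
```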


Now solve the system Y' = JY to get

y₁' = 2y₁ + y₂, y₂' = 2y₂, y₃' = −2y₃

so that y₂ = c₂e^(2t), y₃ = c₃e^(−2t) and hence y₁' = 2y₁ + c₂e^(2t), which gives y₁ = c₁e^(2t) + c₂te^(2t).
Since X = PY we have x = x₁ = c₁e^(2t) + c₂te^(2t) + c₃e^(−2t). Now apply the initial conditions to obtain the required constants.
Solutions to Chapter 2

2.1 (a) f ↦ f' does not define a linear functional since f' ∉ IR in general.
(b), (c), (d) These are linear functionals.
(e) Φ : f ↦ ∫₀¹ f² is not a linear mapping; for example, we have 0 = Φ[f + (−f)] whereas in general

Φ(f) + Φ(−f) = 2Φ(f) ≠ 0.

2.2 That φ is linear follows from the fact that

φ(αf + βg) = ∫₀¹ f₀(t)[αf(t) + βg(t)] dt
           = α ∫₀¹ f₀(t)f(t) dt + β ∫₀¹ f₀(t)g(t) dt
           = αφ(f) + βφ(g).

2.3 The transition matrix from the given basis to the standard basis is

     1 −1 0
P =  0  1 1
    −1  0 1

The inverse of this is readily seen to be

        ½  ½ −½
P⁻¹ =  −½  ½ −½
        ½  ½  ½

Hence the dual basis is

{[½, ½, −½], [−½, ½, −½], [½, ½, ½]}.
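The dual-basis computation amounts to inverting the transition matrix. The Python sketch below (not part of the original text) checks with exact arithmetic that the listed functionals satisfy fᵢ(vⱼ) = δᵢⱼ, taking the basis vectors v₁ = (1,0,−1), v₂ = (−1,1,0), v₃ = (0,1,1) as read off from the columns of P:

```python
from fractions import Fraction as F

# columns of P: the given basis vectors (an assumption read off from P above)
basis = [(1, 0, -1), (-1, 1, 0), (0, 1, 1)]

# rows of P^{-1}: the claimed dual basis functionals
dual = [(F(1, 2), F(1, 2), F(-1, 2)),
        (F(-1, 2), F(1, 2), F(-1, 2)),
        (F(1, 2), F(1, 2), F(1, 2))]

# table[i][j] = f_i(v_j); should be the identity matrix
table = [[sum(f[k] * v[k] for k in range(3)) for v in basis] for f in dual]
print(table)
```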

2.4 The transition matrix from the basis A1 to the basis A is

Its inverse is
Consequently, (A') d = {-2^>i + § |
2.5 The transition matrix from the given basis to the standard basis is

Its inverse is

so the dual basis is (b), namely {[-1,0], [2,1]}.


2.6 (i) {[2, -1,1,0], [7, - 3 , 1 , - 1 ] , [-10, 5, -2,1], [-8,3, -3,1]};
(ii) {(4,5, -2,11), (3,4, -2,6), (2,3, -1,4), (0,0,0,1)}.
2.7 Since V = A ⊕ B we have

A⊥ + B⊥ = (A ∩ B)⊥ = {0}⊥ = Vᵈ,
A⊥ ∩ B⊥ = (A + B)⊥ = V⊥ = {0}.

Consequently, Vᵈ = A⊥ ⊕ B⊥.
The answer to the question is 'no': Aᵈ is the set of linear functionals f : A → F, so if A ≠ V we have that Aᵈ is not a subset of Vᵈ. What is true is: if V = A ⊕ B then Vᵈ = A' ⊕ B' where A', B' are subspaces of Vᵈ with A' ≅ Aᵈ and B' ≅ Bᵈ. To see this, let f ∈ Aᵈ and define f̄ : V → F by f̄(v) = f(a) where v = a + b. Then φ : Aᵈ → Vᵈ given by φ(f) = f̄ is an injective linear transformation and φ(Aᵈ) is a

subspace of Vᵈ that is isomorphic to Aᵈ. Define similarly μ : Bᵈ → Vᵈ by μ(g) = ḡ where ḡ(v) = g(b). Then we have Vᵈ = φ(Aᵈ) ⊕ μ(Bᵈ).
{f₁, f₂, f₃} is linearly independent. For, if λ₁f₁ + λ₂f₂ + λ₃f₃ = 0 then we have

0 = (λ₁f₁ + λ₂f₂ + λ₃f₃)(1) = λ₁ + λ₂ + λ₃;
0 = (λ₁f₁ + λ₂f₂ + λ₃f₃)(X) = λ₁t₁ + λ₂t₂ + λ₃t₃;
0 = (λ₁f₁ + λ₂f₂ + λ₃f₃)(X²) = λ₁t₁² + λ₂t₂² + λ₃t₃².

Since the coefficient matrix is the Vandermonde matrix and since the tᵢ are given to be distinct, the only solution is λ₁ = λ₂ = λ₃ = 0. Hence {f₁, f₂, f₃} is linearly independent and so forms a basis for (IR₃[X])ᵈ.
If {p₁, p₂, p₃} is a basis of V of which {f₁, f₂, f₃} is the dual then we must have fᵢ(pⱼ) = δᵢⱼ; i.e.

pⱼ(tᵢ) = δᵢⱼ.
It is now easily seen that

p₁(X) = (X − t₂)(X − t₃) / ((t₁ − t₂)(t₁ − t₃)),

and similarly for p₂(X) and p₃(X).

= 3 4

+3

+3 = 17a+ 226.


2.10 Let V be of dimension n and S of dimension k. Take a basis {v₁, ..., v_k} of S and extend it to a basis {v₁, ..., vₙ} of V. Let {φ₁, ..., φₙ} be the basis of Vᵈ dual to {v₁, ..., vₙ}. Given φ ∈ Vᵈ we have

φ = Σᵢ φ(vᵢ)φᵢ = Σᵢ aᵢφᵢ.

Since φ(vᵢ) = aᵢ we see that φ(v) = 0 for every v ∈ S if and only if a₁ = ··· = a_k = 0. Thus φ ∈ S⊥ if and only if

φ = a_(k+1)φ_(k+1) + ··· + aₙφₙ,

and so {φ_(k+1), ..., φₙ} is a basis for S⊥. Hence dim S + dim S⊥ = n.
If ψ ∈ Ker tᵈ then tᵈ(ψ) = 0 and so [tᵈ(ψ)](u) = 0 for all u ∈ U. But

[tᵈ(ψ)](u) = ψ[t(u)].

Thus ψ ∈ (Im t)⊥. Conversely, let ψ ∈ (Im t)⊥. Then for all u ∈ U we have

ψ[t(u)] = 0

and so tᵈ(ψ) = 0, whence ψ ∈ Ker tᵈ. Thus Ker tᵈ = (Im t)⊥.
(i) means that v ∈ Im t while (ii) means v ∉ (Ker tᵈ)⊥. In terms of linear equations, this says that either (i) the system AX = B has a solution, or (ii) there is a row vector C with CA = 0 and CB = 1.
It is readily seen that the given system of equations has no solution, so (ii) holds. The linear functional satisfying (ii) is [1, −2, 1].
2.11 If φ ∈ Wᵈ then for all u ∈ U we have

[(s ∘ t)ᵈ(φ)](u) = φ[(s ∘ t)(u)] = φ[s(t(u))] = [sᵈ(φ)][t(u)] = [tᵈ(sᵈ(φ))](u)

from which the result follows. The final statements are immediate from the fact that Im t = (Ker tᵈ)⊥ (see question 2.10).
2.12 To find Y we find the dual of {[1,0,0], [1,1,0], [1,1,1]}. The transition
matrix and its inverse are

1 1 1 1 -1 0
0 1 1 , p-l = 0 1 -1
0 0 1 0 0 1

Thus Y = {(1, −1, 0), (0, 1, −1), (0, 0, 1)}.
The matrix of t with respect to the standard basis is

2 1 0
1 1 1
0 0 - 1

and the transition matrices relative to X, Y are respectively

1  0 0        1 1 1
1  1 0   ,    0 1 1
0 −1 1        0 0 1

By the change of basis theorem, the matrix of t relative to X, Y is then

1 0 0    2 1  0    1 1 1     2 3 3
1 1 0    1 1  1    0 1 1  =  3 5 6
1 1 1    0 0 −1    0 0 1     3 5 5

The required matrix is then the transpose of this one.


2.13 The annihilator {α₁, α₂}⊥ is of dimension 1 and contains both φ₃ and φ₃'. Thus φ₃' is a scalar multiple of φ₃.
2.14 It is immediate from properties of integrals that

L_g(λ₁f₁ + λ₂f₂) = λ₁L_g(f₁) + λ₂L_g(f₂)

and so L_g is linear.
To show that F_x is a linear functional, we must check that

F_x(λ₁f₁ + λ₂f₂) = λ₁F_x(f₁) + λ₂F_x(f₂),

i.e. that (λ₁f₁ + λ₂f₂)(x) = λ₁f₁(x) + λ₂f₂(x), which is clear.
Suppose that x is fixed and that F_x = L_g for some g. Then

(∀f ∈ C[0,1])  ∫₀¹ f(t)g(t) dt = f(x).

Now, by continuity, the left hand side depends on the values of f at points other than x whereas the right hand side does not. Thus F_x ≠ L_g for any g.

2.15 The method of solution is outlined in the question.


2.16 Let {v₁, ..., v_(k−1)} be a basis of U and extend this to a basis {v₁, ..., v_k} of V. Apply the Gram-Schmidt process to obtain an orthonormal basis {u₁, ..., u_k} of V, where {u₁, ..., u_(k−1)} is an orthonormal basis of U. Let n = u_k. Then if x = Σᵢ λᵢuᵢ ∈ U we have

(n|x) = (u_k | λ₁u₁ + ··· + λ_(k−1)u_(k−1)) = Σᵢ λᵢ(u_k|uᵢ) = 0.

Conversely, if x = Σᵢ λᵢuᵢ ∈ V and if (n|x) = 0 then

0 = (u_k | λ₁u₁ + ··· + λ_ku_k) = Σᵢ λᵢ(u_k|uᵢ) = λ_k.

Consequently we see that x ∈ U and so

U = {x ∈ V | (n|x) = 0}.

Now v − v' = 2(n|v)n, a scalar multiple of n, and so is orthogonal to U. Then ½(v + v') = v − (n|v)n ∈ U since

(n | v − (n|v)n) = (n|v) − (n|v)(n|n) = (n|v) − (n|v) = 0.

We have

t(v + w) = v + w − 2(n|v + w)n
         = v + w − 2((n|v) + (n|w))n
         = (v − 2(n|v)n) + (w − 2(n|w)n)
         = t(v) + t(w);
t(λv) = λv − 2(n|λv)n = λv − 2λ(n|v)n = λt(v)

and so t is linear.
Note that t(u) = u for every u ∈ U and so 1 is an eigenvalue and Ker(t − id) = U is of dimension k − 1. Thus 1 has geometric multiplicity k − 1. Also, t(n) = −n and so −1 is also an eigenvalue with associated eigenvector n. Since the sum of the geometric multiplicities is k it follows that t is diagonalisable.

In the last part we have that n = (1/√11)(3, −1, 1), so if v = (a, b, c) then (n|v) = (1/√11)(3a − b + c). Thus

s(a, b, c) = (a, b, c) − (2/11)(3a − b + c)(3, −1, 1)
           = (1/11)(−7a + 6b − 6c, 6a + 9b + 2c, −6a + 2b + 9c),

which gives

               −7 6 −6
mat s = (1/11)  6 9  2
               −6 2  9

As for t, we have n = (1/√10)(2, −1, 2, −1) and so if v = (a, b, c, d) then

t(a, b, c, d) = (1/5)(a + 2b − 4c + 2d, 2a + 4b + 2c − d, −4a + 2b + c + 2d, 2a − b + 2c + 4d),

which gives

                1  2 −4  2
mat t = (1/5)   2  4  2 −1
               −4  2  1  2
                2 −1  2  4
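The matrix for s can be double-checked against the formula t(v) = v − 2(n|v)n with exact arithmetic (a Python sketch, not part of the original text; the columns of the matrix are the images of the standard basis vectors):

```python
from fractions import Fraction as F

def reflect(v, n):
    """v - 2 <n, v> n / <n, n>, using an unnormalised normal n."""
    nn = sum(c * c for c in n)
    nv = sum(a * b for a, b in zip(n, v))
    return [F(vi) - F(2 * nv, nn) * ni for vi, ni in zip(v, n)]

n = (3, -1, 1)                      # unnormalised normal from the solution
claimed = [[F(-7, 11), F(6, 11), F(-6, 11)],
           [F(6, 11), F(9, 11), F(2, 11)],
           [F(-6, 11), F(2, 11), F(9, 11)]]

cols = [reflect(e, n) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
computed = [[cols[j][i] for j in range(3)] for i in range(3)]
print(computed == claimed)
```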
2.17 The hyperbola is represented by the equation

        [ −1 3 ] [x]
[x y]   [ 3 −1 ] [y]  = 1.

The eigenvalues of A = [−1 3; 3 −1] are 2 and −4 with associated normalised eigenvectors

[1/√2, 1/√2]  and  [1/√2, −1/√2].

Thus PᵗAP = diag{2, −4} where

P = (1/√2) [ 1  1 ]
           [ 1 −1 ]

Now change coordinates by defining x₁ = Pᵗx. Then we have x = Px₁ and the original equation becomes

xᵗAx = x₁ᵗPᵗAPx₁ = [x₁ y₁] [2 0; 0 −4] [x₁, y₁]ᵗ = 2x₁² − 4y₁² = 1.

Thus the principal axes are given by x₁ = 0 and y₁ = 0, i.e. y = −x and y = x.
The ellipsoid is represented by the equation

          [ 7  2  0 ] [x]
[x y z]   [ 2  6 −2 ] [y]  = 1.
          [ 0 −2  5 ] [z]

The eigenvalues of A = [7 2 0; 2 6 −2; 0 −2 5] are 3, 6, 9 with associated normalised eigenvectors

[−1/3, 2/3, 2/3], [2/3, −1/3, 2/3], [2/3, 2/3, −1/3].

Thus PᵗAP = diag{3, 6, 9} where

            −1  2  2
P = (1/3)    2 −1  2
             2  2 −1

Now change coordinates by defining x₁ = Pᵗx.

Then we have x = Pxi and the original equation becomes (with a


calculation similar to the above)

Since

t/i = | ( 2 x -

the Xi-axis is given by y\ = z\ — 0 and has direction numbers (—1,2,2);


the t/i-axis is given by xx = 2i = 0 and has direction numbers (2, -1,2);
the 2i-axis is given by X\ = yx = 0 and has direction numbers (2,2, —1).
2.18 Let {v₁, ..., vₙ} be an orthonormal basis of V. Then for every x ∈ V we have x = Σₖ (x|vₖ)vₖ, so in particular this holds for each f(vⱼ). If A is the matrix of f we thus see that a_jk = (f(vⱼ)|vₖ). If M is the matrix of f* then likewise we have that m_jk = (f*(vⱼ)|vₖ). The result now follows from the observation that

m_jk = (f*(vⱼ)|vₖ) = (vⱼ|f(vₖ)), the complex conjugate of a_kj.
2.19 The first part is a routine check of the axioms. Using the fact that tr(AB) = tr(BA) we have

(f_M(A)|B) = tr[B*(MA)]
           = tr[B*MA]
           = tr[(M*B)*A]
           = (A|f_M*(B))

from which it follows that (f_M)* = f_M*.



2.20 Let β be as stated and let f_β ∈ Vᵈ be given by f_β(α) = (α|β). Then since {α₁, ..., αₙ} is an orthonormal basis we have f_β(αᵢ) = (αᵢ|β) = f(αᵢ) for i = 1, ..., n. Since f_β and f coincide on this basis, it follows that f = f_β.
For the next part suppose that such a q exists. Then we have

∫₀¹ p(t)q(t) dt = p(z)

for every p. Writing rp in place of p we then have

∫₀¹ r(t)p(t)q(t) dt = r(z)p(z) = 0.

In particular, this holds when p = rq. So

∫₀¹ [r(t)q(t)]² dt = 0,

whence we have rq = 0. Since r ≠ 0 we must therefore have q = 0, a contradiction.
For the next part of the question note that

(f_p(q)|r) = ∫₀¹ p(t)q(t)r(t) dt = (q|f_p(r)),

so (f_p)* = f_p.
Integration by parts gives

(D(p)|q) = p(1)q(1) − p(0)q(0) − (p|D(q)).

If D* exists then we have

(p|D*(q)) = p(1)q(1) − p(0)q(0) − (p|D(q))

so that

(p|D(q) + D*(q)) = p(1)q(1) − p(0)q(0).

Now fix q such that q(0) = 0, q(1) = 1. Then we have

(p|D(q) + D*(q)) = p(1),

which is impossible (take z = 1 in the previous part of the question). Thus D* does not exist.

2.21 That K is self-adjoint follows from the fact that xy is symmetric in x


and y.
We have

K(fₙ)(x) = ∫₀¹ xy fₙ(y) dy
         = ∫₀¹ x y^(n+1) dy − (2/(n+2)) ∫₀¹ xy dy
         = x/(n+2) − (2/(n+2)) · (x/2)
         = 0,

so K(fₙ) = 0fₙ as required.
Apply the Gram-Schmidt process to {f₁, f₂}. Let e₁(x) = f₁(x) = x − ⅔ and define e₂(x) = f₂(x) + αf₁(x) = (x² − ½) + α(x − ⅔), where α is chosen so that (e₂|e₁) = 0. Since

(f₂|f₁) = (f₁|f₁) = 1/9,

it follows that α = −1 and hence that

e₂(x) = x² − x + 1/6.

Thus e₁, e₂ are orthogonal eigenfunctions associated with the eigenvalue 0.
If K{f) = Xf then

Xf(x) = x f yf(y)dy.
Jo
If A ^ 0 then / must then be of the form x \-* ctx for some constant a.
Substituting ax for /(re), it clearly follows that A = | . An associated
eigenfunction is given by f(x) = x.
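These eigenvalue computations can be checked numerically (an editorial addition; the quadrature grid is an arbitrary choice):

```python
import numpy as np

# Uniform grid on [0, 1] and a trapezoidal integrator
y = np.linspace(0.0, 1.0, 100001)
h = y[1] - y[0]

def integral(v):
    return 0.5 * h * float(np.sum(v[:-1] + v[1:]))

# f_n(y) = y^n - 2/(n+2):  ∫_0^1 y f_n(y) dy = 0, so K(f_n) = 0
for n in range(1, 6):
    print(n, integral(y * (y**n - 2.0 / (n + 2))))  # ~0 each time

# f(y) = y:  K(f)(x) = x ∫_0^1 y^2 dy = x/3, i.e. eigenvalue 1/3
print(integral(y * y))  # ~0.33333
```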


2.22 Since t is skew-adjoint its eigenvalues are purely imaginary, so ±1 are not eigenvalues. Hence id ± t cannot be singular. Now

    s* = [(id − t)(id + t)^{-1}]* = [(id + t)^{-1}]*(id − t)* = (id − t)^{-1}(id + t).

But (id + t)(id − t) = id − t² = (id − t)(id + t), so

    (id − t)^{-1}(id + t) = (id + t)(id − t)^{-1}.

Hence s* = (id + t)(id − t)^{-1} and so

    ss* = (id − t)(id + t)^{-1}(id + t)(id − t)^{-1} = id.

To see that s cannot have −1 as an eigenvalue, consider

    id + s = id + (id − t)(id + t)^{-1}.

We have that

    (id + s)(id + t) = (id + t) + (id − t) = 2 id,

and so (id + s)^{-1} = ½(id + t), whence id + s does not have 0 as an eigenvalue and hence −1 is not an eigenvalue of s.
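A numerical sketch of this Cayley-transform argument (an editorial addition; the size and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
t = X - X.conj().T              # t* = -t: a skew-adjoint matrix
I = np.eye(n)

s = (I - t) @ np.linalg.inv(I + t)

print(np.allclose(s @ s.conj().T, I))            # True: s is unitary
print(np.min(np.abs(np.linalg.eigvals(s) + 1)))  # bounded away from 0
```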
2.23 We have S^t = S and T^t = −T, so

    (T + iS)* = (T − iS)^t = T^t − iS^t = −(T + iS).

Thus T + iS is skew-adjoint. But the eigenvalues of a skew-adjoint matrix are purely imaginary, so 1 is not an eigenvalue of T + iS, and therefore

    det(T + iS − I) ≠ 0.

As for the second part, we have

    U = (I + T + iS)(I − T − iS)^{-1}.

The fact that U is unitary now follows from the previous question.

2.24 It is given that A^t = A, S^t = −S, AS = SA and det(A − S) ≠ 0. Let B = (A + S)(A − S)^{-1}. Then we have

    B^tB = [(A + S)(A − S)^{-1}]^t (A + S)(A − S)^{-1}
         = (A^t − S^t)^{-1}(A^t + S^t)(A + S)(A − S)^{-1}
         = (A + S)^{-1}(A − S)(A + S)(A − S)^{-1}
         = (A + S)^{-1}(A + S)(A − S)(A − S)^{-1} = I,

the last equality following from the fact that since A, S commute so do A + S and A − S. Hence B^tB = I and B is orthogonal.

2.25 Since

    (1)  A*A + A = 0

we have, taking transposes,

    A^tĀ + A^t = 0

and hence, taking complex conjugates,

    (2)  A*A + A* = 0.

It follows from (1) and (2) that A* = A, so that A is self-adjoint. Let λ_1, ..., λ_n be the distinct non-zero eigenvalues of A. Then λ_1, ..., λ_n are necessarily real.
The relation (1) can now be written in the form

    A² = −A,

from which it follows that the distinct non-zero eigenvalues of −A, namely −λ_1, ..., −λ_n, are precisely the distinct non-zero eigenvalues of A², namely λ_1², ..., λ_n². It follows that λ_1, ..., λ_n are all negative. Let α_i = −λ_i for each i, and suppose that

    α_1 < α_2 < ⋯ < α_n.

Then this chain must coincide with the chain

    α_1² < α_2² < ⋯ < α_n².

Consequently, α_i = α_i² for every i and, since by hypothesis α_i ≠ 0, we obtain λ_i = −α_i = −1.
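The orthogonality of B in 2.24 can be illustrated numerically (an editorial addition). Here A = S² is a convenient hypothetical choice of a symmetric matrix that commutes with the skew-symmetric S:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n))
S = X - X.T        # skew-symmetric: S^t = -S
A = S @ S          # symmetric, and commutes with S (a convenient test case)

B = (A + S) @ np.linalg.inv(A - S)
print(np.allclose(B.T @ B, np.eye(n)))  # True: B is orthogonal
```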

2.26 A and B are given to be orthogonal with det A + det B = 0. Now we have that

    (1)  det(A + B) = det[(AB^t + I)B],

from which we see that det(A + B) = 0 if and only if −1 is an eigenvalue of AB^t. But AB^t is orthogonal since

    (AB^t)^t(AB^t) = BA^tAB^t = I.

Also, det(AB^t) = det A · det B^{-1} = −1 since it is given that det A = −det B. Thus C = AB^t is orthogonal with det C = −1. It follows that −1 is an eigenvalue of C; for

    C^t(I + C) = C^t + I = (C + I)^t

and so, taking determinants,

    det(I + C)[det C − 1] = 0,

whence det(I + C) = 0. It now follows from (1) that det(A + B) = 0.
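A random-matrix check of this result (an editorial addition; orthogonal matrices are generated by QR factorisation and a column flip controls the determinant sign):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

def random_orthogonal(sign):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    if np.sign(np.linalg.det(Q)) != sign:
        Q[:, 0] = -Q[:, 0]      # flipping one column flips the determinant
    return Q

A = random_orthogonal(+1.0)
B = random_orthogonal(-1.0)     # so det A + det B = 0
print(np.linalg.det(A + B))     # ~0
```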
2.27 (1) Since A^t(A − I) = I − A^t = −(A^t − I) we have that

    det A · det(A − I) = (−1)^n det(A^t − I) = (−1)^n det(A − I)

and so

    det(A − I)[det A − (−1)^n] = 0.

If det A = 1 and n is odd then it follows that det(A − I) = 0 and hence 1 is an eigenvalue of A. If det A = −1 and n is even then likewise det(A − I) = 0 and again 1 is an eigenvalue of A.
(2) Since A^t(I + A) = A^t + I = (I + A)^t we have that

    det A · det(I + A) = det(I + A)

and so if det A = −1 then det(I + A) = 0, whence −1 is an eigenvalue of A.
2.28 The first part follows from the observation that

    g(A) = 0 ⟺ g(A^t) = 0 ⟺ g(−A) = 0.

Suppose now that g(X) is the minimum polynomial of A, say

    g(X) = a_0 + a_1X + ⋯ + a_{r−1}X^{r−1} + X^r.

Since g(−A) = 0, the monic polynomial (−1)^r g(−X), which also has degree r and annihilates A, is also the minimum polynomial of A. Comparing coefficients gives a_i = (−1)^{r−i}a_i, whence a_{r−1} = a_{r−3} = ⋯ = 0; that is, g contains only powers of X of the same parity as r.
Since (A^n)^t = (−1)^n A^n for a skew-symmetric matrix A, we see that A^n is skew-symmetric if n is odd, and is symmetric if n is even. Hence f(A) is skew-symmetric, and g(A) is symmetric.
2.29 We have

    N(UA) = tr[(UA)*(UA)] = tr(A*U*UA) = tr(A*A) = N(A).

Similarly, using the fact that tr(XY) = tr(YX),

    N(AU) = tr[(AU)*(AU)] = tr(U*A*AU) = tr(A*AUU*) = tr(A*A) = N(A).

Finally, by the above,

    N(I_n − U^{-1}A) = N[U(I_n − U^{-1}A)] = N(U − A) = N(A − U).

2.30 Since A*A = AA* we have that A^{-1}(A*)^{-1} = (A*)^{-1}A^{-1}. But

    A^{-1}A = I ⟹ A*(A^{-1})* = I ⟹ (A^{-1})* = (A*)^{-1}.

It follows that

    A^{-1}(A^{-1})* = (A^{-1})*A^{-1}

and so A^{-1} is normal.
If A* = a_0I + a_1A + ⋯ + a_{r−1}A^{r−1} then clearly A*A = AA*. Suppose conversely that A is normal. Then there is a unitary matrix P and a diagonal matrix D such that

    A = P^{-1}DP,   A* = P^{-1}D*P.

Let λ_1, ..., λ_r be the distinct elements of D. Consider the equations

    λ̄_1 = a_0 + a_1λ_1 + a_2λ_1² + ⋯ + a_{r−1}λ_1^{r−1}
     ⋮
    λ̄_r = a_0 + a_1λ_r + a_2λ_r² + ⋯ + a_{r−1}λ_r^{r−1}.

Since λ_1, ..., λ_r are distinct the (Vandermonde) coefficient matrix has non-zero determinant and so the system has a unique solution. We then have

    D* = a_0I + a_1D + a_2D² + ⋯ + a_{r−1}D^{r−1}

and consequently

    A* = P^{-1}D*P = P^{-1}(a_0I + a_1D + ⋯ + a_{r−1}D^{r−1})P
       = a_0I + a_1A + ⋯ + a_{r−1}A^{r−1}.
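The Vandermonde construction can be carried out numerically (an editorial addition; the normal matrix here is built from a random unitary P and random distinct eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
# A normal matrix A = P^{-1} D P with P unitary and D diagonal
P, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
lam = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # distinct entries
A = P.conj().T @ np.diag(lam) @ P

# Solve the Vandermonde system  conj(lam_i) = sum_k a_k lam_i^k
V = np.vander(lam, increasing=True)
a = np.linalg.solve(V, lam.conj())

Astar = sum(a[k] * np.linalg.matrix_power(A, k) for k in range(n))
print(np.allclose(Astar, A.conj().T, atol=1e-6))  # True: A* is a polynomial in A
```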

2.31 Suppose that A is normal and let B = g(A). There is a unitary matrix P and a diagonal matrix D such that

    A = P^{-1}DP.

Consequently we have

    B = g(A) = P^{-1}g(D)P,   B* = P^{-1}g(D)*P,

and so

    B*B = P^{-1}g(D)*P P^{-1}g(D)P = P^{-1}g(D)*g(D)P

and similarly

    BB* = P^{-1}g(D)g(D)*P.

Since g(D) and g(D)* are diagonal matrices, it follows that B*B = BB* and so B is normal.
We have that

    (A + Bi)*(A + Bi) = (A* − B*i)(A + Bi) = (A − Bi)(A + Bi) = A² + (AB − BA)i + B²

and similarly (A + Bi)(A + Bi)* = A² − (AB − BA)i + B². It follows that A + Bi is normal if and only if AB = BA.
2.33 To get −A multiply each row of A by −1. Then clearly det(−A) = (−1)^n det A. If n is odd then

    det A = det(A^t) = det(−A) = (−1)^n det A = −det A

and so det A = 0.
Since x^tAx is a 1 × 1 matrix we have

    x^tAx = (x^tAx)^t = x^tA^tx = −x^tAx

and so x^tAx = 0.
Let Ax = λx and let stars denote transposes of complex conjugates. Then we have x*Ax = λx*x. Taking the star of each side and using A* = A^t = −A, we obtain

    λ̄x*x = (x*Ax)* = x*A*x = −x*Ax = −λx*x.

Since x*x ≠ 0 it follows that λ̄ = −λ. Thus λ = iμ where μ ∈ ℝ \ {0}.
If x = y + iz then from Ax = iμx we obtain A(y + iz) = iμ(y + iz) and so, equating real and imaginary parts, Ay = −μz, Az = μy. Now

    μy^ty = y^t(Az) = (A^ty)^tz = −(Ay)^tz = μz^tz

and so y^ty = z^tz. Also, μy^tz = −y^tAy = 0 (by the first part of the question). If, therefore, Au = 0 then

    μu^ty = u^t(Az) = −(Au)^tz = 0

and similarly

    μu^tz = −u^tAy = (Au)^ty = 0.
For the last part, we have

    A − λI = [ -λ   2  -2 ]
             [ -2  -λ  -1 ]
             [  2   1  -λ ]

and hence det(A − λI) = −λ(λ² + 9), so the eigenvalues are 0 and ±3i.
A normalised eigenvector corresponding to 0 is

    u = (1/3)(−1, 2, 2)^t.

To find y, z as above, choose y perpendicular to u, say

    y = (1/√2)(0, −1, 1)^t.

Then we have

    Ay = (1/√2)(−4, −1, −1)^t = −3z,

which gives

    z = (1/(3√2))(4, 1, 1)^t.

Relative to the basis {u, y, z} the representing matrix is now

    [ 0  0  0 ]
    [ 0  0  3 ]
    [ 0 -3  0 ].

The required orthogonal matrix P is then

    P = [ -1/3    0     4/(3√2) ]
        [  2/3  -1/√2   1/(3√2) ]
        [  2/3   1/√2   1/(3√2) ].
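The whole computation is easy to verify numerically (an editorial addition):

```python
import numpy as np

A = np.array([[0., 2., -2.],
              [-2., 0., -1.],
              [2., 1., 0.]])

# Eigenvalues: 0 and ±3i
print(np.sort(np.linalg.eigvals(A).imag))   # [-3, 0, 3]

s2 = np.sqrt(2.0)
u = np.array([-1., 2., 2.]) / 3.0
y = np.array([0., -1., 1.]) / s2
z = np.array([4., 1., 1.]) / (3.0 * s2)
P = np.column_stack([u, y, z])

print(np.allclose(P.T @ P, np.eye(3)))      # True: P is orthogonal
print(np.round(P.T @ A @ P, 10))            # [[0,0,0],[0,0,3],[0,-3,0]]
```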

2.34 Let Q be the matrix that represents the change to a new basis with respect to which q is in normal form. Then x^tAx becomes y^tBy where x = Qy and B = Q^tAQ. Now

    q(x) = q′(y) = y_1² + ⋯ + y_p² − y_{p+1}² − ⋯ − y_{p+m}²

where p − m is the signature of q and p + m is the rank of q. Notice that the rank of q is equal to the rank of the matrix B, which is in turn equal to the rank of the matrix A (since Q is non-singular), and so p and m are determined by the rank and the signature.
Now if q has rank equal to its signature then clearly m = 0. Hence y^tBy ≥ 0 for all y ∈ ℝ^n since it is a sum of squares. Consequently x^tAx ≥ 0 for all x ∈ ℝ^n.
Conversely, if x^tAx ≥ 0 for all x ∈ ℝ^n then y^tBy ≥ 0 for all y ∈ ℝ^n. Choose y = (0, ..., 0, y_i, 0, ..., 0). Now the coefficient of y_i² must be 0 or 1, but not −1. Therefore there are no terms of the form −y_i², so m = 0 and q has rank equal to its signature.

If the rank and signature are both equal to n then m = 0 and p = n. Hence

    q(x) = y_1² + ⋯ + y_n².

But a sum of squares is zero if and only if each term is zero, so x^tAx ≥ 0 with equality only when x = 0.
Conversely, if x^tAx > 0 for all x ≠ 0 then y^tBy > 0 for all y ≠ 0, so m = 0, for otherwise we can choose

    y = (0, ..., 0, y_{p+1}, 0, ..., 0)

with y_{p+1} = 1 to obtain y^tBy < 0. Also, x^tAx = 0 only for x = 0 gives y^tBy = 0 only for y = 0. If p < n then, since we have m = 0, choose y = (0, ..., 0, 1) to get y^tBy = 0 with y ≠ 0. Hence p = n as required.
2.35 The quadratic form q can be reduced to normal form either by completing squares or by row and column operations. We solve the problem by completing squares. We have

    q(x) = x_1² + x_2² − x_3² + 2x_1x_2 − 2x_1x_3
         = (x_1 + x_2)² + x_1² − (x_1 + x_3)²

and so the normal form of q has matrix

    [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0 -1 ].

Since the rank of q is 3 and its signature is 1, q is neither positive definite nor positive semi-definite.
Coordinates (x_1, x_2, x_3) with respect to the standard basis become (x_1 + x_2, x_1, x_1 + x_3) in the new basis. Therefore the new basis elements can be taken as the columns of the inverse of

    [ 1  1  0 ]
    [ 1  0  0 ]
    [ 1  0  1 ],

i.e. {(0, 1, 0), (1, −1, −1), (0, 0, 1)}.
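The change of basis can be checked numerically (an editorial addition; the explicit form of q here is the one reconstructed above from the completed squares):

```python
import numpy as np

# Matrix of q(x) = x1^2 + x2^2 - x3^2 + 2x1x2 - 2x1x3
A = np.array([[1., 1., -1.],
              [1., 1., 0.],
              [-1., 0., -1.]])

# New basis: columns of the inverse of [[1,1,0],[1,0,0],[1,0,1]]
N = np.linalg.inv(np.array([[1., 1., 0.],
                            [1., 0., 0.],
                            [1., 0., 1.]]))

print(N)                          # columns (0,1,0), (1,-1,-1), (0,0,1)
print(np.round(N.T @ A @ N, 10))  # diag(1, 1, -1)
```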


2.36 Take

    g((x_1, x_2), (y_1, y_2)) = ½[q(x_1 + y_1, x_2 + y_2) − q(x_1, x_2) − q(y_1, y_2)]
                              = x_1y_1 + (3/2)(x_1y_2 + x_2y_1)

and

    h((x_1, x_2), (y_1, y_2)) = x_1y_1 + 3x_1y_2.

We have g(x, x) = h(x, x) = q(x), and so the matrix of q relative to the standard basis is

    [ 1    3/2 ]
    [ 3/2   0  ].

Completing squares gives (x_1 + (3/2)x_2)² − (9/4)x_2². The signature is then 0 and the rank is 2. The form is neither positive definite nor positive semi-definite.
2.37 In matrix notation, the quadratic form is

    x^tAx = [x y z] [  4  -1   1 ] [x]
                    [ -1   4  -1 ] [y]
                    [  1  -1   4 ] [z].

It is readily seen that the eigenvalues of A are 3 (of algebraic multiplicity 2) and 6. An orthogonal matrix P such that P^tAP is diagonal is

    P = [ 1/√6   1/√2   1/√3 ]
        [ 2/√6    0    -1/√3 ]
        [ 1/√6  -1/√2   1/√3 ].

Changing coordinates by setting

    [u; v; w] = P^t [x; y; z]

transforms the original quadratic form to

    3u² + 3v² + 6w²,

which is positive definite.
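A direct check of the diagonalisation (an editorial addition; the middle column of P is one of several valid choices in the λ = 3 eigenspace):

```python
import numpy as np

A = np.array([[4., -1., 1.],
              [-1., 4., -1.],
              [1., -1., 4.]])

P = np.column_stack([
    np.array([1., 2., 1.]) / np.sqrt(6.0),    # eigenvalue 3
    np.array([1., 0., -1.]) / np.sqrt(2.0),   # eigenvalue 3
    np.array([1., -1., 1.]) / np.sqrt(3.0),   # eigenvalue 6
])

print(np.allclose(P.T @ P, np.eye(3)))  # True: P is orthogonal
print(np.round(P.T @ A @ P, 10))        # diag(3, 3, 6)
```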

2.38 (1) We have

    2y² − z² + xy + xz = 2(y + ¼x)² − ⅛x² + xz − z²
                       = 2(y + ¼x)² − (z − ½x)² + ⅛x².

Thus the rank is 3 and the signature is 1.
(2) In 2xy − xz − yz put x = X + Y, y = X − Y, z = Z to obtain

    2(X² − Y²) − (X + Y)Z − (X − Y)Z = 2X² − 2Y² − 2XZ
                                     = 2(X − ½Z)² − ½Z² − 2Y².

Thus the rank is 3 and the signature is −1.
(3) In yz + xz + xy + xt + yt + zt put

    x = X + Y,  y = X − Y,  z = Z,  t = T.

Then we obtain

    (X² − Y²) + (X − Y)Z + (X + Y)Z + (X + Y)T + (X − Y)T + ZT
    = X² − Y² + 2XZ + 2XT + ZT
    = (X + Z + T)² − Y² − Z² − T² − ZT
    = (X + Z + T)² − (T + ½Z)² − ¾Z² − Y².

Thus the rank is 4 and the signature is −2.
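The three rank/signature claims can be confirmed from the eigenvalue signs of the associated symmetric matrices (an editorial addition):

```python
import numpy as np

def inertia(M):
    # (number of positive, number of negative) eigenvalues
    ev = np.linalg.eigvalsh(M)
    return (int(np.sum(ev > 1e-9)), int(np.sum(ev < -1e-9)))

# (1) 2y^2 - z^2 + xy + xz        in (x, y, z)
M1 = np.array([[0., .5, .5], [.5, 2., 0.], [.5, 0., -1.]])
# (2) 2xy - xz - yz               in (x, y, z)
M2 = np.array([[0., 1., -.5], [1., 0., -.5], [-.5, -.5, 0.]])
# (3) xy + xz + yz + xt + yt + zt in (x, y, z, t)
M3 = 0.5 * (np.ones((4, 4)) - np.eye(4))

print(inertia(M1))  # (2, 1): rank 3, signature 1
print(inertia(M2))  # (1, 2): rank 3, signature -1
print(inertia(M3))  # (1, 3): rank 4, signature -2
```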


2.39 (1) The matrix in question is

    A = [  1  -1   2 ]
        [ -1   2  -3 ]
        [  2  -3   9 ].

Now

    x² + 2y² + 9z² − 2xy + 4xz − 6yz
    = (x − y + 2z)² + y² + 5z² − 2yz
    = (x − y + 2z)² + (y − z)² + 4z²
    = ξ² + η² + ζ²

where ξ = x − y + 2z, η = y − z, ζ = 2z. Then

    x = ξ + η − ½ζ,  y = η + ½ζ,  z = ½ζ,

so if we let

    P = [ 1  1  -½ ]
        [ 0  1   ½ ]
        [ 0  0   ½ ]

then we have

    [x; y; z] = P [ξ; η; ζ]

and the normal form is diag{1, 1, 1}.
(2) Here the matrix is

    A = [ 0  2  0 ]
        [ 2  0  1 ]
        [ 0  1  0 ].

Now, with X = x + y and Y = x − y,

    4xy + 2yz = (x + y)² − (x − y)² + 2yz
              = X² − Y² + (X − Y)z
              = (X + ½z)² − Y² − Yz − ¼z²
              = (X + ½z)² − (Y + ½z)²
              = ξ² − η²

where ξ = x + y + ½z, η = x − y + ½z and ζ = z, say. Then

    x = ½ξ + ½η − ½ζ,  y = ½ξ − ½η,  z = ζ,

so if we let

    P = [ ½   ½  -½ ]
        [ ½  -½   0 ]
        [ 0   0   1 ]

then we have

    [x; y; z] = P [ξ; η; ζ]

and the normal form is diag{1, −1, 0}.
(3) Here we have

    A = [  1   1   0  -1 ]
        [  1   4   3  -4 ]
        [  0   3   1  -7 ]
        [ -1  -4  -7  -4 ].

The quadratic form is

    x² + 4y² + z² − 4t² + 2xy − 2xt + 6yz − 8yt − 14zt
    = (x + y − t)² + 3y² + z² − 5t² + 6yz − 6yt − 14zt
    = (x + y − t)² + 3(y + z − t)² − 2z² − 8t² − 8zt
    = (x + y − t)² + 3(y + z − t)² − 2(z + 2t)²
    = ξ² + η² − ζ²

where ξ = x + y − t, η = √3(y + z − t), ζ = √2(z + 2t) and τ = t, say. Then

    x = ξ − (1/√3)η + (1/√2)ζ − 2τ,
    y = (1/√3)η − (1/√2)ζ + 3τ,
    z = (1/√2)ζ − 2τ,
    t = τ,

and so

    P = [ 1  -1/√3   1/√2  -2 ]
        [ 0   1/√3  -1/√2   3 ]
        [ 0    0     1/√2  -2 ]
        [ 0    0      0     1 ]

gives

    [x; y; z; t] = P [ξ; η; ζ; τ]

and the normal form is diag{1, 1, −1, 0}.

2.40 Here we have

    Σ_{r<s} (x_r − x_s)²
    = (x_1 − x_2)² + (x_1 − x_3)² + ⋯ + (x_{n−1} − x_n)²
    = (n − 1)(x_1² + ⋯ + x_n²) − 2(x_1x_2 + x_1x_3 + ⋯ + x_{n−1}x_n)
    = x^tAx

where

    A = [ n-1  -1   -1  ...  -1 ]
        [ -1  n-1   -1  ...  -1 ]
        [ -1   -1  n-1  ...  -1 ]
        [  ⋮    ⋮    ⋮         ⋮ ]
        [ -1   -1   -1  ... n-1 ].

Now, by adding 1/(n−1) times the first column to columns 2, ..., n and adding 1/(n−1) times the first row to rows 2, ..., n, then multiplying rows 2, ..., n and columns 2, ..., n by √((n−1)/n), we see that A is congruent to the matrix

    [ n-1   0    0  ...   0 ]
    [  0  n-2   -1  ...  -1 ]
    [  0   -1  n-2  ...  -1 ]
    [  ⋮    ⋮    ⋮         ⋮ ]
    [  0   -1   -1  ... n-2 ].

Repeating this process we can show that A is congruent to the matrix

    [ n-1   0    0    0  ...   0 ]
    [  0  n-2    0    0  ...   0 ]
    [  0    0  n-3   -1  ...  -1 ]
    [  0    0   -1  n-3  ...  -1 ]
    [  ⋮    ⋮    ⋮    ⋮         ⋮ ]
    [  0    0   -1   -1  ... n-3 ].

Continuing in this way, we see that A is congruent to the diagonal matrix

    diag{n−1, n−2, n−3, ..., 2, 1, 0}.

Consequently the rank is n − 1 and the signature is n − 1.
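A quick eigenvalue check of this inertia result for a specific n (an editorial addition):

```python
import numpy as np

# sum_{r<s} (x_r - x_s)^2 = x^t A x with A = nI - J (J the all-ones matrix)
n = 6
A = n * np.eye(n) - np.ones((n, n))

ev = np.linalg.eigvalsh(A)
pos = int(np.sum(ev > 1e-9))
neg = int(np.sum(ev < -1e-9))
print(pos, neg)   # n-1 positive, none negative: rank = signature = n-1
```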
2.41 We have

    Σ_{r,s} (λa_ra_s + a_r + a_s)x_rx_s
    = λ(a_1x_1 + ⋯ + a_nx_n)² + 2(a_1x_1 + ⋯ + a_nx_n)(x_1 + ⋯ + x_n).

Now let

    y_1 = a_1x_1 + ⋯ + a_nx_n,   y_2 = x_1 + ⋯ + x_n.

Then the form is λy_1² + 2y_1y_2, which can be written as

    λ(y_1 + (1/λ)y_2)² − (1/λ)y_2²   if λ ≠ 0;
    ½(y_1 + y_2)² − ½(y_1 − y_2)²    if λ = 0.

Hence in either case the rank is 2 and the signature is 0.
2.42 Since λ is an eigenvalue of A there exist a_1, ..., a_n not all zero such that Ax = λx where x = [a_1 ... a_n]^t. Then x^tAx = λx^tx and so, if Q = x^tAx, then we have

    λ = Q / (x^tx).

2.43 Let Q(x, x) = x^tA^tAx = (Ax)^t(Ax). Since det A ≠ 0 we may apply the non-singular linear transformation described by y = Ax, so that Q(x, x) ↦ Q(y, y) where

    Q(y, y) = y^ty = y_1² + ⋯ + y_n².

Thus Q is positive definite.



2.44 Let {u_1, ..., u_n} be an orthonormal basis of the real inner product space ℝ^n under the inner product given by (x|y) = f(x, y). Let x = Σ_{i=1}^n x_iu_i and y = Σ_{i=1}^n y_iu_i. Then

    f(x, y) = (x|y) = Σ_{i=1}^n x_iy_i

and, for some real symmetric matrix B = [b_{ij}]_{n×n},

    g(x, y) = Σ_{i,j=1}^n b_{ij}x_iy_j = x^tBy

where

    x = [x_1 ... x_n]^t,   y = [y_1 ... y_n]^t.

Now we know that there is an orthogonal matrix P such that P^tBP is diagonal, i.e. that there is an ordered basis {v_1, ..., v_n} that is orthonormal (relative to the inner product determined by f) and consists of eigenvectors of B. Let x = Σ_{i=1}^n ξ_iv_i and y = Σ_{i=1}^n η_iv_i. Then, relative to this orthonormal basis, we have

    f(x, y) = Σ_{i=1}^n ξ_iη_i.

Consequently, if λ_1, ..., λ_n are the eigenvalues of B,

    g(x, y) = Σ_{i=1}^n λ_iξ_iη_i.

Observe now that

    g − λf is degenerate ⟺ (∃x ≠ 0)(∀y) (g − λf)(x, y) = 0.


Since, from the above expressions for f(x, y) and g(x, y) computed relative to the orthonormal basis {v_1, ..., v_n},

    (g − λf)(x, y) = Σ_{i=1}^n (λ_i − λ)ξ_iη_i,

it follows that g − λf is degenerate if and only if λ = λ_i for some i.
Suppose now that A, B are the matrices of f, g respectively with respect to some ordered basis of ℝ^n. If

    x = [x_1 ... x_n]^t,   y = [y_1 ... y_n]^t

are the coordinate vectors of x, y relative to this basis then we have

    (g − λf)(x, y) = x^t(B − λA)y.

Thus we see that g − λf is degenerate if and only if λ is a root of the equation det(B − λA) = 0.
For the last part, observe that the matrices of 2xy + 2yz and x² − y² + 2xz relative to the canonical basis of ℝ³ are respectively

    A = [ 0  1  0 ]        B = [ 1   0  1 ]
        [ 1  0  1 ],           [ 0  -1  0 ]
        [ 0  1  0 ]            [ 1   0  0 ].

Since

    det(B − λA) = det [  1  -λ   1 ]
                      [ -λ  -1  -λ ] = λ² + 1,
                      [  1  -λ   0 ]

the equation det(B − λA) = 0 has no real solutions. But, as observed above, if a simultaneous reduction to sums of squares were possible, such solutions would be the coefficients in one of these sums of squares. Since neither of the given forms is positive definite, the conclusion follows.
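The determinant identity used in this counterexample can be spot-checked numerically (an editorial addition):

```python
import numpy as np

# Matrices of 2xy + 2yz and x^2 - y^2 + 2xz in the canonical basis
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
B = np.array([[1., 0., 1.], [0., -1., 0.], [1., 0., 0.]])

# det(B - lam*A) = lam^2 + 1, which has no real roots
for lam in (0.0, 1.0, 2.0, -3.0):
    print(lam, np.linalg.det(B - lam * A))  # lam^2 + 1
```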
2.45 The exponent is −x^tAx where

    A = [ 1   ½   ½ ]
        [ ½   1   ½ ]
        [ ½   ½   1 ].

The quadratic form x^tAx is positive definite since

    x² + y² + z² + xy + xz + yz = (x + ½y + ½z)² + ¾y² + ¾z² + ½yz,

which is greater than 0 for all x ≠ 0. So the integral converges to π^{3/2}/√(det A), i.e. to √2 π^{3/2} since det A = ½.
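A crude numerical check of the Gaussian integral formula (an editorial addition; the grid extent and spacing are arbitrary choices):

```python
import numpy as np

# Matrix of x^2 + y^2 + z^2 + xy + xz + yz
A = np.array([[1., .5, .5], [.5, 1., .5], [.5, .5, 1.]])
exact = np.pi**1.5 / np.sqrt(np.linalg.det(A))   # pi^{3/2}/sqrt(det A)

# Grid approximation of the triple integral of exp(-x^t A x)
g = np.linspace(-6.0, 6.0, 97)
h = g[1] - g[0]
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
q = X**2 + Y**2 + Z**2 + X*Y + X*Z + Y*Z
numeric = float(np.sum(np.exp(-q))) * h**3

print(np.linalg.det(A))   # 0.5
print(numeric, exact)     # both ~7.87
```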

Test paper 1

Time allowed : 3 hours


(Allocate 20 marks for each question)
1 Let V be a finite-dimensional vector space. Prove that if f ∈ L(V, V) then
(a) dim V = dim Im f + dim Ker f;
(b) the properties
  (i) f is surjective,
  (ii) f is injective,
are equivalent;
(c) V = Im f ⊕ Ker f if and only if Im f = Im f²;
(d) Im f = Ker f if and only if the following properties are satisfied:
  (i) f² = 0,
  (ii) dim V = n is even,
  (iii) dim Im f = ½n.
2 Suppose that t ∈ L(ℂ⁵, ℂ⁵) is represented with respect to the basis

    {(1,0,0,0,0), (1,1,0,0,0), (1,1,1,0,0), (1,1,1,1,0), (1,1,1,1,1)}

by the matrix

    [ 1   8   6   4   1 ]
    [ 0   1   0   0   0 ]
    [ 0   1   2   1   0 ]
    [ 0  -1  -1   0   1 ]
    [ 0  -5  -4  -3  -2 ].

Find a basis of ℂ⁵ with respect to which the matrix of t is in Jordan normal form.
3 Let φ_1, ..., φ_n ∈ (ℝ^n)^d. Prove that the solution set C of the linear inequalities

    φ_1(x) ≥ 0,  φ_2(x) ≥ 0,  ...,  φ_n(x) ≥ 0

satisfies
(a) α, β ∈ C ⟹ α + β ∈ C;
(b) α ∈ C, t ∈ ℝ, t ≥ 0 ⟹ tα ∈ C.
Show that if φ_1, ..., φ_n form a basis of (ℝ^n)^d then

    C = {t_1α_1 + ⋯ + t_nα_n | t_i ∈ ℝ, t_i ≥ 0}

where {α_1, ..., α_n} is the basis of ℝ^n dual to the basis {φ_1, ..., φ_n}.
Hence write down the solution of the system of inequalities

    φ_1(x) ≥ 0,  φ_2(x) ≥ 0,  φ_3(x) ≥ 0,  φ_4(x) ≥ 0

where φ_1 = [4, 5, −2, 11], φ_2 = [3, 4, −2, 6], φ_3 = [2, 3, −1, 4] and φ_4 = [0, 0, 0, 1].
4 Let A be a real orthogonal matrix. If (A − λI)²x = 0 and y = (A − λI)x show, by considering y^ty, that y = 0. Hence prove that an orthogonal matrix satisfies an equation without repeated roots.
Prove that a real orthogonal matrix with all its eigenvalues real is necessarily symmetric.
5 Prove that if a real quadratic form in n variables is reduced by a real non-singular linear transformation to a form

    y_1² + ⋯ + y_p² − y_{p+1}² − ⋯ − y_{p+q}²

having p positive, q negative, and n − p − q zero coefficients, then p and q do not depend on the choice of transformation.
For the form

    a_1x_1x_2 + a_2x_2x_3 + ⋯ + a_{n−1}x_{n−1}x_n

in which each a_i ≠ 0, show that p = q; and for the form

    a_1x_1x_2 + a_2x_2x_3 + ⋯ + a_{n−1}x_{n−1}x_n + a_nx_nx_1

in which each a_i ≠ 0, show that

    p = q if n is odd.
Test paper 2

Time allowed : 3 hours


(Allocate 20 marks for each question)
1 Let V be a finite-dimensional vector space and let e ∈ L(V, V) be a projection. Prove that

    Ker e = Im(id_V − e).

If t ∈ L(V, V) show that Im e is t-invariant if and only if e∘t∘e = t∘e; and that Ker e is t-invariant if and only if e∘t∘e = e∘t. Deduce that Im e and Ker e are t-invariant if and only if e and t commute.
2 If U = [u_{rs}] ∈ Mat_{n×n}(ℂ) is given by

    u_{rs} = 1 if s = r + 1; 0 otherwise,

and J = [j_{rs}] ∈ Mat_{n×n}(ℂ) is given by

    j_{rs} = 1 if r + s = n + 1; 0 otherwise,

show that U^t = JUJ. Deduce that if A ∈ Mat_{n×n}(ℂ) then there is an invertible matrix P such that P^{-1}AP = A^t.
Find such a matrix P when A is the matrix

    [  0   4   4 ]
    [  2   2   1 ]
    [ -3  -6  -5 ].
3 Let V be a vector space of dimension n over a field F. Suppose that W is a subspace of V with dim W = m. Show that
(a) dim W^⊥ = n − m;
(b) (W^⊥)^⊥ = W.
If f, g ∈ V* are such that there is no λ ∈ F \ {0} with f = λg, show that Ker f ∩ Ker g is of dimension n − 2.
4 Let V be a finite-dimensional complex inner product space and let f : V → V be a normal transformation. Prove that

    f²(x) = 0 ⟹ f(x) = 0

and deduce that the minimum polynomial of f has no repeated roots.
If e : V → V is a projection, show that the following statements are equivalent:
(a) e is normal;
(b) e is self-adjoint;
(c) e is the orthogonal projection of V onto Im e.
Show finally that a linear transformation h : V → V is normal if and only if there are complex scalars λ_1, ..., λ_k and self-adjoint projections e_1, ..., e_k on V such that
(1) h = λ_1e_1 + ⋯ + λ_ke_k;
(2) id_V = e_1 + ⋯ + e_k;
(3) e_i∘e_j = 0 whenever i ≠ j.
5 (a) Show that the quadratic form x^tAx is positive definite if and only if there exists a real non-singular matrix P such that A = PP^t. Show also that if Σ_{i,j=1}^n b_{ij}x_ix_j > 0 for all non-zero vectors x then Σ_{i,j=1}^n b_{ij}(p_ix_i)(p_jx_j) ≥ 0 for all x. Hence show that if x^tAx and x^tBx are both positive definite then so is

    Σ_{i,j=1}^n a_{ij}b_{ij}x_ix_j.

(b) For what values of k is the quadratic form

    k Σ_{r=1}^n x_r² + Σ_{i<j} x_ix_j

positive definite?
Test paper 3

Time allowed : 3 hours


(Allocate 20 marks for each question)
1 If U, W are subspaces of a finite-dimensional vector space V, prove that

    dim U + dim W = dim(U + W) + dim(U ∩ W).

Suppose now that V = U ⊕ W. If S is any subspace of V, prove that

    2 dim S − dim V ≤ dim[(U ∩ S) ⊕ (W ∩ S)] ≤ dim S.

In the case where V = ⊕_{i=1}^n U_i, find similar upper and lower bounds for

    dim ⊕_{i=1}^n (U_i ∩ S).
2 For the matrix

    A = [  0  1  0 ]
        [ -1  1  1 ]
        [ -1  0  2 ]

find a non-singular matrix P such that P^{-1}AP is in Jordan normal form.
If λ is positive, obtain real values of x_{ij} such that C² = D where

    C = [ x_11  x_12  x_13 ]        D = [ λ  1  0 ]
        [  0    x_22  x_23 ],           [ 0  λ  1 ]
        [  0     0    x_33 ]            [ 0  0  λ ].

Hence, or otherwise, find a real matrix X such that X² = A.


3 Let V be a vector space of dimension n over a field F and let W, X be subspaces of V. Prove that

    (W + X)^⊥ = W^⊥ ∩ X^⊥   and   (W ∩ X)^⊥ = W^⊥ + X^⊥.

Given g_1, ..., g_n ∈ V^d, prove that the following conditions concerning f ∈ V^d are equivalent:
(1) ⋂_{i=1}^n Ker g_i ⊆ Ker f;
(2) f is a linear combination of g_1, ..., g_n.
4 Show that the matrix

    A =

is orthogonal. Find its eigenvalues and show that the matrix A² − I_4 has characteristic equation

Find a unitary matrix U such that U^{-1}AU is diagonal.


5 Let Q(k, r) be the quadratic form

    k(x_1² + x_2² + ⋯ + x_r²) − (x_1 + x_2 + ⋯ + x_r)².

Show that

    Q(k, r) = (k/(k−1)) Q(k−1, r−1) + (k−1)y_r²

where y_r is a homogeneous linear function of x_1, ..., x_r.
Hence find the rank and signature of

    n(x_1² + x_2² + ⋯ + x_n²) − (x_1 + x_2 + ⋯ + x_n)².

Test paper 4

Time allowed : 3 hours


(Allocate 20 marks for each question)

1 Let F be the vector space of infinitely differentiable complex functions and let P_n be the subspace of complex polynomial functions of degree less than n. For every λ ∈ ℂ define P_{n,λ} = {e^{λz}p | p ∈ P_n}. Show that P_{n,λ} is a subspace of F and that

    B = {e^{λz}, ze^{λz}, ..., z^{n−1}e^{λz}}

is a basis of P_{n,λ}. If D denotes the differentiation map, prove that
(1) D^n(e^{−λz}f) = e^{−λz}(D − λ id)^n f;
(2) P_{n,λ} = Ker(D − λ id)^n;
(3) P_{n,λ} is D-invariant;
(4) B is a cyclic basis for D − λ id.
If D_{n,λ} denotes the restriction of D to P_{n,λ}, find the characteristic polynomial of D_{n,λ}. If μ ≠ λ show, by considering the characteristic polynomial of D_{n,λ} + (μ − λ) id, that (D_{n,λ} − μ id)^n is invertible.

2 Given A = [ a  b ; c  d ], use the Cayley–Hamilton theorem and euclidean division to show that every positive power of A can be written in the form

    A^n = α_nA + β_nI.

If the eigenvalues of A are λ_1, λ_2 show that

    A^n = ((λ_1^n − λ_2^n)/(λ_1 − λ_2)) A + ((λ_1λ_2^n − λ_2λ_1^n)/(λ_1 − λ_2)) I   if λ_1 ≠ λ_2;
    A^n = nλ_1^{n−1} A + (1 − n)λ_1^n I   if λ_1 = λ_2.

Hence solve the system of difference equations

    y_{n+1} = 2x_n + y_n

where x_1 = 0 and y_1 = 1.
3 Suppose that f ∈ L(ℂ^n, ℂ^n) and that every eigenvalue of f is 0. Show that f is nilpotent and explain how to find dim Ker f from the Jordan normal form of f.
Let f, g ∈ L(ℂ⁶, ℂ⁶) be nilpotent with the same minimum polynomial and dim Ker f = dim Ker g. Show that f, g have the same Jordan normal form. By means of an example, show that this fails in general for f, g ∈ L(ℂ⁷, ℂ⁷).
Deduce that if s, t ∈ L(ℂ^n, ℂ^n) have the same characteristic polynomial

    ∏_{i=1}^r (X − α_i)^{k_i}

and the same minimum polynomial, and if

    dim Ker(s − α_i id) = dim Ker(t − α_i id)

for 1 ≤ i ≤ r, then s and t have the same Jordan normal form provided k_i ≤ 6 for 1 ≤ i ≤ r.
4 Let V be a vector space of dimension n over a field F.
(i) If s ∈ L(V, V), show that s∘s = 0 if and only if Im s ⊆ Ker s, in which case dim Im s ≤ ½n.
(ii) Let p ∈ L(V, V) be such that p^n = 0 and p^{n−1} ≠ 0. Show that there is a basis B = {x_1, ..., x_n} of V such that p(x_j) = x_{j+1} for j = 1, ..., n − 1 and p(x_n) = 0.
Show that if t = Σ_{i=1}^n λ_ip^{i−1} where each λ_i ∈ F then t commutes with p. Conversely, suppose that t ∈ L(V, V) commutes with p and is represented relative to the basis B by the matrix [α_{ij}]_{n×n}. Prove by induction that

    t(x_j) = Σ_i α_{i1} x_{i+j−1}   (j = 1, ..., n)

and deduce that t is a linear combination of id, p, ..., p^{n−1}.

5 Show that each of the quadratic forms

    7y_1² + 6y_2² + 5y_3² + 4y_1y_2 + 4y_2y_3

can be reduced by an orthogonal transformation to the same form

    a_1z_1² + a_2z_2² + a_3z_3².

Obtain an orthogonal transformation which will convert the first of the above forms into the second.
