SCHAUM'S OUTLINE SERIES

THEORY and PROBLEMS
of
MATRICES

by FRANK AYRES, JR.

including Problems Completely Solved in Detail

SCHAUM PUBLISHING CO.
NEW YORK
SCHAUM'S OUTLINE OF
THEORY AND PROBLEMS
OF
MATRICES

BY
FRANK AYRES, JR., Ph.D.
Formerly Professor and Head, Department of Mathematics
Dickinson College

SCHAUM'S OUTLINE SERIES
McGRAW-HILL BOOK COMPANY
New York, St. Louis, San Francisco, Toronto, Sydney
Copyright © 1962 by McGraw-Hill, Inc. All Rights Reserved. Printed in the United States of America. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

02656

7 8 9 10 SHSH 7 5 4 3 2 1 0
Preface

Elementary matrix algebra has now become an integral part of the mathematical background necessary for such diverse fields as electrical engineering and education, chemistry and sociology, as well as for statistics and pure mathematics. This book, in presenting the more essential material, is designed primarily to serve as a useful supplement to current texts and as a handy reference book for those working in the several fields which require some knowledge of matrix theory. Moreover, the statements of theory and principle are sufficiently complete that the book could be used as a text by itself.

The material has been divided into twenty-six chapters, since the logical arrangement is thereby not disturbed while the usefulness as a reference book is increased. This also permits a separation of the treatment of real matrices, with which the majority of readers will be concerned, from that of matrices with complex elements. Each chapter contains a statement of pertinent definitions, principles, and theorems, fully illustrated by examples. These, in turn, are followed by a carefully selected set of solved problems and a considerable number of supplementary exercises.
The beginning student in matrix algebra soon finds that the solutions of numerical exercises are disarmingly simple. Difficulties are likely to arise from the constant round of definition, theorem, proof. The trouble here is essentially a matter of lack of mathematical maturity, and is normally to be expected, since usually the student's previous work in mathematics has been concerned with the solution of numerical problems while precise statements of principles and proofs of theorems have in large part been deferred for later courses. The aim of the present book is to enable the reader, if he persists through the introductory paragraphs and solved problems in any chapter, to develop a reasonable degree of self-assurance about the material.
The solved problems, in addition to giving more variety to the examples illustrating the theorems, contain most of the proofs of any considerable length together with representative shorter proofs. The supplementary problems call both for the solution of numerical exercises and for proofs. Some of the latter require only proper modifications of proofs given earlier; more important, however, are the many theorems whose proofs require but a few lines. Some are of the type frequently misnamed "obvious" while others will be found to call for considerable ingenuity. None should be treated lightly, however, for it is due precisely to the abundance of such theorems that elementary matrix algebra becomes a natural first course for those seeking to attain a degree of mathematical maturity. While the large number of these problems in any chapter makes it impractical to solve all of them before moving to the next, special attention is directed to the supplementary problems of the first two chapters. A mastery of these will do much to give the reader confidence to stand on his own feet thereafter.
The author wishes to take this opportunity to express his gratitude to the staff of the Schaum Publishing Company for their splendid cooperation.

Frank Ayres, Jr.
Carlisle, Pa.
October, 1962
CONTENTS

Chapter 1. MATRICES .......... Page 1
Matrices. Equal matrices. Sums of matrices. Products of matrices. Products by partitioning.

Chapter 2. SOME TYPES OF MATRICES .......... 10
Triangular matrices. Diagonal matrices. Scalar matrices. The identity matrix. Inverse of a matrix. Transpose of a matrix. Symmetric matrices. Skew-symmetric matrices. Conjugate of a matrix. Hermitian matrices. Skew-Hermitian matrices. Direct sums.

Chapter 3. DETERMINANT OF A SQUARE MATRIX .......... 20
Determinants of orders 2 and 3. Properties of determinants. Minors and cofactors. Algebraic complements.

Chapter 4. EVALUATION OF DETERMINANTS .......... 32
Expansion along a row or column. The Laplace expansion. Expansion along the first row and column. Determinant of a product. Derivative of a determinant.

Chapter 5. EQUIVALENCE .......... 39
Rank of a matrix. Non-singular and singular matrices. Elementary transformations. Inverse of an elementary transformation. Equivalent matrices. Row canonical form. Normal form. Elementary matrices. Canonical sets under equivalence. Rank of a product.

Chapter 6. THE ADJOINT OF A SQUARE MATRIX .......... 49
The adjoint. The adjoint of a product. Minor of an adjoint.

Chapter 7. THE INVERSE OF A MATRIX .......... 55
Inverse of a diagonal matrix. Inverse from the adjoint. Inverse from elementary matrices. Inverse by partitioning. Inverse of symmetric matrices. Right and left inverses of m x n matrices.

Chapter 8. FIELDS .......... 64
Number fields. General fields. Sub-fields. Matrices over a field.

Chapter 9. LINEAR DEPENDENCE OF VECTORS AND FORMS .......... 67
Vectors. Linear dependence of vectors, linear forms, polynomials, and matrices.

Chapter 10. LINEAR EQUATIONS .......... 75
System of non-homogeneous equations. Solution using matrices. Cramer's rule. Systems of homogeneous equations.

Chapter 11. VECTOR SPACES .......... 85
Vector spaces. Sub-spaces. Basis and dimension. Sum space. Intersection space. Null space of a matrix. Sylvester's laws of nullity. Bases and coordinates.

Chapter 12. LINEAR TRANSFORMATIONS .......... 94
Singular and non-singular transformations. Change of basis. Invariant space. Permutation matrix.

Chapter 13. VECTORS OVER THE REAL FIELD .......... 100
Inner product. Length. Triangle inequality. Schwarz inequality. Orthogonal vectors and spaces. Orthonormal basis. Gram-Schmidt orthogonalization process. The Gramian. Orthogonal matrices. Orthogonal transformations. Vector product.

Chapter 14. VECTORS OVER THE COMPLEX FIELD .......... 110
Complex numbers. Inner product. Length. Schwarz inequality. Triangle inequality. Orthogonal vectors and spaces. Orthonormal basis. Gram-Schmidt orthogonalization process. The Gramian. Unitary matrices. Unitary transformations.

Chapter 15. CONGRUENCE .......... 115
Congruent matrices. Congruent symmetric matrices. Canonical forms of real symmetric, skew-symmetric, Hermitian, and skew-Hermitian matrices under congruence.

Chapter 16. BILINEAR FORMS .......... 125
Matrix form. Transformations. Canonical forms. Cogredient transformations. Contragredient transformations. Factorable forms.

Chapter 17. QUADRATIC FORMS .......... 131
Matrix form. Transformations. Canonical forms. Lagrange reduction. Sylvester's law of inertia. Definite and semi-definite forms. Principal minors. Regular form. Kronecker's reduction. Factorable forms.

Chapter 18. HERMITIAN FORMS .......... 146
Matrix form. Transformations. Canonical forms. Definite and semi-definite forms.

Chapter 19. THE CHARACTERISTIC EQUATION OF A MATRIX .......... 149
Characteristic equation and roots. Invariant vectors and spaces.

Chapter 20. SIMILARITY .......... 156
Similar matrices. Reduction to triangular form. Diagonable matrices.

Chapter 21. SIMILARITY TO A DIAGONAL MATRIX .......... 163
Real symmetric matrices. Orthogonal similarity. Pairs of real quadratic forms. Hermitian matrices. Unitary similarity. Normal matrices. Spectral decomposition. Field of values.

Chapter 22. POLYNOMIALS OVER A FIELD .......... 172
Sum, product, quotient of polynomials. Remainder theorem. Greatest common divisor. Least common multiple. Relatively prime polynomials. Unique factorization.

Chapter 23. LAMBDA MATRICES .......... 179
The λ-matrix or matrix polynomial. Sums, products, and quotients. Remainder theorem. Cayley-Hamilton theorem. Derivative of a matrix.

Chapter 24. SMITH NORMAL FORM .......... 188
Smith normal form. Invariant factors. Elementary divisors.

Chapter 25. THE MINIMUM POLYNOMIAL OF A MATRIX .......... 196
Similarity invariants. Minimum polynomial. Derogatory and non-derogatory matrices. Companion matrix.

Chapter 26. CANONICAL FORMS UNDER SIMILARITY .......... 203
Rational canonical form. A second canonical form. Hypercompanion matrix. Jacobson canonical form. Classical canonical form. A reduction to rational canonical form.

INDEX .......... 215

INDEX OF SYMBOLS .......... 219
Chapter 1

Matrices

A RECTANGULAR ARRAY OF NUMBERS enclosed by a pair of brackets, such as

    (a)  [2  3  7]        (b)  [1  3]
         [1 -1  5]             [2  1]
                               [4  7]

and subject to certain rules of operations given below, is called a matrix. The matrix (a) could be considered as the coefficient matrix of the system of homogeneous linear equations

    2x + 3y + 7z = 0
     x -  y + 5z = 0

or as the augmented matrix of the system of non-homogeneous linear equations

    2x + 3y = 7
     x -  y = 5

Later, we shall see how the matrix may be used to obtain solutions of these systems. The matrix (b) could be given a similar interpretation, or we might consider its rows as simply the coordinates of the points (1,3), (2,1), and (4,7) in ordinary space. The matrix will be used later to settle such questions as whether or not the three points lie on the same line through the origin or on the same plane with the origin.

In the matrix

    (1.1)    [a_11  a_12  a_13  ...  a_1n]
             [a_21  a_22  a_23  ...  a_2n]
             [ ...   ...   ...  ...   ...]
             [a_m1  a_m2  a_m3  ...  a_mn]

the numbers or functions a_ij are called its elements. In the double subscript notation, the first subscript indicates the row and the second subscript indicates the column in which the element stands. Thus, all elements in the second row have 2 as first subscript and all the elements in the fifth column have 5 as second subscript. A matrix of m rows and n columns is said to be of order "m by n", or m x n. (In indicating a matrix, pairs of parentheses, ( ), and double bars, || ||, are sometimes used. We shall use the double bracket notation throughout.)

At times the matrix (1.1) will be called "the m x n matrix [a_ij]" or "the m x n matrix A = [a_ij]". When the order has been established, we shall write simply "the matrix A".

SQUARE MATRICES. When m = n, (1.1) is square and will be called a square matrix of order n or an n-square matrix. In a square matrix, the elements a_11, a_22, ..., a_nn are called its diagonal elements. The sum of the diagonal elements of a square matrix A is called the trace of A.
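The notions of order, double-subscript indexing, and trace are easy to experiment with numerically. The following sketch uses Python with NumPy — a library choice made for this illustration only, not something the text itself assumes:

```python
import numpy as np

# Matrix (a) of the text: the coefficient matrix of
#   2x + 3y + 7z = 0
#    x -  y + 5z = 0
A = np.array([[2, 3, 7],
              [1, -1, 5]])
print(A.shape)      # (2, 3): a 2 x 3 matrix, i.e. of order "2 by 3"
print(A[1, 0])      # 1: the element a_21 (first subscript = row, second = column)

# The trace (sum of diagonal elements) is defined for square matrices
S = np.array([[1, 3],
              [2, 1]])
print(np.trace(S))  # 1 + 1 = 2
```

Note that NumPy indexes from zero, so the element written a_21 in the book's notation is `A[1, 0]` here.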
EQUAL MATRICES. Two matrices A = [a_ij] and B = [b_ij] are said to be equal (A = B) if and only if they have the same order and each element of one is equal to the corresponding element of the other, that is, if and only if

    a_ij = b_ij        (i = 1, 2, ..., m; j = 1, 2, ..., n)

Thus, two matrices are equal if and only if one is a duplicate of the other.

ZERO MATRIX. A matrix, every element of which is zero, is called a zero matrix. When A is a zero matrix and there can be no confusion as to its order, we shall write A = 0 instead of displaying the m x n array of zero elements.

SUMS OF MATRICES. If A = [a_ij] and B = [b_ij] are two m x n matrices, their sum (difference), A ± B, is defined as the m x n matrix C = [c_ij], where each element of C is the sum (difference) of the corresponding elements of A and B. Thus, A ± B = [a_ij ± b_ij].

Example 1. If A = [1 2 3] and B = [ 2 3 0], then
                  [0 1 4]         [-1 2 5]

    A + B = [1+2     2+3  3+0]  =  [ 3  5  3]
            [0+(-1)  1+2  4+5]     [-1  3  9]
and
    A - B = [1-2     2-3  3-0]  =  [-1 -1  3]
            [0-(-1)  1-2  4-5]     [ 1 -1 -1]

Two matrices of the same order are said to be conformable for addition or subtraction. Two matrices of different orders cannot be added or subtracted; for example, the matrices (a) and (b) above are non-conformable for addition and subtraction.

The sum of k matrices A is a matrix of the same order as A, each of whose elements is k times the corresponding element of A. We define: if k is any scalar (we call k a scalar to distinguish it from [k], which is a 1 x 1 matrix), then by kA = Ak is meant the matrix obtained from A by multiplying each of its elements by k.

Example 2. If A = [1 -2], then A + A + A = 3A = [3 -6] and 5A = [5(1) 5(-2)] = [ 5 -10]
                  [2  3]                        [6  9]          [5(2) 5(3) ]   [10  15]

By -A, called the negative of A, is meant the matrix obtained from A by multiplying each of its elements by -1, or by simply changing the sign of all of its elements. In particular, we have A + (-A) = 0, where 0 indicates the zero matrix of the same order as A.

Assuming that the matrices A, B, C are conformable for addition,
we state:

    (a) A + B = B + A                     (commutative law)
    (b) A + (B + C) = (A + B) + C         (associative law)
    (c) k(A + B) = kA + kB = (A + B)k,    k a scalar
    (d) There exists a matrix D such that A + D = B.

These laws are a result of the laws of elementary algebra governing the addition of numbers and polynomials. They show, moreover, that conformable matrices obey the same laws of addition as do the elements of these matrices.
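Laws (a)-(d) can be checked on the matrices of Example 1; the matrix C below is introduced only for this check, and NumPy is again used purely for illustration:

```python
import numpy as np

A = np.array([[1, 2, 3], [0, 1, 4]])
B = np.array([[2, 3, 0], [-1, 2, 5]])
C = np.array([[1, 0, 1], [1, 1, 0]])
k = 5

print(np.array_equal(A + B, B + A))               # (a) commutative law
print(np.array_equal(A + (B + C), (A + B) + C))   # (b) associative law
print(np.array_equal(k * (A + B), k * A + k * B)) # (c) scalar multiplication
D = B - A                                         # (d) the matrix D with A + D = B
print(np.array_equal(A + D, B))
```

All four checks print `True`.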
MULTIPLICATION. By the product AB, in that order, of the 1 x m matrix A = [a_11 a_12 ... a_1m] and the m x 1 matrix B = [b_11; b_21; ...; b_m1] is meant the 1 x 1 matrix C = [a_11 b_11 + a_12 b_21 + ... + a_1m b_m1]; that is,

    [a_11 a_12 ... a_1m] [b_11]
                         [b_21]   =   [a_11 b_11 + a_12 b_21 + ... + a_1m b_m1]
                         [ ... ]
                         [b_m1]

Note that the operation is row by column: each element of the row is multiplied into the corresponding element of the column, and then the products are summed.

Example 3.
    (a) [2 3 4] [ 1]  =  [2(1) + 3(-1) + 4(2)]  =  [7]
                [-1]
                [ 2]

    (b) [3 -1 4] [2]  =  [3(2) - 1(6) + 4(3)]  =  [12]
                 [6]
                 [3]

By the product AB, in that order, of the m x p matrix A = [a_ij] and the p x n matrix B = [b_ij] is meant the m x n matrix C = [c_ij], where

    c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_ip b_pj = Σ(k=1 to p) a_ik b_kj    (i = 1, 2, ..., m; j = 1, 2, ..., n)

Think of A as consisting of m rows and B as consisting of n columns. In forming C = AB, each row of A is multiplied once and only once into each column of B. The element c_ij of C is then the product of the ith row of A and the jth column of B.

Example 4.

    [a_11 a_12] [b_11 b_12]     [a_11 b_11 + a_12 b_21    a_11 b_12 + a_12 b_22]
    [a_21 a_22] [b_21 b_22]  =  [a_21 b_11 + a_22 b_21    a_21 b_12 + a_22 b_22]
    [a_31 a_32]                 [a_31 b_11 + a_32 b_21    a_31 b_12 + a_32 b_22]

The product AB is defined, that is, A is conformable to B for multiplication, only when the number of columns of A is equal to the number of rows of B. If A is conformable to B for multiplication (AB is defined), B is not necessarily conformable to A for multiplication (BA may or may not be defined). See Problems 3-4.

Assuming that A, B, C are conformable for the indicated sums and products, we have

    (e) A(B + C) = AB + AC    (first distributive law)
    (f) (A + B)C = AC + BC    (second distributive law)
    (g) A(BC) = (AB)C         (associative law)

However,
    (h) AB ≠ BA, generally,
    (i) AB = 0 does not necessarily imply A = 0 or B = 0,
    (j) AB = AC does not necessarily imply B = C.

See Problems 3-8.
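The row-by-column rule and the warning in (h) can be checked numerically; as before, NumPy is used only for illustration:

```python
import numpy as np

A = np.array([[2, 3, 4]])        # 1 x 3
B = np.array([[1], [-1], [2]])   # 3 x 1

print(A @ B)          # [[7]]: 2(1) + 3(-1) + 4(2), the row-by-column rule of Example 3(a)
print((B @ A).shape)  # (3, 3): here BA is defined, but it is not even of the same order as AB

S = np.array([[1, 2], [3, 4]])
T = np.array([[0, 1], [1, 0]])
print(np.array_equal(S @ T, T @ S))   # False: in general AB != BA
```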
PRODUCTS BY PARTITIONING. Let A = [a_ij] be of order m x p and B = [b_ij] be of order p x n. In forming the product AB, the matrix A is in effect partitioned into m matrices of order 1 x p and B into n matrices of order p x 1. Other partitions may be used; for example, let A and B be partitioned into matrices of indicated orders by drawing in dotted lines, as

    A = [A_11 (m1 x p1) | A_12 (m1 x p2) | A_13 (m1 x p3)]
        [A_21 (m2 x p1) | A_22 (m2 x p2) | A_23 (m2 x p3)]

    B = [B_11 (p1 x n1) | B_12 (p1 x n2)]
        [B_21 (p2 x n1) | B_22 (p2 x n2)]
        [B_31 (p3 x n1) | B_32 (p3 x n2)]

In any such partitioning, it is necessary that the columns of A and the rows of B be partitioned in exactly the same way. Then sums, differences, and products may be formed using the matrices A_11, A_12, ...; B_11, B_12, ..., and

    AB  =  [A_11 B_11 + A_12 B_21 + A_13 B_31    A_11 B_12 + A_12 B_22 + A_13 B_32]  =  [C_11  C_12]
           [A_21 B_11 + A_22 B_21 + A_23 B_31    A_21 B_12 + A_22 B_22 + A_23 B_32]     [C_21  C_22]

Example 5. A numerical product AB is computed by partitioning: with A and B partitioned conformably as

    A = [A_11 | A_12]        and        B = [B_11 | B_12]
        [A_21 | A_22]                       [B_21 | B_22]

we have

    AB  =  [A_11 B_11 + A_12 B_21    A_11 B_12 + A_12 B_22]
           [A_21 B_11 + A_22 B_21    A_21 B_12 + A_22 B_22]

See also Problem 9.

SOLVED PROBLEMS

1. The sums and differences of the given conformable matrices in parts (a)-(d) are formed element by element; in (a), for instance, the element in the first row and first column of A + B is 1 + 3 = 4.

2. Given the 3 x 2 matrices A and B, find the matrix D such that A + B - D = 0. Equating each element of A + B - D to zero shows that D = A + B.

3.  (a) [4 5 6] [ 2]  =  [4(2) + 5(3) + 6(-1)]  =  [17]
                [ 3]
                [-1]

    (b) [ 2] [4 5 6]  =  [ 2(4)   2(5)   2(6)]     [ 8  10  12]
        [ 3]             [ 3(4)   3(5)   3(6)]  =  [12  15  18]
        [-1]             [-1(4)  -1(5)  -1(6)]     [-4  -5  -6]

    (c) [1 2 3] [4  6  9  6]  =  [1(4)+2(0)+3(5)  1(6)+2(7)+3(8)  1(9)+2(10)+3(11)  1(6)+2(7)+3(8)]  =  [19 44 62 44]
                [0  7 10  7]
                [5  8 11  8]

    (d) [2 3 4] [1]     [2(1) + 3(2) + 4(3)]     [20]
        [1 5 6] [2]  =  [1(1) + 5(2) + 6(3)]  =  [29]
                [3]

    (e) [1 2 1] [3 4]     [1(3)+2(1)+1(2)  1(4)+2(5)+1(2)]     [ 7 16]
        [4 0 2] [1 5]  =  [4(3)+0(1)+2(2)  4(4)+0(5)+2(2)]  =  [16 20]
                [2 2]

4. Powers of a square matrix A are computed in the same way: A² = A·A, A³ = A²·A, and so on, each product being formed by the row-by-column rule.

5. Show that:
    (a) Σ(k=1 to 2) a_ik (b_kj + c_kj)  =  Σ(k=1 to 2) a_ik b_kj + Σ(k=1 to 2) a_ik c_kj
    (b) Σ(i=1 to 2) Σ(j=1 to 3) a_ij  =  Σ(j=1 to 3) Σ(i=1 to 2) a_ij
    (c) Σ(k) a_ik [ Σ(h) b_kh c_hj ]  =  Σ(h) [ Σ(k) a_ik b_kh ] c_hj

    (a) Σ a_ik(b_kj + c_kj) = a_i1(b_1j + c_1j) + a_i2(b_2j + c_2j) = (a_i1 b_1j + a_i2 b_2j) + (a_i1 c_1j + a_i2 c_2j) = Σ a_ik b_kj + Σ a_ik c_kj.
    (b) Σ(i) Σ(j) a_ij = (a_11 + a_12 + a_13) + (a_21 + a_22 + a_23) = (a_11 + a_21) + (a_12 + a_22) + (a_13 + a_23) = Σ(j) Σ(i) a_ij. This is simply the statement that, in summing all the elements of a matrix, one may sum first the elements of each row or first the elements of each column.
    (c) Σ(k) a_ik [Σ(h) b_kh c_hj] = Σ(k) Σ(h) a_ik b_kh c_hj = Σ(h) [Σ(k) a_ik b_kh] c_hj.

6. Prove: If A = [a_ij] is of order m x n and if B = [b_ij] and C = [c_ij] are of order n x p, then A(B + C) = AB + AC.
   The elements of the ith row of A are a_i1, a_i2, ..., a_in and the elements of the jth column of B + C are b_1j + c_1j, b_2j + c_2j, ..., b_nj + c_nj. Then the element standing in the ith row and jth column of A(B + C) is a_i1(b_1j + c_1j) + a_i2(b_2j + c_2j) + ... + a_in(b_nj + c_nj) = Σ(k) a_ik b_kj + Σ(k) a_ik c_kj, the sum of the elements standing in the ith row and jth column of AB and of AC.

7. Prove: If A = [a_ij] is of order m x n, if B = [b_ij] is of order n x p, and if C = [c_ij] is of order p x q, then A(BC) = (AB)C.
   The elements of the ith row of A are a_i1, a_i2, ..., a_in and the elements of the jth column of BC are Σ(h=1 to p) b_1h c_hj, Σ(h) b_2h c_hj, ..., Σ(h) b_nh c_hj; hence the element standing in the ith row and jth column of A(BC) is

   a_i1 Σ(h) b_1h c_hj + a_i2 Σ(h) b_2h c_hj + ... + a_in Σ(h) b_nh c_hj  =  Σ(k=1 to n) a_ik [Σ(h=1 to p) b_kh c_hj]
        =  Σ(h=1 to p) [Σ(k=1 to n) a_ik b_kh] c_hj  =  (Σ a_ik b_k1)c_1j + (Σ a_ik b_k2)c_2j + ... + (Σ a_ik b_kp)c_pj

   This is the element standing in the ith row and jth column of (AB)C; hence A(BC) = (AB)C.

8. Assuming A, B, C, D conformable, show in two ways that (A + B)(C + D) = AC + AD + BC + BD.
   Using (e) and then (f): (A + B)(C + D) = (A + B)C + (A + B)D = AC + BC + AD + BD.
   Using (f) and then (e): (A + B)(C + D) = A(C + D) + B(C + D) = AC + AD + BC + BD.

9. The products AB in parts (a), (b), and (c) are computed by partitioning the factors conformably into blocks, as in Example 5, and applying the block formula for AB given above.
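The block formula of this section is easy to confirm numerically. In the sketch below the matrices are generated at random just for the check (they are not the matrices of Example 5 or Problem 9), and NumPy is used for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 5, (3, 4))
B = rng.integers(0, 5, (4, 2))

# Partition the columns of A and the rows of B the same way: 4 = 2 + 2
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :1], B[:2, 1:]
B21, B22 = B[2:, :1], B[2:, 1:]

blocked = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
print(np.array_equal(blocked, A @ B))   # True: block product equals the ordinary product
```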
2 2 3 (b) 1=17=1 2 S S aJ = j=i 1=1 b^hCj^j) ^ 22 3 2 a. (e). Prove: It A = [a^] = AB + AC.a^g. Prove: If i = [aij] is of order then ^(6C) = (iS)C.. Assuming A. A(BC)  (AB)C.. 7. .^hh<^hj^ P + °tn ait^^b^chj +ai2^^b2h%j = J^ *n/i<. a^^ and the elements of the /th column of S +C sum are is Of 62J+ c'2j fenj +%j..^ 222 a. 'iJ' 2 2 P b^j^c^j. if S = [6^] is of order rexp.
we have the three forms 611 O21 612 Vl •Vr.i 1 ! 0^0 1 2 3 34 4 15 5 1 2 4 3 1 2 3 1 1 3 1 2 1 ! 7 8 5 3 1 1 2 1 3 4 9 1 2 1 1 5 6 6 7 6 7" 2 1 1 (c) 1 2 1 1 4 9 8 6 7 2 1 5 8. and the transformation '^3 2 The result of applying the transformation is the set of three forms 'xi 022 "11 "2 Os 1 %2 "^2 ^^3 pri ii2 ^2 1 r 1 /. there results a set of with matrix A is subjected to a linear transm linear forms with matrix C = AB . ((^2 1 C^l 1 ^2 2 ^2 ^12 "*" X3 = ("31^11 + ^32^21) Zl + 1^12 + "32 ^22) *i %1 "21 ^3 1 012 "22 r Using matrix notation. (a) 1 1 2 3 _3 + 2 [3 1 2] + 6 2 3 4 6 = 6 9 3 3 4 7 3 9 1 2_ '1 10 2 ' 10 1 0' '1 10 1 1 10 1 o" 1 [0] 2 :] 1 3 10 4 i oil 1 1 (b) ! 10 3 10 5 6 :o] [0 1 [0: [0] 3] °] 1 1 j2 1 10 10 [0] 3 1 1 [0 "][o 2 3 12 10 18 "1 0' 1 ! 1 1 2 1 3 4 5 6 7 6 5 5 6 1 2 3 4' 3 4 5 5 6_ 1 1 1 2 4. zg). j^) into (z^. Hiyi+ os yi + "1272 11^1 . when a set of m linear forms formation of the variables with matrix B in n variables . 2 1 8 5 2 1 — _0 1 1 1 \  +i ^1 8 7 1 7 1 1 1 6 4 [l][8 7] [l][6 5 4] 9 11' "13" [1][1] '7 10 31 3 5 7 9 11 13 13 16 39" 19 41" 4 7 20 13 33 22 13 "35 24 .13. 1] MATRICES "l o' 1 1 1 l' "l 10 1 1 1 1 1 'l o' 1 1 3 1 2 "4 1 2" 9. 20 13 8 33 22 13 7 10 35 24 13 6 13 16 19 37 26 13 5 39 41 28 30 13 13 1 [6 [l] 4 %i 10. The result of applying the transformation to the given forms is the set of forms %i %<2 = ~ (oLi 6^j + "*" aj^2 621) Zi 1) ^1 + "*" (on ii2 + (^2 ("3 1 '^12^22) ^2 ^2 2 ^22) ^2 Z2 . ^'.J1 0]^2^2 Let { %2 %3 = a^iyi + 0^272 be three linear forms in y^ and yj ^nd let 1 be a O21 Zi + h2 % 2 72 new coordinates linear transformation of the coordinates (yi.CHAP.13 37 26 13 5 31 28 13 4] 30 . 2 2 622 1 Thus.
]• [rM'i "] [0 °].1 B = 4 _2 5 3_ and C = 3 1 1_ J 2 3 I 4 (a) 1 f 7 4j . p = q. Explain why. and r would the matrices be conformable for finding the products and what is the order of each" (a) ABC (b)ACB. AB = AC 251 does not necessarily imply B = 1 1 1 3 . m x p (c) r = n. 1 SUPPLEMENTARY PROBLEMS "l 2 3' 2 . in general. A.["0 3 i is a matrix of the set. under what conditions on p. Given the matrices A of order mx„. Given Ans. 3 2 . A = I. compute AB = Q and BA = 22 11 2 1 2 1 Hence. and C of order rx^. Given A = 2 4 3 3 1 1 B = 2 111 1212 C 1 14 10 and C 2 3 112 2 1 1 show that AB = AC. (a) p = r. 1 Verify that D =BA = (AB). mxq (b) r = n = g.]•[:. 14. I. show that A(B + C) + B^ AB + AC and (A+B)C = AC +BC. ABi^BA \ 3 13. 3 1 2 2I 4 . = use the results of (a) to show ACB CS^. Given A = generally. 15. (c)A(B + C)? Ans. B of order nxp.B^ ^ (AB)(A+B). A'^  B"^ = (A B)(A + B). Thus. 1 5 2 2 4 4 3 5 5 and C 3 4 = 13 AC = A. 4p^3. where 2 i = 1. 2 3 5 5 17. derive a formula for the positive integral powers of A . 4pll. 51 Compute: A + B = 9 _3 2 AC = _ 53 t2j 1 2 4 (6) 6 Compute: 24 10 2 (c) (d) 2 4 2 0S = Verify: A + {B~C) = (A+B)C. that Find the matrix D such 1 A+D^B.. m x g . [? 0] 20. where /= n [0 ° ij 19. that 13 1 2 3 (a) (b) show that AB BA CA = C. 16. ip + 2. A according as n =4p. 11 1 1 2 1 2 3 6 12 6 12. ] 2I 2 3_ Given A = 5 . Using the matrices of Problem 11. Given A = 14 1 13 B = 0. Given A = 2 3 B = and C ^["12 [2 3 4"! ij 1 2 1 4 02 = show that (AB)C = A(BC). (A±Bf ^ A^ ± 2AB and A^ . 2 . (A ± Bf = A'^ + b' 18.' MATRICES [CHAP. Show [° that the product of any two or more '"'—[. 3 1 and B 2 1 4 2 6 3 . 9.
for all If a. B. that the parallelism of lines. 1 1 2 1 0^ 1 1 I o" 1 (b) A and 1 B 1 Ans. (i ) congruency) between mathematical entities possessing the following properties: b.. C are matrices such that AC = CA and BC = CB.CHAP. Prove: (a) trace (A + B) = trace A + trace B. (6) trace (kA) = k trace A. Determinative Reflexive Either a is in the relation to 6 or a is not in the relation to a is in the relation to a. 28. 2. (II) (iii) Symmetric Transitive a is In the relation to b then b is in the relation to a. then (AB ± BA)C = C(AB ± BA). Use this procedure to check the products formed in Problems 12 and 26. (iv) If a is in the relation to b and b is in the relation to c then a is in the relation to c.4. 13. Show that the element in the ith row of A Pi is the sum of the elements lying in the sth row of AB. let fij = S b j)^. 25..3y. show . . = 2 1 1 1 2 2n+r. and equality of matrices are equivalence Show that perpendicularity of lines is not an equivalence relation.. Show that conform ability for addition of matrices is an equivalence relation while con form ability for multi plication is not. / = 1.622J' 24. p. 722"] _ Y^l ^. If 4 = [aiji and B = [fe^J are of order m = x n and if C = [c^j] is of order nx. h V. Prove: If . Compute AB. that is. A. 27. 1] MATRICES 21. given: 10 (a) 11 1 j 10 and 10 j 2 1 A 1 B 1 " 1 1 1 Ans. Denote by /S sum of Pi 182 the elements of the /th row of B.] . = 1.n).^^^l^ y. where (i 1. A relation (such as parallelism. 2 m.. similarity of triangles. that (/4+B)C =^C the + BC. is called an equivalence relation. Show relations. . 2 p. 2 m 4 1 12 1 10 ' 10 '2 1 (c) A 1 and 1 I B 1 1 i i Ans 2 2 10 2 2 2 22. Let /4 = [a^] and B = [fej.\ [2 1 3 [2 3 2 2 1 3 r zi + [221 .
Oi^ = Ogs = ci„„ = k. as the reader may readily show.. A lar.. • ' ^nn ) See Problem If 1.p = II . Clearly. Thus "12 tip o "13 tZo o U "2n is 03S upper triangular and is lower triangular. 2 c 3I (5 . The matrix D Ogj which is both upper and lower triangular.. is call . ed a diagonal matrix. then l2A = A.. the matrix is called the identity matrix and is denoted by For example t"] When /„+/„ + . 022. and 1 1 the order is evident or immaterial. in addition.. D '1 is called a scalar matrix. 10 . a. k = l. Identity map) and • f [1 . in the diagonal matrix D above. an identity matrix will be denoted by /. 033. to p terms = p /„ = diag (p. to p factors = /.L hAh A. It will frequently be written as D = diag(0]^i. square matrix A whose elements a^: = for i>j is called upper triangu a square matrix A whose elements "ii a^= for i<j is called lower triangular.chapter 2 Some Types of Matrices THE IDENTITY MATRIX.p. /„. if..
The inverse of the product of verse order of these inverses. least positive integer for which A'^ = ther A is said to be nilpotent of index See Problems THE INVERSE OF A MATRIX. that if shall find later (Chapter?) that not every square matrix has an inverse. Example 1. then A is callled idempotent See Problems 34. is called nilpotent.fe+i where is a positive integer. Since 1 3 1 = ( 1 each matrix in the product is the inverse of 12 the other. then A is said to be of period k. matrix A such that A^ = I is called involutory. We can show See Problem 7. is a positive integer. A has an inverse then that inverse is unique. A matrix A for which 4=0. isinvol An involutory matrix is its own inve r$e. The columns of matrix of order an mxn matrix A is called the trahspose of 1 nxm obtained by interchanging the rows and A and is denoted by A' (A transpose). 1. 4 1 1 We here. we have immediately (A'Y = A and (kAy = kA' In Problems 10 and II. matrix. A. If A A for which A . is a simple matter to show that A is any rasquare 2. aP where p 0. is the product in re See Problem 8. If A and B are square matrices such that AB = BA = I. (A + BY = A'+ S' . for example.. 11. it If A and B are square matrices such that It AB = BA.e. If A and B are square matrices of the same order with inverses A^^ and B~^ respectively. then (AB)''^ = B'^A~^. i = 1. kti th^ matrices A: A and 6 are said to anticommute. THE TRANSPOSE OF A MATRIX. however. i. CHAP. See Problem 9. commutes with itself and also with L See Problem If A and B are such matrix that AB = BA. so that A^ = A. tijvo matrices. and (b) if A: is a scalar. An identity matrix. The matrix B also has A as its inverse and we may write A = B~^ 1 2 3 3 6 2 3 1 1 1 ) o' /. we prove: The transpose of the sum of two matrices is the sum of their transposes. then A and B are if called commutative or are said to commute. If p is the 56. 
Note that the element aIr • in the ith row 7 3 and /th column of A stands If in the /th row and ith column of A\ A' and ZTare transposes respectively qf (a) A and B. A utory. then B is called the inverse of A and we write B = A'^ (B equals A inverse). 2] SOME TYPES OF MATRICES 11 SPECIAL SQUARE MATRICES. that is. having inverses. is called periodic. k is the least positive integer e for which If A A. For 4' example. the transpose of A ^[e] IS i' = 2 5 6.
Zg = (acbd) + (ad+bc)i z^^ = (acbd).= a. A matrix 1 A = 2 \ a^j] is square matrix A such that A'= A is called symmetric. 3 4 With only minor changes in Problem 13. that is.^ . When i is a matrix having complex numbers as elements. Clearly. THE CONJUGATE OF A MATRIX. Problem IV. See Problems 1415. 3 45 35 6 2 In is symmetric and so also is kA for any scalar k. the conjugate of the sum two complex numbers and sum (ii) z^. z = a+ hi is The complex numbers a+bi and abi are called conjugates. a square For example. SYMMETRIC MATRICES. then A + A' is A is symmetric. If (i) z. the diagonal elements are " 2 2 3" zeroes.for all values of i and . that is. From Theorems IV and V follows VI. each the other. then Z2 = z^ = a bi = a+ bi. A 4 is skewsymmetric and so also is kA for any scalar k. symmetric provided o^. If 13. we prove an rasquare matrix. /. a square matrix A is skewsymmetric provided a^. the conjugate of the product of two complex numbers is the product of their conjugates. the matrix obtained from A by replacing each element by its conjugate is called the conjugate of /4 and is denoted by A (A conjugate). we have readily = A and (X4) = ~kA Using (i) and (ii) above. called a complex number. of the product of two matrices = B'A' is the product in reverse order of their transposes.= Uj^ for all values of i and /. Every square matrix A can be written as the sum of a symmetric matrix B = ^(A+A') and a skewsymmetric matrix C = ^(AA').. then z^ + Zg = (a+c) + (b+d)i and of + z^ = (a+c) . we may prove . 2 and in. then A  A' is skewsymmetric.(ad+bc)i = (abi)(cdi) = F^i^. that is. (AB)' See Problems 1012.e. the conjugate of the conjugate of a complex number z is z itself. being the conjugate of If Let a and b be real numbers and let i = V1 then. we can prove V. If A is any resquare matrix. its conjugate is denoted by z = a+ hi. If z = a+ bi.. For example. l + 3 2i i I 21 3 2 + i Example 2. . When A then A = 3i 23i If A and B are the conjugates of the matrices (c) (. 
z^ = a + bi and Z2 = z^ = a~ bi.(b+d)i = is the (abi) + (cdi) of their conjugates.12 SOME TYPES OF MATRICES [CHAP.i=a+bi and zL Z2 = c+ di. Thus. The transpose i. A square matrix A such that A'= A is called skewsymmetric.4) A and B and itk (c?) is any scalar. Thus.
. The conjugate of the sum of two matrices is the sum of their conjugates. diag(^i. the diagonal elements of an 1i 3 2 i Example 4. Is kA skewHermitian it k is any real number? any complex number? any pure imaginary? By making X. of the diagonal matrix is called the direct sum of the Ai . (A + B) = A + B. their conjugates. T^e transpose (Ay = (A').. Thus. ^ is skew Hermitian provided Clearly. As) A. A^ As be square matrices of respective orders m^. The matrix A = 1 +j 2 is Hermitian. i. 2 3 of the conjugate of A is equal to the conjugate of the transpose of Example 3.. 1 ^ = all [0^^] values of such that i and /. i. i a^^ = o^ values of and li 3i i 2 i Examples. of The transpose We have IX. .ms.. Thus.e. /I Clearly. The conjugat e of the product of two matrices (AB) = AB. the diagonal elements of a skewHermitian matrix are either zeroes or pure imaginaries. A is denoted by A^(A conjugate transpose).^2 . DIRECT SUM. Every square matrix A with complex elements can be written as the sum of an Hermitian matrix B={(A + A') and a skewHermitian matrix C=i(AA'). 2] SOME TYPES OF MATRICES 13 _VII. A. A is square matrix Hermitian provided a^j = uj^ for Hermitian matrix are real numbers. of is the product. If minor changes in Problem 13. Vin. I Is kA Hermitian square matrix if k is any real number? any complex number? [0^^] A ^ = such that for all 1= A i is called skewHermitian.. The matrix A = 1t 2 is skewHermitian. It is sometimes written as A*. A' = A is called Herraitian.. /..2i i (AY = 2+ 3i while 1 + i 2i 3 A' 2 and [l2i (A') 3 2 1 =  (AY Zi i + 3iJ HERMITIAN MATRICES. CHAP. in the same order... From Theorem X follows XI. is we may prove A an rasquare matrix then 4 + Z is Hermitian and AT is skewHermitian. Prom Example 1 .e. ization Let A^. . The general ^1 A^ . i.e..
This follows from \" ^1 P "^l = '"^ + H = ^ '^l [" *1 2 3. where Ai and Behave then AB ^s^s) SOLVED PROBLEMS °ii Or.A^. a^^ a„n) and any mxn matrix B is obtained by multiplying the first row of B by %i. . 2 4 4 is idempotent. c. A^B^ ^s). same order for (s = 1. 2 and S = diag(Si. ^5 B:} and An 2 4 1 2 2 12 The direct sum of A^. the second row of B by a^^. Use BAB to show that B is idempotent.and so on.14 SOME TYPES OF MATRICES [CHAP. S2 = diag(^iSj^. the XII. J. Let /4i=[2].. Show that if AB = = A and BA = AA = B. Show that the matrices ° and K P^+*'^ commute for all values of o.. 2 3_ 1 2 3_ 2 3 4. illustrates . /4g) = 121 2 3 4 12 Problem 9(6).4 and 4 is idempotent.4 = (AB)A A^ and ABA AB A . then 4^ = . "12 "22 'm °11^1 22 21 1. = . then i and = .Aq 3 is 4 diag(^3^. 2 1 2 1 3 Examples. 6. /42.Ag) s).4(34) = B are idempotent. Show that 13 1 2 3_ 2 2 4' 4 ' 2 2 4" 4 2 3 4 4 / = 13 _ 1 13 . Since °2i — — °2n OooOoo "^22 ''2n the product AB of ''ml "7)12 "tn ''WB "Ml ''wm"m2 ^mm"nn an msquare diagonal matrix A = diag(oii.. Chapter 1. 2. If A = Aia.A^.45.g {A^.
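The rule derived above — for A = diag(a11, ..., amm) and any m×n matrix B, the product AB is obtained by multiplying the i-th row of B by aii — can be confirmed against an explicit matrix product. The numbers below are illustrative only.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def diag(entries):
    m = len(entries)
    return [[entries[i] if i == j else 0 for j in range(m)] for i in range(m)]

a = [2, -1, 5]                       # diagonal elements of A
B = [[1, 2], [3, 4], [5, 6]]         # any 3 x n matrix

# the rule: scale row i of B by a[i]
scaled_rows = [[ai * x for x in row] for ai, row in zip(a, B)]

assert matmul(diag(a), B) == scaled_rows
```

The analogous rule for BA (Problem 19 of the supplementary exercises) scales the columns of B instead.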
CHAP. 2] SOME TYPES OF MATRICES 15

5. Show that A = [1  1  3; 5  2  6; −2  −1  −3] is nilpotent of order 3.

   A² = A·A = [0  0  0; 3  3  9; −1  −1  −3]   and   A³ = A²·A = 0.

6. If A is nilpotent of index 2, show that A(I ± A)ⁿ = A for n any positive integer.

   Since A² = 0, every higher power of A is also zero; hence (I ± A)ⁿ = I ± nA and A(I ± A)ⁿ = A(I ± nA) = A ± nA² = A.

7. Let A, B, C be square matrices such that AB = I and CA = I. Show that B = C, so that the inverse of A is unique. (What is B⁻¹?)

   B = IB = (CA)B = C(AB) = CI = C. Thus, B = C = A⁻¹ and the inverse is unique.

8. Prove: (AB)⁻¹ = B⁻¹A⁻¹.

   By definition, (AB)⁻¹(AB) = (AB)(AB)⁻¹ = I. Now (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹B = I and (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AA⁻¹ = I. By Problem 7, the inverse of AB is unique; hence, (AB)⁻¹ = B⁻¹A⁻¹.

9. Prove: A matrix A is involutory if and only if (I − A)(I + A) = 0.

   Suppose (I − A)(I + A) = I − A² = 0; then A² = I and A is involutory. Suppose A is involutory; then A² = I and (I − A)(I + A) = I − A² = I − I = 0.

10. Prove: (A + B)' = A' + B'.

   Let A = [aij] and B = [bij]. The elements standing in the jth row and ith column of A', B', and (A + B)' are respectively aji, bji, and aji + bji; hence, (A + B)' = A' + B'.

11. Prove: (AB)' = B'A'.

   Let A = [aij] be of order m×n and B = [bij] be of order n×p; then C = AB = [cij] is of order m×p, where cij = Σ(k=1 to n) aik·bkj. The element standing in the ith row and jth column of AB is cij, and this is also the element standing in the jth row and ith column of (AB)'.
   The elements of the jth row of B' are b1j, b2j, ..., bnj and the elements of the ith column of A' are ai1, ai2, ..., ain. Then the element standing in the jth row and ith column of B'A' is Σ(k=1 to n) bkj·aik = Σ(k=1 to n) aik·bkj = cij. Thus, (AB)' = B'A'.

12. Prove: (ABC)' = C'B'A'.

   Write ABC = (AB)C. Then, by Problem 11, (ABC)' = [(AB)C]' = C'(AB)' = C'B'A'.
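The transpose rules proved above, (AB)' = B'A' and (ABC)' = C'B'A', are easy to confirm on a numerical example (illustrative matrices, not the book's):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 0], [3, -1, 4]]       # 2 x 3
B = [[2, 1], [0, 5], [-3, 2]]     # 3 x 2
C = [[1, 4], [2, -1]]             # 2 x 2

# (AB)' = B'A'  -- note the reversal of order
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))

# (ABC)' = C'B'A'
assert transpose(matmul(matmul(A, B), C)) == \
       matmul(matmul(transpose(C), transpose(B)), transpose(A))
```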
13. Show that if A = [aij] is n-square, then B = [bij] = A + A' is symmetric.

   First Proof. The element in the ith row and jth column of A + A' is bij = aij + aji; the element in the jth row and ith column is bji = aji + aij = bij. Hence B is symmetric.

   Second Proof. By Problem 10, (A + A')' = A' + (A')' = A' + A = A + A', and A + A' is symmetric.

14. Prove: If the n-square matrix A is symmetric (skew-symmetric) and P is of order n×m, then B = P'AP is symmetric (skew-symmetric).

   If A is symmetric then (see Problem 12) B' = (P'AP)' = P'A'(P')' = P'A'P = P'AP = B, and B is symmetric.
   If A is skew-symmetric then B' = (P'AP)' = P'A'P = −P'AP = −B, and B is skew-symmetric.

15. Prove: If A and B are n-square symmetric matrices, then AB is symmetric if and only if A and B commute.

   Suppose A and B commute so that AB = BA. Then (AB)' = B'A' = BA = AB, and AB is symmetric.
   Suppose AB is symmetric so that (AB)' = AB. Then AB = (AB)' = B'A' = BA, and the matrices A and B commute.

16. Prove: If A and B are n-square matrices, then A and B commute if and only if A − kI and B − kI commute for every scalar k.

   Suppose A and B commute; then AB = BA and

        (A − kI)(B − kI)  =  AB − k(A + B) + k²I  =  BA − k(A + B) + k²I  =  (B − kI)(A − kI)

   Thus, A − kI and B − kI commute.
   Suppose A − kI and B − kI commute; then

        (A − kI)(B − kI)  =  AB − k(A + B) + k²I  =  (B − kI)(A − kI)  =  BA − k(A + B) + k²I

   so that AB = BA, and A and B commute.
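The criterion above — for symmetric A and B, the product AB is symmetric exactly when A and B commute — can be checked both ways numerically. Here B is chosen as a polynomial in A (namely A + 2I), which automatically commutes with A; all matrices are illustrative.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2], [2, 3]]
B = [[3, 2], [2, 5]]          # = A + 2I: symmetric, and commutes with A

assert matmul(A, B) == matmul(B, A)              # A and B commute ...
assert transpose(matmul(A, B)) == matmul(A, B)   # ... so AB is symmetric

# a symmetric matrix that does NOT commute with A gives a non-symmetric product
D = [[0, 1], [1, 0]]
assert matmul(A, D) != matmul(D, A)
assert transpose(matmul(A, D)) != matmul(A, D)
```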
a22 where a. Show that each of anticommutes with the others. is idempotent. 19. (a) Show that A = 14 1 13 and 5 B = i _3 _5 3 are idempotent. 21 SOME TYPES OF MATRICES 17 SUPPLEMENTARY PROBLEMS 17.k k) A. 1 ~1 1 _l 1 4_ 2/3 "112' (b) 1/3' A = 2 3 1 and B = 3/5 2/5 1/5 commute. 31. 3 2 1 6 2 9 Show that (a) A = 3 2 and B = 3 commute.A  5! = 0. 2 03 1 3 4 4 is nilpotent. Derive a rule for forming the product BA of an mxn matrix S and ^ diag(aii. of period 2. 2 2 (b) If 2 1 3 2 1 . . Show that 13 1 3 4 '12 27. show that A^ where p and q are positive integers.g(k. 2 1 o" 3 1 Show that _ 1 1 = 1 1 1 1 = 1 1 = /.4 is row order of A a'^ a'^ = a'^ resquare. If . Show that the scalar matrix with diagonal element of / is the A: can be written as Wand that kA = klA =Aia.94 = 0. c are the nsquare scalar matrices. 4 7/15 1/5 1/15 Show that A anticommute and (A+ Bf = A^ + B'^ 29. = 18. Show that A 3 9 is periodic. 22. Show that the product of two upper (lower) triangular matrices is upper (lower) triangular. (a) diag(a. A 1 1 1 2 show that 4 .6.2A . 26. 12 28. where the order 20. 1 1 1 2 6 2 25. show that the converse of Problem 4 does not hold. (a) Find all matrices which (b) Find all matrices a^^). 30. but 4^  2/1  9/ / 0. If A show 2 that B = IA is idempotent and that AB = BA = 0.a22 a„„). 3).fe. 5 235 21. Hint. 3 4J [1 5 (b) Using A and B. 1 1 1" 24. show that . 2. Prove: The only matrices which commute with every nsquare matrix commute with diag(l.4 — 4. (a) If A 2 2 1.CHAP. Ans. c) are arbitrary. 1 2 1 23. which commute with diag(aii. See Problem 1.
A' and If A and B commute so also do ^"^ and B' . 43. 41. Hint. ^4™ and S"^ commute A and 5 commute. Prove: 48. Show that the inverse of a diagonal matrix A. 44. 23J (c) (<f ) 1 iB is Hermitian.3 1 i i is Hermitian. 45. Show: (a) A = 2. (6) AirA'. 18 SOME TYPES OF MATRICES [CHAP. Thus. 1 (b) (A + B) I = A + B i . Let A 10 1 I2 /j. Prove: (a) and C 47. Prove: (a) (A^)'^ = A. 40. Prove: (a)(A)=A.. 38. the 1 1 4 4 4 3 3" 35. B\ if and A' and B" Show that for m and n positive integers. (c) (^^)' = (/l')^ for p a positive integer. Prove: (a)(A')'=A. Every skewHermitian matrix A can be written as A = B + iC where B is real and skewsymmetric (6) A' A is real if and only if B and C anticommute. 2 1 2 3 7 I "321 is the inverse of 32. A is Hermitian and B is skewHermitian. If A is nsquare. (A'^y forp a positive integer. all of whose diagonal elements diagonal matrix whose diagonal elements are the inverses of those of inverse of /„ is /„ A and in the same order. J i 1 i + 2i j 2 1 3 J (b) B = 1 + is skewHermitian. 46. Show that A = 43 33 and B 1 01 4 4 43J are involutory. 2311 33. 1 J + 2 2 + 3 42. 2 4 5 3 is the inverse of (b) 10 2 10 4 2 10 2100 02 _ 8 1 10 1. Set 1 1 [3:][:^E3 to find the inverse of [3 4] ^"' [3/2 1/2] are different from zero. show that (a) AA' and A' A are symmetric. /I21 Show that A /s a c b —'2 d 01 (b) (kA)" = kA'. Write ABC=iAB)C. by partitioning. 37. (c) (A'^Y^ = 39. Show that every real symmetric matrix is Hermitian. Prove: Every Hermitian matrix A can be written as B + iC where B is real and symmetric and C is real and skewsymmetric. is real and symmetric. (d) {AB) = A B. (c)(kA)=kA. Prove: If H is Hermitian and A is any conformable matrix then (A)' HA is Hermitian. AA'. . Prove: (ABCy C'^B^A'^. Show that (a) 2 5 4 2 " 1 1 1. is a 34. 10 36. (b) (kA)^ = jA'^. and A'A are Hermitian.
If . b. that the matrices where A and B are nonzero nsquare matrices. Prove: If A and B (c) idempotent. 6 g are scalars and p is a positive integer. (b) dia. 2. ^2 Ans. 3). Let A and B he nsquare matrices and let ri. Show (6) /I that the nsquare matrix A will not have an inverse when (a) A has a row (column) of zero elements or has two identical rows (columns) or (c) ^ has arow(column)whichis the sum of two other rows(columns). BA = B then (a) B'A'= A' and A'B"= B\ (b) A" and B' sue 56.m^. Prove: If ^ and B A are nsquare skewsymmetric matrices then AB is symmetric if and only if A and S commute. where r and s are scalars. m^ with arbitrary elements.B^ B^) where ^^ and B^ are of the same order. S2 be scalars such that ri4+siB. 63. Prove that Ci = 64. Find all matrices which commute with (a) diag(i. 57. /I2S2 A^B^) . where A and B are 2square matrices with arbitrary elements and b. 2). 53. 52. dia. 2. show that k(.+«/ where a. Prove: ^ is real and skewsymmetric or if ^ is complex and skewHermitian then ±iA are Hermitian..A^ A^ are scalar matrices of respective orders '^s) mj^.A2 and B = di&giB^. 59. 66. si. rg.g(B^.. (6) diag(l. If A and B are nsquare matrices and A has an inverse. If AB = 0. A^) Show 61. 55. A s). 51. are idempotent and j(I+A) ^(IA) = 0. 2] SOME TYPES OF MATRICES 19 n A A 49.B^ B^) where Si. then A and B are called divisors of zero. If A^. 60.m^ m^. (J = 1. are such that AB = A and ^ = B = / if ^ has an inverse.CHAP.. S2 85 are of respective orders m^.g(A. 65..g(A. Prove: If A is symmetric or skewsymmetric then AA'= A'A / are symmetric. Prove: If is nsquare and B = rA+sI. (b) (A)^ = A\ of (c) (r)^=(^V (A''') Hint. Show that the theorem of Problem 52 can be stated: Every square matrix A can be written as A =B+iC where B and C are Hermitian.c).. 1 1 A 1 nA . = diae(Ai. 62. find all matrices which commute with diag(^i. Ans. If the (a) nsquare matrix A has an inverse A show: (A Y= (A') . (a) Prom the transpose AA~^ = I. 
Prove: Every square matrix A can If bfe written as /I = B +C where B is Hermitian and C is skewHe rmitian. Prove: If 4 is symmetric so also is a4 +6/1^ +.^2 + S2 = ^s + Bs) diag(^iBi. C2 = r^A+SQB commute if and only if A and B commute..B) i. 2. If^ is involutory. AB trace /liB^ + trace /I2S2 + + trace A^B^. risj 7^ rssi. c are scalars. obtain as the inverse of /!' 58. 54. show that = (A+B)A~'^(AB) (AB)A''^(A+B) .I+A) and kOA) . A and B of Problem 21 are divisors of zero.ra Show A raA (a) (6) A 2n('»l)A ni = A nX A" \ A A and 50. 1. (a) dia. show = that (a) (b) (c) A+B AB = trace diag(^i+Si. then A and B commute.
Chapter 3

Determinant of a Square Matrix

PERMUTATIONS. The 3! = 6 permutations of the integers 1,2,3 taken together are

(3.1)    123   132   213   231   312   321

and eight of the 4! = 24 permutations of the integers 1,2,3,4 taken together are

(3.2)    1234   1324   2134   2314   3124   3214   4123   4213

If in a given permutation a larger integer precedes a smaller one, we say that there is an inversion. If in a given permutation the number of inversions is even (odd), the permutation is called even (odd). For example, in (3.1) the permutation 123 is even since there is no inversion, the permutation 132 is odd since in it 3 precedes 2, and the permutation 312 is even since in it 3 precedes 1 and 3 precedes 2. In (3.2) the permutation 4213 is odd since in it 4 precedes 2, 4 precedes 1, 4 precedes 3, and 2 precedes 1.

THE DETERMINANT OF A SQUARE MATRIX. Consider the n-square matrix

(3.3)    A  =  [a11  a12  a13 ... a1n;  a21  a22  a23 ... a2n;  ...;  an1  an2  an3 ... ann]

and a product

(3.4)    a1j1 · a2j2 · a3j3 ··· anjn

of n of its elements, selected so that one and only one element comes from any row and one and only one element comes from any column. In (3.4), for convenience, the factors have been arranged so that the sequence of first subscripts is the natural order 1,2,...,n; the sequence j1, j2, ..., jn of second subscripts is then some one of the n! permutations of the integers 1,2,...,n. (Facility will be gained if the reader will parallel the work of this section beginning with a product arranged so that the sequence of second subscripts is in natural order.)

For a given permutation j1, j2, ..., jn of the second subscripts, define ε(j1 j2 ... jn) = +1 or −1 according as the permutation is even or odd, and form the signed product

(3.5)    ε(j1 j2 ... jn) · a1j1 · a2j2 ··· anjn

By the determinant of A, denoted by |A|, is meant the sum of all the n! different signed products of the form (3.5), called terms of |A|, which can be formed from the elements of A; thus

(3.6)    |A|  =  Σ ε(j1 j2 ... jn) · a1j1 · a2j2 ··· anjn

where the summation extends over the n! permutations j1 j2 ... jn of the integers 1,2,...,n. The determinant of a square matrix of order n is called a determinant of order n.
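The definition of |A| as a signed sum over permutations can be executed literally for small n. The sketch below (plain Python; the 3-square matrix is an arbitrary illustration) counts inversions to get the sign of each permutation, sums the n! signed products, and compares the result with the order-three expansion.

```python
from itertools import permutations

def sign(p):
    # +1 for an even permutation, -1 for an odd one, by counting inversions
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]       # one element from each row and column
        total += term
    return total

A = [[2, 3, 5], [1, 0, 2], [3, 4, 6]]
# expansion along the first row: 2(0*6-2*4) - 3(1*6-2*3) + 5(1*4-0*3) = 4
assert det(A) == 4
```

The n! terms make this definition impractical for large n; the row reductions of the next sections are the working method.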
DETERMINANTS OF ORDER TWO AND THREE. From (3.6) we have for n = 2 and n = 3,

(3.7)    |a11  a12; a21  a22|  =  a11·a22 − a12·a21

and

(3.8)    |a11  a12  a13; a21  a22  a23; a31  a32  a33|
             =  a11·a22·a33 + a12·a23·a31 + a13·a21·a32 − a11·a23·a32 − a12·a21·a33 − a13·a22·a31
             =  a11(a22·a33 − a23·a32) − a12(a21·a33 − a23·a31) + a13(a21·a32 − a22·a31)
             =  a11·|a22 a23; a32 a33| − a12·|a21 a23; a31 a33| + a13·|a21 a22; a31 a32|

Example 1. |2  3; 1  4| = 2(4) − 3(1) = 5; determinants of orders two and three are evaluated in the same way directly from (3.7) and (3.8).

See Problem 1.

PROPERTIES OF DETERMINANTS. Throughout this section, A is the square matrix whose determinant |A| is given by (3.6).

Suppose that every element of the sth row (every element of the jth column) of A is zero. Since every term of (3.6) contains one element from this row (column), every term in the sum is zero and we have

I. If every element of a row (column) of a square matrix A is zero, then |A| = 0.

Consider the transpose A' of A. It can be seen readily that every term of (3.6) can be obtained from A' by choosing properly the factors in order from the first, second, ... columns. Thus,

II. If A is a square matrix, then |A'| = |A|; that is, for every theorem concerning the rows of a determinant there is a corresponding theorem concerning the columns, and vice versa.

Denote by B the matrix obtained by multiplying each of the elements of the ith row of A by a scalar k. Since each term in the expansion of |B| contains one and only one element from its ith row, each term of |B| is k times the corresponding term of |A|; hence |B| = k|A|. Thus,

III. If every element of a row (column) of a determinant |A| is multiplied by a scalar k, the determinant is multiplied by k; conversely, if every element of a row (column) of |A| has k as a factor, then k may be factored from |A|.

Let B denote the matrix obtained from A by interchanging its ith and (i+1)st rows. Each product in (3.6) of |A| is, except possibly for sign, a term of |B|. In counting the inversions in the subscripts of any such term as a term of |B|, i+1 before i in the row subscripts is an inversion; thus each product of (3.6) with its sign changed is a term of |B|, and |B| = −|A|. Hence,

IV. If B is obtained from A by interchanging any two adjacent rows (columns), then |B| = −|A|.

As a consequence of Theorem IV, we have

V. If B is obtained from A by interchanging any two of its rows (columns), then |B| = −|A|.

VI. If B is obtained from A by carrying its ith row (column) over p rows (columns), then |B| = (−1)^p |A|.

VII. If two rows (columns) of A are identical, then |A| = 0.

Suppose that each element of the ith row of A is expressed as a binomial aij = bij + cij, (j = 1,2,...,n). Then |A| equals the sum of two determinants whose ith rows are [bi1  bi2 ... bin] and [ci1  ci2 ... cin] respectively and whose remaining rows are those of A. In general,

VIII. If every element of the ith row (column) of A is the sum of p terms, then |A| can be expressed as the sum of p determinants. The elements in the ith rows (columns) of these p determinants are respectively the first, second, ..., pth terms of the sums, and all other rows (columns) are those of A.

The most useful theorem is

IX. If B is obtained from A by adding to the elements of its ith row (column) a scalar multiple of the corresponding elements of another row (column), then |B| = |A|.

For example,

    |a11  a12 + k·a11  a13; a21  a22 + k·a21  a23; a31  a32 + k·a31  a33|  =  |A|

See Problems 2-7.

FIRST MINORS AND COFACTORS. Let A be the n-square matrix (3.3) whose determinant |A| is given by (3.6). When from A the elements of its ith row and jth column are removed, the determinant of the remaining (n−1)-square matrix is called a first minor of A or of |A| and is denoted by |Mij|. More frequently, it is called the minor of aij. The signed minor, (−1)^(i+j) |Mij|, is called the cofactor of aij and is denoted by αij.
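Theorems V and IX above are the workhorses of practical evaluation, and both are easy to confirm numerically. The matrix below is an arbitrary illustration.

```python
def det3(A):
    # order-three expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3], [2, 5, 7], [4, 1, 0]]
d = det3(A)

k = 5
B = [A[0], [x + k * y for x, y in zip(A[1], A[0])], A[2]]   # row 2 += 5 * row 1
assert det3(B) == d                                          # Theorem IX: unchanged

C = [A[1], A[0], A[2]]                                       # rows 1 and 2 interchanged
assert det3(C) == -d                                         # Theorem V: sign flips
```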
Example 2. For the matrix A = [a11  a12  a13; a21  a22  a23; a31  a32  a33]:

    the minor of a11 is |M11| = |a22 a23; a32 a33| and the cofactor of a11 is α11 = (−1)^(1+1)|M11| = +|M11|;
    the minor of a12 is |M12| = |a21 a23; a31 a33| and α12 = (−1)^(1+2)|M12| = −|M12|;
    the minor of a13 is |M13| = |a21 a22; a31 a32| and α13 = (−1)^(1+3)|M13| = +|M13|.

Then (3.8) may be written as

    |A|  =  a11|M11| − a12|M12| + a13|M13|  =  a11·α11 + a12·α12 + a13·α13

In Problem 9, we prove

X. The value of the determinant |A|, where A is the matrix of (3.3), is the sum of the products obtained by multiplying each element of a row (column) of |A| by its cofactor, i.e.,

(3.9)     |A|  =  ai1·αi1 + ai2·αi2 + ... + ain·αin  =  Σ(k=1 to n) aik·αik        (i = 1,2,...,n)

(3.10)    |A|  =  a1j·α1j + a2j·α2j + ... + anj·αnj  =  Σ(k=1 to n) akj·αkj        (j = 1,2,...,n)

Using Theorem VII, we can prove

XI. The sum of the products formed by multiplying the elements of a row (column) of an n-square matrix A by the corresponding cofactors of another row (column) of A is zero.

Example 3. If A is the matrix of Example 2, then by (3.10), |A| = a12·α12 + a22·α22 + a32·α32, while by Theorem XI, a12·α13 + a22·α23 + a32·α33 = 0 and a31·α21 + a32·α22 + a33·α23 = 0.

See Problems 10-11.

MINORS AND ALGEBRAIC COMPLEMENTS. Consider the matrix (3.3). Let i1, i2, ..., im, arranged in order of magnitude, be m, (1 ≤ m < n), of the row indices 1,2,...,n, and let j1, j2, ..., jm, arranged in order of magnitude, be m of the column indices. Let the remaining row and column indices, arranged in order of magnitude, be respectively i(m+1), ..., in and j(m+1), ..., jn. Such a separation of the row and column indices determines uniquely two matrices

(3.11)    A(i1,...,im | j1,...,jm)  =  [a(i1,j1)  a(i1,j2) ... a(i1,jm);  ...;  a(im,j1)  a(im,j2) ... a(im,jm)]

and

(3.12)    A(i(m+1),...,in | j(m+1),...,jn)  =  [a(i(m+1),j(m+1)) ... a(i(m+1),jn);  ...;  a(in,j(m+1)) ... a(in,jn)]

called sub-matrices of A. The determinant of each of these sub-matrices is called a minor of A, and the pair of minors

    |A(i1,...,im | j1,...,jm)|   and   |A(i(m+1),...,in | j(m+1),...,jn)|

are called complementary minors of A, each being the complement of the other.

Example 4. For the 5-square matrix A = [aij], |A(2,3 | 2,5)| = |a22 a25; a32 a35| and |A(1,4,5 | 1,3,4)| = |a11 a13 a14; a41 a43 a44; a51 a53 a54| are a pair of complementary minors.

Let

(3.13)    p  =  i1 + i2 + ... + im + j1 + j2 + ... + jm

and

(3.14)    q  =  i(m+1) + ... + in + j(m+1) + ... + jn

The signed minor (−1)^p |A(i(m+1),...,in | j(m+1),...,jn)| is called the algebraic complement of |A(i1,...,im | j1,...,jm)|, and (−1)^q |A(i1,...,im | j1,...,jm)| is called the algebraic complement of |A(i(m+1),...,in | j(m+1),...,jn)|.

Example 5. For the minors of Example 4, (−1)^(2+3+2+5) |A(1,4,5 | 1,3,4)| = |A(1,4,5 | 1,3,4)| is the algebraic complement of |A(2,3 | 2,5)|, and (−1)^(1+4+5+1+3+4) |A(2,3 | 2,5)| = |A(2,3 | 2,5)| is the algebraic complement of |A(1,4,5 | 1,3,4)|. Note that the sign given to the two complementary minors is the same. Is this always true?

When m = 1, (3.11) becomes [a(i1,j1)], a single element of A; its complementary minor is the first minor |M(i1,j1)|, and its algebraic complement is the cofactor in the notation of the section above.

A minor of A, whose diagonal elements are also diagonal elements of A, is called a principal minor of A. The complement of a principal minor of A is also a principal minor of A, and the algebraic complement of a principal minor is its complement.

The terms minor, complementary minor, algebraic complement, and principal minor as defined above for a square matrix A will also be used without change in connection with |A|.

See Problems 12-13.
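The row expansion (3.9) and Theorem XI can be checked directly for a 3-square matrix (the numbers here are my own): every row paired with its own cofactors gives |A|, while a row paired with the cofactors of a different row gives 0.

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(A, i, j):
    # delete row i and column j of a 3-square matrix
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def cof(A, i, j):
    # cofactor: signed first minor
    return (-1) ** (i + j) * det2(minor(A, i, j))

A = [[1, 2, 1], [2, 3, 0], [4, 1, 5]]
expansions = [sum(A[i][k] * cof(A, i, k) for k in range(3)) for i in range(3)]

assert expansions[0] == expansions[1] == expansions[2]      # (3.9): every row gives |A|
assert sum(A[0][k] * cof(A, 1, k) for k in range(3)) == 0   # Theorem XI: alien cofactors
```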
SOLVED PROBLEMS 1.3(l) = 11 4 (b) 5l 3 5 4 6 5 7 6 (1) 6  3 5 7 3 5 4 (l)(47  56) =  + 2(36  45) 71 5 6 24 18 6 1 (c) 3 5 1 4 6 15 21 = 1(421  11 id) 2 3 1 5 3 = 1(33  51) 4 2. Vn 1 removing the common factor from this third column and a+b+ c a b c a b c h c + c 1 1 1 a b c 1 1 1 1 1 1 +a a+h a+b+c a+b+c (a + b + c) 1 1 4. "54 055 A What is the algebraic complement of each ? The terms minor. 3.(") 11 1 ! !l 4 2 = 24 . algebraic complement. then removing the common factor 2. finally carrying the third row over the other rows . complementary minor.t 5 ! = 0^2 045 °52 are a pair of complementary principal minors of .3 and °31 °S3 I A. subtracting the second row from the third. 1 1 1 1 4 1 1 1 1 4 1 1 4 1 4 1 1 1 1 4 1 4 1 4 1 4 by Theorem I. subtracting the third row from the first. For the 5square matrix A H' "?2 "11 "24 044 (^25 "15 ^1. Adding to the third row the first and second rows. Adding the second column using Theorem to the third. subtracting 'the first row from the second. Adding to the elements of the 4 1 1 1 1 1 first column the corresponding elements 1 1 1 1 1 1 1 1 1 of the other columns.CHAP. and principal minor as defined above for a square matrix A will also be used without change in connection with U  See Problems 1213. . 3] DETERMINANT OP A SQUARE MATRIX 25 Examples.
hence. But '^ if Ul then I ^iij2. Note that U vanishes if and only if two of the a^. is a term of \a\ and. Mi3 = 1+3 (ly 2 3 i 11 2I 2 2 2 2 2 3 a22 = (1) 2+2I 1 1.. U is a real number. Prove: If A is is Hermitian. 02 2 (a^ _ 2 02) a2 2 O2 t by Theorem III Og Og Similarly.. og. (i) \a\ = k(a^^a^)(a^as)(asa^) A: = The product of the diagonal elements. hence.02) (02 ~ '^s) ('^s  %) Og Og Subtract the second row from the first. Now M is of order three in the letters. Since A and = \a\ = U' = U by Theorem II.j„"yi°% %. then Ut = U! and Ul = 0.2Jn^iii"2i?^"in = " " ^' Now Ul Ul requires 6 = 0. the term is ka^a^.„ = + *' Ul = = eii. Ul = Ul. Hermitian. l and \a\ = {a^a^){a^a2){asa^).2+1 ?1 2 3 = 2. 1. bi C]^ 63 Cg a^ Oo 03 Oi 5. For the matrix A = 2 2 2 12 a. ai2 = (1) 1+2 1 2 2 2 3 2 2.23 = (1) 2+3 1 1 1 2 3 2 1 (ly 3+ 1 1 age = (1) 3+2I 1 3 2I a 33 (1) 3+3I 1 2 2 2 31 .Ul. Thus. 4'= A. % ^2 1 1 1 Without expanding. 7. hence. from (i). os are  equal. show that Ul 02 (Oj^ . 3 Oj^+fej a^+b^ 62 + ^2 ag+63 &3+^3 "s+^s+'^s bi 6^+CL 62+ ^2 C2 + a2 ^3+^3 i^+Ci a5^+5]^+Cj^ bi + c^ b^+c^ b^+Cr Cj+a^ C3+a3 a2+fe2+"^2 62 bi 63 62 C2 ^3 Cg a^^ 02 ^2 C2 flg 6^ + CL 62+^2 02 ^3+^3 '^3 Cl a. a.26 DETERMINANT OF A SQUARE MATRIX [CHAP. 6. a^a^ and aga^ are factors. Prove: If A is is skewsymmetric and of odd order 2pskewsymmetric. A = A' . by Theorem II. 1 Og ag 1 and oj^oj is a factor of U. then Ml = Since A U I = \A (if^^UI .But. then Oi + a2 1 1 «l«2 2 flj^ — ^2 ar. 0^02. = (ir'\l 12 (ly .. then \A\ is a real number. 1 2 3 3 8.
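The factorization obtained above — the determinant with rows (1, 1, 1), (a1, a2, a3), (a1², a2², a3²) equals (a1 − a2)(a2 − a3)(a3 − a1) — can be confirmed for a few numerical triples (chosen arbitrarily); in particular it vanishes exactly when two of the a's coincide.

```python
def det3(A):
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def vander(a1, a2, a3):
    return [[1, 1, 1], [a1, a2, a3], [a1 * a1, a2 * a2, a3 * a3]]

for a1, a2, a3 in [(1, 2, 4), (3, -1, 2), (5, 5, 7)]:
    assert det3(vander(a1, a2, a3)) == (a1 - a2) * (a2 - a3) * (a3 - a1)
```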
and hence. Then r+fel . + ai„ !(!)'*" M. a factor.6) having a^^ as a factor are Now e. By the argument leading to (c). in the display as the element.9) with J = i .15) the expansion of U along its first ments. whose cofactor is required. Moreover. We shall call (3.. We shall prove this for a row. khJn~ ^jbjb.JWlTr\T B be the the first umn over sl columns.fe. .3! + + •• .CHAP. Then (a) may be written as (6) where the summation extends over the cr= (ni)! permutations of the integers 2.rk\ 2 k=i a^j^a.rkUl) and we have (3. Then U as the terms of ais!(if M^/isli are all the terms of \a\ having a^s as a factor Thus (3. the element standing in the first row and first column of B is a^s and the minor of a^^ in B is precisely the minor \M^\ of a^s in A.15) + "11*11 since (1) .r.. Prove: The value of the determinant U of an resquare matrix A is the sum of the products obtained by multiplying each element of a row (column) of A by its cofactor.s+i = (1) ••• + ai5!(lf«/. + ai2«i2 + ain*in . . We have (3. Let m. 3] DETERMINANT OF A SQUARE MATRIX 27 Note that the signs given to the minors of the elements in forming the cofactors follow the pattern + + + + +  where each sign occupies the same position cupies in ^.^ VI. rk^rk = r. as °2S <»2n "an "ii "11 1 'twill ^n2 "ns Consider the matrix B obtained from A by moving its sth column over the first sl columns.rs in The element standing in the ^ Thus. Then "' ''^ '°^ ^^^^^ '^' ^^^^ '°' ' = '^ '^ °'''^i"^'J by f^P^^ting the above arguJ'*' ^iT^ matrix obtained from A by moving its rth row over the first r1 rows and then its . thus..l. all the terms of (1)^"^ having a. in. the 1 is in natural order. first row and the first column of 5 is a„ and B the terms of is yreciseiy i» precisely are all the terms of U  having a^^ as a factor.th col°^ T~l (1) sl •(!) \a (1) u the minor of a^^ in ^^ the minor of a. By Theorem \B\ = (1) U. oc 9.9) for j M. 
the terms of ais mis\ are all the terms of \b\ having a^s as a factor and.3 022 (c) n.in ^i"''^ i" ^ P®™"'^ti°n lJi. The terms of (3. Write the display of signs for a 5square matrix.
11 27 41 31 11 . When oLij is the cofactor of aij in the rasquare matrix •'ll A = "12 [a^j] . Evaluate: (a) \ A 3 04 5 1 1 (c) 1 2 5 42 38 56 47 2 2 2 ((f) 4 4 8 (b) 2 1 5 3 2 4 34 56 3 23 4 (a) Expanding along the second column (see Theorem X) 1 2 3 2 4 Cl^^fX^Q "f" 022^22 ^ ^32 32 • ai2 + • a22 + (5)o. A are replaced and VII. 7 + 1 • '^n In making these with k^ 0^7 with Atj.ji "'1 %. 1 2 3 4 5 3 (e) 28 25 38 65 83 11. 027 ^2J appearing is affected since none contains an element (t^j replacements none of the cofactors OLij.2 and ^ Write the eauality similar to (0 obtained from (3. .n *2n (i) fC^GC^j "T fCi^OL^j "T + itj^dj^j '^i ^2 "W. This relation follows from (3.2 n n and s s ^ /). from the / th column of A VII.g2 5 1 34.. \7 (r (r = 1.fi (h. 3 10.9) when the elements of the ith row of by /ci.QL^j.10) by replacing a^j with k^. the determinant in 4  when Ay = 1. j).2I 5(l)' 1 2 I 5(46) 10 3 4l (b) Subtracting twice the second column from the third (see Theorem IX) 1 4 8 1 4 1 2 15 3 2 4 2 3 2 824 521 422 42 1 4 1 = 2 3 4 2 1 3 3(l)'' 3 2 3(14) (c) Subtracting three times the 3 4 1 second row from the first and adding twice the second row to the third 5 3 33(1) 43(2) 1 53(3) 3 = 1 2 4 2 9 3 2 4 9 2 2 2 2 5 4 2 + 2(1) 5 + 2(2) 4+ 2(3) 2 (4 + 36) 32 in (c) (d) Subtracting the first column from the second and then proceeding as 2 1 2 u 34 56 3 23 4 4 3 = 511 4 2 3 1 4 + 4(1) 22(1) 1 52(ll) 11 3 + 4(ll) 42(2) 2 3 + 4(2) = 27 8 11 41 2 .j4i •• 02 i + ^ ^'i. By Theorems Vin. 28 DETERMINANT OF A SQUARE MATRIX [CHAP. show that %. ./c2.ji <%. + fea^g. the determinant in {i) is (i) is I By Theorem when A^= a^^.
5 is even. and 52314 is even. 4. then using TheoremIX to reduce the elements in the remaining columns 2 28 25 38 14 25 38 65 14 2 3 42 38 65 56 3 38 2512(2) 3820(2) 3812(3) 6520(3) 4712(4) 8320(4) 47 83 4 47 83 4 2 4 3 1 2 5 1 2 14  12 6 9 1 14 1 6 9 413 12. 53142 is 15.13) and (3..] 11 12 17 22 13 14 19 10 15 the algebraic complement of  Ao's is 16 . 24135 is odd.3.e.3. using (3. (l + 2++7J) + + 2+"+n) = 2kn{n + l) = n(n +1) or both odd. upper triangular.b. show that half are even and half are odd. odd. and B = [^ 6J ^^^^ ^^^^ AB^BA^ A'b 4 AB' ^ a'b' ^ B'a' but that the determinant of 18. 2. are either both even or both odd.51 . taken together.1.^2+3+2+4 .14).4. 3 8 4 9 5 7 For the matrix A [«. 3. Given 4 = [j J] each product is 4. is Since each row (column) index P + q = (1 found in either p or 9 but never in both. is Thus. 14(l54) 1 770 1 Show that p and q.CHAP. given by (3. as in Problem 2 (a) 1. 2. hence.6). 1 1 2 22 3 = 2 3 3 2 4 3 = 27 (b) 12 2 3 4 (c) 1 4 2 3 4 4 = . Show that the permutation 12534 of the integers 1. or lower triangular then \a\ = abcde. 41532 is even. that when ^ is 17. 16.21 18 23 24 20 25 . Now (1) p+q = (1)^ even (either n or n + 1 is even). diagonal. 3] DETERMINANT OF A SQUARE MATRIX 29 (e) Factoring 14 from the first column. Let the elements of the diagonal of a 5square matrix A be a.5l 13 16 21 18 5 20 25 (see Problem 12) 23 and the algebraic complement of  '4^'4^'5 is I12 141 SUPPLEMENTARY PROBLEMS 14. (1) Ml. Evaluate. p and q are either both even and only one need be computed.cd. 12 6 13. List the complete set of permutations of 1. Show.4.
any two rows (columns) of a square matrix A are proportional. 1/3 3 3 1 29. subtract twice the first row from the second and four times the first row from the third. then 4  is either real or is a pure imaginary number. 26. if A = diag(A^.  . then VIII. (g). Show that the cofactor of an element of any row of 1 is the corresponding element of the same 4 4 3 numbered column.  A U4i 1/3 28. Prove Theorem VII. 3 1 2 10 9 19. Prove: If 24. Same. then \a\ = k = \a'\. 23.4  by interchanging its first and third columns. 1 2 7 1 2 3 (d) Show that I A 2 3 5 2 3 4 4 5 3 thus verifying Theorem VIII. then check that \a\ = / ". Use Theorems and VII to prove Theorem IX. Evaluate to show that \A I has been tripled.30 DETERMINANT OF A SQUARE MATRIX [CHAP.4  = o. Use (3. Evaluate the determinants of Problem 18 as in Problem 11. I U I Evaluate (g) In /I I multiply the first column by three and from it subtract the third column. A^ are 2square matrices. 30. 4 5 8 1 2 7 3 (e) Obtain from \A \ the determinant o = 2 3 by subtracting three times the elements of the first 4 51 Evaluate j column from the corresponding elements of the third column. (c) Denote by I  C  the determinant obtained from  .6) to evaluate \A\ = c d e . Thus. If 4 is an nsquare matrix and 4 is a scalar. 22. a b 27. Do not confuse (e) and 20.  (b) If is skewHermitian. use (3. the resulting determinant. III. (a) Evaluate \A 2 3 4 5 11 (b) Denote by B Evaluate \B  \  the determinant obtained from to verify j 4  by multiplying the elements of its second column by 5. A^). Hint: Interchange the identical rows and use Theorem V. Show that the cofactor of each element of 2/3 2/3 2/3 2/3 1/3 2/3 2/3 4 is that element. (/) In D j to verify Theorem IX. Theorem III. Compare with (e). Prove: (a) If U A  = k. Evaluate C I to verify Theorem V. (a) Count the number of interchanges and thus prove the theorem. 25. 21. 
Prove: (a) If A A is is symmetric then <Xij= 0^j^ when i 4 j Ot^ (b) If nsquare and skewsymmetric then aij = (1) "P when i 4 j . for of adjacent rows (columns) necessary to obtain 6 from A in Theorem V (b) Theorem VI.P/.6) to show that U^  = /t^M . where g h A^.
show that the equation xia x+b +6 a 38.. a a a +6 .. na^+b^ nb^ + c^ na^+bg nb^+Cg ncQ + a^ (n O]^ 02 62 O3 is Cg Without expanding.. Prove: = S(«i  «2)(«i  as)... ."ni ni ra2 o„ 1 na^+b^ 36. 1 1 1 .ni b {na + b). show that nbj^+e^ nci^+a..1 ("!) 1 1 1 1 1 1 n1 (1) (n~l).d)..03X0204). 1 35.b.. —a x—b xc has as a root... . (02 a^j °7ii . x+c a a a a+ b Prove . For the matrix A of Problem (a) 8 .c .. 1 1 1 1 . Multiply the columns of 6^ c^ be ca c2 52 respectively by a. 1 1 1 1 1 1.^ + l)(rfn + 1) 61 Ci nCQ+a^ Cg X 37...CHAP. Without expanding. 3] DETERMINANT OF A SQUARE MATRIX 31 31. Show that the rasquare determinant \ a\ 1  0. (aia^H(a2. a 6 e 1 1 I 1 bed a^ a^ a 1 Without evaluating show that 6^ aed abd abc (a e^  b)(a c^  c)(a  d)(b  c)(i  c'^ e d)(e ....1 1 34. 1 1 1 1 1 1 .0 1 rei n2 a^ 1 rai n2 ar... remove the common factor from each of ab ab ca be ca be the rows to show that A ab ca ab a^ 33.. known. . 1 1 d^ d d° d^ d 1 1 1 . show that j ^  = 1 (b) form the matrix C 12 _'^13 (^^22 32 Otgg and show that AC = [.. ^23 (c) explain why the result in (6) is known as soon as a a^ (a) is be 32.
Chapter 4

Evaluation of Determinants

PROCEDURES FOR EVALUATING determinants of orders two and three are found in Chapter 3. In Problem 11 of that chapter, two uses of Theorem IX were illustrated: (a) to obtain an element 1 or −1 if the given determinant contains no such element, and (b) to replace an element of a given determinant with 0.

For determinants of higher orders, the general procedure is to replace, by repeated use of Theorem IX, the given determinant |A| by another, |B| = |bij|, having the property that all elements, except one, in some row (column) are zero. If bpq is this non-zero element and βpq is its cofactor, then

    |B|  =  bpq·βpq  =  (−1)^(p+q) · bpq · (minor of bpq)

Then the minor of bpq is treated in similar fashion, and the process is continued until a determinant of order two or three is obtained.

Example 1. A fourth-order determinant is reduced in this manner: Theorem IX is used to clear all but one element of a column, the determinant is expanded along that column, and the resulting third-order minor is reduced in turn; the value found is 286.

See Problems 1-3.

For determinants having elements of the type in Example 2 below, the following variation may be used: divide the first row by one of its non-zero elements and proceed to obtain zero elements in a row or column.

Example 2. A determinant whose elements are three-decimal numbers (0.921, 0.185, 0.476, ...) is evaluated by first factoring 0.921 from its first row and then clearing as above; the computation concludes with the value 0.037.
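The reduction scheme described above, applied at every column, is ordinary Gaussian elimination: Theorem IX leaves the value unchanged, each row interchange flips the sign (Theorem V), and the determinant comes out as the signed product of the pivots. A sketch with exact fractions (the matrix is my own example, not the text's):

```python
from fractions import Fraction

def det_by_elimination(rows):
    A = [[Fraction(x) for x in row] for row in rows]
    n, sign, prod = len(A), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)            # a zero column: |A| = 0 (Theorem I)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]   # row interchange (Theorem V)
            sign = -sign
        prod *= A[c][c]
        for r in range(c + 1, n):
            k = A[r][c] / A[c][c]
            A[r] = [x - k * y for x, y in zip(A[r], A[c])]   # Theorem IX
    return sign * prod

M = [[2, 3, -2], [3, -2, 1], [3, 2, 3]]
assert det_by_elimination(M) == -58       # agrees with the cofactor expansion
```

Using `Fraction` keeps every step exact, which matters for determinants with decimal elements like those of Example 2.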
THE LAPLACE EXPANSION. The expansion of a determinant |A| of order n along a row (column) is a special case of the Laplace expansion. Instead of selecting one row of |A|, let m rows, numbered i1, i2, ..., im when arranged in order of magnitude, be selected. From these m rows, p = n!/{m!(n-m)!} m-square minors can be formed by making all possible selections of m columns from the n columns. Using these minors and their algebraic complements, we have the Laplace expansion

(4.1)   |A|  =  Σ (-1)^s · A(i1, i2, ..., im | j1, j2, ..., jm) · A(i_{m+1}, ..., i_n | j_{m+1}, ..., j_n)

where s = i1 + i2 + ... + im + j1 + j2 + ... + jm and the summation extends over the p selections of the column indices taken m at a time.

Example 3. The 4-square determinant of Example 1 is evaluated by the Laplace expansion, using minors of the first two rows: each 2-square minor from the first two rows is multiplied by its algebraic complement from the last two rows, and the sum of the six products is again 286.
See Problems 4-6.

DETERMINANT OF A PRODUCT.

(4.2)   If A and B are n-square matrices, then |AB| = |A|·|B|.
See Problem 7.

EXPANSION ALONG THE FIRST ROW AND COLUMN. If A = [a_ij] is n-square, then

(4.3)   |A|  =  a11·α11  -  Σ_{i=2..n} Σ_{j=2..n} a_{i1}·a_{1j}·α_{1j, i1}

where α11 is the cofactor of a11 and α_{1j, i1} is the algebraic complement of the minor | a11 a1j ; ai1 aij | of |A|.

DERIVATIVE OF A DETERMINANT. Let the n-square matrix A = [a_ij] have as elements differentiable functions of a variable x.
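The product rule (4.2) is easy to check numerically. A rough sketch with two assumed 3-square matrices (the matrices and helper names are illustrative, not from the text):

```python
def det3(m):
    # cofactor expansion of a 3-square determinant along the first row
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 2, 3], [2, 3, 2], [3, 3, 4]]
B = [[2, 0, 1], [1, 1, 0], [0, 3, 1]]

# (4.2): the determinant of the product is the product of the determinants
assert det3(matmul(A, B)) == det3(A) * det3(B)
```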
The derivative, d|A|/dx, of |A| with respect to x is the sum of the n determinants obtained by replacing in all possible ways the elements of one row (column) of |A| by their derivatives with respect to x.

Example 4. For a 3-square determinant whose elements are polynomials in x, the derivative is found by differentiating the first row, then the second row, then the third row, and adding the three resulting determinants.
See Problem 8.

SOLVED PROBLEMS

1. A 4-square determinant with integral elements is evaluated by using Theorem IX to produce a column containing a single nonzero element and expanding along that column; the resulting 3-square minor is treated in the same way. There are, of course, many other ways of obtaining an element +1 or -1; for example, subtract the first column from the second, the fourth column from the second, the first row from the second, etc.

2. A second 4-square determinant is evaluated in the same manner.

3. A determinant with complex elements is evaluated by the same procedure: multiplying the second row by 1+j and the third row by 1+2j simplifies the reduction, and the value of the original determinant is then recovered by dividing by (1+j)(1+2j).
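The differentiation rule can be checked on a small assumed example. Here A(x) = [x  x^2; 1  2x], so |A(x)| = x^2 and the rule should give 2x; the function names are illustrative:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def A(x):
    return [[x, x * x], [1, 2 * x]]

def dA_rows(x):
    # the derivative of each row of A(x)
    return [[1, 2 * x], [0, 2]]

def ddet(x):
    # sum of the determinants obtained by differentiating one row at a time
    rows = A(x)
    drows = dA_rows(x)
    return det2([drows[0], rows[1]]) + det2([rows[0], drows[1]])

# |A(x)| = 2x^2 - x^2 = x^2, so d|A|/dx = 2x
for x in (0, 1, 2, 5):
    assert ddet(x) == 2 * x
```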
Prom these rows p • S(i/ where /i.. n(n~l). Similarly. Derive the Laplace expansion of \A\ = \aij\ of order n. As a result of the interchanges of adjacent rows and adjacent columns.. algebraic com 7.m m\{n — m)\ minors may be selected. rows of P only one nonzero nsquare minor.(nm + l) different m. moreover.CHAP. there are no duplicate terms of U among these products. 5.square l'2.b. Evaluate A 12 3 4 12 1 11 3 4 12 2 using minors of the first two columns.. and C are resquare matrices.  row after by % interchanges of adjacent rows of ^ the row numbered ii can be brought row. i. can be formed. Let I'l. 72 Jn I'm Jm1' Jvn • Jn I .2 interchanges of adjacent rows the tow numbered is can be brought into the second m interchanges of adjacent rows the row numbered % can be brought into the mth row. Each of these minors when multiplied by its algebraic complement yields m!(/jm)' terms of U. prove A C B Prom the plement is first n B Its s. I A has changed sign I cr [1 + ^2+ •• + in + /i = j. Thus Jm+i'Jni2 '•mH'''m +2 % /2 + ••• + /«"»('" +1) times which is equivalent to Ji.. Thus. into the first A. p = . by i^. the minor selected above occupies the upper left corner and its complement occupies the lower right comer of the determinant. the columns numbered /i. by their formation. using minors of order m<n in ji'h Jrn Consider the msquare minor order of magnitude.1) I'l. 4] EVALUATION OF DETERMINANTS 35 4.+ i„ + + + /i + + ii + •• + in changes. Let C = [c^j] = AB so that c V ^Hkhj i^rom . + (igi^ 2) + numbered i„ ••• + (ijj m) 11 + ^2+ occupy the position of the ••• first + 'm2'"('"+l) interchanges of adjacent rows the rows m rows./2 occupy the position of the first m col/jj umns. Prove \AB\ = A U •   S and Suppose Problem 6 = [a^j] B = [bij] are n square.. Hence. after /i + j^ + ••• + /^ . by the Laplace expansion. 12 in be held fixed. li A. U. H''^ 1 %\ of 1^1 in which the row and column indices are arranged .4.\m(m+l) interchanges of adjacent columns. Now by  (I'l. 
Since.B.J2' •• Jm 'm Jn yields ' Jn m!(ra AH'''^ m)! terms of (1) \a\ or Ji~h(a) • Jn JnH' Jm+2' •"TOJl' (ir n+2' yields w!(n m)! terms of \a\. (1)^ 1 21 1 I 1 ll 1 + 2 2 (ir 1 21 41 2 1 ll + 3 ll (If 2 3 11 13 ll 4 1 41 (3)(1) + (2)(1) (5)(l) 6. s = i^ + i^+ ••• + i^ +j^ + j^ + + in and the summation extends over the p different selections in of the column indices.
j = 1.36 EVALUATION OF DETERMINANTS [CHAP. (i. Next. denoting ra. we obtain finally A \P\ C .2. hi ?. Then 031 032 033 %l"22*33 + and. 1 To the (n+l)st column of \P\ add fen times the first column.. to the (n + 2)nd column of P add fei2 times the first column. 62 •• 622 •• .'. %2 %S where Oif Let A = 021 '%2 '^a = aij(x).b.. 1/„ = (l)'^ can be formed. we have On "21 012 •• •• "m «2ra Cii •• "22 •• •• C21 •• "ni "n2 •• • "nn Cm 612 •• 1 1 •• •• •• h..: °12«2Sa31 + %s"S2"21 " (^ix"^"^ " 0:l202lOs3 " (hs^^O'Sl by a.3). Prom the last n rows of  P  only one non zero nsquare minor. \p\ = (i)Vlf" ^"^"lc and \c\ = \ab\ = U1.. = (lf'2"^^'c. 4 ail 012 •• •• "m °2n •• 021 022 • •• •• "ni «n2 •• • "nn 6ii •• 1 1 • •• •• •• 612 •• . We have «in »2n Cii C12 •• C21 C22 •• "nn Cm Cn2 ^13 •• •• •• •• ^m b^n *23 Continuing this process. Its algebraic complement is (_i)i+24"+ra+(n+ii+"+2n( = \c\ Hence. 621 times the second column 6„i times the nth column. are differentiable functions of x. Oil 8. 622 times the second column.2 621 ^22 •• . dx ^^ ^J . times the nth column.
Ans.4"B i. \ts\ to Use \AB express (01 + 02+03 + 04X61+62+63+64) as a sum of four squares. 1 1 Evaluate 4 5 6 5 4 3 3 2 2 2 2 using minors from the first three rows. Chapter 3. show that \A^A \ is real and nonnegative. 4] EVALUATION OF DETERMINANTS 37 dx '^ll"22''S3 + °22°ll"33 + ''33"ll"22 + O32OISO2I f + "l2"23"31 + 023012131 + O31O12O23 + %3032021 f + 021013032 f ~ Oj^jCggOgg f — >i23'^n''S2 — OgjOj^jOgg — a^OQiOg^ — 021012033 — Og^a^a^^ — air^a^^a^. 222222. SUPPLEMENTARY PROBLEMS 9. If A Is nsquare. Evaluate the determinant of Problem 9(o) using minors from the first two columns.CHAP. first two rows.2 [62 + 864 6ij6gJ 1 2 13. 11. Evaluate: 3 5 7 2 1 2 (o) 4 11 2 2 1 3 2000 113 4 1116 2 4 156 ic) 2 3 1 4 4 3 4 5 3 5 3 1 304 4 2 1 1 6 2 2 3 1 {b) 4 2 16 12 9 2 7 2 2 1 = 41 (d) 1 1 2 118 4 3 4 3 2 5 2 2 2 2 10. (o) Let aJ"' 4B Use ^B = "^ and B = r '' '^ bjj (< + 02)(6i+62) (oi [^02 ojj .^ — / 02205^303^  / Ogj^o^gOjg Oil 0^11 + oL20^i2 "*" OisOi^g + a^^dj^x + 022(^22 + 023(^23 "^ OgLCXgi + 032(^32 + 033(^33 oil 012 Ol3 021 031 11 O12 Ol3 O22 O32 '11 "12 "13 022 023 + 21 O23 + i»2i "22 "23 032 033 31 O33 '31 "32 "33 by Problem 10.2. also using minors from the 12. (6) Let A + t03 a2+iaA I tOj^ and B 62 + 164 = Og + ja^ OiiogJ inii. 720 1 1 1 3 3 4 .4B to 1^62 show that 0000 61 + 163 Q O = (01610262) + (0261+0163) .
^ 1=2 J=2 11 "IJ l^lj ^il^lj^lj where ffi.. If a^j denotes the cofactor of a^j in the nsquare matrix A = "11 "21 [a^. then . . xl 3 a: 1 2a: x^ + (c) +5 2x Zx + l 3«2 4ns. For each of the determinants . In + ai4«i4. 62 63 Use the Laplace expansion A to show that the nsquare determinant A B C . use the Laplace expansion to prove diag(4i. tti* along its first col umn show ^11^11 ~ ti .^2 ^^s ^"^^ UilU. find the derivative.. K 17.4 . X («) (b) I 2*: 2 1 . 4 4 . x^ (a) 2a: + 9a:^ 8a. "12 • •• «ire Pi ?i 92 '^12 •••• ?n «22 • •• "2ra P2 Pi "11 " — "m Pilj . 20.! . where is fesquare.^ x+l 15a:*. (6) 1  6a: + 21a:^ + (c) 6a:^  5*"^ .38 EVALUATION OF DETERMINANTS [CHAP.. U a^ a^ Og a^ *1 16.^ OL^a. If ^1. 2 11 square matrices.42 As)\ = 15. 1 Evaluate 111 110 112 12 2 using minors from the first two columns. is the algebraic complement of the minor "ii "ij Of U show that the bordered determinant 19..3).28x^ + = 9a:^ + 20a.4ns.<:° a: 1 4 a. Expand a^ *2 ^3 *4 using minors of the a^ ^2 a^ flg first two rows and show that a^ *4 *1 ^3 a2 62 60 6. x^+l 12a. expand each of the cofactors a^g.]. 4 112 12 14. Use (4 .^ 'J t=i j=i X Cti ^V "ni «n2 li • •• "nra Pre Pn "ni "712 "nn 92 • •• In Hint. Prove : If A and B are real nsquare matrices with A nonsingular and if ff 4 + iS is Hermitian. is zero when > 2"\A\ to = aiiOiii + ai2ai2 + OisO^is 18.  2 21..=^ .
chapter 5

Equivalence

THE RANK OF A MATRIX. A nonzero matrix A is said to have rank r if at least one of its r-square minors is different from zero while every (r+1)-square minor, if any, is zero. A zero matrix is said to have rank 0.

Example 1. The rank of A = [1 2 3; 2 3 4; 3 5 7] is r = 2 since the 2-square minor | 1 2 ; 2 3 | ≠ 0 while |A| = 0.

An n-square matrix A is called nonsingular if its rank r = n, that is, if |A| ≠ 0. Otherwise, A is called singular. The matrix of Example 1 is singular.

From |AB| = |A|·|B| follows

I. The product of two or more nonsingular n-square matrices is nonsingular; the product of two or more n-square matrices is singular if at least one of the matrices is singular.
See Problem 1.

ELEMENTARY TRANSFORMATIONS. The following operations, called elementary transformations, on a matrix do not change either its order or its rank:
(1) The interchange of the ith and jth rows, denoted by H_ij; the interchange of the ith and jth columns, denoted by K_ij.
(2) The multiplication of every element of the ith row by a nonzero scalar k, denoted by H_i(k); the multiplication of every element of the ith column by a nonzero scalar k, denoted by K_i(k).
(3) The addition to the elements of the ith row of k, a scalar, times the corresponding elements of the jth row, denoted by H_ij(k); the addition to the elements of the ith column of k, a scalar, times the corresponding elements of the jth column, denoted by K_ij(k).

The transformations H are called elementary row transformations; the transformations K are called elementary column transformations.

The elementary transformations, being precisely the operations performed on the rows (columns) of a determinant, need no elaboration. It is clear that an elementary transformation cannot alter the order of a matrix. In Problem 2, it is shown that an elementary transformation does not alter its rank.

THE INVERSE OF AN ELEMENTARY TRANSFORMATION. The inverse of an elementary transformation is an operation which undoes the effect of the elementary transformation; that is, after A has been subjected to one of the elementary transformations and then the resulting matrix has been subjected to the inverse of that elementary transformation, the final result is the matrix A.
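A minimal rank computation in the spirit of this section, using only elementary row transformations; the function name `rank` is an assumption, and the test matrix is that of Example 1:

```python
from fractions import Fraction

def rank(m):
    # rank via elementary row transformations (Gaussian elimination)
    m = [[Fraction(x) for x in row] for row in m]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]        # type (1): H_ij
        m[r] = [x / m[r][c] for x in m[r]]     # type (2): H_i(1/k)
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                k = m[i][c]
                m[i] = [x - k * y for x, y in zip(m[i], m[r])]  # type (3)
        r += 1
    return r

# the matrix of Example 1 has rank 2
assert rank([[1, 2, 3], [2, 3, 4], [3, 5, 7]]) == 2
```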
The inverse elementary transformations are:

(1')   H_ij^-1 = H_ij          K_ij^-1 = K_ij
(2')   H_i^-1(k) = H_i(1/k)    K_i^-1(k) = K_i(1/k)
(3')   H_ij^-1(k) = H_ij(-k)   K_ij^-1(k) = K_ij(-k)

We have

II. The inverse of an elementary transformation is an elementary transformation of the same type.

EQUIVALENT MATRICES. Two matrices A and B are called equivalent, A ~ B, if one can be obtained from the other by a sequence of elementary transformations. Equivalent matrices have the same order and the same rank.

Example 2. Let A = [1 2 3; 4 5 6; 7 8 9]. The effect of the elementary row transformation H_21(-2) is to produce B = [1 2 3; 2 1 0; 7 8 9]; the effect of the elementary row transformation H_21(+2) on B is to produce A again. Thus, H_21(-2) and H_21(+2) are inverse elementary row transformations.

Example 3. Applying in turn the elementary transformations H_21(-2), H_31(1), H_32(-1):

     A = [1 2 -1 4; 2 4 3 5; -1 -2 6 -7]
       ~ [1 2 -1 4; 0 0 5 -3; -1 -2 6 -7]
       ~ [1 2 -1 4; 0 0 5 -3; 0 0 5 -3]
       ~ [1 2 -1 4; 0 0 5 -3; 0 0 0 0]  =  B

Since all 3-square minors of B are zero while | -1 4 ; 5 -3 | ≠ 0, the rank of B is 2; hence, the rank of A is 2. This procedure of obtaining from A an equivalent matrix B from which the rank is evident by inspection is to be compared with that of computing the various minors of A.
See Problem 3.

ROW EQUIVALENCE. If a matrix A is reduced to B by the use of elementary row transformations alone, B is said to be row equivalent to A and conversely. The matrices A and B of Example 3 are row equivalent.

Any nonzero matrix A of rank r is row equivalent to a canonical matrix C in which
(a) one or more elements of each of the first r rows are nonzero while all other rows have only zero elements,
(b) in the ith row, (i = 1, 2, ..., r), the first nonzero element is 1; let the column in which this element stands be numbered j_i,
(c) j_1 < j_2 < ... < j_r,
(d) the only nonzero element in the column numbered j_i, (i = 1, 2, ..., r), is the element 1 of the ith row.

To reduce A to C, suppose j_1 is the number of the first nonzero column of A.
(i_1) If a_{1 j_1} ≠ 0, use H_1(1/a_{1 j_1}) to reduce it to 1. Otherwise, use H_{1i} to bring a nonzero element a_{i j_1} into the first row and then reduce it to 1 as before.
(i_2) Use row transformations of type (3) with appropriate multiples of the first row to obtain zeroes elsewhere in the j_1st column.

If nonzero elements of the resulting matrix B occur only in the first row, B = C. Otherwise, suppose j_2 is the number of the first column in which this does not occur. If b_{2 j_2} ≠ 0, use H_2(1/b_{2 j_2}) as in (i_1); if b_{2 j_2} = 0 but b_{i j_2} ≠ 0, use H_{2i} and proceed as in (i_1). Then, as in (i_2), clear the j_2nd column of all other nonzero elements. If nonzero elements of the resulting matrix occur only in the first two rows, we have C; otherwise, the procedure is repeated until C is reached.

Example 4. The sequence of elementary row transformations H_21(-2), H_31(1), H_32(-1); H_2(1/5); H_12(1) applied to A of Example 3 yields

     A ~ [1 2 -1 4; 0 0 5 -3; 0 0 0 0]
       ~ [1 2 -1 4; 0 0 1 -3/5; 0 0 0 0]
       ~ [1 2 0 17/5; 0 0 1 -3/5; 0 0 0 0]  =  C

having the properties (a)-(d).
See Problem 4.

THE NORMAL FORM OF A MATRIX. By means of elementary transformations any matrix A of rank r > 0 can be reduced to one of the forms

(5.1)   I_r,   [I_r  0],   [I_r; 0],   [I_r 0; 0 0]

called its normal form. A zero matrix is its own normal form.

Since both row and column transformations may be used here, the element 1 of the first row obtained as in the section above can be moved into the first column; then both the first row and the first column can be cleared of other nonzero elements. Similarly, the element 1 of the second row can be brought into the second column, and so on. For example, continuing from the canonical matrix C of Example 4, suitable column transformations carry A of Example 3 into the normal form [I_2 0; 0 0].
See Problem 5.

ELEMENTARY MATRICES. The matrix which results when an elementary row (column) transformation is applied to the identity matrix I_n is called an elementary row (column) matrix. Here, an elementary matrix will be denoted by the symbol introduced to denote the elementary transformation which produces the matrix.

Example 5. Examples of elementary matrices obtained from I_3:

     H_12 = [0 1 0; 1 0 0; 0 0 1] = K_12
     H_3(k) = [1 0 0; 0 1 0; 0 0 k] = K_3(k)
     H_13(k) = [1 0 k; 0 1 0; 0 0 1] = K_31(k)
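The reduction steps (i_1), (i_2) can be sketched as code. This reproduces the canonical matrix C of Example 4 from the matrix A of Example 3 as reconstructed here; the helper name `row_canonical` is an assumption:

```python
from fractions import Fraction

def row_canonical(m):
    # reduce m to the row equivalent canonical matrix C
    # by the steps (i_1), (i_2) repeated on successive rows
    m = [[Fraction(x) for x in row] for row in m]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                k = m[i][c]
                m[i] = [x - k * y for x, y in zip(m[i], m[r])]
        r += 1
    return m

A = [[1, 2, -1, 4], [2, 4, 3, 5], [-1, -2, 6, -7]]
C = row_canonical(A)
assert C == [[1, 2, 0, Fraction(17, 5)],
             [0, 0, 1, Fraction(-3, 5)],
             [0, 0, 0, 0]]
```

Note that the pivot columns j_1 = 1 and j_2 = 3 satisfy properties (b)-(d) above.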
Every elementary matrix is nonsingular. (Why?)

The effect of applying an elementary transformation to an m×n matrix A can be produced by multiplying A by an elementary matrix:

To effect a given elementary row transformation on A, apply the transformation to I_m to form the corresponding elementary matrix H and multiply A on the left by H.

To effect a given elementary column transformation on A, apply the transformation to I_n to form the corresponding elementary matrix K and multiply A on the right by K.

Example 6. When A = [1 2 3; 4 5 6; 7 8 9]:

     H_13·A = [0 0 1; 0 1 0; 1 0 0]·A = [7 8 9; 4 5 6; 1 2 3]

interchanges the first and third rows of A;

     A·K_13(2) = A·[1 0 0; 0 1 0; 2 0 1] = [7 2 3; 16 5 6; 25 8 9]

adds to the first column of A two times the third column.

LET A AND B BE EQUIVALENT MATRICES. Let the elementary row and column matrices corresponding to the elementary row and column transformations which reduce A to B be designated as H_1, H_2, ..., H_s; K_1, K_2, ..., K_t, where H_1 is the first row transformation, H_2 is the second, ...; K_1 is the first column transformation, K_2 is the second, .... Then

(5.2)   H_s ··· H_2·H_1·A·K_1·K_2 ··· K_t  =  PAQ  =  B

where P = H_s ··· H_2·H_1 and Q = K_1·K_2 ··· K_t. We have

III. Two matrices A and B are equivalent if and only if there exist nonsingular matrices P and Q defined in (5.2) such that PAQ = B.

Example 7. The elementary matrices which reduce a given matrix A to its normal form N may be multiplied out to give P and Q with PAQ = N; see Problem 6.

Since any matrix is equivalent to its normal form, we have

IV. If A is an n-square nonsingular matrix, there exist nonsingular matrices P and Q as defined in (5.2) such that PAQ = I_n.
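The two multiplications of Example 6 can be checked directly; `matmul` is an assumed helper:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# H_13 applied to I_3: multiplying on the left interchanges rows 1 and 3
H13 = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
assert matmul(H13, A) == [[7, 8, 9], [4, 5, 6], [1, 2, 3]]

# K_13(2) applied to I_3: multiplying on the right adds 2 times
# the third column to the first column
K13_2 = [[1, 0, 0], [0, 1, 0], [2, 0, 1]]
assert matmul(A, K13_2) == [[7, 2, 3], [16, 5, 6], [25, 8, 9]]
```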
INVERSE OF A PRODUCT OF ELEMENTARY MATRICES. Let P = H_s ··· H_2·H_1 and Q = K_1·K_2 ··· K_t, as in (5.2). Since each H and K has an inverse and since the inverse of a product is the product in reverse order of the inverses of the factors,

(5.4)   P^-1 = H_1^-1·H_2^-1 ··· H_s^-1   and   Q^-1 = K_t^-1 ··· K_2^-1·K_1^-1

Let A be an n-square nonsingular matrix and let P and Q defined above be such that PAQ = I_n. Then

(5.5)   A  =  P^-1(PAQ)Q^-1  =  P^-1·I_n·Q^-1  =  P^-1·Q^-1

We have proved

V. Every nonsingular matrix can be expressed as a product of elementary matrices.
See Problem 7.

From this follow

VI. If A is nonsingular, the rank of AB (also of BA) is that of B.

VII. If P and Q are nonsingular, the rank of PAQ is that of A.

CANONICAL SETS UNDER EQUIVALENCE. A set of m×n matrices is called a canonical set under equivalence if every m×n matrix is equivalent to one and only one matrix of the set. Such a canonical set is given by (5.1) as r ranges over the values 1, 2, ..., m or 1, 2, ..., n, whichever is the smaller.
See Problem 9.

In Problem 8, we prove

VIII. Two m×n matrices A and B are equivalent if and only if they have the same rank.

RANK OF A PRODUCT. Let A be an m×p matrix of rank r. By Theorem III there exist nonsingular matrices P and Q such that

(5.6)   PAQ  =  N  =  [I_r 0; 0 0]   so that   A = P^-1·N·Q^-1

Let B be a p×n matrix and consider the rank of

     AB  =  P^-1·N·Q^-1·B

By Theorem VI, the rank of AB is that of N·Q^-1·B. Now the rows of N·Q^-1·B consist of the first r rows of Q^-1·B and m-r rows of zeroes. Hence, the rank of AB cannot exceed r, the rank of A. Similarly, the rank of AB cannot exceed that of B. We have proved

IX. The rank of the product of two matrices cannot exceed the rank of either factor.

Suppose AB = 0; then from (5.6), N·Q^-1·B = 0. This requires that the first r rows of Q^-1·B be zeroes while the remaining rows may be arbitrary. Hence, the rank of Q^-1·B and, by Theorem VII, the rank of B cannot exceed p-r. We have proved

X. If the m×p matrix A, of rank r, is such that AB = 0, the rank of B cannot exceed p-r.
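A small numerical check of Theorem IX; the matrices are assumptions (A is the rank-2 matrix of Example 1 of this chapter, B is an obvious rank-1 matrix):

```python
from fractions import Fraction

def rank(m):
    # rank by forward elimination with exact arithmetic
    m = [[Fraction(x) for x in row] for row in m]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                k = m[i][c] / m[r][c]
                m[i] = [x - k * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3], [2, 3, 4], [3, 5, 7]]   # rank 2
B = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # rank 1
AB = matmul(A, B)

# Theorem IX: rank of AB cannot exceed the rank of either factor
assert rank(AB) <= min(rank(A), rank(B))
assert rank(AB) == 1
```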
SOLVED PROBLEMS
1.
(a)
The rank
of
A
1
4
1
2 3] sj 2 3 5
is 2
since
1
2
4
^
and there are no minors of order three.
(b)
The rank
of
A
12
"0
is 2
since

^
j
=
and
2
3
^0.
2 2 4 8 2
3"
5
(c) The rank of A = [0 2 3; 0 4 6; 0 6 9] is 1 since |A| = 0 and each of the nine 2-square minors is 0, but not every element is 0.
2. Show that the elementary transformations do not alter the rank of a matrix.

We shall consider only row transformations here and leave consideration of the column transformations as an exercise. Let the rank of the m×n matrix A be r so that every (r+1)-square minor of A, if any, is zero. Let B be the matrix obtained from A by a row transformation. Denote by |R| any (r+1)-square minor of A and by |S| the (r+1)-square minor of B having the same position as |R|.

Let the row transformation be H_ij. Its effect on |R| is either (i) to leave it unchanged, (ii) to interchange two of its rows, or (iii) to interchange one of its rows with a row not of |R|. In the case (i), |S| = |R| = 0; in the case (ii), |S| = -|R| = 0; in the case (iii), |S| is, except possibly for sign, another (r+1)-square minor of |A| and, hence, is 0.

Let the row transformation be H_i(k). Its effect on |R| is either (i) to leave it unchanged or (ii) to multiply one of its rows by k. Then, respectively, |S| = |R| = 0 or |S| = k|R| = 0.

Let the row transformation be H_ij(k). Its effect on |R| is either (i) to leave it unchanged, (ii) to increase one of its rows by k times another of its rows, or (iii) to increase one of its rows by k times a row not of |R|. In the cases (i) and (ii), |S| = |R| = 0; in the case (iii), |S| = |R| ± k·(another (r+1)-square minor of A) = 0 ± k·0 = 0.

Thus, an elementary row transformation cannot raise the rank of a matrix. On the other hand, it cannot lower the rank for, if it did, the inverse transformation would have to raise it. Hence, an elementary row transformation does not alter the rank of a matrix.
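The invariance just proved can be spot-checked; the matrix below is that of Example 3, and the particular transformations chosen are arbitrary:

```python
from fractions import Fraction

def rank(m):
    # rank by forward elimination with exact arithmetic
    m = [[Fraction(x) for x in row] for row in m]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                k = m[i][c] / m[r][c]
                m[i] = [x - k * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, -1, 4], [2, 4, 3, 5], [-1, -2, 6, -7]]

swapped = [A[1], A[0], A[2]]                                   # H_12
scaled = [[3 * x for x in A[0]], A[1], A[2]]                   # H_1(3)
added = [A[0], [x + 2 * y for x, y in zip(A[1], A[0])], A[2]]  # H_21(2)

assert rank(A) == rank(swapped) == rank(scaled) == rank(added) == 2
```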
3. For each of the matrices A obtain an equivalent matrix B and from it, by inspection, determine the rank of A.
2
1
3"
1
'^/
2
3
"^
"1
2
1
3~
1
""J
'1
2
1
3"
1
(a)
A
=
2
_3
3
1_
3 3
2
4 8
_0
1
2_
_0
1_
The transformations used were
"1
//2i(2). ffsi(3);
"1
H^(l/3), HQ(l/i); Hg^il).
2
3
0"
The rank
is 3.
2 3
0"
1
''V.
2
3
0"
2
3
0'
1
2
3
(b)
A
2 4 3 2
3
_6
3
4
2
3
5_
''\j
4
2
1
3
8
8 3
3
'~^
4
8 3
3 2 2
r^
4
8 3
S.
3
2
2
5_
The rank
is 3.
8 7 5_
4 4 11 —i
+
'~^
p 4 11
1
3
1+i
(c)
1
i
A
1
i
2j
_1
1
+
2j
2J_
'Vy
i
1
+
2J
=
B.
The rank
is 2.
+
2j
l+i_
i
1
+
Note.
The equivalent matrices B obtained here are not unique. In particular, since in (a) and (b) only row transformations were used, the reader may obtain others by using only column transformations. When the elements are rational numbers, there generally is no gain in mixing row and column
transformations.
4. Obtain the canonical matrix C row equivalent to each of the given matrices A.
(a)
A =
13 12 6
2 3 9
113
12
6 2 3 9
113
13
3
l"
132 132
2_
1
10 4 132
113
1
13
3
3
v^
GOOD
3"
2
3
(b)
A
1
=
2 4
.1
1
2 2 3 1
f
1
2
1
2
2 3
1
'10
^\y
3
7~
1
1
'^
1
1
6 4
2
1
2
1
4_
10 10
1
2
2_
O
10 1 10 01 10 2
1
4
6_
p 1
1
5_
1
pool
2
5.
Reduce each of
1
the following to normal form.
2
1
1
1
^\y
2
1
1
"l
o"
fl
o"
1
'l
o'
'l
'V.
o'
(a)
A
3 2
4
3
2 5
2
5 3
02
1
5
p
12
Lo 2
5
12
11
10
P
10
p
1
0_
2
7 2
7 2 sj
7 3_
U
7_
=
Us o]
The elementary transformations
are:
^2i(3). «3i{2); K2i(2), K4i(l); Kgg; Hg^^y. ^32(2). /f42(~5); /fsd/ll), ^43(7)
(6)
A
[0234"! P 23 5 4^0
4 8 13 I2J
_4
354
2
i
1
'^
3
2
5
3 2 3 4
'Xy
"1000"
"1000'
^\j
"1000"
<^
3
4
8 13 12
2 8 13
13 13
4
4.
10 p
1
10
0_
p
0_
The elementary transformations
are:
^12: Ki(2); H3i(2); KsiCS), X3i(5), K4i(4); KsCi); /fs2(3), K42(4); ftjgC 1)
6. Reduce A = [1 2 3 -2; 2 -2 1 3; 3 0 4 1] to normal form N and compute the matrices P_1 and Q_1 such that P_1 A Q_1 = N.
Since A is 3x4,
we
shall work with the array
is
A
U
Each row transformation
is
performed on a row of
seven elements and each column transformation
1
performed on a column of seven elements.
3
2
1
2 3
1
2
10
1
1 1 1
12
22
3
1
3
1
1
2
32
5
7 7
1 1 1
1
1
10
1 1
6
6
2
4
5
6 5 7 2 6 5 7 3
5
7
2
011
1/3 3
1/3 4/3 1/3
1/6
01/6 5/6
10
1 1
10
1
7/6
157210
0111
10
1
or
N
Pi
0210
011
1
1
1
1/3 4/3 1/3
Thus,
Pi
=
1/6 5/6
2
1
1
1
1
10
1
7/6
and
PiAQi =
10 10
N.
1
3
3
7. Express A = [1 3 3; 1 4 3; 1 3 4] as a product of elementary matrices.
The elementary transformations
/
//2i(l).
=
^siCD; ^2i(3). 'f3i(3) reduce A
to
4
,
that is,
[see (5.2)]
=
H^H^AK^K^
ff3i(l)^2i(l)'4X2i(3)?f3:i.(3)
fl
0"1
fl
0"j
fl
3"
fl 3
O'
Prom
(5.5),
HlH'^K^Kt
=
110
_0
010
[l
010
[p
010
_0
1_
ij
ij
ij
8. Prove: Two m×n matrices A and B are equivalent if and only if they have the same rank.

If A and B have the same rank, both are equivalent to the same matrix (5.1) and are equivalent to each other. Conversely, if A and B are equivalent, there exist nonsingular matrices P and Q such that B = PAQ. By Theorem VII, A and B have the same rank.
9.
A
canonical set for nonzero matrices of order 3 is
[i:l[:i:]
A
canonical set
tor
[!:][::•]
nonzero
3x4
matrices is
Vh o]
[•:=:]
nm
r^,
&:]
B consisting
n.
10. If from a
square matrix
A
of order n and rank
a submatrix
r^
of s rows (columns) of
A
is selected, the rank r^ of
B
is
equal to or greater than
+ s 
The normal form of A has nrj^ rows whose elements whose elements are zeroes. Clearly
'A
are zeroes and the normal form of
6 has srg rows
^
from which follows
r
B
>
r
+
s
 n as required.
A
.
.
SUPPLEMENTARY PROBLEMS
2
1
2 2
(c)
1223
2
4
5 (d)
5 6
6 7 8
7
8 9
2
11.
3 5
2
3
2
Find the rank of
546
2
(a)
3 3
1
(b)
4 3 4
4 5 7 4 6
1 3
2
2
6
7
416
10 11 12 13 14
15 16 17 18 19
Ans.
(a) 2,
(b) 3,
(c) 4.
(d) 2
12. Show by considering minors that A, A', Ā, and Ā' have the same rank.
13.
Show
that the canonical matrix C, row equivalent to a given matrix A, is uniquely determined by A.
14.
Find the canonical matrix row equivalent
Tl
(a)
to
each of the following:
1
'X^
23"]^ri 07]
4j
2 2
12
(b)
3 4
1/9"
1
1
3 4
_4
[2 5
[o
12
1
10
1
1
2j
3
2
1/9
(c)
2 3
1
11/9
3
10 10 12
1
3 (d) 2
10 1 101
1 1
1
1
1
1
1
1
1
6
1
2
1 1
3
1
^V/
(e)
2
10 012 10
1
2
1
2
1
2
5
1
1
3
3
3
15. Write the
normal form of each of the matrices of Problem
[I, 0],
14.
Ans.
(a)
(b).(c)
[/g
o]
(d)
P^
j]
(e)
P'
jl
12
16.
3
4
1
Let A
=
2
3
3
4
4
12
(a)
(6)
From Prom
/a
U
form Z/^^. /^O). //i3(4) and check that each HA effects the corresponding row transformation. form K^^. Ks(l). K^^O) and show that each AK effects the corresponding column transformation.
(c) Write the inverses H^l,
H^
{?,),
H^lci) of the elementary matrices of
(a).
.
(d) Write the inverses K^l. ifg^Cl).
K^lo)
4)
1
of the elementary matricesof (6)
3
Check Check
that for each
that for each K.
H,HH~^^l KK~^ = I
"0
0'
"0
and C
=
(e)
Compute B
1
4"
=
^12
^
•ft,(3) //isC
4
1
H^^(4:)H^(3)Hi
1/3
0.
ij
(/)
Show Show Show
that
BC
CB
H.
=
I
.
17. (a) (b)
that
/?',=
/?
.
K(k) = H^(k).
and K^,(/t)
= H^(k)
that if
is a product of
elementary column matrices. R'is the product in reverse order of the same
elementary row matrices.
18. Prove: (a) AB and BA are nonsingular if A and B are nonsingular n-square matrices; (b) AB and BA are singular if at least one of the n-square matrices A and B is singular.

19. If P and Q are nonsingular, show that A, PA, AQ, and PAQ have the same rank.
Hint. Express P and Q as products of elementary matrices.
20. Reduce B = [1 3 6 -1; 1 4 5 1; 1 5 4 3] to normal form N and compute the matrices P_2 and Q_2 such that P_2 B Q_2 = N.
21. Prove: The row equivalent canonical form of a nonsingular matrix A is I and conversely.

22. Prove: Not every matrix A can be reduced to normal form by row transformations alone.
Hint. Exhibit a matrix which cannot be so reduced.

23. The matrix A of Problem 6 and the matrix B of Problem 20 are equivalent. Find P and Q such that B = PAQ.

24. Show that equivalence of matrices is an equivalence relation.

25. If the m×n matrices A and B are of rank r_1 and r_2 respectively, show that the rank of A+B cannot exceed r_1 + r_2.

26. Let A be an arbitrary n-square matrix and E be an n-square elementary matrix. By considering each of the six different types of matrix E, show that |EA| = |E|·|A|.

27. Let A and B be n-square matrices. (a) If at least one is singular, show that |AB| = |A|·|B|. (b) If both are nonsingular, use (5.5) and Problem 26 to show that |AB| = |A|·|B|.

28. (a) Show that the number of matrices in a canonical set of n-square matrices under equivalence is n+1. (b) Show that the number of matrices in a canonical set of m×n matrices under equivalence is the smaller of m+1 and n+1.

29. Given a matrix A of rank 2 with four columns, find a 4-square matrix B ≠ 0 such that AB = 0.
Hint. Follow the proof of Theorem X and take Q^-1·B = [0 0 0 0; 0 0 0 0; a b c d; e f g h], where a, b, ..., h are arbitrary.

30. Prove: If A is an m×n matrix, (m ≤ n), of rank m, then AA' is a nonsingular symmetric matrix. State the theorem when the rank of A is < m.

31. Show how to effect on any matrix A the transformation H_ij by using a succession of row transformations of types (2) and (3).
chapter 6

The Adjoint of a Square Matrix

THE ADJOINT. Let A = [a_ij] be an n-square matrix and α_ij be the cofactor of a_ij; then by definition

(6.1)   adj A  =  [α_11  α_21  ...  α_n1]
                  [α_12  α_22  ...  α_n2]
                  [ ..................  ]
                  [α_1n  α_2n  ...  α_nn]

Note carefully that the cofactors of the elements of the ith row (column) of A are the elements of the ith column (row) of adj A.

Example 1. For the matrix A = [1 2 3; 2 3 2; 3 3 4]:

     α_11 = 6,  α_12 = -2,  α_13 = -3,  α_21 = 1,  α_22 = -5,  α_23 = 3,  α_31 = -5,  α_32 = 4,  α_33 = -1

and

     adj A  =  [6 1 -5; -2 -5 4; -3 3 -1]

Using Theorems X and XI of Chapter 3, we find

(6.2)   A(adj A)  =  (adj A)A  =  diag(|A|, |A|, ..., |A|)  =  |A|·I

Example 2. For the matrix A of Example 1, |A| = -7 and

     A(adj A)  =  [1 2 3; 2 3 2; 3 3 4]·[6 1 -5; -2 -5 4; -3 3 -1]  =  [-7 0 0; 0 -7 0; 0 0 -7]  =  |A|·I

By taking determinants in (6.2), we have

(6.3)   |A|·|adj A|  =  |A|^n

There follow

I. If A is n-square and nonsingular, then |adj A| = |A|^(n-1).
See Problems 1-2.
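Example 1 and the relation (6.2) can be verified mechanically. A sketch for the 3-square case; the helper names (`det2`, `adj3`, `matmul`) are assumptions:

```python
def det2(a, b, c, d):
    return a * d - b * c

def adj3(m):
    # adjugate of a 3-square matrix: the transposed matrix of cofactors
    cof = [[(-1) ** (i + j) *
            det2(*[m[r][c] for r in range(3) if r != i
                           for c in range(3) if c != j])
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] for j in range(3)] for i in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 2, 3], [2, 3, 2], [3, 3, 4]]
adjA = adj3(A)
assert adjA == [[6, 1, -5], [-2, -5, 4], [-3, 3, -1]]

# (6.2): A (adj A) = |A| I, with |A| = -7 for this A
assert matmul(A, adjA) == [[-7, 0, 0], [0, -7, 0], [0, 0, -7]]
```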
II. If A is n-square and singular, then A(adj A) = (adj A)A = 0.

If A is of rank less than n-1, then adj A = 0; if A is of rank n-1, then adj A is of rank 1. (See Problem 5.)

THE ADJOINT OF A PRODUCT. In Problem 4, we prove

III. If A and B are n-square matrices, then

(6.5)   adj AB  =  adj B · adj A

MINOR OF AN ADJOINT. In Problem 6, we prove

IV. Let M = A(i_1, i_2, ..., i_m | j_1, j_2, ..., j_m) be an m-square minor of the n-square matrix A = [a_ij], let N = A(i_{m+1}, ..., i_n | j_{m+1}, ..., j_n) be its complementary minor in A, and let M' denote the m-square minor of adj A whose elements occupy the same position in adj A as those of M occupy in A. Then

(6.6)   M'  =  (-1)^s · |A|^(m-1) · N

where s = i_1 + i_2 + ... + i_m + j_1 + j_2 + ... + j_m.

When m = n, (6.6) reduces to (6.3); when m = n-1, it expresses each (n-1)-square minor of adj A as ±|A|^(n-2) times an element of A.
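A numerical check of (6.5) with the matrix of Example 1 and an assumed second matrix B; note the reversed order of the factors on the right:

```python
def det2(a, b, c, d):
    return a * d - b * c

def adj3(m):
    # adjugate: transposed matrix of cofactors
    cof = [[(-1) ** (i + j) *
            det2(*[m[r][c] for r in range(3) if r != i
                           for c in range(3) if c != j])
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] for j in range(3)] for i in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 2, 3], [2, 3, 2], [3, 3, 4]]
B = [[2, 0, 1], [1, 1, 0], [0, 3, 1]]

# (6.5): adj AB = adj B * adj A
assert adj3(matmul(A, B)) == matmul(adj3(B), adj3(A))
```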
SOLVED PROBLEMS

1. The adjoint of A = [a b; c d] is [α_11 α_21; α_12 α_22] = [d -b; -c a].

2. The adjoint of a 4-square matrix is found in the same manner: each of the sixteen cofactors is a 3-square determinant, and the cofactors of the elements of each row of A become the elements of the corresponding column of adj A.

3. Show that adj(adj A) = |A|^(n-2)·A when A is nonsingular.

By (6.2) applied to adj A,   adj A · adj(adj A) = |adj A|·I = |A|^(n-1)·I. Multiplying on the left by A and using A·adj A = |A|·I,

     |A|·adj(adj A)  =  |A|^(n-1)·A   and   adj(adj A)  =  |A|^(n-2)·A

4. Prove: adj AB = adj B · adj A.

By (6.2), AB·adj AB = |AB|·I; also

     AB(adj B · adj A)  =  A(B·adj B)adj A  =  A(|B|·I)adj A  =  |B|(A·adj A)  =  |A|·|B|·I  =  |AB|·I

Then AB·adj AB = AB·adj B·adj A and, when AB is nonsingular, we conclude adj AB = adj B·adj A.

5. Prove: If A is of order n and rank n-1, then adj A is of rank 1.

First, since A is of rank n-1, there is at least one nonzero cofactor and the rank of adj A is at least one. By Theorem X, Chapter 5, since A(adj A) = |A|·I = 0, the rank of adj A is at most n - (n-1) = 1. Hence, the rank of adj A is exactly 1.
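For n = 3, the identity of Problem 3 reads adj(adj A) = |A|·A. A quick check on the matrix of Example 1, with helpers assumed as before:

```python
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def det2(a, b, c, d):
    return a * d - b * c

def adj3(m):
    cof = [[(-1) ** (i + j) *
            det2(*[m[r][c] for r in range(3) if r != i
                           for c in range(3) if c != j])
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] for j in range(3)] for i in range(3)]

A = [[1, 2, 3], [2, 3, 2], [3, 3, 4]]
d = det3(A)
assert d == -7
# adj(adj A) = |A|^(n-2) A = |A| A for n = 3
assert adj3(adj3(A)) == [[d * x for x in row] for row in A]
```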
• •• ''i2Jn • Ul "^M'Jn^i " "^Jn •• • "%ii'Jn^i "V+i'in • • •• ^ri'^m+i "in.ji "W2 °in. By partitioning. . . » r = when A \ oi \.• 1 ••• a.ii "^mJ's "im. .i "traJn a.im ! "^raJm+i " "^m^n tti. ./2 "^ra. . 6 u where s = Ji Ji.. Prove: of i. hj9 a. If 4 is a skewsymmetric of order 2ra. as defined in the theorem.. the required form follows 7. • a. 1 • a.M %• where s is • H\ ''w Jm+1' Jmt2 4.zii^zy '^2k^. we are to show that under the conditions given above this polynomial is a perfect square. write A = P^l \ \ = E= where £ r \ Then S is skewsym . \A\ = a .Jm^i. we have Ji. 1 *i„ j ai j 1 ••• Wl'A a. The theorem 1 is true for n = l since.• •• a. Js' (ifUI. ^^2 «. > • '00 ••• •.i™ ' "V. Jn ^2 ^m+l' ''m+2 % immediately.52 THE ADJOINT OF A SQUARE MATRIX [CHAP. then \A\ is the square of a polynomial in the elements By its definition. \a Oj and consider the skewsymmetric matrix A = [a^j] of ' zri^i.• 1 1 •• ''m+iJs ' ''m+iJm 1 '^m+iJm+i ''m+iJn "ht.1 2 . ••• a. .im4. Prom this.in by taking determinants of both sides. a.2n^\ Assume now order2A: + that the theorem is true when n = k 2. a. . J/2' • H' ^2 ' > Jm ^m Jra + i' Jii ' Jn (ifur 7m ^i i + f2+ •• + % +/1 + 72+ ••• + Prom a.• 1 ••• 1 Ul \a\ •• • h. \ a\ is a polynomial in its elements.
CHAP. 6]                    THE ADJOINT OF A SQUARE MATRIX                    53

   Here B is skew-symmetric of order 2k and, by the assumption, |B| = f^2, where f is a polynomial in the elements of B. Expanding |A| by the partitioning and collecting terms, |A| is again found to be a perfect square; hence, by induction, the theorem holds for every skew-symmetric matrix of even order.

SUPPLEMENTARY PROBLEMS

8. Compute the adjoint of each of the given matrices (a)-(d), and check each result by forming A . adj A.

9. Verify: (a) The adjoint of a scalar matrix is a scalar matrix; (b) the adjoint of a diagonal matrix is a diagonal matrix; (c) the adjoint of a triangular matrix is a triangular matrix.

10. Write a matrix A, not 0, of order 3 such that adj A = 0.

11. Show that the adjoint of A = [1 2 2; 2 1 -2; -2 2 -1] is 3A' and that the adjoint of A = [-4 -3 -3; 1 0 1; 4 4 3] is A itself.

12. Prove: If an n-square matrix A is of rank < n-1, then adj A = 0.

13. Prove: If A is symmetric, so also is adj A.

14. If A is a 2-square matrix, show that adj(adj A) = A.

15. Prove: If A is skew-symmetric of order n, then adj A is symmetric or skew-symmetric according as n is odd or even.

16. Prove: If A is Hermitian, so also is adj A.
54                    THE ADJOINT OF A SQUARE MATRIX                    [CHAP. 6

17. Is there a theorem similar to that of Problem 16 for skew-Hermitian matrices?

18. Prove: If A is n-square, then |adj(adj A)| = |A|^((n-1)^2).

19. For the elementary matrices, show that
    (a) adj H_ij = -H_ij
    (b) adj H_i(k) = diag(k, ..., k, 1, k, ..., k), where the element 1 stands in the i-th row
    (c) adj H_ij(k) = H_ij(-k)
    with similar results for the K's.

20. Prove: If A is an n-square matrix of rank n or n-1 and if H_1 H_2 ... H_s . A . K_1 K_2 ... K_t = N, its normal form, then
        adj A = adj K_1 . adj K_2 ... adj K_t . adj N . adj H_s ... adj H_2 . adj H_1
    and verify for n = 2.

21. Use the method of Problem 20 to compute the adjoint of (a) A of Problem 7, Chapter 5, and (b) the 4-square matrix given.

22. Let A = [a_ij] and B = [k a_ij] be 3-square matrices. If S(C) denotes the sum of the elements of a matrix C, show that S(adj B) = k^2 S(adj A) and |B| = k^3 |A|.

23. Let B be obtained from A by deleting its i-th and p-th rows and j-th and q-th columns. Show that the cofactor of a_pq in alpha_ij is, apart from sign, |B|, where alpha_ij is the cofactor of a_ij in |A|.

24. Let A_n = [a_ij], (i, j = 1, 2, ..., n), be the lower triangular matrix whose triangle is the Pascal triangle, and define b_ij = (-1)^(i+j) a_ij. Show that adj A_n = [b_ij].
. then the inverse of the direct sum diag(/}„ 4 diag(Al^. If A then AB = AC implies B = THE INVERSE of a nonsingular diagonal matrix diag (i„ k. The inverse II. adj^ am/Ml a^n/\A\ . if ^ is nonsingular «n/M «i2/U (7.4 . Chapter2. (See Problem?... A~^ = ^'^^ ^ f X k 2 1 ^J See Problem 2.1) a^^/\A\ a22/!. of a nonsingular nsquare matrix is unique.chapter 7 The Inverse of a Matrix IF A AND B A is are nsquare matrices such that B''^). (B=A~^) and called the inverse of B. From (6. A^ A~s) Procedures for computing the inverse of a general nonsingular matrix are given below.l/kr. is nonsingular.. AB = BA =1. A„) is the diagonal matrix diag(i/k^.. . 55 . From Problem 6 1 1 2. (A = In Problem I. /.. B is called the inverse of A.2) A adj i = U . a„i/U a„2/UI .. l.. the adjoint of A = 1 1 4 4 3 2 1 "7/2 3  Since Ml = 2. 1A„) ^^ :^i' ^2 4 are nonsingular matrices.. we prove A has an Inverse if An rasquare matrix and only if it is nonsingular. INVERSE FROM THE ADJOINT..) C. Chapters. ann/\A\ 1 2 3 3 Example 7 is 1 1 1.
56                    THE INVERSE OF A MATRIX                    [CHAP. 7

INVERSE FROM ELEMENTARY MATRICES. Let the non-singular n-square matrix A be reduced to normal form by elementary transformations so that

        H_s ... H_2 H_1 . A . K_1 K_2 ... K_t = PAQ = I

Then A = P^-1 Q^-1 and

(7.2)    A^-1 = QP = K_1 K_2 ... K_t . H_s ... H_2 H_1

Example 2. The matrix A = [1 3 3; 1 4 3; 1 3 4] is reduced to I by elementary transformations H_2 H_1 . A . K_1 K_2 = I; by (7.2),

        A^-1 = K_1 K_2 H_2 H_1 = [7 -3 -3; -1 1 0; -1 0 1]

In Chapter 5 it was shown that a non-singular matrix can be reduced to normal form by row transformations alone. Then, by (7.2) with Q = I, as A is reduced to I, I is carried into A^-1.

Example 3. Find the inverse of A = [1 3 3; 1 4 3; 1 3 4] of Example 2 using only row transformations. Write the matrix [A I_3] and perform the sequence of row transformations which carry A into I_3 on the rows of six elements:

        [A I_3] = [1 3 3 1 0 0; 1 4 3 0 1 0; 1 3 4 0 0 1]
                ~ [1 3 3 1 0 0; 0 1 0 -1 1 0; 0 0 1 -1 0 1]
                ~ [1 0 0 7 -3 -3; 0 1 0 -1 1 0; 0 0 1 -1 0 1] = [I_3 A^-1]

Thus, as A is carried into I_3, I_3 is carried into A^-1. See Problem 3.

INVERSE BY PARTITIONING. Let the matrix A = [a_ij] of order n and its inverse B = [b_ij] be partitioned into submatrices of indicated orders:

        A = [A_11 (pxp)  A_12 (pxq); A_21 (qxp)  A_22 (qxq)]    and    B = [B_11 (pxp)  B_12 (pxq); B_21 (qxp)  B_22 (qxq)]

where p + q = n.
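The row-transformation method of Example 3 mechanizes directly. Below is a minimal Gauss-Jordan sketch (the function name and pivoting details are my own choices), reproducing the inverse found above.

```python
from fractions import Fraction

def inverse_by_row_reduction(A):
    n = len(A)
    # form [A | I] and carry A into I by row transformations
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]                # scale pivot row
        for r in range(n):
            if r != c and M[r][c] != 0:                   # clear the column
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]

A = [[1, 3, 3], [1, 4, 3], [1, 3, 4]]
Ainv = inverse_by_row_reduction(A)
assert Ainv == [[7, -3, -3], [-1, 1, 0], [-1, 0, 1]]
```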
CHAP. 7]                    THE INVERSE OF A MATRIX                    57

Since AB = BA = I, we have

(7.4)    (i)  A_11 B_11 + A_12 B_21 = I_p        (ii) A_11 B_12 + A_12 B_22 = 0
        (iii) A_21 B_11 + A_22 B_21 = 0          (iv) A_21 B_12 + A_22 B_22 = I_q

Then, provided A_11 is non-singular,

(7.5)    B_11 = A_11^-1 + (A_11^-1 A_12) z^-1 (A_21 A_11^-1)
        B_12 = -(A_11^-1 A_12) z^-1
        B_21 = -z^-1 (A_21 A_11^-1)
        B_22 = z^-1

where z = A_22 - A_21(A_11^-1 A_12).

Example 4. Find the inverse of A = [1 3 3; 1 4 3; 1 3 4] using partitioning.
    Take A_11 = [1 3; 1 4], A_12 = [3; 3], A_21 = [1 3], and A_22 = [4]. Then
        A_11^-1 = [4 -3; -1 1],    A_21 A_11^-1 = [1 0],    A_11^-1 A_12 = [3; 0]
        z = A_22 - A_21(A_11^-1 A_12) = [4] - [3] = [1],    and    z^-1 = [1]
    Now
        B_11 = A_11^-1 + (A_11^-1 A_12) z^-1 (A_21 A_11^-1) = [4 -3; -1 1] + [3 0; 0 0] = [7 -3; -1 1]
        B_12 = -(A_11^-1 A_12) z^-1 = [-3; 0]
        B_21 = -z^-1(A_21 A_11^-1) = [-1 0]
        B_22 = [1]
    and
        A^-1 = [B_11 B_12; B_21 B_22] = [7 -3 -3; -1 1 0; -1 0 1]

In practice, A_11 is usually taken of order n-1. To obtain A^-1, the following procedure is used. Let

        G_2 = [a_11 a_12; a_21 a_22],    G_3 = [a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33],    G_4, ...

After computing G_2^-1, partition G_3 so that A_11 = G_2 and A_22 = [a_33], and use (7.5) to obtain G_3^-1. Repeat the process on G_4, after partitioning it so that A_11 = G_3 and A_22 = [a_44], and so on. See Problem 5.
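Formulas (7.5) can be traced step by step on the matrix of Example 4; the small block-arithmetic helpers below are illustrative, not from the text.

```python
from fractions import Fraction

def mul(A, B):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def add(A, B):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(A, B)]

# the partition of Example 4
A11, A12, A21, A22 = [[1, 3], [1, 4]], [[3], [3]], [[1, 3]], [[4]]
A11inv = [[4, -3], [-1, 1]]              # inverse of A11 (|A11| = 1)

U = mul(A11inv, A12)                      # A11^-1 A12 = [[3], [0]]
V = mul(A21, A11inv)                      # A21 A11^-1 = [[1, 0]]
z = sub(A22, mul(A21, U))                 # z = A22 - A21(A11^-1 A12) = [[1]]
zinv = [[Fraction(1, z[0][0])]]

B11 = add(A11inv, mul(mul(U, zinv), V))   # (7.5)
B12 = [[-x for x in row] for row in mul(U, zinv)]
B21 = [[-x for x in row] for row in mul(zinv, V)]
B22 = zinv

Ainv = [B11[0] + B12[0], B11[1] + B12[1], B21[0] + B22[0]]
assert Ainv == [[7, -3, -3], [-1, 1, 0], [-1, 0, 1]]
```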
58                    THE INVERSE OF A MATRIX                    [CHAP. 7

THE INVERSE OF A SYMMETRIC MATRIX. When A is symmetric, alpha_ij = alpha_ji and only n(n+1)/2 cofactors need be computed, instead of the usual n^2, in obtaining A^-1 from adj A.

If there is to be any gain in computing A^-1 as the product of elementary matrices, the elementary transformations must be performed so that the property of being symmetric is preserved. This requires that the transformations occur in pairs, a row transformation followed immediately by the same column transformation. For example, the pair H_1(1/sqrt(a)), K_1(1/sqrt(a)) replaces the diagonal element a of

        A = [a b c; b . .; c . .]

by 1. However, when a is not a perfect square, sqrt(a) is either irrational or imaginary; hence, this procedure is not recommended.

In general, the method of partitioning is used since then, A being symmetric so that A_21 = A_12', (7.5) reduces to

(7.6)    B_11 = A_11^-1 + (A_11^-1 A_12) z^-1 (A_11^-1 A_12)',    B_12 = -(A_11^-1 A_12) z^-1,    B_21 = B_12',    B_22 = z^-1

where z = A_22 - A_21(A_11^-1 A_12). The maximum gain occurs when A_11 is taken of order n-1. See Problem 6.

When A is not symmetric, the above procedure may be used to find the inverse of A'A, which is symmetric, and then the inverse of A is found by

(7.7)    A^-1 = (A'A)^-1 A'

SOLVED PROBLEMS

1. Prove: An n-square matrix A has an inverse if and only if it is non-singular.
   Suppose A is non-singular. By Theorem V, Chapter 5, there exist non-singular matrices P and Q such that PAQ = I. Then A = P^-1 Q^-1 and A^-1 = (P^-1 Q^-1)^-1 = QP exists.
   Suppose A^-1 exists. Then A A^-1 = I is of rank n and, by the theorem on the rank of a product (Chapter 5), if A were singular, A A^-1 would be of rank < n. Hence, A is non-singular.
CHAP. 7]                    THE INVERSE OF A MATRIX                    59

2. Find the inverse from the adjoint:
   (a) When A = [2 3; 1 4], |A| = 5, adj A = [4 -3; -1 2], and A^-1 = adj A / |A| = [4/5 -3/5; -1/5 2/5].
   (b) For the given 3-square matrix A, |A| = 18; the nine cofactors are computed and A^-1 = adj A / 18 is found in the same manner.

3. Find the inverse of the given 4-square matrix A by row transformations.
   Write [A I_4] and perform the sequence of row transformations which carry A into I_4; as in Example 3, the same sequence carries I_4 into A^-1.

4. Solve (7.4) for B_11, B_12, B_21, B_22, given that A_11 and z = A_22 - A_21 A_11^-1 A_12 are non-singular.
   From (ii), B_12 = -A_11^-1 A_12 B_22. Substituting in (iv),
        (A_22 - A_21 A_11^-1 A_12) B_22 = z B_22 = I_q
   so that B_22 = z^-1 and then B_12 = -(A_11^-1 A_12) z^-1.
   From (i), B_11 = A_11^-1 - A_11^-1 A_12 B_21. Substituting in (iii),
        A_21 A_11^-1 + (A_22 - A_21 A_11^-1 A_12) B_21 = A_21 A_11^-1 + z B_21 = 0
   so that B_21 = -z^-1 (A_21 A_11^-1) and, finally,
        B_11 = A_11^-1 + (A_11^-1 A_12) z^-1 (A_21 A_11^-1)
   These are the formulas (7.5).

60                    THE INVERSE OF A MATRIX                    [CHAP. 7

5. Find the inverse of the given 4-square matrix A by partitioning.
   (a) Take A_11 = G_3, the leading 3-square minor of A, and A_22 = [a_44]. Compute A_11^-1; then form A_21 A_11^-1, A_11^-1 A_12, z = A_22 - A_21(A_11^-1 A_12), and z^-1; the blocks B_11, B_12, B_21, B_22 of A^-1 follow from (7.5).
   (b) Alternatively, partition A so that A_11 is the leading 2-square minor and build up the inverse in two stages, as described after Example 4.

CHAP. 7]                    THE INVERSE OF A MATRIX                    61

6. Compute the inverse of the given 4-square symmetric matrix A.
   Consider first the leading 3-square submatrix G_3, partitioned so that A_11 is the leading 2-square minor. Since A is symmetric, A_21 = A_12' and (7.6) yields G_3^-1, with B_21 = B_12'. Then partition A so that A_11 = G_3 and A_22 = [a_44], and use (7.6) again to obtain A^-1.

7. Find the inverse of a non-singular matrix A whose leading (n-1)-square minor A_11 is singular.
   We cannot apply (7.5) directly, since A_11 is singular. In general, if the (n-1)-square minor A_11 of the n-square non-singular matrix A is singular, we first interchange rows (or columns) of A so as to bring a non-singular (n-1)-square matrix into the upper left corner, obtaining B = HA, where H is the product of the elementary matrices effecting the interchanges. We find B^-1 by partitioning; then, since A = H^-1 B,
        A^-1 = B^-1 H
62                    THE INVERSE OF A MATRIX                    [CHAP. 7

SUPPLEMENTARY PROBLEMS

8. Find the adjoint and inverse of each of the given matrices (a)-(d).

9. Find the inverse of the matrix of Problem 8 as a direct sum.

10. Obtain the inverses of the matrices of Problem 8 using the method of Problem 3.

11. Same, for the given 4-square matrices (a)-(d).

12. Obtain by partitioning the inverses of the matrices of Problems 8(b), 11(a), 11(c).

13. Use the result of Example 4 to obtain the inverse of the matrix of Problem 11(d) by partitioning.
CHAP. 7]                    THE INVERSE OF A MATRIX                    63

14. Obtain by partitioning the inverses of the given symmetric matrices (a) and (b).

15. Prove: (i) of Problem 23, Chapter 6.

16. Prove: If A is non-singular, then AB = AC implies B = C.

17. Show that if A is non-singular and symmetric, so also is A^-1.
    Hint: (A^-1)' A = (A A^-1)' = I.

18. Show that if the non-singular symmetric matrices A and B commute, then (a) A^-1 B, (b) A B^-1, and (c) A^-1 B^-1 are symmetric.
    Hint: (a) (A^-1 B)' = (B A^-1)' = (A^-1)' B' = A^-1 B.

19. An mxn matrix A is said to have a right inverse B if AB = I and a left inverse C if CA = I. Show that A has a right inverse if and only if A is of rank m and has a left inverse if and only if the rank of A is n.

20. Find a right inverse of the given 3x4 matrix A, if one exists.
    Hint: The rank of A is 3 and a non-singular 3-square submatrix S, with inverse S^-1, can be used to build a 4x3 right inverse B; the remaining elements may be chosen arbitrarily, so that the right inverse is not unique.

21. Show that the given matrix, being of rank 2, has neither a right nor a left inverse.

22. Show that the submatrix [1 3 3; 1 4 3; 1 3 4] of A of Problem 20 is non-singular and use it to obtain another right inverse of A.

23. Obtain a left inverse of the given 4x3 matrix, and show that it is not unique.

24. Show that if A = [A_11 A_12; A_21 A_22] with A_11 non-singular, then |A| = |A_11| . |A_22 - A_21 A_11^-1 A_12|.

25. Show that (I + A) and (I - A) commute.
Chapter 8

Fields

NUMBER FIELDS. A collection or set S of real or complex numbers, consisting of more than the element 0, is called a number field provided the operations of addition, subtraction, multiplication, and division (except by 0) on any two of the numbers yield a number of S.

Examples of number fields are:
(a) the set of all rational numbers,
(b) the set of all real numbers,
(c) the set of all numbers of the form a + b*sqrt(3), where a and b are rational numbers,
(d) the set of all complex numbers a + bi, where a and b are real numbers.

The set of all integers and the set of all numbers of the form b*sqrt(5), where b is a rational number, are not number fields.

GENERAL FIELDS. A collection or set S of two or more elements, together with two operations called addition (+) and multiplication (.), is called a field F provided that, a, b, c, ... being elements of F, i.e. scalars,

A_1: a + b is a unique element of F
A_2: a + b = b + a
A_3: a + (b + c) = (a + b) + c
A_4: There exists an element 0 in F such that a + 0 = 0 + a = a for every element a in F.
A_5: For each element a in F there exists a unique element -a in F such that a + (-a) = 0.
M_1: ab is a unique element of F
M_2: ab = ba
M_3: (ab)c = a(bc)
M_4: There exists an element 1, not 0, in F such that 1 . a = a . 1 = a for every element a in F.
M_5: For each element a, not 0, in F there exists a unique element a^-1 in F such that a . a^-1 = a^-1 . a = 1.
D_1: a(b + c) = ab + ac
D_2: (a + b)c = ac + bc

In addition to the number fields listed above, other examples of fields are:
(e) the set of all quotients P(x)/Q(x) of polynomials in x with real coefficients,
(f) the set of all 2x2 matrices of the form [a b; -b a], where a and b are real numbers,
(g) the set {0, 1} with addition and multiplication taken so that 1 + 1 = 0.

In the field (g), called a field of characteristic 2, the customary proof that a determinant having two rows identical is 0 is not valid: by interchanging the two identical rows, we are led to D = -D or 2D = 0, which is satisfied by every D of the field, while D is not necessarily 0. This field will be excluded hereafter.

                                                                              64
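Example (f) — that the 2x2 matrices [a b; -b a] form a field — can be checked by machine for particular elements. The test values below are arbitrary; closure under products and the form of the inverse (M_5) mirror the complex numbers a + bi.

```python
from fractions import Fraction

def elem(a, b):
    # an element of example (f)
    return [[a, b], [-b, a]]

def mul(A, B):
    return [[sum(x*y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

A, B = elem(2, 3), elem(Fraction(1, 2), -5)
P = mul(A, B)
# closure under multiplication: the product has the same form [a b; -b a]
assert P[0][0] == P[1][1] and P[0][1] == -P[1][0]

# M_5: the inverse of a non-zero element is elem(a/(a^2+b^2), -b/(a^2+b^2)),
# exactly parallel to the inverse of the complex number a + bi
a, b = Fraction(2), Fraction(3)
inv = elem(a/(a*a + b*b), -b/(a*a + b*b))
assert mul(elem(a, b), inv) == elem(1, 0)
```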
MATRICES OVER A FIELD. When all of the elements of a matrix A are in a field F, we say that A is over F. For example, a matrix whose elements are the rational numbers 1/4, 1/2, 2/3, ... is over the rational field, while a matrix B having among its elements 1 + i and -3i is over the complex field. The first matrix is also over the real field (and over the complex field), while B is not over the real field.

Hereafter, when A is said to be over F, it will be assumed that F is the smallest field containing all of its elements; that is, if the elements are rational numbers, F is the rational field and not the real or the complex field.

Let A, B, C, ... be matrices over the same field F. The sum, difference, and product of these matrices are matrices over F. An examination of the various operations defined on these matrices, individually or collectively, in the previous chapters — the properties A_1-A_5, M_1-M_5, and D_1-D_2 — shows that no elements other than those in F are ever required.

In later chapters it will at times be necessary to restrict the field, say, to the real field. At other times, the field of the elements will be extended, say, from the rational field to the real field.
For example: If A over F is non-singular, there exist matrices P and Q over F such that PAQ = I; hence its inverse A^-1 = QP is over F. If A is over the rational field and of rank r, its rank is unchanged when it is considered over the real or the complex field.

SUBFIELDS. If S and T are two sets and if every member of S is also a member of T, then S is called a subset of T. If S and T are fields and if S is a subset of T, then S is called a subfield of T. For example, the field of all real numbers is a subfield of the field of all complex numbers, and the field of all rational numbers is a subfield of the field of all real numbers and of the field of all complex numbers.

                                8]    FIELDS    65

SOLVED PROBLEM

1. Verify that the set of all complex numbers constitutes a field.
   The associative and commutative laws of addition and multiplication and the distributive laws are readily verified. The zero element (A_4) is 0 and the unit element (M_4) is 1. The negative (A_5) of a + bi is -a - bi. If a + bi and c + di are two elements, their product (M_1) is
        (a + bi)(c + di) = (ac - bd) + (ad + bc)i
   and the inverse (M_5) of a + bi, not 0, is
        1/(a + bi) = (a - bi)/(a^2 + b^2) = a/(a^2 + b^2) - (b/(a^2 + b^2)) i
   Verification of the remaining properties is left as an exercise for the reader.
66                    FIELDS                    [CHAP. 8

SUPPLEMENTARY PROBLEMS

2. Verify that (a) the set of all real numbers of the form a + b*sqrt(5), where a and b are rational numbers, and (b) the set of all quotients P(x)/Q(x) of polynomials in x with real coefficients, constitute fields.

3. Why does not the set of all 2x2 matrices with real elements form a field?

4. Verify that the set of all 2x2 matrices of the form [a b; -b a], where a and b are rational numbers, forms a field. Show that this is a subfield of the field of all 2x2 matrices of the form [a b; -b a], where a and b are real numbers.

5. A set R of elements a, b, c, ... satisfying the conditions A_1-A_5, M_1, M_3, D_1, D_2 of Page 64 is called a ring. When R also satisfies M_2, it is called commutative; when R satisfies M_4, it is spoken of as a ring with unit element. To emphasize the fact that multiplication is not commutative, R may be called a non-commutative ring.

6. Verify:
   (a) the set of even integers 0, +-2, +-4, ... is an example of a commutative ring without unit element;
   (b) the set of all integers 0, +-1, +-2, ... is an example of a commutative ring with unit element;
   (c) the set of all n-square matrices over F is an example of a non-commutative ring with unit element;
   (d) the set of all 2x2 matrices of the given form, where a and b are real numbers, is an example of a commutative ring with unit element.

7. Can the set (a) of Problem 6 be turned into a commutative ring with unit element by simply adjoining the elements +-1 to the set?

8. By Problem 4, the set (d) of Problem 6 is a field. Is every field a ring? Is every commutative ring with unit element a field?

9. Describe the ring of all 2x2 matrices of the form [a b; 0 0], where a and b are in F. If A is any matrix of the ring and L = [1 0; 0 0], show that LA = A, so that L is a left unit element. Is there a right unit element?
10. Let C be the field of all complex numbers p + qi and K the field of all 2x2 matrices of the form [p q; -q p], where p and q are real numbers. Take the complex number a + bi and the matrix [a b; -b a] as corresponding elements of the two sets and call each the image of the other.
    (a) Show that the image of the sum of two elements of K is the sum of their images in C.
    (b) Show that the image of the product of two elements of K is the product of their images in C.
    (c) Show that the image of the identity element of K is the identity element of C.
    (d) What is the image of the conjugate of a + bi?
    (e) What is the image of the inverse of [a b; -b a]?
    This is an example of an isomorphism between two sets.
Chapter 9

Linear Dependence of Vectors and Forms

THE ORDERED PAIR of real numbers (x_1, x_2) is used to denote a point X in a plane. The same pair of numbers, written as [x_1, x_2], will be used here to denote the two-dimensional vector or 2-vector OX (see Fig. 9-1).

If X_1 = [x_11, x_12] and X_2 = [x_21, x_22] are distinct 2-vectors, the parallelogram law for their sum (see Fig. 9-2) yields

        X_3 = X_1 + X_2 = [x_11 + x_21, x_12 + x_22]

        Fig. 9-1                Fig. 9-2

Treating X_1 and X_2 as 1x2 matrices, we see that this is merely the rule for adding matrices given in Chapter 1. Moreover, if k is any scalar, kX_1 = [kx_11, kx_12] is the familiar multiplication of a vector by a real number of physics.

VECTORS. By an n-dimensional vector or n-vector X over F is meant an ordered set of n elements x_i of F, as

(9.1)    X = [x_1, x_2, ..., x_n]

The elements x_1, x_2, ..., x_n are called respectively the first, second, ..., n-th components of X.

Later we shall find it more convenient to write the components of a vector in a column, as

(9.1')    X = [x_1; x_2; ...; x_n]

We shall then speak of (9.1) as a row vector and of (9.1') as a column vector.

                                                                              67
68            LINEAR DEPENDENCE OF VECTORS AND FORMS            [CHAP. 9

The sum and difference of two row (column) vectors and the product of a scalar and a vector are formed by the rules governing matrices.

Example 1. Consider the 3-vectors X_1 = [3, 1, -4], X_2 = [2, 2, -3], X_3 = [0, -4, 1], X_4 = [-4, -4, 6]. Then
    (a) 2X_1 = 2[3, 1, -4] = [6, 2, -8]
    (b) X_1 + X_2 = [5, 3, -7]
    (c) 2X_1 - 3X_2 = [0, -4, 1] = X_3
    (d) -2X_2 = [-4, -4, 6] = X_4
Note that if each bracket is primed to denote column vectors, the results remain correct.

A vector X, all of whose components are zero, is called the zero vector and is denoted by 0.

LINEAR DEPENDENCE OF VECTORS. The m n-vectors over F

(9.2)    X_1 = [x_11, x_12, ..., x_1n], X_2 = [x_21, x_22, ..., x_2n], ..., X_m = [x_m1, x_m2, ..., x_mn]

are said to be linearly dependent over F provided there exist m elements k_1, k_2, ..., k_m of F, not all zero, such that

(9.3)    k_1 X_1 + k_2 X_2 + ... + k_m X_m = 0

Otherwise, the m vectors are said to be linearly independent.

Example 2. Consider the four vectors of Example 1.
    (a) By (d) of Example 1, the vectors X_2 and X_4 are linearly dependent, since 2X_2 + X_4 = 0.
    (b) By (c), the vectors X_1, X_2, X_3 are linearly dependent, since 2X_1 - 3X_2 - X_3 = 0.

Example 3. The vectors X_1 = [3, 1, -4] and X_2 = [2, 2, -3] are linearly independent. For, assume the contrary, so that
        k_1 X_1 + k_2 X_2 = [3k_1 + 2k_2, k_1 + 2k_2, -4k_1 - 3k_2] = [0, 0, 0]
    Then 3k_1 + 2k_2 = 0, k_1 + 2k_2 = 0, and -4k_1 - 3k_2 = 0. From the first two relations k_1 = 0 and then k_2 = 0.

Any n-vector X and the n-zero vector 0 are linearly dependent.

A vector X_(m+1) is said to be expressible as a linear combination of the vectors X_1, X_2, ..., X_m if there exist elements k_1, k_2, ..., k_m of F such that
        X_(m+1) = k_1 X_1 + k_2 X_2 + ... + k_m X_m

BASIC THEOREMS. If m vectors are linearly dependent, some one of them may always be expressed as a linear combination of the others. For, suppose k_i is not zero in (9.3); then we may solve for X_i,

(9.4)    X_i = -(k_1/k_i)X_1 - ... - (k_(i-1)/k_i)X_(i-1) - (k_(i+1)/k_i)X_(i+1) - ... - (k_m/k_i)X_m
             = s_1 X_1 + ... + s_(i-1)X_(i-1) + s_(i+1)X_(i+1) + ... + s_m X_m

Thus:
I. If m vectors X_1, X_2, ..., X_m are linearly dependent, some one of them may always be expressed as a linear combination of the others.
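Dependence relations of this kind are verified by direct arithmetic; a quick numerical check (the four vectors below are chosen for illustration, and `comb` is a hypothetical helper for forming linear combinations):

```python
X1 = [3, 1, -4]
X2 = [2, 2, -3]
X3 = [0, -4, 1]
X4 = [-4, -4, 6]

def comb(*pairs):
    # linear combination: sum of k * X over the given (scalar, vector) pairs
    return [sum(k * v[i] for k, v in pairs) for i in range(len(pairs[0][1]))]

# X4 = -2 X2, i.e. 2 X2 + X4 = 0: X2 and X4 are linearly dependent
assert comb((-2, X2)) == X4
# X3 = 2 X1 - 3 X2, i.e. 2 X1 - 3 X2 - X3 = 0
assert comb((2, X1), (-3, X2)) == X3
```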
5) of the vectors be of rank r<m. If the rank is m. A LINEAR FORM (96) over F in ra variables n x^. the vectors X^ and X^ are linearly dependent..2) is linearly independent so also is every subset of them. is A necessary and sufficient condition that the vectors (9. and X^ are linearly independent while linearly dependent. CHAP.^niXg Zg=2A^i3^2 are III. Prom Example the vectors X^. ^n2 — % r associated with the m vectors (9. If m vectors X^.^ is X^ are linearly independent while the set obtained by addlinearly dependent..5) m<n *TOl . satisfying the relations 2X. the set of four vectors is linearly dependent. Clearly.X^. X^. See Problems 23. F such = that Kk ^^^2/2 + A. See Problem 1.k^ k^ + . x^ «„ a^x^ is a polynomial of the type + ••• 2 i=i OiXi = + a^x^ + a^A:^ where the coefficients are Consider a system of in F linear forms in n variables /l m = CIu% + %2^2 + + '^2n*re (9. V. by (d). X^ ing another vector bination of Xi. the vectors are linearly independent. IV. Example 4. then Jt^+i can be expressed as a linear com Examples. X^ X^ there is a subset of r<m vectors which are linearly dependent. If the rank of the matrix Xi 1 %2 (9.i^2X^ Xg= 0. X^j. there are exactly vectors of the set which are linearly independent while each of the remaining mr vectors can be expressed as a linear combination of these r vectors.„4 . The If set of vectors (9. in + .2) be linearly dependent that the matrix (9. the set of vectors (9. X^.7) /m 1^1 + 0~foXo + %n^n and the associated matrix 'ml "m2 If there exist elements k^. 9] LINEAR DEPENDENCE OF VECTORS AND FORMS 69 II. X^ 2. By (b) of Example 1. . If among the m vectors X^.2) is necessarily linearly dependent if m>n. the vectors of the entire set are linearly dependent. not all zero..2) is r<m.
.Yot = with not all of the k'a equal to zero and the entire set of vectors is linearly dependent. xpf xpq where the elements xp.k^.X^ X^. Let h^. say. which is linearly dependent. 4 7 The system (9.«2 + 3*g. the linear dependence dependence /i = or otherwise the forms are said to be linearly independence of the forms of (9. .. to the linear Thus. xp^ Xpr..X^ there is a subset.^+i = xj^q are respectively from any row and any column not included in A.10) . /2 = x.. 3/^  "if^  fs = . then 0..70 LINEAR DEPENDENCE OF VECTORS AND FORMS [CHAP. Thus. %1 %2 *rl X^2 Xqt x^q ^rr Xrq . so also are the Since.^+ 2% + 4^=3.A.X^. xpq. there are exactly r vectors which are linearly independent while each of the remaining mr vectors can be expressed as a linear combination of these r vectors.7) is necessarily dependent if m>n.. is Let (9. /g = ix^  Tx^ + Xg are linearly depend 213 ent since A = 1 2 4 1 is of rank 2. 9 the forms (9. Prove: If among the m vectors X^. 2. ••• k^X^ + + k^X^ = + k^X^ + with not all of the k's equal to zero. k^X^ + m vectors. Prove: If the rank of the matrix associated with a set of m ravectors is r<m. by (3. by hypothesis. r<m. or Example 5.7) is equivalent independence of the row vectors of A. The forms 2xi . Here. we have 11 11? A 21 25' Consider now an (r+l)rowed minor *11 «12 • %r Xiq .7) are said to be linearly dependent. Then. x^q. and .5) be the matrix and suppose first that m<n If the rrowed minor in the upper left hand comer equal to zero. x^q of the last column of V.. A be the respective cofactors of the elements x^g. independent. . Why? SOLVED PROBLEMS 1. X^. we interchange rows and columns as are necessary to bring a nonvanishing rrowed minor into this position and then renumber all rows and columns in natural order.Y^+i + ••• k^X^ + k^X^ + ••• + 0.
Then 432. . Then 1^X^ + 1X21X2= and Xg = 2X^ + X^.2. The V be replaced by another of the remaining columns. + k^2t + t. each of these may be expressed as a linearcom ^j^.3] is linearly dependent. is of rank 2. 9] LINEAR DEPENDENCE OF VECTORS AND FORMS 71 fci^ii + k2X2i + ••• + krXri + ^r+i*^i ••• = = (i = 1. there are two linearly independent vectors. nents are added.8.2.5. there are two linearly independent vectors.2 respectively. in either case.X^. is a. linearly independent vectors X^^.2.4] (a) XL = [2. 1 1 1 The cofactors of the elements of the last column of 2 2 are 2. Show. Xr+^ X^ are linear combinations of the linearly Independent vec X^ as was to be proved.6.^+i = A ji^ 0.3. 1 Thus.. let the last column of not appearing in A. linear combination of the V^+i. X^ consider the matrix when to each of the given m vectors mn additional zero compoThis matrix is [^ o]. For the case m>n. (a) Here. 2X^ + 2X2 . Consider then the minor 3 1 and 7. 2 (b) 3 3 Here 2 4 2 2 6 3 11 12 23 .2 n) and. 3.1] ^^3= [1. that each triple of vectors X^ = [1. say X^ and Xg.^ and X^ .. Now 1 the we interchange the 2nd and 4th columns to obtain 3 2 2113 2213 4326 2 for which 2 2 5^0.1.. . ^2 = [3.1] [2. 3.2 = r) and by hypothesis k^x^q + k^xQq + + krx^q + krAixpq y Now u. 312 15 8 respectively 14.2.7] and (b) ^2= 1. 1234 3121 15 is of rank 2. The minor 87 1 2 3 The cofactors of the elements of the third column are 1 2 j^ . the vectors tors X^. But ^^j was any one binatlon of of the mr vectors ^r+2 ^I hence. 7. k^x^j. Xp X^. In each determine a maximum subset of linearly independent vectors and express the others as linear combinations of these.1. say the column numbered cofactors of the elements of this column are precisely the k's obtained above k^x^n + ^2*2W + •• so that + ^rXru + ^rn^pu = Thus. " + f'r'^rt + f'rn'^pt = (t = 1.2] ^3= [4. say X. X^ X^.2Xs = and Xg = Xi + X. summing over all values of k^X^ + k^X^ + ••• + k^X^ + r k^^^Xp = Since /i:.CHAP. using a matrix.3. 
Clearly the linear dependence or independence of the vectors and also the rank of A have not been changed.
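The dependence of a set of linear forms reduces to the rank of the coefficient matrix. The sketch below (a Gauss-Jordan rank routine over the rationals, my own illustration rather than the text's method) uses coefficient rows chosen so that 3f_1 - 2f_2 - f_3 = 0, as in Example 5.

```python
from fractions import Fraction

def rank(rows):
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# coefficient rows of f1 = 2x1 - x2 + 3x3, f2 = x1 + 2x2 + 4x3, f3 = 4x1 - 7x2 + x3
A = [[2, -1, 3], [1, 2, 4], [4, -7, 1]]
assert rank(A) == 2                       # dependent: rank < number of forms
assert all(3*a - 2*b - c == 0 for a, b, c in zip(*A))   # 3f1 - 2f2 - f3 = 0
```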
72            LINEAR DEPENDENCE OF VECTORS AND FORMS            [CHAP. 9

4. Let P_1(1, 1, 1), P_2(1, 2, 3), P_3(3, 2, 1), and P_4(2, 3, 4) be points in ordinary space. The points P_1, P_2, and the origin of coordinates determine a plane pi of equation

    (i)    |x y z; 1 1 1; 1 2 3| = 0        or        x - 2y + z = 0

   Substituting the coordinates of P_3 and of P_4 into the left member of (i), each determinant vanishes; thus P_3 and P_4 lie in pi. The significant fact here is that the matrix of the coordinates, [1 1 1; 1 2 3; 3 2 1], is of rank 2. We have verified: Any three points of ordinary space lie in a plane through the origin provided the matrix of their coordinates is of rank 2. Show that P_5(2, 3, 5) does not lie in pi.

SUPPLEMENTARY PROBLEMS

5. Examine each of the given sets of vectors (a), (b), (c) over the real field for linear dependence or independence. In each dependent set, select a maximum linearly independent subset and express each of the remaining vectors as a linear combination of these.

6. Prove: A necessary and sufficient condition that the vectors (9.2) be linearly dependent is that the matrix (9.5) of the vectors be of rank r < m.
   Hint: Suppose the m vectors are linearly dependent so that (9.4) holds; in (9.5), subtract from the i-th row the product of the first row by s_1, the product of the second row by s_2, ..., as indicated in (9.4). For the converse, see Problem 2.

7. Prove: If m vectors X_1, X_2, ..., X_m are linearly independent while the set obtained by adding another vector X_(m+1) is linearly dependent, then X_(m+1) can be expressed as a linear combination of X_1, X_2, ..., X_m.
   Hint: Suppose k_1 X_1 + ... + k_m X_m + k_(m+1) X_(m+1) = 0 with k_(m+1) not zero.

8. Show that the representation of X_(m+1) in Problem 7 is unique.
   Hint: Suppose X_(m+1) = sum of k_i X_i = sum of s_i X_i, and consider the sum of (k_i - s_i) X_i.
CHAP. 9]            LINEAR DEPENDENCE OF VECTORS AND FORMS            73

9. Why can there be no more than n linearly independent n-vectors over F?

10. Show that any n-vector X and the n-zero vector 0 are linearly dependent.
    Hint: Consider k_1 X + k_2 0 = 0, where k_1 = 0 and k_2 is not zero.

11. (a) Show that X_1 = [1+i, 2i] and X_2 = [1, 1+i] are linearly dependent over the complex field.
    (b) Show that X_1 = [1+i, 2i] and X_2 = [1-i, 2] are linearly independent over the real field but are linearly dependent over the complex field.

12. Show that if in (9.2) either X_i = X_j or X_i = aX_j, a in F, the set of vectors is linearly dependent. Two vectors are called proportional when one is a scalar multiple of the other (X and 0 are considered proportional); show that two vectors are linearly dependent or independent according as the rank of the associated matrix A is less than or equal to 1, or is 2. Is the converse true?
(b) that X1 and X2 are linearly independent over the real field but are linearly dependent over the complex field.

13.
:] = Ans. 3. If is of rank C^. where C^ show how to construct a nonsingular matrix B such that AB C^ are a given set of linearly independent columns of A./!^ 4™ in the light of Problem 16. 23.. 5) lie in the plane of (6)? 22. Show that every nsquare matrix A over F satisfies an equation of the form k^A^ ^ A^ + where the k^ are scalars of F Hint: Consider /. (c).] 4^24 A '»[::] (b) = \_ \. (6) (c) Show Show Does that the rank of [Pi. 2. 5. Find the equation of minimum degree (see Problem 22) which (a) is satisfied by 4 = L J. 19. P^. 1^ . 2).74 LINEAR DEPENDENCE OF VECTORS AND FORMS  [CHAP. = [C^. 2. (b) + 27 = 0. P3]' is that [P^.2A +1 24. 2. P3. [:. 1 P5(2. PaY is Ps(2. (a) 1. 1). then (b) A'''^ = I^A. In Problem 23(b) and multiply each equation by 4"^ to obtain is nonsingular. 2. 4). (c) A [. Pjd. 20. 3 2 _ 4 3  _ 18. where 7^ = 2 A aijXj. Show that any 2x2 matrix can be written as a linear combination of the matrices n n [o oj' [o oj and [i oj' Generalize to nxn matrices. + kr. + kp^^A + kpl = 4. and /VO. (c) A~'^=2lA.X^ X^ are linearly independent..Yr. F A' can be expressed as a polynomial in A whose coeffi . so that the points lie on a line through the origin. 4. 3. if show that the vectors Y^. and thus verify: If A over cients are scalars of F. 1. are linearly independent and only if ^ = \_aij'\ is nonsingular.^^. 9 1  2 2 3 2 3 1 3 2 1 .A^ ^ + . o] 21. Given the points Pi(l. If the ravectors X^^. r. 3 3 6 17. 6) of fourdimensional space. Show that 3 1 4 2 _ 4 2 and 3 are linearly dependent. C2 C7. of rank 2 so that these points lie in a plane through the origin. (c) A^ . (a) 4^24=0.
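Problems 22-24 above assert that every n-square matrix A over F satisfies a polynomial equation with scalar coefficients, and that for nonsingular A the inverse can then be read off as a polynomial in A. A quick numerical check with a hypothetical 2x2 matrix whose characteristic polynomial is x^2 - 3x + 2 (numpy assumed available):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
I = np.eye(2)

# Cayley-Hamilton for 2x2 matrices: A^2 - (tr A)*A + (det A)*I = 0.
residual = A @ A - 3 * A + 2 * I
print(residual)                   # the zero matrix

# Multiplying A^2 - 3A + 2I = 0 by A^{-1} gives A^{-1} = (3I - A)/2.
A_inv = (3 * I - A) / 2
print(np.allclose(A @ A_inv, I))  # True
```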
chapter 10

Linear Equations

DEFINITIONS. Consider a system of m linear equations in the n unknowns x1, x2, ..., xn:

          a11*x1 + a12*x2 + ... + a1n*xn = h1
          a21*x1 + a22*x2 + ... + a2n*xn = h2
(10.1)    ...........................................
          am1*x1 + am2*x2 + ... + amn*xn = hm

in which the coefficients (a's) and the constant terms (h's) are in F. By a solution in F of the system is meant any set of values of x1, x2, ..., xn in F which satisfy simultaneously the m equations. When the system has a solution, it is said to be consistent; otherwise, the system is said to be inconsistent. A consistent system has either just one solution or infinitely many solutions.

Two systems of linear equations over F in the same number of unknowns are called equivalent if every solution of either system is a solution of the other. A system of equations equivalent to (10.1) may be obtained from it by applying one or more of the transformations: (a) interchanging any two of the equations, (b) multiplying any equation by any nonzero constant in F, or (c) adding to any equation a constant multiple of another equation. Solving a system of consistent equations consists in replacing the given system by an equivalent system of prescribed form.

SOLUTION USING A MATRIX. In matrix notation the system of linear equations (10.1) may be written as

          [a11  a12  ...  a1n] [x1]     [h1]
(10.2)    [a21  a22  ...  a2n] [x2]  =  [h2]
          [...................] [..]     [..]
          [am1  am2  ...  amn] [xn]     [hm]

or, more compactly, as

(10.3)    AX = H

where A = [aij] is the coefficient matrix, X = [x1, x2, ..., xn]', and H = [h1, h2, ..., hm]'. Consider now for the system (10.1) the augmented matrix

          [A H] = [a11  a12  ...  a1n  h1]
(10.4)            [a21  a22  ...  a2n  h2]
                  [......................]
                  [am1  am2  ...  amn  hm]

(Each row of (10.4) is simply an abbreviation of a corresponding equation of (10.1); to read the equation from the row, we simply supply the unknowns and the + and = signs properly.)

75
Prom the first x: . of zeroes. the first r rows of C contain one or more nonzero elements. V .0 OJ xi 0.1) is reduced to the A:„]'. 10 To solve to replace the entire the system (10. say 0. = = = 6 4 Example 2.^ + 0% + ••• + 0*71 7^ and (10. l] . The remaining rows consist the variables x: ..1) is consistent and an arbitrarily selected set of values for X. Solve the system 3xi + ^.. For the system aci + 3^2 + 5a:2 xg 2^1 + 2%3 5x^1 10 . are uniquely determined. 7^ On the other hand. 0.1) by means of (10. «2 = 0. in the inconsistent case. X: Jr+2 = k .constitute Jr from zero. When the coefficient matrix A of the system (10. least one of V+i' "r+s ' J2 is different x „.1) of rank so that the coefficient matrix of the remaining Xi + 2*2 — — 3*^3 ~ 4^4 — — 2x4... Thus = A system AX H of m linear equations in n the coefficient matrix and the augmented matrix II. the solution is the equivalent system of equations: =1.4). .nds has zeroes elsewhere. we proceed by elementary row transformations A by the row equivalent canonical matrix of Chapter 5. the corresponding equation reads Qx. we operate on rows of (10. . row equivalent canonical form C. they have different ranks. In the consistent case. FUNDAMENTAL THEOREMS. Jr X. . . X. rer of the unknowns may be chosen When these nr r unknowns is of rank r. . a solution. together with the resulting values of if at % Ji . Uq Jn . where K= ^1.1) is inconsistent. then (10..76 LINEAR EQUATIONS [CHAP. unknowns other r values. X™. In doing this. The first nonzero element in each of these rows is 1 and the column in which that 1 sta.0 1 11 5 5 '1 1 1 1' 1 1 1 1 1 1 1 . A and [A H] have the same rank. the whatever assigned any unknowns are In a consistent system (10. we have X = [l.xj (the notation is that of r rows of [C K].. I. . Thus. and one of the i^. kj. we may obtain each of Chapter 5) in terms of the remaining varia k^. bleS X: If Jr+1 If . suppose {A H] is reduced to [C K].4).^5 If A is of rank r. 0. Ex pressed in vector form. 
if r<n. xq = 1. — k^ ac.X^ 2xj^ x^ ^Xq^ — — — 2X2 Xg  1 = = 3 + ^Xq + 2xq 2 1 4 1 1 2 1 2' "1 2 1 1 1 2 1 2 The augmented matrix VA H\ 5 = 3 1 ] 1 5 5 5 5 0. Example 1. of the unknowns is consistent if and only system have the same rank. ..
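The reduction of the augmented matrix [A H] to the row equivalent canonical form [C K] can be reproduced with a computer algebra system. A sketch using sympy's rref (sympy assumed available) on a stand-in system, not the one of the example above:

```python
from sympy import Matrix

# Stand-in consistent system:
#   x1 + x2 + x3 = 6
#        x2 + x3 = 5
#             x3 = 3
aug = Matrix([[1, 1, 1, 6],
              [0, 1, 1, 5],
              [0, 0, 1, 3]])

C, pivots = aug.rref()
print(C)        # [[1,0,0,1],[0,1,0,2],[0,0,1,3]] -> x1 = 1, x2 = 2, x3 = 3
print(pivots)   # (0, 1, 2): A and [A H] have the same rank, so consistent
```

If a pivot fell in the last column, the row would read 0*x1 + ... + 0*xn = 1 and the system would be inconsistent.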
+ 2*2 + 2x3 We find 1 1 5 1 5 1 6 2 3 4 2 1 120. 24a. where a is arbitrary. X2 + Sxg + X2 *4 = Example Xl + 3. 10] LINEAR EQUATIONS 77 1234 [A 6 1234 ">/ 6 'Xy 1 1 11 8 10 4 H] 13 12 2 525 4 10_ 1 1 4 4 22 32 lO" 4 22 _0 10 "l 11 1 V 02 = [C K] ^0 10 24o. that is. xj = xg = a. Then. (i = 1.4x4 6x2 . Denote by 4. The first of these is the familiar solution by determinants. of n (a) Solution by Cramer's Rule. then ajl = 10 + 11a and xq = 24o. NONHOMOGENEOUS EQUATIONS. o]' a consistent system of equations over F has a unique solution (Example 2) it 1) that solution is over F. method above. . moreover. A system of re nonhomogeneous equations in n unknowns has a unique solution provided \a\ ^ 0. In Problem 3 we prove ni. provided the rank of its coefficient matrix In addition to the A is n. Solve the system 3x:i 2%^ + . the system of Example 2 has infinitely many solutions over F (the rational field) if o is restricted to rational numbers. the given system is consistent. tions over infinitely See Problems 12.2x3 + X4. a. = X If or = [lO + llo. x^. Let xs = a. it has infinitely many complex solutions if a is any whatever complex number.3x3 .3x4 using Cramer's Rule. two additional procedures for solving a consistent system nonhomogeneous equations in as many unknowns AX = H are given below. if \A\ y^ 0. X2 = X = ' "' See Problem 2xi 4. A system AX = H is called a system of nonhomogeneous zero vector. However. The solution of the system is given by x^ = 10 + lla. Since A and [A H] are each of rank r = 3. 3 4 1 2 2 = 240 23 3 . For example. the system AX = H has the unique solution <1"'5) % = 777 . the general solution contains nr = 43 = 1 arbitrary constant. Prom the last row of [C K]. The systems of Examples 1 and 2 are nonhomogeneous systems.2 n) the matrix obtained from A by replacing Its Jth column with the column of constants (the h's).CHAP. x^= 0. A a^ linear equation Xi + 0^X2 + + «n*n = h is called nonhomogeneous if equations provided H is not a A 7^ 0. 
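Cramer's rule (10.5), xj = |Aj| / |A| where Aj is obtained from A by replacing its jth column with the column of constants, can be sketched as follows (numpy assumed available; the 2x2 system is a hypothetical stand-in for the worked example above):

```python
import numpy as np

# Hypothetical system:  2x + 3y = 8,  x - y = -1  (solution x = 1, y = 2)
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
H = np.array([8.0, -1.0])

d = np.linalg.det(A)                  # -5, nonzero -> unique solution
x = []
for j in range(2):
    Aj = A.copy()
    Aj[:, j] = H                      # replace column j by the constants
    x.append(np.linalg.det(Aj) / d)
print(np.round(x, 6))                 # [1. 2.]
```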
III. If a consistent system of equations over F has a unique solution, that solution is over F. If the system has infinitely many solutions, it has infinitely many solutions over F when the arbitrary values to be assigned are over F; moreover, it has infinitely many solutions over any field of which F is a subfield. Thus the system of Example 3 has infinitely many solutions over F (the rational field) if a is restricted to rational numbers; it has infinitely many real solutions if a is restricted to real numbers; and it has infinitely many complex solutions if a is any complex number whatever.

See Problems 1-2.
78 LINEAR EQUATIONS [CHAP.. VI. 5 7 L8_ 5_ The solution of the system is x^ = 35/18. If the rank of A is r<n. trivial assures the existence of nontrivial solutions.8) in n AX is called a = unknowns of the coefficient matrix A always consistent. that is..6) A^AX A^H or X ^ A^H Xg is 2 3 2 1 1 2xi + 3X2 + Example 4. then n of the equations of (10. If the rank of (10. it is Note that X = 0. 10 2 1 5 5 1 2 15 6 2 8 1 1 3 4 3 2 821 2 = 24. 1 114 1 3 23 2 1 1 1 2 5 5 23 and 3 2 6 2 3 1 2 8 2 2_ 96 Then xi = 240 120 A^ 24 120 1 = 0. thus. If  ^ #  0. x^ . x^ = 29/18.7) is called linear equation '^1*1 + "2*2 + + ''n'^n = homogeneous. Theorem IV. for (10. the system has exactly nr linearly independent solutions such that every solution is a linear combination of these nr and every such linear See Problem 6.8) can be solved by Cramer's rule for the = x^= and the system has only the trivial solution. unique solution X2= . A~^ exists and the solution of the system = AX = H is given by (10. The coefficient matrix of the system { x^ + 2% a. necessary and sufficient condition that a system of n homogeneous equations in /4 n unknowns has a solution other than the trivial solution is =  0. A system of linear equations (10. A 18 7 5 1 ["9" Then "35' 1 5 1 7 5 1 7 •AX A^H J_ 18 7 5 1 6 29 18 . and Ml 5 120 96 ^4 120 (b) Solution using A ^. the system is = is always a solution. HOMOGENEOUS EQUATIONS. system of homogeneous equations. %i = xs = •• = % called the trivial solution. A necessary and sufficient condition solution is that the rank of A be r < n.5/18. A (10. .8) the rank and the augmented matrix [A 0] are the same.8) to Thus. For the system (10. A have a solution other than the V.2 + 3xg + 1 A 1 3 2 3^1 + 2a:3 3 5 1 7 From Problem 2(6). See Problem 5. Chapter 7. If the rank ot xj^ A = II is n. combination is a solution.8) is r < n.
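Theorems IV-VI can be checked mechanically: a square homogeneous system AX = 0 has a solution other than the trivial one exactly when |A| = 0, and when the rank is r < n there are exactly n - r linearly independent solutions. A sketch with sympy (assumed available) and a hypothetical singular matrix:

```python
from sympy import Matrix

# Hypothetical matrix with row 3 = row 1 + row 2, so rank 2 < n = 3.
A = Matrix([[1, 2, 3],
            [0, 1, 1],
            [1, 3, 4]])

print(A.det())         # 0 -> nontrivial solutions exist (Theorem VI)
r = A.rank()           # 2
basis = A.nullspace()  # n - r = 1 linearly independent solution
print(len(basis))      # 1
print(basis[0].T)      # [-1, -1, 1]
print(A * basis[0])    # the zero vector
```

Every solution of AX = 0 is a scalar multiple of the single basis vector here, in agreement with the theorem.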
Z ranges over the complete solution of AX = H. x^. AX^ = H. 10] LINEAR EQUATIONS 79 LET Iiand X^he Thus. Then AX^ = H. and may be given as xx= ^£4 1. = [o. two distinct solutions of AX = H. + 'P Conversely.a. Then the complete solution of the given system = is X [7a.3 X4 . 2+3a]' Note. it is first necessary to show that the system is consistent. Thus. where a and b are arbitrary. In the Xi — 2x2 + 3x3 = set x^ = 0. . thus the given system and has no solution.2xs = 0.3o]' + [O. xg = a and x^ = b. AY = 0. a.2]' = [7a. x^^a.4 Xg . x^ = b or as AT = [l. the complete x^ = 3b. If the system of nonhomogeneous equations AX = solution of the system is given by the complete solution of lution of E AX is consistent.3oJ . x^ + X2 + 2Xg + X4 = = = 5 2 7 . However. The above procedure may be extended to larger systems. 1. Solve 2%i + 3x2  4xi + Snliitinn* 5% '11 2 «s . 1. system i Example 5.9 xg 3 tion The augmented matrix 'l [A H] = 12 21 2 2 1 3 l' 2 v "1 1 3 r '\J 1 1 1 2 6 3 4 3 9 3_ 1 3 1 2 2 2 1 2 6 1 8 3 1 0_ 01 2 18 3 12 6 1 1 18 1 1 3 2000 13 Then x^  solution 1.X^) = Y = X^~ X^ is a nontrivial solution of AX = 0. 2]'. 3 6. 6]'. X = Xl+ Z is also a solution of AX = H. then Zy. where a is arbitrary.CHAP. then xg = 2 and x^ = 5 1. SOLVED PROBLEMS xi + e X2 ~ 2xs + ^2 + X4 + 3 K5 = = = 1 2»i 3 ail 2% + 2x4 + 6*5 2 + 2 X2 . I Xj + ^2 + 2 Xg A particular solution is A:^. Take x^^ 2a. a complete = plus any particular so AX = H.2x4 + 3xg 2 5"' "V 1 '112 1 5 1 1 [A H] = 3122 5 7 5 13 4 3 7 O^tj + 154 154 8 13 5 4 8 5 is inconsistent The last row reads 0xi + O^g + 0«4 = 5. 2a. if Z is any nontrivial solution of AX = and if X^ is any solution of AX = H. and A (Xx. This is a long step in solving the system by the augmented matrix method given earlier. As Z ranges over the complete solution of AX = 0. The complete solution of < *^ \^Xi ~ + *^ „ „ Xq + 2*3 = ^ IS [7a. 1 +a. + 33=5 = 0. VII.a.
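Theorem VII — a complete solution of a consistent system AX = H is any particular solution plus the complete solution of AX = 0 — in a minimal numeric sketch (numpy assumed available; the one-equation system is hypothetical):

```python
import numpy as np

# Hypothetical underdetermined system: one equation, three unknowns.
A = np.array([[1.0, 1.0, 1.0]])
H = np.array([3.0])

Xp = np.array([3.0, 0.0, 0.0])   # a particular solution: A @ Xp = H
Z1 = np.array([1.0, -1.0, 0.0])  # solutions of the homogeneous system:
Z2 = np.array([1.0, 0.0, -1.0])  # A @ Z1 = A @ Z2 = 0

for a, b in [(0, 0), (2, -1), (-3, 5)]:
    X = Xp + a * Z1 + b * Z2
    print(np.allclose(A @ X, H))  # True each time (Theorem VII)
```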
K = L. Derive Cramer's Rule. [/ it is equivalent to = /. we finally multiply the equations of (i) respectively by a^n. Then X K is a solution of the Suppose next that Since A is X = L is a second solution nonsingular.71 and sum to obtain Oil "21 "l. 'ni 2Xi + 5. n n ai^di^x^ + n ai20iii% + •• 10. n 1 We have S which by Theorems 2 i=i . = 120 120 Solution: 2 15 1 120' 80 The inverse of A 1 134 62 1 23 2 3 2 69 73 17 120 15 35 5 Then 40 . and AK = AL. unique. 4.x^ — — "m "2n so that *<. ain^ilXn =1 X and XI. . When A is reduced by row transformations only to system. Multiply the first equa tion of (J) by ttn.24 8 8 40. •••• and add. and the solution is of the system. «2n (^.2 aig + + + 3x4 2a. /.3 2x1 2x:2 1 using the inverse of the coefficient matrix. Let the system of nonhomogeneous equations be °il*l + a^jAiQ + (1) ] — ••• + ain*n + «2n*n  h^ ^z uqiXi + 022*2 + "ni*i + an2*2 + = — + + "nn^n  ^n Denote by A the coefficient matrix [a. then AK = H and AL • H.ni ^2 so that x^. ^ii "11 ^1 Oqi /12 ''is '^23 \A\. A is nonsingular. the second equation by Ksi the last equation by ttni.be the cofactor of « • in A . X2 + 5 *3 + *^4 ~ = = 5 Solve the system X2 ~ 3x3 . Prove: A system AX 7^ = H of n nonhomogeneous equations in n unknowns has a unique solution provided \A\ If 0. suppose [A H] is reduced to K]. multiply the equations of (i) respectively by a^i.80 LINEAR EQUATIONS [CHAP.S i=l 1=1 3.ni "1 "2. 10 3. Continuing in this manner. •] and let a^. and Problem hx ai2 ^^2 Chapter reduces to "m '^2n °22 ^n ""no. — — so that xi = T— ^11 j ^1 "n «n2 and sum to obtain Next.4*4 Xi + «:4 3 Xi + 6 Xj .
is !« . + ap .a^ a.CHAP. 10] LINEAR EQUATIONS 81 "120 1 120 120" 80 5" " 2 120 69 73 17 15 35 5 1 8 . f^ f^ are m<n linearly independent linear forms over F in n variables. in F . ^^ = linearly independent solutions. X4 = 1 and x^ = 1. Since the rank of One such pair. for the cofactors of another row (column) of A (another solution X^ of the system). The complete solution of the system is we may obtain exactly nr = 42 taking a = 1.24 (See Example 3.2 sj^fi p .4 Xj^ = = Solve «i + 3*2 + 2xq + 2Xi Solution: + Xg  = 1 1 1 1 o" 'X/ 1 1 1 1  1 11110 2 \A U\ 1 3 2 1 4 2 3 13 2 1 1 ''\J 2 1 3 i 1 11110 1 i 3 2 o" i 1 2 A is 2.:S)f. the cofactors of the elements of any two rows (columns) are proportional. ^3 = b = 3.. 0). 6 = x^ = 0.6. Hence. such «lgl + "2g2 + ••• +apgp = ai ^^Hxfi P + «2 . si^fi + . Prove: If /"i..2 ( 2 J=l 1=1 a.2 p) mxp matrix [5^] is of rank r<p. ^2 + «s + *4 4a. x^ = 3. *3 = 3. J ^J ^ . 1. 1/5 40 . :x:4 = 1 What can be said of the pair of solutions obtained by taking a = 1 and a = b = 37 7. 8. x^ = = 2 1 ^a + 16. The g's that are linearly dependent if and only if there exist scalars a^. xs=a. then the linear forms ^j are linearly dependent if and only if the = ^^'ijfi 0=1. 2. obtained by first 6 = i and then a = X2 = 2. not all zero. Prove: In a square matrix A of order n and rank n1. x^ = h. Since 1^1 =0.) 8 8 40. we have X^ = kX^. 4/5 Xl + 6. the cofactors of the elements of any row (column) of A are a solution X^ of the AX system = (A'X = Now the system has but one linearly independent solution since A (A^) is of rank nl.
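The cofactor fact proved above — in a square matrix A of order n and rank n-1, the cofactors of the elements of any row form a nonzero solution of AX = 0, since the sum of aij * Akj over j equals |A| for i = k and 0 for i != k — can be checked through the adjugate, whose column k collects the cofactors of row k. A sympy sketch (sympy assumed available) with a hypothetical singular matrix:

```python
from sympy import Matrix, zeros

# Hypothetical matrix of rank 2 (row 3 = row 1 + row 2), so |A| = 0.
A = Matrix([[1, 2, 3],
            [0, 1, 1],
            [1, 3, 4]])

adj = A.adjugate()           # adj[j, k] is the cofactor of the entry a[k, j]
print(A * adj)               # |A| * I = the zero matrix here, since |A| = 0

X = adj[:, 0]                # cofactors of the elements of row 1 of A
print(X.T)                   # [1, 1, -1]: a nonzero vector
print(A * X == zeros(3, 1))  # True: a nontrivial solution of AX = 0
```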
. apV if and only if [^^j] is of rank r<p. the system of m homogeneous equations trivial solution S s^j xj  has a non = [a^. Find all solutions of: x^ + x^ + «3 = = = = 4 3 5 {a) Xj_ — 2^:5 + *3 ~ 3*'4 (c) 2%i + 5%2 x^ ^ 1 x^ ~ 2%3 — 7^3 >=3 Xj^ + %2 + + %4 ( X^ + Xq + 3C2 Xg 2 Xg = = 4 (d) ^1 + %2 + ^S  % :>:4 = = = 4 4 2 (b) \2x^ + 5 ~ 3 X\ Xi ^ Xq — x^ + Xr.4*2 + 5*3 0. Similarly. by hypothesis. this requires j?/i^v X ^i^ii + "p'iP in p unknovras (i = 1. by Theorem IV. Find all nontrivial solutions of: i xi — 2x^ *1 + 2*2 "*' 3*3 + 3x3 (c) (a) < 2*1 + 3*i X2 + 3*3 *3 12*1 + 5*2 + 6*3 + 2*2 + 2xi (. x:2 ajj = o. = a 3.Bg B„ be the column vectors say AB^ = 0. each being a column of S. x^ = 17/3. Then. Show that there always exists a matrix R = [h^j] ?^ of order n such that AB = 0. 9. or of B. 6 + 3c. «ii&it + "iQ^'st + ••• + "m ^nt = «ni*it + °n2^2t+ ••• + "nn^nt i>nt Since the coefficient matrix ^ is singular.. 2 m) Now. (a) (b) «! = xi = = (d) Xi 1 + 2a — 7a/3 + — % = 1.a^.h) — 4*1 — X2 + ^x^ + 2*3 + 2*1 + 3*1 + 2«2 + *i *3 . — Xq + + x^ /4ns. — *3 ^ *47*2 — 4*3 — 5*4 3 *2 2*1 11*5 1*3 5*4 /4ns.82 LINEAR EQUATIONS [CHAP. Con sider any one of these.. AB^ = 0. :». have solutions. = AB^ = 0. *i *o = — 4 o *4 .^'2t has solutions other than the trivial solution. Let Bi. X3 = b. X2 = *g = a (6) *i = (d) — *2 5 = —*3 J. Suppose A = [a ] of order n is singular.3 = 4a/3 = 2 5/3. SUPPLEMENTARY PROBLEMS 10. AB^ = 0. 10 Since the /'s are linearly independent.. AB^ = ^65 = . (o) *i = 3o. Xg = — «4 11. the system in the unknowns h^t.
4a]' 17. j Evaluate the 2square determinants j first I "17 ai5 a^q 025 \'^3q agg and "4(7 "AS 22. . AX = that any nonzero vector of cofactors be a system of n homogeneous equations in n unknowns and suppose A of rank r = n1. x^ = 1 and X3 = x^ = 0. of rows of A two rows are linearly independent so that a^j = p^. (7 = Suppose the 1. Show that a square matrix Let is singular if and only if its rows (columns) are linearly dependent. 0. Follow AX s.2%2 + ixg 3% = = (b) / 2xj_ + 2x^  Xg (c) \2x^ + 5%2 + 6*3 Xg Hint.4 arbitrary except = H. C2 equal to 0.^1 + 3*2 + 4 % = = + 2%3 = = 1 2xi — %2 + 6 Xg To the equations of (o) adjoin "l Oxi + Ox^ + 0%3 and find the cofactors of the elements of the 2 5 a"" third row of 2 6 Ans. o]' as a solualso X3 = x^ = 0.h = °'''''"'' '' *'' ""''"'''' '""'"' ^ '" °' '^"' ""'• (t) *^'" "'' '°''°^'"^ ^^1^"°"^ ^"""S = "^ °° (a) 1. 0. a^^a^^ = a^J^(x^. AX =H Let the coefficient and the augmented matrix of the system of 3 nonhomogeneous equations in 5 unknowns be of rank 2 and assume the canonical form of the augmented matrix to be 1 1 613 623 614 624 fcj^g ci 625 C2 _0 with not both of tion of ci. o o O 1 1 13. 17a]'. and X^. a^^aJJ a^jUji 2 n). Select the columns of B from the solutions of AX = 0. (6) [2a. Write a proof of the theorem of Problem 21. Consider the linear combination Y = s.i. 15.3. a]'. matrix of rank and S is a p x„ matrix of rank r^ such that AB = 0. a^. x^ =  ^c — O . Xg == 1 to obshow that these 52 + 1=4 solutions are linearly independent. Problem 17 with c^ = c^ r^ = 0. Hint. x^ = d.2 5). Use Problem / ^1 15 to solve: . X3 = x^ = x^ = 18. a^j OLinV of a row of ^ is a solution of AX = 0. Reconcile the solution of 10(d) with another 2 4 6_ xi = c. Show 16. . Then choose X3 = tain other solutions X^. . X2 = 0. AX = H. Prove: Hint.f^ ij +p42a2. c^. 
the rsquare determinants formed from the columns of a submatrix consisting of any r rows of A are proportional to the rsquare determinants formed from any other submatrix consisting r Using the 4x5 matrix Hint. Prove: Theorem VI. X3 = 9a or [3a. ^'" f?c°tors''h°cJ?'" ' where (h.. Given A = 2 3 2 3 find a matrix B of rank 2 such that AB = 0. then r. (c) [lla.Xg. 20. CHAP. . If ^ is an m xp Use Theorem VI. with s„ for (O. 2a. 14. and obtain X^ = [c^. (a) xi = 27a.p^a^.ha2 2j. + r^ < ^ n f 21.. x^ = —c +—d. [a^i. verify: In an ™x„ matrix A of rank . y'ix^ — 4%2 J 2..1 .a^j +Ps^a. 0. 7a.X^ + s^X^ + s^X^ + s^X^ if of the solutions that Y is a solution of i^ AX of = H Problem 17 Show if and only (i) s. x^ = xg = 0.2 +3 +^4 = 1 Thus.1 = [a^^] of rank 2. First choose 1. +.. IS a complete solution of Hint. 19.j. 10] LINEAR EQUATIONS 83 12.
A —c 1 Solve also for £2 and h and I^. 2 p. Generalize the results of Problems 17 and 18 to m nonhomogeneous and the augmented matrix of the coefficient If the r to prove: cient and augmented matrix of the same rank Xnrn are rank r and if X^^. k^. 71 m<n 1 mvector whose ith component is and let S^ be a solution of AX . 27. H ^ system AX = K has a unique solution for any nvector K ^ 0. . If K = \k^. Sg 31. where E^ is the . 1222 2 3 4 01000 00100 00010 00001 „ „ „ . 10 24. 32113 IS row equivalent to .. k^' show that ^n^% is a solution of AX = K. . Identify the matrix [S^. Let write down the solution of A' = Y (i A be nsquare and nonsingular. If AX =H is consistent and of rank r. Show that B 11114 1234 21126 2 . > 1 and d I^. Let the system of n linear equations in n unknowns 4X = H. the system is inconsistent.X^ have unknowns = in n equations nonhomogeneous AX H of m system linearly independent solutions of the system.84 LINEAR EQUATIONS [CHAP. Solve the set of linear forms AX X = 2 1 3 3 X2 _»s_ F = 72 for the x^ as linear forms in the y's. Show that when there are + 1 . then X nr+i where = si^i + 51^2 + ••• + ^nr+i^nr+i S i=i 1. Show that the 1 1 1" 'xi "n" = 29. {i 1. 4 and E^ . the imput quantities £i and a c h h are given in terms of the output quantities £2 and Iq by E^ = oEq + 6/2 "£11 _/lJ _ 'eI = A 'e2 h Show that cE^ + dlQ d 'b h. have a unique solution. and whose other components are 0. 25.. k^Si + A:2S2 + + m).E^. and let S^ be the solution of AX nvector whose ith component is 1 and whose other components are 0. Let 4 be an m X matrix with 1. is a complete solution. Now 30. 10 „ From B =[A H] infer that the 3 1 ij [_0 system of 6 linear equations in 4 m>n linear equations in n ra unknowns has 5 linearly independent equations. for what set of r variables can one solve? equations in n unknowns with coeffi26. 'l 0. Show that a system of unknowns can have at most re +1 linearly independent equations.2. 2 where £^is the S^].. = E^. 28. 
In a fourpole electrical network. = n).
x^ now be column vectors.1).2. is The set of (6) The set of all vectors [x^. X^.. + KX^ + •• + KX^ (kiinF) For example. Any set of revectors over F which if is plication is called a vector space.x^. both of the sets of vectors (a) and (b) of Example 1 are Clearly.1) is also called a linear vector space. = \i . The totality V^iF) of all revectors over F is called the redimensional vector space over F. A set V of the vectors of V^(F) is called a subspace of V^(F) provided V is closed under addition and scalar multiplication. 1. Let F be the field R of real numbers so that the 3vectors X^ ^3= [1.. Z^ belong to V^(F). The set (a) of Example 1 is a subspace (a line) of ordinary space.l) is a subspace of A vector space V lie in is vided (a) the Xi vectors X^. X^. the zero revector is a subspace of F„(F). X^ (11. (a) Example all vectors [x^.l) contains the zero revector while the zero revector alone is a vector space. x^Y of ordinary space havinr equal components (x^ = x^^ x^) closed under both addition and scalar multiplication.) vector spaces. Thus. A them if set of such 7zvectors over is a vector of the set.2]' and X^= [3.1]' lie in ordinary space S = s can be expressed as = V^R). For. 2 i]' \a b  cY j ot 85 .Chapter Vector Spaces 11 UNLESS STATED OTHERWISE. . (The space (11. X^ closed under both addition and scalar multiX^ are nvectors over F. SUBSPACES. the set of all <"!) is a vector K^i space over F. all vectors will a^]'.lY X^ Any vector . X^ V and said to be spanned or generated by the revectors (b) every vector of F is a linear combination X^. VECTOR SPACES. In general If X^. the sum of any two of the vectors and k times any vector (k real) are again vectors having equal components.3. .so also IS V^(F) Itself.i. When components are disThe transpose mark (') indicates that the elements are to be written in a column. we shall write [xj^. x^. every vector space (ll. [i. 
X^ proNote that the X^_ are not restricted to be linearly independent.x^Y of ordinary space is closed under addition and scalar multiplication. Examples. the space of all linear combinations (ll.. linear combinations Thus. played. F is said to be closed under addition if the sum of any two of Similarly. the set is said to be closed under scalar multiplication every scalar multiple of a vector of the set is a vector of the set.
' then from the set r the remaining mT vectors lie. + + + ys 3ys 2ys + j^Xq + yg^a yi yi + !yi xi The resulting system of equations + y2 + ys = .. has a u Xi+ nique solution.X^ X^. The plane n of Example 2 AND DIMENSION. They span a subspace (the plane real numbers. tt) and ^2 are linearly independent.h. dependent vectors required to span V.X^ are a basis of S. The vectors X^. c). basis of S . hX^. BASIS maximum number of linthe dimension of a vector space V is meant the number of linearly inminimum same thing. of (Show this. In elementary it as a considering been have we Here (a. 6..^ + 3X3 = yi + 2y2 unlike the system ( i).X^. X^ set the basis is whose tt Example 2. Theorem IV may be re If X X^ . shall agree to write p. are a set of revectors over In particular. These F and if r is See Problems 23. the vectors Xj^. Thus. The vectors X^. the early independent vectors in F or.X^. X^ of Example 2 span S since any vector [a.^.(F) for %\F). what is the is considered as space ordinary geometry. X^ of their components. where /i and k are The vectors Xy of The vector X^ spans A is a real number. where a subspace (the line L) of S which contains every vector See Problem 1. b. Xr. 11 yi + Ti ^1 y2 + ys + 3y4. basis of the space.) They span the subspace Theorems IV stated as: I of Chapter 9 apply here. By dimension 1. T vectors span a V^iF) in which the rank of the raa:ro matrix Unearly independent vectors may be selected. 86 VECTOR SPACES [CHAP. When r = n. All bases of vectors the combination of tor of the space is then a unique linear independent vectors of the linearly r V^(F) have exactly the same number of vectors but any A set of r space will serve as a basis. c ]' of S can be expressed yi Xy 71^1 + + ys 2y2 3X2 a h c . r ^ A we vector space of dimension consisting of 7ivectors will be denoted by F„(F). a 3space (space of dimension three) of points is of line L the and is of dimension 2 3space of vectors ia.c \. 
+ 72^2 + ys'^s + y4^4 yi + 2X2 + Sys + 3y2 + 2y3 yi + Ja since the resulting system of equations yi + (i) y2 + ys + 3x4 yi + yi 2y2 + 3y2 3ys + 2y4 2y3 y4 + + is consistent. 274. Xg. X4 spanS. S which contains every vector hX^ + kX. of course. Example 3. 3X2 + 2xs = are not a The vectors X^.. Xj. Each veclinearly independent vectors of V^(F) is called a of this basis. .
!™ are m<7i linearly independent «vectors over F. if and only if each See Problem 5. is a vector of ^V.CHAP. IV. [s^. Zm are m<n X^. Thus.. Clearly. 11] VECTOR SPACES 87 Of considerable importance are II. ^m4. then the vectors n Yi = 7 1 = 1 a^jXj " (i = 1. If Zi. Restating Theorem VI.Zg. then AX = has N^ linearly independent solutions X^.¥2. Consider the subspace 7f.] is of rank r<p. sum space and V^(F) as inter Example 4. ^n are any nm vectors of V^iF) which together with X. .X2 = X^: thus. S IS of dimension 3..2.. then the set X^ Z„ is a basis of V^iF). the sum space of tt^ and tt^ is S. NUIXITY OF A MATRIX. . so also is aX. IDENTICAL SUBSPACES.[(F) subspaces of F„(F).. likewise if X and y are common to the two spaces so also is aX^bY. spanned by X^ and X^ of Example 2 and the subspace tt^ spanned by Xs and X^. The dimension of the intersection space of two vector spaces cannot exceed the smaller of the dimensions of the two spaces.2 re) are linearly independent if and only if [a^] is nonsingular. ^2. Chapter VI. we have A has nullity N^ . X^ lies in both tt^ and 7t^. we call it the sum space V^^iF). they are identical if and and conversely. If 10. denoted by ^ A'^ IS called the nullity of A. The dimension s of the sum space of two vector spaces does not exceed the sum of their dimensions. and L is of dimension 1. If Z^. the intersection of two spaces is a vector space. sum is meant the totality of vectors X+Y Let V„\F) and f3^) be two vector spaces.. : If JYi. V. we call it the intersection space V„\F).V^(F) and X(F) are two each vector of X(F) a subspace of the other. . By two spaces. when p<m.. Z„ are linearly independent revectors over F. . III. only is if If . SUM AND INTERSECTION OF TWO SPACES. . The subspace (line L) spanned by X^ IS then the intersection space of 77^ and 77^ Note that 77^ and 77^ are each of dimension 2. Now 4X1 . the solution vectors X constitute a vector space called the null space of A. X^ . 
linearly independent nvectors of V^iF) and if J^+j^. By their where X is in V^(F) and Y is in V^iF). See Problems 68. then h + k = s + t. that is.^. If two vector spaces F„V) and V^(F) have V^\F) as section space. then the p vectors m ^i = i^'ij^i if (/=l2 P) are linearly dependent if p>m or. the intersection of the two vector spaces is meant the totality of vectors common to the Now if Z is a vector common to the two spaces... independent set. this is a vector space. Since rr^ and tt^ are not identical (prove this) and since the four vectors span S. •••. The dimension of this space. X^ X^ form a linearly See Problem 4. For a system of homogeneous equations AX = 0. This agrees with Theorem V...
then r 2 1 "1 1 7 X [Zi. 9. 1. rank and nullity of their product If A and B are of order ra and respective ranks q and rg .3) Nab > Na . we have X relative to the Zbasis. 2. Nab > Nb Na + Nb Nab < See Problem 10. is called the /th elementary vector. important basis for f^^Cf).0. Examples. BASES AND COORDINATES. Every vector X = [%.2. .4) X is the matrix = [Zi. The El = [1. If Zi = [2. 1. 1 ]'. Z3 = [l. 03 given relative to this basis.0 En = [0. See Problem Vn.^Z.0 1]' are called elementaiy or unit vectors over F.88 VECTOR SPACES [CHAP.2]' 1 1 2 See Problem 11. ZXz Zg Z„.Zg Zn]Xz = Zi. n Then there exist unique scalars a^ X i 1 =i aiZi a. + = n SYLVESTER'S LAWS OF NULLITY.0. x^ x^ oi X X X relative to the Ebasis. 0]'. (F) and where Z whose columns are the basis vectors Z^ = [l. relative to the £basis. The elementary vector Ej." (11.1. 2 xiEi XxE^ + «2^2 + are ••• + '^nEr. 11 Xti A such that every solution of AX = is a linear combination of them and every such linear combination is a solution. Zg. whose /th component The elementary vectors E^.3]' is a vector of Vg(F) relative to that basis. Let Zi. A basis for the null space of A is any set of N^ linearly independent solutions of AX = 0. E^ = [0.2) mxre matrix A of rank rj rA and nullity Nji N/^. For an (11. unless otherwise specified. is now called the coordinates of we shall assume that a vector %. Writing (11.0. Zg in F such that Zn be another basis of 1^(F). E^ £„ constitute an is 1.^ + 02 Zg + ••• + dj^Zj^ These scalars JY^ 01.05 = [oi.% n 1=1 uniquely as the ^nl' of 1^(F) can be expressed sum X Of the elementary vectors. og o„ are called the coordinates of a^Y. Zg]A:^ 1 3 2 1 [7.0 ravectors 0]'. 3]'. The components %. 1 ]' is a basis of Fg Xz = [1. the AB satisfy the inequalities ''AB > '^ + 'B . Hereafter.
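Coordinates relative to a basis amount to a linear solve: if the columns of Z are a basis Z1, Z2, Z3 of V3(R), the coordinates Xz of a vector X (given relative to the E-basis) satisfy Z * Xz = X. A numpy sketch with a hypothetical basis, not the one of the example above (numpy assumed available):

```python
import numpy as np

# Hypothetical basis of V3(R): the columns of Z.
Z = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
X = np.array([7.0, 1.0, 2.0])   # components relative to the E-basis

Xz = np.linalg.solve(Z, X)      # coordinates relative to the Z-basis
print(np.round(Xz, 6))          # [ 4.  3. -2.]
print(np.allclose(Z @ Xz, X))   # True: X = 4*Z1 + 3*Z2 - 2*Z3
```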
Suppose r Xig = [61. where x^ + x^ + x^ + x^ = Q vectors of the set. we may take and Xg as a basis of the V2{F).1 ]'. X2. VIII. = = \_\.O.^2 K\' so that X (11.3.0.2.X^. and X^. X^.0.2. is a subspace V of V^(F) since the sum of any two vectors of the set and any scalar multiple of a vector of the set have components whose sum is zero. hence. A'g = [2.\ • WX^ • .Xg = since the matrices [X^. are 1 3 1 2 2. solely by the two bases and given by (11. that is. x^. 11] VECTOR SPACES 89 Let (11. For a basis.3]'.1.^ Since 2 1 = [1.1. Xy From (11. then there exists a nonsingular matrix P determined . = ^'^. Xy] are of rank [1. X^ = [3. Find a basis.1. the vectors ^"1 = [1. = [1.5) W^.5). .O]'. X^ and A:g. For a basis of this space we may take X^.0. Since 14 13 12 12 2 4 is of rank 2. The vectors X^.Xg] and [X^.6) such that Xf/ = PX^ See Problem 12. X^ Now any two of these vectors are linearly independent.6. x^Y.2.6) Z Z JV^ = IT Zj^ and X^ IF"^Z.4.Xg. and X^ = [1.4.0]' or X^. The set of all vectors A' = [%. where P = Thus. X^. Xg of Problem 2 lie in V^(F). and X.2. A's = [4. If a vector of f^(F) has coordinates Z^ and X^ respectively relative to two bases of P^(F). or X^ 3.4]'. the vectors X.3.X2. \ \ be yet another basis of f^(F).CHAP. Xg. SOLVED PROBLEMS 1.X^ = [l. we may take any two of the vectors except the pair Xg.0.O. 4 is of rank 2.4) and (11.0]'. X^.0.2. 4. 1.8]' 4.3. X2.1]'. 1 1 2 and ^"4 = [4.2]' are linearly dependent and span a V^(F). 1 ]'. 1 ]' 4 3 1 are linearly dependent and span a vector space ^ (F).ZX^ PX. and Xg = [o.
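The change-of-basis relation (11.6): if a vector has coordinates Xz and Xw relative to bases Z and W (taken as columns), then Xw = P * Xz where P = W^{-1} Z is nonsingular and determined solely by the two bases. A numpy sketch with hypothetical bases of V2(R) (numpy assumed available):

```python
import numpy as np

# Hypothetical bases of V2(R): the columns of Z and of W.
Z = np.array([[1.0, 1.0],
              [0.0, 1.0]])
W = np.array([[1.0, 0.0],
              [1.0, 1.0]])

Xz = np.array([2.0, 3.0])          # coordinates relative to the Z-basis
X  = Z @ Xz                        # the vector itself: [5, 3]

P  = np.linalg.inv(W) @ Z          # transition matrix of (11.6)
Xw = P @ Xz                        # coordinates relative to the W-basis
print(np.round(Xw, 6))             # [ 5. -2.]
print(np.allclose(W @ Xw, X))      # True: same vector, new coordinates
```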
say 2lg^(F) (c Thus. = 1 %+ 2*2  s. the X^ span a space of dimension two.90 VECTOR SPACES [CHAP.^ of iPg^CF) and any vector cXj^ + dX^ of iff(f) + d)Y2 . = 0. Prove: If Suppose s t = h. any vector iX.2.2*1 + 4%  % = and % 2 xi % «4. we note that X^ and X^ are linearly independent while iF('''). Fi = kXz  aY^ + fcK. Xq = 2Zj^ + X^.A:2 ^t.X^ left belongs to P^(F).5]' be vectors of Show that the space spanned by X^.(4c + 2<i)Ki of QVi(F).4. Z^+j Now suppose (114) there exist scalars a's and 6's such that t h k X 1=1 t "iXi + .S t=t+i a^Yi + S i=t+i biZi ft i=i h i=t+i i=t+i k to The vector on the V^(F). First.2. this requires = .^. Then =h + kt .1]' and X^ = [3. then Vr!XF) is a subspace of (^^F) and their sum space that the is 1^' itself. X^.1. i'2=[l.1]'.and from the right member.4). hence. Next. Since 1 4 0. Z2 = [l.^li ^i = ^i^ . Thus. X^ = Y^.3]' and X^ = 1 1 [ 1. Xq and the space spanned by Y. t<k and let X^^. of is a vector is a vector 6. t a^+i = at+2 = k 2. Vs(F).^.X3.2]'. = "f. of nk 7. then 3Ci 1 1 1 % is of 1 «2 % %4. say Also. »2. (50 + 26)^2 . span l^(F) and vectors Then by Theorem H there exist vectors Z^ so that Zj^^.4^1. : i=t+i But the X's and Z's are linearly independent so that a^ = 05 = •• = °t = the ^f's. Thus.Q. But X^. then 2% + 1 5X2 + IXg (b) If X = [xi. yt+2 X^ span Yh V^(F). s=k.6.(2a + b)X. thus it belongs Xt span Vn(Fy. ^t^i.%]' lies in the Vg(F) spanned by X^ = [ 1.* = .3]'.ys.271.x^. Y^ = 2^2.l]'.Zt+1 2fe span I^'^F). two vector spaces Vn(F) and I^(F) have Vn(F) as sum space and V^iF) as intersection space. Y^ = [0. Suppose next that t<h.2. 11 5. 3 These problems linearly independent verify: Every ^(^ ) may be defined as the totality of solutions over F of a system homogeneous linear equations over F in n unknowns. and Z's are a linearly independent set and span ^(F). The reader will show same is true if t = k.0. Let Zi = [l.2. t =h and + t  h+k. 
   The vector on the left belongs to Vn^h(F) while, from the right member, it belongs also to Vn^k(F); thus it belongs to the intersection space and is expressible in terms of X1, ..., Xt alone. But the X's and Z's are linearly independent, so that b_(t+1) = b_(t+2) = ... = bk = 0; similarly a_(t+1) = a_(t+2) = ... = ah = 0, and then a1 = a2 = ... = at = 0 as well, as was to be proved. Thus the X's, Y's and Z's are a linearly independent set spanning the sum space, so that s = h + k - t, that is, h + k = s + t.
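The dimension relation h + k = s + t just proved can be checked mechanically: compute the ranks of the two spanning sets and of the stacked set, then solve for t. The spanning vectors below are illustrative choices of mine, and `rank` is a hypothetical helper (row reduction over the rationals).

```python
from fractions import Fraction as F

def rank(M):
    # row-reduce over the rationals and count the pivot rows
    R = [[F(x) for x in row] for row in M]
    rows, cols, r = len(R), len(R[0]), 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if pr is None:
            continue
        R[r], R[pr] = R[pr], R[r]
        for i in range(r + 1, rows):
            if R[i][c] != 0:
                m = R[i][c] / R[r][c]
                R[i] = [a - m * b for a, b in zip(R[i], R[r])]
        r += 1
    return r

# Spanning vectors (as rows) of two subspaces of V3(R) -- illustrative.
U = [[1, 1, 1], [2, 3, 2]]
V = [[1, 0, 0], [0, 0, 1]]
h, k = rank(U), rank(V)      # dimensions of the two spaces
s = rank(U + V)              # dimension of the sum space (stack all rows)
t = h + k - s                # dimension of the intersection space
```

Here the plane spanned by U meets the x1x3-plane in a line, so t = 1.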
8. Consider the V3^2(F) having X1 = [1,1,2]' and X2 = [0,1,1]' as basis and the V3^2(F) having Y1 = [3,1,2]' and Y2 = [1,0,1]' as basis. Since the matrix of the components of the four vectors is of rank 3, the sum space is V3(F); then, since h + k = s + t, the intersection space is a V3^1(F). To find a basis, equate linear combinations of the vectors of the two bases, a X1 + b X2 = c Y1 + d Y2, and solve, obtaining c = 0, a = d, b = -d. Thus [1,0,1]' = X1 - X2 is a basis of the intersection space.

9. Determine a basis for the null space of

        A  =  [1 1 1 1]
              [1 2 3 4]
              [2 3 4 5]

   Consider the system of equations AX = 0, which reduces to x1 = x3 + 2x4, x2 = -2x3 - 3x4. A basis for the null space of A is the pair of linearly independent solutions [1,-2,1,0]' and [2,-3,0,1]'.

10. Prove: r_AB >= r_A + r_B - n.
   Suppose first that A has the form  [I_r 0; 0 0], r = r_A. Then the first r_A rows of AB are the first r_A rows of B while the remaining rows are zeros; since deleting the remaining n - r_A rows of B reduces its rank by at most n - r_A, the rank of AB is r_AB >= r_B - (n - r_A) = r_A + r_B - n. Suppose next that A is not of the above form. Then there exist nonsingular matrices P and Q such that PAQ has that form, while the rank of (PAQ)(Q^(-1)B) is exactly that of AB (why?). The reader may consider the special case when B = Q.

11. Let X = [1,1,2]' relative to the E-basis. Find its coordinates relative to the new basis Z1 = [1,1,0]', Z2 = [1,0,1]', Z3 = [0,1,1]'.
   Write  (i)  X = a Z1 + b Z2 + c Z3.  Then a + b = 1, a + c = 1, b + c = 2, obtaining a = 0, b = 1, c = 1. Thus, relative to the Z-basis, X_Z = [0,1,1]'.
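Null-space problems of the kind above can be checked mechanically: row-reduce, read off one basis vector per free column, and verify AX = 0. This is my own sketch; the matrix is an illustrative rank-2 choice, and `rref`/`null_space` are hypothetical helper names.

```python
from fractions import Fraction as F

def rref(M):
    # reduced row-echelon form over the rationals; returns (R, pivot_cols)
    R = [[F(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if pr is None:
            continue
        R[r], R[pr] = R[pr], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def null_space(M):
    # one basis vector per free (non-pivot) column of M
    R, pivots = rref(M)
    cols = len(M[0])
    basis = []
    for free in (c for c in range(cols) if c not in pivots):
        v = [F(0)] * cols
        v[free] = F(1)
        for row, p in enumerate(pivots):
            v[p] = -R[row][free]
        basis.append(v)
    return basis

A = [[1, 1, 1], [1, 2, 3]]      # illustrative 2x3 matrix of rank 2
B = null_space(A)               # null space has dimension 3 - 2 = 1
assert all(sum(a*x for a, x in zip(row, B[0])) == 0 for row in A)
```

The number of basis vectors returned always equals n minus the rank, in agreement with the theory of this chapter.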
12. Let X_Z and X_W be the coordinates of a vector X with respect to the bases Z1 = [1,1,1]', Z2 = [1,2,2]', Z3 = [1,2,3]' and W1, W2, W3. Determine the matrix P such that X_W = P X_Z.
   Here Z = [Z1, Z2, Z3] and W = [W1, W2, W3]; then, by (11.6), P = W^(-1) Z, and the numerical matrix P is found by inverting W and forming the product.

SUPPLEMENTARY PROBLEMS

13. Show that X1 = [1,2,1]' and X2 = [2,1,2]' are a basis of the V3^2(F) of Problem 2.

14. Determine the dimension of the vector space spanned by each set of vectors. Select a basis for each.
   (a) [1,1,1]', [2,1,2]', [3,2,3]'      (b) [1,2,3,4]', [2,4,6,8]', [1,1,1,1]'
   Ans. (a) 2,  (b) 2

15. Let X = [x1, x2, x3, x4]', where R denotes the field of real numbers. Which of the following sets are subspaces of V4(R)?
   (a) All vectors with x1 = x2 = x3 = x4.
   (b) All vectors with x1 = x2, x3 = 2x4.
   (c) All vectors with x4 = 0.
   (d) All vectors with x4 = 1.
   (e) All vectors with x1, ..., x4 integral.
   Ans. All except (d) and (e).
16. (a) Show that the vectors X1 = [1,2,1]' and X2 = [1,2,3]' span the same space as Y1 = [3,6,5]' and Y2 = [1,2,-1]'.
    (b) Show that the vectors X1 = [1,1,1]' and X2 = [1,2,3]' do not span the same space as Y1 = [1,0,0]' and Y2 = [0,1,0]'.

17. Show that if the set X1, X2, ..., Xk is a basis for a Vn^k(F), then any other vector Y of the space can be represented uniquely as a linear combination of X1, X2, ..., Xk.
    Hint. Assume Y = Sum ai Xi = Sum bi Xi and subtract.

18. Let X_Z and X_W be the coordinates of a vector X with respect to the two bases Z1 = [1,1,0]', Z2 = [1,0,1]', Z3 = [0,1,1]' and W1 = [2,1,1]', W2 = [1,2,1]', W3 = [1,1,2]'. Determine the matrix P such that X_W = P X_Z.

19. Consider the 4x4 matrix whose columns are the vectors of a basis of the V4^2(F) of Problem 2 and a basis of the space of Problem 3. Show that the rank of this matrix is 4; hence, V4(F) is the sum space and the zero space V4^0(F) is the intersection space of the two given spaces.
20. Derive a procedure for Problem 16 using only column transformations on A = [X1, X2, Y1, Y2]. Then resolve Problem 8.

21. Writing N_A = n - r_A for the nullity of an n-column matrix A, prove:
    (a) N_AB >= N_A and N_AB >= N_B      (b) N_AB <= N_A + N_B
    Hint. (a) r_AB <= r_A and r_AB <= r_B;  (b) consider n - r_AB and use r_AB >= r_A + r_B - n.

22. Find, relative to the basis Z1 = [1,1,0]', Z2 = [1,0,1]', Z3 = [0,1,1]', the coordinates of the vectors (a) [2,0,0]', (b) [1,1,1]', (c) [0,0,2]'.
    Ans. (a) [1,1,-1]',  (b) [1/2,1/2,1/2]',  (c) [-1,1,1]'

23. The vector space defined by all linear combinations of the columns of a matrix A is called the column space of A. The vector space defined by all linear combinations of the rows of A is called the row space of A. Show that the columns of AB are in the column space of A and the rows of AB are in the row space of B.

24. Determine a basis for the null space of

        (a)  [1 1 1]          Ans. (a) [1,-2,1]'
             [1 2 3]
             [2 3 4]

25. Show that the system of m nonhomogeneous linear equations in n unknowns, AX = H, is consistent if and only if the vector H belongs to the column space of A.

26. Prove: If Pj, (j = 1, 2, ..., n), is a solution of AX = Ej, then X = h1 P1 + h2 P2 + ... + hn Pn is a solution of AX = H, where H = h1 E1 + h2 E2 + ... + hn En.
Chapter 12

Linear Transformations

DEFINITION. Let X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' be two vectors of Vn(F), their coordinates being relative to the same basis of the space. Suppose that the coordinates of X and Y are related by

  (12.1)    y1 = a11 x1 + a12 x2 + ... + a1n xn
            y2 = a21 x1 + a22 x2 + ... + a2n xn
            ..........................................
            yn = an1 x1 + an2 x2 + ... + ann xn

or, briefly,   Y = AX,   where A = [aij] is over F. Then (12.1) is a transformation T which carries any vector X of Vn(F) into (usually) another vector Y of the same space, called its image.

If (12.1) carries X1 into Y1 and X2 into Y2, then
  (a) it carries kX1 into kY1 for every scalar k,
  (b) it carries aX1 + bX2 into aY1 + bY2 for every pair of scalars a and b.
For this reason, the transformation is called linear.

Example 1. Consider the linear transformation Y = AX in ordinary space V3(R), where

        A  =  [1 1 2]
              [1 2 5]
              [1 3 3]

  (a) The image of X = [2,0,5]' is Y = AX = [12,27,17]'.
  (b) The vector X whose image is Y = [2,0,5]' is obtained by solving the system AX = Y; since A is nonsingular, X = A^(-1)Y = [13/5, 11/5, -7/5]'.

BASIC THEOREMS

I. A linear transformation (12.1) is uniquely determined when the images (Y's) of the basis vectors are known, the respective columns of A being the coordinates of the images of these vectors.   See Problem 1.
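Both directions of Example 1's computation, image and preimage, can be sketched in code. The matrix below is an illustrative nonsingular choice (not necessarily the text's), and `mat_vec`/`solve3` are hypothetical helper names; `solve3` is plain Gauss-Jordan elimination over the rationals.

```python
from fractions import Fraction as F

def mat_vec(A, X):
    return [sum(a * x for a, x in zip(row, X)) for row in A]

def solve3(A, Y):
    # Gauss-Jordan elimination: find X with A X = Y (A assumed nonsingular)
    M = [[F(a) for a in row] + [F(y)] for row, y in zip(A, Y)]
    n = len(M)
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n] for row in M]

A = [[1, 1, 2], [2, 1, 3], [1, 1, 1]]   # illustrative nonsingular matrix
X = [2, 0, 5]
Y = mat_vec(A, X)        # the image of X under Y = AX
X_back = solve3(A, Y)    # recover the vector whose image is Y
assert X_back == X
```

Because A is nonsingular, every image has exactly one preimage, which is why the round trip recovers X exactly.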
A linear transformation (12.1) is called nonsingular if the images of distinct vectors Xi are distinct vectors Yi; otherwise, the transformation is called singular.

II. A linear transformation (12.1) is nonsingular if and only if A, the matrix of the transformation, is nonsingular.

When A is nonsingular, the inverse of (12.1),  X = A^(-1)Y,  carries the set of vectors Y1, Y2, ..., Yn into the set of vectors X1, X2, ..., Xn. It is also a linear transformation.

III. A nonsingular linear transformation carries linearly independent (dependent) vectors into linearly independent (dependent) vectors.   See Problem 2.

From Theorem III follows

IV. Under a nonsingular transformation (12.1) the image of a vector space Vn^m(F) is a vector space Vn^m(F); that is, the dimension of the vector space is preserved.

V. The elementary vectors Ei of Vn(F) may be transformed into any set of n linearly independent n-vectors by a nonsingular linear transformation, and conversely. In particular, the inverse transformation X = A^(-1)Y carries the set of vectors whose components are the columns of A into the basis vectors of the space.

VI. When any two sets of n linearly independent n-vectors are given, there exists a nonsingular linear transformation which carries the vectors of one set into the vectors of the other.

VII. If Y = AX carries a vector X into a vector Y and Z = BY carries Y into Z, then Z = BY = (BA)X carries X into Z; if, further, W = CZ carries Z into W, then W = (CBA)X carries X into W.
CHANGE OF BASIS. Let Y_Z = A X_Z be a linear transformation of Vn(F) relative to a given basis (the Z-basis). Suppose that the basis is changed, and let X_W and Y_W be the coordinates of X and Y respectively relative to the new basis (the W-basis). By Theorem VIII, Chapter 11, there exists a nonsingular matrix P such that X_W = P X_Z and Y_W = P Y_Z or, setting P^(-1) = Q, such that X_Z = Q X_W and Y_Z = Q Y_W. Then

  (12.2)    Y_W = Q^(-1) Y_Z = Q^(-1) A X_Z = (Q^(-1) A Q) X_W = B X_W,   where B = Q^(-1) A Q

Two matrices A and B such that there exists a nonsingular matrix Q for which B = Q^(-1) A Q are called similar. We have proved

VIII. If Y_Z = A X_Z is a linear transformation of Vn(F) relative to a given basis (Z-basis) and Y_W = B X_W is the same linear transformation relative to another basis (W-basis), then A and B are similar.   See Problem 3.

Note. Since Q = P^(-1), (12.2) might have been written as B = P A P^(-1). A study of similar matrices will be made later; there we shall agree to write B = R^(-1) A R instead of B = S A S^(-1), and for no compelling reason the first of these forms is used here.
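The similarity relation B = Q^(-1)AQ of (12.2) is easy to exercise numerically; one invariant worth checking is that similar matrices have equal determinants (a supplementary problem below). The 2x2 matrices here are my illustrative choices, and `mul`/`inv`/`det2` are hypothetical helpers.

```python
from fractions import Fraction as F

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv(A):
    # Gauss-Jordan inverse over the rationals
    n = len(A)
    M = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[1, 2], [3, 4]]        # matrix of the transformation in the old basis
Q = [[1, 1], [1, 2]]        # columns are the new basis vectors (illustrative)
B = mul(mul(inv(Q), A), Q)  # matrix of the same transformation, new basis
assert det2(B) == det2(A)   # similar matrices have equal determinants
```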
Example 2. Let Y = AX be a linear transformation relative to the E-basis and let W1, W2, W3 be a new basis; write W = [W1, W2, W3].
  (a) Relative to the W-basis, a vector X has coordinates X_W = W^(-1) X.
  (b) Relative to the W-basis, the transformation is Y_W = (W^(-1) A W) X_W = B X_W.
  (c) Given a vector X, the coordinates of its image relative to the W-basis are Y_W = W^(-1) A X.
See Problem 5.

SOLVED PROBLEMS

1. (a) Set up the linear transformation Y = AX which carries E1 into Y1 = [1,2,3]', E2 into Y2 = [3,1,2]', and E3 into Y3 = [2,1,3]'. (b) Find the images of X1 = [1,1,1]', X2 = [1,0,2]', and X3 = [2,1,3]'. (c) Show that X1, X2, X3 are linearly dependent, as also are their images. (d) Show that X1 and X2 are linearly independent, as also are their images.

  (a) By Theorem I, the equation of the transformation is Y = AX with

        A  =  [Y1, Y2, Y3]  =  [1 3 2]
                                [2 1 1]
                                [3 2 3]

  (b) The image of X1 is AX1 = [6,4,8]'; the image of X2 is AX2 = [5,4,9]'; the image of X3 is AX3 = [11,8,17]'.
  (c) X3 = X1 + X2 and AX3 = AX1 + AX2, so that both sets are linearly dependent. (We may instead compare the rank of [X1, X2, X3] with that of the matrix of the images; each is 2.)
  (d) The rank of [X1, X2] is 2, as also is the rank of [AX1, AX2]; thus X1 and X2 are linearly independent, as also are their images.
2. Prove: A nonsingular linear transformation carries linearly independent vectors into linearly independent vectors.
   Assume the contrary, that is, suppose the images Yi = A Xi, (i = 1, 2, ..., p), of the linearly independent vectors X1, X2, ..., Xp are linearly dependent. Then there exist scalars s1, s2, ..., sp, not all zero, such that

        s1 Y1 + s2 Y2 + ... + sp Yp  =  A(s1 X1 + s2 X2 + ... + sp Xp)  =  0

   Since A is nonsingular, s1 X1 + s2 X2 + ... + sp Xp = 0, a contradiction of the hypothesis that the Xi are linearly independent. Hence, the Yi are linearly independent.

3. A certain linear transformation Y = AX carries X1 = [1,1,1]' into [2,1,2]', X2 = [1,1,0]' into [3,1,2]', and X3 = [1,0,0]' into [1,1,1]'. Write the equation of the transformation.
   Here E1 = X3, E2 = X2 - X3, E3 = X1 - X2; hence the images of the elementary vectors are [1,1,1]', [2,0,1]', [-1,0,0]' respectively, and the equation of the transformation is Y = AX with

        A  =  [1 2 -1]
              [1 0  0]
              [1 1  0]

4. Prove: A linear transformation (12.1) is nonsingular if and only if A is nonsingular.
   Suppose A is nonsingular and the images of X1 /= X2 are equal, Y = A X1 = A X2. Then A(X1 - X2) = 0, so that the system of homogeneous linear equations AX = 0 has the nontrivial solution X = X1 - X2; this is possible if and only if A is singular, a contradiction. Conversely, if A is singular, AX = 0 has a nontrivial solution X0, and the distinct vectors X0 and 0 have the same image; the transformation is then singular.

5. If Y_Z = A X_Z is a linear transformation relative to the Z-basis of Problem 12, Chapter 11, find the same transformation Y_W = B X_W relative to the W-basis of that problem.
   From Problem 12, Chapter 11, X_W = P X_Z with P = W^(-1) Z; then Y_W = P Y_Z = P A X_Z = (P A P^(-1)) X_W, so that B = P A P^(-1).
SUPPLEMENTARY PROBLEMS

6. Set up the linear transformation Y = AX which carries E1 into [2,3,1]', E2 into [3,1,2]', and E3 into [1,2,3]'.

7. Prove: Similar matrices have equal determinants.

8. Given the linear transformation

        Y  =  [1 2 3] X
              [2 4 6]
              [1 2 3]

   show that it is singular and carries the linearly independent vectors [1,1,1]' and [2,2,0]' into the same image vector.

9. Using the transformation of Problem 1, find (a) the image of [1,0,1]', (b) the vector X whose image is [3,3,6]'.
   Ans. (a) [3,3,6]'

10. Study the effect of the transformation Y = IX; also Y = kIX, k a scalar.
11. Show that if X1, X2, ..., Xm are linearly dependent, so also are their images Y1, Y2, ..., Ym, whether or not the transformation Y = AX is singular.

12. Given the linear transformation

        Y  =  [1 1 1] X
              [2 2 2]
              [3 3 3]

    show (a) it is singular, (b) the image of every vector of V3(R) lies in the V3^1(R) spanned by [1,2,3]'.

13. Given the linear transformation

        Y  =  [ 1 -1  0] X
              [ 0  1 -1]
              [-1  0  1]

    show (a) it is singular, (b) the image of every vector lies in the V3^2(R) of vectors whose components sum to zero.

14. Use Theorem III to show that under a nonsingular transformation the dimension of a vector space is unchanged.
    Hint. Consider the images of a basis of the space.

15. Let Z = AX carry the elementary vectors Ei into the vectors Xi and Y = BX carry the Ei into the Yi, (i = 1, 2, ..., n). Find the transformation which carries the Xi into the Yi.
    Hint. Z = A^(-1)X carries the Xi into the Ei; hence Y = (B A^(-1))X is the required transformation.

16. Use Problem 15 to prove Theorem VI.

17. Let Y = AX be the transformation of Problem 1 relative to the E-basis and let W1 = [1,1,0]', W2 = [1,0,1]', W3 = [0,1,1]' be a new basis. Find the representation Y_W = (W^(-1) A W) X_W of the transformation relative to the new basis.
18. If Y = AX carries every vector of a vector space Vh into a vector of that same space, Vh is called an invariant space of the transformation. Show that in the real space V3(R), under the linear transformation

        Y  =  [2 1 1] X
              [1 2 1]
              [1 1 2]

    the V1 spanned by [1,1,1]' and the V2 spanned by [1,-1,0]' and [1,0,-1]' are invariant vector spaces. (Note that every vector of the V2 is carried into itself.)

19. If Y = AX, where A is singular, then the null space of A is the vector space each of whose vectors transforms into the zero vector. Determine the null space of the transformation of (a) Problem 12, (b) Problem 13.
    Ans. (a) the V2 of all vectors with x1 + x2 + x3 = 0,  (b) the V1 spanned by [1,1,1]'

20. (a) Show that Y = PX:  yi = x_(ji), (i = 1, 2, ..., n), in which j1, j2, ..., jn is a permutation of 1, 2, ..., n, merely permutes the components of X. Writing P = [E_(j1), E_(j2), ..., E_(jn)], where the Ei are the elementary n-vectors, P is called a permutation matrix.
    (b) Prove: There are n! permutation matrices of order n.
    (c) Prove: If P1 and P2 are permutation matrices, so also are P3 = P1 P2 and P4 = P2 P1.
    (d) Prove: If P is a permutation matrix, so also is P', and PP' = I.
    (e) Find a rule (other than P^(-1) = P') for writing P^(-1). For example, when n = 4 and P = [E3, E4, E1, E2], then P^(-1) = [E3, E4, E1, E2] = P.
    (f) Show that each permutation matrix P can be expressed as a product of a number of the elementary column matrices K12, K13, ..., that is, of transpositions of columns of I.
Chapter 13

Vectors Over the Real Field

INNER PRODUCT. In this chapter all vectors are real and Vn(R) denotes the space of all real n-vectors. If X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' are two vectors of Vn(R), their inner product is defined to be the scalar

  (13.1)    X.Y  =  x1 y1 + x2 y2 + ... + xn yn

Example 1. For the vectors X1 = [1,1,1]', X2 = [2,1,2]', X3 = [1,-2,1]':
  (a) X1.X2 = 1(2) + 1(1) + 1(2) = 5
  (b) X1.X3 = 1(1) + 1(-2) + 1(1) = 0
  (c) X1.X1 = 1(1) + 1(1) + 1(1) = 3
  (d) X1.(2 X2) = 1(4) + 1(2) + 1(4) = 10 = 2(X1.X2)

Note. In vector analysis, the inner product is called the dot product. Some authors write X o Y for X.Y. The inner product is frequently defined as

  (13.1')    X.Y  =  X'Y  =  Y'X

The use of X'Y and Y'X is helpful, for these are 1x1 matrices whose single element is the inner product. With this understanding, (13.1') will be used here.

The following rules for inner products are immediate:
  (13.2)   (a)  X1.X2 = X2.X1
           (b)  X1.(h X2) = h(X1.X2)
           (c)  X1.(X2 + X3) = X1.X2 + X1.X3
                (X1 + X2).(X3 + X4) = X1.X3 + X1.X4 + X2.X3 + X2.X4

ORTHOGONAL VECTORS. Two vectors X and Y of Vn(R) are said to be orthogonal if their inner product is 0.

Example 2. The vectors X1 and X3 of Example 1 are orthogonal since X1.X3 = 0.

THE LENGTH OF A VECTOR X of Vn(R), denoted by ||X||, is defined as the square root of the inner product of X and X; thus,

  (13.3)    ||X||  =  sqrt(X.X)  =  sqrt(x1^2 + x2^2 + ... + xn^2)

Example 3. From Example 1(c), ||X1|| = sqrt 3.
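Definitions (13.1)-(13.3) translate directly into code. The sketch below is mine; the vectors follow the pattern of Example 1, and `dot`/`length` are hypothetical helper names.

```python
import math

def dot(X, Y):
    # inner product (13.1)
    return sum(x * y for x, y in zip(X, Y))

def length(X):
    # length (13.3): the square root of the inner product of X with itself
    return math.sqrt(dot(X, X))

X1, X2, X3 = [1, 1, 1], [2, 1, 2], [1, -2, 1]
assert dot(X1, X2) == 5
assert dot(X1, X3) == 0                            # X1 and X3 are orthogonal
assert dot(X1, [2 * x for x in X2]) == 2 * dot(X1, X2)   # rule (13.2)(b)
assert length(X1) == math.sqrt(3)
```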
A vector X whose length is 1 is called a unit vector; the elementary vectors Ei are examples of unit vectors.

Using (13.1) and (13.3): if X1, X2, ..., Xm are m <= n mutually orthogonal nonzero n-vectors and c1 X1 + c2 X2 + ... + cm Xm = 0, then (c1 X1 + c2 X2 + ... + cm Xm).Xi = ci(Xi.Xi) = 0. Since this requires ci = 0 for i = 1, 2, ..., m, we have

I. Any set of m <= n mutually orthogonal nonzero n-vectors is a linearly independent set and spans a Vn^m(R).

A vector Y is said to be orthogonal to a vector space Vn^h(R) if it is orthogonal to every vector of the space.

II. If a vector Y is orthogonal to each of the n-vectors X1, X2, ..., Xm, it is orthogonal to the space spanned by these vectors.   See Problem 2.

Two vector spaces are said to be orthogonal if every vector of one is orthogonal to every vector of the other space. For example, the space spanned by X1 = [1,1,0,0]' and X2 = [0,0,1,1]' is orthogonal to the space spanned by X3 = [1,-1,0,0]' and X4 = [0,0,1,-1]' since (a X1 + b X2).(c X3 + d X4) = 0 for all a, b, c, d.

III. If Vn^h(R) is a subspace of Vn^k(R), k > h, then there exists at least one vector X of Vn^k(R) which is orthogonal to Vn^h(R).   See Problem 5.

IV. Every vector space Vn^m(R), m > 0, contains m but not more than m mutually orthogonal vectors.

Since mutually orthogonal vectors are linearly independent, the second assertion follows from Theorem I. For the first, suppose we have found r < m mutually orthogonal vectors of the Vn^m(R); they span a Vn^r(R) and, by Theorem III, there exists at least one vector of the Vn^m(R) which is orthogonal to this Vn^r(R).
We now have r+1 mutually orthogonal vectors of the Vn^m(R) and, by repeating the argument, eventually obtain m such vectors; this proves Theorem IV.

THE SCHWARZ INEQUALITY. If X and Y are vectors of Vn(R), then

  (13.4)    |X.Y|  <=  ||X||.||Y||

that is, the numerical value of the inner product of two real vectors is at most the product of their lengths.

THE TRIANGLE INEQUALITY. If X and Y are vectors of Vn(R), then

  (13.5)    ||X + Y||  <=  ||X|| + ||Y||

V. The set of all vectors orthogonal to every vector of a given Vn^k(R) is a unique vector space Vn^(n-k)(R).   See Problem 6.

We may associate with any vector X /= 0 a unique unit vector U obtained by dividing the components of X by ||X||. This operation is called normalization. For example, to normalize the vector X = [2,4,4]',
divide each component by ||X|| = sqrt(4 + 16 + 16) = 6 and obtain the unit vector U = [1/3, 2/3, 2/3]'.

A basis of a Vn^m(R) which consists of mutually orthogonal vectors is called an orthogonal basis; if the mutually orthogonal vectors are also unit vectors, the basis is called a normal orthogonal or orthonormal basis. The elementary vectors are an orthonormal basis of Vn(R).

THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS. Suppose X1, X2, ..., Xm are a basis of a Vn^m(R). Define

  (13.7)   Y1 = X1
           Y2 = X2 - (Y1.X2 / Y1.Y1) Y1
           Y3 = X3 - (Y1.X3 / Y1.Y1) Y1 - (Y2.X3 / Y2.Y2) Y2
           .............................................................
           Ym = Xm - (Y1.Xm / Y1.Y1) Y1 - (Y2.Xm / Y2.Y2) Y2 - ... - (Y(m-1).Xm / Y(m-1).Y(m-1)) Y(m-1)

Then the unit vectors Gi = Yi / ||Yi||, (i = 1, 2, ..., m), are mutually orthogonal and are an orthonormal basis of the Vn^m(R).   See Problem 7.

Example 3. Construct, using the Gram-Schmidt process, an orthonormal basis of V3(R), given the basis X1 = [1,1,1]', X2 = [1,2,2]', X3 = [1,2,3]'.
  (i)   Y1 = X1 = [1,1,1]'
  (ii)  Y2 = X2 - (Y1.X2 / Y1.Y1) Y1 = [1,2,2]' - (5/3)[1,1,1]' = [-2/3, 1/3, 1/3]'
  (iii) Y3 = X3 - (Y1.X3 / Y1.Y1) Y1 - (Y2.X3 / Y2.Y2) Y2 = [1,2,3]' - 2[1,1,1]' - (3/2)[-2/3, 1/3, 1/3]' = [0, -1/2, 1/2]'

The vectors G1 = Y1/||Y1|| = [1/sqrt 3, 1/sqrt 3, 1/sqrt 3]', G2 = [-2/sqrt 6, 1/sqrt 6, 1/sqrt 6]', G3 = [0, -1/sqrt 2, 1/sqrt 2]' are an orthonormal basis of V3(R). Note that Y2 = X2 whenever X1 and X2 are already orthogonal.
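The Gram-Schmidt recipe (13.7) can be written compactly: subtract from each X its projections on the earlier Y's, then normalize. The basis below is a small illustrative choice, and `gram_schmidt`/`normalize` are hypothetical helper names.

```python
import math

def dot(X, Y):
    return sum(x * y for x, y in zip(X, Y))

def gram_schmidt(basis):
    # (13.7): subtract from each X its projections on the earlier Y's
    ys = []
    for X in basis:
        Y = list(X)
        for Z in ys:
            c = dot(Z, X) / dot(Z, Z)
            Y = [y - c * z for y, z in zip(Y, Z)]
        ys.append(Y)
    return ys

def normalize(X):
    n = math.sqrt(dot(X, X))
    return [x / n for x in X]

Y = gram_schmidt([[1, 1, 1], [1, 2, 2], [1, 2, 3]])
G = [normalize(v) for v in Y]
# the G's are mutually orthogonal unit vectors
assert all(abs(dot(G[i], G[j]) - (1 if i == j else 0)) < 1e-9
           for i in range(3) for j in range(3))
```

Floating-point arithmetic is used here, so orthonormality is checked to a tolerance rather than exactly.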
Let X1, X2, ..., Xm be a basis of a Vn^m(R) and suppose X1, X2, ..., Xs, (1 <= s < m), are mutually orthogonal unit vectors. Then, by the Gram-Schmidt process, we may obtain an orthogonal basis of the space which includes them; we have

VI. If X1, X2, ..., Xs, (1 <= s < m), are mutually orthogonal unit vectors of a Vn^m(R), there exist unit vectors X(s+1), ..., Xm in the space such that the set X1, X2, ..., Xm is an orthonormal basis.

THE GRAMIAN. Let X1, X2, ..., Xp be a set of real n-vectors and define the Gramian matrix

  (13.8)    G  =  [X1.X1  X1.X2  ...  X1.Xp]
                  [X2.X1  X2.X2  ...  X2.Xp]
                  [ ...    ...   ...   ... ]
                  [Xp.X1  Xp.X2  ...  Xp.Xp]

Clearly, the vectors are mutually orthogonal if and only if G is diagonal. In Chapter 17 we shall prove

VII. For a set of real n-vectors X1, X2, ..., Xp, |G| >= 0. The equality holds if and only if the vectors are linearly dependent.

ORTHOGONAL MATRICES. A square matrix A is called orthogonal if

  (13.9)    AA' = A'A = I,    that is, if    (13.9')  A' = A^(-1)

From (13.9) it is clear that the column vectors (row vectors) of an orthogonal matrix A are mutually orthogonal unit vectors.

Example 4. By Example 3, the matrix

        A  =  [1/sqrt 3   -2/sqrt 6    0       ]
              [1/sqrt 3    1/sqrt 6   -1/sqrt 2]
              [1/sqrt 3    1/sqrt 6    1/sqrt 2]

is orthogonal. There follow readily

VIII. If the real n-square matrix A is orthogonal, its column vectors (row vectors) are an orthonormal basis of Vn(R), and conversely.

IX. The inverse and the transpose of an orthogonal matrix are orthogonal.

X. The product of two or more orthogonal matrices is orthogonal.

XI. The determinant of an orthogonal matrix is +-1.
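Condition (13.9) is directly checkable: a matrix built from orthonormal columns should satisfy AA' = I up to rounding. The sign pattern below is one possible choice (it need not match the text's Example 4 exactly), and `transpose`/`mul` are hypothetical helpers.

```python
import math

def transpose(A):
    return [list(col) for col in zip(*A)]

def mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

r3, r6, r2 = math.sqrt(3), math.sqrt(6), math.sqrt(2)
A = [[1/r3, -2/r6,     0],
     [1/r3,  1/r6, -1/r2],
     [1/r3,  1/r6,  1/r2]]   # columns are orthonormal unit vectors

AAt = mul(A, transpose(A))
assert all(abs(AAt[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))     # AA' = I, so A' = A^(-1)
```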
ORTHOGONAL TRANSFORMATIONS. Let

  (13.10)    Y = AX

be a linear transformation in Vn(R) and let the images of the n-vectors X1 and X2 be denoted by Y1 and Y2 respectively. From (13.1) and (13.3),

    X1.X2  =  (1/2)( ||X1 + X2||^2 - ||X1||^2 - ||X2||^2 )
    Y1.Y2  =  (1/2)( ||Y1 + Y2||^2 - ||Y1||^2 - ||Y2||^2 )

Comparing right and left members, we see that (13.10) preserves lengths if and only if it preserves inner products. Thus,

XII. A linear transformation preserves lengths if and only if it preserves inner products.

A linear transformation Y = AX is called orthogonal if its matrix A is orthogonal. In Problem 10, we prove

XIII. A linear transformation preserves lengths if and only if its matrix is orthogonal.

Example 5. The linear transformation Y = AX, with A the orthogonal matrix of Example 4, is orthogonal. The image of X = [a,b,c]' is Y = AX, and both vectors are of length sqrt(a^2 + b^2 + c^2).

If (13.10) is a transformation of coordinates from the E-basis to another basis, the Z-basis, we have

XIV. If (13.10) is a transformation of coordinates from the E-basis to the Z-basis, then the Z-basis is orthonormal if and only if A is orthogonal.

SOLVED PROBLEMS

1. Given the vectors X1 = [1,2,3]' and X2 = [2,-3,4]', find: (a) their inner product, (b) the length of each.
   (a) X1.X2 = X1'X2 = 1(2) + 2(-3) + 3(4) = 8
   (b) ||X1||^2 = X1.X1 = 1(1) + 2(2) + 3(3) = 14 and ||X1|| = sqrt 14;  ||X2||^2 = 2(2) + (-3)(-3) + 4(4) = 29 and ||X2|| = sqrt 29.
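Theorems XII-XIII say an orthogonal transformation leaves both lengths and inner products unchanged. A rotation is the simplest orthogonal matrix to test with; the angle and vectors below are illustrative choices of mine.

```python
import math

def dot(X, Y):
    return sum(x * y for x, y in zip(X, Y))

def mat_vec(A, X):
    return [dot(row, X) for row in A]

c, s = math.cos(0.7), math.sin(0.7)
A = [[c, -s], [s, c]]          # a plane rotation: orthogonal, det = +1
X, Y = [1.0, 2.0], [3.0, -1.0]
AX, AY = mat_vec(A, X), mat_vec(A, Y)
assert abs(dot(AX, AX) - dot(X, X)) < 1e-12    # lengths preserved
assert abs(dot(AX, AY) - dot(X, Y)) < 1e-12    # inner products preserved
```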
2. Prove: If a vector Y is orthogonal to each of the n-vectors X1, X2, ..., Xm, it is orthogonal to the space spanned by these vectors.
   Any vector of the space spanned by the X's can be written as a1 X1 + a2 X2 + ... + am Xm. Since Xi.Y = 0, (i = 1, 2, ..., m), we have (a1 X1 + a2 X2 + ... + am Xm).Y = a1(X1.Y) + a2(X2.Y) + ... + am(Xm.Y) = 0. Thus Y is orthogonal to every vector of the space and, hence, is orthogonal to the space. In particular, if Y is orthogonal to every vector of a basis of a vector space, it is orthogonal to that space.

3. (a) Show that X = [1/3, 2/3, 2/3]' and Y = [2/3, 1/3, -2/3]' are orthogonal. (b) Find a vector Z orthogonal to both X and Y.
   (a) X.Y = (1/3)(2/3) + (2/3)(1/3) + (2/3)(-2/3) = 0 and the vectors are orthogonal.
   (b) Write

        [X, Y, 0]  =  [1/3   2/3   0]
                      [2/3   1/3   0]
                      [2/3  -2/3   0]

   and compute the cofactors -2/3, 2/3, -1/3 of the elements of the column of zeros. Then Z = [-2/3, 2/3, -1/3]' is orthogonal to both X and Y.

4. Prove the Schwarz Inequality: |X.Y| <= ||X||.||Y||.
   If X or Y is the zero vector, the theorem is trivially true. Assume then that X and Y are nonzero vectors and let a be any real number. Then

        ||aX + Y||^2  =  (aX + Y).(aX + Y)  =  a^2 ||X||^2 + 2a(X.Y) + ||Y||^2  >=  0

   Now a quadratic polynomial in a is greater than or equal to zero for all real values of a if and only if its discriminant is less than or equal to zero. Thus 4(X.Y)^2 - 4||X||^2 ||Y||^2 <= 0, whence |X.Y| <= ||X||.||Y||.

5. Prove: If a Vn^h(R) is a subspace of a Vn^k(R), k > h, then there exists at least one vector X of the Vn^k(R) which is orthogonal to the Vn^h(R).
   Let X1, X2, ..., Xh be a basis of the Vn^h(R) and let X(h+1) be a vector in the Vn^k(R) but not in the Vn^h(R); consider the vector

        (i)    X  =  a1 X1 + a2 X2 + ... + ah Xh + a(h+1) X(h+1)

   The condition that X be orthogonal to each of X1, X2, ..., Xh consists of h homogeneous linear equations
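The Schwarz and Triangle Inequalities of (13.4)-(13.5) are easy to spot-check numerically, including the equality case for linearly dependent vectors. The vectors below are illustrative choices of mine.

```python
import math

def dot(X, Y):
    return sum(x * y for x, y in zip(X, Y))

def length(X):
    return math.sqrt(dot(X, X))

X, Y = [1, 2, 3], [2, -3, 4]
assert abs(dot(X, Y)) <= length(X) * length(Y)        # Schwarz (13.4)
S = [x + y for x, y in zip(X, Y)]
assert length(S) <= length(X) + length(Y)             # Triangle (13.5)

# equality in Schwarz holds for linearly dependent vectors:
W = [2 * x for x in X]
assert abs(abs(dot(X, W)) - length(X) * length(W)) < 1e-12
```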
in the h+1 unknowns a1, a2, ..., ah, a(h+1). Since the system has more unknowns than equations, a nontrivial solution exists. When these values are substituted in (i), we have a nonzero (why?) vector X orthogonal to the basis vectors of the Vn^h(R) and hence, by Problem 2, orthogonal to that space.

6. Prove: The set of all vectors orthogonal to every vector of a given Vn^k(R) is a unique vector space Vn^(n-k)(R).
   Let X1, X2, ..., Xk be a basis of the Vn^k(R). The n-vectors X orthogonal to each of the Xi satisfy the system of homogeneous equations

        (i)    X1.X = 0,  X2.X = 0,  ...,  Xk.X = 0

   Since the Xi are linearly independent, the coefficient matrix of the system (i) is of rank k; hence, there are n-k linearly independent solutions (vectors) which span a Vn^(n-k)(R). Uniqueness follows from the fact that the intersection space of the Vn^k(R) and the Vn^(n-k)(R) is the zero space, so that the sum space is Vn(R).

7. Derive the Gram-Schmidt equations (13.7).
   Let X1, X2, ..., Xm be the given basis and denote by Y1, Y2, ..., Ym the set of mutually orthogonal vectors to be found.
   (a) Take Y1 = X1.
   (b) Take Y2 = X2 + a Y1. Since Y1 and Y2 are to be mutually orthogonal, Y1.Y2 = Y1.X2 + a(Y1.Y1) = 0; then a = -(Y1.X2)/(Y1.Y1) and

        Y2  =  X2 - (Y1.X2 / Y1.Y1) Y1

   (c) Take Y3 = X3 + a Y1 + b Y2. Since Y1.Y3 = Y1.X3 + a(Y1.Y1) = 0 and Y2.Y3 = Y2.X3 + b(Y2.Y2) = 0, we have a = -(Y1.X3)/(Y1.Y1) and b = -(Y2.X3)/(Y2.Y2), so that

        Y3  =  X3 - (Y1.X3 / Y1.Y1) Y1 - (Y2.X3 / Y2.Y2) Y2

   (d) Continue the process until Ym is obtained.

8. The unit vector X = [1/sqrt 6, 2/sqrt 6, 1/sqrt 6]' is given; complete the set to an orthonormal basis of V3(R).
   Choose any unit vector orthogonal to X, say Y = [1/sqrt 3, -1/sqrt 3, 1/sqrt 3]'; then, as in Problem 3(b), obtain Z = [1/sqrt 2, 0, -1/sqrt 2]' orthogonal to both. The set X, Y, Z is an orthonormal basis.
9. Construct an orthonormal basis of V3(R), given the basis X1 = [2,1,3]', X2 = [1,0,1]', X3 = [1,1,0]'.
   Take Y1 = X1 = [2,1,3]'. Then

        Y2  =  X2 - (Y1.X2 / Y1.Y1) Y1  =  [1,0,1]' - (5/14)[2,1,3]'  =  (1/14)[4,-5,-1]'
        Y3  =  X3 - (Y1.X3 / Y1.Y1) Y1 - (Y2.X3 / Y2.Y2) Y2  =  [2/3, 2/3, -2/3]'

   Normalizing the Y's, we have the orthonormal basis

        [2/sqrt 14, 1/sqrt 14, 3/sqrt 14]',  [4/sqrt 42, -5/sqrt 42, -1/sqrt 42]',  [1/sqrt 3, 1/sqrt 3, -1/sqrt 3]'

10. Prove: A linear transformation preserves lengths if and only if its matrix is orthogonal.
   Let Y1 and Y2 be the respective images of X1 and X2 under the linear transformation Y = AX.
   Suppose first that A is orthogonal, so that A'A = I. Then Y1.Y2 = Y1'Y2 = (AX1)'(AX2) = X1'(A'A)X2 = X1'X2 = X1.X2; inner products, and hence by Theorem XII lengths, are preserved.
   Conversely, suppose lengths (and hence inner products) are preserved. Then X1'(A'A)X2 = X1'X2 for all X1 and X2; taking X1 = Ei and X2 = Ej shows that the element in the (i,j) position of A'A is that of I. Thus A'A = I and A is orthogonal.

SUPPLEMENTARY PROBLEMS

11. Given the vectors X1 = [1,2,3]', X2 = [2,-3,4]', X3 = [1,1,-1]', find: (a) the inner product of each pair, (b) the length of each vector, (c) a vector orthogonal to the vectors X1, X2.

12. Let X = [1,2,3]' and Y = [2,-3,4]' be a basis of a V3^2(R) and let Z = [4,1,1]'. (a) Show that Z is not in the V3^2(R). (b) Find a vector W of V3(R) orthogonal to both X and Y.

13. Using arbitrary vectors of V3(R), verify the rules (13.2).

14. (a) Prove: A vector of Vn(R) is orthogonal to itself if and only if it is the zero vector.
    (b) Prove: If X1 is orthogonal to X2, and X2 and X3 are linearly dependent nonzero vectors, then X1 is orthogonal to X3.
15. Prove: A vector X is orthogonal to every vector of a Vn^h(R) if and only if it is orthogonal to every vector of a basis of the space.

16. Normalize the vectors of Problem 11.
    Ans. [1/sqrt 14, 2/sqrt 14, 3/sqrt 14]',  [2/sqrt 29, -3/sqrt 29, 4/sqrt 29]',  [1/sqrt 3, 1/sqrt 3, -1/sqrt 3]'

17. Prove: If two spaces Vn^h(R) and Vn^k(R) are orthogonal, their intersection space is Vn^0(R), the zero space.

18. Show that ||X + Y||^2 <= (||X|| + ||Y||)^2 and hence prove the Triangle Inequality, using the Schwarz Inequality.

19. Prove: ||X + Y|| = ||X|| + ||Y|| if and only if X and Y are linearly dependent.

20. Prove Theorems VIII, IX, X, XI.

21. (a) Show that if X1, X2, ..., Xm are linearly independent, so also are the unit vectors obtained by normalizing them.
    (b) Show that if X1, X2, ..., Xm are mutually orthogonal nonzero vectors, so also are the unit vectors obtained by normalizing them.

22. Construct an orthonormal basis of V2(R) by the Gram-Schmidt process, using the vectors [1,1]' and [3,1]'.
    Ans. [sqrt 2/2, sqrt 2/2]',  [sqrt 2/2, -sqrt 2/2]'

23. Prove: AA' is a diagonal matrix if and only if the rows of A are mutually orthogonal.

24. Prove: If A and B commute and C is orthogonal, then C'AC and C'BC commute.

25. Prove: If A is orthogonal and |A| = 1, each element of A is equal to its cofactor in |A|; if |A| = -1, each element of A is equal to the negative of its cofactor.

26. Prove: If X and Y are n-vectors and A is n-square, then X.(AY) = (A'X).Y.

27. Prove: If X and Y are n-vectors, then XY' + YX' is symmetric.

28. Let X1, X2, ..., Xn be an orthonormal basis of Vn(R) and suppose X = c1 X1 + c2 X2 + ... + cn Xn. Then
    (a) X.Xi = ci, (i = 1, 2, ..., n),   (b) X.X = c1^2 + c2^2 + ... + cn^2.
obtain Y^ by the GramSchmidt process. 2/1/17]'. V'3/3. Z^ A'„ be an orthonormal basis and + if A^ = 2 ^=1 c^X^. then C'AC and C'BC commute. 0. \/6/6 ]' .3. Prove: X is nsquare. 13 16. and Y are nvectors and A 28. Construct orthonormal bases of {a) by the GramSchmidt process. cl VsiR).l/^^2. if 26. [3/VI3. 3/^34.0]'. 2/V6.X 30. [\A3/3. If . 2/\/l3]'. [0. then X(^AY) = {A'X) Y n 29. 18. each element of A is equal to its cofactor in  ^4  . 0. 24. [\/6/6.
Prove: If is an orthogonal matrix and it B = AP . then B = (I A)(I + A)~^ is orthogonal. Y = 12). X. 34.CHAP. Y. After identifying «2 yi yi estciblish: the z^ as cofactors of the elements of the third column mn of (a) The vector product The vector product of two linearly dependent vectors is the zero vector.xsY and Y = [yi.ygY 25. 40. Prove: If A is skewsymmetric. to prove Theorem XIV._i ]'. [l. X2 X xY . (a) 2 51 ^ . 36. = [21. 38.x^. 13] VECTORS OVER THE REAL FIELD 109 33 Obtain an orthonormal basis of V^iR). In a transformation of coordinates from the £basis to an orthonormal Zbasis with matrix P.6]' are linearly dependent. and / 39. and [5.3. Use Problem 35 " to obtain the orthogonal matrix S . 3]'. given 12 s"! (a) A = (b) A 5 10 23 (b) 3 oj' "514 12 Ans. 23]' be two vectors of VsiR) and define the vector product. ]. and con versely. XxY = (YxX) = (d) (kX)xY k(XxY) = XxikY). establish: = = X x{Y+Z) XxY + XxZ = (b)X(YxZ) (c) (d) Y(ZxX) WY (WxX)(YxZ) = XY XX (XxY)(XxY) = YX Z(XxY) WZ XZ = \XYZ\ XY YY . ^1 Zg = yi ^3 ys % X^.A) (I + A)'"^ is skewsymmetric. is PB^ is orthogonal. AX be comes 71 = P^^APX^ or 7^= BX^ (see Chapter Show that if A is orthogonal so also is B.4]'. and I + A is nonsingular.5.2. then 37.4 10 10 5 10 2 5 I2J' 11 where P nonsingular. Let X y = [xj^. where 21 = «3 ys Z2 = . Prove: If 4 is orthogonal + .4 is non singular then B = (I . given X^ = [7. 1._i. (a) Z are four vectors of V^{R). of ^ and as Z = ZxF Jl .4. If W. 41.2. Show in two ways that the vectors [l. k a scalar. 35. (6) (c) of two linearly independent vectors is orthogonal to each of the two vectors. Y^.yQ.
chapter 14

Vectors Over the Complex Field

COMPLEX NUMBERS. If x and y are real numbers and i is defined by the relation i² = −1, z = x + iy is called a complex number. The real number x is called the real part and the real number y is called the imaginary part of x + iy. Two complex numbers are equal if and only if the real and imaginary parts of one are equal respectively to the real and imaginary parts of the other. A complex number x + iy = 0 if and only if x = y = 0.

The conjugate of the complex number z = x + iy is given by z̄ = x − iy. The sum (product) of any complex number and its conjugate is a real number.

The absolute value |z| of the complex number z = x + iy is given by |z| = √(z z̄) = √(x² + y²). It follows immediately that for any complex number z = x + iy,

(14.1)   |z| ≥ |x|   and   |z| ≥ |y|

VECTORS. Let X be an n-vector over the complex field C. The totality of such vectors constitutes the vector space Vn(C). Since R is a subfield of C, it is to be expected that each theorem concerning vectors of Vn(C) will reduce to a theorem of Chapter 13 when only real vectors are considered.

If X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' are two vectors of Vn(C), their inner product is defined as

(14.2)   X·Y = X̄'Y = x̄1y1 + x̄2y2 + ... + x̄nyn

the bar denoting the conjugate. The following laws governing inner products are readily verified (see Problem 1):

(14.3)   (a) X·Y is the conjugate of Y·X
         (b) X·(Y+Z) = X·Y + X·Z
         (c) (Y+Z)·X = Y·X + Z·X
         (d) (cX)·Y = c̄(X·Y)
         (e) X·(cY) = c(X·Y)
         (f) X·Y + Y·X = 2R(X·Y), where R(X·Y) is the real part of X·Y
         (g) X·Y − Y·X = 2iC(X·Y), where C(X·Y) is the imaginary part of X·Y

The length of a vector X is given by ||X|| = √(X·X) = √(x̄1x1 + x̄2x2 + ... + x̄nxn). Two vectors X and Y of Vn(C) are orthogonal if X·Y = Y·X = 0.

For vectors of Vn(C), the Triangle Inequality

(14.4)   ||X+Y|| ≤ ||X|| + ||Y||

and the Schwarz Inequality

(14.5)   |X·Y| ≤ ||X||·||Y||

hold (see Problem 2). Moreover, we have (see Theorems I–IV of Chapter 13)
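The inner product (14.2) and the laws (14.3) can be checked numerically with Python's built-in complex type. The sketch below uses vectors like those of Problem 1 of this chapter; the helper names are our own.

```python
def inner(X, Y):
    # (14.2): X.Y = conj(x1)*y1 + conj(x2)*y2 + ... + conj(xn)*yn
    return sum(x.conjugate() * y for x, y in zip(X, Y))

def length(X):
    # ||X|| = sqrt(X.X); X.X is always real and non-negative
    return abs(inner(X, X)) ** 0.5

X = [1 + 1j, -1j, 1]
Y = [2 + 3j, 1 - 2j, 1j]

XY = inner(X, Y)   # 7+3i
YX = inner(Y, X)   # 7-3i, the conjugate of X.Y, as law (14.3)(a) requires
```

Laws (14.3)(f) and (g) then follow at once: XY + YX is twice the real part of X·Y, and XY − YX is 2i times its imaginary part.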
I. Any set of m mutually orthogonal non-zero n-vectors over C is linearly independent and, hence, spans a vector space V^m(C).

II. If a vector Y is orthogonal to each of the n-vectors X1, X2, ..., Xm, then it is orthogonal to the space spanned by these vectors.

III. If V^h(C) is a subspace of V^k(C), k > h, then there exists at least one vector Z in V^k(C) which is orthogonal to V^h(C).

IV. Every vector space V^m(C), m > 0, contains m but not more than m mutually orthogonal vectors.

A basis of Vn(C) which consists of mutually orthogonal vectors is called an orthogonal basis. If the mutually orthogonal vectors are also unit vectors, the basis is called a normal or orthonormal basis.

THE GRAM-SCHMIDT PROCESS. Let X1, X2, ..., Xm be a basis for V^m(C). Define

(14.6)   Y1 = X1
         Y2 = X2 − (Y1·X2 / Y1·Y1) Y1
         Y3 = X3 − (Y2·X3 / Y2·Y2) Y2 − (Y1·X3 / Y1·Y1) Y1
         .....................................................
         Ym = Xm − (Y(m−1)·Xm / Y(m−1)·Y(m−1)) Y(m−1) − ... − (Y1·Xm / Y1·Y1) Y1

The unit vectors Gi = Yi / ||Yi|| are an orthonormal basis of V^m(C).

V. If X1, X2, ..., Xs, (1 ≤ s < m), are mutually orthogonal unit vectors of V^m(C), there exist unit vectors Z(s+1), ..., Zm (obtained by the Gram-Schmidt process) in the space such that the set X1, ..., Xs, Z(s+1), ..., Zm is an orthonormal basis.

THE GRAMIAN. Let X1, X2, ..., Xp be a set of n-vectors with complex elements and define the Gramian matrix

              | X1·X1  X1·X2  ...  X1·Xp |
(14.7)   G =  | X2·X1  X2·X2  ...  X2·Xp |
              | ........................ |
              | Xp·X1  Xp·X2  ...  Xp·Xp |

Clearly, the vectors are mutually orthogonal if and only if G is diagonal.

Following Problem 14, Chapter 13, we may prove

VI. For a set of n-vectors X1, X2, ..., Xp with complex elements, |G| ≥ 0. The equality holds if and only if the vectors are linearly dependent.
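The defining equations (14.6) transcribe directly into code, again with Python complex numbers; the sample basis and all names below are ours, not the book's.

```python
def inner(X, Y):
    # (14.2)
    return sum(x.conjugate() * y for x, y in zip(X, Y))

def gram_schmidt(basis):
    """Return the mutually orthogonal vectors Y1, ..., Ym of (14.6)."""
    ys = []
    for x in basis:
        y = list(x)
        for prev in ys:
            c = inner(prev, x) / inner(prev, prev)   # coefficient Yi.Xs / Yi.Yi
            y = [yi - c * pi for yi, pi in zip(y, prev)]
        ys.append(y)
    return ys

ys = gram_schmidt([[1, 1j, 0], [1, 0, 1], [0, 1, 1]])
```

Normalizing each Yi by its length then gives the orthonormal basis Gi of the text.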
UNITARY MATRICES. An n-square matrix A is called unitary if (Ā)'A = A(Ā)' = I, that is, if (Ā)' = A⁻¹. Paralleling the theorems on orthogonal matrices of Chapter 13, we have

VII. The column vectors (row vectors) of a unitary matrix are mutually orthogonal unit vectors.

VIII. The column vectors (row vectors) of an n-square unitary matrix are an orthonormal basis of Vn(C), and conversely.

IX. The inverse and the transpose of a unitary matrix are unitary.

X. The product of two or more unitary matrices is unitary.

XI. The determinant of a unitary matrix has absolute value 1.

UNITARY TRANSFORMATIONS. The linear transformation

(14.8)   Y = AX

where A is unitary, is called a unitary transformation.

XII. A linear transformation preserves lengths (and hence, inner products) if and only if its matrix is unitary.

If Y = AX is a transformation of coordinates from the E-basis to another, the Z-basis, then the Z-basis is orthonormal if and only if A is unitary.

SOLVED PROBLEMS

1. Given X = [1+i, −i, 1]' and Y = [2+3i, 1−2i, i]':
   (a) find X·Y and Y·X,                    (c) verify X·Y + Y·X = 2R(X·Y),
   (b) verify that X·Y is the conjugate of Y·X,   (d) verify X·Y − Y·X = 2iC(X·Y).

   (a) X·Y = X̄'Y = (1−i)(2+3i) + i(1−2i) + 1(i) = (5+i) + (2+i) + i = 7+3i
       Y·X = Ȳ'X = (2−3i)(1+i) + (1+2i)(−i) + (−i)(1) = (5−i) + (2−i) − i = 7−3i
   (b) From (a): Y·X = 7−3i, the conjugate of X·Y.
   (c) X·Y + Y·X = (7+3i) + (7−3i) = 14 = 2(7) = 2R(X·Y)
   (d) X·Y − Y·X = (7+3i) − (7−3i) = 6i = 2i(3) = 2iC(X·Y)
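The defining condition (Ā)'A = I is easy to test numerically. The sketch below checks the 2×2 matrix (1/√2)[[1, i], [i, 1]] — an example of our own choosing, not one from the text.

```python
def conj_transpose(A):
    # (A bar)': conjugate every entry, then transpose
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def is_unitary(A, tol=1e-12):
    # A is unitary when (A bar)' A = I
    P = matmul(conj_transpose(A), A)
    n = len(A)
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

s = 2 ** -0.5
U = [[s, s * 1j], [s * 1j, s]]
```

By Theorem VII the same test says exactly that the columns of U are mutually orthogonal unit vectors.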
XY . Thus. 3j ]' are both linearly independent and mutually orthogonal. R{XYf If \xf\Yf F  < < x • RaY) if < O. then ±iA is Hermitian. Show that [l + i. Derive the relations (14. Prove Theorems IIV. [iAi. Prove: If A is skewHermitian. 8.CHAP.lV. R(cXY) \XY\.5:y) y  .2i. 4. 1+ /. by (14. If i = B + iC Since is Hermitian. IkMyll define c = XY = 0.C^ if and only BC + CS = o or BC CB . thus.3(6)).3). Then i(A) = = (riA)' = t(Z)' = iA = B and S is Hermitian. show that [li. 9. (d) find a vector orthogonal to both (a) X^ and Jf g Ans.i XY \xy\ Then R[ciXY)] R(cXY) < = \\cX\\\\y\\ = cUiyl = UMy while. 11. \(A)'AV = (A'A) = (A)A = B and B is Hermitian. . Prove the relations (14. 23i.f = fl(. then . find the length of A:2 = [l. and Xg = [i. 5. = = (A)A This is real if (B + iCUB+iC) (B + iC)(B+iC) = B^ + i(BC + CB) . 1.iY. 2]' each vector Xi is . Since A is skewHermitian.3 i] 7.i (b) ^/e . 3. V^ V^ . if and only B and C anticommute.oy. (a) (b) (c) Given the vectors X^=[i. Prove: (B) B = = (A)'A is Hermitian for any square matrix A. o]'. The reader will consider the case B = iA. 10. (A)' = Consider B = ~iA. 14] VECTORS OVER THE COMPLEX FIELD 113 Since the quadratic function in a is nonnegative if and only and if its discriminant is nonpositive. find X^X^ and X^X^. = if (B+iC)' = B + iC.6). 1  j. 1. Prove the Triangle Inequality. ij]' orthogonal to both X^ and X^. B+iC Hermitian. SUPPLEMENTARY PROBLEMS 6. and [l i. show that (I)/l is real is if and only if B and C anticommute. thus.i. (d) [1 5iJ . (S)' A. Uyl < Uy forall^andy.
22. J 1 + j i _9 + 8i 1 10 4i l 1618i 10 4t 9 + 8j and rl + _ 2j 42i" (b) Ans.A has only pure imaginary elements. then A 3+ is involutory. r 1 1  3+ [3. construct an orthonormal basis for iy.iY.4P where P is nonsingular.1. i+j1 (6) i i Use Problem 18 to form a unitary matrix. in order.l]' [l + i. (a) 5 224J 29 + 12i 24i 2jJ' 4lOj of the 224J 20.12i.l. Prove: If A A is unitary and if B = .l. then B = (I . ^d + ]' (&) [2(1+0. Prove: If 18. [1(1 + 0.A) (. Prove Theorem V.2\Am^ 13. 2 5t ]'. [O. I. 14 12. Prove Theorems VIIX. . Using the relations (146) and the given vectors vectors are (a) (b) Arts. ri [^i.iV2]'. to prove Theorem XI. 3jy r ^(1 + 0.C) when the [0. is 25. given ( "^ L1+. A is skewHermitian such that I + A is nonsingular.i.^V2. Prove: If /I is a matrix over the complex field. (a) [2.l]'. 23. Prove: If ^ is unitary and Hermitian. 4^ J 6 + 3i 7 5 5 3j b 1i t y L2V30 'aVso. Prove: X and Y are nvectors and A is resquare. Prove: U is unitary and I+A is nonsingular. then B = (lA){I + A)~^ is unitary.lY. then XAY = A'XY 17. I 1 +t r 19. i 1(1 + J//3 l/\/3 2^ 4 + 3i is unitary. then PB is unitary.114 VECTORS OVER THE COMPLEX FIELD [CHAP.2 + iY. A'A = If if and only the columns of A are mutually orthogonal unit vectors. Follow the proof Problem 10. Chapter 13. 2.. then A + A has only real elements and A. If A is nsquare. 16.O.^. Show that k 2y/T5 i/y/3 ^ 2Vl5_ 1 24. 14. [lj. show if (a) A'A is (b) diagonal / and only if if the columns of A are mutually orthogonal vectors. If ^ and B are unitary and in same order. 15. Prove: 21. [li.! + A)'^ skewHermitian. then AB BA are unitary. [l + j. 4^.1]'.2 1 ^ I • 1 T J.
chapter 15

Congruence

CONGRUENT MATRICES. Two n-square matrices A and B over F are called congruent provided there exists a nonsingular matrix P over F such that

(15.1)   B = P'AP

Clearly, congruence is a special case of equivalence so that congruent matrices have the same rank.

A and B are congruent provided A can be reduced to B by a sequence of pairs of elementary transformations, each pair consisting of an elementary row transformation followed by the same elementary column transformation. When P is expressed as a product of elementary column matrices, P' is the product in reverse order of the same elementary row matrices.

SYMMETRIC MATRICES. In Problem 1, we prove

I. Every symmetric matrix A over F of rank r is congruent over F to a diagonal matrix whose first r diagonal elements are non-zero while all other elements are zero.

Example 1. Find a nonsingular matrix P with rational elements such that D = P'AP is diagonal, given

         | 1  2  3  2 |
    A =  | 2  3  5  8 |
         | 3  5  8 10 |
         | 2  8 10  8 |

We use [A I] and calculate en route the matrix P'. First we use H21(−2) and K21(−2), then H31(−3) and K31(−3), then H41(−2) and K41(−2) to obtain zeroes in the first row and in the first column. Considerable time is saved, however, if the three row transformations are made first and then the three column transformations. We have

    [A I] = | 1 2  3  2 | 1 0 0 0 |        | 1  0  0  0 |  1 0 0 0 |
            | 2 3  5  8 | 0 1 0 0 |   C    | 0 −1 −1  4 | −2 1 0 0 |
            | 3 5  8 10 | 0 0 1 0 |   ~    | 0 −1 −1  4 | −3 0 1 0 |
            | 2 8 10  8 | 0 0 0 1 |        | 0  4  4  4 | −2 0 0 1 |

then, by H32(−1), K32(−1) and H42(4), K42(4),

      C     | 1  0  0  0 |   1  0 0 0 |
      ~     | 0 −1  0  0 |  −2  1 0 0 |   =   [D P']
            | 0  0  0  0 |  −1 −1 1 0 |
            | 0  0  0 20 | −10  4 0 1 |
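A congruence reduction is easily checked by multiplying out P'AP. The sketch below does this for the matrix of Example 1; the entries of P' are one valid reduction (reconstructed here), and the helper names are ours.

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3, 2], [2, 3, 5, 8], [3, 5, 8, 10], [2, 8, 10, 8]]
Pt = [[1, 0, 0, 0], [-2, 1, 0, 0], [-1, -1, 1, 0], [-10, 4, 0, 1]]   # P'
D = matmul(matmul(Pt, A), transpose(Pt))                             # P'AP
```

The product D is diagonal with exactly three non-zero diagonal elements, confirming that A has rank 3.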
will replace D by the diagonal meitrix 10 while the 10 transformations H^d) and K^Ci) replace D by 0900 . no pair of 4 rational or real transformations which will replace ative elements in the diagonal. In the real field the set of all resquare matrices of the type (15. We have II. While the nonzero diagonal elements of D depend both on A and P. . By of D a sequence of row and the same column transformations of type 1 the diagonal elements may be rearranged so that the positive elements precede the negative elements. Then a sequence of real row and the same column transformations of type 2 may be used to reduce the diagonal matrix to one in which the nonzero diagonal elements are either +1 or —1. rank r = 3. that is. 15 1 2 10 1 1 41 1 Then 10 The matrix D to which A has been reduced is not unique. A is of 1110 index p = 2. Kr.2) is called the index of the matrix and s = p(r— p) is called the signature. Thus. The additional transformations 10 0100 ffgCi) and Kg(^). Let the real symmetric matrix A be reduced by real elementary transformations to a congruent diagonal matrix D. we have 10 1 1 [A\n 0100 4 2 10 C 1 1 10 010 5 2 2 5 4 10 s IC\(/] 1 1 and (/AQ = C. for example. There is. Two resquare real symmetric matrices are congruent over the real field if and only they have the same rank and the same index or the same rank and the same signature.2) is a canonical set over congruence for real rasquare symmetric matrices. it will be shown in Chapter 17 that the number of positive nonzero diagonal elements depends solely on A. A real symmetric matrix of rank r is congruent over the real field to a canonical matrix P (15.(k) to the result of Example 1 1 1. Example 2.116 CONGRUENCE [CHAP. Applying the transformations H23. K^a and H^ik). if III. however. and signature = 1. let P'AP = D.2) C = 'rp The integer p of (15. D by a diagonal matrix having only nonneg REAL SYMMETRIC MATRICES.
Two if rasquare numbers and only complex symmetric matrices are congruent over the field of complex they have the same rank. then = (FAP)' Thus... 4. or conjunctive if P such that (15. F are congruent over F if and only if The set of all matrices of the type (15. 15] CONGRUENCE 117 IN THE COMPLEX FIELD. Dj 0^..2 t). SKEWSYMMETRIC MATRICES. HERMITIAN MATRICES. FAT = r(A)P = FAP Every matrix B = FAP congruent to a skewsymmetric matrix A is also skew symmetric. 0) where n . See Problems. there exists a nonsingular matrix [^ ]. VI. V.. Two nsquare Hermitian matrices A and B are called Hermitely congruent. D.CHAP. If A is skewsymmetric.4) B = r I = diag(Di. we prove is Every nsquare skewsymmetric matrix A over F congruent over F to a canoni cal matrix (15.(f=l. There follows Vin. . we have 10 1 5 2 [^1/] 2 1 1 5 C 010 10 10 10 5 2i 2 i k [D «'] 1 1 1 1 1 1 and R'AR »^&:] if See Problems 23. we have r IV.4) is a canonical set over congruence for resquare skewsymmetric matrices. Two rasquare skewsymmetric matrices over they have the same rank.0.5) FAP Thus. .3) Examples. Every rasquare complex symmetric matrix of rank complex numbers to a canonical matrix /^ is congruent over the field of (15. In Problem VII. The rank of /} is r = 2t. Applying the transformations H^ii) 1 and K^f^i) to the result of Example 1 2.
The reduction of of an Hermitian matrix to the canonical form (15. If A is skewHermitian.6) Irp The integer p of (15. (PAT) FAP Every matrix B = FAP conjunctive to a skewHermitian matrix A is also skew Hermitian. An Hermitian matrix A of rank r is conjunctive to a canonical matrix Ip (15. By Problems. p(rp) is called the signature. each pair consisting of a column transformation and the corresponding conjugate row transformation. .7) B FAP = . then = (FAPy Thus. XIII. The extreme Problem?. 15 IX.ilr~p Thus. By Theorem there exists a nonsingular matrix P such that Ip FHP = C = Irp Then iFHP = iF{iA)P = FAP = = iC and Up (15. Two resquare Hermitian matrices are conjunctive if and only if they have the same rank and index or the same rank and the same signature.6) is called the index of A and s = XI. X. See Problems 67. SKEWHERMITIAN MATRICES. H = iA is Hermitian if A is skewHermitian. and only if See Problem 8. XII.6) follows the procedures to the proper pairs of in Problem 1 with attention troublesome case is covered elementary transformations.7) in which r is the rank of A and p is the index of —iA. if XIV.X 118 CONGRUENCE [CHAP. Two resquare skewHermitian matrices A and B are conjunctive they have the same rank while iA and iB have the same index. Every resquare skewHermitian matrix A is conjunctive to a matrix (15. Chapter 14. Two resquare Hermitian matrices are conjunctive if and only if one can be obtain ed from the other by a sequence of pairs of elementary transformations.
we move into the (s+l. A is ultimately reduced to a diagonal matrix diagonal elements are nonzero while all other elements are zero. *s+l.s+i) position by the proper row and column transformation of type 1 when u = v. we have obtained the matrix • hss S+1.2) and to canonical form (15. we proceed as in the case Ar^+i ^^. s+i "n. '12 2. a sequence of pairs of elementary 3. F of rank r can be reduced to a diagonal matrix having exactly transformations of type reduce A to Suppose the symmetric matrix A = [a^.CHAP. Prove: Every symmetric matrix over r nonzero elements in the diagonal. = o. each consisting of a row transformation and the same column transformation. however. 1 we have 1 1 1 1 1 1 [0 1 Pi'] = 2 4 1 1 2^f2 k\f2 1 i\Pi \C\P'\ 2 01 2 . we have proved the theorem with s = r.] is not diagonal.^ = o above.)th row and after the corresponding column transformation have a diagonal element different from zero. If every it 4y = 0. s+2 .c^^. s+^ / 0. we add the (s+u)th row to the (s+t.2). 2 5 5 to Reduce the symmetric matrix A 2 3 5 canonical form (15.) Since we whose first t are led to a sequence of equivalent matrices. otherwise. . = o. are different from zero. If a^ / 0.3). some say V« If.2 In each obtain the matrix P which effects the reduction. 1 2 3 5 2 1 1 I 1 1 1 u\n 1 1 1 1 I 2 2 5 5 1 1 c 1 1 1 1 1 2 2 C 1 1 2 2 c 1 2 4 1 1 1 1 4 2 [0^'] To obtain (15. s>2 in which the diagonal element k^^^^^^^ k^j . n k^ S+2. (When a^. 15] CONGRUENCE 119 SOLVED PROBLEMS 1. Suppose then that along in the reduction. s+ 1 k c '^ri . will "n2 "ns Now the continued reduction is routine so long as b^^.
120 CONGRUENCE [CHAP.3). (J = 1. 15 1 av^ 2 j\/2 1 and 5a/2 To obtain (15. Oy g £2 1 V Next multiply the first row and the first column by l/a^^ £3 'S* i . j ^ by the skewsymmetric matrix a. Interchange the sth and first rows and the columns and the /th and second columns to replace oy a. Find a nonsingular matrix ? such that ?'A? is in 1 canonical form (15.. given i 1 + i A = 1 i 2j i + 2i 10 + 2i ^ 1 i 1+J 2j 1 1 1 1 1 1 1 [^1/] 1 c '\J i 3  2j i 1 1 1 +i 2i 10 + 2j 1 32J _ 10 p 1j 10 1 1 1 1 1 1 —i c 2i 1 1 1 i 7+ 13 4i!' 1 5+12J l + 2i 3 + 5+12i 32t 13 1 1 13 J = [c\n 1 I 7+ 4i 13 5+12i Here. 1 13 32j 13 4. Prove: Every ?isquare skewsymmetric matrix A over F of rank 2t is congruent over 7)^. we have 1 1 1 1 1 [D\Pl] = 2 4 1 1 1 C 1 1 1 2\/2 2i 2\/2 i iV2 Vc\?'\ 1 2 1 1 2V2 k\f2 2J and i 3. then some mj = and /th and second rows..0 F to a matrix B where A D^ = «)• diag(Di.2. Dg 0) = U :] then S = ^. If 4 ^ 0. then Interchange the Jth first aji ?^ 0.3). It = Q..
6). 2 we need only interchange the third and second rows followed by the interchange of the 2 4 third and second columns of 13 2 1 2 10 10 2 4 10 2 to obtain 1 1 2 10 1 03 4320 Next. Find a nonsingular matrix P such that P'AP is in 1 canonical form (15.4). 15] CONGRUENCE 121 1 ''2 to obtain 1 £4 and from it.CHAP. in turn. two rows and the first 10 1 1 2 i k 1 1 2 03 10 10 1 and 1 5 5 i 2 1 2230 1 1 2 1 Finally. We have. otherwise. the process is repeated on F4 until B is obtained. the reduction is complete. 5. given 2 4 n n 1 2 1 4 3 Using a^s 4 0. multiply the third row and third column by 1/5 to obtain 1/2 1 1 10 1/10 1/5 Di P' Do 1 010 1/2 1 02 1/10 1 Thus when P 1/5 = 102 1 P'A? = diag (Di. obtain 1 1 Di F4 If /v= 0. given 1  i 3 + 2i A l + i 3 . then proceed to clear the two columns of nonzero elements. 6.2j .Dj). Find a nonsingular matrix P such that P'AP is in canonical form (15. by elementary row and column transformations of type 3. multiply the 10 10 1 4230 first 1 first row and first column by 2.
15 1 1j 3 + 2i j 1 1 1 1 [^1/] 1+i 2 i i 1 1 c 5 5 li 3 1 1 3  2j 1 13 + 2j 10 1 1 1 0—0 „ 25 13 „ 23i 1 5_ C 23f 1 13 1 13 3 13 1 5V13 5\/l3 „ \/l3 1 13 + 2i 01 3 + 2i VT. Find a nonsingular matrix P such that P'AP is in canonical form (15.122 CONGRUENCE [CHAP.5 Vl3 vc\p'^ 2+ 3t 32i \fl3 5\/l3 13 and 5\/T3 1 1 ^/I3 \/T3 7. given 1 l + 2i 5 4 1 .3i 4 .2s 2i HC 1 1 5j 1 + 2j 10 1 2+ 4 + 13 10 1 5i 23! 1 10 HC 10 5i 10 1 i 2 HC 10 2 1 i 5J 2Zi 1 5/2 22J 5J 2 1 HC 10 01 '10 10 \/T6 4  4j \/To x/Io x/To [cin 4 + 4i ao and 10 \/T0 10 —i vTo .6).2i 3f .2 2 j 2+ 1 4 + 1 2f 13 1 + 5 2i 2  3j 1 1 1 1 [41/] 1  2i 3i 4 .
2). CHAP. given (c) (d) 1 (6) 2 122 : = 1 2 A 3 = 3 A = " 2 3 2 3 (b) Ans. 1 1 2 1 . 10 Then P'AP = diag[i. given r I H. i ] SUPPLEMENTARY PROBLEMS 9. Find a nonsingular matrix P such that P'AP (a) 1 /I is in canonical form (15. 1  Ans. given "O (c) 1 2 2" " (d) 1 1 o1 2 1 (a) A [: :] (6) A = 2 31 1 2_ 1 ^ = 1 4 4 0. 1 1 1 Ans. IB] CONGRUENCE 123 8.2 2 11. "^V2 ^v^ 1 . Find a nonsingular matrix P such that P'AP 1 is in O' canonical form (15.3). (a) 1 2 1 3/2"' 1 1 (c) 01 23 "0010" A 2103 213 '1 1 01 1 (d) P = P = P = 10 10 3 P 2 1 1 = 1/3 2/3 2 1/3 1 r . (d) P = 1 10. A = 1 . (a) P (b) P  I (c) ^ = [::] I kV2 ky/2 k i . 1.4.)] (b) 2(1 J) i/\/2 (1 + 0/2 b P = (lJ)/\/2 (32i)/l3 (3 ^ J + 2i)/l3 11..j 1 2  4j 1 i + 2i 1+211 I ' l +j +i 2i l2i 3 5i _1 + 2i 1 + 4iJ 2 . Find a nonsingular matrix P such that P'AP is in i canonical form (15.4). i. given —1 —1 + l j A = 1 1 + 2t 2j + i l + i 2i 1 1 + i Consider the Hermitian matrix H J 2 2 i 2 1i 1 +i l2j 1 i 1 The nonsingular matrix P = is such that P'HP = diag[l. (a) P n 2(i2. Find a nonsingular matrix P such that P'AP is in 2i canonical form (15. .7). l].
Find a nonsingular matrix P such that P'AP is in 1i 1j 1 1 (a) A (c) A = 1j . 15 12.0 1 1. if P'SP = S when P = (S + TT^iS.T) is nonsingular. Li oj Let A be a nonsingular nsquare real symmetric matrix of index is even. (S 19. Show that T = S(/P)(/ + P)^ = S(I + PrHlP). (a) P [1 i + 1 sfl (6) P = (2f)/V5 (c) P = (2J) 1 J i/Vs canonical form (15. Li+j i I J l+i i 1 1 2i i 2+j (6) A 1 (i) A 1 _2 + i l2i l+i 6i . = Hint. Hint.6) for Hermitian 18. 1i i. Show that U  > if and only if n p 16. Take P BB' where B'AB = I and show that P'AP = A .S+T)'^r^. given 2 1 1 (a) A tl + l 1 3i  1 1 +i 3 i + 3 j 3  2i 3i" ' (6) A 1i 2 i 4 (c) ^ 1j 3 + 2f 34i 18 1 10 J 3 + 4t 1 1i (5i)/\/5 1 1j (2 + 51) 1 Ans. 20.Prove: A nonsingular symmetric matrix A is congruent to its inverse. (a) (lJ)/\/2 l/\/2 P [: 1 r] i 1 (c) P .7). 21. 15. 12J l' . I to obtain (15. given i 13. Find a nonsingular matrix P such that P'AP is in canonical form (15.T) is non P Hint. = (S+TT^(ST) P'SP =[(STf^(S+T)S~'^(ST)<. 2 + 3j 2 1 (rf) l/x/2 (l2i)/VT0 (2i)/yJT0 l/\/l0 l/\/2' i/\/2 (6) P = P = i/y/2 J J 14 If D = ° ^1 show that a 2square matrix C satisfies C'DC = D if and only if Cl=l. Show that P'SP = S when 7") . . Rewrite the discussion of symmetric matrices including the proof of Theorem matrices.6).T) and I+P is nonsingular. Show that congruence of nsquare matrices is an equivalence relation. p. then T is skewsymmetric. i Ans. Prove: It A £B then A is symmetric (skewsymmetric) if and only if B is symmetric (skewsymmetric). Let S be a nonsingular symmetric matrix and let T be such that (S+T)(S . 17. Let S be a nonsingular symmetric matrix and T be a skewsymmetric matrix such that (S + singular.i 124 CONGRUENCE [CHAP.
^. x^ «m) XxJi + 2x172  ISxija  4x271 + 15x272  ^s/s is a bilinear form in the variables {x^. x^ x„ ]'. See Problem Example 1. The matrix A of the coefficients is called the matrix of the bilinear form and the rank of A 1.72 7„]'. The most general written as bilinear form in the variables (x^.chapter 16 Bilinear Forms AN EXPRESSION and (y^. For example. The bilinear form 1 1 1 1 Ti ^iXi + ^irs + *2yi + ^2X2 + ^373 Ui. Xg x„) and {y^. y^.y) t n S =i S aix^y j=i ^ ' %1 %• *2> ••. yg). Xg) and (y. y^ which Jri) is linear is called a bilinear and homogeneous in each of the sets of variables fonn in these variables. is called the rank of the form.1) /(^. Om2%y2 + • • + ""mv^n Jn more briefly. (x^.% 1 y2 73 X'AY 125 . 012 ^22 • • • «in '7i' 72 % Ojl • • • «2n _0'n\ Om2 • • • Omra_ _Yn_ where X = [x^. and i' = [yi. ^ = [0^^]. %. as n (16.y^ Jn) maybe %i%yi + 021^271 + + oi2%y2 + • • • + + %r! X i7?i 022^72 + • • • fl2ra%7ra + ami^myi + or.
1) be replaced by new variables u's by means of the lin ear transformation Xi = 1. (16. F sire equivalent over F if and the rank of (16.3) Ti 2 J=i c^jVJ. only If if Two bilinear forms with mxn matrices A and B over they have the same rank.4) in (16. 16 CANONICAL FORMS. p' 1 1 1 1 1 1 1 1 X = PU V and y = QV V reduce X'AY to 1 1 1 1 1 1 1 1 1 1 l' 1 1 1 U%V U^V:^ + U^V^ + Ugfg 1 . Now applying the linear transformations U = V = lY we obtain a new bilinear form in the original variables X\B'AC)Y = X'DY Two bilinear forms are called equivalent if and only if there exist nonsingular transfor mationswhich carry one form into the other.. Any bilinear form over F of rank r can be reduced by nonsingular linear transfor+ u^v^ + ••• mations over F to the canonical form u^^v^ + u^i'r 1 1 1 1 Examples.2) Let the m x's of (16.2 n) or Y = CV IX. 10 10 1 10 1 1 1 1 1 1 1 10 1 10 1 A h 10 110 110 10 10 1 10 1 10 1 10 1 10 1 10 1 1 10 11110 1 11110 0110 1 h Thus.2) Ir and C = Q in (16. II. (f=l. For the matrix of the bilinear form X^AY = X' 1 Y ofE^xamplel. We have X'AY = (BU)"A(CV) = U'(BAC)V.3). V J (f=l. r.1) is there exist (see Chapters) nonsingular matrices P and Q such that FAQ Taking B = P' (16. I. the bilinear form is Ir reduced to UXPAQ)V U' V = U^V^ + ^2^2 + + U^V^ Thus. bijUj.2 m) or X = BU and the n y's be replaced by new variables v's by means of the linear transformation (16. 126 BILINEAR FORMS [CHAP.
. V. hence. and Y = CV the variables are said to be transformed cogredlently. («i. where A If is resquare.3^) Consider a bilinear form X'AY in the two sets of n variables When the «. We have . IV. is carried into the bilinear form U\C'AC)V. so also is C'AC. the bilinear form X'AY. A symmetric bilinear form remains symmetric under cogredient transformations of the variables. A symmetric bilinear form of rank transformations of the variables to can be reduced by nonsingular cogredient (16.7) complex field to %yi + «2y2 + ••• + x^y^ See Problem 3.5) %%yi II + 02*272 + + OrXryr From Theorems and IV.^4. A symmetric alternate bilinear form 1 S a. Two if bilinear forms over variables and only if their F are equivalent under cogredient transformations of the matrices are congruent over F. We have Under cogredient transformations X = CU and Y = CV. «2 ^) and = (yi. 16] BILINEAR FORMS 127 The equations Xj^ of the transformations are = u Ur. ij x '^iJj X'AY is called symmetric according as A is skewsymmetric Hermltlan Hermltlan alternate Hermltlan skewHermltlan COGREDIENT TRANSFORMATIONS.ys. A is symmetric. ChapterlS. the variables are said to be transformed contragredlently. we have r VI. CONTRAGREDIENT TRANSFORMATIONS. formation the x's are subjected to the transformation Let the bilinear form be that of the section above. Ti = = ^1 V^ + "3 1^3 X2 = = U. and 72 Ts Xg = ^3 See Problem 2 TYPES OF BILINEAR FORMS.i 7^4. A real symmetric bilinear form of rank r can be reduced by nonsingular cogredient transformations of the variables in the real field to (16. When X = (C~^yU and the y's are subjected to the trans Y = CV. From Theorem I.'s and y's are subjected to the same transforma tion X CU III. •••.CHAP. Chapter 15. follows VIl.6) «iyi in the + %y2 + + xpyp  a.1  •••  x^Jr and (16.
1 1 1 2 10 1 1 41 1 = Y = 10 . the bilinear form X'AY. In Problem 4. %yi + 2«iy2  13%yg  4a^yi + 15x^y^  1 2 13] Jx 1 2 13l x^y^ = %.128 BILINEAR FORMS [CHAP. »2 4 15 1 Ji 73 = X' 4 15 ij 2. X. where A IX. the nonsingular matrices 4 10 2 1 1 1 1/3 4/3 1/3 1/6 5/6 and Q = 1 1 mations 10 1 7/6 are such that /g PAQ Thus. to (16.5) in the rational field. 3 5 8 2 Reduce the symmetric bilinear form X'AY = X' 2 3 2 3 Y 10 by means of cogredient transfor 5 8 10 8 mations field. Reduce %yi + 2x^y^ + 3%y3 ical form. and (c) (16.7) in the complex 1 (o) Prom Example 1. Chapter 5. (b) (16. y 6 ^ X = P'U or I :x:2 =  U3 "3 and Y = QV 6 ^ 6 or xs = 7a reduce X'AY to u^v^ + U2t)2. the linear transfor 14 X^ = 1 Ui — 2U2 U2 — Uq . FACTORABLE BILINEAR FORMS. is nsquare. is carried into the bilinear form U\C AC)V. 16 VIII. the linear transformations X 2 10 1 4 1 1 u.6) in the real field. 12 3. Under contragredient transformations X = (CK^T )'U and Y = CV. Chapter 15. is The bilinear form X'lY = x^y^ + x^y^ + ••• + %y„ transformed into itself if and only if the two sets of variables are transformed contragrediently. we prove if A nonzero bilinear form is factorable and only if its rank is one. SOLVED PROBLEMS 1.  Ix^y^ + 2%yi  2»^y2 + x^y^ + 3x2y4 + "ix^y^ + 4%y3 + x^y^ to canon 1 2 32 1 The matrix of the form is A 22 3 3 1 By Problem 6.
CHAP.y) and so f(x. Thus the rank of /4 is 1. Chapter 15. 1 5 2 1 2 1 1 5 2 1 2 (6) Prom Example 2. the linear transformations X 5 2J 1 i 1 2 u. 16] BILINEAR FORMS 129 reduce XAY to uj^'i — "2% + ^usfg. Now the inverses of the transformations = jnr •] and = 2 '^jyj carry u^u^ into (S rijXj)(l s^jyp = f(x. suppose that the bilinear form is of rank 1. Conversely. as aij aj5 a^bj fk^'j aib^ bib^ «fe*s_ «t H "k L°fei "ks_ "k vanishes. Prove: A nonzero bilinear form f(x. we may obtain 000' 2 1 1 1 5 2 1 i C 10 10 10 1 5202 2j i 1 1110 1 Thus.y) is factorable is factorable so that if and only if its rank is 1. the linear transformations X 1 u. 4. . Prom the result of Example 1 1 Chapter 1 15.y) is factorable. 1 1 1 Y = 5 2J 1 1 2 t V reduce X'AY to 1 2 2 U^V^ + U2V2 + UgVg."3%2. Then by Theorem I there exist nonsingular linear transformations which reduce the form to U'(BAC)V = u^v^. any second order minor of A = [a^]. Clearly. hence. 1 Y = 11 1 1 2 i reduce X'AY to u^v^ + ug^s (c) . a^j = b^cj. Suppose the form and.
Y = 1 . Hint. 11. C2 are nonsingular nsquare matrices such that B^A^C^ = B^A^C^ = find the transfor mation which carries X'A^Y into UA2V..Y = PV Y . to show that there exist cogredient transformations duce an alternate bilinear form of rank r = 2t to the canonical form X = PU.7). Y = PV which re U1V2  W2VL + U3V4. = PV.^72 3 2 (c)  ^sTs 7 (rf) 2510 (6) 10 8 1 4 7 3 5 5 3 8 5 6 r 4 11 2 7 y. i 1 2—2 i 2 9. (a) C 2 1 (6) C = \/3/3 \/3/3 Ir 7. in 8. 3 4 14 1 2 7 1 'l 3 4V3/3 1 Ans. Y = C^C^V 5."sf^eti 14. reciprocal bilinear forms result. 10.4) (a) x^j^ . X' 1 5510 6.2^73 + "ix^j^ + ^272 . Chapter 15.4ns. If Bj^. Ci . Prove that an orthogonal transformation Prove: Theorem IX.130 BILINEAR FORMS [CHAP. Use Problem 4. 2 2 r 4 5 8 12 6 3 5 10 Obtain cogredient transformations which reduce 1 2 5 3 1 3 1 (a) r 2 4 y and (6) X' 3 1 10 2 2 5_ y to canonical form (16. 1 12. Arts. Chapter terms of a pair of bilinear forms. y 2 i 2 i ± 2 2 ± ~2 ± . 1 2 £/.^sTi . 13.  1 1 1 1 1 .6) and (15.6).3*273 . See (15. is contragredient to itself. If X'AY is a real nonsingular bilinear form then XA Y is called its reciprocal bilinear form. Interpret Problem 23. Show that when reciprocal bilinear forms are transformed cogrediently by the same orthogonal transformation. Write the transformation contragredient to . Determine canonical forms for Hermitian and alternate Hermitian bilinear forms. Obtain linear transformations which reduce each of the following bilinear forms to canonical form (16. that is .u^V3 + • + ugti "^t . . 16 SUPPLEMENTARY PROBLEMS 5. X = (B'^B^iU. B2.
chapter 17

Quadratic Forms

A HOMOGENEOUS POLYNOMIAL of the type

(17.1)   q = X'AX = Σi Σj aij xi xj

whose coefficients aij are elements of F is called a quadratic form over F in the variables x1, x2, ..., xn.

Example 1. q = x1² + 2x2² − 7x3² − 4x1x2 + 8x1x3 is a quadratic form in the variables x1, x2, x3.

The matrix of the form may be written in various ways according as the cross-product terms −4x1x2 and 8x1x3 are separated to form the terms a12x1x2, a21x2x1 and a13x1x3, a31x3x1. We shall agree that the matrix of a quadratic form be symmetric and shall always separate the cross-product terms so that aij = aji. Thus,

                                          |  1 −2  4 |
    x1² + 2x2² − 7x3² − 4x1x2 + 8x1x3 = X'| −2  2  0 |X
                                          |  4  0 −7 |

The symmetric matrix A = [aij] is called the matrix of the quadratic form and the rank of A is called the rank of the form. If the rank is r < n, the quadratic form is called singular; otherwise, non-singular.

TRANSFORMATIONS. The linear transformation over F, X = BY, carries the quadratic form (17.1) with symmetric matrix A over F into the quadratic form

(17.2)   (BY)'A(BY) = Y'(B'AB)Y

with symmetric matrix B'AB. Two quadratic forms in the same variables x1, x2, ..., xn are called equivalent if and only if there exists a non-singular linear transformation X = BY which, together with Y = IX, carries one of the forms into the other. Since B'AB is congruent to A, we have

I. The rank of a quadratic form is invariant under a non-singular transformation of the variables.

II. Two quadratic forms over F are equivalent over F if and only if their matrices are congruent over F.

From Problem 1, Chapter 15, it follows that a quadratic form of rank r can be reduced by a non-singular linear transformation X = BY to

(17.3)   h1y1² + h2y2² + ... + hr yr²,   hi ≠ 0
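The matrix convention and the reduction to the diagonal form (17.3) can both be checked numerically. The sketch below evaluates q = X'AX for the matrix of Example 1 and verifies that one diagonalizing substitution X = BY — the entries of B are our own computation, obtained by completing the square — leaves only squared terms with coefficients 1, −2, 9.

```python
def quad_form(A, x):
    # q = X'AX = sum over i, j of a_ij * x_i * x_j
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# symmetric matrix of Example 1
A = [[1, -2, 4], [-2, 2, 0], [4, 0, -7]]

def q_poly(x1, x2, x3):
    return x1*x1 + 2*x2*x2 - 7*x3*x3 - 4*x1*x2 + 8*x1*x3

# one non-singular substitution X = BY that removes the cross products
B = [[1, 2, 4], [0, 1, 4], [0, 0, 1]]

def substitute(y):
    return [sum(B[i][j] * y[j] for j in range(3)) for i in range(3)]
```

Feeding the unit vectors of the y's through the substitution recovers the diagonal coefficients h1 = 1, h2 = −2, h3 = 9 of the reduced form.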
Example 2.  Reduce q = X' [1 -2 4; -2 2 0; 4 0 -7] X of Example 1 to the form (17.3).

    Operating on [A  I] with column transformations on A and the corresponding row transformations on the whole array gives [D  B'], from which D = diag(1, -2, 9) and B = [1 2 4; 0 1 4; 0 0 1]. Thus the transformation X = BY, that is,

        x1 = y1 + 2y2 + 4y3,    x2 = y2 + 4y3,    x3 = y3

    reduces q to q' = y1^2 - 2y2^2 + 9y3^2.

LAGRANGE'S REDUCTION. The reduction of a quadratic form to the form (17.3) can also be carried out by a procedure, known as Lagrange's reduction, which consists essentially of repeated completing of the square.

Example 3.
    q  =  x1^2 + 2x2^2 - 7x3^2 - 4x1x2 + 8x1x3
       =  {x1^2 - 4x1(x2 - 2x3) + 4(x2 - 2x3)^2} + 2x2^2 - 7x3^2 - 4(x2 - 2x3)^2
       =  (x1 - 2x2 + 4x3)^2 - 2(x2^2 - 8x2x3) - 23x3^2
       =  (x1 - 2x2 + 4x3)^2 - 2(x2 - 4x3)^2 + 32x3^2 - 23x3^2
       =  (x1 - 2x2 + 4x3)^2 - 2(x2 - 4x3)^2 + 9x3^2

    Thus y1 = x1 - 2x2 + 4x3, y2 = x2 - 4x3, y3 = x3, that is, x1 = y1 + 2y2 + 4y3, x2 = y2 + 4y3, x3 = y3, reduces q to y1^2 - 2y2^2 + 9y3^2, as in Example 2.
    See Problem 3.

REAL QUADRATIC FORMS. Let the real quadratic form q = X'AX be reduced by a real nonsingular transformation to the form (17.3). If one or more of the h_i are negative, there exists a nonsingular transformation X = CZ, where C is obtained from B by a sequence of row and column transformations of type 1, which carries q into

(17.4)    s1·z1^2 + ... + sp·zp^2 - s(p+1)·z(p+1)^2 - ... - sr·zr^2,      each s_i > 0

in which the terms with positive coefficients precede those with negative coefficients.
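The reduction claimed in Example 2 can be checked mechanically. The sketch below is mine, not the book's: it multiplies out B'AB for the matrices of Examples 1 and 2 and should recover the diagonal matrix diag(1, -2, 9).

```python
# A check of Example 2: the substitution X = BY read off from Lagrange's
# completion of the square should turn A into diag(1, -2, 9).

def matmul(P, Q):
    """Multiply two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

A = [[1, -2, 4],
     [-2, 2, 0],
     [4, 0, -7]]    # matrix of q = x1^2 + 2x2^2 - 7x3^2 - 4x1x2 + 8x1x3

B = [[1, 2, 4],
     [0, 1, 4],
     [0, 0, 1]]     # x1 = y1 + 2y2 + 4y3, x2 = y2 + 4y3, x3 = y3

D = matmul(transpose(B), matmul(A, B))
print(D)            # -> [[1, 0, 0], [0, -2, 0], [0, 0, 9]]
```

The same two helper functions work for any of the congruence reductions in this chapter.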
Now the nonsingular transformation

    z_i = w_i/sqrt(s_i),  (i = 1, 2, ..., r);      z_j = w_j,  (j = r+1, r+2, ..., n)

carries (17.4) into the canonical form

(17.5)    w1^2 + w2^2 + ... + wp^2 - w(p+1)^2 - ... - wr^2

Thus, since the product of nonsingular transformations is a nonsingular transformation, we have

  III. Every real quadratic form can be reduced by a real nonsingular transformation to the canonical form (17.5), where r is the rank of the given quadratic form and p, the number of positive terms, is called the index.

The difference between the number of positive and the number of negative terms, p - (r - p), is called the signature of the quadratic form.

SYLVESTER'S LAW OF INERTIA. In Problem 5, we prove the law of inertia:

  IV. If a real quadratic form is reduced by two real nonsingular transformations to canonical forms (17.5), they have the same rank and the same index.

Thus, the index of a real symmetric matrix depends upon the matrix and not upon the elementary transformations which produce (17.5). As a consequence of Theorem IV, we have

  V. Two real quadratic forms each in n variables are equivalent over the real field if and only if they have the same rank and the same index, or the same rank and the same signature.

Example 4. In Example 2, q = x1^2 + 2x2^2 - 7x3^2 - 4x1x2 + 8x1x3 was reduced to q' = y1^2 - 2y2^2 + 9y3^2. The nonsingular transformation y1 = z1, y2 = z3, y3 = z2 carries q' into q'' = z1^2 + 9z2^2 - 2z3^2, and the nonsingular transformation z1 = w1, z2 = w2/3, z3 = w3/sqrt(2) carries q'' into the canonical form w1^2 + w2^2 - w3^2. Combining the transformations, the nonsingular transformation

    x1 = w1 + (4/3)w2 + sqrt(2)·w3,    x2 = (4/3)w2 + (1/2)sqrt(2)·w3,    x3 = (1/3)w2

reduces q to w1^2 + w2^2 - w3^2. The quadratic form q is of rank 3 and index 2.

COMPLEX QUADRATIC FORMS. Let the complex quadratic form X'AX be reduced by a nonsingular transformation to the form (17.4). It is clear that a transformation of the type used above, with w_i/sqrt(-s_i) replacing w_i/sqrt(s_i) for the negative terms, now carries (17.4) into a form in which every nonzero coefficient is +1.
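Theorems III and IV say that the rank r, the index p, and the signature p - (r - p) are intrinsic to a real symmetric matrix. As a hedged illustration (my code, not the book's), the routine below diagonalizes a rational symmetric matrix by congruence, using exact `Fraction` arithmetic, and reads off these invariants from the signs of the diagonal.

```python
# Diagonalize a real symmetric matrix by congruence and return
# (rank, index, signature); exact rational arithmetic avoids rounding.
from fractions import Fraction

def inertia(A):
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    for k in range(n):
        if M[k][k] == 0:
            for j in range(k + 1, n):
                if M[j][j] != 0:
                    M[k], M[j] = M[j], M[k]              # row swap ...
                    for row in M:
                        row[k], row[j] = row[j], row[k]  # ... and same column swap
                    break
            else:
                for j in range(k + 1, n):
                    if M[k][j] != 0:
                        # add row/column j to row/column k: new a_kk = 2*a_kj != 0
                        for t in range(n):
                            M[k][t] += M[j][t]
                        for t in range(n):
                            M[t][k] += M[t][j]
                        break
        if M[k][k] == 0:
            continue                                     # zero row: contributes nothing
        for j in range(k + 1, n):
            c = M[j][k] / M[k][k]
            for t in range(n):
                M[j][t] -= c * M[k][t]                   # row operation
            for t in range(n):
                M[t][j] -= c * M[t][k]                   # matching column operation
    d = [M[i][i] for i in range(n)]
    p = sum(1 for x in d if x > 0)
    q = sum(1 for x in d if x < 0)
    return p + q, p, p - q

# the form of Examples 2 and 4: rank 3, index 2, signature 1
print(inertia([[1, -2, 4], [-2, 2, 0], [4, 0, -7]]))
```

By the law of inertia, the triple returned does not depend on which sequence of congruence operations happens to be applied.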
That is, over the complex field (17.4) is carried into the canonical form

(17.6)    w1^2 + w2^2 + ... + wr^2

Thus,

  VI. Every quadratic form over the complex field of rank r can be reduced by a nonsingular transformation over the complex field to the canonical form (17.6).

  VII. Two complex quadratic forms each in n variables are equivalent over the complex field if and only if they have the same rank.

DEFINITE AND SEMIDEFINITE FORMS. A real nonsingular quadratic form q = X'AX in n variables is called positive definite if its rank and index are equal, i.e., r = p = n. Thus, in the real field a positive definite quadratic form can be reduced to y1^2 + y2^2 + ... + yn^2 and, for any nontrivial set of values of the x's, q > 0.

A real singular quadratic form q = X'AX is called positive semidefinite if its rank and index are equal, i.e., r = p < n. Thus, in the real field a positive semidefinite quadratic form can be reduced to y1^2 + y2^2 + ... + yr^2, r < n, and, for any nontrivial set of values of the x's, q ≥ 0.

A real nonsingular quadratic form q = X'AX is called negative definite if its index p = 0, r = n. Thus, in the real field a negative definite form can be reduced to -y1^2 - y2^2 - ... - yn^2 and, for any nontrivial set of values of the x's, q < 0. Similarly, a real singular quadratic form is called negative semidefinite if its index p = 0, r < n; in the real field it can be reduced to -y1^2 - ... - yr^2, and for any nontrivial set of values of the x's, q ≤ 0. Clearly, if q is negative definite (semidefinite), then -q is positive definite (semidefinite).

For positive definite quadratic forms we have

  VIII. If q = X'AX is positive definite, then |A| > 0.

DEFINITE AND SEMIDEFINITE MATRICES. The matrix A of a real quadratic form q = X'AX is called definite or semidefinite according as the quadratic form is definite or semidefinite.

  IX. If A is positive definite, every principal minor of A is positive; if A is positive semidefinite, every principal minor of A is non-negative.  See Problem 8.

  X. A real symmetric matrix A of rank r is positive semidefinite if and only if there exists a matrix C of rank r such that A = C'C.  See Problem 7.

  XI. A real symmetric matrix A is positive definite if and only if there exists a nonsingular matrix C such that A = C'C.

PRINCIPAL MINORS. A minor of a matrix A is called principal if it is obtained by deleting certain rows and the same numbered columns of A. Thus, the diagonal elements of a principal minor of A are diagonal elements of A.

In Problem 6 we prove

  XII. Every symmetric matrix of rank r has at least one principal minor of order r different from zero.
For a symmetric matrix A = [a_ij] over F we define the leading principal minors

(17.7)    p0 = 1,   p1 = a11,   p2 = |a11 a12; a21 a22|,   p3 = |a11 a12 a13; a21 a22 a23; a31 a32 a33|,   ...,   pn = |A|

  XIII. A real quadratic form X'AX is positive definite if and only if its rank is n and all of its leading principal minors are positive.

  XIV. A real quadratic form X'AX is positive semidefinite if and only if every principal minor of A is equal to or greater than zero.

  XV. If A is a nonsingular symmetric matrix and p(n-1) = 0, then p(n-2) and p_n have opposite signs.

  XVI. Any n-square nonsingular symmetric matrix A can be rearranged by interchanges of certain rows and the interchanges of corresponding columns so that not both p(n-1) and p(n-2) are zero.  See Problem 10.

REGULAR QUADRATIC FORMS. The sequence p0, p1, ..., pr of a symmetric matrix A of rank r is said to be regularly arranged if no two consecutive p's of the sequence are zero. Let A be a symmetric matrix of rank r. By Theorem XII, A contains at least one non-vanishing r-square principal minor M whose elements can be brought into the upper left corner of A; by Theorem XVI, the first r rows and the first r columns may then be rearranged so that at least one of p(r-1) and p(r-2) is different from zero. If p(r-1) ≠ 0 but p(r-2) = 0, we apply the same procedure to the matrix of p(r-1); if p(r-2) ≠ 0, we apply it to the matrix of p(r-2); and so on, until M is regularly arranged. Thus,

  XVII. Any symmetric matrix (quadratic form) of rank r can be regularly arranged.  See Problem 9.

When A is regularly arranged, the quadratic form X'AX is said to be regular.

Example 5. For the quadratic form

    X'AX  =  X' [1 1 2 1; 1 1 2 2; 2 2 3 4; 1 2 4 1] X

we find p0 = 1, p1 = 1, p2 = 0, p3 = 0, p4 = 1. Since two consecutive p's are zero, the given form is not regular. Here a34 = 4 ≠ 0, and the interchange of the last two rows together with the interchange of the last two columns yields

(i)    X' [1 1 1 2; 1 1 2 2; 1 2 1 4; 2 2 4 3] X

for which p0 = 1, p1 = 1, p2 = 0, p3 = -1, p4 = 1. Since no two consecutive p's are zero, the quadratic form (i) is regular.
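Theorem XIII turns positive definiteness into a finite determinant test. The following sketch (mine, not the book's) computes the leading principal minors p1, ..., pn exactly and applies the test to two small matrices.

```python
# Theorem XIII as code: X'AX is positive definite exactly when every
# leading principal minor p1, p2, ..., pn is positive.
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            sign = -sign
        d *= M[k][k]
        for i in range(k + 1, n):
            c = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= c * M[k][j]
    return sign * d

def leading_minors(A):
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

def is_positive_definite(A):
    return all(p > 0 for p in leading_minors(A))

print(is_positive_definite([[2, -1], [-1, 2]]))   # minors 2, 3 -> True
print(is_positive_definite([[1, 2], [2, 1]]))     # minors 1, -3 -> False
```

Note that the semidefinite test of Theorem XIV needs all principal minors, not only the leading ones, so this routine must not be used for it.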
KRONECKER'S METHOD OF REDUCTION. The method of Kronecker for reducing a quadratic form to one in which only squared terms of the variables appear rests on the following theorems; throughout, alpha_ij denotes the cofactor of a_ij in |A|.

  XIX. If q = X'AX is a quadratic form over F in n variables of rank r, there exists a nonsingular linear transformation over F which carries q into q1 = X'BX in which a nonsingular r-rowed minor C occupies the upper left corner of B and all other elements of B are zero; hence, q can be reduced to a nonsingular quadratic form X'CX in r variables.

  XX. If q = X'AX is a nonsingular quadratic form over F and alpha_nn ≠ 0, the nonsingular transformation

        x_i = y_i + alpha_in·y_n,  (i = 1, 2, ..., n-1);        x_n = alpha_nn·y_n

  carries q into

        sum over i, j = 1..n-1 of a_ij·y_i·y_j  +  p(n-1)·p_n·y_n^2

  in which one squared term in the variables has been isolated.

Example 6. For the quadratic form of Example 1,

    X'AX  =  X' [1 -2 4; -2 2 0; 4 0 -7] X,      p2 = alpha_33 = |1 -2; -2 2| = -2 ≠ 0

The cofactors of the elements of the third column of A are alpha_13 = -8, alpha_23 = -8, alpha_33 = -2; thus the nonsingular transformation

    x1 = y1 - 8y3,    x2 = y2 - 8y3,    x3 = -2y3

reduces X'AX to

    Y' [1 -2 0; -2 2 0; 0 0 36] Y

in which the variable y3 appears only in squared form, the coefficient of y3^2 being p2·p3 = (-2)(-18) = 36.

  XXI. If q = X'AX is a nonsingular quadratic form over F with alpha(n-1,n-1) = alpha_nn = 0 and alpha(n-1,n) ≠ 0, the nonsingular transformation whose matrix has e1, e2, ..., e(n-2) as its first n-2 columns and the columns

        [alpha(1,n-1) + alpha(1,n), ..., alpha(n,n-1) + alpha(n,n)]'   and   [alpha(1,n-1) - alpha(1,n), ..., alpha(n,n-1) - alpha(n,n)]'

  as its last two carries q into

        sum over i, j = 1..n-2 of a_ij·z_i·z_j  +  2·alpha(n-1,n)·p_n·(z(n-1)^2 - z_n^2)

  in which two squared terms with opposite signs have been isolated.

Example 7. For the quadratic form

    X'AX  =  X' [1 2 1; 2 4 3; 1 3 1] X

the nonsingular transformation x1 = y1 - 2y2 - y3, x2 = y2, x3 = y3 reduces X'AX to

    Y' [1 0 0; 0 0 1; 0 1 0] Y  =  y1^2 + 2y2y3

and the transformation y1 = z1, y2 = z2 + z3, y3 = z2 - z3 then carries this into

    Z' [1 0 0; 0 2 0; 0 0 -2] Z  =  z1^2 + 2z2^2 - 2z3^2

Consider now a quadratic form q in n variables of rank r. By Theorem XIX, q can be reduced to q1 = X'AX where A has a nonsingular r-square minor in the upper left-hand corner and zeros elsewhere; by Theorem XVI, A may be regularly arranged. If p(r-1) ≠ 0, Theorem XX can be used to isolate one squared term

(17.8)    p(r-1)·p_r·y_r^2

If p(r-1) = 0 but the cofactor of a(r-1,r-1) in the r-rowed minor is different from zero, an interchange of the last two of its rows and the last two of its columns yields a matrix whose new p(r-1) ≠ 0, and Theorem XX can be used twice to isolate two squared terms

(17.9)    p(r-2)·p'(r-1)·y(r-1)^2    and    p'(r-1)·p_r·y_r^2

which have opposite signs, since p(r-2) and p_r have opposite signs by Theorem XV. If p(r-1) = 0 and the cofactor of a(r-1,r-1) is also zero, the cofactor alpha(r-1,r) is different from zero and Theorem XXI can be used to isolate two squared terms

(17.10)    2·alpha(r-1,r)·p_r·(y(r-1)^2 - y_r^2)

having opposite signs. In (17.9) and (17.10), then, the subsequence p(r-2), p(r-1), p_r contributes one permanence and one variation of sign however the vanishing p(r-1) is counted. This process may be repeated until the given quadratic form is reduced to one containing only squared terms of the variables; we have proved

  XXII. If q = X'AX, a regular quadratic form of rank r, is reduced to canonical form by the method of Kronecker, the number of positive terms is exactly the number of permanences of sign and the number of negative terms is exactly the number of variations of sign in the sequence p0, p1, p2, ..., pr, where a zero in the sequence can be counted either as positive or as negative but must be counted.
        See Problems 11-13.

FACTORABLE QUADRATIC FORMS. Let q = X'AX ≠ 0, with complex coefficients, be a given quadratic form and suppose that it factors,

(i)    X'AX  =  (a1x1 + a2x2 + ... + anxn)(b1x1 + b2x2 + ... + bnxn)

If the two factors are linearly dependent, at least one a_i ≠ 0; let the variables and their coefficients be renumbered so that a1 ≠ 0. The nonsingular transformation

    y1 = a1x1 + a2x2 + ... + anxn,    y2 = x2,  ...,  yn = xn

transforms (i) into k·y1^2, of rank 1. If the two factors are linearly independent, the nonsingular transformation

    y1 = a1x1 + a2x2 + ... + anxn,    y2 = b1x1 + b2x2 + ... + bnxn,    y3 = x3,  ...,  yn = xn

transforms (i) into y1y2, of rank 2. Conversely, if X'AX has rank 1 or rank 2, it may be reduced respectively by Theorem VI to y1^2 or to y1^2 + y2^2 = (y1 + i·y2)(y1 - i·y2), each of which is a product of two linear factors in the complex field. We have proved

  XXIII. A quadratic form X'AX with complex coefficients is the product of two linear factors if and only if its rank is r ≤ 2.
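The single Kronecker step of Theorem XX, as applied in Example 6, can be verified by explicit matrix multiplication. The check below is my own numerical confirmation: with alpha_i3 the cofactors of the third column, the congruence B'AB should isolate the squared term p2·p3·y3^2 = 36·y3^2.

```python
# Verify the Theorem XX step on the matrix of Example 6: the cofactor
# substitution x1 = y1 - 8y3, x2 = y2 - 8y3, x3 = -2y3 must zero out the
# cross products with y3 and leave p2*p3 = 36 as the y3^2 coefficient.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(c) for c in zip(*P)]

A = [[1, -2, 4],
     [-2, 2, 0],
     [4, 0, -7]]

a13, a23, a33 = -8, -8, -2        # cofactors of the third column of A
B = [[1, 0, a13],
     [0, 1, a23],
     [0, 0, a33]]

print(matmul(transpose(B), matmul(A, B)))
# -> [[1, -2, 0], [-2, 2, 0], [0, 0, 36]], i.e. p2*p3 = (-2)(-18) = 36
```

The zeros in the last row and column are exactly the expansion-by-alien-cofactors identities, which is why the theorem works for any nonsingular symmetric A with alpha_nn not zero.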
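Theorem XXIII can also be spot-checked: expand a product of two linear forms by hand, write its symmetric matrix, and confirm the rank is at most 2. The example product below is my own construction, not one from the text.

```python
# (x1 + 2x2)(3x1 - x2 + x3) = 3x1^2 + 5x1x2 + x1x3 - 2x2^2 + 2x2x3;
# its symmetric matrix must have rank <= 2 by Theorem XXIII.
from fractions import Fraction

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    r, n = 0, len(M[0])
    for c in range(n):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[3, Fraction(5, 2), Fraction(1, 2)],
     [Fraction(5, 2), -2, 1],
     [Fraction(1, 2), 1, 0]]       # symmetric matrix of the product above

print(rank(A))   # -> 2
```

Since the two factors here are linearly independent, the rank comes out exactly 2; a repeated factor would give rank 1.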
3). we have 16*2*3 + ** .8x^x^ . (a) q = = „2 .32*2  = (^ + ^ + 4^^)2 ryi + 4 (*g .CHAP. We find 1 [^1/] C 8 1 1 [D\P'] •2 1 1 i 2 4 1 1 Thus.2y^ 3.2 2 2x^ + 5^2 + ..„ 19*3 .2x^+Zx^+2x^) + {2x^ + 3x^ + 2x^f\ + bxl + 19*2 .)2 ^q_^ = 2 (*i 2 (*i + 2*2 + 3*g + 2*^)2 . 17] QUADRATIC FORMS 139 SOLVED PROBLEMS 12 1.8x^x^ . 12 [^1/] 2 3 2 3 5 8 3 5 8 2 8 1 1 1 1 1 c ^\J 0100 4 10 2100 10 iD'.P'] 4 1 10 10 8 1 1110 Y reduces q to the required form y^ 2 1 10 1 4 1 1 1 Thus. 3 5 2 8 Reduce q = X'AX = X' 2 3 2 3 5 8 8 10 X to the form (17..IGx^x^ 2{xl+ 2x^(.24x^+ 8a:^a=2 + I2x^x^ + Sx^x^ + l%x^x^ .24*2 ^ jg^^^^ ~ 8*2^^ g^^^^ _ ^^ "2*3 _ = _ 2(2*^ + 3v +2*. 108 Prom Example 1. 00^ (i) For the quadratic form of Problem 9 2. the transformation X = PY = yl^'^yl 1 2 2 2.2*4)2 + 2*4 *i + 2*2 + 3*3 Thus..3).IGx^x^ 2{xf+ 2x^(2x^ + 3x^+2x^)1 + 5x^+ 19x1 " 24*4 + ^S^j^g . *2 + 4*^*2 + 4*^*3 + 4*2 + tenii in (*^ + 2*2 + 2*3)2 + 8*2*3 Since there is no term in *2 or *2 but a *2*3 we use the nonsingular transformation . the transformation X = PY i Y 2 reduces q to y^ + 8y^ . the transformation \yi = Irs = *2 + *s + 4*4 *32*t *4 'Educes. Reduce q = X'AX = X' 2 2 4 8 8 4 X to the form (17. . to 2y2_3y22+4y2. Lagrange reduction.3 + 2*2 + 3*g + 2*^)2 _ 3 {*^ + 2*2(*g+ 4*4) ! + *3 . . Chapter 15.
+ yg + 2 2 2 y^ and <ii) y' + yl + ••• + yq  yq. the nonsingular transformation X =  2 I 1 y effects the reduction. Xs  Z2 + ^3 to obtain q = (2i + 4z2+2z3f' +82^+82223 = (z^ + 422+2z3)2+8(22+2Z3f . ^3(2  V2 ). Z X\ hence. 17 ( i ) Xl = Zl .2 5.i  yj.2 = 22 . A.2z2  = y^ + ^y^  2yi^ 1 1 4 2 1 1 1 1 4 2 1 1 1 2 2 Now Y X 2 1 Z and from ( i).2  •••  y' then p = q Suppose q>p.2 iv^ .i». we have 10 [^!/] 1 1 1 1 £ 8 4 c 10 01 10 [cic'] 02 2 1 2 iVl ii\/2 V2 k^f2 \\f2 y i\f2 reduces q = X' 1 2 4 8 2 Thus. Prove: If a real quadratic form q is carried by two nonsingular transformations into two distinct reduced forms (i) y. the transformation X = (^y 2 2 8 4 X to the canonical form 2. 1 6" Y = i 1 X 1 = ii 1 1 1 '1 1 4 1 1 Thus. Using the result of Problem 2.140 QUADRATIC FORMS [CHAP. 2 4. Let mation which produces ( ii X ). = FY be the transformation which produces (1) and X = GY be the transfor Then filial + + bi2X2 + 622*^2 • • • + binxn + b^nXn 621X1 + • • • F~^X = bjiixi + bn2X2 + • • + bnnpcn and cii% + ci2«2 + C21X1 + C22«2 + • • • + Cin^n • • • + C2re«n G'^X cr!. 1 1 [^1/] C 8 4 2 1 1 2 i 2 and applying the tiansfoimations HgCiV^).i + cn2*:2 + • + cnnXn . K2(i+\/2) and ^3(5 a/2).
b+i.rfJ^ • (Vl«l + V2«2+.2*2 + «i + • • • (Ot^. this requires that each of the squared terms be zero. 17] QUADRATIC FORMS 141 respectively carry (i) and (ii) back into q. ( substituted into Chapter 10. they have a nontrivial solution.s2 stands in the rows numbered Ji..i*i + '^(7+1. Thus.2*2 + 't?+2.+ V?I«n) + (<^7i«i + ^(?2«2 + • • • = (^11 %2«2 + • • + c. rows are linearly independent while all other rows are linear combinations of them. Since A is of rank r.l*l % + 6i2 *2 + • • + *in*n = ° + 6^ + '^g + M*re = ''21*1+^22*2 + + '^9+2. Clearly.. Suppose that it these rows be moved above to become the first r rows of the i^ be moved in front to be the first r columns. this is a princi pal minor of A 7. + {bp^'^^ + bp^x^\.+ bp^Xyf = (CiiA:^ + c^2*2+. Since A is symmetric. Prove: matrix A C real symmetric matrix r A of rank r is positive semidefinite if and only if there exists a of rank such that A = C'C.2*2 + bpiH By Theorem + *. Hence. we have + ^^+1.2*2 + l.^n'^nf + + >an)^ Clearly. we now have the first r Now «ili2 • H2i2 "Vii %i2 •• "vv in which a nonvanishing minor stands in the upper left hand corner of the matrix. But then neither F nor G is nonsingular. the same operations on the columns will reduce the last nr columns to zero.l^l''' "^g +1.+ c x^f — Consider the *ii + l.i2 matrix and let the columns numbered I^^t one rsquare minor which is different from zero. of the above argument under the assumption that 6.i°'i + bp^i. (iii) (''ii*i + *i2*2+.. A repetition a contradiction. r has at least one principal minor of order r different it has at least 'rfi. By taking proper linear combinations of the first r rows and adding to the other rows.^2*2 + • • • + 6jbn% = •^ri *i + ^¥2*2 + • • • + c^^x^ IV.+ *in^n)^ + . Hence q=p. say iii ).ttg 0!^^). contrary to the hypothesis. q<p will also lead to q<p./!^^ rq+p < • r equations ^g+i.+ c^„«^f + ('^g ••• "*" + (cq^x^ + '^q c x^+. Thus. CHAP. When this solutionis  (*. r. 
Since A is of rank its canonical form i '[t:] Then there exists a nonsingular matrix B . Prove: Every symmetric matrix A of rank from zero. these last nr rows can be reduced to zero.
Prove: If A is positive definite.n PnsPn a„ n. ps = 4. ni and a„ Vi. rows and the same columns of A.. p* = 3. Thus. (^2 ^5... I thus. Prove: Any rasquare nonsingular matrix A = [ay] can be rearranged by interchanges of certain rows and the interchanges of corresponding columns so that not both p„_^ and p„_2 are zero. the theorem is true for A of order 1 and of order 2. obtained from A by deleting two. 2 s) and *ii + *i2 + •• + ^jn = 0' ii = s+l.. then every principal minor of A is positive. Since \A\ J^ 0. A^^j^. let C be a real nsquare matrix of rank form be then A = C'C is of rank s<r. 8.0 0) Set where each di is either +1 or 1.. every principal minor is positive. ps ^ 0.>0 and A is positive semi definite. 1111 Here po 2 = 1.ni. . is positive. r. the interchange of the second and third rows and of 4 2 3 . each d. = C'C Since A'i'= Ai = A^ .n a. Aj_ is positive definite. at moved to occupy the position Suppose (6) all a^i = 0. we examine the matrix 1 3 4 of p3 . the After the ith row and the ith column have been new matrix has p„_^ = CC^j^ 0. Then there exists a nonsingular real matrix E such that E'(C'C)E = CE = B = [bij\. A^j> 0. . This argument may be repeated for the principal minors . ps = 0.. The cofactor B22 = 2 ?^ . Note that this also proves Theorem XV. 1 1 1 Renumber the variables so that q = X'AX = X' 2 13 3 4 X is regular.s+2 n) Clearly. a a In the least one a^i i^ new matrix 0. 2 10. 2 Since pi = P2 = 0. pi = 0. we have 2 A^g 2 2 (i = 1. The principal minor of A obtained by deleting its ith tow and column is the matrix A^ of the quadratic form q^ obtained from q by setting x^ = 0. Set C = A^B. 17 such that A of rank r = B%B.0.n Pn^^O. three. Suppose ?^ 01^„ = . Clearly. By Theorem VI. Now every value of qj_ for nontrivial sets of values of its variables is also a value of g and. Move the ith row into the (nl)st position «„_i.142 QUADRATIC FORMS [CHAP. Since B'B = N^.n = \ni ^ ^ ^^ (^^^^ '^® ^^''^ ^ni. ?^ Moreover.ni ^ni. 
Let its canonical N^ = diag(di. Suppose some CL^ 0. Conversely. then either (a) some a^^ or (b) all a^^ = 0. Let q = XAX. and the ith column into the (nl)st position. hence. Ai> 0. we have A = B'NxB = B'N^Ny^B = B'N(N^B . of the last row and the last column. it is true for A of order n>2 when p^_^ = a^„ (a) j^ 0. A^j. then C is and A as required.n Vi.
11 . p^ .8) and (16. 3 6 9 2 1 Reduce by Kronecker's method q = X'AX = X' 2 3 4 6 3 2 5 X. 1112 1 .<:g p^ = 0.243y . p^ = 27 the reduced form will have two positive and two nega 1 2 4 I 1 1 tive terms. p^ =0.eo^g  looy^ 12 12. 1 2 Reduce by Kronecker's method q = X' 2 212 Here p^ .1. it can be reduced to 2 4 3 13 5 3 6 2 9 1 2 1 4 3 3 5 1 Now q has been reduced to XCX = X' 2 1 X for which p^ = 1. p^ = 4.CHAP. pg .20. 13 in 5 which C = 2 2 1 4 3 3 5 Since S is of rank 3. 13 Here A is of rank 3 and fflgg i 0. Since each pj / 0. 4 11 X. Pg — L 9. Consider the matrix B 2 1 of pg. p^ and q is regular.y) + pgp^y^ = yl + 54y 54^3^ . pg = 4. the reduced PoPiri + PxyizJi + yiiP^yl = yl + 4y  4y 12 13. p^ = 5 permanence and three variations in sign. Here. The reduced form will contain two positive and one negative term. 17] QUADRATIC FORMS 143 the second and third columns of A yields 2 1 r for 2 4 3 3 1 1111 which p^ 1. the reduced form will have one positive and three negative terms. p^ = 1.9) PoPiTi^ + 2/332 Pg (y . p^ 1.1. form is by (16. p^ = 3. — 4 6 4 repeated use of Theorem XIX yields the reduced form = PoPiTi + PiPgrl + PsPsrl + PsP^rl y\  "^yl . pg = 1.8) Since p„ = 1 but 1 = 4 5 i^ 0. 2 3 Here Pq . An interchange 1 of the last two rows and the last two columns carries 12 13 A into 5 2 4 3 6 2 1 12 10 ^ 0. X2 has been renumbered as xq and as xg 11. p^  0. Since /^gg = but /3g2 =3^0 the reduced for ij is by (16.1. Reduce by Kronecker's method q = X' 2 3 12 3 15 5 4 6 3 X. The sequence of p's presents one 3.
14. Prove: For a set of real n-vectors X1, X2, ..., Xp, the Gram determinant

        G  =  | X1·X1  X1·X2  ...  X1·Xp |
              | X2·X1  X2·X2  ...  X2·Xp |
              |  ..........................  |
              | Xp·X1  Xp·X2  ...  Xp·Xp |

    satisfies G ≥ 0, the equality holding if and only if the set is linearly dependent.

    (a) Suppose the vectors are linearly dependent. Then there exist scalars k1, k2, ..., kp, not all zero, such that f = k1X1 + k2X2 + ... + kpXp = 0. Then

        Xj·f  =  k1(Xj·X1) + k2(Xj·X2) + ... + kp(Xj·Xp)  =  0,    (j = 1, 2, ..., p)

    Since this system of homogeneous equations in the k's has a nontrivial solution, G = 0.
    (b) Suppose the vectors are linearly independent, and let X = [x1, x2, ..., xp]' ≠ 0 and Z = x1X1 + x2X2 + ... + xpXp. Then Z ≠ 0 and

        0  <  Z·Z  =  sum over i, j of (Xi·Xj)·x_i·x_j  =  X'GX

    Since this quadratic form is positive definite, G > 0. Thus G ≥ 0 always, with G = 0 precisely when the set is linearly dependent.

SUPPLEMENTARY PROBLEMS

15. Write the following quadratic forms in matrix notation:
    (a) x1^2 + 4x1x2 + 3x2^2
    (b) x1^2 - 2x2^2 - 3x3^2 + 4x1x2 + 6x1x3 - 8x2x3
    Ans. (a) X' [1 2; 2 3] X    (b) X' [1 2 3; 2 -2 -4; 3 -4 -3] X

16. Write out in full the quadratic form in x1, x2, x3 whose matrix is [2 3 1; 3 2 4; 1 4 5].
    Ans. 2x1^2 + 2x2^2 + 5x3^2 + 6x1x2 + 2x1x3 + 8x2x3

17. Reduce each of the given quadratic forms (a)-(d) to the form (17.3), both by the method of Problem 1 and by Lagrange's reduction; in (c) and (d) first use the device of Problem 3.
18. Prove: If C is any real nonsingular matrix, then C'C is positive definite.
    Hint. Consider X'(C'C)X = (CX)'(CX) = Y'Y.
19. Transform each of the given quadratic forms (a)-(h) into a canonical form.

20. Show that ax1^2 + 2bx1x2 + cx2^2 is positive definite if and only if a > 0 and |a b; b c| = ac - b^2 > 0.

21. Verify the stated effect of the transformation in each of Theorems XX and XXI.

22. Show that if q1 is a positive definite form in x1, ..., xs and q2 is a positive definite form in x(s+1), ..., xn, then q = q1 + q2 is a positive definite form in x1, ..., xn.

23. Show that if two real quadratic forms in the same variables are positive definite, so also is their sum.

24. Prove: If A is a real positive definite symmetric matrix and B and C are such that B'AB = I and A = C'C, then CB is orthogonal.
    Hint. Consider (CB)'(CB) = B'(C'C)B = B'AB = I.

25. Prove: A real symmetric matrix is positive (negative) definite if and only if it is congruent over the real field to I (-I).

26. Prove: Every positive definite matrix A can be written as A = C'C.
    Hint. Consider D'AD = I.

27. Prove: If A is a real positive definite symmetric matrix, so also is A^p for p any positive integer.

28. Prove: Every principal minor of a positive semidefinite matrix A is equal to or greater than zero.

29. Show that over the real field x1^2 + x2^2 - 3x3^2 - 4x2x3 and 9x1^2 + 2x2^2 + 2x3^2 + 6x1x2 - 6x1x3 - 8x2x3 are equivalent.

30. By Kronecker's reduction, transform each of the forms of Problem 19 into a canonical form, after renumbering the variables when necessary.
chapter 18

Hermitian Forms

THE FORM DEFINED. By

(18.1)    h  =  X̄'HX  =  sum over i = 1..n and j = 1..n of h_ij·x̄_i·x_j,      h_ij = h̄_ji

where H is Hermitian and the components of X are in the field of complex numbers, is called an Hermitian form. The rank of H is called the rank of the form; if the rank is r < n, the form is called singular, otherwise nonsingular.

Since H is Hermitian, every h_ii is real and every term h_ii·x̄_i·x_i is real. Moreover, each pair of cross-products

    h_ij·x̄_i·x_j + h_ji·x̄_j·x_i  =  h_ij·x̄_i·x_j + conjugate of (h_ij·x̄_i·x_j)

is real. Thus,

  I. The values of an Hermitian form are real.

The nonsingular linear transformation X = BY carries the Hermitian form (18.1) into another Hermitian form

(18.2)    (BY)‾'H(BY)  =  Ȳ'(B̄'HB)Y

From (18.2) we have

  II. The rank of an Hermitian form is invariant under a nonsingular transformation of the variables.

Two Hermitian forms in the same variables x1, x2, ..., xn are called equivalent if and only if there exists a nonsingular linear transformation X = BY which, together with Y = IX, carries one of the forms into the other. Since B̄'HB and H are conjunctive, we have

  III. Two Hermitian forms are equivalent if and only if their matrices are conjunctive.

REDUCTION TO CANONICAL FORM. An Hermitian form (18.1) of rank r can be reduced to the diagonal form

(18.3)    k1·ȳ1y1 + k2·ȳ2y2 + ... + kr·ȳryr,      each k_i real and ≠ 0

by a nonsingular linear transformation X = BY. In (18.2) the matrix B is a product of elementary column matrices while B̄' is the product in reverse order of the conjugate elementary row matrices. By a further linear transformation, (18.3) can be reduced to the canonical form [see (15.6)]

(18.4)    z̄1z1 + z̄2z2 + ... + z̄pzp - z̄(p+1)z(p+1) - ... - z̄rzr
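Theorem I is easy to confirm numerically. The snippet below is my own illustration (the matrix H is an arbitrary Hermitian example, not one from the text): for every complex X, the value of the form comes out with zero imaginary part.

```python
# Illustrate Theorem I: for Hermitian H, the value conj(X)' H X is real
# for every complex vector X.

def herm_value(H, x):
    n = len(x)
    return sum(x[i].conjugate() * H[i][j] * x[j]
               for i in range(n) for j in range(n))

H = [[2, 1 + 1j],
     [1 - 1j, 3]]        # equals its own conjugate transpose

for x in ([1, 1j], [2 - 1j, 3 + 4j], [1j, 1j]):
    v = herm_value(H, x)
    print(v.imag)        # -> 0.0 each time
```

The cancellation happens pairwise, term (i, j) against term (j, i), exactly as in the argument preceding Theorem I.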
If H = C'C. H is positive. If VII.3i to Reduce X' \2i 2 + 3i 42i X 2f 15. A>0. Here. if DEFINITE AND SEMIDEFINITE FORMS. also.:e singular Hermitian form h = i. ••• equal.ysys . H of an Hermitian form XliX is called positive definite or positive semidefinite according as the form is positive definite or positive semidefinite. the nonsingular linear transformation 1 2/VlO 1/VIO (4 + 4i)/V^ BY ..e. IV. The matrix V. every principal minor of is positive semi. yiTi + y2y2 + + yrYr . An Hermitian fom is positive definite if and only if there exists a nonsingular matrix C such that VI.CHAP.4). nonnegative. r< n. a positive definite Hery^y^ + y^y^ + ••• +Jnyn and for any nontrivial set of values of A a. Chapter 2j 1 + 5 2Zi 4 2j I I o" 'l I I 0' i/^To j/a/TO 12/ 2 HC '^^ 2/\/io j/yTo + 3i 4 + 2i 13 l_ 1 (44j)/v^ 1/vTo Thus. X'HX is called positive semidefinite if its rank and index Thus. h>0.4). rj= p < n. H is positive definite. A is called positive definite if its^rank mitian form can be reduced to the x's. SOLVED PROBLEM 1 1. Two Hermitian forms each in the same n variables are equivalent if and only they have the same rank and the same index or the same rank and the same signature.definite. and H every principal minor of H is conversely. nonsingular Hermitian form h = X'HX in a variables and index are equal to n. and conversely. Thus. 1 + 2i 5 2 . 4 + 13 Prom Problem 1 7. 18] HERMITIAN FORMS 147 of index p and signature p(rp). canonical form (18. p depends upon the given form and not upon the transformation which reduces that form to (18.0 i/yflQ j/VTo 1/v^ reduces the given Hermitian form to Jiyr + Jsys . a positive semidefinite Hermitian form can be reduced to and for any nontrivial set of values of the x's.
K.3). 1 1 1 3i 2 2 + 4 3i 3i [1 l2i 1 + 2il 2 X (c) r 1 + + I 3j 3j X J 2 2  3t 1i 3  2i (b) x'\ i" For jx :]^ (6). Ans. 18 SUPPLEMENTARY PROBLEMS 2. Obtain for Hermitian forms theorems analogous to Theorems XIX. xi XL X2 ^n hin All /J21 hi2 h22 n n _S Tj^Xj^Xj 7.XXI. Obtain the linear transformation X I I = BY which followed by Y = IX carries (a) of Problem 2 into (6). . Prove: ^2 h^n S where 77 • is the cofaotor of h^: in H = \h^j\ 1=1 j=i ^n Hint: K. Prove Theorems VVII. {a) X (l20/\/3l^ = _ tl (. Reduce each of the following to canonical form. Show that X' 1t 6 X is positive definite and X' 1i 3 5 X is positive semidefinite. Hint: multiply the second row of H by j and add to the Ans.0 1/^2 (32J)/\/lO J<iyi .148 HERMITIAN FORMS [CHAP. 1 3J l2i 5. ^nn Use (4. 6. Chapter 17.7272 " 7373 2/VlO 3. X= ^\ j_ri v/2"Lo Ti (i2i)/v3i :^' ~ n \y ij' l/v^ Jb 1 1+i 1 3+J 11 1 1+j l+2i 5 10 4.ym (1 + 30/3 1/3 l" (c) X 1 1 y . (d) X = . first (d) r 1+j 3 2J 2 X + 2J +J 4 first row.7272 'l (l+f)/A/2 (l + 30/\/^ y .b) X V2 1 nyi . nn . for quadratic forms.
If A = Ai is a characteristic root.chapter 19 The Characteristic Equation of a Matrix THE PROBLEM. From (19.tA^ + llA and the A2 characteristic roots are Ai= 5. be a lineal transformation over F.2) — O^ 2 ^ — f^2 \XAX = (XIA)X = — ^21 — ^271 ~ The system of On2 X— ann homogeneous equations (19.A2 A„ are called the characteristic roots of i. The equation <^(X) = is called the characteristic equation of A and its roots Ai.3) \XIA\ A— Oil — azi — cni — Oi2 A— 022 — On2 —0tn — ci^n A — Onn = The expansion of this determinant yields a polynomial 0(A) of degree re in A which is known as the characteristic polynomial of the transformation or of the matrix A. Determine the characteristic roots and associated invariant vectors.Jz JriY whose only connection with possibility of certain X vectors X through the transformation. 149 . Characteristic roots are also known as latent roots and eigenvalues.1) the transformation is carried into KX. any vector X for which AX = XX is called an invariant vector under the transformation.2) has nontrivial solutions if and only if (19.2 n). r2 Example 1. In general. characteristic vectors are called latent vectors and eigenvectors. Let Y = AX. where A is either which F is a subfield. where A = [o^].j = 1. is (i.%. that is. A3 2 1 1 = A^' . we obtain A — di^ (19. THE CHARACTERISTIC EQUATION.%2 x^Y into a vector Y = \. Any vector X which by (19. A2= A3= 1.1). We shall investigate here the a scalar of F or of some field 3F of being carried by the transformation into KX.2) has nontrivial solutions which are the components of invariant or characteristic vectors associated with (corresponding to) that root. the transformation carries a vector Z= [%i. given A A2 The characteristic equation is 2 1 1 1. then (19.
When λ = λ1 = 5, (19.2) becomes

   [ 3 −2 −1 ; −1 2 −1 ; −1 −2 3 ] [x1 ; x2 ; x3] = 0

Since the coefficient matrix is row equivalent to [ 1 0 −1 ; 0 1 −1 ; 0 0 0 ], a solution is given by x1 = x2 = x3 = 1. Thus, associated with the characteristic root λ = 5 is the one-dimensional vector space spanned by the invariant vector [1, 1, 1]'.

When λ = λ2 = 1, (19.2) becomes

   [ −1 −2 −1 ; −1 −2 −1 ; −1 −2 −1 ] [x1 ; x2 ; x3] = 0    or    x1 + 2x2 + x3 = 0

Two linearly independent solutions are (2, −1, 0) and (1, 0, −1). Thus, associated with the characteristic root λ = 1 is the two-dimensional vector space spanned by X1 = [2, −1, 0]' and X2 = [1, 0, −1]'. Every vector hX1 + kX2 = [2h + k, −h, −k]' of this space is an invariant vector of A.

GENERAL THEOREMS. In Problem 5, we prove

I. If λi is a simple characteristic root of an n-square matrix A, the rank of λiI − A is n − 1 and the dimension of the associated invariant vector space is 1.

II. If λi is an r-fold characteristic root of an n-square matrix A, the rank of λiI − A is not less than n − r and the dimension of the associated invariant vector space is not greater than r.

Thus, for the matrix A of Example 1, the invariant vector [1, 1, 1]' associated with the simple characteristic root λ = 5 and the linearly independent invariant vectors [2, −1, 0]' and [1, 0, −1]' associated with the multiple root λ = 1 form a linearly independent set (see Theorem III).

In Problem 3, we prove a special case (k = 3) of

III. If λ1, λ2, ..., λk are distinct characteristic roots of a matrix A and if X1, X2, ..., Xk are non-zero invariant vectors associated respectively with these roots, then the X's are linearly independent.

In Problem 6, we prove a special case (n = 3) of: the k-th derivative of φ(λ) = |λI − A| is k! times the sum of the principal minors of order n − k of the characteristic matrix λI − A when k < n, is n! when k = n, and is 0 when k > n.
The invariant vector space associated with the simple characteristic root λ = 5 is of dimension 1 (Theorem I); the space associated with the double root λ = 1, spanned by X1 = [2, −1, 0]' and X2 = [1, 0, −1]', is of dimension 2 (Theorem II).
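Theorems I and II tie the dimension of an invariant vector space to the rank of λiI − A: the dimension is n minus that rank. A sketch, again assuming the Example 1 matrix:

```python
import numpy as np

A = np.array([[2, 2, 1],
              [1, 3, 1],
              [1, 2, 2]], dtype=float)
n = A.shape[0]
I = np.eye(n)

# Simple root 5: rank of 5I - A is n - 1, so the eigenspace has dimension 1
dim5 = n - np.linalg.matrix_rank(5 * I - A)
# Double root 1: here the rank drops to n - 2, so the eigenspace has dimension 2
dim1 = n - np.linalg.matrix_rank(1 * I - A)
```

The computed dimensions match the hand solution above.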
See also Problem 6. By comparing characteristic equations, using (19.4) of Problem 1, we have

IV. If λ1, λ2, ..., λn are the characteristic roots of an n-square matrix A and if k is a scalar, then kλ1, kλ2, ..., kλn are the characteristic roots of kA.

V. If λ1, λ2, ..., λn are the characteristic roots of an n-square matrix A and if k is a scalar, then λ1 − k, λ2 − k, ..., λn − k are the characteristic roots of A − kI.

VI. The characteristic roots of A and A' are the same. (Since any principal minor of A' is equal to the corresponding principal minor of A, this follows from (19.4).)

VII. The characteristic roots of Ā and of Ā' are the conjugates of the characteristic roots of A. (Since any principal minor of Ā is the conjugate of the corresponding principal minor of A, this also follows from (19.4).)

In Problem 7, we prove

VIII. If λ is a characteristic root of a non-singular matrix A, then |A|/λ is a characteristic root of adj A.

SOLVED PROBLEMS

1. If A is n-square, show that

(19.4)   φ(λ) = |λI − A| = λⁿ + s1 λⁿ⁻¹ + s2 λⁿ⁻² + ... + s(n−1) λ + sn

where sm, (m = 1, 2, ..., n), is (−1)^m times the sum of all the m-square principal minors of A.

We rewrite |λI − A| in the form

   | λ−a11  0−a12  ...  0−a1n ; 0−a21  λ−a22  ...  0−a2n ; ... ; 0−an1  0−an2  ...  λ−ann |

each element being a binomial, and suppose that the determinant has been expressed as the sum of 2ⁿ determinants in accordance with Theorem VIII, Chapter 3. One of these determinants has λ as diagonal elements and zeroes elsewhere; its value is λⁿ. Another is free of λ; its value is (−1)ⁿ|A|. The remaining determinants have m columns, (m = 1, 2, ..., n−1), which are columns of −A, and n − m columns each of which contains just one non-zero element λ. Consider one of these determinants and suppose that its columns numbered i1, i2, ..., im are columns of −A. After an even number of interchanges (count them) of adjacent rows and of adjacent columns, the determinant takes a block form whose value is (−1)^m λ^(n−m) times the m-square principal minor of A standing in its rows and columns numbered i1, i2, ..., im. For fixed m, the indices (i1, i2, ..., im) run through the n(n−1)...(n−m+1)/m! different combinations of 1, 2, ..., n taken m at a time; the sum of all such determinants is λ^(n−m) sm, where sm is (−1)^m times the sum of all the m-square principal minors of A. This proves (19.4).
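The coefficient formula (19.4) can be verified directly: each sm is (−1)^m times the sum of the m-square principal minors. A sketch, with the 3-square matrix assumed from Example 1:

```python
import numpy as np
from itertools import combinations

A = np.array([[2, 2, 1],
              [1, 3, 1],
              [1, 2, 2]], dtype=float)
n = A.shape[0]

# Build the coefficients of (19.4): s_m = (-1)^m * (sum of m-square principal minors)
coeffs = [1.0]
for m in range(1, n + 1):
    total = sum(np.linalg.det(A[np.ix_(idx, idx)])
                for idx in combinations(range(n), m))
    coeffs.append((-1) ** m * total)
```

Comparing with the characteristic polynomial computed directly confirms the formula.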
2. Use (19.4) of Problem 1 to expand |λI − A| for a given 3-square matrix A. Here s1 is the negative of the sum of the diagonal elements of A, s2 is the sum of the three 2-square principal minors of A, and s3 = −|A|; then

   |λI − A| = λ³ + s1 λ² + s2 λ + s3

3. Let λ1, λ2, λ3 be distinct characteristic roots of A and let X1, X2, X3 be associated invariant vectors, so that AXi = λiXi, (i = 1, 2, 3). Show that X1, X2, X3 are linearly independent.

Assume the contrary, that is, assume that there exist scalars a1, a2, a3, not all zero, such that

(i)   a1X1 + a2X2 + a3X3 = 0

Multiply (i) by A and recall that AXi = λiXi; we have

(ii)   a1λ1X1 + a2λ2X2 + a3λ3X3 = 0

Multiply (ii) by A and obtain

(iii)   a1λ1²X1 + a2λ2²X2 + a3λ3²X3 = 0

Now (i), (ii), (iii) may be written as

(iv)   [a1X1, a2X2, a3X3] · B = 0,   where   B = [ 1 λ1 λ1² ; 1 λ2 λ2² ; 1 λ3 λ3² ]
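Problem 3's argument rests on the Vandermonde determinant being non-zero for distinct roots. A numeric illustration with an assumed symmetric matrix whose roots 2, 3, 6 are distinct:

```python
import numpy as np

# An illustrative matrix with three distinct characteristic roots
A = np.array([[3, 1, 1],
              [1, 5, 1],
              [1, 1, 3]], dtype=float)

w, V = np.linalg.eig(A)          # columns of V are invariant vectors

# Distinct roots => the Vandermonde determinant |1, w, w^2| is non-zero ...
vand = np.vander(w, increasing=True)
# ... and the invariant vectors are linearly independent
indep = np.linalg.matrix_rank(V) == 3
```

The non-vanishing determinant is exactly what forces a1 = a2 = a3 = 0 in the proof.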
Since the λ's are distinct, |B| is the Vandermonde determinant (λ2 − λ1)(λ3 − λ1)(λ3 − λ2) ≠ 0; hence B⁻¹ exists. Multiplying (iv) by B⁻¹, we have [a1X1, a2X2, a3X3] = 0, that is, a1X1 = a2X2 = a3X3 = 0. But the Xi are non-zero; hence a1 = a2 = a3 = 0, contrary to the hypothesis. Thus, X1, X2, X3 are linearly independent.

4. For the matrix of Problem 2, find the characteristic roots and the associated invariant vector spaces. For each simple characteristic root λ, the matrix λI − A is of rank 2; its null-space is of dimension 1, and the associated invariant vector space is the one-dimensional space spanned by any non-zero solution of (λI − A)X = 0.

5. Prove: If λi is an r-fold characteristic root of an n-square matrix A, the rank of λiI − A is not less than n − r and the dimension of the associated invariant vector space is not greater than r.

Since λi is an r-fold root of φ(λ) = 0, we have φ(λi) = φ'(λi) = ... = φ⁽ʳ⁻¹⁾(λi) = 0 and φ⁽ʳ⁾(λi) ≠ 0. Now φ⁽ʳ⁾(λi) is r! times the sum of the principal minors of order n − r of λiI − A; hence, not every principal minor of order n − r can vanish, and λiI − A is of rank at least n − r. Thus, its null-space, the associated invariant vector space, is of dimension at most r.

6. For A 3-square, from φ(λ) = |λI − A|:

   φ'(λ) = [(λ−a22)(λ−a33) − a23a32] + [(λ−a11)(λ−a33) − a13a31] + [(λ−a11)(λ−a22) − a12a21]
         = the sum of the principal minors of order two of λI − A;

   φ''(λ) = 2[(λ−a11) + (λ−a22) + (λ−a33)] = 2! times the sum of the principal minors of order one of λI − A;

   φ'''(λ) = 3! = 6.
7. Prove: If a is a non-zero characteristic root of the non-singular n-square matrix A, then |A|/a is a characteristic root of adj A.

Since A is non-singular, A · adj A = adj A · A = |A|·I. Let X be an invariant vector associated with a, so that AX = aX with X ≠ 0. Then adj A · AX = a · adj A · X, that is, |A|·X = a · adj A · X, whence adj A · X = (|A|/a)X. Thus |A|/a is a characteristic root of adj A, with the same associated invariant vector.

8. Prove: The characteristic equation of an orthogonal matrix P is a reciprocal equation.

We have

   φ(λ) = |λI − P| = |λPP' − P| = |P| · |λP' − I| = ±λⁿ |P' − (1/λ)I| = ±λⁿ |(1/λ)I − P| = ±λⁿ φ(1/λ)

since P and P' have the same characteristic polynomial. Hence the coefficients of φ(λ), read in reverse order, reproduce φ(λ) up to sign, and the characteristic equation is a reciprocal equation.

SUPPLEMENTARY PROBLEMS

9. For the following matrices, determine the characteristic roots and a basis of each of the associated invariant vector spaces.
10. Prove Theorems I and VI.

11. Prove: The characteristic roots of a diagonal matrix are the elements of its diagonal and the associated invariant vectors are the elementary vectors Ei.

12. Prove: The characteristic roots of the direct sum diag(A1, A2, ..., As) are the characteristic roots of A1, A2, ..., As taken together.

13. If |λI − A| = (λ − λ1)(λ − λ2)...(λ − λn), show that |(λ + k)I − A| = (λ + k − λ1)(λ + k − λ2)...(λ + k − λn), and hence obtain Theorem V.

14. If the n-square matrix A is of rank r, show that at least n − r of its characteristic roots are zero.

15. If A and B are n-square, show that AB and BA have the same characteristic roots.

16. If A and B are n-square and A is non-singular, show that A⁻¹B and BA⁻¹ have the same characteristic roots.

17. For A and B of Problem 16, show that B and A⁻¹BA have the same characteristic roots.

18. Write |λI − A⁻¹| = |λA⁻¹(A − (1/λ)I)| and conclude that 1/λ1, 1/λ2, ..., 1/λn are the characteristic roots of A⁻¹.

19. Prove: The characteristic roots of an orthogonal matrix P are of absolute value 1. Hint: if λ, X are a characteristic root and associated invariant vector of P, then X'X = (PX)'(PX) = λ²X'X.

20. Prove: The characteristic roots of a unitary matrix are of absolute value 1.

21. Prove: If λi ≠ ±1 is a characteristic root and Xi is the associated invariant vector of an orthogonal matrix P, then Xi'Xi = 0.

22. Obtain, using (19.4):
    φ(0) = (−1)ⁿ|A|;
    φ'(0) = (−1)ⁿ⁻¹ times the sum of the principal minors of order n − 1 of A;
    φ⁽ʳ⁾(0) = (−1)ⁿ⁻ʳ · r! times the sum of the principal minors of order n − r of A;
    φ⁽ⁿ⁾(0) = n!

23. Substitute from Problem 22 into φ(λ) = φ(0) + φ'(0)λ + φ''(0)λ²/2! + ... + φ⁽ⁿ⁾(0)λⁿ/n! to obtain (19.4).
24. Prove: If X is a unit vector and if AX = λX, then X'AX = λ.
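Two of the supplementary results — that an orthogonal matrix has characteristic roots of absolute value 1 and a reciprocal characteristic equation — can be checked on a plane rotation (the angle is an arbitrary choice):

```python
import numpy as np

t = 0.7  # arbitrary angle; P is a plane rotation, hence orthogonal
P = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0,          0,         1]])

w = np.linalg.eigvals(P)
moduli = np.abs(w)                         # every root has absolute value 1

c = np.poly(P)                             # characteristic polynomial coefficients
# Reciprocal equation: coefficients read the same backwards, up to sign
reciprocal = np.allclose(c, c[::-1]) or np.allclose(c, -c[::-1])
```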
chapter 20

Similarity

TWO n-SQUARE MATRICES A and B over F are called similar over F if there exists a non-singular matrix R over F such that

(20.1)   B = R⁻¹AR

Example 1. The matrix A = [ 2 2 1 ; 1 3 1 ; 1 2 2 ] of Example 1, Chapter 19, and the matrix B = R⁻¹AR computed from a conveniently chosen non-singular R are similar. The characteristic equation (λ − 5)(λ − 1)² = 0 of B is also the characteristic equation of A. An invariant vector Y1 of B associated with the characteristic root λ = 5 yields X1 = RY1, which is readily shown to be an invariant vector of A associated with the same root λ = 5. The reader will show that Y2 and Y3, a pair of linearly independent invariant vectors of B associated with λ = 1, yield X2 = RY2 and X3 = RY3, a pair of linearly independent invariant vectors of A associated with the same root λ = 1.

Example 1 illustrates the following theorems:

I. Two similar matrices have the same characteristic roots. For a proof, see Problem 1.

II. If Y is an invariant vector of B = R⁻¹AR corresponding to the characteristic root λi of B, then X = RY is an invariant vector of A corresponding to the same characteristic root λi of A. For a proof, see Problem 2.

DIAGONAL MATRICES. The characteristic roots of a diagonal matrix D = diag(a1, a2, ..., an) are simply the diagonal elements. A diagonal matrix always has n linearly independent invariant vectors; the elementary vectors Ei are such a set, since DEi = aiEi. As a consequence, we have (see Problems 3 and 4 for proofs):

III. Any n-square matrix A, similar to a diagonal matrix, has n linearly independent invariant vectors.

IV. If an n-square matrix A has n linearly independent invariant vectors, it is similar to a diagonal matrix.

In Problem 5, we prove

V. Over a field F an n-square matrix A is similar to a diagonal matrix if and only if λI − A factors completely in F and the multiplicity of each λi is equal to the dimension of the null-space of λiI − A.
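Theorems I and II can be checked numerically; R below is merely one convenient non-singular matrix (an assumption for illustration), with A as in Example 1, Chapter 19:

```python
import numpy as np

A = np.array([[2, 2, 1],
              [1, 3, 1],
              [1, 2, 2]], dtype=float)
# Any non-singular R works; this one is chosen for illustration
R = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)

B = np.linalg.inv(R) @ A @ R
same_poly = np.allclose(np.poly(A), np.poly(B))   # Theorem I

# Theorem II: an invariant vector Y of B maps to the invariant vector X = RY of A
w, V = np.linalg.eig(B)
Y = V[:, 0]
X = R @ Y
maps_over = np.allclose(A @ X, w[0] * X)
```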
The matrices A and B of Example 1 are similar.
Not every n-square matrix is similar to a diagonal matrix. For the matrix of Problem 6, Chapter 19, for example, the null-space of λI − A corresponding to the triple root λ = 1 is of dimension 1. We can prove, however:

VI. Every n-square matrix A is similar to a triangular matrix whose diagonal elements are the characteristic roots of A. See Problems 7–8.

VII. If A is any real n-square matrix with real characteristic roots, there exists an orthogonal matrix P such that P⁻¹AP = P'AP is triangular and has as diagonal elements the characteristic roots of A. See Problems 9–10.

VIII. If A is any n-square matrix with complex elements or a real n-square matrix with complex characteristic roots, there exists a unitary matrix U such that Ū'AU is triangular and has as diagonal elements the characteristic roots of A. See Problem 11.

The matrices A and P'AP of Theorem VII are called orthogonally similar; the matrices A and Ū'AU of Theorem VIII are called unitarily similar.

DIAGONABLE MATRICES. A matrix A which is similar to a diagonal matrix is called diagonable. Theorem IV is basic to the study of certain types of diagonable matrices in the next chapter.

SOLVED PROBLEMS

1. Prove: Two similar matrices have the same characteristic roots.

Let A and B = R⁻¹AR be the similar matrices. Then

   λI − B = λI − R⁻¹AR = R⁻¹(λI − A)R

and |λI − B| = |R⁻¹| · |λI − A| · |R| = |λI − A|. Thus, A and B have the same characteristic equation and the same characteristic roots.

2. Prove: If Y is an invariant vector of B = R⁻¹AR corresponding to the characteristic root λi, then X = RY is an invariant vector of A corresponding to the same characteristic root λi.

By hypothesis, BY = λiY and RB = AR. Then

   A(RY) = (AR)Y = (RB)Y = R(λiY) = λi(RY)

and X = RY is an invariant vector of A corresponding to the characteristic root λi.
3. Prove: Any matrix A which is similar to a diagonal matrix has n linearly independent invariant vectors.

Let R⁻¹AR = diag(b1, b2, ..., bn) = B. The elementary vectors E1, E2, ..., En are invariant vectors of B. Then, by Theorem II, the vectors Xj = REj, (j = 1, 2, ..., n), are invariant vectors of A. Since R is non-singular, its column vectors — the Xj — are linearly independent.
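The construction behind Theorems III–IV — a matrix whose columns are independent invariant vectors diagonalizes A — can be sketched with the invariant vectors found for the Example 1 matrix (as reconstructed):

```python
import numpy as np

A = np.array([[2, 2, 1],
              [1, 3, 1],
              [1, 2, 2]], dtype=float)

# Columns: independent invariant vectors for the roots 5, 1, 1
R = np.array([[1,  2,  1],
              [1, -1,  0],
              [1,  0, -1]], dtype=float)

D = np.linalg.inv(R) @ A @ R    # expected: diag(5, 1, 1)
```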
.Xi^. Prove: If an nsquare matrix A has n linearly independent invariant vectors.+rc Denote by V^^. (i Take Xi^. let Ai.Xir as a basis of the invariant vector space Vf. 2 n). 1 1 2 2 3 2 1 2_ 1 1 P 3 1 6. R diag(Ai. The first column of AQi_ is AX^ = X^X^ and the first column of O]'. Suppose that there exist scalars a (i) not all zero. for otherwise. 20 4. then ^n^n^ charac AR = [AX^. = 1. ^2^2 . (i 1.4A'^ = A^A'^. XJ' . First.1. Let the characteristic roots of A be Ai. A2 A„) Hence. 1 Z2 = [2. A„ and let Xl be an invariant vector of A corresponding to column of a nonsingular matrix Qi whose remaining columns may be any whatever such that \Qi\ ^ 0. 2 s). it is an Theorem I their totality is linearly independent. Thus. its thus.. 1X1. invariant vector and by thus. is [Ai. Prove: of Over a field factors completely in F an nsquare matrix A is similar to a diagonal matrix if and only if XlA F and the multiplicity of each A.. ^n' Then X^I .. ^3 = [1. X2 A2 '^n' ..A2 Ag be the distinct characteristic roots of A with respective multiplicities suppose that roots are equal to A^. Ql^AQi is Ql^XiXi^.. such that + *• (o2i''''2i "^ (aiiA^ii + 012^12 + •• + "iTi^iri) + 022^^22 + = ''2'r2^2r2) + . . Prove: Every rasquare matrix A is similar to a triangular matrix whose diagonal elements are the characteristic roots of A. X^ be associated with the respective L. R ^AR = diag(Ai... Aj Take Xi as the first .0]'. Xq teristic roots Ai. But this contradicts X's constitute a basis of K„ and A is similar to a diagonal matrix by Theorem IV. ^3! 1 1 then R ^ 12 1 1 and 101 1 2 3 5 1 0" 2 1 1 2 1 .2 s). the characteristic root Ai. But \lA = R (Xj^I . + Yi = (osi^si "^ ^s^Xs'z "^ ^'T^ ^STg a XciT^) ' STg ) Now each vector ai^Xu + ai^Xi^ + + air^Xir^) = 0.^/^ has the same nullspace is then of dimension n(n k)=k. and that exactly k of these characteristic B R'^AR diag(Ai.. 'i''2 r„. the associated invariant vector spaces. is Zi = [1. Z2. 5. A2 A„). the 7.et R = [X^. A2. 
A set of linearly independent invariant vectors of the matrix A of Example 1.. A. hence.158 SIMILARITY [CHAP...1]' 1 2 1 Take R = [Jf 1. 2 1 .B) R~^ \^I B rank n k and nullity k as has Conversely.X2.1]'. 0' = 1. it is similar to a diagonal matrix. But this. . being the first column of Qi'^XiQi..X^].^ is equal to the dimension of the nullspace X^IA. is of ranknfe. Xs.0. ••• /^n ^° '^^"^ .1 2 1" 1 1 1 2 1 R'UR a diagonal matrix. Chapter 19.1. (i). V^^. s where ri+r2+. Let the n linearly independent invariant vectors X^.AX^ AX^] Ai = [Ai^i.B has exactly k zeroes in its diagonal and.
Thus

(i)   Q1⁻¹AQ1 = [ λ1 b1 ; 0 A1 ]

where A1 is of order n − 1. Since |λI − Q1⁻¹AQ1| = (λ − λ1)|λI − A1|, and Q1⁻¹AQ1 and A have the same characteristic roots, it follows that the characteristic roots of A1 are λ2, λ3, ..., λn. If n = 2, A1 = [λ2] and the theorem is proved with Q = Q1. Otherwise, let X2 be an invariant vector of A1 corresponding to the characteristic root λ2, and take X2 as the first column of a non-singular matrix Q2 whose remaining columns may be any whatever such that |Q2| ≠ 0. Then

(ii)   Q2⁻¹A1Q2 = [ λ2 b2 ; 0 A2 ]

where A2 is of order n − 2. If n = 3, A2 = [λ3] and the theorem is proved with Q = Q1 · diag(1, Q2). Otherwise, we repeat the procedure and, after n − 1 steps at most, obtain Q2, Q3, ..., Q(n−1) such that, with

   Q = Q1 · diag(1, Q2) · diag(I2, Q3) ... diag(I(n−2), Q(n−1)),

Q⁻¹AQ is triangular and has as diagonal elements the characteristic roots of A.

8. Find a non-singular matrix Q such that Q⁻¹AQ is triangular, for the given 4-square matrix A. Here |λI − A| = (λ² − 1)(λ² − 4), so the characteristic roots are 1, −1, 2, −2. An invariant vector corresponding to the root 1 is taken as the first column of a non-singular matrix Q1 whose remaining columns are elementary vectors, giving Q1⁻¹AQ1 the form (i); an invariant vector of A1, and then of A2, yields Q2 and Q3 in turn, until Q⁻¹AQ is triangular with the characteristic roots on its diagonal.
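Problems 7–8 triangularize A by successive deflation. For a quick numerical check when A is diagonalizable with real roots, one can instead QR-factor the matrix of invariant vectors — a shortcut under those assumptions, not the book's step-by-step construction:

```python
import numpy as np

A = np.array([[2, 2, 1],
              [1, 3, 1],
              [1, 2, 2]], dtype=float)   # diagonalizable, real roots 5, 1, 1

w, V = np.linalg.eig(A)
Q, _ = np.linalg.qr(V)      # orthonormalize the invariant vectors

# The leading columns of Q span nested invariant subspaces,
# so T is upper triangular with the characteristic roots on its diagonal
T = Q.T @ A @ Q
lower = np.tril(T, -1)
```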
9. Find an orthogonal matrix P such that P⁻¹AP is triangular and has the characteristic roots of A as diagonal elements, given A = [ 2 2 1 ; 1 3 1 ; 1 2 2 ].

From Example 1, Chapter 19, the characteristic roots are 5, 1, 1, and an invariant vector corresponding to λ = 1 is [1, 0, −1]'. Form Q1 with this vector as first column and, using the Gram-Schmidt process, obtain from Q1 an orthogonal matrix P1 whose first column is the normalized vector (1/√2)[1, 0, −1]'. Then

   P1⁻¹AP1 = [ 1 b1 ; 0 A1 ]

where A1 is 2-square with characteristic roots 5 and 1; a second step of the same kind completes the triangularization.

10. Prove: If A is any real n-square matrix with real characteristic roots, then there exists an orthogonal matrix P such that P⁻¹AP is triangular and has as diagonal elements the characteristic roots of A.

Let λ1, λ2, ..., λn be the characteristic roots of A. Since the roots are real, the associated invariant vectors will also be real. As in Problem 7, let Q1 be formed having an invariant vector corresponding to λ1 as first column and, using the Gram-Schmidt process, obtain from Q1 an orthogonal matrix P1 whose first column is proportional to that of Q1. Then

   P1⁻¹AP1 = [ λ1 b1 ; 0 A1 ]

where A1 is of order n − 1 and has λ2, λ3, ..., λn as characteristic roots. Next, form Q2 having as first column an invariant vector of A1 corresponding to the root λ2 and, using the Gram-Schmidt process, obtain an orthogonal matrix P2. After sufficient repetitions, build the orthogonal matrix

   P = P1 · diag(1, P2) · diag(I2, P3) ...

for which P⁻¹AP is triangular with the characteristic roots of A as diagonal elements.
l]' as associated invariant vector and form Qi = 1 _ 1 1 1_ The Gram Schmidt process produces the unitary matrix i/\fz f/i i/Ve' i/\/2 = l/vA3 2/v^ l/\/6" 1/V3" 1/^2" Now 2y/2ai) 1i ~(26 + 24:i)/y/6 (2 + 3i)/\A3" 3 +2J so that. for this choice of Qi. take [l. 3 + 2i. Find an orthogonal matrix P such that P ^AP is triangular and has as diagonal elements the char acteristic roots of A.6 and the associated invariant vectors [1. given 5 + 5i A = 4 . I 1 1 X] 2/^6 1/V6" I/V2" I/a/S" is orthogonal and P~^ AP = >/2 5 11. 20] SIMILARITY 161 Now Ax has A we = 1 a characteristic root and [l. Taking l/V^ P = l/y/J l/y/3 l/y/6 2/^/6 l/yje 1//2 l/y/3 we find P~^ AP = diag(2.3. 12. From Q2 = r obtain by the GramSchmidt process the orthogonal matrix "2=1 1/V3 1/^^ 1/a/3 ^ 2/v/6 rn I • Lv2 \_ rn ij 1 .CHAP. Find a unitary matrix f/ such that V 2 All is triangular and has as diagonal elements the charac teristic roots of A. characteristic roots are 2. . The [1.6i + 3i A(A^ — 1 +t 2 .3.1]'.1. This suggests the more thorough study of the real symmetric matrix made in the next chapter. \/2]' as associated invariant vector. of A is +(4J)A+  i) = and the characteristic roots are " 1 0. l]'.2J 1 + i 6  4f 6 + 4f 3 5 2i The characteristic equation 1i.0.1]' respectively. 6). 1. Now these three vectors are both linearly independent and mutually orthogonal.2. the required matrix U = Ui. Then L2/V6 1/\A2 I/V3J 1/^6 P = P. For A = 0. given 3 1 1 1 5 1 may be taken as [l.
'2 2 r 1 2_ 2 1 1 25..162 SIMILARITY [CHAP. and (rf) are Chapter 19 and determine those which are similar to a diagonal matrix having the characteristic roots as diagonal elements. Chapter 19. PABP"^ =NQ''^BP''' and Q~^ BAQ Q'^BP'^N.. . A^ are nonsingular and of the same order. Sj) C = diagCSj.. find a unitary matrix V such that angular and has as diagonal elements the characteristic roots of A. and define B B U Show oJ that BT^EK C to prove B and C are similar. (6). \l\pZ Ins. to = diag(Bi. If A and B are nsquare. and ij C C„) Hint. A^A^. 19. (/)..4. /i Let B = diag(Bi. 20. (d). B^). Show that = diag(Ci. roots of Find an orthogonal matrix P such that f'^AP is triangular and has as diagonal elements the characteristic A for each of the matrices A of Problem 9(a). Prove: If ^ is real and symmetric and P is orthogonal. (i) (l+i)/2 2 (/) \l\p2. For each of the matrices A of Problem 9(j). Q'^AQ = B where B is triangular and has as diagonal elements the characteristic roots Ai. Show that 1 _1 3 2 and 3 21 2 3 have the same characteristic roots but are not similar. then P''^ AP is real and symmetric. Bg B^). . A^. B2./g). 1 1//! l/>/2 ai)/2^/2 (1 \/\[2 0/2/2" i 2 \/^[2 \/y[2 16. then Let P AQ =N. Explain why the matrices (a) and of Problem 13 are similar to a diagonal matrix while (c) Examine the matrices {a){m) of Problem 9. 2 m). (c). then AE and B/1 have the same characteristic roots. Let B^ and C^ be similar matrices for (i = 1. Let of (a) .. A2As. If Ai. = diag(5i. 23. not. 15. Extend the result of Problem 19 S^ along the diagonal. Make the necessary modification Theorem VIII.. See Problem 15. of Problem 9 to prove 17. Q A Q is triangular and has as diagonal elements the kth powers of the characteristic (b) Show that 1 X.S„) 1 i?^ . Suppose C^ = and B^ fl^ and form Write = diag(Bi. Bg B„) and C any matrix obtained by rearranging the 21.A^ . X2 K Show that S roots of A. 
(a) \/Zs[2 2/3 2/3 1/3 (c) I/V3" 1//2 1/^/6"" l/v/2 \/^^pi 4/3a/2 i/Va" 1/1/3 2/>/6 1/^ 1/^6" i/Ve"' I/V2" (b) 1 l/V'2 (rf) 1/V^ i/'v^ 1/V3" 2/V6" I/a/2" 1/^2" 1/V2" (6) I/V3" 1/\/6" 14. show thut A^A^.. 20 SUPPLEMENTARY PROBLEMS 13. A^A^^A^. = Hint. Chapter 19 22. = trace A 24. U'^AU is tri l/v^ Ans... have the same characteristic equation. / = diagC/i. 18. where the orders of = and /g are those of Bi and B^ respectively. Show that similarity is an equivalence relation. Chapter 19. A3. C2 S are similar.
chapter 21

Similarity to a Diagonal Matrix

REAL SYMMETRIC MATRICES. The study of real symmetric matrices and Hermitian matrices may be combined, but we shall treat them separately here.

I. The characteristic roots of a real symmetric matrix are all real. See Problem 2.

II. The invariant vectors associated with distinct characteristic roots of a real symmetric matrix are mutually orthogonal. See Problem 1.

III. If λi is a characteristic root of multiplicity ri of a real symmetric matrix A, then there is associated with λi an invariant space of dimension ri.

ORTHOGONAL SIMILARITY. If P is an orthogonal matrix and B = P⁻¹AP, then B is said to be orthogonally similar to A. Since P⁻¹ = P', B is also orthogonally congruent and orthogonally equivalent to A. We have:

IV. Every real symmetric matrix A is orthogonally similar to a diagonal matrix whose diagonal elements are the characteristic roots of A. See Problem 3.

As a consequence of Theorem IV:

V. Let the characteristic roots of the real symmetric matrix A be arranged so that λ1 ≥ λ2 ≥ ... ≥ λn. Then diag(λ1, λ2, ..., λn) is a unique diagonal matrix similar to A. The totality of such diagonal matrices constitutes a canonical set for real symmetric matrices under orthogonal similarity.

VI. Two real symmetric matrices are orthogonally similar if and only if they have the same characteristic roots, that is, if and only if they are similar.

In terms of a real quadratic form, Theorem IV becomes:

VII. Every real quadratic form q = X'AX can be reduced by an orthogonal transformation X = BY to a canonical form

(21.1)   λ1 y1² + λ2 y2² + ... + λr yr²

where r is the rank of A and λ1, λ2, ..., λr are its non-zero characteristic roots. Thus, the rank of q is the number of non-zero characteristic roots of A, while the index is the number of positive characteristic roots or, by Descartes' rule of signs, the number of variations of sign in |λI − A| = 0.

VIII. A real symmetric matrix is positive definite if and only if all of its characteristic roots are positive.

For real symmetric matrices, we have:
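Theorems IV and VII can be exercised with numpy's eigh, which returns an orthogonal P and the (real) characteristic roots in ascending order; the symmetric matrix below is an assumed example:

```python
import numpy as np

A = np.array([[ 7, -2,  1],
              [-2, 10, -2],
              [ 1, -2,  7]], dtype=float)   # real symmetric

w, P = np.linalg.eigh(A)        # w ascending; columns of P orthonormal

orth = np.allclose(P.T @ P, np.eye(3))      # P is orthogonal
diag = np.allclose(P.T @ A @ P, np.diag(w))  # P'AP is diagonal

# Under X = PY the form X'AX becomes w[0]*y1^2 + w[1]*y2^2 + w[2]*y3^2
```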
PAIRS OF REAL QUADRATIC FORMS.

IX. If X'AX and X'BX are real quadratic forms in (x1, x2, ..., xn) and if X'BX is positive definite, there exists a real non-singular linear transformation X = CY which carries X'AX into λ1y1² + λ2y2² + ... + λnyn² and X'BX into y1² + y2² + ... + yn², where the λi are the roots of |λB − A| = 0. See Problems 4–5.

HERMITIAN MATRICES. Paralleling the theorems for real symmetric matrices, we have:

X. The characteristic roots of an Hermitian matrix are real. See Problem 7.

XI. The invariant vectors associated with distinct characteristic roots of an Hermitian matrix are mutually orthogonal.

XII. If λi is a characteristic root of multiplicity ri of the Hermitian matrix H, then there is associated with λi an invariant space of dimension ri.

XIII. If H is an n-square Hermitian matrix with characteristic roots λ1, λ2, ..., λn, there exists a unitary matrix U such that U*HU = U⁻¹HU = diag(λ1, λ2, ..., λn), where U* denotes the conjugate transpose of U. The matrix H is called unitarily similar to this diagonal matrix. Let the characteristic roots be arranged so that λ1 ≥ λ2 ≥ ... ≥ λn; then diag(λ1, λ2, ..., λn) is a unique diagonal matrix similar to H, and the totality of such diagonal matrices constitutes a canonical set for Hermitian matrices under unitary similarity.

XIV. Two Hermitian matrices are unitarily similar if and only if they have the same characteristic roots, that is, if and only if they are similar.

NORMAL MATRICES. A square matrix A is called normal if AA* = A*A, where A* is the conjugate transpose of A. Normal matrices include the diagonal, real symmetric, real skew-symmetric, orthogonal, Hermitian, skew-Hermitian, and unitary matrices.

Let A be a normal matrix and U be a unitary matrix, and write B = U*AU. Then B* = U*A*U and

   B*B = U*A*U · U*AU = U*A*AU = U*AA*U = U*AU · U*A*U = BB*

Thus:

XV. If A is a normal matrix and U is a unitary matrix, then B = U*AU is a normal matrix.

In Problem 8, we prove:

XVI. If Xi is an invariant vector corresponding to the characteristic root λi of a normal matrix A, then Xi is also an invariant vector of A* corresponding to the characteristic root λ̄i.

In Problem 9, we prove:

XVII. A square matrix A is unitarily similar to a diagonal matrix if and only if A is normal.

As a consequence: the invariant vectors associated with distinct characteristic roots of a normal matrix are mutually orthogonal; if λi is a characteristic root of multiplicity ri of a normal matrix A, the associated invariant vector space has dimension ri; and two normal matrices are unitarily similar if and only if they have the same characteristic roots.
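Theorem X (real roots) and the normality of Hermitian matrices are quick to check; the 2-square Hermitian matrix is an assumption for illustration:

```python
import numpy as np

H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])     # Hermitian: H equals its conjugate transpose

hermitian = np.allclose(H, H.conj().T)
w = np.linalg.eigvalsh(H)       # characteristic roots, returned real and ascending

# Every Hermitian matrix is normal: H H* = H* H
normal = np.allclose(H @ H.conj().T, H.conj().T @ H)
```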
21 PAIRS OF REAL QUADRATIC FORMS. the invariant vectors corresponding to distinct characteristic roots are orthogonal. and write B = U'AU. of an Hermitian matrix are real. Paralleling the theorems for real symmetric matrices. If X' In Problem 4. there is associated with an invariant space of dimension Let the characteristic roots of the Hermitian matrix H be arranged so that Ai i A2 ^ Then diag(Ai. 9. then r. unitary matrix. IX. In Problem XVII. Problem XVI. •. A2 follows A. NORMAL MATRICES. orthogonal. Two Hermitian matrices are unitarily similar is. that and only if they are similar. Hermitian. . we have The characteristic roots See Problem XI. A2. A„.„) is a unique diagonal matrix similar to H. and B'B = U'A'U U'AU = U'A'AU = U'AJ'U = U'AU U'A'U = BB'. unitary matrices. The invariant vectors associated with distinct characteristic roots of an Hermitian matrix are mutually orthogonal. we have XVIII. there exists a real non. x^ x^) and if tive definite. See also Problems 45. X. Then B' = U'A'U XV. . A2. Let . The matrix H is called unitarily similar to XIII.164 SIMILARITY TO A DIAGONAL MATRIX [CHAP.singular linear transformation X = CY AX and X'BX are real quadratic forms in X'BX is posi which carries X'AX into AiXi + A2Y2 + and X' BX into ••• + A„y„ yi 22 + 72 2 + ••• + Tn where A^ are the roots of \\B—A\=0. An diagonal. If H is an resquare Hermitian matrix with characteristic roots Ai. XII. A„). Normal matrices include skewHermitian. there exists a unitary matrix U such that U'HU = U~^HU = . The totality of i Ansuch diago • nal matrices constitutes a canonical set for Hermitian matrices under unitary similarity. In If ^ 8.4 be a normal matrix and f/ be a unitary matrix. 7.. diag(Ai. HERMITIAN MATRICES... we prove square matrix A is unitarily similar to a diagonal matrix if A and only if A is normal. real nsquare matrix A is called normal real if AA' = A'A. As a consequence. we prove (x^^. Thus. If A is normal. skewsymmetric. See Problem 10. 
If λi is a characteristic root of multiplicity ri of the Hermitian matrix H, the associated invariant vector space is of dimension ri.
6. Prove: Every non-singular real matrix A can be written as A = CP, where C is positive definite symmetric and P is orthogonal.

Since A is non-singular, AA' is positive definite symmetric. By Theorem IV there exists an orthogonal matrix Q such that Q⁻¹(AA')Q = B = diag(λ1, λ2, ..., λn), with each λi > 0. Define B^(1/2) = diag(√λ1, √λ2, ..., √λn) and C = Q B^(1/2) Q⁻¹. Now C is positive definite symmetric and

   C² = Q B^(1/2) Q⁻¹ · Q B^(1/2) Q⁻¹ = QBQ⁻¹ = AA'

Define P = C⁻¹A. Then PP' = C⁻¹AA'C⁻¹ = C⁻¹C²C⁻¹ = I, and P is orthogonal. Thus A = CP with C positive definite symmetric and P orthogonal, as required.

7. Prove: The characteristic roots of an Hermitian matrix are real.

Let λ1 be a characteristic root of the Hermitian matrix H. Then there exists a non-zero vector X1 such that HX1 = λ1X1, and X1*HX1 = λ1 X1*X1, where * denotes the conjugate transpose. Now X1*HX1 is real, since its conjugate transpose is X1*H*X1 = X1*HX1; moreover X1*X1 is real and positive. Hence λ1 = (X1*HX1)/(X1*X1) is real.

8. Prove: If Xi is an invariant vector corresponding to a characteristic root λi of a normal matrix A, then Xi is also an invariant vector of A* corresponding to the characteristic root λ̄i.

Since A is normal,

   (λiI − A)(λiI − A)* = (λiI − A)(λ̄iI − A*) = λiλ̄iI − λiA* − λ̄iA + AA*
                       = λiλ̄iI − λiA* − λ̄iA + A*A = (λ̄iI − A*)(λiI − A) = (λiI − A)*(λiI − A)

so that B = λiI − A is normal. By hypothesis BXi = (λiI − A)Xi = 0; then

   (B*Xi)*(B*Xi) = Xi*BB*Xi = Xi*B*BXi = 0

and B*Xi = (λ̄iI − A*)Xi = 0. Thus A*Xi = λ̄iXi, and Xi is an invariant vector of A* corresponding to λ̄i.
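Problem 6's factorization A = CP can be sketched via the singular value decomposition (C = U·diag(s)·U', P = U·V'); the test matrix is an assumption:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])       # any non-singular real matrix

U, s, Vt = np.linalg.svd(A)
C = U @ np.diag(s) @ U.T         # positive definite symmetric
P = U @ Vt                       # orthogonal

factors = np.allclose(C @ P, A)  # A = CP
sym = np.allclose(C, C.T)
orth = np.allclose(P.T @ P, np.eye(2))
pos = np.all(np.linalg.eigvalsh(C) > 0)
```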
. Prove: nite. Prom Problem 3. X = X„ are the characteristic roots ot H G AGH. 12).166 SIMILARITY TO A DIAGONAL MATRIX [CHAP. •. 6.X2 X„) diag(XXi. 4.. XX2 XX„) it follows that Xi.x2 there exists a real nonsingular linear transformation . Xg are the roots of XB/l =0 5. 2 The same transformation carries 1/3 X'AX X into tr' 2 0^ . If X'AX and X'BX ••• are real quadratic forms in (xi.^ = jji^vf + fJ. where Xi. A„ are the roots of VII there exists an orthogonal transformation X = GV which carries X'BX into V'(G'BG)V jjii.^ where /x„ are the characteristic roots (all positive) of B. we have 1/VT P = 1/x/y X/\f& I/VT i/x/T 2/sf6 l/yjl l/VT It is left as an exercise to show that P~^AP = (iiag(6... CY = GHKY which carries X'AX into X^y^ + X^y^ + Y'(K'h'g'BGHK)Y Y (K~ IK)Y 'i + X„y^ and X'BX into •'n '2 Since for all values of X.diag(Xi. X/sfJT^ Then V = 1 HW 2 carries (i) into W for the real quadratic form it (H G BGH)W G'AGH)W = + wt Now W (H there exists an orthogonal transformation W = KY which carries into Y'(K'h'g'AGHK)Y where Xi. Ag.Xg formation X^y2 + X^y^ + .. Thus.K'h'g'AGHK = = diag(X.X X) . + X^y^ there exists a real nonsingular trans. By Theorem (i) + ^nTn ^'^'^ ^'B^ into yi + y + + y^..jj..2V2 + + Ai„t. 21 Using the normalized foim of these vectors as columns of P . the linear transformation \/^pr l/vA3" l/sfQ 1//6" X = (GH)W l/\f2 \/2\fl l/VT IATs 2/'/6" 1/^r 1/2x^3" W 1/v^ 1/( 1/3n/2 1/3/2 1/: 1/f w _l/2>/3 7 1/3\A2 2 10 1 carries X'BX = X' 2 1 2 7 Z into ir'ZIF. l/i//!^) Let (ii) H = diag(l/\/7H. X «„) and if X'BX is positive defi= CY which carries Z'/lZinto Xiyi + \2y2 + \\BA\ =0. K'h'g'(\BA)GHK X„ XK'h'g'BGHK .
j^ = \^X\X^. Thus. n B'b By Theorem XV. A„). Prove: If X: is an invariant vector corresponding to a characteristic root A. is left as an excersise to show 6. Since A is normal. we conclude that each b^j = 0. Thus. + bmbm Since these elements are equal and since each b^j b^j 2 0. = Now C is positive definite symmetric and C^ Define = QB^Q^QB^Q^ = QB^Q^ = I QBQand AA' is orthogonal. A:^ Then there > 0.. Conversely. VT^ \fk~^) and C = QB^Q''^. = = (X^I then (SX^)'(SA'i) XB'BXi X'iB. AA' is positive definite symmetric by Theorem X. Prove: The characteristic roots of an Hermitian matrix are real. Thus. (XIA)(XIA)' (KIA)(XIA') XXI . Prove: Every nonsingular real matrix A can be written as symmetric and P is orthogonal. ij Q such that 17. P Thus A = CP with C positive definite symmetric and P 7.A2 .CHAP. A is unitarily similar to a diagonal matrix if and only if A is normal.j andX^isreal.B'X^ = (W X ^)' {W X ^) = and B'X^ = (X^/I')Zi = Thus.. Define Bi = diag(\/^. = By hypothesis. Now is the element in the first row and first column of AjAi + 612^12 + bigbis + . XA + AA A) XA+ A)X^= (XI A)' (XI so that XI A is normal.. Xfii bni. that Let \j be a characteristic root of the Hermitian matrix H. 8. the transformation W = = KY of Problem 4 is the identity transformation W = lY. Then PP' C'^AA'C'^^ C ^C'^C^ is orthogonal as required.i. Chapter 20. BX. B = diag(Ai.\ a' a' A 0. then Z^ is an invariant vector of A corresponding to the characteristic root X^. Continuing we conclude that every b^. then A is normal is zero. pose X\HX. P = C'^A. there exists a unitary matrix U such that ^1 6i2 bin As bin U AU .. 21] SIMILARITY TO A DIAGONAL MATRIX 167 Since this is a diagonal matrix.of a normal matrix A. Prove: An resquare matrix is normal. Then there exists a nonzero vector X^ such Now X'^HX^ = XjX'j^Xiis real and different from zero and so also is the conjugate transHXj^ = A jX^ . 
the real linear transformation X = CY {GH)Y carries the positive definite quadratic form It X'BX that and the quadratic form X' AX into ri + y + 7l \kBA\ = 36(3\l)(2Xl)^. Suppose A By Theorem vni.. X^=X. B is normal so that B' B = BB is XiXi while the corresponding element of BB . 9. X: is an invariant vector of A' corresponding to the characteristic root X ^. let A be diagonal.XA' = XXI . exists an orthogonal matrix A = CP where C is positive definite Since A is nonsingular. of B with the corresponding elements in the second row and second column. Chapter Q"'' with each k^) = B AA' Q = diag(A. into ^y^ + iy + y..
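The construction of Problem 7 (C positive definite symmetric with C² = AA', then P = C⁻¹A orthogonal) can be illustrated on a small example. The matrices below are my own choice, not from the text, and C is written down by inspection rather than through the orthogonal reduction of AA'; exact fractions make every check decisive.

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

A = [[F(1), F(-2)], [F(2), F(-1)]]           # a non-singular real matrix
C = [[F(2), F(1)], [F(1), F(2)]]             # positive definite symmetric
assert matmul(C, C) == matmul(A, transpose(A))   # C^2 = AA'

Cinv = [[F(2, 3), F(-1, 3)], [F(-1, 3), F(2, 3)]]  # inverse of C (det C = 3)
P = matmul(Cinv, A)                          # P = C^-1 A

I = [[F(1), F(0)], [F(0), F(1)]]
is_orthogonal = matmul(P, transpose(P)) == I
factorization_ok = matmul(C, P) == A
```

Here P comes out as the rotation [0 −1; 1 0], so A = CP is a "stretch followed by a rotation", the real analogue of the polar form of a complex number.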
168                SIMILARITY TO A DIAGONAL MATRIX                [CHAP. 21

10. Prove: If A is normal, the invariant vectors corresponding to distinct characteristic roots are orthogonal.

    Let λ1, X1 and λ2, X2 be distinct characteristic roots and associated invariant vectors of A. Then AX1 = λ1X1 and AX2 = λ2X2, so that X̄2'AX1 = λ1X̄2'X1. By Problem 8, Ā'X2 = λ̄2X2; taking the conjugate transpose, X̄2'A = λ2X̄2', so that also X̄2'AX1 = λ2X̄2'X1. Then λ1X̄2'X1 = λ2X̄2'X1 and, since λ1 ≠ λ2, X̄2'X1 = 0 as required.

11. Consider the conic  x1² − 12x1x2 − 4x2² = 40, that is,

                             [ 1  -6]
    (i)     X'AX   =   X'    [-6  -4]  X   =   40

    referred to rectangular coordinate axes OX1 and OX2. The characteristic equation of A is

        |λI − A|   =   λ² + 3λ − 40   =   (λ−5)(λ+8)   =   0

    For the characteristic roots λ1 = 5 and λ2 = −8, take [3, −2]' and [2, 3]' respectively as associated invariant vectors; after normalization the unit invariant vectors are [3/√13, −2/√13]' and [2/√13, 3/√13]'. Now form the orthogonal matrix

              [ 3/√13   2/√13]
        P  =  [-2/√13   3/√13]

    whose columns are the two vectors after normalization. The transformation X = PY reduces (i) to

              [5   0]
        Y'    [0  -8]  Y   =   5y1² − 8y2²   =   40

    The conic is an hyperbola.

    Aside from the procedure, this is the familiar rotation of axes in plane analytic geometry to effect the elimination of the cross-product term in the equation of a conic. Note that by Theorem VII the result is known as soon as the characteristic roots are found; the invariant vectors give the directions of the axes after rotation.

12. One problem of solid analytic geometry is that of reducing, by translation and rotation of axes, the equation of a quadric surface to simplest form. Without attempting to justify the steps, we show here the role of two matrices in such a reduction of the equation of a central quadric.

    Consider the surface  3x² + 2xy + 2xz + 4yz − 2x − 14y + 2z − 9 = 0  and the symmetric matrices

             [3  1  1]                [ 3   1   1  -1]
        A =  [1  0  2]      and  B =  [ 1   0   2  -7]
             [1  2  0]                [ 1   2   0   1]
                                      [-1  -7   1  -9]

    formed respectively from the terms of degree two and from all the terms.

CHAP. 21]                SIMILARITY TO A DIAGONAL MATRIX                169

    Using only the elementary row transformations Hi(k) and Hij(k), B may be reduced stepwise; the rank of A is 3 and the rank of B is 4, so the surface is a central quadric. Considering the reduced form of the first three rows as the augmented matrix of the system of equations

        3x +  y +  z − 1  =  0
         x       + 2z − 7  =  0
         x + 2y      + 1  =  0

    we find the solution x = −1, y = 0, z = 4; the quadric has as center C(−1, 0, 4). The equations of translation are

        x = x' − 1,     y = y',     z = z' + 4

    The characteristic roots of A are λ1 = 1, λ2 = 4, λ3 = −2, with associated invariant vectors v1 = [1, −1, −1]', v2 = [2, 1, 1]', v3 = [0, 1, −1]'; these give the principal directions. The reduced equation of the quadric is

        λ1x'² + λ2y'² + λ3z'² + d   =   x'² + 4y'² − 2z'² − 4   =   0

    Denote by E the inverse of the matrix [v1 v2 v3] after its columns have been normalized. The equations of the rotation of axes to the principal directions are

        [x'']        [1/√3  -1/√3  -1/√3] [X]
        [y'']  =  E  =  with  E  =  [2/√6   1/√6   1/√6] [Y]
        [z'']        [  0    1/√2  -1/√2] [Z]

                        SUPPLEMENTARY PROBLEMS

13. For each of the following real symmetric matrices A, find an orthogonal matrix P such that P⁻¹AP is diagonal and has as diagonal elements the characteristic roots of A. (Five matrices (a)-(e) are given, with answer matrices whose columns include vectors such as [1/√2, 0, −1/√2]', [2/3, 2/3, 1/3]', and [1/√6, 1/√6, −2/√6]'; parts of the data are illegible in this copy.)

14. Find a linear transformation which reduces X'BX to y1² + y2² + y3² and X'AX to λ1y1² + λ2y2² + λ3y3², where the λi are the roots of |λB − A| = 0, given the symmetric matrices A and B (partly illegible in this copy).
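As a check on Problem 11, the following sketch (my own code, not from the text) confirms the characteristic roots 5 and −8 of A = [1 −6; −6 −4] with the integer invariant vectors, and shows that with the unnormalized columns Q = [X1 X2] one gets Q'AQ = diag(5·13, −8·13); dividing each column by its length √13 then yields diag(5, −8), the reduced conic 5y1² − 8y2² = 40.

```python
# Principal axes of the conic x1^2 - 12 x1 x2 - 4 x2^2 = 40.
A = [[1, -6], [-6, -4]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

X1, X2 = [3, -2], [2, 3]
roots_ok = (matvec(A, X1) == [5 * x for x in X1] and
            matvec(A, X2) == [-8 * x for x in X2])

Q = [[3, 2], [-2, 3]]                  # columns X1, X2, each of squared length 13
Qt = [list(r) for r in zip(*Q)]
QtAQ = matmul(Qt, matmul(A, Q))        # expect diag(5*13, -8*13)
```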
170                SIMILARITY TO A DIAGONAL MATRIX                [CHAP. 21

15. Let A be real and skew-symmetric. Prove:
    (a) Each characteristic root of A is either zero or a pure imaginary.
    (b) I + A is non-singular.
    (c) B = (I − A)(I + A)⁻¹ is orthogonal.

16. Prove: If A is normal and non-singular, so also is A⁻¹.

17. Prove: If A is non-singular, then AĀ' is positive definite Hermitian. Restate the theorem when A is real and non-singular.

18. Identify each locus: (a) ... 24x1x2 ... = 900; (b) 3x1² + 2x1x2 + 3x2² = ...; (c) ... = 369. (The remaining coefficients are illegible in this copy.)

19. Modify the proof in Problem 2 to prove Theorem XI.

20. Prove Theorems XII, XIII, and XIX.

21. Prove: If A and B are n-square and normal and if A and B̄' commute, then AB and BA are normal.

22. Prove: A square matrix A is normal if and only if it can be expressed as H + iK, where H and K are commutative Hermitian matrices.

23. Prove: If A is normal, then A is unitarily similar to a diagonal matrix.
    Hint. Write Ū'AU = T = [tij], where U is unitary and T is triangular, and compare tr(AĀ') with tr(TT̄').

24. Prove: If A is n-square with characteristic roots λ1, λ2, ..., λn, then A is normal if and only if the characteristic roots of AĀ' are λ1λ̄1, λ2λ̄2, ..., λnλ̄n.
    Hint. Now tr(TT̄') = Σ λiλ̄i requires tij = 0 for i ≠ j.

25. Prove Theorem IV.

26. Let the characteristic function of the n-square matrix A be

        φ(λ)   =   (λ−λ1)^r1 (λ−λ2)^r2 ... (λ−λs)^rs

    and suppose there exists a non-singular matrix P such that

    (i)   P⁻¹AP   =   diag(λ1 I_r1, λ2 I_r2, ..., λs I_rs)

    Define, for j = 1, 2, ..., s, the n-square matrix Ej = P·diag(0, ..., 0, I_rj, 0, ..., 0)·P⁻¹, obtained by replacing λj by 1 and every λi (i ≠ j) by 0 in the right member of (i). Show that
    (a) A = λ1E1 + λ2E2 + ... + λsEs
    (b) Every Ej is idempotent.
    (c) EiEj = 0 for i ≠ j.
    (d) E1 + E2 + ... + Es = I
    (e) The rank of Ej is the multiplicity of the characteristic root λj.
    (f) (λjI − A)Ej = 0,   (j = 1, 2, ..., s)
    (g) Establish A² = λ1²E1 + λ2²E2 + ... + λs²Es and, in general, A^m = λ1^m E1 + ... + λs^m Es.
    (h) If p(λ) is any polynomial in λ, then p(A) = p(λ1)E1 + p(λ2)E2 + ... + p(λs)Es.
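The matrices Ej of the problem above can be computed directly. The sketch below is my own illustration (not the book's): for a 2×2 matrix with distinct roots 5 and 2 it builds E1 = (A − λ2I)/(λ1 − λ2) and E2 = (A − λ1I)/(λ2 − λ1), which is the polynomial form of the Ej suggested by part (h), and checks properties (a)-(d) with exact fractions.

```python
from fractions import Fraction as F

A = [[F(4), F(1)], [F(2), F(3)]]     # characteristic roots 5 and 2
l1, l2 = F(5), F(2)

def smul(c, M): return [[c * x for x in row] for row in M]
def madd(X, Y): return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I = [[F(1), F(0)], [F(0), F(1)]]
E1 = smul(1 / (l1 - l2), madd(A, smul(-l2, I)))   # (A - 2I)/3
E2 = smul(1 / (l2 - l1), madd(A, smul(-l1, I)))   # (A - 5I)/(-3)

idempotent = matmul(E1, E1) == E1 and matmul(E2, E2) == E2
annihilate = matmul(E1, E2) == [[0, 0], [0, 0]]
resolution = madd(E1, E2) == I
decomposition = madd(smul(l1, E1), smul(l2, E2)) == A
```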
CHAP. 21]                SIMILARITY TO A DIAGONAL MATRIX                171

    (i) If A is normal, each Ej is Hermitian.
    (j) Define f(λ) = (λ−λ1)(λ−λ2)...(λ−λs) and fj(λ) = f(λ)/(λ−λj). Then Ej = fj(A)/fj(λj); each Ej is a polynomial in A.
    (k) If B commutes with A, it commutes with every polynomial in A.
    (l) A matrix B commutes with A if and only if it commutes with every Ej.
        Hint. Use Problem 26(f).
    (m) Ā' = λ̄1E1 + λ̄2E2 + ... + λ̄sEs
    (n) Equation (a) is called the spectral decomposition of A. If A is positive definite Hermitian, then A^(1/2) = √λ1 E1 + √λ2 E2 + ... + √λs Es and it is positive definite Hermitian.
    (o) Obtain the spectral decomposition

             [24  20  10]         [4/9  4/9  2/9]       [ 5/9  -4/9  -2/9]
        A =  [20  24  10]  =  49  [4/9  4/9  2/9]  + 4  [-4/9   5/9  -2/9]
             [10  10   9]         [2/9  2/9  1/9]       [-2/9  -2/9   8/9]

        (b) Obtain

                        [ 29  -20  -10]
        A⁻¹ = (1/196)   [-20   29  -10]
                        [-10  -10   44]

        (c) Obtain

                      [38/9  20/9  10/9]
        H = A^(1/2) = [20/9  38/9  10/9]
                      [10/9  10/9  23/9]

27. Prove: If H is Hermitian, then (I + iH)⁻¹(I − iH) is unitary.

28. Prove: A real symmetric (Hermitian) matrix is idempotent if and only if its characteristic roots are 0's and 1's.

29. Prove: If A is non-singular, then there exists a unitary matrix U and a positive definite Hermitian matrix H such that A = HU.
    Hint. Define H by H² = AĀ' and U = H⁻¹A. Show that H is unique.

30. Prove: If A is non-singular, then A is normal if and only if H and U of Problem 29 commute.

31. Prove: If A is normal and commutes with B, then Ā' and B commute.

32. Prove: The square matrix A is similar to a diagonal matrix if and only if there exists a positive definite Hermitian matrix H such that H⁻¹AH is normal.

33. Prove: If A is symmetric (Hermitian) and idempotent, then its rank is tr A.

34. The square matrix A is similar to a diagonal matrix if and only if every characteristic root has invariant vector space of dimension equal to its multiplicity. (Partly illegible in this copy.)

35. If A is n-square, the set of numbers X̄'AX, where X is a unit vector, is called the field of values of A. Prove:
    (a) The characteristic roots of A are in its field of values.
    (b) Every diagonal element of A, and every diagonal element of Ū'AU with U unitary, is in the field of values of A.
    (c) If A is real symmetric (Hermitian), every element in its field of values is real.
    (d) If A is real symmetric (Hermitian), its field of values is the set of reals λ1 ≤ λ ≤ λn, where λ1 is the least and λn is the greatest characteristic root of A.

36. Let A be normal and non-singular and put C = A⁻¹Ā'. Prove: (a) A and (Ā')⁻¹ commute; (b) C is unitary.
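The numerical results of part (o) above can be verified directly. The sketch below is my own check in plain Python with exact fractions (not part of the text): it confirms that H squares to A, so H really is the positive definite square root 7E1 + 2E2, and that the displayed matrix over 196 is A⁻¹.

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[24, 20, 10], [20, 24, 10], [10, 10, 9]]
H = [[F(n, 9) for n in row]
     for row in [[38, 20, 10], [20, 38, 10], [10, 10, 23]]]
Ainv = [[F(n, 196) for n in row]
        for row in [[29, -20, -10], [-20, 29, -10], [-10, -10, 44]]]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
sqrt_ok = matmul(H, H) == A
inv_ok = matmul(A, Ainv) == I
```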
172

                              Chapter 22

                       Polynomials Over a Field

POLYNOMIAL DOMAIN OVER F. Let λ denote an abstract symbol (indeterminate) which is assumed to be commutative with itself and with the elements of a field F. The expression

(22.1)     f(λ)   =   aₙλⁿ + aₙ₋₁λⁿ⁻¹ + ... + a₁λ + a₀

where the aᵢ are in F, is called a polynomial in λ over F.

   If aₙ ≠ 0, the polynomial (22.1) is said to be of degree n, and aₙ is called its leading coefficient. If aₙ = 1, the polynomial is called monic. A polynomial a₀ ≠ 0 is said to be of degree zero. If every aᵢ = 0, (22.1) is called the zero polynomial and we write f(λ) = 0; the degree of the zero polynomial is not defined.

   Two polynomials in λ which contain, apart from terms with zero coefficients, the same terms are said to be equal.

   The totality of polynomials (22.1) is called a polynomial domain F[λ] over F.

SUM AND PRODUCT. Regarding the individual polynomials of F[λ] as elements of a number system, the polynomial domain has most but not all of the properties of a field. For example,

        f(λ) + g(λ)   =   g(λ) + f(λ)     and     f(λ)g(λ)   =   g(λ)f(λ)

If f(λ) is of degree n and g(λ) is of degree m, then
   (i)  f(λ) + g(λ) is of degree m when m > n, of degree at most m when m = n, and of degree n when m < n;
   (ii) f(λ)g(λ) is of degree m + n.

   I.  If f(λ) ≠ 0 while f(λ)g(λ) = 0, then g(λ) = 0; and conversely.

   II. If g(λ) ≠ 0 and h(λ)g(λ) = k(λ)g(λ), then h(λ) = k(λ).

QUOTIENTS. In Problem 1, we prove

   III. If f(λ) and g(λ) ≠ 0 are polynomials in F[λ], then there exist unique polynomials h(λ) and r(λ) in F[λ], where r(λ) is either the zero polynomial or is of degree less than that of g(λ), such that

(22.2)     f(λ)   =   h(λ)g(λ) + r(λ)

   If r(λ) = 0, g(λ) is said to divide f(λ), while g(λ) and h(λ) are called factors of f(λ). r(λ) is called the remainder in the division of f(λ) by g(λ).
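The division process behind Theorem III is ordinary long division of polynomials. A minimal sketch of it (my own code, coefficient lists with the leading coefficient first; the worked pair is the division of λ³ + 4λ² + 6λ + 4 by λ² + 7λ + 10, which is used again in Problem 4 of this chapter):

```python
from fractions import Fraction as F

def polydiv(f, g):
    """Divide f by g (coefficient lists, leading coefficient first);
    return (h, r) with f = h*g + r and deg r < deg g, as in (22.2)."""
    f = [F(c) for c in f]
    g = [F(c) for c in g]
    h = []
    while len(f) >= len(g):
        c = f[0] / g[0]                    # next coefficient of the quotient
        h.append(c)
        # subtract c * g, shifted to align leading terms, and drop that term
        f = [a - c * b for a, b in zip(f, g + [F(0)] * (len(f) - len(g)))][1:]
    return h, f

# lambda^3 + 4 lambda^2 + 6 lambda + 4 = (lambda - 3)(lambda^2 + 7 lambda + 10) + (17 lambda + 34)
h, r = polydiv([1, 4, 6, 4], [1, 7, 10])
```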
CHAP. 22]                POLYNOMIALS OVER A FIELD                173

GREATEST COMMON DIVISOR. If h(λ) divides both f(λ) and g(λ), it is called a common divisor of f(λ) and g(λ). A polynomial d(λ) is called the greatest common divisor of f(λ) and g(λ) if
   (i)   d(λ) is monic,
   (ii)  d(λ) is a common divisor of f(λ) and g(λ),
   (iii) every common divisor of f(λ) and g(λ) is a divisor of d(λ).

   In Problem 2, we prove

   IV. If f(λ) and g(λ) are polynomials in F[λ], not both the zero polynomial, they have a unique greatest common divisor d(λ) and there exist polynomials h(λ) and k(λ) in F[λ] such that

(22.4)     d(λ)   =   h(λ)f(λ) + k(λ)g(λ)

See also Problem 3.

   Example 1. The greatest common divisor of f(λ) = (λ²+4)(λ²+3λ+5) and g(λ) = (λ²−1)(λ²+3λ+5) is λ²+3λ+5; here (1/5)f(λ) − (1/5)g(λ) = λ²+3λ+5.

RELATIVELY PRIME POLYNOMIALS. Two polynomials are called relatively prime if their greatest common divisor is 1.

   We have also

   V. If the greatest common divisor of f(λ) of degree n > 0 and g(λ) of degree m > 0 is not 1, there exist non-zero polynomials a(λ) of degree < m and b(λ) of degree < n such that

        a(λ)f(λ) + b(λ)g(λ)   =   0

and conversely.

   Let f(λ) = h(λ)g(λ). When g(λ) is a constant c, the factorization is called trivial. A non-constant polynomial over F is called irreducible over F if its only factorizations are trivial.

   Example 2. Over the rational field λ² − 3 is irreducible; over the real field it is factorable as (λ+√3)(λ−√3). Over the real field (and hence over the rational field) λ² + 4 is irreducible; over the complex field it is factorable as (λ+2i)(λ−2i). This illustrates that irreducibility depends on the field.

THE REMAINDER THEOREM. Let f(λ) be any polynomial and g(λ) = λ − a. By (22.2),

(22.3)     f(λ)   =   h(λ)(λ − a) + r

where r is free of λ. Putting λ = a in (22.3), we obtain r = f(a). Thus, when f(λ) is divided by λ − a until a remainder free of λ is obtained, that remainder is f(a); moreover, a polynomial f(λ) has λ − a as a factor if and only if f(a) = 0.
174                POLYNOMIALS OVER A FIELD                [CHAP. 22

   VI.   If g(λ) is irreducible in F[λ] and f(λ) is any polynomial of F[λ], then either g(λ) divides f(λ) or g(λ) is relatively prime to f(λ).

   VII.  If g(λ) is irreducible but divides f(λ)·h(λ), it divides at least one of f(λ) and h(λ).

   VIII. If f(λ) and g(λ) are relatively prime and each divides h(λ), so also does f(λ)g(λ).

UNIQUE FACTORIZATION. In Problem 5, we prove

   IX. Every non-zero polynomial f(λ) of F[λ] can be written as

(22.5)     f(λ)   =   c · q1(λ) · q2(λ) ... qr(λ)

where c ≠ 0 is a constant and the qi(λ) are monic irreducible polynomials of F[λ].

                            SOLVED PROBLEMS

1. Prove: If f(λ) and g(λ) ≠ 0 are polynomials in F[λ], then there exist unique polynomials h(λ) and r(λ) in F[λ], where r(λ) is either the zero polynomial or is of degree less than that of g(λ), such that

   (i)     f(λ)   =   h(λ)g(λ) + r(λ)

   Let  f(λ) = aₙλⁿ + aₙ₋₁λⁿ⁻¹ + ... + a₁λ + a₀  and  g(λ) = bₘλᵐ + bₘ₋₁λᵐ⁻¹ + ... + b₁λ + b₀,  bₘ ≠ 0.

   Clearly, if f(λ) = 0 or if n < m, the theorem is true with h(λ) = 0 and r(λ) = f(λ). Suppose that n ≥ m; we form

        f(λ) − (aₙ/bₘ)λⁿ⁻ᵐ g(λ)   =   f1(λ)

   which is either the zero polynomial or is of degree less than that of f(λ). If f1(λ) = 0 or is of degree less than that of g(λ), we have (i) with h(λ) = (aₙ/bₘ)λⁿ⁻ᵐ and r(λ) = f1(λ). Otherwise, we repeat the process on f1(λ). Since in each step the degree of the remainder (assumed ≠ 0) is reduced, we eventually reach a remainder r(λ) which is either the zero polynomial or is of degree less than that of g(λ), and we have proved the theorem except for uniqueness.

   To prove uniqueness, suppose

        f(λ)  =  h(λ)g(λ) + r(λ)     and     f(λ)  =  k(λ)g(λ) + s(λ)

   where the degrees of r(λ) and s(λ) are less than that of g(λ). Then

        [k(λ) − h(λ)]g(λ)   =   r(λ) − s(λ)

   Now r(λ) − s(λ) is of degree less than m while, unless k(λ) − h(λ) = 0, [k(λ) − h(λ)]g(λ) is of degree equal to or greater than m. Thus, k(λ) − h(λ) = 0, so that k(λ) = h(λ) and r(λ) = s(λ). Both h(λ) and r(λ) are unique.
CHAP. 22]                POLYNOMIALS OVER A FIELD                175

2. Prove: If f(λ) and g(λ) are polynomials in F[λ], not both zero, they have a unique greatest common divisor d(λ) and there exist polynomials h(λ) and k(λ) in F[λ] such that

   (a)     d(λ)   =   h(λ)f(λ) + k(λ)g(λ)

   If f(λ) = 0, then d(λ) = bₘ⁻¹g(λ), where bₘ is the leading coefficient of g(λ), and we have (a) with h(λ) = 0 and k(λ) = bₘ⁻¹. Suppose next that the degree of g(λ) is not greater than that of f(λ). By Theorem III,

   (i)     f(λ)   =   q1(λ)g(λ) + r1(λ)

   where r1(λ) = 0 or is of degree less than that of g(λ). If r1(λ) = 0, then again d(λ) = bₘ⁻¹g(λ). If r1(λ) ≠ 0,

   (ii)    g(λ)   =   q2(λ)r1(λ) + r2(λ)

   If r2(λ) = 0, we have from (i)

        r1(λ)   =   f(λ) − q1(λ)·g(λ)

   and obtain (a) by dividing by the leading coefficient of r1(λ). If r2(λ) ≠ 0,

   (iii)   r1(λ)   =   q3(λ)r2(λ) + r3(λ)

   If r3(λ) = 0, we have from (i) and (ii)

        r2(λ)   =   g(λ) − q2(λ)r1(λ)   =   −q2(λ)·f(λ) + [1 + q1(λ)q2(λ)]·g(λ)

   and obtain (a) by dividing by the leading coefficient of r2(λ). Continuing the process under the assumption that each new remainder is different from zero, we have, in general,

        ri(λ)   =   qi+2(λ)ri+1(λ) + ri+2(λ)

   Since in each step the degree of the remainder is reduced, the process must conclude with

   (v)     rs−2(λ)  =  qs(λ)rs−1(λ) + rs(λ),   rs(λ) ≠ 0,     and
   (vi)    rs−1(λ)  =  qs+1(λ)rs(λ)

   By (vi), rs(λ) divides rs−1(λ); by (v), rs(λ) divides rs−2(λ); continuing, we conclude that rs(λ) divides both f(λ) and g(λ). Moreover, by retracing the steps leading to (v), we obtain

        rs(λ)   =   hs(λ)·f(λ) + ks(λ)·g(λ)

   say. If the leading coefficient of rs(λ) is c, then

        d(λ)   =   c⁻¹rs(λ)   =   c⁻¹hs(λ)f(λ) + c⁻¹ks(λ)g(λ)   =   h(λ)f(λ) + k(λ)g(λ)

   as required. The proof that d(λ) is unique is left as an exercise.
176                POLYNOMIALS OVER A FIELD                [CHAP. 22

3. Prove: If the greatest common divisor of f(λ) of degree n > 0 and g(λ) of degree m > 0 is not 1, there exist non-zero polynomials a(λ) of degree < m and b(λ) of degree < n such that

   (a)     a(λ)f(λ) + b(λ)g(λ)   =   0

   and conversely.

   Let the greatest common divisor of f(λ) and g(λ) be d(λ) ≠ 1. Then f(λ) = d(λ)f1(λ) and g(λ) = d(λ)g1(λ), where f1(λ) is of degree < n and g1(λ) is of degree < m. Now

        g1(λ)f(λ)   =   g1(λ)d(λ)f1(λ)   =   g(λ)f1(λ)

   so that g1(λ)f(λ) + [−f1(λ)]g(λ) = 0. Thus, taking a(λ) = g1(λ) and b(λ) = −f1(λ), we have (a).

   Conversely, suppose f(λ) and g(λ) are relatively prime and (a) holds. By Theorem IV there exist polynomials h(λ) and k(λ) such that h(λ)f(λ) + k(λ)g(λ) = 1. Then, using (a),

        a(λ)  =  a(λ)h(λ)f(λ) + a(λ)k(λ)g(λ)  =  −b(λ)h(λ)g(λ) + a(λ)k(λ)g(λ)

   and g(λ) divides a(λ). But this is impossible, since a(λ) ≠ 0 is of degree < m. Thus, if (a) holds, f(λ) and g(λ) cannot be relatively prime.

4. Find the greatest common divisor d(λ) of f(λ) = 3λ⁵ + 7λ⁴ + 11λ + 6 and g(λ) = λ⁴ + 2λ³ − λ² − λ + 2, and express d(λ) in the form of Theorem IV.

   We find
   (i)    f(λ)            =  (3λ+1)g(λ) + (λ³ + 4λ² + 6λ + 4)
   (ii)   g(λ)            =  (λ−2)(λ³ + 4λ² + 6λ + 4) + (λ² + 7λ + 10)
   (iii)  λ³+4λ²+6λ+4     =  (λ−3)(λ² + 7λ + 10) + (17λ + 34)
   (iv)   λ²+7λ+10        =  (λ/17 + 5/17)(17λ + 34)

   The greatest common divisor is (1/17)(17λ + 34) = λ + 2.

   From (iii),

        17λ + 34   =   (λ³+4λ²+6λ+4) − (λ−3)(λ²+7λ+10)

   Substituting for λ² + 7λ + 10 from (ii),

        17λ + 34   =   (λ³+4λ²+6λ+4) − (λ−3)[g(λ) − (λ−2)(λ³+4λ²+6λ+4)]
                   =   (λ²−5λ+7)(λ³+4λ²+6λ+4) − (λ−3)g(λ)

   and, substituting for λ³ + 4λ² + 6λ + 4 from (i),

        17λ + 34   =   (λ²−5λ+7)f(λ) − (3λ³−14λ²+17λ+4)g(λ)

   Then

        λ + 2   =   (1/17)(λ²−5λ+7)·f(λ) + (1/17)(−3λ³+14λ²−17λ−4)·g(λ)
Let d{\) be the greatest common divisor of /(A) and g(A) h{X) is a constant.^ (Ato i (2A=+ l)g(A) lo (6) A 3 = l_(A+4)/(A) + l(A^+5A+5)g(A) ('^) ^+1 ^ = ^(A+4)/(A)^ ^(2A=^9A2_2A + 9)g(A) + ^"^^ ^ jL(5A+2)/(A) ^^{l5X^+4. in the form of Theorem IV /(A) /(A) = = = = 2A^A=+2A26A4. ps(A). A=2 = .
13. 14.c. common divisor of /(A) and g(A) considered in K[A]. 22 12. in F[\] with greatest that if D(A) is the greatest common divisor d(X). then /(A) = s(X)D(X). Prove: An nsquare matrix A A ' can be expressed as a polynomial + .g(\).m.c. is normal it g(A) = t(X)D(X). Let d(X) = h(\).4A  5 . Find the greatest common divisor and the least common multiple of = /(A) = = A^g. (b) Show that c is a root of multiplicity k> 1 of /(A) and only is a root of both /(A) and /'(A). (a) g(A) is a monic polynomial which is a multiple of both /(A) and g(A). not both 0. The least common multiple of /(A. Suppose /(A) = (Ac)^g(A). If /(A) is relatively prime to g(A) and divides g(X)a(X). 17. (o) (b) (f)(k) 3 = A^  SA^ . show 2 ij 12 . = = g(A) = = (A+l)(A+2f(A3) 1) Ans. it divides a( A). 16. a^A in /I. 1.9A  51 = m(A) = 0. + a^^^A + aiA + oqI . Let K be any field containing F.l)(A+ if (A + 2f (A 3) 15. (a) if Show that c is a root of multiplicity if c kl of /'(A). when m(A) A^ . 20. What property of a field is not satisfied by a polynomial domain? /(c) = 0..178 POLYNOMIALS OVER A FIELD [CHAP.d.d. (A. Take Show Hint: /(A) and g(A). Prove Theorems VII and Prove: VIII. g(A) A=l.m..c. and D(X) c{X)d(X). Al. g. D (A) = = d(X). 19.c. 1 (b) /(A) (a) (Al)(A+lf(A+2). 18. The scalar and only if c is called a root of the polynomial /(A) if Prove: The scalar e is a root of /(A) if A c is a factor of /(A). l.f(\) + k(X). Given 4 = I 2 1 2 I .) and and is of minimum degree.9A = 5 and (f)(A) = A°  sA!^ .l)(A^ + A+ = (b) (A+i)(A + 2). (A^.
The equality (23. . over F(X) (23. a^^iX. a"^ ' + .. A of all polynomials in A with coefficients nonzero mxn matrix over F[Aj oii(A) (23. A^+A .chapter 23 Lambda Matrices DEFINITIONS.. provided = Bp=a andAVI. If OPERATIONS WITH AMATRICES. + A.4) are said to be equal.1..1). A(X) is called proper or improper according as A^ is nonsingular or singular matrix polynomial of Example 1 is nonsingular and improper.) . If either ^(A) or B(A) is nonsingular. zero. mxn '^(A) matrices over F. ^(A) = 5(A).2 p).3) yields • A(k) = Apk^ + Ap_.. + S(A) is a Amatrix C(A) obtained by adding corresponding elements of the The product /1(A) B(A) is a Amatrix or matrix polynomial of degree at most p+q.' ^ ^ = 0.4 . The sum A(X) two Amatrices.^k^~^ + . aj^n(>^) Let p be the maximum degree in A of the polynomials a^jiX) of (23..3) and (23. putting A = A in (23.2) Then A(X) can be ^(A) Ap\^ + spl A. + SiA + So (j The matrices (23. Let F[X] be a polynomial domain consisting in F. it is called singular or nonsingular according as \A(X)\ is oris not Further..^X^^ + . + l A* + 2A= + 3A^ + 5] A^ A= . ai„(A) CsnCA) ^(A) [«ij (A)] asiCA) Omi(A) is called a Amatrix. + A^k ^ Ao 179 .4) B(X) + S^_.P' p 1 + Ai^X + Aq and (23... The A(X) is rasquare.\ + Ao where the ^^ are Example 1.3) Consider the two nsquare Amatrices or matrix polynomials ^(A) A^X^ + A . (23.. the degree of ^(A) •5(A) and also of 5(A) /1(A) is exactly p + 9.. written as a matrix polynomial of degree p in A.3A= J 1 A^ + u »— J 1 A + 5 4 L J is a Amatrix or matrix polynomial of degree four.1) ai2(A) a22(A.) .3) is not disturbed when A is replaced throughout by any scalar k of F For example...
180                LAMBDA MATRICES                [CHAP. 23

However, two results can be obtained when λ is replaced by an n-square matrix C, due to the fact that, in general, two n-square matrices do not commute. We define

(23.5)     A_R(C)   =   ApCᵖ + Ap₋₁Cᵖ⁻¹ + ... + A1C + A0
and
(23.6)     A_L(C)   =   CᵖAp + Cᵖ⁻¹Ap₋₁ + ... + CA1 + A0

called respectively the right and left functional values of A(λ).

   Example 2. For A(λ) = A2λ² + A1λ + A0 and C = [1 2; 3 4] (entries of A(λ) partly illegible in this copy), direct computation of (23.5) and (23.6) gives right and left functional values A_R(C) and A_L(C) which are unequal.

   A matrix polynomial of the form

(23.9)     B(λ)   =   b(λ)Iₙ   =   bqλᵠIₙ + bq₋₁λᵠ⁻¹Iₙ + ... + b1λIₙ + b0Iₙ

is called scalar. A scalar matrix polynomial k(λ)Iₙ commutes with every n-square matrix polynomial.

DIVISION. In Problem 1, we prove

   I. If A(λ) and B(λ) are matrix polynomials (23.3) and (23.4) and if Bq is non-singular, then there exist unique matrix polynomials Q1(λ), R1(λ); Q2(λ), R2(λ), where R1(λ) and R2(λ) are either zero or of degree less than that of B(λ), such that

(23.7)     A(λ)   =   Q1(λ)·B(λ) + R1(λ)
and
(23.8)     A(λ)   =   B(λ)·Q2(λ) + R2(λ)

   If R1(λ) = 0, B(λ) is called a right divisor of A(λ); if R2(λ) = 0, B(λ) is called a left divisor of A(λ).

   Example 3. For the matrix polynomials

        A(λ) = [2λ⁴ + ...   λ³ + λ² + ... ;  ...   ...]     and     B(λ) = [λ² + 1   2λ ;  λ² + 2   λ² − λ]

   (entries partly illegible in this copy) one finds A(λ) = Q1(λ)B(λ) + R1(λ) and A(λ) = B(λ)Q2(λ) + R2(λ) with remainders of degree less than two. See Problem 3.

   Example 4. Let A(λ) be as in Example 3 and B(λ) = (λ+2)I2. Here R1(λ) = R2(λ) = 0 and B(λ) is both a right and a left divisor of A(λ).
«i(A).i 32 31 A  7>i + llX .] (^(A) = 0. Thus.4) and if Bq is nonsingular. then are either and R^iX) exist unique polynomial matrices Q.ApB^B(X)X^''^ C(A) where C(A) If is either zero or of degree at most p q. Example 6. The characteristic equation of A is V. by Theorem V. ^'<'^' = i o]C '2 ' "3 C o][o ' c 3 and 2. C(A) is zero or of degree less than (3i(A) we have (i) with = ApB'q'^''^ and R^iX) = C(A) . such that If (i) ^(A) = QAX)B(X) + Ri(X) and (ii) ^(A) If = 6(A) •(?2(A) + R^iX) fii(A) = A(X). 23 Then ^(A) •/ is divisible by XI .5 = 0. Every square matrix A = satisfies its characteristic equation = 0. Now 62 31 31 63 31 62 32 and 32 31 31 62 63 62 3l" 7 12 13 12 6 6 7. "2 2 3 2 r 1 1 o" 1 31 32_ .3) and (23. 1. p < q.7 6 6 + 11 1  5 _1 2 p 1_ See Problem 4. [oj. then (I) holds with (3i(A) = and Suppose that p iq. R2M.(X). <fi(X) Vl.182 LAMBDA MATRICES [CHAP. A+ 1 compute 4 ij p(C) and 4r(C) ^ when C= L° 2J . then = 4(A) . SOLVED PROBLEMS 1. where Ri(A) zero or of degree less than that of 6(A). For the matrix A(X) [.A and. QsO^). Prove: there ^(A) and B(A) are the Amatrices (23.
. = Aj^B^XP'^l .. = Tl [i ii L1 (a) 2J We compute ^(A) ^4S.B(X)B^''ApXP~1 left This derivation together with the proof of uniqueness will be as an exercise.CHAP. begin with ^(A) . (a) (?2(A). we continue (i).5(A) B2 ^sA and F(X) . R2(X) such that (b) A(X) = Qi(X)B(X) + Ri(X).C3S2^S(A)A 2 = 3 2 . See Problem 1. rA^ + 3. of decreasing degrees. Chapter 22. D(A). We have and Here. A2 + 1 A + 6 and D(A) . form A(\) If . Since this results in a sequence of matrix polynomials C(A)..B(X)B2 F^ ftsCA) .B(A)S2^'44A^ E(X) £(A) . Given 2A=l 1 A'Al] and S(A) A= A" ^(A) _ L A° + A' A" A^ + + 1 J LA^ + 2 A^ .A J find matrices Ql(X).C^B'^^X'^ ApB'XP'^ + C^B^^X^'f and Ri(X) = 0(A) .ApB^''B(\)\P1 q. where s> q. the process. C^S1B(A)X^'? (i) D(X) D(X) is either zero or of degree less than we have with Qi(\) otherwise.D^B^^BCX) J 3 5 1 11 L P 5] Pi 1I = _ 6 = 5"] ["13 2 3J ^ + [_ 9 '1 6A13 2A9 A= + 5A + 3 3A + 5 fli(A) Then (?i(A) = (A^X^ + Cs A +O2 }B^^  A + p fA^ q + P e] ^1 4A + 4 4 A^ + sA + 5 2A+ 3A + J (b) We compute ^(A) . So r 2 ii and ij S2 .. A(X) = S(A)(?2(A) + R^iX) as in Problem 2. we ultimately reach a matrix polynomial which is either zero or of degree less than g and we have To obtain (II).^5(A)A^ 3 = 1 1' 2 A° + 1] 1 10 10 3] 2 A 2 + fo l"! ^ A + fl 1' C(A) 11 D{X) C(A) . Ri(X). 23] LAMBDA MATRICES 183 If C(X) = C^X + .
x)(s2x) . also..(spIA) ... we have '1 0' '1 1 1 2" 8 8 7 8 5 8 8_ J7/  34 +/II 11 _0 1 1_  3 3 2 1 1_ + 8 _13 3 2 1 5 1 5 3 1 11 7 2 2 5 1" 5 "1 0" i 1 1 2 1 1 i{_7^i 3I + A\ 121 1 3 1 . Let Ai. to compute A 2 1 and A' A1 A/^ Then 3 2 1 A1 3 A^3A^7A11 = A1 5" "8 8 7 8 "l 1 1 2" "1 0" 1 1 42 = 31 29 31 3A +1 A + 11/ 8 _13 8 8_ + 7 3 2 1 1_ + 11 45 53 39 3 45 42 42 31 29" '8 + 7 8 8 7 8 5' "1 1 1 2 1 1_ 193 = 160 177 144 160 ^* = 3/ +7/ = + UA = 3 45 _53 39 45 31 42_ 8 8 + U 3 _2 224 272 13 3 224 193 From 11/ 1 A 34 +A ..33 1 1 + 11 3 2 7 2 29' 3 8 40 121 24 1 40 24 8 27 5.si. since A is nonsingular..h(X2) h(X^).. As. .(AA^) Let (ii) h(x) = c(. We have (i) \XIA\ = (AAi)(AAo).184 LAMBDA MATRICES [CHAP. be the characteristic roots of an resquare matrix A and let h(x) be a polynomial of degree p in Prove that \h(A)\ = h(\±).^n x:. 23 Then QziX) = B2^(A^>? + £3 A + F2) [A^+4A A^ + + 4 9 2X + 2I 6A + 3A + 5J 1 1 1 2 1 1 4.(Sp . Given A 3 2 use the fact that A satisfies its characteristic equation to compute A" 3 and A .x) Then h(A) = c(silA)(S2lA).
.f(C)B^(C) [l7 Bn(C)4n(C) i?^ 7J = 1 ^.. (S^. ic(siA„)(s2A„) . SUPPLEMENTARY PROBLEMS 6.. {c(Sp \t)(. •Jc(S2Ai)(S2A.Xl)i •{c(siA2)(S2A2) . 8. L ^ +2A^ (rf) + A' + 2A^J B(A)M(A) r2A'' + 1_ 3A'+A^+A + 3A^ + 3A 2A^ a] 2A=' 2\^ J 7...2) (S2X„)! . (siA„)i ... Given A(\) = Pr^ [a^+i [_A^ ^1 AiJ sna B(A)=r^^ [a + 1 ^^^^1 a compute: J _r2\2+2A (a) 4(A) A^+2A1 2A ij + B(A) + A + 2 (6) A(\) .\sp!A\ !c(siAi)(siA2) .....Sp As) ..?(C) = 9 = 3 3 3I ^i(C) ^i(C)B^(C) B^(C)A^(C) = E [3 :] [::..4 CHAP. and B(A) are proper nsquare Amatrices of respective degrees p and q.. and show that the degree of the triple product in any order is at least p+q.h(\^) using (ii)...p(C) 5 5 = 2 1 [: 3 <?i?(C) 1 3"] 4. C(A) is any nonzero . If /4(A) Amatrix. (Sp\2)\ = .. A o A' 2A=+A=+A o 1 (c) ^(A)S(A) = \ . Given /1(A) compute: 2 1 Ag(C) = B.B(A) r2A ^ A^l [a^a rA* + ij A* + 3A= + 3A=1 . (Sp X„)! jc(SlAi)(S2Xl) .] 3 M if where P(A) = 4(A)..B(A) and Q(X) = B(A)M(A). 23] LAMBDA MATRICES 185 and A(^) = c'P\siIA\\s2lA\ .. (s^A„)! h(\i_)h(\2) ..
.2 1 A 2 3 /?2(A) = 1 A+ 3A^ + 6A + 31 (d) 3A^5A16 A=^ 3A'^7A A^ + 2A'^ + 8 Qi(X) = A  3 1 .A + A^ + 2A'' A° + A^ + 8A  4 A^+1 B(X) 1 3A1 A+ 1 2A A^ A2 2A A"" 3AnA"l (d) A"l A* + A^ + 2 A^A A 1 A(X) A=A^+l A^ + A B(X) 2 A^2A1 A+1 A A^ A'' + 1 A + 1 + A A^2A A^ + A 1 A+ a1 1 2A* + A  A2 . Q^X). fAl A + ll f 1 3 1 0.4 16A .30 4A . find matrices (?i(A).2 .6 3A 7lA + fl2(A) 2 2A . A^+3 ^_^ J.8)..2A A + ll 2 46 12A llA + 8 6 4 26A ..16 2A + 10.14 2A + 6A . Rd^) satisfying (23.(A)= _J.7) and (23.2A  81A + Ri(X) 46 1 12A 15A  16 9 85A 12A  23 5 2 4A  9A  8 7A + 31 17A  3A^+5A <?2(A) A^A4 A" 3 2A^4A + 3 A . [\+i Ans. .4(X) A . Verify in Problem 9(b) that Ri(A) = .(A)^0.. '''''I aJ A^ + 2A + 2l 1 FAI ll (b) AiX) U. /?i(A).. For each pair of matrices A(X) and B(X).A  1 4A 7 1 2A  . fjA3 . Fa BiX) a] a^aJ + A.(A)= \^_^^^ . > + 2X (a) .186 LAMBDA MATRICES [CHAP.4(A) + 3A''  3A 2 3A^+ 2A 6A+ 2A° .4^(0) and figCA) = 4^(0) where B(A)=A/C.30 15A . " ' 1 I . . (a) r 2a '^^''' ai1 A + 2j' fo [l o] ij Q X ^=^'^^ ij' U ''^'''+2 fA (b) Al] .(A). = I La%i f A. 23 9. ^_ 1 ^ A+1 (c) A+7 16A + 14 fti(A) 5A + 2 (?i(A) = A^1 3A + 5 A A" 2 3A+2 21A + 4 2A+3 loA + 3 A5 18A7 2A3 Al <?2(A) A6 5A7 A^ + 1 .2A1 A^2A+1 A'* '''' J' + A^ + [o 5A^+2A 4A^ + x4 + 4 1 A'^ 1 7A2 + 2 (c) .
Let A(t) A(t) = [a^.(«)] where the a^j{t) are real polynomials in the real variable t.x)(x2= x) . Aj A„ are the characteristic roots of A and characteristic roots of f(A) are /(Ai).f(A)\ = c'^\xj .3 ^A (a) compute B(A). Given [\^ + 1 3X+ ^2 A^3A . if Prove: The matrix 17. 16. 1 1 '«'* A2 . Obtain the theorem of Problem 5 as a corollary of Problem 17.i. then X is an invariant vector of f(A) 21. The matrix C is called a root of the scalar matrix polynomial B(A) of (23. n Hint. A(t).\xsl A\. write A{t)A\t) = /. c(xi Xj) (x2\j) .9) if B(C) C is a root of B(A) if and only if the characteristic matrix of C divides B(A).] Prove: Hint. then the Write \. /(A2) /(A„)Hint.. For (d). Prove Theorem ni. !. 22. (b) T\cA(t)\. then g(B) 15. (d) dt ^A 'it). Verify: A/ . use For (c). Show If first that A and B S^ are similar for any positive integer k. where dt c is a constant or c =[ciA. "^^ .(«) = ^ k=i ^^ 2 a.C(A) (b) find (3(A) and fi(A) of degree at most one such that + l] 4(A) = Q(X). 1 1 3 1 18.B(t) = C(t) = [ci.(«). If A and B are similar matrices and g(A) is any scalar polynomial.(x^ . 23] LAMBDA MATRICES 187 11.(0 6i.x) so that \\I Nowuse *j/4 (xjAi)(:»:iA2).A\ \x2l . Fa^'+SA^VAL^°.5A=+llA 12.f(x) = c(xi. Take and differentiate the last member as nition if it were a polynomial Sfnomial with wit! constant coefficients to suggest the defi. X is an invariant vector of A of Problem 17..B(\) + R(\).f(\j)... Find the characteristic roots of f(A) = A'^ 2A+3.(*^A„) and . Prove: If 20. (x^ \j) = \.B divides A(\)Ajf(B). 14. 1 A^'+SA^+sA [A + 4 A + A  si r9 A + S(A) 1 A .(t)] •' and differentiate cv.gl 10 A='A^3A+2j in A6 + ij _13A6 9A + 10J Given A compute as Problem 4 10 '1 . f(A) is any scalar polynomial in A.A\ . then g(A) and g(fi) are similar.. given A = 2 1 2 2 19.. Prove: fi = diag(Si. B^) and g(A) is any scalar polynomial. = 0... g(B2) g(B„)) Hint. A^  y P _i2 ^^1 i7j [I 13.CHAP.4(A) = C(X) =1fx A + 2 ^ A 1 A A+lJ + 2j [A . = diag(g(Si). 
Derive formulas for: (a) d/dt{A(t) + B(t)},  (c) d/dt{A(t)·B(t)}.
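The determinant identity |h(A)| = h(λ1)·h(λ2)···h(λn) proved in Problem 5 above is easy to check numerically. Below is a minimal sketch; the matrix A (chosen so that its characteristic roots are 1 and 2) and the polynomial h(x) = x² + 1 are illustrative choices, not taken from the text.

```python
# Numerical check of |h(A)| = h(lambda_1) * h(lambda_2) for a 2x2 matrix.
# A and h below are illustrative; A has characteristic roots 1 and 2.

def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[0, 1], [-2, 3]]           # characteristic polynomial: x^2 - 3x + 2, roots 1 and 2
I = [[1, 0], [0, 1]]

# h(x) = x^2 + 1, so h(A) = A^2 + I
hA = mat_add(mat_mul(A, A), I)

h = lambda x: x * x + 1
assert det(hA) == h(1) * h(2)   # both sides equal 10
```

The same check works for any square matrix whose characteristic roots are known exactly.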
Chapter 24

Smith Normal Form

BY AN ELEMENTARY TRANSFORMATION on a λ-matrix A(λ) over F[λ] is meant:

(1) The interchange of the ith and jth rows, denoted by H_ij; the interchange of the ith and jth columns, denoted by K_ij.
(2) The multiplication of the ith row by a nonzero constant k, denoted by H_i(k); the multiplication of the ith column by a nonzero constant k, denoted by K_i(k).
(3) The addition to the ith row of the product of f(λ), any polynomial of F[λ], and the jth row, denoted by H_ij(f(λ)); the addition to the ith column of the product of f(λ) and the jth column, denoted by K_ij(f(λ)).

These are the elementary transformations of Chapter 5 except that in (3) the word scalar has been replaced by polynomial. An elementary transformation and the elementary matrix obtained by performing that transformation on I will again be denoted by the same symbol. Paralleling Chapter 5, we have

I. Every elementary matrix in F[λ] has an inverse which in turn is an elementary matrix in F[λ].

A row transformation on A(λ) is effected by multiplying it on the left by the appropriate H, and a column transformation is effected by multiplying A(λ) on the right by the appropriate K. Two m×n λ-matrices A(λ) and B(λ) with elements in F[λ] are called equivalent provided there exist P(λ) = H_s ··· H_2·H_1 and Q(λ) = K_1·K_2 ··· K_t such that

(24.1)    B(λ)  =  P(λ)·A(λ)·Q(λ)

II. The rank of a λ-matrix is invariant under elementary transformations.

III. Equivalent m×n λ-matrices have the same rank.

IV. If |A(λ)| = k ≠ 0, with k in F, A(λ) is a product of elementary matrices.

In Problems 1 and 2, we prove

V. Let A(λ) and B(λ) be equivalent matrices of rank r. Then the greatest common divisor of all s-square minors of A(λ), s ≤ r, is also the greatest common divisor of all s-square minors of B(λ).

THE CANONICAL SET. In Problem 3, we prove

VI. Every λ-matrix A(λ) of rank r can be reduced by elementary transformations to the Smith normal form
(24.2)    N(λ)  =  diag(f1(λ), f2(λ), ..., fr(λ), 0, ..., 0)

where each fi(λ) is monic and fi(λ) divides f_{i+1}(λ), (i = 1, 2, ..., r−1).

INVARIANT FACTORS. The polynomials f1(λ), f2(λ), ..., fr(λ) in the diagonal of the Smith normal form of A(λ) are called invariant factors of A(λ). If fk(λ) = 1, it is called a trivial invariant factor.

When a λ-matrix A(λ) has been reduced to (24.2), the greatest common divisor gs(λ) of all s-square minors of N(λ), and thus of A(λ) by Theorem V, is

(24.3)    gs(λ)  =  f1(λ)·f2(λ) ··· fs(λ),    (s = 1, 2, ..., r)

Suppose A(λ) has been reduced by another sequence of elementary transformations to N1(λ) = diag(h1(λ), h2(λ), ..., hr(λ), 0, ..., 0). Then, by (24.3),

    gs(λ)  =  f1(λ)·f2(λ) ··· fs(λ)  =  h1(λ)·h2(λ) ··· hs(λ),    (s = 1, 2, ..., r)

so that, if we define g0(λ) = 1,

(24.4)    fs(λ)  =  gs(λ)/g_{s−1}(λ)  =  hs(λ),    (s = 1, 2, ..., r)

and we have

VII. The matrix N(λ) of (24.2) is uniquely determined by the given matrix A(λ).

As a consequence of Theorem VII, the Smith normal matrices are a canonical set for equivalence over F[λ].

Example 1. For a certain 3-square λ-matrix A(λ), it is readily found that the greatest common divisor of the one-row minors (the elements) of A(λ) is g1(λ) = 1, that the greatest common divisor of the two-row minors of A(λ) is g2(λ) = λ, and that g3(λ) = λ³ + λ². Then

    f1(λ) = g1(λ) = 1,    f2(λ) = g2(λ)/g1(λ) = λ,    f3(λ) = g3(λ)/g2(λ) = λ² + λ

and the Smith normal form of A(λ) is

    N(λ)  =  diag(1, λ, λ² + λ)

For another reduction, see Problem 4.
VIII. Two n-square λ-matrices over F[λ] are equivalent over F[λ] if and only if they have the same invariant factors.

ELEMENTARY DIVISORS. Let A(λ) be an n-square λ-matrix over F[λ] and let its invariant factors be expressed as

(24.5)    fi(λ)  =  {p1(λ)}^q_i1 · {p2(λ)}^q_i2 ··· {ps(λ)}^q_is,    (i = 1, 2, ..., r)

where p1(λ), p2(λ), ..., ps(λ) are distinct monic, irreducible polynomials of F[λ]. Since fi(λ) divides f_{i+1}(λ), we have q_1j ≤ q_2j ≤ ··· ≤ q_rj, (j = 1, 2, ..., s). Some of the q_ij may be zero and the corresponding factors may be suppressed. The factors {pj(λ)}^q_ij ≠ 1 which appear in (24.5) are called elementary divisors over F[λ] of A(λ).

Example 2. Suppose a 10-square λ-matrix A(λ) over the rational field has the Smith normal form

    diag(1, 1, (λ−1)(λ²+1), (λ−1)(λ²+1)²·λ, (λ−1)²(λ²+1)²·λ²·(λ²−3), 0, 0, 0, 0, 0)

The rank is 5. The elementary divisors are

    (λ−1)², λ−1, λ−1, (λ²+1)², (λ²+1)², λ²+1, λ², λ, λ²−3

Note that the elementary divisors are not necessarily distinct; each elementary divisor appears in the listing as often as it appears in the invariant factors.

Example 3. (a) Over the real field, the invariant factors of the A(λ) of Example 2 are unchanged but the elementary divisors are

    (λ−1)², λ−1, λ−1, (λ²+1)², (λ²+1)², λ²+1, λ², λ, λ−√3, λ+√3

since λ²−3 = (λ−√3)(λ+√3) can be factored. (b) Over the complex field, the invariant factors again remain unchanged but the elementary divisors are

    (λ−1)², λ−1, λ−1, (λ−i)², (λ−i)², λ−i, (λ+i)², (λ+i)², λ+i, λ², λ, λ−√3, λ+√3

X. The invariant factors of a λ-matrix determine its rank and its elementary divisors and, conversely,
the rank and elementary divisors determine the invariant factors. PgCX) are distinct monic.
Example 4. Let the elementary divisors of the 6-square λ-matrix A(λ) of rank 5 be λ², λ, (λ−1)², λ−1, λ+1. Find the invariant factors and write the Smith canonical form.

To form f5(λ), form the lowest common multiple of the elementary divisors:

    f5(λ)  =  λ²(λ−1)²(λ+1)

To form f4(λ), remove from the original list the elementary divisors used in f5(λ) and form the lowest common multiple of those remaining:

    f4(λ)  =  λ(λ−1)

Now the elementary divisors are exhausted; then f3(λ) = f2(λ) = f1(λ) = 1 and the Smith canonical form is

    N(λ)  =  diag(1, 1, 1, λ(λ−1), λ²(λ−1)²(λ+1), 0)

Since the invariant factors of a λ-matrix are invariant under elementary transformations, so also are the elementary divisors, and we have

IX. Two n-square λ-matrices over F[λ] are equivalent over F[λ] if and only if they have the same rank and the same elementary divisors.

SOLVED PROBLEMS

1. Prove: If P(λ) is a product of elementary matrices, then the greatest common divisor of all s-square minors of P(λ)·A(λ) is also the greatest common divisor of all s-square minors of A(λ).

It is necessary only to consider for P(λ) each of the three types of elementary matrices H. Let R(λ) be an s-square minor of A(λ) and let S(λ) be the s-square minor of P(λ)·A(λ) having the same position as R(λ).

Consider P(λ) = H_ij. Its effect on A(λ) is either (i) to leave R(λ) unchanged, (ii) to interchange two rows of R(λ), or (iii) to interchange a row of R(λ) with a row not of R(λ). In the case of (i), S(λ) = R(λ); in the case of (ii), S(λ) = −R(λ); in the case of (iii), S(λ) is, except possibly for sign, another s-square minor of A(λ).

Consider P(λ) = H_i(k). Then either S(λ) = R(λ) or S(λ) = k·R(λ).

Finally, consider P(λ) = H_ij(f(λ)). Its effect on A(λ) is either (i) to leave R(λ) unchanged, (ii) to increase one of the rows of R(λ) by f(λ) times another row of R(λ), or (iii) to increase one of the rows of R(λ) by f(λ) times a row not of R(λ). In the cases (i) and (ii), S(λ) = R(λ); in the case of (iii), S(λ) = R(λ) ± f(λ)·T(λ), where T(λ) is an s-square minor of A(λ).

Thus, any s-square minor of P(λ)·A(λ) is a linear combination of s-square minors of A(λ). If g(λ) is the greatest common divisor of all s-square minors of A(λ) and g1(λ) is the greatest common divisor of all s-square minors of P(λ)·A(λ), then g(λ) divides g1(λ). Now A(λ) = P⁻¹(λ)·B(λ), where B(λ) = P(λ)·A(λ) and P⁻¹(λ) is a product of elementary matrices; hence, g1(λ) divides g(λ) and g1(λ) = g(λ), as was to be proved.

2. Prove: If P(λ) and Q(λ) are products of elementary matrices, then the greatest common divisor of all s-square minors of P(λ)·A(λ)·Q(λ) is also the greatest common divisor of all s-square minors of A(λ).

Let B(λ) = P(λ)·A(λ) and C(λ) = B(λ)·Q(λ). By Problem 1, the greatest common divisor of all s-square minors of B(λ) is the greatest common divisor of all s-square minors of A(λ). Since C'(λ) = Q'(λ)·B'(λ) and Q'(λ) is a product of elementary matrices, the greatest common divisor of all s-square minors of C'(λ) is the greatest common divisor of all s-square minors of B'(λ). But the greatest common divisor of all s-square minors of C'(λ) is the greatest common divisor of all s-square minors of C(λ), and the same is true for B'(λ) and B(λ). Thus, the greatest common divisor of all s-square minors of C(λ) = P(λ)·A(λ)·Q(λ) is also the greatest common divisor of all s-square minors of A(λ).
3. Prove: Every λ-matrix A(λ) = [a_ij(λ)] of rank r can be reduced by elementary transformations to the Smith normal form

    N(λ)  =  diag(f1(λ), f2(λ), ..., fr(λ), 0, ..., 0)

where each fi(λ) is monic and fi(λ) divides f_{i+1}(λ), (i = 1, 2, ..., r−1).

The theorem is true for A(λ) = 0. Suppose A(λ) ≠ 0; then there is an element a_ij(λ) of minimum degree. By the proper interchanges of rows and of columns (transformations of type 1), it can be brought into the (1,1)-position to become the new a11(λ), and by a transformation of type 2 this element may be made monic.

(a) Suppose a11(λ) divides every element of A(λ), and write a_ij(λ) = q_ij(λ)·a11(λ). From the ith row subtract the product of q_i1(λ) and the first row; this replaces a_i1(λ) by 0. From the jth column subtract the product of q_1j(λ) and the first column, so that the element in the first row and jth column is 0. Thus, by transformations of type 3, A(λ) can be reduced to

    (i)    diag(f1(λ), B(λ)),    where f1(λ) = a11(λ) divides every element of B(λ)

Next, we treat B(λ) in the same manner and reach

    diag(f1(λ), f2(λ), C(λ))

Ultimately, we have the Smith normal form. Since f1(λ) is a divisor of every element of B(λ) and f2(λ) is the greatest common divisor of the elements of B(λ), f1(λ) divides f2(λ). Similarly, it is found that each fi(λ) divides f_{i+1}(λ).

(b) Suppose a11(λ) does not divide every element of A(λ).

If there is an element a_1j(λ) in the first row which is not divisible by a11(λ), write a_1j(λ) = q(λ)·a11(λ) + r_1j(λ), where r_1j(λ) is of degree less than that of a11(λ). From the jth column subtract the product of q(λ) and the first column, so that the element in the first row and jth column is r_1j(λ). By an interchange of columns, bring it into the (1,1)-position as the new a11(λ) and make it monic. Elements of the first column not divisible by a11(λ) are treated similarly.

Suppose, then, that every element of the first row and of the first column is divisible by a11(λ) but that some element a_ij(λ) is not, and write a_i1(λ) = g_i1(λ)·a11(λ). From the ith row subtract the product of g_i1(λ) and the first row; this replaces a_i1(λ) by 0 and a_ij(λ) by a_ij(λ) − g_i1(λ)·a_1j(λ). Now add the ith row to the first; this leaves a11(λ) unchanged but replaces a_1j(λ) by a_1j(λ){1 − g_i1(λ)} + a_ij(λ), which is not divisible by a11(λ). Since this is not divisible by a11(λ), we divide it by a11(λ) and, as before, obtain a new replacement (the remainder) for a11(λ) of lower degree.

This procedure is continued so long as the monic polynomial last selected as a11(λ) does not divide every element of the matrix. Since the degrees of the successive polynomials a11(λ) are decreasing, after a finite number of repetitions of the above procedure we must obtain an a11(λ) which does divide every element of the matrix and then, as in (a), reach (i) and, ultimately, the Smith normal form.
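The reduction of Problem 3 is purely mechanical. The sketch below carries it out for matrices over the integers, where the same three types of elementary transformations apply with "element of minimum degree" replaced by "entry of minimum absolute value"; it is an illustrative analogue of the λ-matrix procedure, not a computation from the text.

```python
def smith_normal_form(M):
    # Integer analogue of Problem 3: row/column interchanges (type 1),
    # sign changes (type 2), and adding integer multiples of a row or
    # column (type 3) bring M to diag(d1, d2, ...) with each d_i | d_{i+1}.
    A = [row[:] for row in M]
    n, m = len(A), len(A[0])
    for t in range(min(n, m)):
        while True:
            # bring an entry of smallest nonzero absolute value to (t, t)
            pivot = None
            for i in range(t, n):
                for j in range(t, m):
                    if A[i][j] != 0 and (pivot is None or
                            abs(A[i][j]) < abs(A[pivot[0]][pivot[1]])):
                        pivot = (i, j)
            if pivot is None:
                return A                      # remaining submatrix is zero
            i, j = pivot
            A[t], A[i] = A[i], A[t]           # type-1 interchanges
            for row in A:
                row[t], row[j] = row[j], row[t]
            p = A[t][t]
            done = True
            for i in range(t + 1, n):         # clear the pivot column (type 3)
                q = A[i][t] // p
                for j in range(t, m):
                    A[i][j] -= q * A[t][j]
                if A[i][t] != 0:
                    done = False              # a remainder survives; repeat
            for j in range(t + 1, m):         # clear the pivot row (type 3)
                q = A[t][j] // p
                for i in range(t, n):
                    A[i][j] -= q * A[i][t]
                if A[t][j] != 0:
                    done = False
            if done:
                # force divisibility: if p fails to divide some entry,
                # add that row to row t and repeat (as in part (b))
                bad = next(((i, j) for i in range(t + 1, n)
                            for j in range(t + 1, m) if A[i][j] % p != 0), None)
                if bad is None:
                    break
                for j in range(t, m):
                    A[t][j] += A[bad[0]][j]
        if A[t][t] < 0:                       # make the pivot positive (type 2)
            for j in range(t, m):
                A[t][j] = -A[t][j]
    return A

assert smith_normal_form([[1, 2], [3, 4]]) == [[1, 0], [0, 2]]
```

The loop terminates because each repetition strictly decreases |p|, mirroring the decreasing degrees in the proof above.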
K32(Al).CHAP. After subtracting the second column form is the greatest from the first.jK^j = H^{k)K^a/k) = H^j{f{\)) • Kj^{f(. ^2i(A). A(A+i) . ^3i(A + 2).4(X) is a nonzero constant. The element /i(A) of the Smith normal common divisor of the elements of A(\).Ksi(A 2). /f2i(A + HqsCI). We proceed at once to obtain such an element in the (l. Hs2(A + l). Reduce X + ^(A) A=' 2 A+ +A 2 1 A + 3 + A 1 + 2A= + A° + a"" 2A'' + 3A^ +A A^ 3A + X" + 2 A + 3 a"" + 6a + 3 to its It Smith normal form. clearly this is 1. Then A°+A 2A^ + 2A and this is the required form. . A+ 1 A^ + p A A° + A 2A + 2A_ Lo S(A)J Now the greatest common divisor of the elements of S(A) is A. using the elementary transformations ^i2(l). Prove: An nsquare Amatrix A(\) is a product of elementary matrices if and only if .l)position and then obtain (1) of Problem 3. H^iiy. we obtain 1 A+ a" 1 1 A+ 2A° + 3A^ + 1 " 3 1 A + A 1 A + 3 '4(A) A^ + A% A 2A + 3A%A 6A + 3 A%A 2A^ + 2A. Reduce A 4(A) 1 A + 2 A'' = A"" + A A^ + 2A : A=. find 1 We AA= 1 A + 2 AA 1 A + 2" 1 ^(A) "l ^\^ A AA A^3A " + 2 A=+2A A^+A3_ "l A A + 1 1 A+1 1 A  1 V 1 A 1 1 A + A' 1 A+1 ^23(1).2A to its . 24] SMITH NORMAL FORM 193 4. A+1 A+ 1 A l). Xs(l) SUPPLEMENTARY PROBLEMS 6. is not necessary to follow the procedure of Problem 3 here.3A + 2 A= + A  Smith normal form. Show that H.X)) = /. 7. 2A^ + 2A A" + A 5.
2A^ 1 A^1 1 '\j A + 1 A" + A \^ + l A^ + A"" + A  A^ + A 1 A + A" A"1 A^ +A A" A= A= + A% + 3 A^ + A  A^ A1 1 a"" _ A%A^ + 1 A^ A= + 2 A^  i_ 'a=^ 3A 4A 2 A= + 3 A2 (e) A1 4A+3 A^ + A+ A2 3A + 2 1 3\ + A^ + l 2\ + 2  U 2A 6A + 4 A% 'VJ 6A 1 A ^+ 2A + 3J D "a^ if) \^ 2\ + 1 1 A+ 1 D A=( A ifOV+l I 12.2A+ 2A= .3 A .Q(X) = I and then obtain A(Xr^ given = Q(X)P(X) A+ 4(A) 1 1 = 1 A+ 1 A 2 A+2 A+2 A+1 A1 A^'A + A^ + 2 l Hint. Prove: An nsquare Amatrix ^(A) may be reduced to / by elementary transformations if and only if\A(X)\ is a nonzero constant. Obtain matrices P(\) and Q(k) such that P(\).2A + a''2A'' a'' 2A A + 4 A"" . Obtain the elementary divisors over the rational matrices of Problem 11. 24 8.194 SMITH NORMAL FORM [CHAP.4(A) over F[AJ has an inverse with elements in F[\] if and only if /1(A) is a product of elementary matrices. Chapter 5. the real field.3A + 2 A A^ 1 2 A A= + \^2A A 1 >+ (b) 2A° A^A 1 'Xy 1 A1 _ A^ A^ A^ A^.A  2 3\^ 7A + 4  5 2A= A= . Prove: A Amatrix .2 A+ 1 11. 10. Ans A1 A A^ + 2A1 A^ .A A^A"" A^ + 2 A^ + (d) 2A''2A'' 1 . See Problem 6. Reduce each of the following X (a) to its Smith normal form: 1 '\> A >? A%A _2A'' + 2A A1 A^1 2A^ .A= A+ 1 + l_ A= + I 1 A+1 2A2 + 1 A1 2 A^ A" 'Xj 00 1 a"" A% A (c) a"" 2A'' . field. 9. and the complex field for each of the .A(\).
the system is AX Now the polynomials in D D+2 D+1 D+1 Xl 'o' D + 1 D+2 X2 «3_ = t e\ D computing form similar to «:i2(l). xs are unknown real functions of a real variable and at 4. beginning with a that of Problem 6. A^ + (A^+lf. (a) (b) (c) (d) X^.if. A1 (A A+1. (A. A1. A+1 whose rank is six. X^X.(Dl)xs (D + l)x2 + = = = t (D+ 2)X2 t e* where xj.4 1 jD \D 1 A'l 5D28D2 D 4D + 2 the Smith normal form of A. 14. 1)2. A^Alf + 2f. 1. A^ A=. K^siD). combine as do the polynomials in A of a Amatrix. A+2. 1. and using in order the elementary transformations: of /I K2i(D + l). ^2 = 12Cie'^*/5 + Cje"* . H2i(D2). . 1. A. linear transformation X = QY to carry AX = H into AQY ys = = H and from PAQY = NJ = PH y2=«4e* X = (D2+D + i)y3 = 6e*l and K^e"^^^ + K^e^ + \e^ ~ ^ 3 4 Finally. A. Solve the system of ordinary linear differential equations !Dxx (0 + 2)^1 + (D + i)x2 = . A + 3. if. (X^lf X'' (c) (d) X" + X. x^. 1. (A A. A1. The following polynomials factors? (a) (6) A. Hs(2). 1. (A+lf. A^. A''+A''+2A'' + 2A° + A'' +A Ans. (A. Xs = 2Cie**/s+ ie*+ i . A+ 4 (c) (d) (A A^. In matrix notation. ff3i(D+l). (A1. + l. (A+2f . A. K^gi^D k(D + l) +1). 1. A1 A(A + l)(A + 2)(A+3)(A + 4) A^ (A + 2f. Ks(l/5) obtain 1 1 ^(5D2+12D + 7) ^^(SD^ + TD) 1 PAQ = 5D + 6 1 4 . H^sii). A1. X=A^ A^2A^ A+1.i.ffi(l). if. A1. A. A^' (A^'+lf. A. are the elementary divisors of a matrix What are its invariant A+ A. use «i = QY to obtain the required solution 3Cie**/5 + it _ . (A if. hence. A. A' A'' + 2X'' A. l.CHAP. + X^ (X^lf. (A if A^ A. A+1. 1. Hs^ikD). 1.if (A+lf A''(A (d) A(A + 2f. 1. i(5D2+7D + 2) d=+d +  5 5 Use the get yi = 0. The following polynomials real field. Chapter 5. (A+lf. A^ + A. (A1. Hint. if. A''(A (c) 1. (A+lf (A + 2f. 1. A^cAl). Find its elementary divisors in the X^X. (a) (b) are nontrivial invariant factors of a matrix. + 2A° + A.4ns. + lf. K^ih. 24] SMITH NORMAL FORM 195 13. A. A1 (A^ A. if. + 2f 15. (a) (b) 1. 2A^+ A''  X"^ X. (A if.
Chapter 25

The Minimum Polynomial of a Matrix

THE CHARACTERISTIC MATRIX λI − A of an n-square matrix A over F is a nonsingular λ-matrix having invariant factors and elementary divisors. Using (24.4), it is easy to show

I. If D is a diagonal matrix, the elementary divisors of λI − D are the linear polynomials λ − d, where the d are its diagonal elements.

SIMILARITY INVARIANTS. The invariant factors of λI − A are called similarity invariants of A.

Let P(λ) and Q(λ) be nonsingular matrices such that P(λ)·(λI − A)·Q(λ) is the Smith normal form diag(f1(λ), f2(λ), ..., fn(λ)). Then |P(λ)|·|λI − A|·|Q(λ)| = f1(λ)·f2(λ) ··· fn(λ). Since |P(λ)| and |Q(λ)| are nonzero constants, while φ(λ) = |λI − A| and f1(λ)·f2(λ)···fn(λ) are monic, we have

II. The characteristic polynomial of an n-square matrix A is the product of the invariant factors of λI − A, that is, of the similarity invariants of A.

In Problem 1, we prove

III. Two n-square matrices A and B over F are similar over F if and only if their characteristic matrices have the same invariant factors or the same elementary divisors in F[λ].

From Theorems I and III, we have

IV. An n-square matrix A over F is similar to a diagonal matrix if and only if λI − A has linear elementary divisors in F[λ].

THE MINIMUM POLYNOMIAL. By the Cayley-Hamilton Theorem (Chapter 23), every n-square matrix A satisfies its characteristic equation φ(λ) = 0 of degree n. That monic polynomial m(λ) of minimum degree such that m(A) = 0 is called the minimum polynomial of A, and m(λ) = 0 is called the minimum equation of A. (m(λ) is also called the minimum function of A.)

The most elementary procedure for finding the minimum polynomial of A involves the following routine:

(I)   If A = a0·I, then m(λ) = λ − a0.
(II)  If A ≠ a·I for all a but A² = a1·A + a0·I, then m(λ) = λ² − a1·λ − a0.
(III) If A² ≠ a·A + b·I for all a and b but A³ = a2·A² + a1·A + a0·I, then m(λ) = λ³ − a2·λ² − a1·λ − a0.

and so on.

Example 1. Find the minimum polynomial of

    A  =
    | 1 2 2 |
    | 2 1 2 |
    | 2 2 1 |
Clearly A = a0·I is impossible. Set A² = a1·A + a0·I, that is,

    | 9 8 8 |        | 1 2 2 |        | 1 0 0 |
    | 8 9 8 |  =  a1·| 2 1 2 |  +  a0·| 0 1 0 |
    | 8 8 9 |        | 2 2 1 |        | 0 0 1 |

Using the first two elements of the first row of each matrix, we have 9 = a1 + a0 and 8 = 2·a1; then a1 = 4 and a0 = 5. After (and not before) checking that A² = 4A + 5I holds for every element of A², we conclude that the required minimum polynomial is λ² − 4λ − 5.

In Problem 3, we prove

V. The minimum polynomial m(λ) of an n-square matrix A is that similarity invariant fn(λ) of A which has the highest degree.

In Problem 2, we prove

VI. If A is any n-square matrix over F and f(λ) is any polynomial over F, then f(A) = 0 if and only if the minimum polynomial m(λ) of A divides f(λ).

Since the similarity invariants of A all divide fn(λ) = m(λ), we have

VII. The characteristic polynomial of A is the product of m(λ) and certain monic factors of m(λ).

VIII. The characteristic matrix of an n-square matrix A has distinct linear elementary divisors if and only if m(λ), the minimum polynomial of A, has only distinct linear factors.

NONDEROGATORY MATRICES. An n-square matrix A whose characteristic polynomial and minimum polynomial are identical is called nonderogatory; otherwise, derogatory. It is easy to show

IX. An n-square matrix A is nonderogatory if and only if A has just one nontrivial similarity invariant.

X. If B1 and B2 have minimum polynomials m1(λ) and m2(λ) respectively, the minimum polynomial m(λ) of the direct sum D = diag(B1, B2) is

(25.5)    m(λ)  =  the least common multiple of m1(λ) and m2(λ)

This result may be extended to the direct sum of m matrices.

XI. Let g1(λ), g2(λ), ..., gm(λ) be distinct, monic, irreducible polynomials in F[λ], and let Ai be a nonderogatory matrix such that |λI − Ai| = {gi(λ)}^qi, (i = 1, 2, ..., m). Then B = diag(A1, A2, ..., Am) has

    φ(λ)  =  {g1(λ)}^q1 · {g2(λ)}^q2 ··· {gm(λ)}^qm

as both characteristic and minimum polynomial.

COMPANION MATRIX. Let A be nonderogatory with nontrivial similarity invariant

(25.1)    g(λ)  =  λ^n + a_{n-1}·λ^{n-1} + ··· + a1·λ + a0

We define as the companion matrix of g(λ) the matrix C(g) = [−a] if g(λ) = λ + a and, for n > 1,

(25.2)    C(g)  =
    |  0    1    0   ...   0    |
    |  0    0    1   ...   0    |
    | ......................... |
    |  0    0    0   ...   1    |
    | −a0  −a1  −a2  ...  −a_{n-1} |
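The routine (I)-(III) can be mechanized: compute successive powers of A and stop at the first power that is a linear combination of the lower ones. A minimal sketch with exact rational arithmetic follows; the function names are mine, not the book's.

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def solve_exact(cols, rhs):
    """Return c with sum_k c[k]*cols[k] == rhs, or None if inconsistent.
    cols and rhs are flattened matrices; arithmetic is exact."""
    rows, m = len(rhs), len(cols)
    M = [[cols[k][i] for k in range(m)] + [rhs[i]] for i in range(rows)]
    piv, r = [], 0
    for c in range(m):
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        piv.append(c)
        r += 1
    if any(M[i][m] != 0 for i in range(r, rows)):
        return None                       # inconsistent system
    sol = [Fraction(0)] * m
    for i, c in enumerate(piv):
        sol[c] = M[i][m]
    return sol

def minimum_polynomial(A):
    """Coefficients [c0, c1, ..., 1] of the monic minimum polynomial of A."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    flat = lambda M: [x for row in M for x in row]
    powers = [I]                          # I, A, A^2, ...
    while True:
        P = mat_mul(powers[-1], A)        # next power A^m
        c = solve_exact([flat(Q) for Q in powers], flat(P))
        if c is not None:
            # A^m = c[m-1]A^{m-1} + ... + c[0]I, so
            # m(lambda) = lambda^m - c[m-1]lambda^{m-1} - ... - c[0]
            return [-x for x in c] + [Fraction(1)]
        powers.append(P)

# Example 1: m(lambda) = lambda^2 - 4*lambda - 5
assert minimum_polynomial([[1, 2, 2], [2, 1, 2], [2, 2, 1]]) == [-5, -4, 1]
```

Termination is guaranteed by the Cayley-Hamilton Theorem: at worst the nth power is a combination of the lower powers.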
(25.3)    C(g)  =
    | 0  0  ...  0  −a0 |
    | 1  0  ...  0  −a1 |
    | 0  1  ...  0  −a2 |
    | .................. |
    | 0  0  ...  1  −a_{n-1} |

(Some authors prefer to define C(g) as the transpose of the matrix given in (25.2). Both forms will be used here.)

In Problem 4, we prove

XII. The companion matrix C(g) of a polynomial g(λ) has g(λ) as both its characteristic and minimum polynomial.

It is easy to show

XIII. If A is nonderogatory with nontrivial similarity invariant fn(λ) = (λ − a)^n, then

(25.4)    | a  1  0  ...  0 |
          | 0  a  1  ...  0 |
          | ............... |
          | 0  0  0  ...  1 |
          | 0  0  0  ...  a |

(= [a] if n = 1) has fn(λ) as its characteristic and minimum polynomial.

SOLVED PROBLEMS

1. Prove: Two n-square matrices A and B over F are similar over F if and only if their characteristic matrices have the same invariant factors or the same elementary divisors in F[λ].

Suppose A and B are similar, say B = R⁻¹·A·R; then λI − B = R⁻¹(λI − A)R, so that λI − A and λI − B are equivalent. By Theorems VIII and IX of Chapter 24, they have the same invariant factors and the same elementary divisors.

Conversely, suppose λI − A and λI − B have the same invariant factors or the same elementary divisors. Then by Theorems VIII and IX of Chapter 24, they are equivalent and there exist nonsingular λ-matrices P(λ) and Q(λ) such that

(i)    P(λ)·(λI − A)·Q(λ)  =  λI − B,    that is,    P(λ)·(λI − A)  =  (λI − B)·Q⁻¹(λ)

Let

(ii)     P(λ)    =  (λI − B)·S1(λ) + R1
(iii)    Q(λ)    =  S2(λ)·(λI − B) + R2
(iv)     Q⁻¹(λ)  =  S3(λ)·(λI − A) + R3

where R1, R2, R3 are free of λ. Substituting (ii) and (iv) in (i), we have

    (λI − B)·S1(λ)·(λI − A) + R1·(λI − A)  =  (λI − B)·S3(λ)·(λI − A) + (λI − B)·R3

or

(v)    (λI − B)·{S1(λ) − S3(λ)}·(λI − A)  =  (λI − B)·R3 − R1·(λI − A)

Unless S1(λ) − S3(λ) = 0, the left member of (v) is of degree at least two while the right member is of degree at most one. Thus,

(vi)    S1(λ)  =  S3(λ)    and    (λI − B)·R3  =  R1·(λI − A)

Comparing coefficients in the second relation of (vi), R3 = R1 and B·R3 = R1·A = R3·A.

Now, using (iii), (iv), and (vi),

    I  =  Q(λ)·Q⁻¹(λ)  =  Q(λ)·S3(λ)·(λI − A) + {S2(λ)·(λI − B) + R2}·R3
       =  Q(λ)·S3(λ)·(λI − A) + S2(λ)·R1·(λI − A) + R2·R3
       =  {Q(λ)·S3(λ) + S2(λ)·R1}·(λI − A) + R2·R3

Here

(vii)    Q(λ)·S3(λ) + S2(λ)·R1  =  0    and    I  =  R2·R3

since otherwise the left member would be of degree at least one in λ while the right member is of degree zero. Thus R3 is nonsingular and, from B·R3 = R3·A, B = R3·A·R3⁻¹; hence A and B are similar.

2. Prove: If A is any n-square matrix over F and f(λ) is any polynomial in F[λ], then f(A) = 0 if and only if the minimum polynomial m(λ) of A divides f(λ).

By the division algorithm, f(λ) = q(λ)·m(λ) + r(λ), where r(λ) = 0 or is of degree less than that of m(λ); then f(A) = q(A)·m(A) + r(A) = r(A). Suppose f(A) = 0; then r(A) = 0. Now if r(λ) ≠ 0, its degree is less than that of m(λ), contrary to the hypothesis that m(λ) is the minimum polynomial of A. Thus, r(λ) = 0 and m(λ) divides f(λ). Conversely, suppose f(λ) = q(λ)·m(λ); then f(A) = q(A)·m(A) = 0, as was to be proved.

3. Prove: The minimum polynomial m(λ) of an n-square matrix A is that similarity invariant fn(λ) of A which has the highest degree.

Let g_{n-1}(λ) denote the greatest common divisor of the (n−1)-square minors of λI − A. Then |λI − A| = φ(λ) = g_{n-1}(λ)·fn(λ) and

(i)    adj(λI − A)  =  g_{n-1}(λ)·B(λ)

where the greatest common divisor of the elements of B(λ) is 1. Now (λI − A)·adj(λI − A) = φ(λ)·I, so that

    (λI − A)·g_{n-1}(λ)·B(λ)  =  g_{n-1}(λ)·fn(λ)·I

and

(ii)    (λI − A)·B(λ)  =  fn(λ)·I

Thus λI − A is a divisor of fn(λ)·I, so that fn(A) = 0 and, by Problem 2, m(λ) divides fn(λ).

Suppose fn(λ) = q(λ)·m(λ). Since m(A) = 0, λI − A is a divisor of m(λ)·I, say m(λ)·I = (λI − A)·C(λ). Then, using (i) and (ii),

    (λI − A)·B(λ)  =  fn(λ)·I  =  q(λ)·m(λ)·I  =  q(λ)·(λI − A)·C(λ)

and B(λ) = q(λ)·C(λ). Now q(λ) divides every element of B(λ); hence q(λ) = 1 and fn(λ) = m(λ), as was to be proved.

4. Prove: The companion matrix C(g) of a polynomial g(λ) has g(λ) as both its characteristic and minimum polynomial.

The characteristic matrix of C(g) of (25.2) is

    λI − C(g)  =
    | λ   −1    0  ...   0 |
    | 0    λ   −1  ...   0 |
    | .................... |
    | 0    0    0  ...  −1 |
    | a0   a1   a2 ... λ + a_{n-1} |

To the first column add λ times the second column, λ² times the third column, ..., λ^{n-1} times the last column to obtain

    G(λ)  =
    | 0     −1    0  ...   0 |
    | 0      λ   −1  ...   0 |
    | ...................... |
    | 0      0    0  ...  −1 |
    | g(λ)   a1   a2 ... λ + a_{n-1} |

Since |G(λ)| = |λI − C(g)| = φ(λ) and expansion of |G(λ)| along the first column yields g(λ), the characteristic polynomial of C(g) is g(λ). Moreover, since the minor of the element g(λ) in G(λ) is ±1, the greatest common divisor of all (n−1)-square minors of λI − C(g) is g_{n-1}(λ) = 1. Then, by Problem 3, the minimum polynomial of C(g) is φ(λ)/g_{n-1}(λ) = g(λ); thus C(g) is nonderogatory and its minimum polynomial is g(λ).

5. Write the companion matrix of g(λ) = λ⁴ + 2λ³ − λ² + 6λ − 5.

By (25.2),

    C(g)  =
    | 0   1   0   0 |
    | 0   0   1   0 |
    | 0   0   0   1 |
    | 5  −6   1  −2 |

or, if preferred, its transpose as in (25.3).
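Problem 4 can be checked numerically for the polynomial of Problem 5. The sketch below builds the companion matrix in the last-row convention and verifies g(C) = 0 by Horner's rule; the helper names are mine, not the book's.

```python
def companion(coeffs):
    """Companion matrix (last-row convention) of the monic polynomial
    g(x) = x^n + coeffs[n-1]*x^(n-1) + ... + coeffs[0]."""
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        C[i][i + 1] = 1                  # superdiagonal of 1's
    C[n - 1] = [-a for a in coeffs]      # last row: -a0, -a1, ..., -a_{n-1}
    return C

def poly_of_matrix(coeffs, M):
    """Evaluate the monic polynomial with the given coefficients at the
    matrix M, using Horner's rule."""
    n = len(M)
    mul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(n))
                         for j in range(n)] for i in range(n)]
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # leading I
    for a in reversed(coeffs):
        R = mul(R, M)
        for i in range(n):
            R[i][i] += a
    return R

# g(x) = x^4 + 2x^3 - x^2 + 6x - 5, as in Problem 5
g = [-5, 6, -1, 2]
C = companion(g)
assert C[3] == [5, -6, 1, -2]
assert poly_of_matrix(g, C) == [[0] * 4 for _ in range(4)]   # g(C) = 0
```

That g(C) = 0 is exactly the Cayley-Hamilton Theorem applied to C(g), whose characteristic polynomial is g(λ) by Problem 4.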
f.d i.d. m(B^) ) = requires m(Si) = miB^) = . A1. e. Write the companion matrix of each of the following polynomials: (a) X'' + \^ . (h) m(X) A^A2 VIII.d.f.4A= i.X^ + 2X (b) (c) (X^i)(X+2) ]?()? + I) (Xlf 1 (A+2)(A^2A^ + 4A8) 1 'o 1 o" 1 1 Ans. A.) = = (Al)(A2)(A3). i.f. e. A. A+l. X^. <^(A) = m(X) m(. Prove: Every 2square matrix which (011022)^ + g(X) ). e.2\  X (d) (e) (/) X""  2X!' . 4012021/0 is nonderogatory. A. 25] THE MINIMUM POLYNOMIAL OF A MATRIX 201 SUPPLEMENTARY PROBLEMS 6.\. i. i. e. . 9.d. thus. A+l. AA+l. mi(A) and m^X) divide m(X).f. (A3) (b) <^( A) = = = = A= (c) (^(A) (Aif(A2).^(A) ie) = = . A2 A+1. if.f.4 CHAP. = e. (Ai)(A2) m(X) (/)(A) e. A+l. (A2).4A c^(A) = = e. i. 11. A'^A2. A1. (a) 1 (b) (c) 1 3 1 2 1  1 8 o" 4 1 2 3 1 1 1 o' 1 (d) 1 (e) 1 (/) 1 2 1 2 010 A= [a^] for 16 7. (Al)(A2)(A3). e. (A+D^ (A2) (A"" A 2)''. A5 . A.d. A^A if) m(A) (/)(A) A(A^l) A2(A+1)2 . A(AH)^(A1) . A2. A+l 10. Reduce G(X) of Problem 4 to diag (1 . For each of the following matrices ^. A^A2 A2. A2. (A1). 1 1 1 (ii) list the 3 2 (c) 1 1 1 2 1 2 1 1 1 2 2 2 (a) 2 3 (b) 5 2 6 id) 2 2 2 1 (e) 1 2 1 3 1" 2 _1 1 '211 4 if) 23 (g) 13 ih) 2 3 6 2 3 2 3 1 1 2 (a) 1 6 3 6 3 3 4 3 2 5 2 6 4 6 46 3 s" 32 2 43 41 6 42 4 04 1 02 1 2 1 Ans. 8. (A1) (A2) .f. A2. 1 1 . Hint. i. A. A1. A(A+l)2 is) m(A) = oS(A) A(A+1)2 .d.d. (i) find the characteristic and minimum polynomial and nontrivial invariant factors and the elementary divisors in the rational field. A+i. (A+i)(A5) = id) (A+ihA5) (A+l)(A5) A= . A1 = A.f. A. Prove Theorems VII and Prove Theorem X. A+l. A''4A m(A) = A^ . m(A) .d. m(D) = diag(m(Bi).
202    THE MINIMUM POLYNOMIAL OF A MATRIX    [CHAP. 25

12. Prove Theorem XI.

13. Prove: Every linear factor λ - λᵢ of φ(λ) is a factor of m(λ).
    Hint: The theorem follows from Theorem VII; or assume the contrary and write m(λ) = (λ - λᵢ)·q(λ) + r, r ≠ 0. Then (A - λᵢI)·q(A) + rI = 0 and A - λᵢI has an inverse.

14. Prove: If the minimum polynomial m(λ) of a non-singular matrix A is of degree s, then A⁻¹ is expressible as a scalar polynomial of degree s - 1 in A.

15. Prove: If the minimum polynomial m(λ) of A over F is irreducible in F[λ] and is of degree s, then the set of all scalar polynomials in A with coefficients in F of degree < s constitutes a field.

16. Use the minimum polynomial to find the inverse of the matrix of Problem 9(h).

17. Let Xᵢ be an invariant vector associated with a simple characteristic root of A. Prove: If A and B commute, then Xᵢ is an invariant vector of B.

18. If the matrices A and B commute, state a theorem concerning the invariant vectors of B when A has only simple characteristic roots.

19. Let A and B be square matrices and denote by m(λ) and n(λ) respectively the minimum polynomials of AB and BA. Prove: (a) m(λ) = n(λ) when not both A and B are singular; (b) m(λ) and n(λ) differ at most by a factor λ when both A and B are singular.
    Hint: A·n(BA)·B = (AB)·n(AB) = 0.

20. Prove: If g(λ) is any scalar polynomial in λ, then g(A) is singular if and only if the greatest common divisor d(λ) of g(λ) and m(λ) is not 1.
    Hint: (i) Suppose d(λ) = 1; (ii) suppose d(λ) ≠ 1 and use Theorem V.

21. Infer from Problem 20 that when g(A) is non-singular, [g(A)]⁻¹ is expressible as a polynomial in A of degree less than that of m(λ).

22. Use A = [1 1 / 0 1] to show that the minimum polynomial is not the product of the distinct factors of φ(λ).

23. Let A, B, C, D be n-square matrices over F with C and D non-singular. Prove: R(λ) = λC - A and S(λ) = λD - B have the same invariant factors if and only if there exist non-singular matrices P and Q such that PCQ = D and PAQ = B.
    Hint: Follow the proof of the corresponding problem of Chapter 22, noting that similarity is replaced by equivalence.

24. Let A be of dimension m×n and B be of dimension n×m, m > n, and denote by φ(λ) and ψ(λ) respectively the characteristic polynomials of AB and BA. Prove: φ(λ) = λ^(m-n)·ψ(λ). Use Theorem IV.

25. Prove: (a) The characteristic roots of an n-square idempotent matrix A are either 0 or 1; (b) the rank of an idempotent matrix A is the number of its characteristic roots which are 1.

26. If A is n-square and if k is the least positive integer such that A^k = 0, then A is called nilpotent of index k. Show that A is nilpotent if and only if its characteristic roots are all zero.
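The fact used in the problems above, that A⁻¹ is expressible by means of the minimum polynomial, can be checked numerically. If m(λ) = λ^s + c_{s-1}λ^{s-1} + ... + c₁λ + c₀ with c₀ ≠ 0, then m(A) = 0 gives A·(A^{s-1} + c_{s-1}A^{s-2} + ... + c₁I) = -c₀I, so A⁻¹ = -(1/c₀)(A^{s-1} + ... + c₁I). A minimal Python sketch with exact rational arithmetic; the 2-square matrix and its minimum polynomial below are an illustrative assumption, not one of the problems' matrices:

```python
from fractions import Fraction

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(A, c):
    return [[c * x for x in row] for row in A]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def inverse_from_min_poly(A, coeffs):
    # coeffs = [c0, c1, ..., c_{s-1}, 1] of m(λ) = λ^s + ... + c1·λ + c0, with c0 ≠ 0.
    # Builds P(A) = A^{s-1} + c_{s-1}A^{s-2} + ... + c1·I by Horner's scheme,
    # then returns A⁻¹ = -(1/c0)·P(A).
    n, s = len(A), len(coeffs) - 1
    acc = identity(n)
    for k in range(s - 1, 0, -1):
        acc = mat_add(mat_mul(A, acc), mat_scale(identity(n), Fraction(coeffs[k])))
    return mat_scale(acc, Fraction(-1) / Fraction(coeffs[0]))

# assumed example: m(λ) = λ² − 3λ + 2 = (λ−1)(λ−2)
A = [[Fraction(3), Fraction(-2)], [Fraction(1), Fraction(0)]]
Ainv = inverse_from_min_poly(A, [2, -3, 1])
```

Since m(λ) here has non-zero constant term, the inverse exists and is a polynomial of degree s - 1 = 1 in A, exactly as Problem 14 asserts.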
Chapter 26

Canonical Forms Under Similarity

THE PROBLEM. In Chapter 25 it was shown that the characteristic matrices of two similar n-square matrices A and R⁻¹AR over F have the same invariant factors and the same elementary divisors. In this chapter we establish representatives of the set of all matrices R⁻¹AR which are (i) simple in structure and (ii) put into view either the invariant factors or the elementary divisors. These matrices, four in number, are called canonical forms of A. They correspond to the canonical matrix N = [I_r 0 / 0 0] introduced earlier for all m×n matrices of rank r under equivalence.

THE RATIONAL CANONICAL FORM. Let A be an n-square matrix over F and suppose first that its characteristic matrix has just one non-trivial invariant factor f_n(λ). The companion matrix C(f_n) of f_n(λ) was shown in Chapter 25 to be similar to A. We define it to be the rational canonical form S of all matrices similar to A.

Suppose next that the Smith normal form of λI - A is

(26.1)    diag(1, 1, ..., 1, f_j(λ), f_{j+1}(λ), ..., f_n(λ))

with the non-trivial invariant factor f_i(λ) of degree s_i, (i = j, j+1, ..., n). We define as the rational canonical form of all matrices similar to A

(26.2)    S  =  diag(C(f_j), C(f_{j+1}), ..., C(f_n))

To show that A and S have the same similarity invariants, we note that C(f_i) is similar to D_i = diag(1, 1, ..., 1, f_i(λ)); thus, by a sequence of interchanges of two rows and the same two columns, S is similar to diag(D_j, D_{j+1}, ..., D_n) and, in turn, to diag(1, 1, ..., 1, f_j(λ), f_{j+1}(λ), ..., f_n(λ)). We have proved

I. Every square matrix A is similar to the direct sum (26.2) of the companion matrices of the non-trivial invariant factors of λI - A.

CHAP. 26]    CANONICAL FORMS UNDER SIMILARITY    203

Example 1. Let the non-trivial similarity invariants of A over the rational field be f₈(λ) = λ + 1, f₉(λ) = λ³ + 1, f₁₀(λ) = λ⁶ + 2λ³ + 1. Then

    C(f₈) = [-1],    C(f₉) =   0  1  0      C(f₁₀) =   0  1  0  0  0  0
                               0  0  1                 0  0  1  0  0  0
                              -1  0  0                 0  0  0  1  0  0
                                                       0  0  0  0  1  0
                                                       0  0  0  0  0  1
                                                      -1  0  0 -2  0  0

and S = diag(C(f₈), C(f₉), C(f₁₀)).
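The passage from the invariant factors to the form (26.2) is purely mechanical: write each companion matrix and place them down the diagonal. A minimal Python sketch; the two invariant factors chosen below are an illustrative assumption, and the companion convention used (1's on the superdiagonal, negated coefficients across the last row) is one of the two equally valid transposed conventions:

```python
from fractions import Fraction

def companion(coeffs):
    # coeffs = [c0, c1, ..., c_{s-1}] of the monic f(λ) = λ^s + c_{s-1}λ^{s-1} + ... + c0.
    # Returns the s-square companion matrix: 1's on the superdiagonal,
    # negated coefficients across the last row.
    s = len(coeffs)
    C = [[Fraction(0)] * s for _ in range(s)]
    for i in range(s - 1):
        C[i][i + 1] = Fraction(1)
    for j in range(s):
        C[s - 1][j] = Fraction(-coeffs[j])
    return C

def direct_sum(blocks):
    # Places the square blocks down the diagonal of one larger matrix.
    n = sum(len(B) for B in blocks)
    S = [[Fraction(0)] * n for _ in range(n)]
    r = 0
    for B in blocks:
        for i, row in enumerate(B):
            for j, x in enumerate(row):
                S[r + i][r + j] = x
        r += len(B)
    return S

# assumed invariant factors: λ+1 and (λ+1)(λ²+1) = λ³+λ²+λ+1
S = direct_sum([companion([1]), companion([1, 1, 1])])
```

Note that the divisibility requirement on invariant factors (each divides the next) is respected by the example: λ + 1 divides λ³ + λ² + λ + 1.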
204    CANONICAL FORMS UNDER SIMILARITY    [CHAP. 26

The direct sum diag(C(f₈), C(f₉), C(f₁₀)) is the required form of Theorem I.

Note. The order in which the companion matrices are arranged along the diagonal is immaterial. Also, using the transpose of each of the companion matrices above yields an alternate form.

A SECOND CANONICAL FORM. Let the characteristic matrix of A have as non-trivial invariant factors the polynomials f_i(λ) of (26.1), and suppose that the elementary divisors are powers of t distinct irreducible polynomials p₁(λ), p₂(λ), ..., p_t(λ) in F[λ]. Let

(26.3)    f_i(λ)  =  {p₁(λ)}^{q_i1} · {p₂(λ)}^{q_i2} ··· {p_t(λ)}^{q_it},    (i = j, j+1, ..., n)

where not every factor need appear since some of the q's may be zero.

The companion matrix C(p_k^{q_ik}) of any factor present has {p_k(λ)}^{q_ik} as its only non-trivial similarity invariant; hence C(f_i) is similar to diag(C(p₁^{q_i1}), C(p₂^{q_i2}), ..., C(p_t^{q_it})). We have

II. Every square matrix A over F is similar to the direct sum of the companion matrices of the elementary divisors over F of λI - A.

Example 2. For the matrix A of Example 1, the elementary divisors over the rational field are λ+1, λ+1, (λ+1)², λ²-λ+1, (λ²-λ+1)². Their respective companion matrices are

    [-1],   [-1],    0  1      0  1       0  1  0  0
                    -1 -2     -1  1       0  0  1  0
                                          0  0  0  1
                                         -1  2 -3  2

and the canonical form of Theorem II is their direct sum.
CHAP. 26]    CANONICAL FORMS UNDER SIMILARITY    205

THE JACOBSON CANONICAL FORM. Consider an elementary divisor {p(λ)}^q. If q = 1, use C(p), the companion matrix; if q > 1, build

(26.4)    C_q(p)  =   C(p)  M
                            C(p)  M
                                  ···
                                     C(p)  M
                                           C(p)

called the hypercompanion matrix of {p(λ)}^q, where M is a matrix of the same order as C(p) having the element 1 in the lower left-hand corner and zeroes elsewhere, with the understanding that C₁(p) = C(p). Note that in (26.4) there is a continuous line of 1's just above the diagonal.

When the alternate companion matrix C'(p) is used, the hypercompanion matrix of {p(λ)}^q is

          C'(p)
          N      C'(p)
                 N      C'(p)

where N is a matrix of the same order as C'(p) having the element 1 in the upper right-hand corner and zeroes elsewhere. In this form there is a continuous line of 1's just below the diagonal.

Example 3. Let {p(λ)}^q = (λ² + 2λ - 1)². Then

    C(p) =  0  1      M =  0  0      C₂(p) =  0  1  0  0
            1 -2           1  0               1 -2  1  0
                                              0  0  0  1
                                              0  0  1 -2
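That the hypercompanion matrix of (26.4) has {p(λ)}^q as its characteristic polynomial can be verified numerically. A sketch assuming the block layout just described (companion blocks with superdiagonal 1's, and a connecting 1 in the lower left-hand corner of each M block); the characteristic polynomial is computed by the Faddeev-LeVerrier recursion:

```python
from fractions import Fraction

def hypercompanion(p_coeffs, q):
    # p_coeffs = [c0, ..., c_{s-1}] of the monic irreducible p(λ). Builds C_q(p):
    # q companion blocks down the diagonal, plus a single 1 joining consecutive
    # blocks, so the superdiagonal line of 1's is unbroken.
    s = len(p_coeffs)
    n = s * q
    H = [[Fraction(0)] * n for _ in range(n)]
    for b in range(q):
        off = b * s
        for i in range(s - 1):
            H[off + i][off + i + 1] = Fraction(1)
        for j in range(s):
            H[off + s - 1][off + j] = Fraction(-p_coeffs[j])
        if b < q - 1:
            H[off + s - 1][off + s] = Fraction(1)   # corner element of the M block
    return H

def charpoly(A):
    # Faddeev-LeVerrier recursion: returns [c0, c1, ..., 1] of det(λI − A).
    n = len(A)
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    c = [Fraction(0)] * (n + 1)
    c[n] = Fraction(1)
    M = [row[:] for row in A]
    for k in range(1, n + 1):
        c[n - k] = -sum(M[i][i] for i in range(n)) / k
        if k < n:
            B = [[M[i][j] + c[n - k] * I[i][j] for j in range(n)] for i in range(n)]
            M = [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
                 for i in range(n)]
    return c

H = hypercompanion([-1, 2], 2)   # p(λ) = λ² + 2λ − 1, q = 2
```

Since C_q(p) is block triangular, det(λI - C_q(p)) = det(λI - C(p))^q = {p(λ)}^q regardless of the M blocks; the recursion confirms it for this example.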
206    CANONICAL FORMS UNDER SIMILARITY    [CHAP. 26

In Problem 1 it is shown that C_q(p) has {p(λ)}^q as its only non-trivial similarity invariant. Thus, C_q(p) is similar to C(p^q) and may be substituted for it in the canonical form of Theorem II. We have

III. Every square matrix A over F is similar to the direct sum of the hypercompanion matrices of the elementary divisors over F of λI - A.

Example 4. For the matrix A of Example 2, the hypercompanion matrices of the elementary divisors λ+1, λ+1, λ²-λ+1 are their companion matrices; the hypercompanion matrix of (λ+1)² is

    -1  1
     0 -1

and that of (λ²-λ+1)² is

     0  1  0  0
    -1  1  1  0
     0  0  0  1
     0  0 -1  1

Thus, the canonical form of Theorem III is the direct sum of these five matrices.

The use of the term "rational" in connection with the canonical form of Theorem I is somewhat misleading. It was used originally to indicate that in obtaining the canonical form only rational operations in the field of the elements of A are necessary. But this is, of course, also true of the canonical forms of Theorems II and III. To add further to the confusion, the canonical form of Theorem III is sometimes called the rational canonical form.

THE CLASSICAL CANONICAL FORM. Let the elementary divisors of λI - A be powers of linear polynomials, each of the form (λ - a_i)^q. The canonical form of Theorem III is then the direct sum of hypercompanion matrices of the form

(26.5)    C_q(p)  =   a_i  1
                           a_i  1
                                ···  1
                                     a_i

each corresponding to an elementary divisor (λ - a_i)^q. [Note that C_q(p) of (26.5) is of the type J of Chapter 25.] We have

IV. Let F' be the field in which the characteristic polynomial of a matrix A factors into linear polynomials. Then A is similar over F' to the direct sum of hypercompanion matrices of the form (26.5). This special case of the canonical form of Theorem III is known as the Jordan or classical canonical form.

Example 5. Let the elementary divisors over the complex field of λI - A be λ-i, (λ-i)², (λ+i)².
CHAP. 26]    CANONICAL FORMS UNDER SIMILARITY    207

The classical canonical form of A is

     i  0  0  0  0
     0  i  1  0  0
     0  0  i  0  0
     0  0  0 -i  1
     0  0  0  0 -i

From Theorem IV follows

V. An n-square matrix A is similar to a diagonal matrix if and only if the elementary divisors of λI - A are linear polynomials, that is, if and only if the minimum polynomial of A is the product of distinct linear polynomials.

See Problems 2-4.
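Theorem V can be tested directly: if the minimum polynomial is a product of distinct linear factors, then substituting A into it must give the zero matrix. A minimal Python check; the 3-square matrix is an assumed example with m(λ) = (λ - 1)(λ + 1) = λ² - 1:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shift(A, c):
    # returns A + c·I
    n = len(A)
    return [[A[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]

A = [[2, -6, 3], [1, -3, 1], [1, -2, 0]]   # assumed example; A² = I
Z = mat_mul(shift(A, -1), shift(A, 1))     # m(A) = (A − I)(A + I)
```

Because m(λ) = λ² - 1 annihilates A and has distinct linear factors, Theorem V guarantees that this A is similar to a diagonal matrix with entries 1 and -1.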
A REDUCTION TO RATIONAL CANONICAL FORM. In concluding this discussion of canonical forms, it will be shown that a reduction of any n-square matrix to its rational canonical form can be made, at least theoretically, without having prior knowledge of the invariant factors of λI - A. A somewhat different treatment of this can be found in Dickson, L. E., Modern Algebraic Theories, Benj. H. Sanborn, 1926. Some improvement on purely computational aspects is made in Browne, E. T., American Mathematical Monthly, vol. 48 (1940).

We shall need the following definitions:

If A is an n-square matrix and X is an n-vector over F, and if g(λ) is the monic polynomial in F[λ] of minimum degree such that g(A)·X = 0, then with respect to A the vector X is said to belong to g(λ).

If, with respect to A, the vector X belongs to g(λ) of degree p, the linearly independent vectors X, AX, A²X, ..., A^(p-1)X are called a chain having X as its leader.
Example 6. Let

    A =  2 -6  3
         1 -3  1
         1 -2  0

The vectors X = [1, 0, 0]' and AX = [2, 1, 1]' are linearly independent, while A²X = X. Then (A² - I)X = 0 and X belongs to the polynomial λ² - 1. For Y = [1, 0, -1]', AY = [-1, 0, 1]' = -Y; thus (A + I)Y = 0 and Y belongs to the polynomial λ + 1.

If m(λ) is the minimum polynomial of an n-square matrix A, then m(A)·X = 0 for every n-vector X. Thus, there can be no chain of length greater than the degree of m(λ). For the matrix of Example 6, the minimum polynomial is λ² - 1.
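Finding the polynomial to which a vector belongs is itself mechanical: extend the chain X, AX, A²X, ... until it first becomes linearly dependent, then solve for the dependence. A minimal Python sketch using exact rational arithmetic; it is checked on a 3-square matrix satisfying A² = I (the explicit entries are an assumption of this sketch), so that X = [1,0,0]' belongs to λ² - 1 just as in Example 6:

```python
from fractions import Fraction

def express(v, basis):
    # Gaussian elimination: returns c with v = Σ c_k·basis[k],
    # or None if v is linearly independent of basis.
    n, k = len(v), len(basis)
    aug = [[basis[t][i] for t in range(k)] + [v[i]] for i in range(n)]
    row, pivots = 0, []
    for col in range(k):
        piv = next((r for r in range(row, n) if aug[r][col] != 0), None)
        if piv is None:
            continue
        aug[row], aug[piv] = aug[piv], aug[row]
        aug[row] = [x / aug[row][col] for x in aug[row]]
        for r in range(n):
            if r != row and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[row])]
        pivots.append(col)
        row += 1
    if any(aug[r][k] != 0 for r in range(row, n)):
        return None
    sol = [Fraction(0)] * k
    for r, col in enumerate(pivots):
        sol[col] = aug[r][k]
    return sol

def belongs_to(A, X):
    # Extends the chain X, AX, A²X, ... until A^p·X first depends on its
    # predecessors; returns the coefficients [g0, g1, ..., 1] of the monic
    # g(λ) of least degree with g(A)·X = 0.
    n = len(A)
    chain = [[Fraction(x) for x in X]]
    while True:
        v = [sum(A[i][j] * chain[-1][j] for j in range(n)) for i in range(n)]
        c = express(v, chain)
        if c is not None:
            return [-ck for ck in c] + [Fraction(1)]
        chain.append(v)

# a 3-square matrix with A² = I (entries assumed for this illustration)
A = [[2, -6, 3], [1, -3, 1], [1, -2, 0]]
```

Since no chain can be longer than the degree of m(λ), the loop above always terminates within deg m(λ) steps.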
Let S be the rational canonical form of the n-square matrix A over F. Then there exists a non-singular matrix R over F such that

(26.6)    R⁻¹AR  =  S  =  diag(C_j, C_{j+1}, ..., C_n)

where, for convenience, C(f_i) in (26.2) has been replaced by C_i. We shall assume that C_i, the companion matrix of the invariant factor

    f_i(λ)  =  λ^s_i + c_{i,s_i}·λ^(s_i - 1) + ··· + c_{i2}·λ + c_{i1}

has the form

           0  0  ···  0  -c_{i1}
           1  0  ···  0  -c_{i2}
    C_i =  0  1  ···  0  -c_{i3}
           ·  ·       ·   ·
           0  0  ···  1  -c_{i,s_i}

208    CANONICAL FORMS UNDER SIMILARITY    [CHAP. 26
From (26.6) we have

(26.7)    AR  =  RS  =  R·diag(C_j, C_{j+1}, ..., C_n)

Let R be separated into column blocks R_j, R_{j+1}, ..., R_n, where R_i and C_i, (i = j, j+1, ..., n), have the same number of columns. From (26.7),

    AR  =  A[R_j, R_{j+1}, ..., R_n]  =  [R_j, R_{j+1}, ..., R_n]·diag(C_j, C_{j+1}, ..., C_n)  =  [R_j·C_j, R_{j+1}·C_{j+1}, ..., R_n·C_n]

so that A·R_i = R_i·C_i, (i = j, j+1, ..., n).

Denote the s_i column vectors of R_i by R_{i1}, R_{i2}, ..., R_{i,s_i} and form the product

    R_i·C_i  =  [R_{i1}, R_{i2}, ..., R_{i,s_i}]·C_i  =  [R_{i2}, R_{i3}, ..., R_{i,s_i}, -Σ_k c_{ik}·R_{ik}]

Since A·R_i = [A·R_{i1}, A·R_{i2}, ..., A·R_{i,s_i}] = R_i·C_i, we have

(26.8)    R_{i2} = A·R_{i1},   R_{i3} = A·R_{i2} = A²·R_{i1},   ...,   R_{i,s_i} = A^(s_i - 1)·R_{i1}

and

(26.9)    A·R_{i,s_i}  =  -Σ_{k=1}^{s_i} c_{ik}·R_{ik}

Substituting into (26.9) from (26.8), we obtain

(26.10)    (A^s_i + c_{i,s_i}·A^(s_i - 1) + ··· + c_{i2}·A + c_{i1}·I)·R_{i1}  =  0

From the definition of C_i above, (26.10) may be written as

(26.11)    f_i(A)·R_{i1}  =  0

Let R_{i1} be denoted by X_i, so that (26.11) becomes f_i(A)·X_i = 0; then, since X_i, A·X_i, ..., A^(s_i - 1)·X_i are linearly independent, the vector X_i belongs to the invariant factor f_i(λ). Thus, the column vectors of R_i consist of the vectors of the chain having X_i, belonging to f_i(λ), as its leader.

To summarize: the n linearly independent columns of R, satisfying (26.6), consist of n - j + 1 chains

    X_i, A·X_i, ..., A^(s_i - 1)·X_i,    (i = j, j+1, ..., n)

whose leaders belong to the respective invariant factors f_j(λ), f_{j+1}(λ), ..., f_n(λ) and whose lengths satisfy the condition s_j ≤ s_{j+1} ≤ ··· ≤ s_n.

We have
VI. For a given n-square matrix A over F:
    (i)   let X_n be the leader of a chain C_n of maximum length for all n-vectors over F;
    (ii)  let X_{n-1} be the leader of a chain C_{n-1} of maximum length (any member of which is linearly independent of the preceding members and those of C_n) for all n-vectors over F which are linearly independent of the vectors of C_n;
    (iii) let X_{n-2} be the leader of a chain C_{n-2} of maximum length (any member of which is linearly independent of the preceding members and those of C_n and C_{n-1}) for all n-vectors over F which are linearly independent of the vectors of C_n and C_{n-1};
and so on. Then, for

CHAP. 26]    CANONICAL FORMS UNDER SIMILARITY    209

    R  =  [..., X_{n-2}, A·X_{n-2}, ..., X_{n-1}, A·X_{n-1}, ..., X_n, A·X_n, ...]

R⁻¹AR is the rational canonical form of A.
Example 7. Let

    A =  1  1  1
         1  2  2
         1  3  2

Take X = [1, 0, 0]'; then AX = [1, 1, 1]' and A²X = [3, 5, 6]' are linearly independent, while A³X = [14, 25, 30]' = 5·A²X - X. Thus, (A³ - 5A² + I)X = 0 and X belongs to m(λ) = λ³ - 5λ² + 1 = φ(λ). Taking

    R  =  [X, AX, A²X]  =  1  1  3
                           0  1  5
                           0  1  6

we find

    AR  =  [AX, A²X, A³X]  =  1  3  14
                              1  5  25
                              1  6  30

and

    R⁻¹AR  =  0  0  -1
              1  0   0
              0  1   5

Here A is non-derogatory with minimum polynomial m(λ) irreducible over the rational field. Every 3-vector over this field belongs to m(λ) (see Problem 11) and leads a chain of length three.
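A computation of this kind can be verified mechanically: with the chain vectors as the columns of R, each column of R⁻¹AR is obtained by solving R·y = A·(column of R). A minimal Python sketch with exact rational arithmetic; the 3-square matrix is a sample chosen for this illustration, with X = [1,0,0]' belonging to λ³ - 5λ² + 1:

```python
from fractions import Fraction

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solve(R, b):
    # Solves R·y = b (R non-singular) by Gaussian elimination over the rationals.
    n = len(R)
    aug = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(R, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b2 for a, b2 in zip(aug[r], aug[col])]
    return [aug[r][n] for r in range(n)]

# sample 3-square A; X = [1,0,0]' leads a chain of length three
A = [[1, 1, 1], [1, 2, 2], [1, 3, 2]]
X = [1, 0, 0]
chain = [X, mat_vec(A, X), mat_vec(A, mat_vec(A, X))]     # X, AX, A²X
R = [[chain[j][i] for j in range(3)] for i in range(3)]   # chain vectors as columns
S = [[0] * 3 for _ in range(3)]
for j in range(3):                                        # column j of R⁻¹AR
    col = solve(R, mat_vec(A, [R[i][j] for i in range(3)]))
    for i in range(3):
        S[i][j] = col[i]
# S is the companion matrix of m(λ) = λ³ − 5λ² + 1: 1's below the diagonal
# and the negated coefficients 1, 0, −5 down the last column.
```

The first two columns of S come out as unit vectors automatically, since A·(column k of R) is column k+1 of R; only the last column carries the coefficients of m(λ).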
The matrix R having the vectors of any chain as column vectors is such that R⁻¹AR = S.

Example 8. For another 3-square matrix A, take X = [1, 1, 0]'; then AX = X and X belongs to λ - 1. Now λ - 1 cannot be the minimum polynomial m(λ) of A. It is, however, a divisor of m(λ) (see Problem 11) and could be a similarity invariant of A.

Next, take Y = [1, 0, 0]'. The vectors Y, AY = [2, 1, 2]', A²Y = [11, 8, 8]' are linearly independent, while A³Y = [54, 43, 46]' = 5·A²Y + 3·AY - 7·Y. Thus, Y belongs to m(λ) = λ³ - 5λ² - 3λ + 7 = φ(λ). The polynomial λ - 1 is not a similarity invariant; in fact, unless the first choice of vector belongs to a polynomial which could reasonably be the minimum function, it should be considered a false start. The reader may verify that

    R⁻¹AR  =  0  0  -7
              1  0   3
              0  1   5

when R = [Y, AY, A²Y].

See Problems 5-6.
SOLVED PROBLEMS

1. Prove: The matrix C_q(p) of (26.4) has {p(λ)}^q as its only non-trivial similarity invariant.

   Let C_q(p) be of order s. The minor of the element in the last row and first column of λI - C_q(p) is ±1, so that the greatest common divisor of all (s-1)-square minors of λI - C_q(p) is 1. Then the invariant factors of λI - C_q(p) are 1, 1, ..., 1, f_s(λ). But

       f_s(λ)  =  φ(λ)  =  |λI - C_q(p)|  =  |λI - C(p)|^q  =  {p(λ)}^q

210    CANONICAL FORMS UNDER SIMILARITY    [CHAP. 26
2. The canonical form (a) is that of Theorems I and II, the non-trivial invariant factor and elementary divisor being λ⁴ + 4λ³ + 6λ² + 4λ + 1 = (λ+1)⁴. The canonical form of Theorem III is (b).

    (a)   0  1  0  0        (b)  -1  1  0  0
          0  0  1  0              0 -1  1  0
          0  0  0  1              0  0 -1  1
         -1 -4 -6 -4              0  0  0 -1

3. The canonical form (a) is that of Theorem I, the invariant factors being λ+2, λ²-4, λ³+3λ²-4λ-12 and the elementary divisors being λ+2, λ+2, λ+2, λ-2, λ-2, λ+3. The canonical form of both Theorems II and III is (b).

    (a)  -2  0  0  0  0  0        (b)  -2  0  0  0  0  0
          0  0  1  0  0  0              0 -2  0  0  0  0
          0  4  0  0  0  0              0  0 -2  0  0  0
          0  0  0  0  1  0              0  0  0  2  0  0
          0  0  0  0  0  1              0  0  0  0  2  0
          0  0  0 12  4 -3              0  0  0  0  0 -3

4. The canonical form (a) is that of Theorem III. Over the rational field the elementary divisors are λ+2, λ+2, λ²+2λ-1, (λ²+2λ-1)², and the invariant factors are

    (λ+2)(λ²+2λ-1)  =  λ³+4λ²+3λ-2    and    (λ+2)(λ²+2λ-1)²  =  λ⁵+6λ⁴+10λ³-7λ+2

The canonical form of Theorem I is (b) and that of Theorem II is (c).

    (a)  =  diag( [-2],  [-2],   0  1  ,   0  1  0  0 )
                                 1 -2      1 -2  1  0
                                           0  0  0  1
                                           0  0  1 -2

    (b)  =  diag(  0  1  0  ,   0  1  0  0  0 )
                   0  0  1      0  0  1  0  0
                   2 -3 -4      0  0  0  1  0
                                0  0  0  0  1
                               -2  7  0 -10 -6
CHAP. 26]    CANONICAL FORMS UNDER SIMILARITY    211

    (c)  =  diag( [-2],  [-2],  C(λ²+2λ-1),  C(λ⁴+4λ³+2λ²-4λ+1) )

5.-6. In each of these problems a matrix A is reduced to its rational canonical form by the procedure of Theorem VI. A trial vector X is taken and its chain X, AX, A²X, ... is extended until a dependence is reached, so that X belongs to some monic polynomial g(λ); this polynomial is tentatively assumed to be the minimum polynomial m(λ), and X is labeled as the leader X_n of the longest chain. A further vector Z, linearly independent of the chain already found, is then tested in the same way, leading to chains for the remaining non-trivial invariant factors. When the chains are complete, R is built with the chain vectors as columns and R⁻¹AR is the rational canonical form of A. A first choice of vector which belongs to a polynomial that could not reasonably be the minimum function is a false start (compare Example 8).

212    CANONICAL FORMS UNDER SIMILARITY    [CHAP. 26

SUPPLEMENTARY PROBLEMS

7.  For each of the matrices (a)-(h) of Problem 9, Chapter 25, write the canonical matrices of Theorems I, II, III over the rational field. Can any of these matrices be changed by enlarging the number field?
    Partial Ans. (a) diag(1, 2, 3) in each case.

8.  Under what conditions will (a) the canonical forms of Theorems I and II be identical? (b) the canonical forms of Theorems II and III be identical? (c) the canonical form of Theorem II be diagonal?

9.  Identify the canonical form

        0  1  0
        0  0  1
        0  0  0

    Check with the answer to Problem 8(b).

10. Let the non-singular matrix A have non-trivial invariant factors (a) λ³+1, (λ³+1)²; (b) λ²+1, (λ²+1)². Write the canonical forms of Theorems I, III over the rational field and that of Theorem IV.
    Partial Ans. For (a), the form of Theorem IV involves the roots -1 and α, β = ½(1 ± i·√3) of λ³ + 1.

CHAP. 26]    CANONICAL FORMS UNDER SIMILARITY    213

11. Prove: If, with respect to an n-square matrix A, the vector X belongs to g(λ), then g(λ) divides the minimum polynomial m(λ) of A.
    Hint: Suppose the contrary and consider m(λ) = h(λ)·g(λ) + r(λ).

12. In Example 6, show that X, AX, and Y are linearly independent, and then reduce A to its rational canonical form.

13. Rework Problem 6, making other choices of the vectors whose chains build R, and compute R⁻¹AR.

14. For each of the matrices A of Problem 9(a)-(h), Chapter 25, find R such that R⁻¹AR is the rational canonical form of A.

214    CANONICAL FORMS UNDER SIMILARITY    [CHAP. 26

15. Solve a system of linear differential equations

        dx_i/dt  =  a_i1·x₁ + a_i2·x₂ + a_i3·x₃ + a_i4·x₄ + h_i(t),    (i = 1, 2, 3, 4)

    where the x_i are unknown functions of the real variable t.
    Hint: Define X = [x₁, x₂, x₃, x₄]' and dX/dt = [dx₁/dt, dx₂/dt, dx₃/dt, dx₄/dt]', and rewrite the system as (i) dX/dt = AX + H. The non-singular linear transformation X = RY carries (i) into dY/dt = R⁻¹ARY + R⁻¹H; choose R so that R⁻¹AR is the rational canonical form of A, solve for Y (the general solution contains arbitrary constants C₁, ..., C₄), and then obtain X = RY.
155 12 13 of a product. 49 determinant of. 3 Congruent matrices.INDEX Absolute value of a complex number. 13 symmetric and skewsymmetric parts. of a sum.
111 GramSchmidt process. 77 Equivalence relation. 146 derivative of. 156 of an Hermitian matrix. 70 of matrices. 157 Diagonalization by orthogonal transformation. 30 of elementary transformation matrix. 136 Equations. 19 Dot product. 88 transformations. 132 system of homogeneous. 64 Field of values. 55 elementary transformation. 33 multiplication by scalar. 149 of a diagonal matrix. 147 signature of. 33 along a row (column). 172 Jacobson canonical form. 94 of a vector space. 163 of similar matrices. 30 of conjugate transpose of a matrix. 39 Invariant vector(s) definition of. 147 146 Hermitian forms equivalence of. 100 Index of an Hermitian form. 149 Leader of a chain. 78 system of nonhomogeneous. 33 Latent roots (vectors). 149 Elementary matrices. 75 Lagrange's reduction. of. 100. 188 Field. 117. 39 of transpose of a matrix. 10 Image 1 Diagonal elements of a square matrix. 135 Left divisor. 39 matrix. 64 matrices. 110 Intersection space. 40. 164 of a real symmetric matrix. 171 First minor.216 Determinant definition of. 23 by Laplace method. 179 Laplace's expansion. 133. 63 Linear combination of vectors. 22 of conjugate of a matrix. 33 of singular matrix. 73 of vectors. 11 Equality of matrices. Greatest 102. 164 of a normal matrix. 126 Lambda Hermitian forms. 11. 9 Equivalent bilinear forms. 13. matrix. 156 Inverse of a (an) diagonal matrix. 2 matrix polynomials. 11 Identity matrix. 146 matrices. 42 of nonsingular matrix. 20 INDEX Hermitian form canonical form definite. 41 nvectors. 68 Factorization into elementary matrices. 3 Divisors of zero. 205 Jordan (classical) canonical form. 180 Left inverse. 39 of product of matrices. 103. 207 Leading principal minors. 205 Idempotent matrix. 188 quadratic forms. 10. 156 of a vector. 147 of a real quadratic form. 13 Distributive law for fields. 21 147 index of. 164 Hypercompanion matrix. 58 Involutory matrix. 55 product of matrices. 164 Dimension of a vector space. 10 Matrices congruent. 206 Kronecker's reduction. 
68 Linear dependence (independence) of forms. 76 matrix. 11 symmetric matrix. 55 direct sum. 147 rank of. Hermitian matrix. 133 Inner product. 33 expansion of along first row and column. 146 semidefinite. linear equivalent systems solution of. 163 by unitary transformation. 115 equal. 173 over a field. 40 common divisor. 179 (scalar) polynomials. 111 equivalent. 75 of. 149 Eigenvector. 65 . 86 Direct sum. 87 Eigenvalue. 131. 134 systems of linear equations. 22 Lower triangular matrix. 95 Diagonable matrices. 2 Gramian. 43.
41 of a bilinear form. 163 transformation. 134 signature of. 179 scalar. 55 lambda. 172 scalar matrix. 131 of sum. 179 nilpotent. 179 positive definite (semidefinite). 2 similar. 164 rank inverse of. 11. 12 transpose Quadratic form canonical form of. 112. 39 scalar. 11 of. 39 skewHermitian. 164 . 48 Nonderogatory matrix. 133 Quadratic forms equivalence Negative definite form (matrix). 134. 131 order of. 100. 87 wvector. 1 217 Null space. 131 reduction of Matrix Polynoniial(s) definition of. Orthogonal congruence. 110 Orthonormal basis. 147 of a matrix. 163 matrix. 50 conjugate of. 179 sum of. 163 periodic. 131. 134. 138 rank of. 134. 10 elementary row (column). 117. 180 singular (nonsingular). 10. 147 matrices. 117 symmetric. 115. 134 index of. 197 nonsingular. 11 inverse of. 179 degree of.) product of. 11 nonderogatory. 133 semidefinite. 134. 3 Kronecker. 11 of. 134 definition of. 179 monic. 125 of an Hermitian form. 39 of product. 172 scalar. 12. 147 quadratic forms. 4 Periodic matrix. 125 of Hermitian form. 111 Partitioning of matrices. 136 Lagiange. 99 Polynomial domain. 33 rank of. 13. 157 diagonal. 95. 157 unitary. 43 of quadratic form. 39 Normal form of a matrix. real definite. 179 Minimum polynomial. 197 diagonable. 102. 13 determinant of. 156 square. 164 Permutation matrix. 131 factorization of. 10 singular. 13. 135 Product of matrices adjoint of. 103. 85 sum Matrix of. 146 of matrix. 43 of. 132 regular. 134 permutation. 39 Hermitian. 4 of matrices. 179 product of. 11 similarity. 157. 163 triangular. 1 orthogonal. 196 Multiplication in partitioned form. 12. 87 Nullity. 2 semidefinite form (matrix). 108 1 definition of. 50 of bilinear form.INDEX Matrices (cont. 134 Rank of adjoint. 172 matrix. 87 of. 180 Positive definite (semidefinite) normal form nullity of. 133. 41 elementary transformation of. 164 idempotent. 118 skewsymmetric. 41 Normal matrix. 146 of a quadratic form. 135 Quadratic form. 197 Nonsingular matrix. 134 Principal minor definition of. 
147 Nilpotent matrix. 39 normal. 11 Hermitian forms. 1 derogatory. 103 vectors. 147 leading. 2 Order of a matrix. 99 polynomial. 163 equivalence. 179 proper (improper). 133. 3 scalar multiple of.
10 Vector(s) Span. 95. 100 invariant. 149 length of. 187 INDEX Symmetric matrix deiinition of. 39 Transformation elementary. 207 coordinates of. 63 Root of polynomial. 116 Similar matrices. 147 of Hermitian matrix. 103 singular. 101. 178 of scalar matrix polynomial. 86 definition of. 100 . 2 polynomial. 112 Upper triangular matrix. 10 matrix polynomial. 2 vector spaces. 112 Scalar matrix. 12 (cont. 157 transformation. 13. 196 Similarity invariants. 1 Row equivalent matrices. 109 law Vector space basis of. 110 normalized. 118 Skewsymmetric matrix. 102 orthogonal. 163 dimension of. 88 definition of. 112 similarity. 110 Triangular matrix. 133 of nullity. 39 linear. 101 Unitary matrix. 40 space of a matrix. 75 Trace.) invariant vectors of. 110 over the real field. 11 of a product. 67 inner product of. 86 over the complex field. 117 Smith normal form. 100. 196 Singular matrix. 24 Sum of matrices. 85 Spectral decomposition. 118 of real quadratic form. 172 product of two vectors (see inner product) Schwarz Inequality. 11 Triangular inequality. 133 of real symmetric matrix. 88 Symmetric matrix characteristic roots of. 12. 163 System(s) of Equations. 101. 170 Spur (see Trace) Submatrix. 10. 12 of a sum. 180 multiple of a matrix. 93 transformation. 87 Sylvester's belonging to a polynomial. 95 unitary. 188 Transpose of a matrix. 110 Secular equation (see characteristic equation) Signature of Hermitian form. 85 of inertia.218 Right divisor. 157 Unit vector. 100 vector product of. 94 orthogonal. 39 SkewHermitian matrix. 180 Right inverse.
172 179 180 189 39 39 H^. X Y \\n A S ^•k A'A': 3 G 10 11 zxy Q p s 109 115 116 A' / A° 11 I. 110 103. <^(A) a.A°^ det q \A\. A^ (C) 40 43 NiX) fi(^) N adj 189 196 198 198 A 49 m(A) C(g) J F X.] XY. . K^ik) F[A] ^(A) AjfiC). 111 Hj [^i. K.. (vector) Page 88 100.. A. 110 100. r 23 39 39 E^. (matrix) /(A) h^ij Hi(k). 146 \^ij\ 22 im i^ 23 149 149 170 172 ^ii'i2 ii..j(k) •V. is. ^ 20 h A.Index of Symbols Symbol Page 1 1 1 Symbol E^. X^ 64 67 yn(f) V^tiF) 85 S Cq(p) 203 86 87 205 ^A 219 . A' 12 13 116 131 A*..{k).