ALGEBRA

A TEXT-BOOK OF DETERMINANTS, MATRICES, AND ALGEBRAIC FORMS

BY
W. L. FERRAR, M.A.
FELLOW AND TUTOR OF HERTFORD COLLEGE, OXFORD

SECOND EDITION

OXFORD UNIVERSITY PRESS
Oxford University Press, Amen House, London E.C.4
GLASGOW  NEW YORK  TORONTO  MELBOURNE  WELLINGTON  BOMBAY  CALCUTTA  MADRAS  KARACHI  KUALA LUMPUR  CAPE TOWN  IBADAN  NAIROBI  ACCRA

FIRST EDITION 1941
Reprinted lithographically in Great Britain by Lowe & Brydone, Printers, Ltd., London, from sheets of the first edition, 1942, 1946, 1948, 1950, 1953
SECOND EDITION 1957
Reprinted
PREFACE TO SECOND EDITION

IN revising the book for a second edition I have corrected a number of slips and misprints and I have added a new chapter on latent vectors.

W. L. F.
HERTFORD COLLEGE, OXFORD, 1957
PREFACE TO FIRST EDITION

IN writing this book I have tried to provide a textbook of the more elementary properties of determinants, matrices, and algebraic forms. I have written it on the same general plan as that adopted in my book on convergence: I have included topics commonly required for a university honours course in pure and applied mathematics; I have excluded topics appropriate to postgraduate or to highly specialized courses of study.

The book as a whole is written primarily for undergraduates. Parts I and II, the sections on determinants and matrices, are, in my view, suitable either for undergraduates or for boys in their last year at school. Part III is suitable for study at a university and is not intended to be read at school.

University teaching in mathematics should, in my view, provide at least two things. The first is a broad basis of knowledge comprising such theories and theorems in any one branch of mathematics as are of constant application in other branches. The second is incentive and opportunity to acquire a detailed knowledge of some one branch of mathematics. The books available make reasonable provision for the latter, especially if the student has, as he should have, a working knowledge of at least one foreign language. But we are deplorably lacking in books that cut down each topic, I will not say to a minimum, but to something that may reasonably be reckoned as an essential part of an undergraduate's mathematical education. A book written expressly for undergraduates and dealing with one or more of these topics would be a valuable addition to our stock of university textbooks.

Some of the books to which I am indebted may well serve as a guide for my readers to further algebraic reading. Without pretending that the list exhausts my indebtedness to others, I may note the following: Scott and Matthews, The Theory of Determinants; Bocher, Introduction to Higher Algebra; Dickson, Modern Algebraic Theories; Aitken, Determinants and Matrices; Turnbull, The Theory of Determinants, Matrices, and Invariants; Turnbull and Aitken, The Theory of Canonical Matrices; Elliott, Introduction to the Algebra of Quantics; Salmon, Modern Higher Algebra (though the 'modern' refers to some sixty years back); Burnside and Panton, Theory of Equations.

The omissions from this book are many and, I hope, all of them deliberate. It would have been easy to fit in something about the theory of equations and eliminants, or to digress at one of several possible points in order to introduce the notion of a group, or to enlarge upon number rings and fields so as to give some hint of modern abstract algebra; but I think little is to be gained by references to such subjects when it is not intended to develop them seriously.

In part, the book was read, while still in manuscript, by my friend and colleague the late Mr. J. Hodgkinson, whose excellent lectures on Algebra will be remembered by many Oxford men, and whose criticisms enabled me to remove some notable faults from the text. In the exacting task of reading proofs and checking references I have again received invaluable help from Professor E. T. Copson, who has read all the proofs once and checked nearly all the examples; I am deeply grateful to him for this work. I gratefully acknowledge, too, my debt to Professor E. T. Whittaker, whose invaluable 'research lectures' on matrices I studied at Edinburgh many years ago, though the reference will be useless to my readers.

Finally, I wish to thank the staff of the University Press, both on its publishing and on its printing side, for their excellent work on this book. I have been concerned with the printing of mathematical work (mostly that of other people!) for many years, and I still marvel at the patience and skill that go to the printing of a mathematical book or periodical.

W. L. F.
HERTFORD COLLEGE, OXFORD. September 1940.
CONTENTS

PART I. DETERMINANTS
PRELIMINARY NOTE: NUMBER, NUMBER FIELDS, AND MATRICES
I. ELEMENTARY PROPERTIES OF DETERMINANTS
II. THE MINORS OF A DETERMINANT
III. THE PRODUCT OF TWO DETERMINANTS
IV. JACOBI'S THEOREM AND ITS EXTENSIONS
V. SYMMETRICAL AND SKEW-SYMMETRICAL DETERMINANTS

PART II. MATRICES
VI. DEFINITIONS AND ELEMENTARY PROPERTIES
VII. RELATED MATRICES
VIII. THE RANK OF A MATRIX
IX. DETERMINANTS ASSOCIATED WITH MATRICES

PART III. LINEAR AND QUADRATIC FORMS
X. ALGEBRAIC FORMS: LINEAR TRANSFORMATIONS
XI. THE POSITIVE-DEFINITE FORM
XII. ORTHOGONAL TRANSFORMATIONS
XIII. THE CHARACTERISTIC EQUATION AND CANONICAL FORMS
XIV. INVARIANTS AND COVARIANTS
XV. LATENT VECTORS

INDEX
PART I
DETERMINANTS

PRELIMINARY NOTE

1. Number

In its initial stages algebra is little more than a generalization of elementary arithmetic; it deals only with the positive integers. We can all remember the type of problem that began 'let x be the number of eggs', and if x came to 3 1/2 we knew we were wrong. In later stages x is permitted to be negative or zero, to be the ratio of two integers, such as 3 1/2, and then to be any real number, whether rational or irrational, such as pi or the square root of 3. Finally, x is permitted to be a complex number, such as 2+3i.

The numbers used in this book may be either real or complex, and we shall assume that readers have studied, to a greater or a lesser extent, the precise definitions of these numbers and the rules governing their addition, subtraction, multiplication, and division.

2. Number rings

Consider the set of numbers
    0, +1, -1, +2, -2, +3, -3, ....        (1)
Let r, s denote numbers selected from (1), whether r and s denote the same or different numbers. Then the numbers r+s, r-s, r*s all belong to (1).

This property of the set (1) is shared by other sets of numbers. For example, all numbers of the form
    a + b*sqrt(6),        (2)
where a and b belong to (1), have the same property: if r and s belong to (2), then so do r+s, r-s, r*s. A set of numbers having this property is called a RING of numbers.
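The closure property of the ring (2) can be checked mechanically. The sketch below (an editorial illustration, not part of the text) represents a + b*sqrt(6) as the integer pair (a, b); the multiplication rule follows from (sqrt 6)^2 = 6.

```python
# Represent a + b*sqrt(6), with integers a and b, as the pair (a, b).
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def sub(x, y):
    return (x[0] - y[0], x[1] - y[1])

def mul(x, y):
    # (a1 + b1*sqrt6)(a2 + b2*sqrt6) = (a1*a2 + 6*b1*b2) + (a1*b2 + a2*b1)*sqrt6
    return (x[0]*y[0] + 6*x[1]*y[1], x[0]*y[1] + x[1]*y[0])

x, y = (2, -1), (3, 4)        # 2 - sqrt(6)  and  3 + 4*sqrt(6)
assert add(x, y) == (5, 3)
assert sub(x, y) == (-1, -5)
assert mul(x, y) == (-18, 5)  # the coefficients are again integers
```

Sums, differences, and products of pairs of integers are integers, so the set is closed under the three ring operations.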
3. Number fields

Consider the set of numbers comprising 0 and every number of the form p/q, where p and q belong to (1) and q is not zero; that is to say, the set of all rational real numbers.        (3)

Let r, s denote numbers selected from (3), whether r and s denote the same or different numbers. Then, when s is not zero, the numbers
    r+s,  r-s,  r*s,  r/s
all belong to the set (3). This property characterizes what is called a FIELD of numbers.

DEFINITION. A set of numbers, real or complex, is said to form a FIELD OF NUMBERS when, if r and s belong to the set and s is not zero, the numbers r+s, r-s, r*s, r/s also belong to the set.

Notice that the set (1) is not a field, for it does not contain the number 3/2, whereas it contains 3 and 2. The property of being a field is shared by the following sets, among many others:
    (4) the set of all real numbers (rational and irrational);
    (5) the set of all numbers of the form p + q*sqrt(5), where p and q belong to (3);
    (6) the set of all complex numbers.
Each of the sets (4), (5), and (6) constitutes a field.

Most of the propositions of this book presuppose that the work is carried out within a field of numbers; what particular field it is is usually of little consequence. In the early part of the book this aspect of the matter need not be emphasized; in some of the later chapters the essence of the theorem is that all the operations envisaged by the theorem can be carried out within the confines of any given field of numbers. In this preliminary note we wish to do no more than give a formal definition of a field of numbers and to familiarize the reader with the concept.
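The field test for the rationals, and its failure for the integers, can be illustrated with exact rational arithmetic (a sketch of my own, using Python's standard `fractions` module):

```python
from fractions import Fraction

# A small sample of rationals; Fraction arithmetic is exact.
sample = [Fraction(p, q) for p in range(-3, 4) for q in (1, 2, 3)]

# All four operations keep us inside the rationals (3).
for r in sample:
    for s in sample:
        results = [r + s, r - s, r * s]
        if s != 0:
            results.append(r / s)
        assert all(isinstance(v, Fraction) for v in results)

# The integers (1) fail the test: 3 and 2 are integers, but 3/2 is not.
assert Fraction(3, 2).denominator != 1
```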
4. Matrices

A set of mn numbers, real or complex, arranged in an array of m columns and n rows, is called a matrix. Thus

    a11  a12  ...  a1m
    a21  a22  ...  a2m
    ................
    an1  an2  ...  anm

is a matrix. When m = n we speak of a square matrix of order n.

Associated with any given square matrix of order n there are a number of algebraical entities. The matrix written above, with m = n, is associated

(i) with the determinant
    | a11  ...  a1n |
    | .............|
    | an1  ...  ann |

(ii) with the form
    sum(r=1..n) sum(s=1..n) ars xr xs
of degree 2 in the n variables x1, x2, ..., xn;

(iii) with the bilinear form
    sum(r=1..n) sum(s=1..n) ars xr ys
in the 2n variables x1, ..., xn and y1, ..., yn;

(iv) with the Hermitian form
    sum(r=1..n) sum(s=1..n) ars xr x̄s,
where xr and x̄r are conjugate complex numbers;

(v) with the linear transformations
    xr' = ar1 x1 + ar2 x2 + ... + arn xn        (r = 1, 2, ..., n).
The theories of the matrix and of its associated forms are closely knit together. The plan of expounding these theories that I have adopted is, roughly, this: Part I develops the properties of the determinant; Part II develops the algebra of matrices, referring back to Part I for any result about determinants that may be needed; Part III develops the theory of the other associated forms.
CHAPTER I
ELEMENTARY PROPERTIES OF DETERMINANTS

1. Introduction

1.1. Determinants are, in origin, closely connected with the solution of linear equations. Until the middle of the last century the use of determinant notation was practically unknown, but once introduced it gained such popularity that it is now employed in almost every branch of mathematics. The theory has been developed to such an extent that few mathematicians would pretend to a knowledge of the whole of it. On the other hand, the range of theory that is of constant application in other branches of mathematics is relatively small, and it is this restricted range that the book covers.

In the following chapters it is assumed that most readers will already be familiar with determinants of the second and third orders. On the other hand, no theorems about such determinants are assumed, so that the account given here is complete in itself.

1.2. Determinants of the second and third orders. Suppose that the two equations
    a1 x + b1 y = 0,    a2 x + b2 y = 0
are satisfied by a pair of numbers x and y, at least one of them being different from zero. Then
    b2(a1 x + b1 y) - b1(a2 x + b2 y) = 0,
that is, (a1 b2 - a2 b1) x = 0. Similarly, (a1 b2 - a2 b1) y = 0, and so a1 b2 - a2 b1 = 0.

The number a1 b2 - a2 b1 is a simple example of a determinant; it is usually written as
    | a1  b1 |
    | a2  b2 |        (1)
Since there are two rows and two columns, the determinant is said to be 'of order two', or 'of the second order'. The term a1 b2 is referred to as 'the leading diagonal'.
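This vanishing criterion is easy to check mechanically. In the sketch below (an illustration of mine, with arbitrary numbers), the second equation is proportional to the first, so a non-trivial solution exists and the determinant must vanish.

```python
def det2(a1, b1, a2, b2):
    # The second-order determinant a1*b2 - a2*b1.
    return a1 * b2 - a2 * b1

# (x, y) = (b1, -a1) always satisfies the first equation; it satisfies
# the second exactly when the determinant vanishes.
a1, b1 = 2, 3
a2, b2 = 4, 6          # second row proportional to the first
x, y = b1, -a1
assert a1 * x + b1 * y == 0
assert a2 * x + b2 * y == 0
assert det2(a1, b1, a2, b2) == 0
```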
The determinant (1) has one obvious property. If we interchange a1 and b1, and a2 and b2, simultaneously, we get
    | b1  a1 |
    | b2  a2 |
instead of (1). That is, the interchange of the two columns of (1) reproduces the same two terms, namely a1 b2 and a2 b1, but in a different order and with the opposite signs.

1.3. Again, let numbers x, y, z, not all zero, satisfy the three equations
    a1 x + b1 y + c1 z = 0,        (2)
    a2 x + b2 y + c2 z = 0,        (3)
    a3 x + b3 y + c3 z = 0.        (4)
On eliminating x between equations (3) and (4), and again between (2) and (3), and then eliminating y between the two results, we find that
    z{a1(b2 c3 - b3 c2) + b1(c2 a3 - c3 a2) + c1(a2 b3 - a3 b2)} = 0.
Denote the coefficient of z by D, so that our result is zD = 0. By similar working we can show that xD = 0 and yD = 0. Since x, y, and z are not all zero, D = 0, which may be written as
    a1 b2 c3 - a1 b3 c2 + a2 b3 c1 - a2 b1 c3 + a3 b1 c2 - a3 b2 c1 = 0.

The number D is usually written as
    | a1  b1  c1 |
    | a2  b2  c2 |
    | a3  b3  c3 |
in which form it is referred to as a 'determinant of order three', or a 'determinant of the third order'. The term a1 b2 c3 is referred to as 'the leading diagonal'.
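The six-term expansion of the third-order determinant can be verified numerically; the short sketch below (illustrative numbers of my own) compares the grouped form of D with the expansion written out in full.

```python
def det3(m):
    # D = a1(b2c3 - b3c2) + b1(c2a3 - c3a2) + c1(a2b3 - a3b2),
    # rows of m being (a_i, b_i, c_i).
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = m
    return (a1 * (b2 * c3 - b3 * c2)
            + b1 * (c2 * a3 - c3 * a2)
            + c1 * (a2 * b3 - a3 * b2))

m = [(1, 2, 3), (4, 5, 6), (7, 8, 10)]
(a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = m
# The same value, written out as the six signed products.
six_terms = (a1*b2*c3 - a1*b3*c2 + a2*b3*c1
             - a2*b1*c3 + a3*b1*c2 - a3*b2*c1)
assert det3(m) == six_terms == -3
```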
It is thus suggested that, associated with the n linear equations
    a1 x + b1 y + ... + k1 z = 0,
    .......................
    an x + bn y + ... + kn z = 0
in n variables x, y, ..., z, there is a certain function of the coefficients which must be zero if the equations are to be satisfied by a set of values x, y, ..., z which are not all zero. It is suggested that this function of the coefficients may be conveniently denoted by
    | a1  b1  ...  k1 |
    | a2  b2  ...  k2 |
    | ...............|
    | an  bn  ...  kn |
in which form it may be referred to as a determinant of order n; the terms a1, b2, ..., kn are called the leading diagonal.

1.4. Note on definitions. Just as we formed a determinant of order three (in 1.3) by using determinants of order two, so we could form a determinant of order four by using determinants of order three, and proceed step by step to a definition of a determinant of order n. But this is not the only possible procedure, and we shall arrive at our definition by another path. There are many different ways of defining a determinant of order n, though all the definitions lead to the same result in the end. The only particular merit we claim for our own definition is that it is easily reconcilable with any of the others, and so makes reference to other books a simple matter.

1.5. Properties of determinants of order three. We shall first observe certain properties of determinants of the third order, and then define a determinant of order n in such a way that these properties are preserved for determinants of every order.

As we have seen in 1.3, the determinant
    | a1  b1  c1 |
    | a2  b2  c2 |
    | a3  b3  c3 |
stands for the expression
    a1 b2 c3 - a1 b3 c2 + a2 b3 c1 - a2 b1 c3 + a3 b1 c2 - a3 b2 c1.        (1)
The following facts are all but self-evident:
    (I) the expression (1) is of the form S +/- ar bs ct, wherein the sum is taken over the six possible ways of assigning to r, s, t the values 1, 2, 3 in some order and without repetition;
    (II) the leading diagonal term a1 b2 c3 is prefixed by the sign +;
    (III) the interchange of any two letters throughout the expression (1) reproduces the same set of terms, but in a different order and with the opposite signs prefixed. For example, when a and b are interchanged* in (1), we get
    b1 a2 c3 - b1 a3 c2 + b2 a3 c1 - b2 a1 c3 + b3 a1 c2 - b3 a2 c1,
which consists of the terms of (1), but in a different order and with the opposite signs prefixed to them.

    * Throughout we use the phrase 'interchange a and b' to denote the simultaneous interchanges of a1 and b1, a2 and b2, a3 and b3; similarly, the interchange of p and q means the simultaneous interchanges of p1 and q1, p2 and q2, ..., pn and qn.

2. Determinants of order n

2.1. Having observed (1.5) three essential properties of a determinant of the third order, we now define a determinant of order n.

DEFINITION. The determinant
    | a1  b1  ...  k1 |
    | a2  b2  ...  k2 |        (1)
    | ...............|
    | an  bn  ...  kn |
is that function of the a's, b's, ..., k's which satisfies the three conditions:
    (I) it is an expression of the form
        S +/- ar bs ... kt,        (2)
    wherein the sum is taken over the n! possible ways of assigning to the suffixes r, s, ..., t the values 1, 2, ..., n in some order and without repetition;
    (II) the leading diagonal term a1 b2 ... kn is prefixed by the sign +;
    (III) the interchange of any two letters throughout the expression (2) reproduces the same set of terms, but in a different order of occurrence and with the opposite signs prefixed.

Before proceeding we must prove that this definition yields one function of the a's, b's, ..., k's, and one only. The proof that follows is divided into four main steps.

An interchange of two letters that stand next to each other in the order
    a, b, c, ..., k        (A)
is called an ADJACENT INTERCHANGE.

First step. If the condition (III) of the definition is satisfied for adjacent interchanges of letters, it is automatically satisfied for every interchange of letters.

For, take any two letters p and q, having, say, m letters between them in the order (A). By m+1 adjacent interchanges, in each of which p is moved one place to the right, we reach a stage at which p comes next after q; by m further adjacent interchanges, in each of which q is moved one place to the left, we reach a stage at which the order (A) is reproduced save that p and q have changed places. This stage has been reached by means of 2m+1 adjacent interchanges. Now if, in (2), we change all the signs 2m+1 times, we end with signs opposite to our initial signs. Accordingly, if (III) is satisfied for adjacent interchanges, it is satisfied for the interchange of any two letters p and q.

Second step. The conditions (I), (II), (III) fix the value of a determinant of order 2. For, by (I) and (II), the value must be a1 b2 +/- a2 b1; and since, by (III), the interchange of a and b must change the signs, we cannot have a1 b2 + a2 b1. The conditions therefore fix the value of the determinant of the second order to be a1 b2 - a2 b1.

Third step. Assume that the conditions (I), (II), (III) fix the value of a determinant of order n-1; we show that they are then sufficient to fix the value of a determinant of order n.

By (I), the expression (2) contains a set of terms in which a has the suffix 1; this set of terms is
    a1 S +/- bs ct ... ku,        (3)
wherein
    (i) by (I), the sum is taken over the (n-1)! possible ways of assigning to s, t, ..., u the values 2, 3, ..., n in some order and without repetition;
    (ii) by (II), the term b2 c3 ... kn is prefixed by +;
    (iii) by (III), an interchange of any two of the letters b, c, ..., k changes all the signs throughout (3).
Hence, on our assumption that the conditions (I), (II), (III) define uniquely a determinant of order n-1, the terms of (2) in which a has the suffix 1 are given by
    a1 | b2  c2  ...  k2 |
       | b3  c3  ...  k3 |        (3a)
       | ...............|
       | bn  cn  ...  kn |

Fourth step. The interchange of a and b must, by (III), change all the signs in (2). Hence the terms of (2) in which b has the suffix 1 are given by
    -b1 | a2  c2  ...  k2 |
        | a3  c3  ...  k3 |        (3b)
        | ...............|
        | an  cn  ...  kn |
for (3b) fixes the sign of a term b1 as ct ... ku to be the opposite of the sign of the corresponding term a1 bs ct ... ku of (3a). Again, the adjacent interchanges b with c, c with d, ..., j with k, applied in succession, fix in the same way the signs of all the terms of (2) that contain c1, d1, ..., k1. Thus the conditions (I), (II), (III) fix the signs of all the terms of (2).

Hence, if the conditions (I), (II), (III) define uniquely a determinant of order n-1, they define uniquely a determinant of order n. But they do define uniquely a determinant of order 2; hence, by induction, they define uniquely a determinant of any order. Moreover, on writing down the terms of (2) that contain a1, b1, ..., k1 in the forms just found, we obtain the expansion
    Dn = a1 | b2 ... k2 |  -  b1 | a2 ... k2 |  +  ...  +  (-1)^(n-1) k1 | a2 ... j2 |        (4)
            | .........|        | .........|                            | .........|
            | bn ... kn |        | an ... kn |                          | an ... jn |

2.2. Rule for determining the sign of a given term. If, in a term ar bs ... kt, there are L(n) suffixes less than n that come after n, we say that there are L(n) inversions with respect to n; if there are L(n-1) suffixes less than n-1 that come after n-1, we say that there are L(n-1) inversions with respect to n-1; and so on. The sum
    L(1) + L(2) + ... + L(n)
is called the total number of inversions of suffixes.

For example, in the term a2 b3 c4 d1 there is one inversion with respect to 2, one with respect to 3, and one with respect to 4: in all, three inversions. Again, with n = 6 and the term a1 b5 c4 d6 e3 f2,
    L(6) = 2, since the suffixes 3 and 2 come after 6;
    L(5) = 3, since the suffixes 4, 3, and 2 come after 5;
    L(4) = 2, L(3) = 1, L(2) = 0, L(1) = 0;
so that the total number of inversions is 2+3+2+1 = 8.

If a term ar bs ... kt has L(n) inversions with respect to n, then by L(n) adjacent interchanges of letters, followed by a restoration of alphabetical order, we can make n the suffix of the nth letter of the alphabet, leaving the order of the remaining suffixes unchanged.
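Counting inversions, and the sign they determine, is easily mechanized. The sketch below (mine, not the book's) counts out-of-order pairs of suffixes, which is equivalent to summing the inversions with respect to each suffix; it also checks that interchanging two suffixes always reverses the sign, as condition (III) requires.

```python
from itertools import permutations

def inversions(suffixes):
    # Number of pairs (i, j), i < j, standing in decreasing order.
    return sum(suffixes[i] > suffixes[j]
               for i in range(len(suffixes))
               for j in range(i + 1, len(suffixes)))

def sign(suffixes):
    return -1 if inversions(suffixes) % 2 else 1

assert inversions((1, 2, 3)) == 0 and sign((1, 2, 3)) == 1   # leading diagonal: +
assert inversions((2, 3, 4, 1)) == 3                          # a2 b3 c4 d1: three inversions
assert inversions((1, 5, 4, 6, 3, 2)) == 8                    # 2+3+2+1 inversions

# Interchanging any two suffixes reverses the sign.
for p in permutations(range(1, 5)):
    q = list(p)
    q[1], q[3] = q[3], q[1]
    assert sign(p) == -sign(q)
```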
Similarly, if there are L(n-1) inversions with respect to n-1 in the term so obtained, L(n-1) adjacent interchanges of letters followed by a restoration of alphabetical order will then make n-1 the suffix of the (n-1)th letter; and so on. It follows that
    L(1) + L(2) + ... + L(n)
adjacent interchanges of letters make the suffixes of the term ar bs ... kt coincide with those of the leading diagonal term a1 b2 ... kn. Hence, by (II) and (III), the sign to be prefixed to the term ar bs ... kt in (2) is (-1)^N, where N is the total number of inversions of suffixes.

For example, in the term a4 b3 c2 d1 e6 f5, one adjacent interchange of the letters e and f, followed by a restoration of alphabetical order, gives a4 b3 c2 d1 e5 f6, in which the suffix 6 comes at the end and the remaining suffixes stand in their original order.

3. Properties of a determinant

3.1. THEOREM 1. The determinant Dn of 2.1 can be expanded in either of the forms
    (i) S (-1)^N ar bs ... kt, where N is the total number of inversions in the suffixes r, s, ..., t;
    (ii) S (-1)^M p1 q2 ... vn, where p, q, ..., v are the letters a, b, ..., k in some order and M is the total number of inversions of the letters.

The form (i) has been proved in 2.2; the form (ii) is proved in the same way, working with the letters instead of the suffixes.

The number N may also be arrived at in another way. In the term ar bs ... kt, let there be M(m) suffixes greater than m that come before m, where 1 <= m <= n. Each of these M(m) greater suffixes accounts, in evaluating N, for one inversion with respect to itself; counting each out-of-order pair once, by its smaller member, we get
    N = M(1) + M(2) + ... + M(n).

THEOREM 2. A determinant is unaltered in value when its rows and columns are interchanged; that is,
    | a1  b1  ...  k1 |     | a1  a2  ...  an |
    | a2  b2  ...  k2 |  =  | b1  b2  ...  bn |
    | ...............|     | ...............|
    | an  bn  ...  kn |     | k1  k2  ...  kn |

Proof. By Theorem 1, the first determinant of the enunciation may be expanded as
    S (-1)^N ar bs ... kt.        (7)
Consider any one term of (7), and write the product with its suffixes in numerical order, say
    (-1)^N p1 q2 ... vn,        (8)
where p, q, ..., v are the letters a, b, ..., k in some order. A letter that comes before another in (8) but after it in the alphabet corresponds precisely to a suffix greater than another that comes before it in the original arrangement of the term; hence, by the preceding paragraph, the total number M of inversions of letters in (8) is equal to N. Thus the sum of the terms (8) is S (-1)^M p1 q2 ... vn, which, by Theorem 1 (ii), is the expansion of the second determinant of the enunciation. Hence the two determinants are equal, and Theorem 2 is proved.

THEOREM 3. The interchange of two columns, or of two rows, multiplies the value of the determinant by -1.

An interchange of two columns is an interchange of two letters, and so, by (III) of the definition, it multiplies the determinant by -1.
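The expansion of Theorem 1 (i) can be carried out directly by machine. The sketch below (an editorial illustration) sums over all n! assignments of suffixes, each term signed by its inversion count; columns play the part of the letters a, b, ..., k and the chosen suffix picks the row.

```python
from itertools import permutations

def det(matrix):
    # Expansion by the definition: sum over all n! suffix assignments.
    n = len(matrix)
    total = 0
    for suffixes in permutations(range(n)):
        inv = sum(suffixes[i] > suffixes[j]
                  for i in range(n) for j in range(i + 1, n))
        term = 1
        for col in range(n):
            term *= matrix[suffixes[col]][col]   # column = letter, row = suffix
        total += (-1) ** inv * term
    return total

assert det([[1, 2], [3, 4]]) == 1 * 4 - 3 * 2 == -2   # a1 b2 - a2 b1
assert det([[2, 0, 1], [1, 3, -1], [0, 5, 4]]) == 39
```

This is hopelessly slow for large n (n! terms), but it is the definition itself, and so serves as a reference against which faster methods can be checked.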
Hence also, by Theorem 2, an interchange of two rows multiplies the determinant by -1.

3.11. COROLLARY. If a column (row) is moved past an even number of columns (rows), the value of the determinant is unaltered; if a column (row) is moved past an odd number of columns (rows), the value of the determinant is thereby multiplied by -1.

For a column can be moved past an even (odd) number of columns by an even (odd) number of adjacent interchanges. For example, abcd can be changed into cabd by first interchanging b and c, giving acbd, and then interchanging a and c.

The expansion (4) of 2.1 is usually referred to as the expansion by the first row. By the corollary there is a similar expansion by any other row; in particular,
    | a1  b1  c1  d1 |         | a2  b2  c2  d2 |
    | a2  b2  c2  d2 |   =  -  | a1  b1  c1  d1 |
    | a3  b3  c3  d3 |         | a3  b3  c3  d3 |
    | a4  b4  c4  d4 |         | a4  b4  c4  d4 |
and, on expanding the second determinant by its first row, the original determinant is seen to be equal to
    - a2 | b1  c1  d1 |  +  b2 | a1  c1  d1 |  -  c2 | a1  b1  d1 |  +  d2 | a1  b1  c1 |
         | b3  c3  d3 |        | a3  c3  d3 |        | a3  b3  d3 |        | a3  b3  c3 |
         | b4  c4  d4 |        | a4  c4  d4 |        | a4  b4  d4 |        | a4  b4  c4 |
Similarly, we may show that the determinant may be expanded by any one of its columns.
Again, by Theorem 2, we may turn columns into rows and rows into columns without changing the value of the determinant. Hence there are corresponding expansions of the determinant by each of its columns. We shall return to this point in Chapter II.

3.2. THEOREM 4. If a determinant has two columns, or two rows, identical, its value is zero.

This is an immediate corollary of Theorem 3. The interchange of the two identical columns, or rows, leaves the value x of the determinant unaltered; but, by Theorem 3, the value after the interchange is -x. Hence x = -x, and so x = 0.

THEOREM 5. If each element of one column, or row, is multiplied by a factor K, the value of the determinant is thereby multiplied by K.

This is obvious from the definition, for
    S +/- (K ar) bs ... kt  =  K S +/- ar bs ... kt.

THEOREM 6. A determinant of order n in which every element is the sum of two letters is equal to the sum of the 2^n determinants corresponding to the 2^n different ways of choosing one letter from each column. In particular,
    | a1+x1  b1+y1 |
    | a2+x2  b2+y2 |
is the sum of the four determinants
    | a1  b1 |     | a1  y1 |     | x1  b1 |     | x1  y1 |
    | a2  b2 |  +  | a2  y2 |  +  | x2  b2 |  +  | x2  y2 |
This again is obvious from the definition, for S +/- (ar+xr)(bs+ys)... is the algebraic sum of the 2^n summations typified by taking one term from each bracket.
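Theorems 2 to 5 are easy to confirm numerically. The sketch below (illustrative numbers of my own) uses the definitional expansion as a reference determinant.

```python
from itertools import permutations
from math import prod

def det(m):
    # Definitional (n!-term) expansion, signed by inversion count.
    n = len(m)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(m[p[c]][c] for c in range(n))
               for p in permutations(range(n)))

m = [[2, 7, 1], [5, 3, 8], [4, 9, 6]]
t = [list(row) for row in zip(*m)]                 # rows and columns interchanged
assert det(t) == det(m)                            # Theorem 2
swapped = [m[1], m[0], m[2]]                       # two rows interchanged
assert det(swapped) == -det(m)                     # Theorem 3
dup = [m[0], m[0], m[2]]                           # two identical rows
assert det(dup) == 0                               # Theorem 4
scaled = [[3 * x for x in m[0]], m[1], m[2]]       # one row multiplied by 3
assert det(scaled) == 3 * det(m)                   # Theorem 5
```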
3.3. THEOREM 7. The value of a determinant is unaltered if to each element of one column (or row) is added a constant multiple of the corresponding element of another column (or row).

Proof. Let x and y be the letters of two distinct columns of a determinant D, and let D1 be the determinant formed from D by replacing each element xr of the x column by xr + Ayr, where A is independent of r. Then, by Theorem 6,
    | x1+Ay1  y1  ... |     | x1  y1  ... |     | Ay1  y1  ... |
    | x2+Ay2  y2  ... |  =  | x2  y2  ... |  +  | Ay2  y2  ... |
    | ...............|     | ...........|     | ............|
    | xn+Ayn  yn  ... |     | xn  yn  ... |     | Ayn  yn  ... |
But the second determinant on the right is zero, since, by Theorem 5, it is A times a determinant which has two columns identical (Theorem 4). Hence D1 = D.

3.4. COROLLARIES OF THEOREM 7. By repeated applications of Theorem 7 it can be proved that we may add to each column (or row) of a determinant fixed multiples of the SUBSEQUENT columns (or rows) and leave the value of the determinant unaltered. For example,
    | a1  b1  c1 |     | a1+Ab1+Bc1  b1+Cc1  c1 |
    | a2  b2  c2 |  =  | a2+Ab2+Bc2  b2+Cc2  c2 |
    | a3  b3  c3 |     | a3+Ab3+Bc3  b3+Cc3  c3 |
There is a similar corollary with PRECEDING instead of SUBSEQUENT. Another extension of Theorem 7 is this: we may add multiples of any ONE particular column (or row) to every other column (or row) and leave the value of the determinant unaltered.
There are many other extensions, but experience, rather than set rule, is the better guide in using them.

NOTE. The practice of adding multiples of columns or rows at random is liable to lead to error unless each step is checked for validity by appeal to Theorem 7 and its corollaries. One of the more curious errors into which one is led by adding multiples of columns at random is a fallacious 'proof' that D = 0. The choice 'subtract the second column from the first, add the second and third, add the first and third' is a choice that will (wrongly, of course) 'prove' that D = 0: the manipulation of the columns has not left the value unaltered, but has multiplied the determinant by a zero factor.

3.5. Applications of Theorems 1-7. As a convenient notation for the application of Theorem 7 and its corollaries we shall use
    rk' = l r1 + m r2 + ... + t rn
to denote that, starting from a determinant Dn, we form a new determinant D' whose kth row is obtained by taking l times the first row, plus m times the second row, plus ..., plus t times the nth row of Dn. The notation ck' = l c1 + m c2 + ... + t cn refers to a similar process applied to columns.
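Both Theorem 7 and the fallacy described in the note can be exhibited numerically (the matrix below is my own illustration).

```python
from itertools import permutations
from math import prod

def det(m):
    # Definitional (n!-term) expansion, signed by inversion count.
    n = len(m)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(m[p[c]][c] for c in range(n))
               for p in permutations(range(n)))

m = [[4, 1, 7], [2, 6, 3], [8, 5, 9]]

# Theorem 7: add 5 times column 2 to column 1; the value is unaltered.
m2 = [[row[0] + 5 * row[1], row[1], row[2]] for row in m]
assert det(m2) == det(m)

# The fallacy: c1' = c1 - c2, c2' = c2 + c3, c3' = c3 + c1.  The three new
# columns are linearly dependent combinations of the old ones (the matrix
# of coefficients is singular), so the result is 0 whatever m was.
bad = [[r[0] - r[1], r[1] + r[2], r[2] + r[0]] for r in m]
assert det(bad) == 0 and det(m) != 0
```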
EXAMPLES I

1. Find the value of

        | 87  42  18  17 |
    D = |  3   7   1   4 |
        | 45  50  91   3 |
        |  6   5   9     |

The working that follows is an illustration of how one may deal with an isolated numerical determinant: reduce the elements of one row by applications of Theorem 7, expand by that row, and evaluate the third-order determinants that result. On expanding the third-order determinants we obtain

    D = 42{2(30) - 13(24) + 67(95+48)} + {2(102+108) - 13(108-171) + 67(-216-323)}.

For a systematic method of computing determinants (one which reduces the order of the determinant step by step, from 6 to 5, from 5 to 4, from 4 to 3, from 3 to 2) the reader is referred to Whittaker and Robinson, The Calculus of Observations (London, 1926), chapter v.

The examples that follow are intended as exercises on sections 1-3.
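The same reduce-and-expand idea can be carried out mechanically by successive applications of Theorems 3 and 7, using exact rational arithmetic; the matrix below is an illustrative one of my own, and the result is checked against the definitional expansion.

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def det_leibniz(m):
    # Reference value: the definitional n!-term expansion.
    n = len(m)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(m[p[c]][c] for c in range(n))
               for p in permutations(range(n)))

def det_elimination(m):
    # Reduce below each pivot by Theorem 7; the value is the signed
    # product of the pivots of the resulting triangular determinant.
    a = [[Fraction(x) for x in row] for row in m]
    n, value = len(a), Fraction(1)
    for k in range(n):
        if a[k][k] == 0:                      # bring in a usable pivot row
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    value = -value            # Theorem 3: a swap flips the sign
                    break
            else:
                return Fraction(0)
        for r in range(k + 1, n):
            f = a[r][k] / a[k][k]
            a[r] = [x - f * y for x, y in zip(a[r], a[k])]   # Theorem 7
        value *= a[k][k]
    return value

m = [[2, 4, 1, 3], [0, 5, 2, 1], [3, 1, 4, 2], [1, 0, 2, 5]]
assert det_elimination(m) == det_leibniz(m)
```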
2.* Prove that

    | 1   a   bc |
    | 1   b   ca |   =   (b-c)(c-a)(a-b).
    | 1   c   ab |

Take r1' = r1 - r2 and r2' = r2 - r3 (Theorem 7), and remove the factor a-b from r1' and the factor b-c from r2' (Theorem 5); the resulting determinant, expanded by its first column, gives the result at once.

    * This determinant, and others that occur in this set of examples, can be evaluated quickly by using the Remainder Theorem.

3. Prove that

    |    0      (a-b)^2   (a-c)^2   (a-d)^2 |
    | (b-a)^2      0      (b-c)^2   (b-d)^2 |   =   0.
    | (c-a)^2   (c-b)^2      0      (c-d)^2 |
    | (d-a)^2   (d-b)^2   (d-c)^2      0    |

Take r1' = r1 - r2, r2' = r2 - r3, r3' = r3 - r4 and remove the factors b-a, c-b, d-c (Theorems 5 and 7); the rows of the reduced determinant contain entries such as a+b-2c and a+b-2d. In this determinant two further subtractions of consecutive rows produce a complete row of zeros, and the determinant vanishes.
4. Prove that

    |    0      (a-b)^3   (a-c)^3   (a-d)^3 |
    | (b-a)^3      0      (b-c)^3   (b-d)^3 |   =   0.
    | (c-a)^3   (c-b)^3      0      (c-d)^3 |
    | (d-a)^3   (d-b)^3   (d-c)^3      0    |

When we have studied the multiplication of determinants we shall see (Examples IV, 2) that determinants of this kind are 'obviously' zero; at present we treat them as exercises on Theorem 7.

5. Prove that

    |  2a    a+b   a+c |
    |  b+a    2b   b+c |   =   0.
    |  c+a   c+b    2c |

HINT. Use Theorem 6: each column is the sum of a column of a's, b's, c's and a constant column.

6. Prove that

    |    0        a^2-b^2   a^2-c^2 |
    | b^2-a^2       0       b^2-c^2 |   =   0.
    | c^2-a^2    c^2-b^2       0    |

HINT. Interchange rows and columns (Theorem 2) and remove the factor -1 from each row (Theorem 5); this gives D = -D.

7. Prove that

    |  2a    a+b   a+c   a+d |
    |  b+a    2b   b+c   b+d |   =   0.
    |  c+a   c+b    2c   c+d |
    |  d+a   d+b   d+c    2d |

8. Prove that

    | 1     1     1    |
    | a     b     c    |   =   (b-c)(c-a)(a-b).
    | a^2   b^2   c^2  |

9. Prove that

    | 1     1     1     1   |
    | a     b     c     d   |
    | a^2   b^2   c^2   d^2 |   =   -(a-b)(a-c)(a-d)(b-c)(b-d)(c-d).
    | bcd   cda   dab   abc |

10. Prove that

    |   1         1         1     |
    |  b+c       c+a       a+b    |   =   (b-c)(c-a)(a-b).
    | b^2+c^2   c^2+a^2   a^2+b^2 |

HINT. Take c1' = c1 - c2 and c2' = c2 - c3 (Theorem 7).
11. Prove that the expansion of

    | 0  c  b  d |
    | c  0  a  e |
    | b  a  0  f |
    | d  e  f  0 |

is a^2 d^2 + b^2 e^2 + c^2 f^2 - 2abde - 2acdf - 2bcef.

12. Prove that the expansion of

    | 1     1     1   |
    | a     b     c   |
    | a^3   b^3   c^3 |

is (b-c)(c-a)(a-b)(a+b+c).

13. From a determinant D of order 4 a new determinant D' is formed by taking
    c1' = c1 + Ac2,   c2' = c2 + Bc3,   c3' = c3 + Cc1,   c4' = c4.
Prove that D' = (1+ABC) D.

14. (Harder.) Express the determinant

    | x  b  c  d |
    | d  x  b  c |
    | c  d  x  b |
    | b  c  d  x |

as a product of factors. There are two linear factors, x+c+(b+d) and x+c-(b+d), and one quadratic factor.

4. Notation for determinants

The determinant D of 2.1 is sufficiently indicated by its leading diagonal, and it is often written as
    (a1 b2 c3 ... kn).
The use of the double-suffix notation, which we used in section 4 of the Preliminary Note, enables one to abbreviate still further: the determinant that has ars as the element in the rth row and the sth column may be written as |ars|.
22                ELEMENTARY PROPERTIES OF DETERMINANTS

Standard types of determinant
5. The product of differences.
THEOREM 8. The determinant

        Δ_4 ≡ | α³  α²  α  1 |
              | β³  β²  β  1 |
              | γ³  γ²  γ  1 |
              | δ³  δ²  δ  1 |

is equal to (α-β)(α-γ)(α-δ)(β-γ)(β-δ)(γ-δ), the product of the differences that can be formed from the letters α, β, γ, δ, due regard being paid to alphabetical order in the factors.

On expanding Δ_4 we see that it may be regarded (a) as a homogeneous polynomial of degree 6 in the variables α, β, γ, δ, the coefficients being ±1; (b) as a non-homogeneous polynomial of highest degree 3 in α, the coefficients of the powers of α being functions of β, γ, δ.

The determinant vanishes when α = β, since it then has two rows identical. Hence, by the Remainder Theorem applied to a polynomial in α, the determinant has a factor α-β. By a similar argument, the difference of any two of α, β, γ, δ is a factor of Δ_4, so that

        Δ_4 = K (α-β)(α-γ)(α-δ)(β-γ)(β-δ)(γ-δ).

Since Δ_4 is a homogeneous polynomial of degree 6 in α, β, γ, δ, the factor K must be independent of α, β, γ, δ; that is, K is a numerical constant. The coefficient of α³β²γ on the right-hand side is K, and in Δ_4 it is 1. Hence K = 1. The last product is conveniently written as ∏(α, β, γ, δ).

By a similar argument, the corresponding determinant of order n, whose r-th row consists of the (n-1)-th, (n-2)-th, ..., first, and zero-th powers of the r-th letter, is equal to the corresponding product of differences of the n letters. Such determinants are called ALTERNANTS.
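Theorem 8 lends itself to a numerical spot-check. The sketch below is plain Python (standard library only); `det` is a brute-force implementation of the signed-permutation expansion used in this chapter, and the sample values are illustrative assumptions, not part of the original text.

```python
from itertools import permutations
from math import prod

def parity(p):
    # sign of a permutation of 0..n-1: +1 if even, -1 if odd
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(m):
    # determinant by the signed-permutation expansion
    n = len(m)
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

a, b, c, d = 2, 3, 5, 7
# rows are the cubes, squares, first powers, and units, as in Theorem 8
alt = [[x**3, x**2, x, 1] for x in (a, b, c, d)]
product = (a-b)*(a-c)*(a-d)*(b-c)*(b-d)*(c-d)
assert det(alt) == product == 240
```

Since both sides are polynomials of degree 6, agreement at a point is only a sanity check, not a proof; the proof is the factor argument above.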
                ELEMENTARY PROPERTIES OF DETERMINANTS                23

6. The circulant.
THEOREM 9. The determinant

        Δ ≡ | a_1   a_2   a_3  ...  a_n   |
            | a_n   a_1   a_2  ...  a_n-1 |
            | ........................... |
            | a_2   a_3   a_4  ...  a_1   |

is equal to ∏(a_1 + a_2 ω + a_3 ω² + ... + a_n ω^{n-1}), where the product is taken over the n n-th roots of unity. Such a determinant is called a CIRCULANT.

Let ω be any one of the n numbers

        ω_k = cos(2kπ/n) + i sin(2kπ/n)        (k = 1, 2, ..., n),

so that ω^n = 1. In Δ replace the first column c_1 by a new column

        c'_1 = c_1 + ω c_2 + ω² c_3 + ... + ω^{n-1} c_n.

This leaves the value of Δ unaltered. On using the fact that ω^n = 1, every element of the new first column is a multiple of a_1 + a_2 ω + ... + a_n ω^{n-1}: the first element is this sum itself, the second is ω times it, and so on. Hence a_1 + a_2 ω + ... + a_n ω^{n-1} is a factor of Δ (Theorem 5).
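Theorem 9 can likewise be checked numerically, comparing an exact integer determinant with the product over the n-th roots of unity. The sketch is plain Python; `det` is again a brute-force permutation expansion and the first row is an invented example.

```python
import cmath
from itertools import permutations
from math import prod

def parity(p):
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(m):
    n = len(m)
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

a = [1, 2, 3, 4]                 # first row a1, ..., an
n = len(a)
# each row of the circulant is the row above it moved one place to the right
C = [[a[(s - r) % n] for s in range(n)] for r in range(n)]

rhs = 1 + 0j
for k in range(n):
    w = cmath.exp(2j * cmath.pi * k / n)     # an n-th root of unity
    rhs *= sum(a[m] * w**m for m in range(n))

assert det(C) == -160
assert abs(rhs.real - det(C)) < 1e-9 and abs(rhs.imag) < 1e-9
```

The complex product is compared within a tolerance only because of floating-point rounding; the determinant itself is computed exactly in integers.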
24                ELEMENTARY PROPERTIES OF DETERMINANTS

This is true for each of the n numbers ω_1, ω_2, ..., ω_n, and so each of the n expressions a_1 + a_2 ω_k + ... + a_n ω_k^{n-1} is a factor of Δ (Theorem 5). The product of these n factors is of degree n in a_1, a_2, ..., a_n, and the determinant is also of degree n. Hence

        Δ = K ∏ (a_1 + a_2 ω_k + ... + a_n ω_k^{n-1}),        (11)

where K, since Δ and the product are both homogeneous of degree n in the variables a_1, a_2, ..., a_n, must be independent of these variables and so is a numerical constant. Comparing the coefficients of a_1^n on the two sides of (11), we see that K = 1.

Odd and even permutations
6.1. The n numbers r, s, ..., w are said to be a PERMUTATION of 1, 2, ..., n if they consist of 1, 2, ..., n in some order. Any permutation may be changed to 1, 2, ..., n by suitable interchanges of pairs. For example, 432615 becomes 123456 after the interchanges denoted by

    (4 3 2 6 1 5) → (1 3 2 6 4 5) → (1 2 3 6 4 5) → (1 2 3 4 6 5) → (1 2 3 4 5 6),

whereby we first put 1 in the first place, then 2 in the second place, and so on. But we may arrive at the same final result by first putting 6 in the sixth place, then 5 in the fifth, and so on; or we can proceed by adjacent interchanges, beginning by

    (4 3 2 6 1 5) → (4 3 2 1 6 5),

as a first step towards moving 1 into the first place. There is a wide variety of sets of interchanges that will ultimately change 432615 into 123456: the set of interchanges leading from the one order to the other is not unique.

Now the term a_r b_s ... k_w in the expansion of Δ_n = (a_1 b_2 ... k_n) is prefixed by the sign (-1)^N, where N is the number of interchanges in some ONE way of going from r, s, ..., w to 1, 2, ..., n. But, by condition III in the definition of a determinant (vide 2.1), the sign to be prefixed to any given term is uniquely determined, and therefore the sign (-1)^N must be the same whatever set of interchanges is used in going from r, s, ..., w to 1, 2, ..., n. In fact,
if one way of going from r, s, ..., w to 1, 2, ..., n involves an odd (even) number of interchanges, so does every way of effecting the same result. Accordingly, every permutation may be characterized either as EVEN, when the change from it to the standard order 1, 2, ..., n can be effected by an even number of interchanges, or as ODD, when the change from it to the standard order 1, 2, ..., n can be effected by an odd number of interchanges. This can be proved, in fact, without reference to determinants.

It is accordingly legitimate to define Δ_n = (a_1 b_2 ... k_n) as the sum of the terms ±a_r b_s ... k_w, where the plus or minus sign is prefixed to a term according as its suffixes form an even or an odd permutation of 1, 2, ..., n. This is, in fact, one of the common ways of defining a determinant.

7. Differentiation
When the elements of a determinant are functions of a variable x, the rule for obtaining its differential coefficient is as follows. If Δ denotes the determinant, then dΔ/dx is the sum of the n determinants obtained by differentiating the elements of one row (or column) of Δ and leaving the elements of the other rows (or columns) unaltered. For example,

        d/dx | x   x² |   =   | 1   2x |  +  | x   x² |
             | 1   2x |       | 1   2x |     | 0   2  |.
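The assertion of this section — that every set of interchanges restoring the standard order has the same parity — can be exercised mechanically. The sketch below (plain Python, standard library only) compares the inversion count with the number of interchanges used by one particular sorting strategy, over all permutations of 1, ..., 5, and checks the worked example 432615.

```python
from itertools import permutations

def inversions(seq):
    # total number of inversions, as in the definition of the sign (-1)^N
    return sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
               if seq[i] > seq[j])

def swaps_to_sort(seq):
    # one particular set of interchanges restoring the order 1, 2, ..., n
    s, count = list(seq), 0
    for i in range(len(s)):
        while s[i] != i + 1:
            j = s[i] - 1
            s[i], s[j] = s[j], s[i]
            count += 1
    return count

# different strategies may use different numbers of interchanges,
# but never numbers of different parity
for p in permutations(range(1, 6)):
    assert inversions(p) % 2 == swaps_to_sort(p) % 2

assert inversions((4, 3, 2, 6, 1, 5)) == 8   # 432615 is an even permutation
```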
26                ELEMENTARY PROPERTIES OF DETERMINANTS

The proof of the rule follows at once from Theorem 1, since the usual rules for differentiating a product give

        d/dx (a_r b_s ... k_w) = (da_r/dx) b_s ... k_w + a_r (db_s/dx) ... k_w + ...,

so that dΔ/dx is the sum of n determinants, in each of which one row consists of the differential coefficients of a row of Δ and the remaining n-1 rows consist of the corresponding rows of Δ.

                        EXAMPLES

1. Prove that

        | 1    α    α² |
        | 1    β    β² |   =   (β-γ)(γ-α)(α-β).
        | 1    γ    γ² |

2. Prove that

        | β+γ   α    α² |
        | γ+α   β    β² |   =   (α+β+γ)(β-γ)(γ-α)(α-β).
        | α+β   γ    γ² |

HINT. Take c'_1 = c_1 + c_2.

3. Prove that each of the determinants of the fourth order whose rows are (i) 1, α, α², α³, (ii) α³, α², α, 1 (with α, β, γ, δ in turn) is equal to the product of the differences of α, β, γ, δ, and write down other determinants that equal this product.

4. Prove that the 'skew circulant'

        |  a_1   a_2   a_3 |
        | -a_3   a_1   a_2 |
        | -a_2  -a_3   a_1 |

is equal to ∏(a_1 + a_2 ω + a_3 ω²), where ω runs through the three roots of ω³ = -1.

5. If s_r = α^r + β^r + γ^r, prove that

        | s_0  s_1  s_2 |
        | s_1  s_2  s_3 |
        | s_2  s_3  s_4 |

is equal to the product of the squares of the differences of α, β, γ. HINT. Take c'_j = Σ c_j with suitable multipliers, or (after Chapter III) regard the determinant as a product.
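The differentiation rule of §7 is easily checked on a two-rowed example. The sketch below (plain Python; the entries and the sample point are illustrative assumptions) forms the sum of determinants with one row differentiated and compares it with the derivative computed by hand.

```python
# entries of a two-rowed determinant, and their derivatives, supplied by hand
f = [[lambda x: x,    lambda x: x**2],
     [lambda x: 1.0,  lambda x: 2*x]]
fp = [[lambda x: 1.0, lambda x: 2*x],
      [lambda x: 0.0, lambda x: 2.0]]

def det2(rows, x):
    (a, b), (c, d) = rows
    return a(x) * d(x) - b(x) * c(x)

def rule(x):
    # sum of the determinants obtained by differentiating one row at a time
    return sum(det2([fp[i] if i == r else f[i] for i in range(2)], x)
               for r in range(2))

# here the determinant is x*2x - x**2 = x**2, whose derivative is 2x
assert rule(3.0) == 6.0
```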
                ELEMENTARY PROPERTIES OF DETERMINANTS                27

7. Prove that the circulant of order 2n whose first row is a_1, a_2, ..., a_2n is equal to the product of a circulant of order n (with first row a_1+a_{n+1}, a_2+a_{n+2}, ...) and a skew circulant of order n (with first row a_1-a_{n+1}, a_2-a_{n+2}, ...).

9. By putting x = 1/y in Example 8 and expanding in powers of y, show that the sum of homogeneous products, of degree p, of a_1, a_2, ..., a_n can be expressed as a determinant.

11. Evaluate the coefficient of δ in the alternant (α, β, γ, δ). HINT. The first three factors are obtained as in Theorem 8. The degree of the determinant in α, β, γ is four, so the remaining factor must be linear; and it must be unaltered by the interchange of any two letters (for both the determinant and the product of the first three factors are altered in sign by such an interchange).

12. Extend the results of Examples 10 and 11 to determinants of higher order.
                        CHAPTER II
                THE MINORS OF A DETERMINANT

1. First minors
1.1. In the determinant of order n, the determinant of order n-1 obtained by deleting the row and the column containing a_r is called the MINOR of a_r; and so for other letters. Such minors are called FIRST MINORS. We shall denote the first minors of a_r, b_r, ... by α_r, β_r, .... We can expand Δ by any row or column (Chap. I): for example,

        Δ_n = a_1 α_1 - a_2 α_2 + ... + (-1)^{n-1} a_n α_n        (1)

on expanding by the first column, and

        Δ_n = -a_2 α_2 + b_2 β_2 - ...        (2)

on expanding by the second row. In the determinant of order n, the determinant of order n-2 obtained by deleting the two rows with suffixes r, s and the two columns containing the letters a, b, for example, is called a SECOND MINOR; and so on for third, fourth, ... minors.

1.2. The above notation requires a careful consideration of sign in its use, and it is more convenient to introduce COFACTORS. They are defined as the numbers

        A_r, B_r, ..., K_r        (r = 1, 2, ..., n)

such that

        Δ_n = a_1 A_1 + a_2 A_2 + ... + a_n A_n,
              ..................................
        Δ_n = k_1 K_1 + k_2 K_2 + ... + k_n K_n,

these being the expansions of Δ_n by its various columns, and

        Δ_n = a_1 A_1 + b_1 B_1 + ... + k_1 K_1,
              ..................................
        Δ_n = a_n A_n + b_n B_n + ... + k_n K_n,

these being the expansions of Δ_n by its various rows.

1.3. The cofactors A_r, B_r, ..., K_r are obtained by prefixing a suitable sign to the minors α_r, β_r, ..., and it is a simple matter to determine the sign for any particular minor. For example, to determine whether C_2 is +γ_2 or -γ_2, we observe that Δ_n is unaltered (Theorem 3, Corollary) if we move the column of c's, step by step, until it becomes the first column, and the second row, step by step, up until it becomes the first row; on expanding the determinant so formed by its first row, the sign is settled. If we use the double suffix notation (Chap. I), the cofactor A_rs of a_rs in |a_rs| is (-1)^{r+s} times the minor of a_rs; that is, (-1)^{r+s} times the determinant obtained by deleting the r-th row and s-th column.

2. We have seen in 1 that

        Δ_n = a_r A_r + b_r B_r + ... + k_r K_r.        (3)

If we replace the r-th row of Δ_n by its s-th row (s ≠ r), we get a determinant having two rows the same, that is, a determinant equal to zero. But A_r, B_r, ..., K_r are unaffected by such a change in the r-th row of Δ_n. Hence, for the new determinant, (3) takes the form

        0 = a_s A_r + b_s B_r + ... + k_s K_r        (s ≠ r).        (4)

The corresponding result for columns may be proved in the same way, and the results summed up thus:

The sum of the elements of a row (or column) multiplied by their own cofactors is Δ; the sum of the elements of a row multiplied by the corresponding cofactors of another row is zero; the sum of the elements of a column multiplied by the corresponding cofactors of another column is zero.

We also state these important facts as a theorem.

THEOREM 10. The determinant Δ = (a_1 b_2 ... k_n) may be expanded by any row or by any column: such expansions take one of the forms

        Δ = a_r A_r + b_r B_r + ... + k_r K_r,        (5)
        Δ = x_1 X_1 + x_2 X_2 + ... + x_n X_n,        (6)

where r is any one of the numbers 1, 2, ..., n and x is any one of the letters a, b, ..., k. Moreover,

        0 = a_s A_r + b_s B_r + ... + k_s K_r,        (7)
        0 = y_1 X_1 + y_2 X_2 + ... + y_n X_n,        (8)

where r, s are two different numbers taken from 1, 2, ..., n and x, y are two different letters taken from a, b, ..., k.
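Theorem 10's two assertions — expansion by any row gives Δ, while 'alien' cofactors give zero — can be verified numerically. In the sketch below (plain Python, standard library only) `det`, `minor`, and `cofactor` are illustrative helpers, with cofactors formed from minors by the sign (-1)^(r+s) of 1.3; the matrix is an invented example.

```python
from itertools import permutations
from math import prod

def parity(p):
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(m):
    n = len(m)
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def minor(m, r, s):
    return [[m[i][j] for j in range(len(m)) if j != s]
            for i in range(len(m)) if i != r]

def cofactor(m, r, s):
    # sign (-1)^(r+s) times the first minor, as in 1.3
    return (-1)**(r + s) * det(minor(m, r, s))

A = [[2, 0, 1], [3, 5, 2], [1, 4, 4]]
d = det(A)
for r in range(3):
    # expansion by the r-th row gives the determinant itself ...
    assert sum(A[r][s] * cofactor(A, r, s) for s in range(3)) == d
    # ... while the cofactors of a different row give zero
    for q in range(3):
        if q != r:
            assert sum(A[r][s] * cofactor(A, q, s) for s in range(3)) == 0
```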
3. Preface to §§4-6
We come now to a group of problems that depend for their full discussion on the implications of 'rank' in a matrix. This full discussion is deferred to Chapter VIII. But even the more elementary aspects of these problems are of considerable importance, and these are set out in §§4-6.

4. The solution of non-homogeneous linear equations
4.1. The equations

        a_1 x + b_1 y + c_1 z = 0,
        a_2 x + b_2 y + c_2 z = 0

are said to be homogeneous linear equations in x, y, z; the equations

        a_1 x + b_1 y = c_1,
        a_2 x + b_2 y = c_2

are said to be non-homogeneous linear equations in x, y.
                THE MINORS OF A DETERMINANT                31

4.2. If it is possible to choose a set of values for x, y, ..., t so that the m equations

        a_r x + b_r y + ... + k_r t = l_r        (r = 1, 2, ..., m)

are all satisfied, these equations are said to be CONSISTENT. If it is not possible so to choose the values of x, y, ..., t, the equations are said to be INCONSISTENT.

For example, x+y = 2, x-y = 0, 3x-2y = 1 are consistent, since all three equations are satisfied when x = 1, y = 1; on the other hand, x+y = 2, x-y = 0, 3x-2y = 6 are inconsistent.

4.3. Consider the n non-homogeneous linear equations, in the n variables x, y, ..., t,

        a_r x + b_r y + ... + k_r t = l_r        (r = 1, 2, ..., n).        (9)
Let Δ ≡ (a_1 b_2 ... k_n) and let A_r, B_r, ... be the cofactors of a_r, b_r, ... in Δ. Since, by Theorem 10,

        Σ a_r A_r = Δ,   Σ b_r A_r = 0,   ...,

the result of multiplying each equation (9) by its corresponding A_r and adding is

        Δx = l_1 A_1 + l_2 A_2 + ... + l_n A_n;

that is,

        Δx = (l_1 b_2 c_3 ... k_n),        (10)

the determinant obtained by writing l for a in Δ. Similarly, the result of multiplying each equation (9) by its corresponding B_r and adding is

        Δy = l_1 B_1 + ... + l_n B_n = (a_1 l_2 c_3 ... k_n),        (11)

and so on.

When Δ ≠ 0 the equations (9) have a unique solution given by (10), (11), and their analogues. In words, the solution is 'Δx is equal to the determinant obtained by putting l for a in Δ; Δy is equal to the determinant obtained by putting l for b in Δ; and so on'.†

When Δ = 0 the non-homogeneous equations (9) are inconsistent unless each of the determinants on the right-hand sides of (10), (11), and their analogues is also zero. When all such determinants are zero the equations (9) may or may not be consistent: we defer consideration of the problem to a later chapter.

† Some readers may already be familiar with different forms of setting out this result; the differences of form are unimportant.
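The solution rule of 4.3 — Δx equals the determinant obtained by putting l for a in Δ, and so on — is sketched below in plain Python (standard library only; the coefficient values are an invented example, and `det` is the brute-force permutation expansion).

```python
from itertools import permutations
from math import prod

def parity(p):
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(m):
    n = len(m)
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[2, 1, -1], [1, 3, 2], [3, -2, 1]]      # coefficients a, b, c by rows
l = [1, 13, 2]                               # right-hand sides

def replace_col(m, col, v):
    # the determinant obtained by putting l for one letter of the determinant
    return [[v[i] if j == col else m[i][j] for j in range(len(m))]
            for i in range(len(m))]

D = det(A)
assert D == 30                               # D != 0, so the solution is unique
sol = [det(replace_col(A, j, l)) / D for j in range(3)]
assert sol == [1.0, 2.0, 3.0]
```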
32                THE MINORS OF A DETERMINANT

5. The solution of homogeneous linear equations
THEOREM 11. A necessary and sufficient condition that values, not all zero, may be assigned to the n variables x, y, ..., t so that the n homogeneous equations

        a_r x + b_r y + ... + k_r t = 0        (r = 1, 2, ..., n)        (12)

hold simultaneously is (a_1 b_2 ... k_n) = 0.

5.1. NECESSARY. Let equations (12) be satisfied by values of x, y, ..., t not all zero. Let Δ ≡ (a_1 b_2 ... k_n) and let A_r, B_r, ... be the cofactors of a_r, b_r, ... in Δ. Multiply each equation (12) by its corresponding A_r and add; the result is, as in 4.3, Δx = 0. Similarly Δy = 0, Δz = 0, ..., Δt = 0. But, by hypothesis, at least one of x, y, ..., t is not zero and therefore Δ must be zero.

5.2. SUFFICIENT. Let Δ = 0.

5.21. In the first place suppose, further, that A_1 ≠ 0. Omit r = 1 from (12) and consider the n-1 equations

        b_r y + c_r z + ... + k_r t = -a_r x        (r = 2, 3, ..., n),        (13)

where the determinant of the coefficients on the left is A_1. Then, proceeding exactly as in 4.3, but with the determinant A_1 in place of Δ, we obtain

        A_1 y = B_1 x,   A_1 z = C_1 x,   ...,   A_1 t = K_1 x.

Hence the set of values

        x = A_1 ξ,   y = B_1 ξ,   z = C_1 ξ,   ...,   t = K_1 ξ,

where ξ ≠ 0, is a set, not all zero (since A_1 ≠ 0), satisfying the equations (13). But

        a_1 A_1 + b_1 B_1 + ... + k_1 K_1 = Δ = 0,

and hence this set of values also satisfies the omitted equation corresponding to r = 1. This proves that Δ = 0 is a sufficient condition for our result to hold provided also that A_1 ≠ 0.

If A_1 = 0 and some other first minor, say C_s, is not zero, an interchange of the letters a and c, of the letters x and z, and of the suffixes 1 and s will give the equations (12) in a slightly changed notation and, in this new notation, A_1 (the C_s of the old notation) is not zero. It follows that if Δ = 0 and any one first minor of Δ is not zero, values, not all zero, may be assigned to x, y, ..., t so that equations (12) are all satisfied.

5.22. Suppose now that Δ = 0 and that every first minor of Δ is zero; or, proceeding to the general case, suppose that all minors with more than R rows and columns vanish, but that at least one minor with R rows and columns does not vanish. Change the notation (by interchanging letters and interchanging suffixes) so that one non-vanishing minor of R rows is the first minor of a_1 in the determinant (a_1 b_2 ... e_{R+1}), where e denotes the (R+1)-th letter of the alphabet. Consider, instead of (12), the R+1 equations

        a_r x + b_r y + ... + e_r λ = 0        (r = 1, 2, ..., R+1),        (12')

where λ denotes the (R+1)-th variable of the set x, y, .... The determinant Δ' ≡ (a_1 b_2 ... e_{R+1}) = 0, by hypothesis, while the minor of a_1 in Δ' is not zero, also by hypothesis. Hence, by 5.21, the equations (12') are satisfied when

        x = A'_1,   y = B'_1,   ...,   λ = E'_1,        (14)

where A'_1, B'_1, ... are the cofactors of a_1, b_1, ... in Δ'. Moreover, A'_1 ≠ 0.

Further, if R+1 < r ≤ n,

        a_r A'_1 + b_r B'_1 + ... + e_r E'_1,

being the determinant formed by putting a_r, b_r, ... for a_1, b_1, ... in Δ', is a determinant of order R+1 formed from the coefficients of the equations (12); as such its value is, by our hypothesis, zero. Hence the values (14) satisfy not only (12'), but also

        a_r x + b_r y + ... + e_r λ = 0        (R+1 < r ≤ n).

Hence the n equations (12) are satisfied if we put the values (14) for x, y, ..., λ and the value zero for all variables in (12) other than these. Moreover, the value of x in (14) is not zero.
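The sufficiency argument of 5.21 is constructive: when Δ = 0 and A_1 ≠ 0, the cofactors of the first row themselves furnish a non-trivial solution. A minimal sketch (plain Python, standard library only; the singular matrix is an invented example):

```python
from itertools import permutations
from math import prod

def parity(p):
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(m):
    n = len(m)
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def minor(m, r, s):
    return [[m[i][j] for j in range(len(m)) if j != s]
            for i in range(len(m)) if i != r]

def cofactor(m, r, s):
    return (-1)**(r + s) * det(minor(m, r, s))

M = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]   # third row = first + second
assert det(M) == 0

# x = A1, y = B1, z = C1: the cofactors of the first row
sol = [cofactor(M, 0, s) for s in range(3)]
assert sol == [3, -6, 3]                # not all zero, since A1 != 0
for row in M:
    assert sum(row[j] * sol[j] for j in range(3)) == 0
```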
                THE MINORS OF A DETERMINANT                33

6. The minors of a zero determinant
THEOREM 12. If Δ = 0, then

        A_r B_s = A_s B_r,   A_r C_s = A_s C_r,   etc.;

that is, if Δ = 0, the cofactors of the r-th row (column) are proportional to those of the s-th row (column).

If Δ = 0 and if, further, no first minor of Δ is zero, consider the n equations

        a_r x + b_r y + ... + k_r t = 0        (r = 1, 2, ..., n).        (15)

They are satisfied by x = A_1, y = B_1, ..., t = K_1; for, by Theorem 10,

        a_1 A_1 + b_1 B_1 + ... + k_1 K_1 = Δ = 0,
        a_r A_1 + b_r B_1 + ... + k_r K_1 = 0        (r = 2, 3, ..., n).

Now let s be any one of the numbers 2, 3, ..., n and consider the n-1 equations

        b_r y + c_r z + ... + k_r t = -a_r x        (r ≠ s).        (16)

Proceeding as in 4.3, but with the determinant A_s in place of Δ, we obtain

        A_s y = B_s x,   A_s z = C_s x,   ...,   A_s t = K_s x.

These equations hold whenever x, y, ..., t satisfy the equations (15), and therefore hold when x = A_1, y = B_1, ..., t = K_1. Hence

        A_s B_1 = B_s A_1,   A_s C_1 = C_s A_1,   etc.,

and the same method of proof holds when we take x = A_r, y = B_r, ..., t = K_r with r ≠ s; that is,

        A_s B_r = B_s A_r,   A_s C_r = C_s A_r,   etc.

Moreover, if no first minor is zero, we may divide and so obtain the equations

        A_r/A_s = B_r/B_s = ... = K_r/K_s.        (17)

Hence our general theorem is proved when no first minor of Δ is zero; the modifications needed when some first minor vanishes are slight.
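Theorem 12 can be seen numerically: for a vanishing determinant the array of cofactors has all its two-rowed minors zero, i.e. its rows (and its columns) are proportional. A sketch (plain Python, standard library only; the singular matrix is an invented example):

```python
from itertools import permutations
from math import prod

def parity(p):
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(m):
    n = len(m)
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def minor(m, r, s):
    return [[m[i][j] for j in range(len(m)) if j != s]
            for i in range(len(m)) if i != r]

def cofactor(m, r, s):
    return (-1)**(r + s) * det(minor(m, r, s))

M = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]
assert det(M) == 0
C = [[cofactor(M, r, s) for s in range(3)] for r in range(3)]
# A_r B_s = A_s B_r, etc.: every two-rowed minor of the cofactor array vanishes
for r in range(3):
    for s in range(3):
        for p in range(3):
            for q in range(3):
                assert C[r][p] * C[s][q] == C[s][p] * C[r][q]
```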
                THE MINORS OF A DETERMINANT                35

Working with columns instead of rows, we obtain in the same way the corresponding results for the cofactors of two columns.

                        EXAMPLES III

1. By solving the equations

        a_r x + b_r y + c_r z + d_r t = 0        (r = 1, 2, 3),

prove that x, y, z, t are proportional to the (signed) three-rowed determinants of the array of coefficients; then, taking the coefficients to be powers of four numbers a, b, c, d, obtain identities in a, b, c, d (with denominators such as (a-b)(a-c)(a-d)) by considering the coefficients of powers.

2. If Δ denotes the determinant

        | a  h  g |
        | h  b  f |
        | g  f  c |

and A, B, C, F, G, H are the cofactors of a, b, c, f, g, h in Δ [A = bc - f², F = gh - af, etc.], prove that

        BC - F² = aΔ,   CA - G² = bΔ,   AB - H² = cΔ,
        GH - AF = fΔ,   HF - BG = gΔ,   FG - CH = hΔ.

3. Use Example 2 to prove that

        aG² + bF² + cC² + 2fFC + 2gGC + 2hFG = CΔ,

and hence that, when the determinant Δ of Example 2 is equal to zero, the left-hand side also vanishes.

4. Prove that the three lines whose cartesian equations are

        a_r x + b_r y + c_r = 0        (r = 1, 2, 3)

are concurrent or parallel if (a_1 b_2 c_3) = 0. HINT. Make the equations homogeneous and use Theorem 11.

5. Find the conditions that the four planes whose cartesian equations are

        a_r x + b_r y + c_r z + d_r = 0        (r = 1, 2, 3, 4)

should have a finite point in common.

6. Prove that the equation of the circle cutting the three given circles

        x² + y² + 2g_r x + 2f_r y + c_r = 0        (r = 1, 2, 3)

orthogonally can be written as a determinant of the fourth order equated to zero.
36                THE MINORS OF A DETERMINANT

7. By considering the minors of x and x² in the determinant, and the coefficients of x and x² in the product of the factors, evaluate the determinants of Examples 10, 11 on p. 27.

8. Prove that the circle of Example 6 is the locus of the point whose polars with respect to the three given circles are concurrent.

9. Express in determinantal form the condition that four given circles may have a common orthogonal circle, and extend the results of Example 8 to determinants of higher orders.

7. Laplace's expansion of a determinant
7.1. We now show that the expansions of Δ_n by its rows and columns are but special cases of a more general procedure. The determinant can be expressed in the form

        Δ_n = Σ (-1)^N a_r b_s c_t ... k_e,        (18)

in which the sum is taken over the n! ways of assigning to r, s, t, ..., e the values 1, 2, ..., n in some order, and N is the total number of inversions in the suffixes r, s, t, ..., e. The terms in (18) that contain a_p b_q, when p and q are fixed, constitute a sum

        a_p b_q Σ (±) c_t ... k_e,        (19)

in which the sum is taken over the (n-2)! ways of assigning to t, ..., e the values 1, 2, ..., n, excluding p and q, in some order. Moreover, an interchange of any two letters throughout (19) will reproduce the same set of terms, but with opposite signs prefixed.
                THE MINORS OF A DETERMINANT                37

Hence, by the definition of a determinant, the terms of (18) that contain a_p b_q may be written as a_p b_q (AB)_pq, where either (AB)_pq or -(AB)_pq is equal to the determinant of order n-2 obtained by deleting from Δ_n the first and second columns and also the p-th and q-th rows; denote this determinant by its leading diagonal, say (c_t d_u ... k_e). Again, since (18) contains a set of terms a_p b_q (AB)_pq, and an interchange of a and b leaves (AB)_pq unaltered, it follows (from the definition of a determinant) that (18) also contains a set of terms -a_q b_p (AB)_pq. Thus (18) contains a set of terms

        (a_p b_q - a_q b_p)(AB)_pq,

and all the terms of (18) are accounted for if we take all possible pairs of numbers p, q. Hence

        Δ_n = Σ (±)(a_p b_q)(AB)_pq,        (20)

where (i) (a_p b_q) denotes the two-rowed determinant a_p b_q - a_q b_p; (ii) the sum is taken over all possible pairs p, q from 1, 2, ..., n; (iii) (AB)_pq = (c_t d_u ... k_e), where t, u, ..., e are 1, 2, ..., n in that order but excluding p and q. In this form the fixing of the sign is a simple matter: N is the total number of inversions of suffixes in

        p, q, t, u, ..., e,

for the leading diagonal term a_p b_q c_t d_u ... k_e of the product (a_p b_q)(c_t d_u ... k_e) is prefixed by plus, since this is true of (18). The sum (20) is called the expansion of Δ by its first two columns.
38                THE MINORS OF A DETERMINANT

7.2. The expansion (20) is called an expansion of Δ by second minors. Once the argument of 7.1 has been grasped, it is intuitive that the argument extends to expansions such as

        Δ = Σ (±)(a_p b_q c_r)(ABC)_pqr,        (21)

an expansion by third minors, taken over all possible triads p, q, r; and so to expansions by fourth minors (22), and so on. We leave the elaboration to the reader.

7.3. In 7.1 we expanded Δ by its first two columns; we may, in fact, expand it by any two (or more) columns or rows. For example, Δ_n is equal to the determinant obtained by interchanging a and c and also b and d, and the Laplace expansion of the latter determinant by its first two columns may be considered as the Laplace expansion of Δ_n by its third and fourth columns. Various modifications of 7.1 are often used.

7.4. Determination of sign in a Laplace expansion
The procedure of 1.3 (p. 29) enables us to calculate with ease the sign appropriate to a given term of a Laplace expansion. Consider the determinant Δ ≡ |a_rs| having a_rs as the element in its r-th row and s-th column, and its Laplace expansion by the s_1-th and s_2-th columns (s_1 < s_2). This expansion is of the form

        Δ = Σ (±) | a_{r1 s1}  a_{r1 s2} | Δ(r_1, r_2; s_1, s_2),        (23)
                  | a_{r2 s1}  a_{r2 s2} |

where (i) Δ(r_1, r_2; s_1, s_2) denotes the determinant obtained by deleting the r_1-th and r_2-th rows as also the s_1-th and s_2-th columns of Δ, and (ii) the summation is taken over all possible pairs r_1, r_2 (r_1 < r_2).
                THE MINORS OF A DETERMINANT                39

Now, by 1.3, the terms in the expansion of Δ that involve a_{r1 s1} are given by (-1)^{r1+s1} a_{r1 s1} Δ(r_1; s_1), where Δ(r_1; s_1) is the determinant obtained by deleting the r_1-th row and s_1-th column of Δ. In Δ(r_1; s_1) the element a_{r2 s2} (r_1 < r_2, s_1 < s_2) appears in the (r_2 - 1)-th row and (s_2 - 1)-th column; hence the terms that involve a_{r1 s1} a_{r2 s2} in the expansion of Δ are given by

        (-1)^{r1+s1} (-1)^{r2-1+s2-1} a_{r1 s1} a_{r2 s2} Δ(r_1, r_2; s_1, s_2),

for Δ(r_1, r_2; s_1, s_2) is obtained from Δ(r_1; s_1) by deleting the row and column containing a_{r2 s2}. Thus the Laplace expansion of Δ contains the term

        (-1)^{r1+r2+s1+s2} | a_{r1 s1}  a_{r1 s2} | Δ(r_1, r_2; s_1, s_2).        (24)
                           | a_{r2 s1}  a_{r2 s2} |

That is to say, the sign to be prefixed to a term in the Laplace expansion is (-1)^σ, where σ is the sum of the row and column numbers of the elements appearing in the first factor of the term. The rule, proved above for expansions by second minors, easily extends to Laplace expansions by third, fourth, ... minors.

As an exercise to ensure that the import of (20) has been grasped, the reader should check the following expansions of determinants of the fourth and fifth orders. The signs are most easily fixed by (24): for instance, the sign attached to the term whose first factor lies in the first and fifth rows of the first and second columns is (-1)^{1+5+1+2}. Alternatively, the signs are those of the products of the leading diagonal terms.
(iii)   Δ = Σ (±) a_r Δ'_r, where Δ_1 ≡ (a_2 b_3 c_4), Δ_2 ≡ (a_3 b_4 c_1), Δ_3 ≡ (a_4 b_1 c_2), ..., and Δ'_r is the cofactor of a_r in Δ_r.
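The Laplace expansion (20), with the sign rule (24), can be checked against the direct expansion. The sketch below (plain Python, standard library only; the fourth-order matrix is an invented example) expands by the first two columns.

```python
from itertools import combinations, permutations
from math import prod

def parity(p):
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(m):
    n = len(m)
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def laplace_first_two_cols(m):
    n = len(m)
    total = 0
    for p, q in combinations(range(n), 2):        # pairs of rows, p < q
        first = [[m[p][0], m[p][1]], [m[q][0], m[q][1]]]
        rest = [i for i in range(n) if i not in (p, q)]
        comp = [[m[i][j] for j in range(2, n)] for i in rest]
        # sign (-1)^(r1+r2+s1+s2), one-based rows p+1, q+1 and columns 1, 2
        sign = (-1)**((p + 1) + (q + 1) + 1 + 2)
        total += sign * det(first) * det(comp)
    return total

A = [[1, 2, 0, 3], [4, 1, 2, 0], [0, 5, 1, 2], [3, 0, 4, 1]]
assert laplace_first_two_cols(A) == det(A)
```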
                        CHAPTER III
                THE PRODUCT OF TWO DETERMINANTS

1. The summation convention
It is an established convention that a sum such as

        a_1 b_1 + a_2 b_2 + ... + a_n b_n        (1)

may be denoted by the single term a_r b_r. The convention is that when a literal suffix is repeated in a single term (as r is here repeated) then the single term shall represent the sum of all the terms that correspond to different values of r. The convention is applicable only when, by the context, one knows the range of values of the suffix: in what follows we suppose that each suffix has the range of values 1, 2, ..., n. Further examples of the convention are

        a_rs x_s,     which denotes the sum over s = 1, ..., n of a_rs x_s;        (3)
        a_rj b_js,    which denotes the sum over j = 1, ..., n of a_rj b_js;       (4)
        a_rs x_r x_s, which denotes the double sum over r = 1, ..., n and s = 1, ..., n of a_rs x_r x_s.        (5)

In the last example both r and s are repeated, and so we must sum with regard to both of them.

The repeated suffix is often called a 'DUMMY SUFFIX', a curious, but almost universal, term for a suffix whose presence implies the summation. The nomenclature is appropriate because the meaning of the symbol as a whole does not depend on what letter is used for the 'dummy': for example, both a_rs x_s and a_rj x_j stand for the same thing.
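The conventions (3)-(5) translate directly into summations. The following sketch (plain Python; the arrays are invented examples) also illustrates that renaming a dummy suffix changes nothing.

```python
n = 3
a = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
x = [1, -1, 2]
b = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]

# a_rs x_s : s is a dummy (summed) suffix, r a free one
y = [sum(a[r][s] * x[s] for s in range(n)) for r in range(n)]
assert y == [5, 7, -1]

# a_rj b_js : the dummy j may be renamed (here to k) without changing the meaning
c1 = [[sum(a[r][j] * b[j][s] for j in range(n)) for s in range(n)]
      for r in range(n)]
c2 = [[sum(a[r][k] * b[k][s] for k in range(n)) for s in range(n)]
      for r in range(n)]
assert c1 == c2
```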
42                THE PRODUCT OF TWO DETERMINANTS

Any suffix that is not repeated is called a 'FREE SUFFIX'. Thus

        a_rs x_s = 0        (r = 1, 2, ..., n)

is to be read, not as a single expression, but as a typical one of the n expressions obtained as the free suffix r takes each of the values 1, 2, ..., n in turn; no further summation of all such expressions is implied.

2. In the next section we use the convention in considering linear transformations of a set of linear equations. Consider n variables x_i (i = 1, 2, ..., n), the n equations

        a_ri x_i = 0        (r = 1, 2, ..., n),        (6)

and the n linear forms

        x_i = b_is X_s        (i = 1, 2, ..., n).        (7)

When we substitute the forms (7), the equations (6) take the form

        a_ri b_is X_s = 0        (r = 1, 2, ..., n).        (8)

Now consider the n equations

        c_rs X_s = 0        (r = 1, 2, ..., n),   where   c_rs ≡ a_ri b_is.        (10)

If the determinant |c_rs| is zero, then (10) is satisfied by a set of values X_i which are not all zero (Theorem 11). But, when the X_i satisfy equations (10), we have, for the x_i given by (7),

        a_ri x_i = 0        (r = 1, 2, ..., n),        (12)

and so EITHER all the x_i are zero OR the determinant |a_rs| is zero (Theorem 11). In the former case the n equations b_is X_s = 0 are satisfied by a set of values X_s which are not all zero; that is, |b_rs| = 0. Hence, if |c_rs| = 0, then at least one of the determinants |a_rs|, |b_rs| is zero.
3. Proofs of the rule for multiplying determinants
THEOREM 13. Let |a_rs|, |b_rs| be two determinants of order n. Then their product is the determinant |c_rs|, where

        c_rs = a_rα b_αs.        (13)

In this section we shall use only the Greek letter α as a dummy suffix implying summation: a repeated Roman letter will not imply summation. Thus a_1α b_α1 stands for a_11 b_11 + a_12 b_21 + ... + a_1n b_n1, while a_1r b_r1 stands for the single term.

By considering degrees it is indicated that

        |c_rs| = k |a_rs| |b_rs|,        (14)

where k is independent of the a's and b's: for the determinant |c_rs| is of degree n in the a's and of degree n in the b's, and, by the result of 2, it vanishes whenever |a_rs| = 0 or |b_rs| = 0. Considering the particular case a_rs = 1 when r = s, a_rs = 0 when r ≠ s, for which c_rs = b_rs, we see that k = 1. The foregoing is an indication rather than a proof of the theorem contained in (14), and we give two proofs. Of the two, the second is perhaps the easier; the first is certainly the more artificial.

3.1. First proof: a proof that uses the double suffix notation. Consider the determinant |c_pq| written in the form

        | a_1α b_α1   a_1α b_α2   ...   a_1α b_αn |
        | a_2α b_α1   a_2α b_α2   ...   a_2α b_αn |
        | ........................................ |
        | a_nα b_α1   a_nα b_α2   ...   a_nα b_αn |

and pick out the coefficient of a_1r a_2s ... a_nz in the expansion of this determinant. This we can do by considering, in the first place, all a_1x other than a_1r to be zero, all a_2x other than a_2s to be zero, and so on. Doing this, we see that the terms of |c_pq| that contain a_1r a_2s ... a_nz as a factor are given by

        a_1r a_2s ... a_nz  ×  | b_r1  b_r2  ...  b_rn |
                               | b_s1  b_s2  ...  b_sn |
                               | ...................... |
                               | b_z1  b_z2  ...  b_zn |.

Unless the numbers r, s, ..., z are all different, the 'b' determinant, having two rows identical, is zero. When r, s, ..., z are the numbers 1, 2, ..., n in some order, the 'b' determinant is equal to (-1)^N |b_pq|, where N is the total number of inversions of the suffixes in r, s, ..., z. Hence

        |c_pq| = |b_pq| Σ (-1)^N a_1r a_2s ... a_nz,

where the summation is taken over the n! ways of assigning to r, s, ..., z the values 1, 2, ..., n in some order; so that the factor multiplying |b_pq| is merely the expanded form of |a_pq|. Hence |c_pq| = |a_pq| |b_pq|.

3.2. Second proof: a proof that uses a single suffix notation. Consider, in the first place, determinants of order 2. Let Δ = (a_1 b_2) and Δ' = (α_1 β_2), and consider the determinant of order 4

        | a_1  b_1   0    0  |
        | a_2  b_2   0    0  |
        | -1   0    α_1  β_1 |
        | 0    -1   α_2  β_2 |,

in which the bottom left-hand quarter has -1 in each element of its principal diagonal and 0 elsewhere. By Laplace's expansion by its first two columns, this determinant is equal to ΔΔ'. On the other hand, the determinant is unaltered if we write

        r'_1 = r_1 + a_1 r_3 + b_1 r_4,   r'_2 = r_2 + a_2 r_3 + b_2 r_4,

and it then becomes

        | 0   0   a_1 α_1 + b_1 α_2   a_1 β_1 + b_1 β_2 |
        | 0   0   a_2 α_1 + b_2 α_2   a_2 β_1 + b_2 β_2 |
        | -1  0        α_1                 β_1          |
        | 0   -1       α_2                 β_2          |.        (17)

The argument extends readily to determinants of order n: the determinant of order 2n

        | a_1  b_1  ...  k_1   0    0   ...  0  |
        | .................................... |
        | a_n  b_n  ...  k_n   0    0   ...  0  |
        | -1   0   ...   0    α_1  β_1  ... κ_1 |
        | .................................... |
        | 0    0   ...  -1    α_n  β_n  ... κ_n |

is equal to ΔΔ', where Δ = (a_1 b_2 ... k_n) and Δ' = (α_1 β_2 ... κ_n); and, on writing

        r'_r = r_r + a_r r_{n+1} + b_r r_{n+2} + ... + k_r r_{2n}        (r = 1, 2, ..., n),

its top left-hand quarter becomes a block of zeros while its top right-hand quarter has a_r α_1 + b_r α_2 + ... + k_r α_n, a_r β_1 + ... + k_r β_n, ..., as the elements of its r-th row.        (18)
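Theorem 13 can be checked numerically. The sketch below (plain Python, standard library only; the matrices are invented examples) forms c_rs both as rows-into-columns (the statement of the theorem) and as rows-into-rows (which multiplies by the transpose of the second factor, whose determinant is the same).

```python
from itertools import permutations
from math import prod

def parity(p):
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(m):
    n = len(m)
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
B = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
n = 3

# rows of |a| into columns of |b|: c_rs = a_{r alpha} b_{alpha s} (Theorem 13)
C_rc = [[sum(A[r][k] * B[k][s] for k in range(n)) for s in range(n)]
        for r in range(n)]
# rows into rows: the same product, since a determinant is unaltered
# when rows and columns are interchanged
C_rr = [[sum(A[r][k] * B[s][k] for k in range(n)) for s in range(n)]
        for r in range(n)]

assert det(A) == 1 and det(B) == 7
assert det(C_rc) == det(A) * det(B)
assert det(C_rr) == det(A) * det(B)
```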
sign see p. the only nonzero term 10. 7. 2n is an even number and so the sign to be prefixed to the determinant is always the plus sign.3) 1 an ! + .. Other ways of multiplying determinants The rule contained in Theorem 13 is sometimes called the matrix^ rule or the rule for multiplication of rows by columns.THE PRODUCT OF TWO D JOT RM WANTS 10 When we expand the last determinant by is (for its first n columns. 39. The process is most easily fixed in the mind by considering (18). another way of stating Theorem 13. . = (_l)n Moreover.. Hence ln *n is a /i Kn (19) which 4. and so we can at once deduce from (18) that.0 1 . A determinant is unaltered in value if rows and columns are interchanged. if A' = A* then (18) may also be written in the forms AA' AA' = (20) = (21) f See Chapter VI. in forming the second row of the determinant we multiply the elements of the second row of A by the elements of the successive columns of A'. In forming the first row of the determinant on the right of (19) we multiply the elements of the first row of A by the elements of the successive columns of A'.
is zero. There however. 3 a. 2 (<*y) o The determinant is the product by rows 1 of the two determinants 2a 2)8 1 a ft a2 1 1 y 2 2y of these ]8). (ii) as the when n > 4. 7 is y 2 By Theorem second to (j8 8.] \/2. and many writers use multiplication is. by columns or by the matrix rule. 19. p. EXAMPLES IV A number of interesting results can be obtained for at least 1. is the product by rows of the two determinants and so 3. The determinant of Example 3. by applying the rules shall arrange for multiplying determinants to particular examples. The of these is referred to as multiplication by rows. the examples in groups and we shall indicate the method of solution We one example in each group. The extension to determinants of order n is immediate: first several examples of the process occur later in the book. the second as multiplication by columns.) 4. In the examples that follow multiplication is by rows: this is a matter of taste.THE PRODUCT OF TWO DETERMINANTS In (20) the elements of the rows of 47 A are multiplied by the elements of the rows of A'. in (21) the elements of the columns of A are multiplied by the elements of the columns of A'. much economy of thought if one consistently uses the same method whenever possible. . Evaluate the determinant of order n which has (o r element in the rth row and 5th column: (i) when n = 4. tlx) first equal to 2(/? y)(y a)(a /?) and the y)(y a)(a [In applying Theorem 8 we write down the product of the differences and adjust the numerical constant by considering the diagonal term of the determinant. Prove that the determinant of order n which has (ar a g )* as the element in the rth row and 5th column is zero when n > 3.
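Example 4 above asserts that the determinant of order n with (a_r - a_s)^2 in the rth row and sth column is zero whenever n > 3; the entries factor through arrays of three columns, so Theorem 15 applies. A quick numerical check (a modern addition; the values a_r are arbitrary):

```python
from itertools import permutations
from math import prod

def det(m):
    """Exact determinant by the signed-permutation expansion."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        total += (-1) ** inv * prod(m[i][p[i]] for i in range(n))
    return total

a = [1, 4, 9, 16, 25]              # any five numbers will do
M = [[(a[r] - a[s]) ** 2 for s in range(5)] for r in range(5)]
assert det(M) == 0                 # zero for every order n > 3
```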
THE PRODUCT OF TWO DETERMINANTS Extend the 3 3 results of Examples 3 3 3 and 4 to higher powers of a r a8 . Prove that (x a)(x j3)(x y)(x 8) is a factor of A == 1 X X2 X3 X4 s r in and find the other factor. 1 1 ft I a a 8 82 S3 Hence (by Theorem 8) prove that the first determinant is equal to the 2 product of the squares of the differences of a.) Prove that (ax) (6 (c (a?/) (b 2/) 3 3 (az) (6 z) (c a3 63 c3 x) x) 3 (c y) 3 3 z) x 3 y 3 z3 = 7. 8. Prove that.e./?. 0. 2 and evaluate it when k and when k > 3. i. 1 1 A indicates the pro 1 1 1 1 i 8 8 S2 8 3 4 82 83 8* y If A!. . The factors of A follow from Theorem 8. 8)} . . u8. ^ 9. (Harder. /6. 0. x 2 x 3 x4 . in order to give A! A 2 = A.) Prove that (ax) (b 2 2 (a?/) (6 i/) 2 2 x) (cx) is 2 (cy) 2 (az) 2 (bz) 2 (cz) 2 a2 62 c2 zero when k = 1. if s r = ar +jSr +yr +8 r then . . {(a. we have anything but O's in the first four places of the last row of we shall get unwanted terms in the last row of the product A! A 2 So we try 0. The arrangement of the elements duct by rows (vide Example 8) of Solution. y. .48 '5. (Harder. 9a6rx^/z(6 c)(c~a)(a b)(y z)(z x)(x y). then . 1 as the last row of A! and then it is not hard to see that.y. the last column of A 2 must be \j x. 0.
By considering the product of two determinants of the form a + ib c}~id (cid) a ib prove that the product of a sum of four squares by a sum of four squares is itself a sum of four squares..2) of order X A = = 17.y T ) are . Multiply the determinants of the fourth order whose rows are given by 2 x r y r x* + y* 1 r *H2/ r . Find the factors of 52 SA 54 So SS So *1 *2 3 *8 1 X* where 12. (x . 13. 16. B ca. C = c a&.y two sets of values + yiY Sl2 XiX t \y Y etc. 5. Find a relation between the mutual distances of a sphere. 2.y). r = (x r 1. concyclic. 3 s three and hence prove that (a* + b + c*3abc)(A* + B +C*3ABC) 3 3 3 + F + Z 3XYZ. If X hy.'. five points on 15.Sn Y = hx\ by. = xa ) z + (y r ya Y> where Hence prove that the determinant o r zero whenever the four points (<K r . when s = a hy r r 49 111 3 j8 r 111 ft }jS . 3. prove that*S\ = S2l 2 2) 19 l 2t a and that 4702 Sn /* 6 H . 4. further. 23. may be written in the form 2 62 a 2 6c. ti of(x. v/11. then Prove. Extend the results of Examples 811 to determinants of higher orders. that if 3 3 3 Express a f 6 + c 3a6c as a circulant (p.. THE PRODUCT OF TWO DETERMINANTS r Prove that.2 *r 22/r 1. sr = a r f r f y r . a3 y 3 y y Note how the sequence of indices in the columns of the first determinant affects the sequence of suffixes in the columns of the last determinant. 14. and o r.. (x^y^.
.) Using the summation convention. y2 ) contains terms (2) + (c 1 a 2 )(y 1 a 2 ) + (a 1 6 2 )(a 1 j8 8 ). A = !. Prove that 18s. AjU = x ^X r be a given quadratic form in n variables.. that is.50 THE PRODUCT OF TWO DETERMINANTS . (ftiC 2 )(j8 1 when expanded.. .... x n . say. The result is the determinant A If = (1) we expand A and pick out the terms involving. supposing k to be the nth letter consider also a corresponding notation in Greek . A 72 we see ^ia ^ they are /^yo^jCg b 2 c 1 ).. ^> denotes n sets of values of the variables X ^ or 1 .. the terms b 2 c l ).. It follows that A. the Nth.) Extend the result of Example 17 to three variables (. Let r ^ wliere (rrj. (Easier.k. a^..t. (Harder. z) and the coordinates of three points in space. < JV.. x%). the sum of all the products of corresponding determinants of order two that can be formed from the arrays in (A). First take two arrays y\ in which the number of columns exceeds the number of rows. x 2 .. ISA. from equal to the sum of terms in Now and t consider a.. T/. Similarly. x n are n independent variables (and not powers of x).. where n letters.a rs x rx s where a: 1 x 2 . 5. Hence A is (2). .. let S^.1.. r a rs x%. terms that can possibly arise moreover (2) accounts for all the expansion of A. and multiply them by rows in the manner of multiplying determinants.. Multiplication of arrays 5.. (x lt y lt z^).r. (x z i/ 2 z 2 ).. Hence A contains a term in P*yi are (b l c 2 where (b 1 c 2 ) denotes the determinant formed by the last two columns of the first array in (A) and (^72) the corresponding determinant formed from the second array.. b..
is the sum of all the products of corresponding determinants of order two that can be formed from the two arrays.

The argument extends to any two arrays with n rows and N columns, where n < N. The determinant Δ that is obtained on multiplying the two arrays by rows (3) contains, when expanded, a term (4) which is the product of the two determinants (a1 b2 ... kn) and (α1 β2 ... κn) formed from the columns headed by the first n letters, k being the nth letter; moreover (4) includes every term in the expansion of Δ containing the letters a, b,..., k and no other roman letter. Thus the full expansion of Δ consists of a sum of such products, the number of such products being NCn, the number of ways of choosing n distinct letters from N given letters. We have therefore proved the following theorem.

THEOREM 14. The determinant of order n which is obtained on multiplying by rows two arrays that have n rows and N columns, where n < N, is equal to the sum of all the products of corresponding determinants of order n that can be formed from the two arrays.

THEOREM 15. The determinant of order n which is obtained on multiplying by rows two arrays that have n rows and N columns, where n > N, is equal to zero.

The determinant so formed is, in fact, the product by rows of the two zero determinants of order n that one obtains on adding (n−N) columns of ciphers to each array.
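Theorem 15 can be illustrated directly: multiplying by rows two arrays of 3 rows and only 2 columns yields a determinant of order 3 that vanishes identically. A minimal sketch (a modern addition; the entries are arbitrary):

```python
from itertools import permutations
from math import prod

def det(m):
    """Exact determinant by the signed-permutation expansion."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        total += (-1) ** inv * prod(m[i][p[i]] for i in range(n))
    return total

# two arrays with n = 3 rows and N = 2 columns, so n > N
A = [[1, 2], [3, 4], [5, 6]]
B = [[7, 1], [2, 9], [4, 3]]
# multiply by rows: element (i, k) is the inner product of row i of A by row k of B
P = [[sum(A[i][j] * B[k][j] for j in range(2)) for k in range(3)] for i in range(3)]
assert det(P) == 0
```

Appending a column of ciphers to each array, as the text suggests, turns the same computation into a product of two determinants each of which is visibly zero.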
By Theorem 15. p. + 2fyz + . Show that this result gives a relation connecting the mutual distances of any four points in a plane. the determinant obtained on multiplying them by rows vanishes identically.. X ^ ax + hy+gz.. 4. prove that if sr HINT. If a. Y == hx + by+fz. 3. and of the rth powers of the roots.. )3.r 4 2</ 4 1 1 1 *J + 2/J *4 2/ 4 have 5 rows and 4 columns. let S ^ax z + . and Theorem 14.. In the notation usually adopted for conies. Obtain the corresponding relation for five points in space.52 is THE PRODUCT OF TWO DETERMINANTS obtained on multiplying by rows the two arrays ! ft az bz <*3 and is also obtained on multiplying by rows the two zero determinants * 3 *> 3 ft EXAMPLES 1. y.. V The arrays 1 1 000 2. denotes the sum are the roots of an equation of degree n. 48. 2. Z^ gx+fy\cz t ... Compare Example 8.. Obtain identities by evaluating (in two ways) the determinant obtained on squaring (by rows) the arrays (i) a a' b b' c c' (ii) abed b' c' a' df 5.
is x 2 + by i +/z 2 which is the product of arrays xl *2 yL 2/2 Zi h 9 b f C Z2 f and so is ^4f 4//7yG. The It is multiplication of determinants of different orders sometimes useful to be able to write down the product of a determinant of order n by a determinant of order m. L. The methodf of doing so is sufficiently exemplified by considering the product of (a^ 6 2 c 3 ) by (^ /?2 ).. Dixon. We see that the result is true by considering the first and last determinants when expanded by their last columns. 6. The determinant is which is the product of arrays *i t 2/i 2i z2 2 2/a F2 and so is But (Yj Z 2 ). when written in full. Hence the initial determinant is the sum of three terms of the type 6.. ^22/1 Let = 2/iZ 2 2i *? = Zi# 2 2 2 ^i = ^12/2 Then Solution. H be the co factors of 2/2 a. ....THE PRODUCT OF TWO DETERMINANTS let 53 A t B.... The first member of the above equation is I learnt this method from Professor A. h in the discriminant of S.
X ft 4 ^4 C4 ^4 a result which becomes evident on considering the Laplace expansion of the last determinant by its third and fourth columns. while the second by the rule for multiplying two determinants of the same order. . the signs having the same arrangement in both summations. The reader may prefer the following method: ft X 3 X #2 &2 C2 c3 o ft a3 63 1 3 al + *3ft a 3 a2 This is. perhaps. A further example is d.54 THE PRODUCT OF TWO DETERMINANTS is. easier if a little less elegant.
CHAPTER IV

JACOBI'S THEOREM AND ITS EXTENSIONS

1. Jacobi's theorem

Let A_r, B_r,..., K_r denote the cofactors of a_r, b_r,..., k_r in a determinant Δ = (a1 b2 ... kn).

DEFINITION. The determinant Δ' = (A1 B2 ... Kn) is called the ADJUGATE determinant of Δ; and so for other letters and suffixes.

THEOREM 16. Δ' = Δ^(n−1).

When we multiply by rows the two determinants Δ and Δ', the element in the rth row and sth column of the product is

a_r A_s + b_r B_s + ... + k_r K_s,

which, by Theorem 12, is equal to Δ when r = s and is equal to zero when r ≠ s. We thus obtain a determinant having Δ in each place of its leading diagonal and zero elsewhere, so that

ΔΔ' = Δ^n.

Hence, when Δ ≠ 0, Δ' = Δ^(n−1). But when Δ is zero, each two-rowed minor A_r B_s − A_s B_r of Δ' is also zero, so that the Laplace expansion of Δ' by its first two columns is a sum of zeros. Hence when Δ = 0, Δ' is also zero, and the equality Δ' = Δ^(n−1) still holds.

THEOREM 17. When Δ = 0, A_r B_s − A_s B_r = 0 for every r and s.
we multiply by rows (1) where the j's occur in the rth row and the only other nonzero terms are A's in the leading diagonal. In a determinant A. form a minor of the determinant.1. the 0. When by the argument used when we con16. The value of (1) is General form of Jacobi's theorem Complementary minors. 2. A'. we wish to consider the minor of Jr in where J is the 5th letter of the alphabet. where r < n. . all terms to the left of and below the leading diagonal Thus A B9 . this result follows gives the result stated in the theorem. Moreover. EXTENSIONS we obtain A wherein are zero. ITS a. sidered A' in Theorem if each B C r s B C 8 r is zero. the elements of any given r rows and r columns. of order n. . the elements that are left form . and columns are excluded from A. When these same rows 2.56 JACOBI'S THEOREM AND A . K When A = A = 0.
for example. /O\ l5) / \ (1) above. 57 which. the determinant (b 2 c 3 d4 ). (3) is merely the definition of a first minor. as 1 (3) is true whenever r ~ Then. one Laplace expansion A' that compose 2Rr 4702 . in (1) above is. l Let us first = A we see by Theorem 12. when the appropriate sign complementary minor of the original The sign to be prefixed is determined by the rule that minor. let $Jlr be a minor of A' having r rows and columns and let yn _r be the complementary minor of the corresponding minor of A. In particular. and whenever A Accordingly. then also 9Wr = 0. If r 1. So let r > 1. by definition. let /zr be the minor of A whose elements correspond in row and column to those elements of If r 1 > and A = 0. by the definition of complementary minor. Thus. A. in *2 c2 d2 (1) the complementary minor of is since the Laplace expansion of (1) by its first and third columns involves the term +(d 1 C 3 )(b 2 d4 ). if A is the determinant then t in the usual notation for cofactors. It remains to prove that (3) is true when r > 1 and A ^ 0.JACOBFS THEOREM AND ITS EXTENSIONS a determinant of order n is r. Then CYT> MJf T = v / fl ~~T ^"^ Z\ Ar 1 . due regard being paid to the sign. With the notation of Theorem 16. a Laplace expansion shall always be of the form prefixed. 0. dispose of the easy particular cases of the theorem. THEOREM 18. A ^ 0. further. j . is called the where yn ^r is the minor complementary to ^r .
and let JP. is K n r letters of the alphabet.. 1 an . . . . E be the first r letters. . 1 . Aj_ .. . en fn . . . . . . . .. be the next Let 4. .58 JACOBI'S THEOREM AND ITS r EXTENSIONS of A must contain the term in the Hence A may be written form other elements other elements It thus sufficient to prove the theorem when 9W is formed from the first r rows and first r columns of A'. this we now do.. .. . nr elements of the leading diagonal of the and other elements of the nr rows 1 's all last we obtain A . A fr If Kr . . and so (3) is true whenever r > 1 and A ^ 0.. E FI .. When we form the product X AI . . kn first where the last determinant are are O's. /I fr+l fn *>+! If b Kn That is to say..
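Jacobi's theorem in its simplest form (Theorem 16: the adjugate determinant equals Δ^(n−1)) can be verified numerically for n = 3. The sketch below is a modern illustration, not the book's own computation; each cofactor is formed from the complementary minor with the usual alternating sign.

```python
from itertools import permutations
from math import prod

def det(m):
    """Exact determinant by the signed-permutation expansion."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        total += (-1) ** inv * prod(m[i][p[i]] for i in range(n))
    return total

def cofactor(m, r, s):
    """(-1)^(r+s) times the complementary minor of the element in row r, column s."""
    minor = [[m[i][j] for j in range(len(m)) if j != s]
             for i in range(len(m)) if i != r]
    return (-1) ** (r + s) * det(minor)

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
adj = [[cofactor(A, r, s) for s in range(3)] for r in range(3)]
# Theorem 16 with n = 3: the adjugate determinant is Delta^(n-1) = Delta^2
assert det(adj) == det(A) ** 2
```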
CHAPTER V

SYMMETRICAL AND SKEW-SYMMETRICAL DETERMINANTS

1. The determinant |a_rs| is said to be SYMMETRICAL if a_rs = a_sr for every r and s; it is said to be SKEW-SYMMETRICAL if a_rs = −a_sr for every r and s. It is an immediate consequence of the definition that the elements in the leading diagonal of a skew-symmetrical determinant must all be zero, for a_rr = −a_rr.

2. THEOREM 19. A skew-symmetrical determinant of odd order has the value zero.

The value of a determinant is unaltered when rows and columns are interchanged, as we have seen in Chapter I. In a skew-symmetrical determinant such an interchange is equivalent to multiplying each row of the original determinant by −1, for a column of Δ is (−1) times the corresponding row; that is, it is equivalent to multiplying the value of the whole determinant by (−1)^n. Hence the value of a skew-symmetrical determinant of order n is unaltered when it is multiplied by (−1)^n; and if n is odd, this value must be zero.

3. Before considering determinants of even order we shall examine the first minors of a skew-symmetrical determinant of odd order. Let A_rs denote the cofactor of a_rs, the element in the rth row and sth column of a skew-symmetrical determinant of odd order. Then A_sr = A_rs. For A_rs and A_sr differ only in considering column for row and row for column in Δ: in forming A_sr we take, with rows and columns interchanged, the elements that form A_rs, and the interchange multiplies a minor of order n−1 by (−1)^(n−1), which is +1 when n is odd.

4. We require yet another preliminary result: this time one that is often useful in other connexions.
The determinant au a 22 (1) s (2) is the cofactor ra of a^ in \ars \. 5. considerthis. rs lr tl . Hence the coefficient of X r Y is A is expanded as in Theorem 1 (i) every term must involve Hence (2) accounts for all the either S or a product X r Y s . we see that the coefficient of X r Y is s the Laplace expansion of (1) by its rth and (nfl)th rows.60 SYMMETRICAL AND THEOREM 20. (3) Since = 0. Its value is ZA X X rs r s.. A skewsymmetrical determinant of order 2n a polynomial function^fits elements.. THEOREM the square of 21. of degree n 1. . A A = A A (Theorem 12).. let \a rs as such. be a skewsymmetrical determinant is zero (Theorem 19). 20. its value Yr = X r.. The one term of the expansion that involves no X and no Y is \ars \S. Further. ls . . so that the determinant (1) of Theorem 20 \ now a skewsymmetrical determinant of order 2n. terms in the expansion of A. S = 0. (Z V4 (4) where the sign preceding X is (1)^. to be zero. .iV4 nB ). and if we suppose that each is the its square of a polynomial function of elements. The term involving Xr Ys is obtained by putting S. when s sign.. is In Theorem of order let is 2?i 1 . chosen so that \a rs rl \ rs sr rr ss s rr ss . or A* = A A or A = A AJA Hence (3) is and A /A n = AJA Z V4 . Doing A rs But. ing we see that the coefficients of X r YH and of a rs S are of opposite A rs Moreover.. 1 11 2 22 r Now ^4 n . A nn minants of order 2n are themselves skewsymmetrical deter2. . all X other than JC r and all Y other than 7.
then (4) will be the square of a polynomial function of degree n. Hence our theorem is true for a determinant of order 2n if it is true for one of order 2n−2. But a skew-symmetrical determinant of order 2 is

| 0  a |
| −a 0 | = a²,

a perfect square. Hence our theorem is true when n = 1. It follows by induction that the theorem is true for all integer values of n.

6. The Pfaffian

The polynomial whose square is equal to a skew-symmetrical determinant of even order is called a Pfaffian. Its properties and relations to determinants have been widely studied. The study of Pfaffians, interesting though it may be, is too special in its appeal to warrant its inclusion in this book. Here we give merely sufficient references for the reader to study the subject if he so desires:

MUIR, Sir T., Contributions to the History of Determinants (London, 1890–1930), vols. i–iv. This history covers every topic of determinant theory. It is too detailed and exacting for the beginner, but it is delightfully written and can be recommended as a reference to anyone who is seriously interested.

SALMON, G., Lessons Introductory to the Modern Higher Algebra (Dublin, 1885), Lesson V.

BARNARD, S., and CHILD, J. M., Higher Algebra (London, 1936), chapter ix.

EXAMPLES VI (MISCELLANEOUS)

1. Prove that ... where A_rs is the cofactor of a_rs in |a_rs|. If a_rs = a_sr, prove that the above quadratic form may be written as ...
*. when when r =56 a. A\. and A.. B? C\ BJ CJ B\ (7} BJ Cl BJ C\ 17): use HINT. B\C\ = a4 A $ (by Theorem Theorem 10. A! = (a t 6. prove that a fct is independent of a?!.an x ri ii xt . and use Theorem 10.. HINT. B\C\ 4. A is a determinant of order cofactor of afj in A. *. Use Example 3. Prove that A ri the = whenever a n +x = +x 0. r = . 1. Prove that if A = ari (Greek letters only being used as dummies).62 2.. S= n n r 2 2. Prove that 1 1 1 sin a sin/? cos a cos/? sin2a sin 2/9 cos2a cos2j8 1 1 siny sin 8 sine cosy cos 8 sin2y sin 28 cos2y cos 28 cos2e cose sin2e with a proper arrangement of the signs of the differences. A a = (o a &4 c i) AS = (a 4 6 1 c l ) the cofactor of ar in A. and the summation convention be used 6. a typical element of A. EXAMPLES If VI ar$ . n.c4 ). xk . < n. Prove that A\ = A} and that is A4 (^6. a r.. then the results of Theorem 10 may be expressed as  t^r = &**A mr = =A= ar ^A tti QrA*. and A 7^ 0. JS 21 1 ^a J3 Al ^j = 0. 6.. that A\ = and A\. consider BJ/dx u .03).
0*3.. HINT. c a and use the remainder theorem. 9.. .EXAMPLES Prove also that transformation then 7. = a. a. then the determinant has r~ l as a factor. and if r columns (rows) become equal when x o. Consider a determinant with c{ = c^ c. determinant if fr (a) = gr (a) hr (a) when r = 1. c^ = c. and a 1 by independent proofs that both determinant and product are ( equal to o i +o + 3) ( a _ l)4( a + 3a 3 + 4a + 2af 1). VI 63 if n variables r x are related to n variables X by the ^ X = r* &x = A^X^ 8 9 t Prove Theorem 9 by forming the product of the given circulant by the determinant (o^. as = Verify Theorem 9 when n = 5. then the whose elements are polynomials in x. where Wi . 3. 8.. o>J).. contains (x a) 2 as a factor... o x = a 4 = a 5 = 1. If the elements of a determinant are polynomials in x. a) n are distinct nth roots of unity. Prove that.. (xa) 10. 2.
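The two main results of this chapter — Theorem 19 (a skew-symmetrical determinant of odd order vanishes) and Theorem 21 (one of even order is the square of a polynomial in its elements, the Pfaffian) — can be checked numerically. The sketch below is a modern addition, not part of the original text; the entries above the diagonal are arbitrary, and the order-4 Pfaffian a12·a34 − a13·a24 + a14·a23 is the standard formula.

```python
from itertools import permutations
from math import prod

def det(m):
    """Exact determinant by the signed-permutation expansion."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        total += (-1) ** inv * prod(m[i][p[i]] for i in range(n))
    return total

def skew(upper, n):
    """Build a skew-symmetrical matrix from its above-diagonal elements."""
    m = [[0] * n for _ in range(n)]
    for (r, s), v in upper.items():
        m[r][s], m[s][r] = v, -v        # a_sr = -a_rs, zeros on the diagonal
    return m

# Theorem 19: odd order (here 5) gives zero
A5 = skew({(0, 1): 3, (0, 2): -1, (0, 3): 4, (0, 4): 2, (1, 2): 5,
           (1, 3): -2, (1, 4): 1, (2, 3): 6, (2, 4): -3, (3, 4): 7}, 5)
assert det(A5) == 0

# Theorem 21: even order (here 4) is the square of the Pfaffian
a12, a13, a14, a23, a24, a34 = 3, -1, 4, 2, 5, -2
A4 = skew({(0, 1): a12, (0, 2): a13, (0, 3): a14,
           (1, 2): a23, (1, 3): a24, (2, 3): a34}, 4)
pf = a12 * a34 - a13 * a24 + a14 * a23
assert det(A4) == pf ** 2
```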
PART II

MATRICES
CHAPTER VI

DEFINITIONS AND ELEMENTARY PROPERTIES

1. Linear substitutions

We can think of n numbers, real or complex, either as separate entities x1, x2,..., xn or as a single entity x that can be broken up into its n separate pieces (or components) if we wish to do so. When we are thinking of x as a single entity we shall refer to it as a 'number'.† [For example, in ordinary three-dimensional space, given a set of axes through O and a point P determined by its coordinates with respect to those axes, we can think of the vector OP (or of the point P) and denote it by x, or we can give prominence to the components x1, x2, x3 of the vector along the axes and write x as (x1, x2, x3).]

Now suppose that two 'numbers' x, with components x1, x2,..., xn, and X, with components X1, X2,..., Xn, are connected by a set of n equations

X_r = a_r1 x1 + a_r2 x2 + ... + a_rn xn   (r = 1,..., n),   (1)

wherein the a_rs are given constants. [For example, consider a change of axes in coordinate geometry, where the x_r and X_r are the components of the same vector referred to different sets of axes.]

Introduce the notation A for the set of n² numbers

a11  a12  ...  a1n
.    .         .
an1  an2  ...  ann

and understand by the notation Ax a 'number' whose components are given by the expressions on the right-hand side of equations (1). Then the equations (1) can be written symbolically as

X = Ax.   (2)

† This is merely a slight extension of a common practice in dealing with a complex number z, which is, in fact, a pair of real numbers x, y. Cf. G. H. Hardy, Pure Mathematics, chap. iii.
The reader will find that, with but little practice, the symbolical equation (2) will be all that it is necessary to write when equations such as (1) are under consideration.

2. Linear substitutions as a guide to matrix addition

It is a familiar fact of vector geometry that the sum of a vector X, with components X1, X2, X3, and a vector Y, with components Y1, Y2, Y3, is a vector Z, with components X1+Y1, X2+Y2, X3+Y3.

Consider now a 'number' x and a related 'number' X given by

X = Ax,   (2)

where A has the same meaning as in §1. Take, further, a second 'number' Y given by

Y = Bx,   (3)

where B symbolizes the set of n² numbers

b11  b12  ...  b1n
.    .         .
bn1  bn2  ...  bnn

and (3) is the symbolic form of the n equations

Y_r = b_r1 x1 + b_r2 x2 + ... + b_rn xn   (r = 1, 2,..., n).

We denote by X+Y, naturally, by comparison with vectors in a plane or in space, the 'number' with components X_r + Y_r. But

X_r + Y_r = (a_r1 + b_r1)x1 + (a_r2 + b_r2)x2 + ... + (a_rn + b_rn)xn,

and so we can write

X + Y = (A+B)x,

provided that we interpret A+B to be the set of n² numbers

a11+b11  a12+b12  ...  a1n+b1n
.        .             .
an1+bn1  an2+bn2  ...  ann+bnn.

We are thus led to the idea of defining the sum of two symbols such as A and B. This idea we consider with more precision in the next section.
3. Matrices

3.1. The sets symbolized by A and B in the previous section have as many rows as columns. In the definition that follows we do not impose this restriction, but admit sets wherein the number of rows may differ from the number of columns.

DEFINITION 1. An array of mn numbers, real or complex,

a11  a12  ...  a1n
a21  a22  ...  a2n
.    .         .
am1  am2  ...  amn,

the array having m rows and n columns, is called a MATRIX. When m = n the array is called a SQUARE MATRIX of order n.

As we have seen, the idea of matrices comes from linear substitutions. But there is one important difference between the A of §2 and the A that denotes a matrix. In substitutions, such as those considered in §2, A is thought of as operating on some 'number' x. The definition of a matrix deliberately omits this notion of A in relation to something else, and so leaves matrix notation open to a wider interpretation.

A matrix is frequently denoted by a single letter A, or by any other symbol one cares to choose. For example, a common notation for the matrix of the definition, with an element a_rs in its rth row and sth column, is [a_rs]. The square bracket is merely a conventional symbol (to mark the fact that we are not considering a determinant) and is conveniently read as 'the matrix'.

3.2. We now introduce a number of definitions that will make precise the meaning to be attached to such symbols as A+B, A−B, 2A, AB, when A and B denote matrices.

DEFINITION 2. Two matrices A, B are CONFORMABLE FOR ADDITION when each has the same number of rows and each has the same number of columns.

DEFINITION 3. ADDITION. The sum of a matrix A, with an element a_rs in its rth row and sth column, and a matrix B, with an element b_rs in its rth row and sth column, is defined only when
A, B are conformable for addition; it is then defined as the matrix having a_rs + b_rs as the element in its rth row and sth column. The sum is denoted by A+B. For example, by Definition 3,

[a1 c1]   [b1 d1]   [a1+b1  c1+d1]
[a2 c2] + [b2 d2] = [a2+b2  c2+d2].

The matrices

[a1]        [a1  a2]
[a2],

are distinct: the first is a matrix of one column, the second a matrix of one row; e.g. the one-rowed matrix [a, b] cannot be added to a square matrix of order 2.

DEFINITION 4. The matrices A and B are said to be equal, and we write A = B, when the two matrices are conformable for addition and each element of A is equal to the corresponding element of B.

DEFINITION 5. The matrix −A is, by definition, the matrix whose elements are those of A multiplied by −1; e.g. −[a, b] is the matrix [−a, −b].

DEFINITION 6. SUBTRACTION. A−B is defined as A+(−B). It is called the difference of the two matrices, and we write A−B.

DEFINITION 7. MULTIPLICATION BY A NUMBER. When r is a number, real or complex, and A is a matrix, rA is defined to be the matrix each element of which is r times the corresponding element of A.

It follows from Definitions 6 and 7 that 'A−B' and 'A+(−1)B' mean the same thing.

DEFINITION 8. A matrix having every element zero is called a NULL MATRIX and is written 0.
In virtue of the foregoing definitions we are justified in writing 2A instead of A+A; 3A instead of A+A+A; and, for example, (4+i)A instead of A+A+A+(1+i)A.

Further, since the addition and subtraction of matrices is based directly on the addition and subtraction of their elements, which are real or complex numbers, the laws* that govern addition in ordinary algebra also govern the addition of matrices. The matrix equation

(A+B)+C = A+(B+C)

is an immediate consequence of (a+b)+c = a+(b+c), where the small letters denote the elements standing in, say, the rth row and sth column of the matrices A, B, C; either sum is denoted by A+B+C. Similarly, A+B = B+A is an immediate consequence of a+b = b+a. Thus the associative and commutative laws are satisfied for addition and subtraction. By an easy extension, which we shall leave to the reader, we can consider the sum of several matrices and write, say,

A+B+(C+D) = A+B+C+D,

and so on. On the other hand, although we may write r(A+B) = rA+rB when r is a number, on the ground that r(a+b) = ra+rb when r, a, b are numbers, we have not yet assigned a meaning to a symbol such as R(A+B) when R, A, B are all matrices.

* The laws that govern addition and subtraction in ordinary algebra are: (i) the associative law, of which an example is (a+b)+c = a+(b+c); (ii) the commutative law, of which an example is a+b = b+a; (iii) the distributive law, of which an example is r(a+b) = ra+rb.
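The laws of matrix addition just discussed — associative, commutative, and the distributive law for a numerical multiplier — can be illustrated with element-by-element operations. A minimal modern sketch (not part of the original text; the matrices are arbitrary):

```python
def add(A, B):
    # defined only for matrices conformable for addition
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(r, A):
    # rA: each element multiplied by the number r
    return [[r * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[9, 0], [1, 2]]

assert add(add(A, B), C) == add(A, add(B, C))                # associative law
assert add(A, B) == add(B, A)                                # commutative law
assert scale(3, add(A, B)) == add(scale(3, A), scale(3, B))  # r(A+B) = rA+rB
```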
4. Linear substitutions as a guide to matrix multiplication

Consider the equations

x1 = a11 y1 + a12 y2,        y1 = b11 z1 + b12 z2,
x2 = a21 y1 + a22 y2,        y2 = b21 z1 + b22 z2,        (1a)

where the a and b are given constants. They enable us to express x1, x2 in terms of z1, z2; in fact,

x1 = (a11 b11 + a12 b21) z1 + (a11 b12 + a12 b22) z2,
x2 = (a21 b11 + a22 b21) z1 + (a21 b12 + a22 b22) z2.        (2a)

If we introduce the notation of §1, we may write the equations (1a) as

x = Ay,        y = Bz.

We can go direct from x to z and write

x = ABz,

provided we interpret AB in such a way that AB is the symbolic form of equations (2a); that is, provided we interpret AB to be the matrix

[a11 b11 + a12 b21     a11 b12 + a12 b22]
[a21 b11 + a22 b21     a21 b12 + a22 b22].

This gives us a direct lead to the formal definition of the product AB of two matrices A and B. This we shall do in the sections that follow. But before we give this formal definition it will be convenient to define the 'scalar product' of two 'numbers'.

5. The scalar product or inner product of two numbers

DEFINITION 9. Let x, y be two 'numbers', in the sense of §1, having components x1, x2,..., xn and y1, y2,..., yn.
Then the number

x1 y1 + x2 y2 + ... + xn yn

is called the INNER PRODUCT or the SCALAR PRODUCT of the two numbers. If the two numbers have m, n components respectively and m ≠ n, the inner product is not defined.

In using this definition for descriptive purposes we shall permit ourselves a certain degree of freedom in what we call a 'number'. Thus, in dealing with two matrices [a_rs] and [b_rs], we shall speak of, for example, the inner product of the first row of [a_rs] by the second column of [b_rs]. That is to say, we think of the first row of [a_rs] as a 'number' having components (a11, a12,..., a1n) and of the second column of [b_rs] as a 'number' having components (b12, b22,..., bn2).

6. Matrix multiplication

6.1. DEFINITION 10. Two matrices A, B are CONFORMABLE FOR THE PRODUCT AB when the number of columns in A is equal to the number of rows in B.

DEFINITION 11. PRODUCT. The product AB is defined only when the matrices A, B are conformable for this product: it is then defined as the matrix whose element in the ith row and kth column is the inner product of the ith row of A by the kth column of B.

It has as an immediate consequence of the definition that AB has as many rows as A and as many columns as B.

The best way of seeing the necessity of having the matrices conformable is to try the process of multiplication on two non-conformable matrices: if a row of the one and a column of the other do not contain the same number of elements, the inner products required by the definition cannot be formed. One or two examples follow in §6.2.†

† The reader is recommended to work out a few products for himself.
6.2. Some simple examples of multiplication are

[a h g][x]   [ax+hy+gz]
[h b f][y] = [hx+by+fz]
[g f c][z]   [gx+fy+cz],

[a1 b1][x]   [a1 x + b1 y]
[a2 b2][y] = [a2 x + b2 y],

the latter being a matrix having 2 rows and 1 column.

6.3. Pre-multiplication and post-multiplication. We can form both AB and BA only if the number of columns of A is equal to the number of rows of B and the number of columns of B is equal to the number of rows of A. When these products are formed, the elements in the ith row and kth column are, respectively, the inner product of the ith row of A by the kth column of B, in AB; and the inner product of the ith row of B by the kth column of A, in BA.

THEOREM 22. The matrix AB is, in general, distinct from the matrix BA.

Since AB and BA are usually distinct, there can be no precision in the phrase 'multiply A by B' until it is clear whether it shall mean AB or BA. Accordingly, we introduce the terms† post-multiplication and pre-multiplication (and thereafter avoid the use of such lengthy words as much as we can!). The matrix A post-multiplied by B is the matrix AB; the matrix A pre-multiplied by B is the matrix BA.

† Sometimes one uses the terms fore and aft instead of pre and post.
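That AB and BA are, in general, distinct matrices is easily seen numerically. A modern sketch (the particular matrices are arbitrary):

```python
def matmul(A, B):
    # element (i, k) is the inner product of row i of A by column k of B
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
AB = matmul(A, B)   # A post-multiplied by B: swaps the columns of A
BA = matmul(B, A)   # A pre-multiplied by B: swaps the rows of A
assert AB != BA
```

Here B interchanges columns when it post-multiplies and rows when it pre-multiplies, which makes the failure of the commutative law vivid.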
6.4. The distributive law. One of the laws that govern the product of complex numbers is the distributive law, of which an illustration is a(b+c) = ab+ac. The distributive law also governs the algebra of matrices. For convenience of setting out the work we shall consider only square matrices of a given order n, and we shall use A, B, C,... to denote [a_ik], [b_ik], [c_ik],.... We recall that the notation [a_ik] sets down the element that is in the ith row and the kth column of the matrix so denoted. In this notation we may write

A(B+C) = [a_ik] × ([b_ik] + [c_ik])
       = [a_ik] × [b_ik + c_ik]               (Definition 3)
       = [Σ_j a_ij (b_jk + c_jk)]             (Definition 11)
       = [Σ_j a_ij b_jk] + [Σ_j a_ij c_jk]    (Definition 3)
       = AB + AC.

Similarly, we may prove that (B+C)A = BA + CA.

6.5. The associative law for multiplication. Another law that governs the product of complex numbers is the associative law, of which an illustration is (ab)c = a(bc). We shall show that

(AB)C = A(BC),

and thereafter we shall use the symbol ABC to denote either of them. Let B = [b_ik], C = [c_ik], and let BC = [γ_ik], where, by Definition 11,

γ_ik = Σ_j b_ij c_jk.
]. etc.Z\ this product of matrices is. to AA. though order of arrangement. for example. au bv cjk denotes j 2a i/ 6 fl c # it is and once the forms (1) and (2) of 6.5 have been studied. (AB)C and we may use the symbol denote ABC We 6. whereby n b i} c jk denotes ^b n n it c ik .. where. (2) (2) gives the same terms as (1). the use of the convention makes obvious the law of formation of the elements of a product ABC. k being the only suffixes that are not dummies. [ a il bli c im zik\> the i..76 DEFINITIONS AND ELEMENTARY PROPERTIES in the ith Then A(BC) is a matrix whose element kth column is n n n row and .. The summation convention.. j corresponding to I But the expression = corresponding to I 3. . for example. use A 2 .. by Definition 11. Hence A(BC) to denote either. * i ^ i Similarly. Then (^4 B) C kth column is is a matrix whose element in the ith row and n n n <i b il c at . the term in a different 3 in (1) is the same as the term 2. let AB = [j8 /A. A*. it is best to use the summation convention (Chapter III). = j 2 in (2).6. in fact. By extension. becomes clear that any repeated suffix in such an expression a 'dummy' and may be replaced by any other. Moreover. AAA. When once the prin ciple involved in the previous work has been grasped.
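The two laws proved in 6.4 and 6.5 can be verified numerically; the sketch below (ours, not the book's; helper names are illustrative) checks them for second-order matrices with randomly chosen integer elements:

```python
# A numerical check (plain Python, not from the book) of the distributive
# and associative laws for 2x2 matrices with random integer elements.
import random

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[A[i][k] + B[i][k] for k in range(len(A[0]))] for i in range(len(A))]

def rand2():
    return [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]

random.seed(0)
A, B, C = rand2(), rand2(), rand2()

# distributive law: A(B + C) = AB + AC
assert matmul(A, matadd(B, C)) == matadd(matmul(A, B), matmul(A, C))
# associative law: (AB)C = A(BC), so the symbol ABC is unambiguous
assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C))
print("distributive and associative laws verified")
```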
7. The commutative law for multiplication.
7.1. The third law that governs the multiplication of complex numbers is the commutative law, of which an example is ab = ba. As we have seen (Theorem 22), this law does not hold for matrices: AB is not, in general, the same matrix as BA. For instance, if

    A = [1 1]    and    B = [1 0]
        [0 0]               [1 0],

then AB has 2 in its leading place while BA has every element unity, so that AB and BA are distinct.

7.2. The division law. Ordinary algebra is governed also by the division law, which states that when the product xy is zero, either x or y, or both, must be zero. This law does not govern matrix products: the equation AB = 0 does not necessarily imply that A or B is the null matrix. For instance, if

    A = [0 1]    and    B = [1 0]
        [0 0]               [0 0],

then AB is the null matrix although neither A nor B is, while BA is not the null matrix. There is, however, an important exception, to which we shall come in due course.

8. The unit matrix.
DEFINITION 12. The square matrix of order n that has unity in its leading diagonal places and zero elsewhere is called THE UNIT MATRIX of order n.

It is denoted by I. The number n of rows and columns in I is usually clear from the context and it is rarely necessary to use distinct symbols for unit matrices of different orders.

Let C be a square matrix of order n and I the unit matrix of order n. Then IC = CI = C. Also I^2 = I^3 = I. Hence I has the properties of unity in ordinary algebra: just as we replace 1 x x and x x 1 by x in ordinary algebra, so we replace I x C and C x I by C in the algebra of matrices. Further, if k is any number, real or complex, kI.C = C.kI and each is equal to the matrix kC.

It may also be noted that if A is a matrix of n rows, B a matrix of n columns, and I the unit matrix of order n, then IA = A and BI = B, even though A and B are not square matrices. If A is not square, A and I are not conformable for the product AI.
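A minimal numerical illustration (ours, not the book's) of the last three sections: the unit matrix behaves like unity, while the commutative and division laws fail.

```python
# Plain-Python check (not from the book) of sections 7 and 8.

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0], [0, 1]]          # the unit matrix of order 2
C = [[1, 2], [3, 4]]
assert matmul(I, C) == C and matmul(C, I) == C   # IC = CI = C
assert matmul(I, I) == I                          # I^2 = I

A, B = [[1, 1], [0, 0]], [[1, 0], [1, 0]]
assert matmul(A, B) != matmul(B, A)               # the commutative law fails

A, B = [[0, 1], [0, 0]], [[1, 0], [0, 0]]
zero = [[0, 0], [0, 0]]
assert matmul(A, B) == zero and matmul(B, A) != zero   # AB = 0 though A, B != 0
print("unit-matrix properties verified")
```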
9. Summary of previous sections. We have shown that the symbols A, B,..., representing matrices, may be added, multiplied, and, in a large measure, manipulated as though they represented ordinary numbers. The points of difference between ordinary numbers a, b and matrices A, B so far encountered are:

(i) ab = ba, whereas the matrix AB is not usually the same as the matrix BA;

(ii) ab = 0 implies that either a or b (or both) is zero, whereas the equation AB = 0 does not necessarily imply that either A or B is the zero matrix; again, AB may be zero and BA not zero. For example, if

    A = [0 a1]    and    B = [b1 b2]
        [0 a2]               [0  0],

then AB is the zero matrix, although neither A nor B is the zero matrix, while BA, whose leading row is 0, a1 b1 + a2 b2, is not in general the zero matrix. We shall return to this point in Theorem 29.

10. The determinant of a square matrix.
10.1. DEFINITION. When A is a square matrix, the determinant that has the same elements as the matrix, and in the same places, is called THE DETERMINANT OF THE MATRIX.

It is usually denoted by |A|; so that if A = [a_ik], having the element a_ik in the ith row and kth column, then |A| denotes the determinant |a_ik|.

It is an immediate consequence of Theorem 14 that if A and B are square matrices of order n, the determinant of the matrix AB is equal to the product of the determinants of the matrices A and B; that is, |AB| = |A| x |B|. Since |A| and |B| are numbers, the commutative law holds for their product and |A| x |B| = |B| x |A|. Equally, |BA| = |B| x |A|.
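The multiplicative property just stated is easily checked for second-order matrices; the sketch below is ours, not the book's.

```python
# Numerical check (ours) of 10.1: |AB| = |A| x |B| = |BA|,
# even though AB and BA differ as matrices.

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
AB, BA = matmul(A, B), matmul(B, A)
assert AB != BA                                    # the matrices differ...
assert det2(AB) == det2(A) * det2(B) == det2(BA)   # ...their determinants agree
print(det2(A), det2(B), det2(AB))  # -2 -2 4
```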
Thus |AB| = |BA|, although the matrix AB is not the same matrix as the matrix BA. This is due to the fact that the value of a determinant is unaltered when rows and columns are interchanged, whereas such an interchange does not leave a general matrix unaltered.

10.2. The beginner should note that it is NOT true that |kA| = k|A|, where k is a number. The true theorem is easily found. For simplicity of statement, let us suppose A to be a square matrix of order three and let us suppose that k = 2. We have

    A = [a11 a12 a13]
        [a21 a22 a23]
        [a31 a32 a33],

say, so that, by Definition 8,

    2A = [2a11 2a12 2a13]
         [2a21 2a22 2a23]
         [2a31 2a32 2a33].

Hence |2A| = 2^3 |A| = 8|A|. The same is evident on applying 10.1 to the matrix product 2I x A, where 2I is twice the unit matrix of order three, namely

    2I = [2 0 0]
         [0 2 0]
         [0 0 2],

so that |2I| = 8.

The theorem '|A+B| = |A|+|B|' is manifestly false (Theorem 6, p. 15), and so is '|2A| = 2|A|'.

EXAMPLES VII

Examples 1-4 are intended as a kind of mental arithmetic.

1. Find the matrix A+B when
(i) A = [1 2; 3 4], B = [5 6; 7 8];
(ii) A = [1 1 2; 2 3 1; 3 2 3], B = [3 5 2; 4 5 1; 5 0 5];
(iii) A = [1 3 3], B = [4 4 6].
Ans. (i) [6 8; 10 12]; (ii) [4 6 4; 6 8 2; 8 2 8]; (iii) [5 7 9].

2. Form the products AB and BA when A and B are the matrices of Example 1 (i).
Ans. AB = [19 22; 43 50]; BA = [23 34; 31 46].

3. Is it possible to define the matrix A+B when (i) A has 3 rows and B has 4 rows; (ii) A has 3 columns and B has 4 columns; (iii) A has 3 rows and 4 columns, B has 4 rows and 3 columns?
Ans. No, in each case.

4. Is it possible to define the matrix AB and the matrix BA when A and B have the properties of Example 3?
Ans. The possibility depends on the number of columns of the first factor and the number of rows of the second. AB: (i) if A has 4 columns; (ii) if B has 3 rows; (iii) always. BA: (i) if B has 3 columns; (ii) if A has 4 rows; (iii) always.

Examples 5-8 deal with quaternions in their matrix form.

5. If i denotes the square root of -1 and i, j, k are defined as the matrices

    i = [i  0]     j = [ 0 1]     k = [0 i]
        [0 -i],        [-1 0],        [i 0],

prove that i^2 = j^2 = k^2 = -I, ij = -ji = k, jk = -kj = i, ki = -ik = j.

6. If Q = aI + bi + cj + dk and Q' = aI - bi - cj - dk, where a, b, c, d are numbers, real or complex, then QQ' = (a^2 + b^2 + c^2 + d^2) I.

7. If P = pI + qi + rj + sk and P' = pI - qi - rj - sk, where p, q, r, s are numbers, real or complex, prove that QPP'Q' = (p^2 + q^2 + r^2 + s^2)(a^2 + b^2 + c^2 + d^2) I [see Example 6].

8. Prove the results of Example 5 when i, j, k are represented by four-rowed matrices with real elements, each a signed permutation of the rows of the unit matrix I of order 4 [see also Example 17].

9. By considering the matrices

    I = [1 0]     i = [ 0 1]
        [0 1],        [-1 0],
together with matrices aI + bi, a and b being real numbers, show that the usual complex number may be regarded as a sum of matrices whose elements are real numbers; in particular, show that
    (aI + bi)(cI + di) = (ac - bd)I + (ad + bc)i.

10. Prove that

    [x y z] [a h g] [x]
            [h b f] [y]  =  [ax^2 + by^2 + cz^2 + 2fyz + 2gzx + 2hxy],
            [g f c] [z]

that is, the usual quadratic form as a matrix having only one row and one column.

11. Prove that the matrix equation A^2 - B^2 = (A - B)(A + B) is true only if AB = BA, and that aA^2 + 2hAB + bB^2 cannot, in general, be written as the product of two linear factors unless AB = BA.

12. If B = xI + yA, where x and y are numbers, real or complex, and I is the unit matrix, then AB = BA.

13. If x1, x2 are numbers, then (by the distributive law)
    (A - x1 I)(A - x2 I) = A^2 - (x1 + x2)A + x1 x2 I.

14. If f(t) = p0 t^n + p1 t^(n-1) + ... + pn, where p0,..., pn are numbers, and f(A) denotes the matrix p0 A^n + p1 A^(n-1) + ... + pn I, where A is a square matrix and I is the unit matrix of the same order, then
    f(A) = p0 (A - t1 I)(A - t2 I)...(A - tn I),
where t1,..., tn are the roots of the equation f(t) = 0.

15. The use of submatrices. The matrix

    P = [p11 p12 p13]
        [p21 p22 p23]
        [p31 p32 p33]

may be denoted by

    [P11 P12]
    [P21 P22],

where P11 denotes [p11 p12; p21 p22], P12 denotes [p13; p23], P21 denotes [p31 p32], and P22 denotes [p33].

16. If each P_rs is a matrix of two rows and two columns, P = [P_rs], and Q = [Q_rs] refers to like matrices with q_ik instead of p_ik, prove that
    P + Q = [P_rs + Q_rs]   and   PQ = [P_r1 Q_1s + P_r2 Q_2s].
NOTE. The first 'column' of PQ as it is written above is merely a shorthand for the first two columns of the product PQ when it is obtained directly from [p_ik] x [q_ik].

17. Prove Example 8 by putting, for 1 and i in the matrices of Example 5, the two-rowed matrices of Example 9.

Elementary transformations of a matrix

18. I_ij is the matrix obtained by interchanging the ith and jth rows of the unit matrix I. Prove that the effect of pre-multiplying a square matrix A by I_ij is the interchange of two rows of A, and that the effect of post-multiplying A by I_ij is the interchange of two columns of A. Deduce that I_ij^2 = I.

19. If H is a matrix of order n obtained from the unit matrix by replacing the rth unity in the principal diagonal by k, then HA is the result of multiplying the rth row of A by k, and AH is the result of multiplying the rth column of A by k.

20. If H = I supplemented by an element h in the position indicated by the suffixes i, j (ith row, jth column, i and j distinct), then HA affects A by replacing row_i by row_i + h row_j, and AH affects A by replacing col_j by col_j + h col_i.

Examples on matrix multiplication

21. Prove that the product of the two matrices

    [cos^2 t      cos t sin t]      [cos^2 u      cos u sin u]
    [cos t sin t  sin^2 t    ],     [cos u sin u  sin^2 u    ]

is zero when t and u differ by an odd multiple of one half of pi.

22. When l = (l1, l2, l3) and m = (m1, m2, m3) are the direction cosines of two lines, prove that the product of the two matrices whose elements in the ith row and kth column are, respectively, l_i l_k and m_i m_k is zero if and only if the lines l and m are perpendicular.

23. Prove that L^2 = L when L denotes the first matrix of Example 22.
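Examples 18-20 lend themselves to a direct numerical check. The sketch below is ours, not the book's; the helper names `I_swap` and `H_scale` are illustrative, not the book's notation.

```python
# A sketch (ours) of Examples 18-20: elementary row and column operations
# carried out by multiplying with modified unit matrices.

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[1 if i == k else 0 for k in range(n)] for i in range(n)]

def I_swap(n, i, j):        # unit matrix with rows i and j interchanged
    M = identity(n)
    M[i], M[j] = M[j], M[i]
    return M

def H_scale(n, r, k):       # unit matrix with the rth diagonal unity replaced by k
    M = identity(n)
    M[r][r] = k
    return M

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# pre-multiplying by I_ij interchanges two rows; post-multiplying, two columns
assert matmul(I_swap(3, 0, 1), A) == [[4, 5, 6], [1, 2, 3], [7, 8, 9]]
assert matmul(A, I_swap(3, 0, 1)) == [[2, 1, 3], [5, 4, 6], [8, 7, 9]]
assert matmul(I_swap(3, 0, 1), I_swap(3, 0, 1)) == identity(3)   # I_ij^2 = I
# pre-multiplying by H multiplies the rth row by k; post-multiplying, the rth column
assert matmul(H_scale(3, 2, 10), A)[2] == [70, 80, 90]
assert [row[2] for row in matmul(A, H_scale(3, 2, 10))] == [30, 60, 90]
print("elementary transformations verified")
```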
CHAPTER VII
RELATED MATRICES

1. The transpose of a matrix.
1.1. DEFINITION 13. The matrix which, for r = 1, 2,..., has the rth column of A as its rth row is called the TRANSPOSE of A (or the TRANSPOSED MATRIX of A). It is denoted by A'.

The definition applies to all rectangular matrices, whether square or not. For example,

    [1 3 5]                          [1 2]
    [2 4 6]   is the transpose of    [3 4]
                                     [5 6],

for the rows of the one are the columns of the other. If A is a matrix of n columns, A' is a matrix of n rows. If A = [a_ik], then A' = [a_ki]; that is to say, denoting a matrix by the element in the ith row and kth column, the element in the ith row and kth column of A' is that in the kth row and ith column of A.

1.2. If A = A', A is said to be SYMMETRICAL.

When we are considering square matrices of order n we shall, throughout this chapter, denote the matrix by writing down the element in the ith row and kth column.

THEOREM 23. LAW OF REVERSAL FOR A TRANSPOSE. If A and B are square matrices of order n, the transpose of the product AB is the product of the transposes in the reverse order; that is,
    (AB)' = B'A'.

In this notation, let
    A = [a_ik],  A' = [a_ki];  B = [b_ik],  B' = [b_ki].
Then, using the summation convention, we have
    AB = [a_ij b_jk],  (AB)' = [a_kj b_ji].
Now the ith row of B' is b_1i, b_2i,..., b_ni and the kth column of A' is a_k1, a_k2,..., a_kn; so the inner product of the two is the sum over j of b_ji a_kj, and the product of the matrices B' and A' is given by
    B'A' = [a_kj b_ji].
Hence, on reverting to the use of the summation convention, (AB)' = B'A'.

COROLLARY. (ABC)' = C'(AB)' = C'B'A', and so, step by step, for any number of matrices: the transpose of a product is the product of the transposes in the reverse order.

2. The Kronecker delta. The symbol d_ik is defined by the equations
    d_ik = 1 when i = k,   d_ik = 0 when i is not equal to k.
It is a particular instance of a class of symbols that is extensively used in tensor calculus. In matrix algebra it is often convenient to use [d_ik] to denote the unit matrix, which, as we have already said, has 1 in the leading diagonal positions (when i = k) and zero elsewhere.

3. The adjoint matrix and the reciprocal matrix.
3.1. Transformations as a guide to a correct definition. Let 'numbers' X and x be connected by the equation X = Ax, where X symbolizes a single-column matrix [X_i], x a single-column matrix [x_i], and A the square matrix [a_ik]; that is,
    X_i = sum over k from 1 to n of a_ik x_k.     (1a)
When D, the determinant |a_ik|, is not zero, x can be expressed uniquely in terms of X (Chapter II, 3); for when we multiply each equation (1a) by the cofactor A_it and sum for i = 1, 2,..., n, we obtain (Theorem 10)
    D x_t = sum over i of A_it X_i,
that is,
    x_t = sum over i of (A_it / D) X_i.           (2a)
Thus we may write the equations (2a) in the symbolic form
    x = [A_ki / D] X,                             (2)
provided we interpret [A_ki / D] as the matrix having A_ki / D in the ith row and kth column.

3.2. Formal definitions. Let A = [a_ik] be a square matrix of order n; let D, or |A|, denote the determinant |a_ik|, and let A_rs denote the cofactor of a_rs in D.

DEFINITION 14. The matrix [A_ki] is the ADJOINT, or the ADJUGATE, MATRIX of [a_ik].

DEFINITION 15. A = [a_ik] is a SINGULAR MATRIX if D = 0; an ORDINARY MATRIX or a NON-SINGULAR MATRIX if D is not zero.

DEFINITION 16. When [a_ik] is a non-singular matrix, the matrix [A_ki / D] is THE RECIPROCAL MATRIX of [a_ik].

Notice that, in the last two definitions, it is A_ki, and not A_ik, that is to be found in the ith row and kth column of the matrix. The reason for the name 'the reciprocal matrix' may be found in the properties to be enunciated in Theorems 24 and 25; we write A^(-1) to denote the matrix [A_ki / D].

3.3. Properties of the reciprocal matrix.
THEOREM 24. If [a_ik] is a non-singular square matrix of order n, and if A_rs is the cofactor of a_rs in D = |a_ik|, then
    [a_ik] x [A_ki / D] = I,     (1)
    [A_ki / D] x [a_ik] = I,     (2)
where I is the unit matrix of order n.

The element in the ith row and kth column of the product (1) is the sum over j of a_ij A_kj / D, which (by Theorem 10, p. 30) is zero when i is not equal to k and is unity when i = k. Thus the product (1) is [d_ik] = I, and (1) is proved. Similarly, the element in the ith row and kth column of the product (2) is the sum over j of A_ji a_jk / D, which is zero when i is not equal to k and unity when i = k; hence (2) is proved.

In virtue of Theorem 24 we are justified, when A denotes [a_ik], in writing A^(-1) for the matrix [A_ki / D]; for we have shown by (1) and (2) of Theorem 24 that A A^(-1) = A^(-1) A = I. But, though we have shown that A^(-1) is a reciprocal of A, we have not yet shown that it is the only reciprocal. This we do in Theorem 25.

THEOREM 25. If A is a non-singular square matrix, there is one and only one matrix which, when multiplied by A, gives the unit matrix; that is to say, if AR = I, R must be A^(-1).

Let R be any matrix such that AR = I. Then (note the next steps)
    A^(-1)(AR) = A^(-1) I = A^(-1),
and, by the associative law,
    A^(-1)(AR) = (A^(-1) A) R = IR = R.
It follows that R = A^(-1).
Similarly, if RA = I, then
    (RA) A^(-1) = I A^(-1) = A^(-1),
and (RA) A^(-1) = R(A A^(-1)) = RI = R; so again R must be A^(-1). Thus, whether A is pre-multiplied or post-multiplied, R must be A^(-1).

3.4. In virtue of Theorem 25 we are justified in speaking of A^(-1) not merely as a reciprocal of A, but as THE reciprocal of A. Moreover, since A A^(-1) = A^(-1) A = I, it follows that A is a reciprocal of A^(-1) and, by Theorem 25, it must be the reciprocal. That is to say, the reciprocal of the reciprocal of A is A itself.

4. The index law. We have already used the notations A^2, A^3 to stand for AA, AAA; by extension, A^r denotes the product of r factors A. When A^(-1) denotes the reciprocal of A and s is a positive integer, we use the notation A^(-s) to denote (A^(-1))^s. With this notation we may write
    A^r x A^s = A^(r+s)     (1)
whenever r and s are positive or negative integers, provided that A^0 is interpreted as I. When r and s are positive integers, (1) is inherent in the notation itself.
We shall not prove (1) in every case; a proof of (1) when r > 0, s = -t < 0, and t > r will be enough to show the method. Let t = r + k, where k > 0. Then
    A^r A^(-t) = A^r A^(-r-k) = A^r A^(-r) A^(-k).
But
    A^r A^(-r) = A^(r-1) (A A^(-1)) A^(-(r-1)) = A^(r-1) A^(-(r-1)),
and so, step by step, A^r A^(-r) = A A^(-1) = I. Hence
    A^r A^(-t) = I A^(-k) = A^(-k) = A^(r-t).
Similarly we may prove the other cases of (1), and also that
    (A^r)^s = A^(rs)     (2)
whenever r and s are positive or negative integers.

5. The law of reversal for reciprocals.
THEOREM 26. The reciprocal of a product of factors is the product of the reciprocals of the factors in the reverse order.

Proof. For a product of two factors, we have
    (B^(-1) A^(-1))(AB) = B^(-1)(A^(-1) A)B = B^(-1) B = I,
so that B^(-1) A^(-1) is a reciprocal of AB and, by Theorem 25, it must be the reciprocal; that is, (AB)^(-1) = B^(-1) A^(-1). For a product of three factors,
    (ABC)^(-1) = {(AB)C}^(-1) = C^(-1)(AB)^(-1) = C^(-1) B^(-1) A^(-1),
and so on for any number of factors.

6. The operations of reciprocating and transposing are commutative.
THEOREM 27. (A')^(-1) = (A^(-1))'.

Proof. By Theorem 23,
    (A^(-1))' A' = (A A^(-1))' = I' = I.
Hence (A^(-1))' is a reciprocal of A' and, by Theorem 25, it must be the reciprocal; that is, (A')^(-1) = (A^(-1))'.
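Definition 16 and Theorem 24 translate directly into a computation. The sketch below (ours, not the book's) builds the reciprocal of a third-order matrix from its cofactors, using exact fractions; note the transposition, with A_ki and not A_ik in the ith row and kth column.

```python
# A sketch (ours) of Definition 16 and Theorem 24: the reciprocal of a
# non-singular 3x3 matrix computed as [A_ki / D] with exact fractions.
from fractions import Fraction

def det3(m):
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def reciprocal(a):
    d = det3(a)
    assert d != 0, "a singular matrix has no reciprocal"
    def cof(r, s):                     # cofactor A_rs
        rows = [i for i in range(3) if i != r]
        cols = [k for k in range(3) if k != s]
        minor = (a[rows[0]][cols[0]] * a[rows[1]][cols[1]]
               - a[rows[0]][cols[1]] * a[rows[1]][cols[0]])
        return (-1) ** (r + s) * minor
    # A_ki, not A_ik, sits in the ith row and kth column
    return [[Fraction(cof(k, i), d) for k in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]
R = reciprocal(A)
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(A, R) == I3 and matmul(R, A) == I3   # A A^-1 = A^-1 A = I
print(R)  # [[0, 0, 1], [-2, 1, 3], [3, -1, -5]]
```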
7. Singular matrices; the division law.
7.1. If A is a singular matrix, its determinant is zero and so we cannot form the reciprocal of A, though we can, of course, form the adjugate, or adjoint, matrix [A_ki].

7.2. The division law for non-singular matrices.
THEOREM 28. (i) If the matrix A is non-singular, the equation AB = 0 implies B = 0. (ii) If the matrix B is non-singular, the equation AB = 0 implies A = 0.

Proof. (i) Since the matrix A is non-singular, it has a reciprocal A^(-1), and it follows that
    B = IB = (A^(-1) A)B = A^(-1)(AB) = A^(-1) x 0 = 0,
and so B = 0.
(ii) Since B is non-singular, it has a reciprocal B^(-1). Since AB = 0, it follows that
    A = AI = A(B B^(-1)) = (AB) B^(-1) = 0 x B^(-1) = 0,
and so A = 0.

Hence, if AB = 0, either A = 0, or B = 0, or BOTH A and B are singular matrices. The division law for matrices thus takes the form 'if AB = 0, then A = 0, or B = 0, or both A and B are singular matrices'.

7.3. The product of a singular matrix by its adjoint matrix.
THEOREM 29. If A = [a_ik] is a singular matrix, the product of A by its adjoint matrix is the zero matrix.

Proof. Since A is singular, |a_ik| = 0. The element in the ith row and kth column of the product [a_ik] x [A_ki] is the sum over j of a_ij A_kj, which (Theorem 10) is equal to |a_ik| both when i = k and, being zero, when i is not equal to k. Since |a_ik| = 0, every element of the product is zero. Hence the products [a_ik] x [A_ki] and [A_ki] x [a_ik] are both of them null matrices.

7.4. THEOREM 30. If A is a given matrix and B is a non-singular matrix (both being square matrices of order n), there is one and only one matrix X such that A = BX, and there is one and only one matrix Y such that A = YB. Moreover,
    X = B^(-1) A,   Y = A B^(-1).

Proof. Since B is non-singular, B^(-1) and A are conformable for the product B^(-1) A. We see at once that X = B^(-1) A satisfies A = BX, for
    B(B^(-1) A) = (B B^(-1)) A = IA = A.
Moreover, if A = BR, then B^(-1) A = B^(-1)(BR) = (B^(-1) B)R = R, so that X = B^(-1) A is the only matrix that satisfies A = BX. Similarly, Y = A B^(-1) satisfies, and is the only matrix that satisfies, A = YB.

COROLLARY. The theorem remains true in certain circumstances when A and B are not square matrices of the same order. If A is a matrix of m rows and n columns, and B is a non-singular square matrix of order m, then the equation A = BX has the unique solution X = B^(-1) A, which is a matrix of m rows and n columns; if A is a matrix of m rows and n columns, and B is a non-singular square matrix of order n, then the equation A = YB has the unique solution Y = A B^(-1). For B^(-1) and A are conformable for the product B^(-1) A, and the first part of the corollary may be proved as we proved the theorem, I being the unit matrix of order m.
The second part of the corollary may be proved in the same way.

7.5. The corollary is particularly useful in writing down the solution of linear equations. When A = [a_ik] is a non-singular matrix of order n, the set of equations
    sum over k from 1 to n of a_ik x_k = b_i   (i = 1,..., n)     (3)
may be written in the matrix form b = Ax, where b, x stand for matrices with a single column; the set of equations
    sum over i from 1 to n of y_i a_ik = b_k   (k = 1,..., n)     (4)
may be written in the matrix form b' = y'A, where b', y' stand for matrices with a single row. The solutions of the sets of equations are
    x = A^(-1) b,   y' = b' A^(-1).
An interesting application is given in Example 11, p. 92.

EXAMPLES VIII

1. When A = [x1 x2; x3 x4] and B = [y1 y2; y3 y4], form the products AB, BA, A'B', B'A', neither A nor B being symmetrical, and verify the result enunciated in Theorem 23.

2. When
    A = [a h g]
        [h b f]
        [g f c]
and X = ax + hy + gz, Y = hx + by + fz, Z = gx + fy + cz, solve for x, y, z in terms of X, Y, Z.

3. When A, B are the matrices of Example 1, form the matrices A^(-1), B^(-1) and verify by direct evaluations that (AB)^(-1) = B^(-1) A^(-1).

4. Prove that the product of a matrix by its transpose is a symmetrical matrix.
HINT. Use Theorem 23 and consider the transpose of the matrix AA'.

5. Prove Theorem 27, that (A')^(-1) = (A^(-1))', by considering Theorem 23 in the particular case B = A^(-1).

6. The product of the matrix [a_ik] by its adjoint is equal to |a_ik| I.
HINT. Use 2.
7. Solve equations (4), p. 91, in the two forms y' = b' A^(-1) and y = (A')^(-1) b, and then, using Theorem 23, prove that the two solutions agree.

8. A and B are square matrices of the same order and A is symmetrical. Prove that B'AB is symmetrical.

9. If the elements a_ik of A are real and if AA' = 0, prove that A = 0.

10. The elements a_ik of A are complex; the elements of the matrix here written as Abar are the conjugate complexes of the a_ik; and A(Abar)' = 0. Prove that A = 0.

Miscellaneous exercises

11. A is a non-singular matrix of order n; x and y are single-column matrices with n rows; l' and m' are single-row matrices with n columns. If y = Ax and l'x = m'y, prove that m' = l' A^(-1). Interpret this result when a linear transformation
    y_i = sum over k of a_ik x_k   (i = 1, 2, 3)
changes (x1, x2, x3) into (y1, y2, y3), l1 x1 + l2 x2 + l3 x3 = m1 y1 + m2 y2 + m3 y3, and the x and y are regarded as homogeneous point-coordinates in a plane.

12. If f(A) and g(A) are two polynomials in the matrix A and if g(A) is non-singular, prove that f(A) and {g(A)}^(-1) are commutative.

13. Let
    L = [p 1]     M = [q 1]     N = [r 1]
        [0 p],        [0 q],        [0 r],
where p, q, r are non-zero numbers, and let A be the matrix of order 6 written, in the submatrix notation of Example 15, p. 81, as
    A = [L 0 0]
        [0 M 0]
        [0 0 N].
Prove that if f(A) = p0 A^n + p1 A^(n-1) + ... + pn, where p0,..., pn are numbers, then f(A) may be written as the matrix
    [f(L)  0    0  ]
    [0    f(M)  0  ]
    [0    0    f(N)],
and that, when f(A) is non-singular, {f(A)}^(-1) may be written in the same matrix form, but with {f(L)}^(-1), {f(M)}^(-1), {f(N)}^(-1) instead of f(L), f(M), f(N).
HINT. In view of Theorem 25, the last part is proved by showing that the product of f(A) and the last matrix is I. [Chapter X, 11, contains a proof of a more general result.]

14. Prove the extension of Example 13 in which f is replaced by f/g, where f(A) and g(A) are polynomials in A and g(A) is non-singular.
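Several of the examples above can be verified mechanically. The following sketch (ours, not the book's) checks the laws of reversal for transposes and reciprocals on second-order matrices, and solves a set of equations in the form x = A^(-1) b, as in 7.5; the matrices chosen are ours.

```python
# A numerical sketch (ours) of Examples 1, 3 and of 7.5, for 2x2 matrices.
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    assert d != 0, "singular"
    f = lambda v: Fraction(v, d)
    return [[f(M[1][1]), f(-M[0][1])], [f(-M[1][0]), f(M[0][0])]]

A = [[2, 1], [5, 3]]
B = [[1, 2], [3, 4]]
# law of reversal for transposes (Theorem 23) and reciprocals (Theorem 26)
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))
# the set of equations Ax = b has the unique solution x = A^-1 b
b = [[4], [11]]
x = matmul(inv2(A), b)
print(x)                      # [[Fraction(1, 1)], [Fraction(2, 1)]]
assert matmul(A, x) == [[4], [11]]
```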
CHAPTER VIII
THE RANK OF A MATRIX

[Section 6 is of less general interest than the remainder of the chapter. It may well be omitted on a first reading.]

1. Definition of rank.
1.1. The minors of a matrix. Suppose we are given any matrix A, whether square or not. From this matrix delete all elements save those in a certain r rows and r columns. When r > 1, the elements that remain form a square matrix of order r and the determinant of this matrix is called a minor of A of order r. Each individual element of A is, when considered as a determinant of order 1, a minor of A of order 1.

1.2. For example, in the matrix
    [a b c]
    [d e f]
    [g h i]
the determinant
    |a b|
    |g h|,
obtained by deleting the second row and the third column, is a minor of order 2; the only minor of order 3 is the determinant that contains all the elements of the matrix. Now |a b; g h| = ah - bg, and this is zero if a = b and g = h; but the minor |a c; g i|, obtained by deleting the second row and the second column, is not necessarily zero when a = b and g = h.

1.3. Rank. Unless every single element of a matrix A is zero, there will be at least one set of minors of the same order which do not all vanish. Moreover, if every minor of order r is zero, then every minor of order r+1 must also be zero; for, when r is at least 1, any minor of order r+1 can be expanded by its first row (Theorem 1) and so can be written as a sum of multiples of minors of order r. The converse is not true: in the example given in 1.2, all the minors of a given order may vanish while not all minors of the order below vanish.

Hence, for any given matrix with m rows and n columns, other than the matrix with every element zero, there is a definite positive integer p such that EITHER p is less than both m and n, not all minors of order p vanish, but all minors of order p+1 vanish; OR p is equal to min(m, n),† the smaller of m and n, not all minors of order p vanish, and no minor of order p+1 can be formed from the matrix. This number p is the RANK of the matrix. But the definition can be made much more compact and we shall take as our working definition the following:

DEFINITION 17. A matrix has rank r when r is the largest integer for which the statement 'not ALL minors of order r are zero' is valid.

For a non-singular square matrix of order n, the rank is n; for a singular square matrix of order n, the rank is less than n. It is sometimes convenient to consider the null matrix, all of whose elements are zero, as being of zero rank.

† When m = n, min(m, n) = m = n.

2. Linear dependence.
2.1. In the remainder of this chapter we shall be concerned with linear dependence and its contrary, linear independence; in particular, we shall be concerned with the possibility of expressing one row of a matrix as a sum of multiples of certain other rows. We first make precise the meanings we shall attach to these terms.

Let F be any field‡ of numbers. Let a_1,..., a_m be 'numbers', in the sense of Chapter VI, each having n components. Let the components be
    a_1 = (a_11,..., a_1n),  ...,  a_m = (a_m1,..., a_mn).
Then a_1,..., a_m are said to be LINEARLY DEPENDENT with respect to F if there is an equation
    l_1 a_1 + ... + l_m a_m = 0,     (1)
wherein all the l's belong to F and not all of them are zero. The equation (1) implies the n equations
    l_1 a_1r + ... + l_m a_mr = 0   (r = 1,..., n).
The contrary of linear dependence with respect to F is linear independence with respect to F.

‡ Compare Preliminary Note.

The 'number' a_1 is said to be a SUM OF MULTIPLES of the
'numbers' a_2,..., a_m with respect to F if there is an equation of the form
    a_1 = l_2 a_2 + ... + l_m a_m,     (2)
wherein all the l's belong to F. The equation (2) implies the n equations
    a_1r = l_2 a_2r + ... + l_m a_mr   (r = 1,..., n).

2.2. As on previous occasions, we shall allow ourselves a wide interpretation of what we shall call a 'number'. Unless the contrary is stated, we shall take the field F to be the field of all numbers, real or complex, and we shall use the phrases 'linearly dependent', 'linearly independent', 'is a sum of multiples' to imply that each property holds with respect to that field. The l's in an equation such as (2) will not necessarily be integers but may be any numbers, real or complex. Moreover, we consider each row of a matrix as a 'number'; for example, the statement 'the "number" a is a sum of multiples of the "numbers" b and c' will imply an equation of the form a = l_1 b + l_2 c, where l_1 and l_2 are numbers, real or complex.

3. Rank and linear dependence.
3.1. THEOREM 31. If A is a matrix of rank r, where r is at least 1, and A has more than r rows, we can select r rows of A and express every other row as the sum of multiples of these r rows.

Proof. Let A have m rows and n columns. Then there are two possibilities to be considered: either (i) r = n and m > n, or (ii) r < n and r < or = m.

(i) Let r = n and m > n. There is at least one non-zero determinant of order n that can be formed from A. Taking letters for columns and suffixes for rows, let us label the elements of A so that one such non-zero determinant is denoted by

    D = |a_1 b_1 ... k_1|
        |................|  ,  D not zero,
        |a_n b_n ... k_n|

and let n+s be the suffix of any other row. Then (Chapter II) the n equations
    a_1 l_1 + a_2 l_2 + ... + a_n l_n = a_(n+s),
    b_1 l_1 + b_2 l_2 + ... + b_n l_n = b_(n+s),
    ..........
    k_1 l_1 + k_2 l_2 + ... + k_n l_n = k_(n+s)
have a unique solution in l_1, l_2,..., l_n. Hence the row with suffix n+s can be expressed as a sum of multiples of the n rows of A that occur in D: using p_t to denote the row whose letters have suffix t, we may write these equations as
    p_(n+s) = l_1 p_1 + ... + l_n p_n.
This proves the theorem when r = n and m > n. Notice that the argument breaks down if D = 0.

(ii) Let r < n and r < or = m. There is at least one non-zero minor of A of order r. As in (i), take letters for columns and suffixes for rows, and label the elements of A so that one non-zero minor of order r is denoted by

    M = |a_1 b_1 ... k_1|
        |................|  ,  M not zero,
        |a_r b_r ... k_r|

and let the remaining rows of A have suffixes r+1, r+2,..., m.

Consider now the determinant

    D = |a_1     ...  k_1     x_1    |
        |..............................|
        |a_r     ...  k_r     x_r    |
        |a_(r+s) ...  k_(r+s) x_(r+s)|,

where s is any integer from 1 to m-r and x is the letter of ANY column of A. If x is one of the letters a,..., k, then D has two identical columns and so is zero. If x is not one of the letters a,..., k, then D is a minor of A of order r+1, and, since the rank of A is r, D is again zero.

Hence D is zero when x is the letter of ANY column of A, and if we expand D by its last column, we obtain the equation
    x_(r+s) M + A_1 x_1 + A_2 x_2 + ... + A_r x_r = 0,     (2)
where M, A_1,..., A_r are all independent of x. Hence, since M is not zero, (2) expresses the row with suffix r+s as a sum of multiples of the r rows in M; symbolically,
    p_(r+s) = -(A_1/M) p_1 - ... - (A_r/M) p_r.
This proves the theorem when r is less than n and not greater than m. Notice that the argument breaks down if M = 0.

3.2. THEOREM 32. If A is a matrix of rank r, it is impossible to select q rows of A, where q < r, and to express every other row as a sum of multiples of these q rows.

Proof. Suppose that, contrary to the theorem, we can select q rows of A, where q < r, and express every other row as a sum of multiples of them. Using suffixes for rows and letters for columns, let us label the elements of A so that the selected q rows are denoted by p_1,..., p_q. Then, for k = 1, 2,..., m, where m is the number of rows in A, there are constants c_tk such that
    p_k = c_1k p_1 + ... + c_qk p_q;     (3)
for if k > q, (3) expresses our hypothesis in symbols, and if k < or = q, (3) is satisfied when c_tk = d_tk, the Kronecker delta.

Now consider ANY (arbitrary) minor of A of order r. Let the letters of its columns be a,..., e and the suffixes of its rows k_1,..., k_r. Then, by (3), this minor is the product by rows of two determinants, each of order r, of which the first is

    |c_1k_1 ... c_qk_1  0 ... 0|
    |...........................|
    |c_1k_r ... c_qk_r  0 ... 0|,

a determinant which, since q < r, has at least one column of zeros and is therefore zero.
Accordingly, if our supposition were true, every minor of A of order r would be zero. But this is not so, for the rank of A is r. Hence our supposition is not true and the theorem is established.

4. Non-homogeneous linear equations.
4.1. We now consider a set of m linear equations in n unknowns,
    sum over t from 1 to n of a_it x_t = b_i   (i = 1, 2,..., m).     (1)
Such a set of equations may either be consistent, that is, at least one set of values of x_1,..., x_n may be found to satisfy all the equations; or they may be inconsistent, that is, no set of values of x_1,..., x_n can be found to satisfy all the equations.

To determine whether the equations are consistent or inconsistent we consider the matrices

    A = [a_11 ... a_1n]        B = [a_11 ... a_1n b_1]
        [..............],          [..................]
        [a_m1 ... a_mn]            [a_m1 ... a_mn b_m].

We call B THE AUGMENTED MATRIX of A. Let the ranks of A, B be r, r' respectively. Then r < or = r', since every minor of A is a minor of B: if A has a non-zero minor of order r, then so has B, but the converse is not necessarily true. We shall prove that the equations are consistent when r = r' and are inconsistent when r < r'.

4.2. Let r < r'. We can select r rows of A and express every other row of A as a sum of multiples of these r rows (Theorem 31). Let us number the equations (1) so that the selected r rows become the rows with suffixes 1,..., r. Then we have equations of the form
    a_(r+s),i = c_1s a_1i + ... + c_rs a_ri   (i = 1,..., n),     (2)
wherein the c's are independent of i.

Let us now make the hypothesis that the equations (1) are consistent. Then, on multiplying (2) by x_i and summing from i = 1 to i = n, we get
    b_(r+s) = c_1s b_1 + ... + c_rs b_r.     (3)
The equations (2), (3) together imply that we can express any row of B as a sum of multiples of the first r rows, which is contrary to the fact that the rank of B is r' > r (Theorem 32). Hence, if r < r', the equations (1) cannot be consistent.

4.3. Let r' = r. As before, number the equations (1) so that the r rows of A selected in accordance with Theorem 31 are those with suffixes 1,..., r; then the equations (2) hold and, since the rank of B is also r, so do the corresponding equations (3) for the b's. It follows that every set of values of x that satisfies the r equations
    sum over t from 1 to n of a_it x_t = b_i   (i = 1,..., r)     (4)
satisfies all the equations of (1).

At least one minor of a, the matrix of the coefficients in (4), of order r must have a value distinct from zero. For, since the kth row of A can be written as c_1k p_1 + ... + c_rk p_r, every minor of A of order r has a minor of a of order r as a factor (put q = r in the work of 3.2); and at least one minor of A of order r is not zero. Let the suffixes of the variables x be numbered so that the first r columns of a yield a non-zero minor of order r. Then, when n > r, we may give x_(r+1),..., x_n arbitrary values, and the equations (4) become
    sum over t from 1 to r of a_it x_t = b_i - sum over t from r+1 to n of a_it x_t   (i = 1,..., r),     (5)
wherein the determinant of the coefficients on the left-hand side is not zero. We may therefore solve equations (5) uniquely for x_1,..., x_r. Hence, if r = r' and
4.5. If r = n, the equations (5) are consistent and have a unique solution. This unique solution may be written symbolically (Chap. VII) as

  x = A_1^{-1} b,

where A_1 is the matrix of the coefficients on the left-hand side, x is the single-column matrix with elements x_1,..., x_n, and b is the single-column matrix with elements b_1,..., b_n.‡ If r < n, certain sets of n−r of the variables can be given arbitrary values.†

5. Homogeneous linear equations

The equations

  Σ (t=1 to n) a_it x_t = 0  (i = 1,..., m),  (6)

which form a set of m homogeneous equations in n unknowns x_1,..., x_n, may be considered as an example of equations (1) in which all the b_i are zero. The results of 4 are immediately applicable. Since all the b_i are zero, the rank of the augmented matrix B is equal to the rank of A, and so equations (6) are always consistent. If r, the rank of A, is equal to n, the only solution of (6) is given by x_1 = x_2 = ... = x_n = 0, that is, by x = 0. When r < n we can, as when we obtained equations (5) of 4.4, reduce the solution of (6) to the solution of

  Σ (t=1 to r) a_it x_t = − Σ (t=r+1 to n) a_it x_t  (i = 1,..., r),  (7)

wherein the determinant of the coefficients on the left-hand side is not zero.

† Not all sets of n−r of the variables: the criterion is that the coefficients of the remaining r variables should provide a non-zero determinant of order r.
‡ The notation of (7) is not necessarily that of (6): the labelling of the elements of A and the numbering of the variables x has followed the procedure laid down in 4.4.
The general solution of (6) is then obtained by assigning arbitrary values to x_{r+1},..., x_n; once the values of x_{r+1},..., x_n have been assigned, the equations (7) determine x_1,..., x_r uniquely. In particular, if there are more variables than equations, n > m, then r < n and the equations have a solution other than x_1 = x_2 = ... = x_n = 0.

6. Fundamental sets of solutions

6.1. When r, the rank of A, is less than n, we obtain n−r distinct solutions of equations (6) by solving equations (7) for the sets of values

  x_{r+1} = 1, x_{r+2} = 0, ..., x_n = 0;
  x_{r+1} = 0, x_{r+2} = 1, ..., x_n = 0;
  . . . . . . . . . .
  x_{r+1} = 0, x_{r+2} = 0, ..., x_n = 1.  (8)

We shall use X to denote the single-column matrix whose elements are x_1,..., x_n; the particular solutions of (6) obtained from the sets of values (8) we denote by

  X_k  (k = 1,..., n−r).  (9)

As we shall prove in 6.2, (a) every solution of (6) is a sum of multiples of the particular solutions X_1,..., X_{n−r}; (b) no one X_k is a sum of multiples of the other X_k. A set of solutions of (6) that has the properties (a) and (b) is called a FUNDAMENTAL SET OF SOLUTIONS. Moreover, the general solution of (6) is furnished by the formula

  X = Σ (k=1 to n−r) a_k X_k.  (10)

There is more than one fundamental set of solutions. If we replace the numbers on the right of the equality signs in (8) by the elements of any non-singular matrix of order n−r, we are led to a fundamental set of solutions of (6); if we replace these numbers by the elements of a singular matrix of order n−r, we are led to n−r solutions of (6) that do not form a fundamental set of solutions. This we shall prove in 6.4.
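A fundamental set for a concrete system can be exhibited numerically. The sketch below assumes NumPy; the orthonormal basis extracted from the singular value decomposition plays the role of the n−r solutions X_k (it is a fundamental set, though not the particular one built from the table (8)).

```python
import numpy as np

# A fundamental set of solutions of A x = 0 contains n - r members.
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],   # twice the first row
              [1.0, 0.0, 1.0, 0.0]])
n = A.shape[1]
r = np.linalg.matrix_rank(A)          # r = 2, so n - r = 2

_, s, vt = np.linalg.svd(A)
basis = vt[r:].T                      # columns span {x : A x = 0}

print(basis.shape[1] == n - r)        # True: n - r independent solutions
print(np.allclose(A @ basis, 0.0))    # True: each column solves A x = 0
```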
6.2. Proof† of the statements made in 6.1. As we have seen, every solution of such a set of m homogeneous linear equations in n unknowns, when the matrix of the coefficients is of rank r and r < n, is a solution of a set of equations that may be written in the form

  Σ (t=1 to r) a_it x_t = − Σ (t=r+1 to n) a_it x_t  (i = 1,..., r),  (7)

wherein the determinant of the coefficients on the left-hand side is not zero. We shall, therefore, no longer consider the original equations but shall consider only (7).

As in 6.1, we use X to denote x_1,..., x_n considered as the row elements of a matrix of one column, and X_k to denote the particular solutions of equations (7) obtained from the sets of values (8). Further, we use I_k to denote the kth column of (8) considered as a one-column matrix; I_k has unity in its kth row and zero in the remaining n−r−1 rows. Let A_1 denote the matrix of the coefficients on the left-hand side of equations (7), and let A_1^{-1} denote the reciprocal of A_1, which exists since A_1 is non-singular. In one step of the proof we use X_{r+s} to denote x_{r+1},..., x_n considered as the elements of a one-column matrix.

The general solution of (7) is obtained by giving arbitrary values to x_{r+1},..., x_n and then solving the equations (uniquely, since A_1 is non-singular) for x_1,..., x_r. This solution may be symbolized, on using the device of sub-matrices, so as to express X linearly in terms of x_{r+1},..., x_n,

† Many readers will prefer to omit these proofs, at least on a first reading; 6 does not make easy reading for a beginner.
and so we have proved that every solution of (7) may be represented by the formula

  X = Σ (k=1 to n−r) a_k X_k,  (10)

where a_k is the value of x_{r+k} in the solution X.

6.3. A fundamental set contains n−r solutions.

6.31. We first prove that a set of less than n−r solutions cannot form a fundamental set. Let Y_1,..., Y_p be any p distinct solutions of (7), where p < n−r. Suppose, if possible, that the Y_j form a fundamental set. Then there are constants λ_jk such that

  X_k = Σ (j=1 to p) λ_jk Y_j  (k = 1,..., n−r).

By Theorem 31 applied to the matrix [λ_jk], whose rank cannot exceed p, we can express every X_k as a sum of multiples of p of them (at most). But, as we see from the table of values (8), this is impossible. Hence our supposition is not a true one, and the Y_j cannot form a fundamental set.

6.32. We next prove that in any set of more than n−r solutions one solution (at least) is a sum of multiples of the remainder. Let Y_1,..., Y_p be any p distinct solutions of (7), where p > n−r. Then, by (10), we can express every Y_j as a sum of multiples of X_1,..., X_{n−r}, and so, exactly as in 6.31, we can express every Y_j as a sum of multiples of n−r of them (at most); one Y_j (at least) is therefore a sum of multiples of the others.

6.33. It follows from the definition of a fundamental set, and from 6.31 and 6.32, that a fundamental set must contain n−r solutions.

6.4. Proof of statements made in 6.1 (continued).

6.41. If [c_ik] is a non-singular matrix of order n−r, let X'_i be the solution of (7) obtained by taking the values

  x_{r+1} = c_i1,  x_{r+2} = c_i2,  ...,  x_n = c_{i,n−r}.
Then, by (10),

  X'_i = Σ (k=1 to n−r) c_ik X_k  (i = 1,..., n−r).  (11)

On solving these equations (Chap. VII), we get

  X_k = Σ (i=1 to n−r) γ_ki X'_i  (k = 1,..., n−r),

where [γ_ik] is the reciprocal of [c_ik]. Hence, by (10), every solution of (7) may be written in the form

  X = Σ (k=1 to n−r) a_k X_k = Σ (i=1 to n−r) μ_i X'_i,

a sum of multiples of the X'_i. Moreover, it is not possible to express any one X'_i as a sum of multiples of the remainder; for, by (11), this would imply that one X_k could be expressed as a sum of multiples of the others, and this, as we have seen, is impossible. Hence the X'_i form a fundamental set.

6.42. If [c_ik] is a singular matrix of order n−r, of rank p, then p < n−r and, with a suitable labelling of its elements, there are constants λ_ts such that

  c_{p+s, k} = Σ (t=1 to p) λ_ts c_tk  (s = 1,..., n−r−p).

With (11), these would give equations of the form

  X'_{p+s} = Σ (t=1 to p) λ_ts X'_t.
Hence there is an equation expressing one X'_i as a sum of multiples of the others, and so the X'_i cannot form a fundamental set of solutions.

6.5. A set of solutions such that no one can be expressed as a sum of multiples of the others is said to be LINEARLY INDEPENDENT. By what we have proved in 6.41 and 6.42, a set of solutions that is derived from a non-singular matrix [c_ik] is linearly independent, and a set of solutions that is derived from a singular matrix [c_ik] is not linearly independent.

Accordingly, if a set of n−r solutions is linearly independent, it must be derived from a non-singular matrix [c_ik] and so, by 6.41, such a set of solutions is a fundamental set. Hence any linearly independent set of n−r solutions is a fundamental set. Moreover, any set of n−r solutions that is not linearly independent is not a fundamental set, for it must be derived from a singular matrix [c_ik].
EXAMPLES IX

Devices to reduce the calculations when finding the rank of a matrix

1. Prove that the rank of a matrix is unaltered by any of the following changes in the elements of the matrix: (i) the interchange of two rows (columns); (ii) the multiplication of a row (column) by a non-zero constant; (iii) the addition of any two rows.
HINT. Finding the rank depends upon showing which minors of the matrix are non-zero determinants. Or, more compactly, use Examples VII, 18-20, and Theorem 34 of Chapter IX.

2. When every minor of [a_ik] of order r+1 that contains the first r rows and r columns is zero, prove that there are constants λ_ik such that the kth row of [a_ik] is given by

  ρ_k = Σ (i=1 to r) λ_ik ρ_i.

Prove (by the method of 3.2, or otherwise) that every minor of [a_ik] of order r+1 is then zero.
3. By using Example 2, prove that if a minor of order r is non-zero and if every minor obtained by bordering it with the elements from an arbitrary column and an arbitrary row is zero, then the rank of the matrix is r.

4. Rule for symmetric square matrices. If a principal minor of order r is non-zero and all principal minors of orders r+1 and r+2 are zero, then the rank of the matrix is r. (See p. 108.)
Prove the rule by establishing the following results, wherein C denotes the principal minor of order r+2 obtained by bordering the given non-zero principal minor of order r with the rows and columns of suffixes s and t:
(i) If A_ik denotes the co-factor of a_ik in C, then A_ss A_tt − A_st A_ts = M C, where M is the complementary minor of [a_ss a_st; a_ts a_tt] in C (Theorem 18).
(ii) If every such C = 0 and every corresponding A_ss = 0, then every A_st = 0 and (Example 2) every minor of the complete matrix [a_ik] of order r+1 is zero.
(iii) If all principal minors of orders r+1 and r+2 are zero, then the rank is r or less. (May be proved by induction.)

Numerical examples

5. Find the ranks of the matrices (i)-(iv).
  Ans. (i) 3, (ii) 2, (iii) 4, (iv) 2.
Geometrical applications

6. The homogeneous coordinates of a point in a plane are (x, y, z). Prove that the three points (x_r, y_r, z_r), r = 1, 2, 3, are collinear or non-collinear according as the rank of the matrix

  [x_1 y_1 z_1; x_2 y_2 z_2; x_3 y_3 z_3]

is 2 or 3.
7. The coordinates of an arbitrary point can be written (with the notation of Example 6) in the form

  (λ_1 x_1 + λ_2 x_2 + λ_3 x_3, λ_1 y_1 + λ_2 y_2 + λ_3 y_3, λ_1 z_1 + λ_2 z_2 + λ_3 z_3),

provided that the points 1, 2, 3 are not collinear.
HINT. Consider the matrix of 4 rows and 3 columns and use Theorem 31.

8. Give the analogues of Examples 6 and 7 for the homogeneous coordinates (x, y, z, w) of a point in three-dimensional space.

9. The configuration of three planes in space, whose cartesian equations are

  a_i1 x + a_i2 y + a_i3 z = b_i  (i = 1, 2, 3),

is determined by the ranks of the matrices A = [a_ik] and the augmented matrix B (4.1). Check the following results.
(i) When |a_ik| ≠ 0, the rank of A is 3 and the three planes meet in a finite point.
(ii) When the rank of A is 2 and the rank of B is 3, the planes have no finite point of intersection. The planes are all parallel only if every A_ik is zero, when the rank of A is 1. When the rank of A is 2, either (a) no two planes are parallel, or (b) two are parallel and the third is not. Since |a_ik| = 0 (Theorem 12), if the planes 2 and 3 intersect, their line of intersection has direction cosines proportional to A_11, A_12, A_13. Hence in (a) the planes meet in three parallel lines and in (b) the third plane meets the two parallel planes in a pair of parallel lines. The configuration is three planes forming a triangular prism; as a special case, two faces of the prism may be parallel.
(iii) When the rank of A is 2 and the rank of B is 2, one equation is a sum of multiples of the other two. The planes given by these two equations are not parallel (or the rank of A would be 1), and so the configuration is three planes having a common line of intersection.
(iv) When the rank of A is 1 and the rank of B is 2, the three planes are parallel.
(v) When the rank of A is 1 and the rank of B is 1, all three equations represent one and the same plane.
10. Prove that the rectangular cartesian coordinates (X_1, X_2, X_3) of the orthogonal projection of (x_1, x_2, x_3) on a line through the origin having direction cosines (l_1, l_2, l_3) are given by X = Lx, where

  L = [ l_1²    l_1 l_2  l_1 l_3
        l_2 l_1  l_2²    l_2 l_3
        l_3 l_1  l_3 l_2  l_3²  ]

and X, x are single-column matrices.

11. If L denotes the matrix in Example 10, and M denotes a corresponding matrix for a line through the origin with direction cosines (m_1, m_2, m_3), show that L² = L and M² = M, and that the point (L+M)x is the projection of x on the plane of the lines l and m if and only if l and m are perpendicular.

NOTE. A 'principal minor' is one obtained by deleting corresponding rows and columns.
CHAPTER IX

DETERMINANTS ASSOCIATED WITH MATRICES

1. The rank of a product of matrices

1.1. Suppose that A is a matrix of n columns and B a matrix of n rows, so that the product AB can be formed. When A has n_1 rows and B has n_2 columns, the product AB has n_1 rows and n_2 columns. If a_ik, b_ik are typical elements of A, B, then

  c_ik = Σ (j=1 to n) a_ij b_jk

is a typical element of AB. Any t-rowed minor of AB may, with a suitable labelling of the elements, be denoted by the determinant |c_ik| (i, k = 1,..., t); this assumes t ≤ n_1, t ≤ n_2. Such a minor is the product by rows of the two arrays

  a_11 . . a_1n        b_11 . . b_1t
  . . . . . .          . . . . . .
  a_t1 . . a_tn        b_n1 . . b_nt

Hence (Theorem 15), when t > n the minor is zero; when t = n it is the product of a t-rowed minor of A by a t-rowed minor of B; and when t < n it is the sum of the products of corresponding t-rowed determinants that can be formed from the two arrays (Theorem 14). Hence every minor of AB of order greater than n is zero, and every minor of order t ≤ n is the sum of products of t-rowed minors of A by t-rowed minors of B.
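Because every minor of AB is built from minors of A and of B, the rank of a product cannot exceed the rank of either factor, while multiplication by a non-singular matrix preserves rank. A numerical check, assuming NumPy (the matrices are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])            # rank 2
B_singular = np.array([[1.0, 1.0],
                       [1.0, 1.0]])   # rank 1
B_nonsing = np.array([[1.0, 1.0],
                      [0.0, 1.0]])    # rank 2, non-singular

rank = np.linalg.matrix_rank
print(rank(A @ B_singular) <= min(rank(A), rank(B_singular)))  # True
print(rank(A @ B_nonsing) == rank(A))                          # True
```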
1.2. Accordingly, (i) every minor of AB with more than n rows is zero; (ii) every minor of AB of order t ≤ n is the product, or the sum of products, of minors of A and B of order t. If all the minors of A of order t, or if all the minors of B of order t, are zero, then so are all minors of AB of order t. Hence, if ρ is the rank of AB, then ρ cannot exceed the rank of A or the rank of B. That is,

THEOREM 33. The rank of a product AB cannot exceed the rank of either factor.

There is one type of multiplication which gives a theorem of a more precise character.

THEOREM 34. If A has m rows and n columns and B is a non-singular square matrix of order n, then A and AB have the same rank; and if C is a non-singular square matrix of order m, then A and CA have the same rank.

For, by what we have shown, ρ, the rank of AB, cannot exceed r, the rank of A. But A = AB.B^{-1}, and so r, which cannot exceed the ranks of the factors AB and B^{-1}, cannot exceed ρ. Hence ρ = r. The proof for CA is similar.

2. The characteristic equation; latent roots

2.1. Associated with every square matrix A of order n, with its n² elements a_ik, is the matrix A − λI, where I is the unit matrix of order n and λ is any number, real or complex. This matrix is

  [ a_11−λ  a_12   . .  a_1n
    a_21    a_22−λ . .  a_2n
    . . . . . .
    a_n1    a_n2   . .  a_nn−λ ].

The determinant of this matrix is of the form

  f(λ) = (−1)^n (λ^n + p_1 λ^{n−1} + ... + p_n),  (1)

where the p_r are polynomials in the n² elements a_ik. The roots, real or complex, of the equation f(λ) = 0, that is, of

  |A − λI| = 0,  (2)

are called the LATENT ROOTS of the matrix A, and the equation itself is called the CHARACTERISTIC EQUATION of A.
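The latent roots of 2.1 are what most numerical libraries call eigenvalues; a sketch assuming NumPy, with an illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = np.sort(np.linalg.eigvals(A))   # latent roots of A
print(lam)                            # [1. 3.]

# the characteristic determinant |A - t I| vanishes at each latent root
char = lambda t: np.linalg.det(A - t * np.eye(2))
print(np.isclose(char(lam[0]), 0.0))  # True
print(np.isclose(char(lam[1]), 0.0))  # True
```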
2.2. THEOREM 35. Every square matrix satisfies its own characteristic equation; that is, if

  |A − λI| = (−1)^n (λ^n + p_1 λ^{n−1} + ... + p_n),

then

  A^n + p_1 A^{n−1} + ... + p_{n−1} A + p_n I = 0.

The adjoint matrix of A − λI, say B, is a matrix whose elements are polynomials in λ of degree n−1 or less, the coefficients of the various powers of λ being polynomials in the a_ik. Such a matrix can be written as

  B = λ^{n−1} B_0 + λ^{n−2} B_1 + ... + B_{n−1},  (3)

where the B_r are matrices whose elements are polynomials in the a_ik. Now, by Theorem 24, the product of a matrix by its adjoint is the determinant of the matrix × the unit matrix. Hence, on using the notation of (1),

  (A − λI)B = |A − λI| × I = (−1)^n (λ^n + p_1 λ^{n−1} + ... + p_n) I.

This equation is true for all values of λ and we may therefore equate coefficients† of the powers of λ. With B given by (3), this gives us

† We may regard the matrix equation as the conspectus of n² equations in the separate elements, so that the equating of coefficients is really the ordinary algebraical procedure of equating coefficients, done not once but n² times, once for each element.
. let g^A) and g 2 (A) be two polynomials in the matrix A.. A n are the latent roots of the matrix A.. (AXilKAXtl^AXtl) = It does not follow that A+ Pl A*+. 10. Then the matrix that is . 3. any one of the matrices Xr I the zero matrix.. // g(t) is a polynomial in t. If A!. the product where X v A 2 .2. Pn ..+A.. so that \AXI\ = (1)(AA 1 )(AA 8 ).. then the determinant of the matrix g(A) is equal to THEOREM 36.. ).112 DETERMINANTS ASSOCIATED WITH MATRICES Hence we have = A n I+A n .. and A is a square matrix. p.(XA .. I+pn I = (l){ABn . l 1 1 1 ll 1 1 } 2.. g 2 (A) being nonsingular.. Then. Further..) I it follows that (Example 13.+ +A(B +AB )+AB = 0. A theorem on determinants and latent roots 3. A 2 . An are the latent roots of A.(ttm = c(AtiI).p I+. by Chapter VI. Let so that g(t) g(A) = c(ttj..... r=l The theorem also holds when g(A) is is a rational function ffi(A)/g 2 (A) provided that g 2 (A) non singular. 3..1. +A^(Bn _ +ABn _ )+.3.(Atm I). 81).+p A n I = is 0...
3. A theorem on determinants and latent roots

3.1. THEOREM 36. If g(t) is a polynomial in t, and A is a square matrix of order n whose latent roots are λ_1, λ_2,..., λ_n, then the determinant of the matrix g(A) is equal to

  Π (r=1 to n) g(λ_r).

The theorem also holds when g(A) is a rational function g_1(A)/g_2(A), provided that g_2(A) is non-singular.

3.2. Let

  g(t) = c(t−t_1)(t−t_2)...(t−t_m),

so that

  g(A) = c(A−t_1 I)(A−t_2 I)...(A−t_m I).

Then, by Chapter VI,

  |g(A)| = c^n |A−t_1 I| |A−t_2 I| ... |A−t_m I|.

But, by the definition of the latent roots,

  |A−t_j I| = (−1)^n (t_j−λ_1)(t_j−λ_2)...(t_j−λ_n),

and hence

  |g(A)| = Π (r=1 to n) c(λ_r−t_1)(λ_r−t_2)...(λ_r−t_m) = Π (r=1 to n) g(λ_r).

3.3. When g(A) is a rational function of A of the form g_1(A)/g_2(A), where g_1 and g_2 are polynomials and g_2(A) is non-singular, we write g(A) = g_1(A) g_2^{-1}(A); on applying the theorem to g_1 and g_2 separately we again obtain |g(A)| = Π g(λ_r). From this follows an important result:

COROLLARY. If λ_1,..., λ_n are the latent roots of the matrix A, the latent roots of g(A) are given by g(λ_1),..., g(λ_n), where g is a polynomial or a rational function of the form g_1/g_2 with g_2(A) non-singular.
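Theorem 36 and its corollary can both be checked numerically for a particular matrix and polynomial; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # latent roots 1 and 3
g = lambda t: t**2 + t + 1.0        # an arbitrary polynomial

gA = A @ A + A + np.eye(2)          # the matrix g(A)
lam = np.linalg.eigvals(A)

# |g(A)| equals the product of g over the latent roots of A
print(np.isclose(np.linalg.det(gA), np.prod(g(lam))))               # True
# corollary: the latent roots of g(A) are g(lambda_r)
print(np.allclose(np.sort(np.linalg.eigvals(gA)), np.sort(g(lam)))) # True
```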
4. Equivalent matrices

4.1. Elementary transformations of a matrix; reduction to standard form.

DEFINITION 18. The ELEMENTARY TRANSFORMATIONS of a matrix are (i) the interchange of two rows or columns; (ii) the multiplication of each element of a row (or column) by a constant other than zero; (iii) the addition to the elements of one row (or column) of a constant multiple of the elements of another row (or column).

These transformations, as we shall prove, leave the rank of the matrix unaltered. Let A be a matrix, in general rectangular, of rank r. Then, as we shall prove, it can be changed by elementary transformations into a matrix of the form

  [ I  0
    0  0 ],

where I denotes the unit matrix of order r and the zeros denote null matrices. The argument that follows is based on Chapter VIII.

In the first place, by elementary transformations of type (i) we can replace A by a matrix such that the minor formed by the elements common to its first r rows and r columns is not equal to zero. Next, we can express any other row as a sum of multiples of these first r rows; subtraction of these multiples of the rows, by transformations of type (iii), from the row in question will therefore give a row of zeros. Proceeding step by step, and working with columns as before we worked with rows, the matrix before us becomes

  [ P  0
    0  0 ],

where P denotes a square matrix of r rows and columns such that |P| ≠ 0, and the zeros denote null matrices. P can be changed successively, again by elementary transformations of types (i) and (iii), to a matrix having zeros in all places other than the principal diagonal and non-zeros in that diagonal.† A final series of elementary transformations, of type (ii), presents us with I_r, the unit matrix of order r.

4.2. As Examples VII, 18-20, show, the transformations envisaged in 4.1 can be performed by pre- and post-multiplication of A by non-singular matrices, and all these transformations leave the rank unaltered. Hence we have proved that

† If a diagonal element were zero the rank of P would be less than r.
when A is a matrix of rank r there are non-singular matrices B and C such that

  BAC = I'_r = [ I_r  0
                 0    0 ],

where I'_r denotes the unit matrix of order r bordered by null matrices.

4.3. DEFINITION 19. Two matrices are EQUIVALENT if it is possible to pass from one to the other by a chain of elementary transformations.

As we saw in 4.2, when two matrices are equivalent each can be obtained from the other through pre- and post-multiplication by non-singular matrices; conversely, as we shall prove, when each of two matrices can be so obtained from the other, the two are equivalent.

If A_1 and A_2 are equivalent, they have the same rank, r say (Theorem 34), and there are non-singular matrices B_1, B_2, C_1, C_2 such that

  B_1 A_1 C_1 = I'_r = B_2 A_2 C_2,

whence

  A_1 = B_1^{-1} B_2 A_2 C_2 C_1^{-1} = L A_2 M, say,

where L = B_1^{-1} B_2 and M = C_2 C_1^{-1} are non-singular. Conversely, if A_2 is of rank r, so is L A_2 M when L and M are non-singular. We can pass from A_2 to I'_r by elementary transformations and so, by using the inverse operations, we can pass from I'_r to A_1; hence we can pass from A_2 to A_1, or from A_1 to A_2, by elementary transformations.

4.4. The detailed study of equivalent matrices is of fundamental importance in the more advanced theory.† Here we have done no more than outline some of the immediate consequences of the definition.

† The reader who intends to pursue the subject seriously should consult H. W. Turnbull and A. C. Aitken, An Introduction to the Theory of Canonical Matrices (London and Glasgow, 1932).
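The reduction of 4.1-4.2 can be made concrete for a small matrix. A sketch assuming NumPy, with B built from one row operation and C from one column operation (the matrix A is illustrative, of rank 1):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
B = np.array([[1.0, 0.0],
              [-2.0, 1.0]])   # row operation: row2 -= 2 * row1
C = np.array([[1.0, -2.0],
              [0.0, 1.0]])    # column operation: col2 -= 2 * col1

# B A C is the standard form: unit matrix of order r = 1, bordered by zeros
print(B @ A @ C)
```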
PART III

LINEAR AND QUADRATIC FORMS; INVARIANTS AND COVARIANTS
one in which both coefficients and variables are necessarily real numbers is said to be a 'REAL FORM'. quadratic form is identical with the when we define the a's by means of the equations For example. The symmetrical square matrix A == [a^].120 is ALGEBRAIC FORMS: LINEAR TRANSFORMATIONS LINEAR FORM in the variables x^ .jJX^i^ is (% = %) (3) said to be a case. however.3. x*+3xy+2y 2 is a quadratic form in the two variables x and y. A form in which the variables are necessarily real numbers is said to be a 'FORM IN REAL VARIABLES'. 2. which shows that the consequent theory is more compact when such a restriction is imposed. = 1. The restriction is.2. In each wish to stress the fact that the constants a^ QUADRATIC FORM belong to a certain field. It should be noticed that the used of only when a^ = a^. having a^ . g say. said to be a an expression SUCh as m n is said to be a BILINEAR FORM in the variables xi and y$\ an expression such as . a2 2 = 2 i2 =a =i 2i Matrices associated with linear and quadratic forms. we refer to the form as one with coefficients in g. more apparent than n real: for an expression such as wherein by is not always the same as 6^. having coefficients On 2. term 'quadratic form' is This restriction of usage is (3) dictated by experience. when we in the variables a^.
the quadratic form to to and the m linear forms it (4) Ax. let x denote a singlecolumn matrix with x v . and x'Ax is a matrix of one row B .. the transpose In the sequel it will frequently be necessary to bear in mind row and jth column. the third. As in 2. x n then x'.ALGEBRAIC FORMS: LINEAR TRANSFORMATIONS in its ith 121 is associated with the quadratic A' and so A because a^ symmetrical a^.. form (3): it is = . is better envisaged as the though product of the matrix A by the matrix x. x n Let A denote the matrix of the quadratic form 2 2 ars xr xsThen Ax is the singlecolumn matrix having } . xn as elements: the matrix product Ax.x). (2).. that the matrix associated with any quadratic form metrical matrix. is then a single column matrix of m rows having the m linear forms as its also The first two of these elements. A(x. Matrix expressions for quadratic and bilinear 9 : forms.More generally. the transpose of x is a singlerow elements matrix with elements x v . of A. a singlecolumn matrix having x l9 . 2..5. m we associate with the m linear forms the matrix [a^] of 2. and with the linear form (1) we associate the singlerow matrix [<%>> ^l. is a sym We also associate with the bilinear form (2) the matrix [a^] of rows and n columns. m rows and n columns.4. are merely shorthand notations.. i} ] Notation. We may then (2) (3) conveniently abbreviate the bilinear form to A(x.. as the element in 4702 its rth row. can be so regarded..y).4.. one of We denote the associated matrix [a of any or (4) by the single letter A. (3). which has as many rows as A and as many columns as x.
A transformation is said to be NONSINGULAR when its modulus is not zero. Linear transformations 3.. 2. DEFINITION The DISCRIMINANT of the quadratic form i=i j=i is 2 1>0^ (# = namely *) \a^\.. #!. 9 . l9 . We sometimes speak of (1) as a transformation from the xi to the X i or. w). x > X.. is represented by the singleelement x n and when x and y are singlecolumn matrices having as row elements.6. 3. when the is real.. the transformation a^.2. briefly.. The transformation (1) is most conveniently written as X = Ax. Ae determinant of its coefficients. The set of equations Xi=2* av x * (* = I. (I). Similarly.. 3.. the bilinear form (2) is 2/ lv ... is said to be a linear transformation connecting the variables Xj and the variables X^ When the a tj are constants in a given field 5 we say that the transformation has coefficients in JJ.1.122 ALGEBRAIC FORMS: LINEAR TRANSFORMATIONS number of rows of x') (the and one column (the number of columns of Ax). (2) in which X and x denote singlecolumn a matrix equation xn respectively. the single element being Thus the quadratic form matrix x'Ax.. a^ are real DEFINITION numbers we say that 3. . The determinant whose elements are the coefficients a^ of the transformation THE TRANSFORMATION. y n 2. (1) wherein the a^ are given constants and the Xj are variables.. and is said to be SINGULAR when its modulus is zero.. represented by the singleelement matrix x'Ay. Xn and x matrices with elements X v . is called the MODULUS OF DEFINITION 4.
the rows being suitably numbered. r. with a nonsingular transformation. Ax. it whether it (. or in the form be given in the form x > X whatsoever. and it is given by x When A is a singular matrix.. there no corresponding pair of 3. Y must satisfy Y\ for any pair X. For example. 3) has a reciprocal A" 1 (3) and A^X = A~*Ax = x.3.2). the pair X. immaterial > x. and only one corresponding x. is less than n.4~ )~ is 1 1 = A. and the / = .. Also and so.. Let x. there is one when Moreover. which expresses x directly in terms of X..ALGEBRAIC FORMS: LINEAR TRANSFORMATIONS 123 A denotes the matrix (a^). this relation. Use the summation 'numbers' (Chap. For in such a case. ki are constants.. The product two transformations. so the set lt . bjk z k ( 9 z> \ may be written as a matrix equation y = Bz. (1) gives relations (4) **=i*w*< wherein the (4) will l (k = r+l. 1). the rank of the matrix A.n). y. Thus. and we can select r rows of A and express every row as a sum of multiples of these rows (Theorem 31). Then the transformation may be written as a matrix equation x transformation Vi ~ Ay (3. VII. Ax denotes the product of the it When A (Chap. z be each with n components. given any A~ 1 X. y. and let all suffixes be understood to run from 1 to n.. in the linear transformation 3\ the relation 2X = 4 (2 6/ is ). convention. there are for which no corre X X X= = X sponding x can be defined. VI. and two matrices A and x. . Y that does not satisfy x. is a nonsingular matrix.. and to such sets as will satisfy (4): a set of give no corresponding set/ Xn limited X X that does not satisfy is of x.
or the expression in . Consider a given transformation *i = *=i n TL*> ik Xk (t = l. transpose of B. this transformation is and  = <7. ABCu and the modulus of the resulting transformation is x the product of the three moduli \A\..n) au (1) and a given quadratic form A(x. DEFINITION 5. Thus the result of two successive Bz may be written as transformations x Ay and y which is AB = = x Since == ABz.1. y Bz.. and so for finite number of transformations.. xn given by (1). Transformations of quadratic and bilinear forms 4. [It PRODUCT OF THE TWO TRANSFORMATIONS X = Ay.] The modulus of the transformation (3) is the determinant ^at * s the product of the two determinants \A\ and \B\.124 ALGEBRAIC FORMS: LINEAR TRANSFORMATIONS we substitute for y in terms of *i z.. \ a ijbjk\> > x = Ay.x).. y is the result of the successive transformations x = Ay. \B\. the result is a quadratic form C(X. 4). z = Cu. if we have three transformations in succession. we adopt a similar nomenclature for the transformations themselves. any y = Bz. Xv Xn (1) to the form (2).. . The transformation x = ABz is called the Bz... AB is the product of the two matrices. Similarly. y . we obtain a quadratic expression is result of applying This quadratic said to be a transformation of (2) by (1). If in (1) we obtain (3) = *ifok*k> a transformation whose matrix is the product (compare Chap.. VI. is applied to a // a transformation x A(x.x) = 2 2a i=l j=l n ij x i xj ( = aji) ( 2) When we substitute in (2) the values of x v .X) quadratic form whose matrix C is given by THEOREM 37. = BX C where B' is the = B'AB. 4.
X > To prove this we observe that. since a ti Moreover. (3) is not merely a quadratic expression in n but is. Corollary) A = Second proof a proof that depends mainly on the use of the summation convention. is the element in the kih row and ith (6) is Hence the matrix C of matrix product B'AB. and this implies x' = X'B' (Theorem Hence... C denotes the matrix B'AB. C' (Theorem (since 23. a proof that depends entirely on matrix theory. Then the quadratic form a a x i xi The transformation expresses (4) in x t =a = b ik X k ( aa jt) ( 4) (5) the form c kl Xk X t . c^ = c^. a quadratic form having c^ c^. (6) may calculate the values of the c's by carrying out the have actual substitutions for the x in terms of the X. = (3) where Jf 1? . 23). x = BX.5). = B'A'B = B'AB = C. c kl = \ =a jt . clk for. by the associative law of multiplication. in fact. when G= A') B'AB..x) pare 2. we have .ALGEBRAIC FORMS: LINEAR TRANSFORMATIONS First proof 125 The form A(x. Let all suffixes run from 1 to n and use the summation conis vention throughout. We We a ij xi xj =a = ij b ik Xk b X jl l so that c kl buaybfl equal to the where b' ki column of = b ik B'. x'Ax = X'B'ABX = X'CX. Moreover. that is to say. is given by the matrix product x'Ax (com By hypothesis.
Since both i and j are dummy suffixes,

    c_lk = b_il a_ij b_jk = b_jl a_ji b_ik = b_ik a_ji b_jl,

and, since a_ji = a_ij, this is the same sum as c_kl. Thus (6) is a quadratic form.

4.3. Transformation of a bilinear form.

THEOREM 38. If transformations x = CX, y = BY are applied to a bilinear form A(x,y), the result is a bilinear form D(X,Y) whose matrix D is given by

    D = C'AB,

where C' is the transpose of C.

First proof. As in the first proof of Theorem 37, we write the bilinear form as a matrix product, namely x'Ay. It follows at once, since x' = X'C' and y = BY, that

    x'Ay = X'C'ABY = X'DY,

where D = C'AB. This proof is given chiefly to show how much detail is avoided by the use of matrices; the second proof will also serve, as a proof in the most elementary form obtainable.

Second proof. The bilinear form may be written as

    A(x,y) = Σ_{i=1..m} Σ_{j=1..n} a_ij x_i y_j,   (1)

where A is [a_ik] and B is [b_ik]. Transform the y's by putting

    y_j = Σ_l b_jl Y_l   (j = 1,..., n),

so that

    A(x,y) = Σ_i Σ_k (Σ_j a_ij b_jk) x_i Y_k.   (2)

The matrix of this bilinear form in x and Y has for its element in the ith row and kth column the sum Σ_j a_ij b_jk. This matrix is therefore AB.
Again, transform the x's in (1) by putting

    x_i = Σ_k c_ik X_k   (i = 1,..., m),   (3)

and leave the y's unchanged. Then

    A(x,y) = Σ_k Σ_j (Σ_i c_ik a_ij) X_k y_j.   (4)

The sum Σ_i c_ik a_ij is the inner product of the kth column of C by the jth column of A; that is to say, it is the element in the kth row and jth column of C'A, where C' is the transpose of C. Hence the matrix of the bilinear form (4) in X and y is C'A.

The theorem follows on applying the two transformations in succession: the matrix of the resulting bilinear form in X and Y is (C'A)B = C'AB.
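Theorem 38 can be checked in the same numerical fashion. In this sketch (not from the text; the matrices A, Cm, and B are invented for illustration, Cm standing for the C of the theorem) the matrix of the transformed bilinear form is computed as D = C'AB and compared against direct substitution.

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

def bilinear(M, u, v):
    # value of the bilinear form u'Mv
    return sum(M[i][j] * u[i] * v[j] for i in range(len(u)) for j in range(len(v)))

A  = [[1, 2], [3, 4]]   # matrix of the bilinear form A(x, y) (invented)
Cm = [[1, 1], [0, 2]]   # transformation x = Cm X (invented)
B  = [[2, 0], [1, 1]]   # transformation y = B Y (invented)
D = matmul(transpose(Cm), matmul(A, B))   # Theorem 38: D = C'AB

X, Y = [3, -1], [2, 5]
x = [sum(Cm[i][j] * X[j] for j in range(2)) for i in range(2)]
y = [sum(B[i][j]  * Y[j] for j in range(2)) for i in range(2)]
assert bilinear(A, x, y) == bilinear(D, X, Y)   # A(x, y) = D(X, Y)
```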
4.4. The discriminant of a quadratic form.

THEOREM 39. Let a quadratic form

    A(x,x) = Σ_{i=1..n} Σ_{j=1..n} a_ij x_i x_j   (1)

be transformed by a linear transformation

    x_i = Σ_{k=1..n} l_ik X_k   (i = 1,..., n)

whose modulus is M, and let the resulting quadratic form be

    Σ_{i=1..n} Σ_{j=1..n} c_ij X_i X_j.   (2)

Then the discriminant of (2) is M² times the discriminant of (1).

The result is an immediate corollary of Theorem 37. By that theorem, the matrix of the form (2) is

    C = L'AL,

where L is the matrix of the transformation. The discriminant of (2), that is, the determinant |C|, is given (Chap. VI, 10) by the product of the three determinants |L'|, |A|, and |L|. But |L| = M and, since the value of a determinant is unaltered when rows and columns are interchanged, |L'| is also equal to M. Hence, in symbols,

    |C| = M² |A|.   (3)

The content of this theorem is usefully abbreviated to the statement 'WHEN A QUADRATIC FORM IS CHANGED TO NEW VARIABLES, THE DISCRIMINANT IS MULTIPLIED BY THE SQUARE OF THE MODULUS OF THE TRANSFORMATION'. This theorem is of fundamental importance; it will be used many times in the chapters that follow.
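The relation |C| = M²|A| of Theorem 39 can be verified on a small example. The form and the transformation below are invented for illustration, and the determinant is computed by cofactor expansion in plain Python.

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

def det(M):
    # cofactor expansion along the first row (adequate for small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

A = [[4, 1, 0], [1, 3, 2], [0, 2, 5]]   # discriminant |A| = 39 (invented)
L = [[1, 2, 0], [0, 1, 1], [1, 0, 3]]   # transformation of modulus M = |L| = 5
C = matmul(transpose(L), matmul(A, L))  # matrix of the transformed form
M = det(L)
assert det(C) == M * M * det(A)          # |C| = M^2 |A|
```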
5. Hermitian forms

5.1. The bar denotes, as in the theory of the complex variable, the conjugate complex: if z = α + iβ, where α and β are real, then z̄ = α − iβ. In its most common interpretation a HERMITIAN BILINEAR FORM is given by

    Σ_{i=1..n} Σ_{j=1..n} a_ij x_i ȳ_j,   (1)

wherein the coefficients a_ij belong to the field of complex numbers and are such that ā_ij = a_ji.

The matrix A of the coefficients of the form (1) satisfies the condition

    Ā' = A.   (2)

Any matrix A that satisfies (2) is said to be a HERMITIAN MATRIX. A form such as

    Σ_{i=1..n} Σ_{j=1..n} a_ij x_i x̄_j   (ā_ij = a_ji)

is said to be a HERMITIAN FORM. The theory of these forms is very similar to that of ordinary bilinear and quadratic forms. Theorems concerning Hermitian forms appear as examples at the end of this and later chapters.

6. Cogredient and contragredient sets

6.1. When two sets of variables x_1,..., x_n and y_1,..., y_n are related to two other sets X_1,..., X_n and Y_1,..., Y_n by the same transformation, say

    x = AX,  y = AY,

then the two sets x and y (equally, X and Y) are said to be cogredient sets of variables. If a set z_1,..., z_n is related to a set Z_1,..., Z_n by a transformation whose matrix is the reciprocal of the transpose of A, that is,

    z = (A')⁻¹Z,  or  Z = A'z,

then the sets x and z (equally, X and Z) are said to be contragredient sets of variables.

Examples of cogredient sets readily occur. A transformation in plane analytical geometry from one triangle of reference to another is of the type

    x = AX.   (1)

The sets of variables (x_1, y_1, z_1) and (x_2, y_2, z_2), regarded as the coordinates of two distinct points, are cogredient sets: the coordinates of the two points referred to the new triangle of reference are (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2), and each is obtained by putting in the appropriate suffix in the equations (1).
Analytical geometry also furnishes an important example of contragredient sets. Let

    lx + my + nz = 0   (2)

be regarded as the equation of a given line α in a system of homogeneous point-coordinates (x, y, z) with respect to a given triangle of reference. Then the line (tangential) coordinates of α are (l, m, n). Take a new triangle of reference and suppose that (1), written in full as

    x = λ_1 X + μ_1 Y + ν_1 Z,  y = λ_2 X + μ_2 Y + ν_2 Z,  z = λ_3 X + μ_3 Y + ν_3 Z,

is the transformation for the point-coordinates. Then (2) becomes

    LX + MY + NZ = 0,   (3)

where L = λ_1 l + λ_2 m + λ_3 n, M = μ_1 l + μ_2 m + μ_3 n, N = ν_1 l + ν_2 m + ν_3 n. The matrix of the coefficients of l, m, n in (3) is the transpose of the matrix of the coefficients of X, Y, Z in (1). Hence point-coordinates and line-coordinates, (x, y, z) and (l, m, n), are contragredient variables when the triangle of reference is changed: the transformation x = AX of the point-coordinates entails L = A'l for the tangential coordinates. [Notice that (1) is the transformation of x, y, z, whilst (3) introduces L, M, N.]

6.2. Another example of contragredient sets, one that contains the previous example as a particular case, is provided by differential operators. The notation of the foregoing becomes more compact when we consider n dimensions, a point x with coordinates (x_1,..., x_n) and a 'flat' l with coordinates (l_1,..., l_n). Let F(x_1,..., x_n) be expressed as a function of X_1,..., X_n by means of the transformation x = AX, where A denotes the matrix [a_ij]. Then

    ∂F/∂X_j = Σ_i (∂F/∂x_i)(∂x_i/∂X_j) = Σ_i a_ij ∂F/∂x_i.

That is to say, if x = AX, the x_i and the ∂/∂x_i form contragredient sets.
7. The characteristic equation of a transformation

7.1. Consider the transformation

    X_i = Σ_{j=1..n} a_ij x_j   (i = 1,..., n).   (1)

When (x_1, x_2, x_3) and (X_1, X_2, X_3) are homogeneous coordinates referred to a given triangle of reference in a plane, the transformation (1) may be regarded as a method of generating a one-to-one correspondence between the variable point (x_1, x_2, x_3) and the variable point (X_1, X_2, X_3).

Is it possible to assign values to the variables x so that X_i = λx_i (i = 1,..., n), where λ is independent of i? For such a result to hold, we must have

    λx_i = Σ_j a_ij x_j   (i = 1,..., n),   (2)

and this demands (Theorem 11), when one of x_1,..., x_n differs from zero, that λ be a root of the equation

    Δ(λ) ≡ | a_11−λ   a_12    ...   a_1n   |
            | a_21     a_22−λ  ...   a_2n   |
            |  .         .             .    |
            | a_n1     a_n2    ...   a_nn−λ |  = 0.   (3)

This equation is called the CHARACTERISTIC EQUATION of the transformation (1); any root of the equation is called a CHARACTERISTIC NUMBER or LATENT ROOT of the transformation.

7.2. If λ_1 is a characteristic number, there is a set of numbers x_1,..., x_n, not all zero, that satisfy equations (2) with λ = λ_1 (Theorem 11). A set of ratios that satisfies (2) when λ is equal to a characteristic number λ_r is called a POLE CORRESPONDING TO λ_r. A pole is then a point which corresponds to itself in the correspondence generated by (1). If the determinant Δ(λ_1) is of rank n−1, there is a unique corresponding set of ratios; if the rank of Δ(λ_1) is n−2 or less, the set of ratios is not unique (Chap. VIII, 5).
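For a 2×2 transformation the characteristic equation (3) can be written out in full: |A − λI| = λ² − (a_11 + a_22)λ + (a_11 a_22 − a_12 a_21) = 0. The sketch below (the matrix is invented for illustration) computes the two latent roots and verifies that a pole corresponding to one of them satisfies equations (2).

```python
import math

# Characteristic equation of a 2x2 transformation X = Ax:
# lambda^2 - (a11 + a22)*lambda + (a11*a22 - a12*a21) = 0.
A = [[5, 2], [2, 2]]                     # invented for illustration
tr = A[0][0] + A[1][1]
d  = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * d)
roots = [(tr + disc) / 2, (tr - disc) / 2]   # the latent roots, 6 and 1

# A pole corresponding to the root 6: (A - 6I)x = 0 gives the ratios x = (2, 1),
# and the transformation multiplies that point by 6, as equations (2) require.
x = (2, 1)
for i in range(2):
    assert sum(A[i][j] * x[j] for j in range(2)) == 6 * x[i]
```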
We shall not elaborate the geometrical implications† of such transformations; a few examples are given below.

EXAMPLES

1. Express ax² + 2hxy + by² as a quadratic form (in accordance with the definition of 1.1) and show that its discriminant is ab − h².

2. Verify Theorem 37, by actual substitution on the one hand and the evaluation of the matrix product (B'AB of the theorem) on the other, when the transformation is x = l_1 X + m_1 Y, y = l_2 X + m_2 Y.

3. Verify Theorem 38 by the method of Example 2.

4. Prove that all numbers of the form a + b√3, where a and b are integers (or zero), constitute a ring, and that all numbers of the form a + b√3, where a and b are the ratios of integers (or zero), constitute a field.

5. Write down the transformation which is the product of two transformations of the type used in Example 2, applied in succession.

6. Prove that, if i_1 j_1 k_1 and i_2 j_2 k_2 are two unit frames of reference for cartesian coordinates, then the transformation of coordinates from one frame to the other has unit modulus. [Omit if the vector notation is not known.]

7. The homogeneous coordinates of the points D, E, F referred to ABC as triangle of reference are (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3). Prove that, if (l, m, n) are line coordinates referred to ABC and (L, M, N) are line coordinates referred to DEF, then

    Δl = X_1 L + X_2 M + X_3 N,

with similar formulae for Δm and Δn, where Δ is the determinant |x_1 y_2 z_3| and X_r,..., Z_r are the cofactors of x_r,..., z_r in Δ.

HINT. First obtain the transformation between the point-coordinates and then use 6.1.

† Cf. G. Darboux, Principes de géométrie analytique (Gauthier-Villars, Paris, 1917); R. M. Winger, An Introduction to Projective Geometry (New York, 1922).
8. In the transformation X = Ax new variables Y and y are introduced, defined by Y = BX, y = Bx. Prove that the transformation Y = Cy is given by C = BAB⁻¹.

9. The transformations x = BX, y = BY change the Hermitian† form Σ a_ij x_i ȳ_j (with Ā' = A) into Σ c_ij X_i Ȳ_j. Prove that C = B'AB̄.

10. Prove that the transformation x = BX, taken together with its conjugate x̄ = B̄X̄, changes Σ a_ij x_i x̄_j (Ā' = A) into Σ c_ij X_i X̄_j (C̄' = C), where C = B'AB̄.

11. The elements of the matrices A and B, and the variables x and X, belong to a given field F. The quadratic form A(x,x) is transformed by the non-singular (or by a singular) transformation x = BX into C(X,X). Prove that C = B'AB and that every coefficient of C belongs to F.

12. Prove that, if each x_r denotes a complex variable and the a_rs are complex numbers such that a_rs = ā_sr (r, s = 1,..., n), then the Hermitian form Σ a_rs x_r x̄_s is a real number.

[Examples 13-16 are geometrical in character.]

13. Prove that, if the points in a plane are transformed by the scheme

    x' = a_1 x + b_1 y + c_1 z,  y' = a_2 x + b_2 y + c_2 z,  z' = a_3 x + b_3 y + c_3 z,

then every straight line transforms into a straight line. Prove also that there are, in general, three points that are transformed into themselves and three lines that are transformed into themselves.

HINT. Consider the matrix A of the scheme,

    a_1  b_1  c_1
    a_2  b_2  c_2
    a_3  b_3  c_3.

14. Show that the transformation of Example 13 can, in general, by a change of triangle of reference, be reduced to the form X' = αX, Y' = βY, Z' = γZ (αβγ ≠ 0). Find the conditions to be satisfied by the coefficients in order that every point on a given line may be transformed into itself; show that the transformation is a homology (plane perspective, collineation) if a value of λ can be found which will make A − λI of rank one. When the determinant A − λI is of rank one there is a line l such that every point of it corresponds to itself; in such a case any two corresponding lines must intersect on l.

† The summation convention is employed in Examples 9, 10, 12.
15. Obtain the equations giving, in two dimensions, the homology (collineation, plane perspective) in which the point (x_1', y_1', z_1') is the centre and the line (l_1', l_2', l_3') is the axis of the homology.

16. Points in a plane are transformed by the scheme

    x' = px + α(λx + μy + νz),
    y' = py + β(λx + μy + νz),
    z' = pz + γ(λx + μy + νz).

Find the points and lines that are transformed into themselves. Show also that the transformation is involutory (i.e. two applications of it restore a figure to its original position) if and only if

    2p + αλ + βμ + γν = 0.

[Remember that (kx', ky', kz') is the same point as (x', y', z').]
CHAPTER XI
THE POSITIVE-DEFINITE FORM

1. Definite real forms

1.1. DEFINITION 6. The real quadratic form

    A(x,x) = Σ_{i=1..n} Σ_{j=1..n} a_ij x_i x_j   (a_ij = a_ji)

is said to be POSITIVE-DEFINITE if it is positive for every set of real values of x_1,..., x_n other than the set x_1 = x_2 = ... = x_n = 0. It is said to be NEGATIVE-DEFINITE if it is negative for every set of real values of x_1,..., x_n other than x_1 = x_2 = ... = x_n = 0.

For example, of the two forms

    3x_1² + 2x_2²,  x_1² − 2x_2²,

the first is positive for every pair of real values of x_1 and x_2 except the pair x_1 = 0, x_2 = 0, and so is positive-definite; the second is positive when x_1 = 2, x_2 = 1, but negative when x_1 = 1, x_2 = 2, and so is not positive-definite.

An example of a real form that can never be negative but is, nevertheless, not positive-definite is given by

    (3x_1 − 2x_2)² + x_3².

This quadratic form is zero when x_1 = 2, x_2 = 3, x_3 = 0, a set of values other than x_1 = x_2 = x_3 = 0. The point of excluding such a form from the definition is that, whereas it appears as a function of three variables x_1, x_2, and x_3, it is essentially a function of two variables, namely

    X_1 = 3x_1 − 2x_2,  X_2 = x_3,

and so it does not come within the scope of Definition 6.

1.2. A positive-definite form in one set of variables is still a positive-definite form when expressed in a new set of variables, provided that the two sets of variables are connected by a real, non-singular transformation. This we now prove.

THEOREM 40. A real positive-definite form in the n variables x_1,..., x_n is a positive-definite form in the n variables X_1,..., X_n, provided only that the two sets are connected by a real non-singular transformation.
Let the positive-definite form be B(x,x), and let the real non-singular transformation be X = Ax. Then x = A⁻¹X, where A is a non-singular matrix. [The equation X = Ax is a matrix equation, x being a single-column matrix whose elements are x_1,..., x_n.] Let B(x,x) become C(X,X) when expressed in terms of X. Since B(x,x) is positive for every x other than x = 0, and since X = 0 if and only if x = 0, C(X,X) is positive for every X save that which corresponds to x = 0. Accordingly, C(X,X) is positive for every X other than X = 0; that is, C(X,X) is positive-definite.

The most obvious type of positive-definite form in n variables is

    a_11 x_1² + a_22 x_2² + ... + a_nn x_n²   (a_rr > 0).

We now show that every positive-definite form in n variables is a transformation of this obvious type.

THEOREM 41. Every real positive-definite form can be transformed by a real transformation of unit modulus into a form

    Σ_{r=1..n} c_rr X_r²,

wherein each c_rr is positive.

The manipulations that follow are typical of others that occur in later work. Here we give them in detail; later we shall refer back to this section and omit as many of the details as clarity permits.

Let the given positive-definite form be

    B(x,x) = Σ_{r=1..n} Σ_{s=1..n} b_rs x_r x_s   (b_rs = b_sr).   (1)

Since B(x,x) is positive-definite, it is positive when x_1 = 1, x_2 = ... = x_n = 0; hence b_11 > 0.† The terms of (1) that involve x_1 are

    b_11 x_1² + 2b_12 x_1 x_2 + ... + 2b_1n x_1 x_n.   (2)

† It is essential to this step that b_11 is not zero.
The terms (2) may be written as

    b_11 (x_1 + (b_12/b_11)x_2 + ... + (b_1n/b_11)x_n)²,

together with compensating terms that do not involve x_1. Hence, on making the transformation

    X_1 = x_1 + (b_12/b_11)x_2 + ... + (b_1n/b_11)x_n,  X_r = x_r  (r = 2,..., n),   (3)

we have

    B(x,x) = b_11 X_1² + a quadratic form in X_2,..., X_n.   (4)

The transformation (3) is non-singular, of unit modulus, the determinant of the transformation being

    | 1   b_12/b_11   ...   b_1n/b_11 |
    | 0   1           ...   0         |
    | .   .                 .         |
    | 0   0           ...   1         |  = 1.

Hence (Theorem 40) the form (4) in X_1,..., X_n is positive-definite. Working with (4) as before (put X_2 = 1 and every other X_r = 0), the quadratic form in X_2,..., X_n, say Σ β_rs X_r X_s, has β_22 > 0, and we may write

    B(x,x) = b_11 Y_1² + β_22 Y_2² + a quadratic form in Y_3,..., Y_n,   (6)

on making the transformation

    Y_2 = X_2 + (β_23/β_22)X_3 + ... + (β_2n/β_22)X_n,  Y_r = X_r  (r ≠ 2).   (5)

The transformation (5) is of unit modulus and so, by Theorem 40, the form (6) is positive-definite; working with (6) as before, the leading coefficient γ_33 of the quadratic form in Y_3,..., Y_n is positive; and so on.
Proceeding step by step in this way, we finally obtain

    B(x,x) = b_11 X_1² + β_22 Y_2² + γ_33 Z_3² + ... + κ_nn W_n²,   (8)

wherein b_11, β_22, γ_33,..., κ_nn are all positive; and the transformation from the original variables to the final ones, being the product of transformations each of unit modulus, is itself of unit modulus. We have thus transformed B(x,x) into (8), which is of the form required by the theorem.

2. Necessary and sufficient conditions for a positive-definite form

2.1. THEOREM 42. A set of necessary and sufficient conditions that the real quadratic form

    Σ_{i=1..n} Σ_{j=1..n} a_ij x_i x_j   (a_ij = a_ji)

be positive-definite is

    a_11 > 0,   | a_11  a_12 |         | a_11  ...  a_1n |
                | a_21  a_22 | > 0, ..., |  .          .   | > 0.   (1)
                                       | a_n1  ...  a_nn |
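Theorem 42 is, in effect, a test that can be carried out mechanically: compute the n leading determinants in (1) and check their signs. A sketch in plain Python follows; the example matrices are invented for illustration.

```python
def det(M):
    # cofactor expansion along the first row
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def leading_minors(A):
    # the n determinants appearing in the conditions (1) of Theorem 42
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

def positive_definite(A):
    return all(m > 0 for m in leading_minors(A))

# invented examples:
assert leading_minors([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]) == [2, 3, 4]
assert positive_definite([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
assert not positive_definite([[1, 2], [2, 1]])   # minors 1, -3
```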
2.2. If the form is positive-definite, then, as in 1.2, there is a transformation of unit modulus whereby the form is transformed into

    a_11 X_1² + β_22 X_2² + ... + κ_nn X_n²,   (2)

and in this form a_11, β_22,..., κ_nn are all positive.

Now consider (2) when x_n = 0. By Theorem 39, applied to a form in the n−1 variables x_1,..., x_n−1, the discriminant of the form in n−1 variables is equal to the product a_11 β_22 ... to n−1 terms, and so is positive. Similarly, on putting x_n = 0 and x_n−1 = 0, the discriminant of the form in the n−2 variables x_1,..., x_n−2 is a_11 β_22 ... to n−2 terms, and so is positive; and so on. Finally, the discriminant of the whole form is a_11 β_22 ... κ_nn, and so is positive. Hence, if the form is positive-definite, every determinant in (1) is positive: the set of conditions is necessary.

Conversely, if the set of conditions holds, then, in the first place, a_11 > 0 and we may write, as in 1.2,

    A(x,x) = a_11 Y_1² + Σ_{r,s=2..n} β_rs Y_r Y_s,   (3)

where

    Y_1 = x_1 + (a_12/a_11)x_2 + ... + (a_1n/a_11)x_n,  Y_k = x_k  (k ≥ 2),

a transformation of unit modulus.
Before we can proceed, we must prove that β_22 is positive. Consider the form and transformation when x_k = 0 for k > 2. The discriminant of the form (3) is then a_11 β_22, the modulus of the transformation is unity, and the discriminant of the original form in x_1 and x_2 is a_11 a_22 − a_12². Hence, by Theorem 39 applied to a form in the two variables x_1 and x_2 only,

    a_11 β_22 = a_11 a_22 − a_12²,

which is positive by hypothesis; and so β_22 is positive. Accordingly, we may write (3) as

    A(x,x) = a_11 Y_1² + β_22 Z_2² + a quadratic form in Z_3,..., Z_n,   (4)

where

    Z_2 = Y_2 + (β_23/β_22)Y_3 + ... + (β_2n/β_22)Y_n,  Z_k = Y_k  (k ≠ 2),

and the transformation from the Y to the Z is of unit modulus. Before proceeding further we must prove that the leading coefficient γ_33 of the quadratic form in Z_3,..., Z_n is positive. Consider the forms and transformations when x_k = 0 for k > 3. The discriminant of (4) is then a_11 β_22 γ_33, and the modulus of the transformation from x to Z is unity. Hence, by Theorem 39 applied to a form in the three variables x_1, x_2, x_3 only, the product a_11 β_22 γ_33 is equal to the determinant

    | a_11  a_12  a_13 |
    | a_21  a_22  a_23 |
    | a_31  a_32  a_33 |,   (5)

which is positive by hypothesis. Hence γ_33 is positive; and the argument continues step by step through the determinants of (1).
. 2 2 6a. { The form is A (x. since the transformation y nonsingular. n. i V2 j8 22 . . K HU are positive and *n Mi (7) P22 The form (6) Ls positivedefinite in (7) is X } ly . i *22 i *32 ^33 EXAMPLES XI I . tl y sufficient conditions.... _ V9 i o V 9. xn and we have begun with a n We might have considered the variables in the order xny x _ ly .n n.nl Equally. x) if is negativedefinite if the form y A(x.2. We have considered the variables in the order x v x^.. 2. Accordingly. Provo that oaoh of tho (quadratic forms atM35y +ll= + 34?/s.f 49^/ f 5 lz*~ S2yz f 2Qzx4xy a 2 (i) (ii) is positive definite..x)} positivedefinite. x l and begun with a nn We should then have obtained a different set of necessary and . Xn and..nl a nl. but that (iii) is not.. xn (Theorem 40)... the form A(x x) if is negativedefinite and only < 0. A(x x) is positivedefinite in x ly .3.. y 2. namely.n a nl..THE POSITIVE DEFINITE FORM the set of conditions is satisfied.. any permutation of the order of the variables will give rise to a set of necessary and sufficient conditions. . 141 we may j write A (x y x) in the /A\ lorm wherein a n .
2. Prove that, if Σ Σ a_rs x_r x_s (a_rs = a_sr) is a positive-definite form, then

    A(x,x) = a_11 (x_1 + (a_12/a_11)x_2 + ... + (a_1n/a_11)x_n)² + K,

where K is a positive-definite form in x_2,..., x_n.

3. Prove that the discriminant of the quadratic form in x, y, z

    F ≡ (a_1 x + b_1 y + c_1 z)² + (a_2 x + b_2 y + c_2 z)²

is zero. HINT. The discriminant of F is the square (by rows) of a determinant whose columns are (a_1, a_2, 0), (b_1, b_2, 0), (c_1, c_2, 0).

4. Extension of Example 3. Prove that a sum of squares of r distinct linear forms in n variables is a quadratic form in n variables of zero discriminant whenever r < n.

5. Prove that the discriminant of the quadratic form Σ (x_r − x_s)² (r, s = 1,..., n; r ≠ s) is zero, and that the form is of rank n−1.

6. Prove that the discriminant of

    Σ_{t=1..r} (a_t1 x_1 + ... + a_tn x_n)²

is not zero unless the forms Σ_s a_ts x_s (t = 1,..., r) are linearly dependent. Prove also that the rank of such a discriminant is, in general, equal to r, and that the exception to the general rule arises when the r distinct forms are not linearly independent. Compare Examples 3 and 4.

7. If f(x_1,..., x_n) is a function of n variables, f_i denotes ∂f/∂x_i, and f_ij denotes ∂²f/∂x_i ∂x_j, each derivative being evaluated at x_r = α_r (r = 1,..., n), prove that f has a minimum at x_r = α_r provided that Σ Σ f_ij t_i t_j is a positive-definite form and each f_i is zero.

8. Write down conditions that f(x, y) may be a minimum at the point x = α, y = β.

9. Harder. By Example 12, p. 133, a Hermitian form has a real value for every set of values of the variables. It is said to be positive-definite when this value is positive for every set of values of the variables other than x_1 = ... = x_n = 0. Prove Theorem 40 for a positive-definite Hermitian form and any non-singular transformation.
10. Prove that a positive-definite Hermitian form can be transformed by a transformation of unit modulus into a form

    c_11 x_1 x̄_1 + ... + c_nn x_n x̄_n,

wherein each c_rr is positive. HINT. Compare the proof of Theorem 41.

11. The determinant |A| = |a_ij| is called the discriminant of the Hermitian form Σ Σ a_ij x_i x̄_j. Remembering that a Hermitian form is characterized by the matrix equation Ā' = A, prove, by considering |Ā| = |A'|, that the discriminant is real. Prove also, by analogy with Example 3, that the discriminant of the Hermitian form

    XX̄ + YȲ,  where X = a_1 x + b_1 y + c_1 z, Y = a_2 x + b_2 y + c_2 z,

is zero.

12. Prove the analogue of Theorem 42 for Hermitian forms. Obtain the corresponding analogues of Examples 4, 5, and 6.
CHAPTER XII
THE CHARACTERISTIC EQUATION AND CANONICAL FORMS

1. The λ-equation of two quadratic forms

1.1. We have seen that the discriminant of a quadratic form A(x,x) is multiplied by M² when the variables x are changed to variables X by a transformation of modulus M (Theorem 39). If A(x,x) and C(x,x) are any two distinct quadratic forms in n variables and λ is an arbitrary parameter, the discriminant of A(x,x) − λC(x,x) is likewise multiplied by M² when the variables are submitted to a transformation of modulus M; moreover, the coefficient of each power λ^r (r = 0, 1,..., n) in this discriminant is multiplied by the square of the modulus of the transformation. In full, if a transformation of matrix [b_ij], with |b_ij| = M, changes Σ a_rs x_r x_s and Σ c_rs x_r x_s into Σ α_rs X_r X_s and Σ γ_rs X_r X_s respectively, then

    |α_rs − λγ_rs| = M² |a_rs − λc_rs|.

The equation |A − λC| = 0, or, in full,

    | a_11−λc_11   a_12−λc_12   ...   a_1n−λc_1n |
    | a_21−λc_21   a_22−λc_22   ...   a_2n−λc_2n |
    |  .             .                  .         |
    | a_n1−λc_n1   a_n2−λc_n2   ...   a_nn−λc_nn |  = 0,

is called THE λ-EQUATION OF THE TWO FORMS. What we have said above may be summarized in the theorem:

THEOREM 43. The roots of the λ-equation of any two quadratic forms in n variables are unaltered by a non-singular linear transformation.

1.2. The λ-equation of A(x,x) and the form x_1² + ... + x_n² is called THE CHARACTERISTIC EQUATION of A. In full, it is the equation

    | a_11−λ   a_12    ...   a_1n   |
    | a_21     a_22−λ  ...   a_2n   |
    |  .         .            .     |
    | a_n1     a_n2    ...   a_nn−λ |  = 0.
The roots of this equation are called the LATENT ROOTS of the matrix A. The equation itself may be denoted by

    |A − λI| = 0,

where I is the unit matrix of order n.

The term independent of λ is the determinant |a_ik|, so that the characteristic equation has no zero root when |a_ik| ≠ 0. When |a_ik| = 0, there is at least one zero latent root; and when |a_ik| = 0, so that the rank r of the matrix [a_ik] is less than n, there are, as we shall prove later, n − r zero roots.

2. The reality of the latent roots

2.1. THEOREM 44. If [c_ik] is the matrix of a positive-definite form C(x,x), and [a_ik] is any symmetrical matrix with real elements, all the roots of the λ-equation

    | a_11−λc_11   a_12−λc_12   ... |
    |  .             .               |  = 0

are real.

Let λ be any root of the equation. Then the determinant vanishes and (Theorem 11) there are numbers Z_1, Z_2,..., Z_n, not all zero, such that

    Σ_{s=1..n} a_rs Z_s = λ Σ_{s=1..n} c_rs Z_s   (r = 1,..., n).   (1)

Multiply each equation (1) by Z̄_r, the conjugate complex of Z_r, and add the results. When Z_r = X_r + iY_r, where X_r and Y_r are real and i = √(−1), we obtain terms of two distinct types, namely

    Z_r Z̄_r = (X_r + iY_r)(X_r − iY_r) = X_r² + Y_r²,

    Z_r Z̄_s + Z̄_r Z_s = (X_r + iY_r)(X_s − iY_s) + (X_r − iY_r)(X_s + iY_s) = 2(X_r X_s + Y_r Y_s).
Hence the result of the operation is

    Σ_{r=1..n} a_rr(X_r² + Y_r²) + 2 Σ_{r<s} a_rs(X_r X_s + Y_r Y_s)
        = λ { Σ_{r=1..n} c_rr(X_r² + Y_r²) + 2 Σ_{r<s} c_rs(X_r X_s + Y_r Y_s) },

or, on using an obvious notation,

    A(X,X) + A(Y,Y) = λ{C(X,X) + C(Y,Y)}.   (2)

Since, by hypothesis, C(x,x) is positive-definite, and since the numbers Z_r = X_r + iY_r (r = 1,..., n) are not all zero, the coefficient of λ in equation (2) is positive. Moreover, since each a_ik is real, the left-hand side of (2) is real, and hence λ must be real.

NOTE. If the coefficient of λ in (2) were zero, then (2) would tell us nothing about the reality of λ. It is to preclude this that we require C(x,x) to be positive-definite.

COROLLARY. When [c_ik] is the unit matrix, one that has c_rr = 1 and c_rs = 0 (r ≠ s), the form C(x,x) is x_1² + x_2² + ... + x_n² and is positive-definite. Thus Theorem 44 contains, as a special case, the following theorem: when [a_ik] is any symmetrical matrix with real elements, every root λ of the equation |A − λI| = 0 is real. Moreover, if both A(x,x) and C(x,x) are positive-definite forms, every root of the given equation is positive.

3. Canonical forms

3.1. In this section we prove that, if A = [a_ik] is a square matrix of order n and rank r, then there is a non-singular transformation from variables x_1,..., x_n to variables X_1,..., X_n that changes A(x,x) into

    g_1 X_1² + g_2 X_2² + ... + g_r X_r².   (3)
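For n = 2 the λ-equation of Theorem 44 is an explicit quadratic, and both conclusions, the reality of the roots and their positivity when A is also positive-definite, can be checked directly. The matrices below are invented for illustration; the coefficients of the quadratic are expanded by hand from the 2×2 determinant.

```python
import math

def lambda_roots(A, C):
    # roots of |A - lambda*C| = 0 for 2x2 symmetric A and C:
    # the determinant expands to p*lambda^2 - q*lambda + r = 0
    p = C[0][0] * C[1][1] - C[0][1] ** 2
    q = A[0][0] * C[1][1] + A[1][1] * C[0][0] - 2 * A[0][1] * C[0][1]
    r = A[0][0] * A[1][1] - A[0][1] ** 2
    disc = q * q - 4 * p * r      # non-negative here, so the roots are real
    return [(q + math.sqrt(disc)) / (2 * p), (q - math.sqrt(disc)) / (2 * p)]

A = [[2, 1], [1, 3]]   # real symmetric, and positive-definite (minors 2, 5)
C = [[2, 1], [1, 1]]   # positive-definite (minors 2, 1)
roots = lambda_roots(A, C)      # the roots turn out to be 5 and 1
assert all(t > 0 for t in roots)   # both positive, as the corollary asserts
```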
In (3) each of g_1,..., g_r is distinct from zero. Moreover, if the a_ik are elements in a particular field F (such as the field of real numbers), then so are the coefficients of the transformation and the resulting g's. We call (3) a CANONICAL FORM. The result has a variety of applications in different parts of mathematics.

3.2. Elementary transformations. We shall have occasion to use two particular types of transformation.

TYPE I. The transformation

    x_1 = X_r,  x_r = X_1,  x_t = X_t   (t ≠ 1, r)

is non-singular, as is seen by writing the determinant in full and interchanging the first and rth columns: its modulus is −1. Moreover, each coefficient of the transformation is either 1 or 0, and so belongs to every field of numbers F. If in the quadratic form Σ Σ a_rs x_r x_s one of the numbers a_11,..., a_nn, say a_rr, is not zero, a suitable transformation of type I will change the quadratic form into one in which the coefficient of X_1² is not zero.

TYPE II. The transformation in n variables

    x_1 = X_1 + X_s,  x_s = X_1 − X_s,  x_t = X_t   (t ≠ 1, s).

Its modulus is the determinant which (i) in the first row has 1 in the first and sth columns, (ii) in the sth row has 1 in the first column and −1 in the sth column, (iii) in the tth row (t ≠ 1, s) has 1 in the principal diagonal position, and in every other place has 0. The value of such a determinant is −2, so that the transformation is non-singular. Moreover, each coefficient of the transformation is 1, −1, or 0, and so belongs to every field of numbers F. If, in the quadratic form Σ b_rs x_r x_s, all the numbers b_11,..., b_nn are zero but one of the numbers b_1s is not zero, then a suitable transformation of type II will change the form into one in which the coefficient of X_1² is not zero; for the terms 2b_1s x_1 x_s become 2b_1s(X_1² − X_s²).
3.3. The first step in the proof is to show that any given quadratic form can be transformed into one whose coefficient of X_1² is not zero. The foregoing elementary transformations enable us to do this. If a_11 ≠ 0, no preliminary transformation is needed. If a_11 = 0 and some a_rr is not zero, a transformation of type I changes the form into a form B(X,X) in which b_11 is not zero. If every a_rr is zero but at least one a_rs (r ≠ s) is not zero, a transformation of type I followed by one of type II changes the form into a form C(Y,Y) in which c_11 is not zero; the product of these two transformations changes the form directly into C(Y,Y), and the modulus of this product transformation is ±2. We summarize these results in a theorem.

THEOREM 46. Every quadratic form Σ Σ a_rs x_r x_s, with coefficients in a given field F and having one a_rs not zero, can be transformed, by a non-singular transformation with coefficients in F, into a form B(X,X) in which b_11 is not zero.

3.4. Proof of the main theorem.

DEFINITION. The RANK of a quadratic form is defined to be the rank of the matrix of its coefficients.

THEOREM 47. Every quadratic form

    A(x,x) = Σ_{r=1..n} Σ_{s=1..n} a_rs x_r x_s   (a_rs = a_sr),

with coefficients in a given field F and of rank r, can be transformed into the form

    α_1 X_1² + ... + α_r X_r²,

wherein α_1,..., α_r are numbers in F and no one of them is equal to zero, by a non-singular transformation with coefficients in F.

Let the form be Σ Σ a_ij x_i x_j and let A denote the matrix [a_ij]; the question of rank we defer until the end.
If every a_ij is zero, the rank of A is zero and there is nothing to prove. If one a_ij is not zero, there is (Theorem 46) a non-singular transformation, with coefficients in F, that changes A(x,x) into

    Σ_{i=1..n} Σ_{j=1..n} c_ij y_i y_j,   (2)

where c_11 ≠ 0. The terms of (2) that involve y_1 may be written as

    c_11 (y_1 + (c_12/c_11)y_2 + ... + (c_1n/c_11)y_n)²,

together with compensating terms that do not involve y_1; and the equations

    Y_1 = y_1 + (c_12/c_11)y_2 + ... + (c_1n/c_11)y_n,  Y_i = y_i   (i = 2,..., n),   (3)

with coefficients in F, constitute a non-singular transformation of the n variables. This enables us to write A(x,x) as

    c_11 Y_1² + Σ_{i=2..n} Σ_{j=2..n} b_ij Y_i Y_j,

where every b is in F. Moreover, the transformation direct from x to Y is the product of the separate transformations employed; hence it is non-singular and has its coefficients in F.

If every b_ij is zero, the form reduces to c_11 Y_1². If one b_ij is not zero, we may change the quadratic form Σ b_ij Y_i Y_j, in the n−1 variables Y_2,..., Y_n, in the same way as we have just changed the original form in n variables. We may thus show that there is a non-singular transformation

    Z_i = Σ_{j=2..n} l_ij Y_j   (i = 2,..., n),

with coefficients in F, which, taken together with Z_1 = Y_1, carries the reduction one stage further.
That is to say, A(x,x) is changed into

    α_1 Z_1² + α_2 Z_2² + Σ_{i=3..n} Σ_{j=3..n} d_ij Z_i Z_j,   (4)

wherein α_1 ≠ 0, α_2 ≠ 0, and every d is in F.

On proceeding in this way, one of two things must happen. Either we arrive, after k < n steps, at a form

    α_1 X_1² + ... + α_k X_k² + Σ_{i=k+1..n} Σ_{j=k+1..n} d_ij X_i X_j   (5)

in which every d_ij = 0, in which case (5) reduces to α_1 X_1² + ... + α_k X_k²; or we arrive after n steps at a form α_1 X_1² + ... + α_n X_n². In either circumstance we arrive at a final form

    α_1 X_1² + ... + α_k X_k²   (k ≤ n)   (6)

by a product of transformations each of which is non-singular and has its coefficients in the given field F.

It remains to prove that the number k in (6) is equal to r, the rank of the matrix A. Let B denote the transformation whereby we pass from A(x,x) to the form

    α_1 X_1² + ... + α_k X_k².   (7)

Then the matrix of (7) is (Theorem 37) B'AB. Since B is non-singular, the matrix B'AB has the same rank as A (Theorem 34). But the matrix of the quadratic form (7) consists of α_1,..., α_k in the first k places of the principal diagonal and zero elsewhere, so that its rank is k. Hence k = r.

COROLLARY. If A(x,x) is a quadratic form in n variables and its discriminant is zero, then the rank of the form is n−1 or less. This corollary is, of course, merely a partial statement of the theorem itself. We state the corollary with a view to its immediate use in the next section.
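The step-by-step reduction used in Theorems 41 and 47 (repeated completion of the square, with the Type I and Type II devices to secure a non-zero leading coefficient) can be carried out mechanically. The sketch below is an illustration, not the text's procedure verbatim: it works over the rationals and returns the non-zero coefficients α_1,..., α_k, whose number equals the rank of the form by Theorem 47.

```python
from fractions import Fraction

def reduce_to_squares(A):
    """Reduce the symmetric matrix of a quadratic form to a list of
    diagonal coefficients alpha_1, ..., alpha_k by repeated completion
    of the square, using Type I (interchange) and Type II
    (x1 = X1+Xs, xs = X1-Xs) transformations when needed."""
    A = [[Fraction(v) for v in row] for row in A]
    coeffs = []
    while A:
        n = len(A)
        p = next((i for i in range(n) if A[i][i] != 0), None)
        if p is None:
            s = next((j for j in range(1, n) if A[0][j] != 0), None)
            if s is None:
                # the first variable is absent from the form: discard it
                A = [row[1:] for row in A[1:]]
                continue
            # Type II: the new leading coefficient becomes 2*a_1s != 0
            for i in range(n):
                A[i][0], A[i][s] = A[i][0] + A[i][s], A[i][0] - A[i][s]
            for j in range(n):
                A[0][j], A[s][j] = A[0][j] + A[s][j], A[0][j] - A[s][j]
            continue
        if p != 0:
            # Type I: interchange the first variable with the pth
            A[0], A[p] = A[p], A[0]
            for row in A:
                row[0], row[p] = row[p], row[0]
        a = A[0][0]
        coeffs.append(a)
        # complete the square on the first variable; keep the residual form
        A = [[A[i][j] - A[i][0] * A[0][j] / a
              for j in range(1, n)] for i in range(1, n)]
    return coeffs

# the form 2*x1*x2 (every diagonal term zero) needs Type II and has rank 2:
assert reduce_to_squares([[0, 1], [1, 0]]) == [2, -2]
# the form (x1 + 2*x2)^2 has rank 1: one square survives
assert len(reduce_to_squares([[1, 2], [2, 4]])) == 1
```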
CANONICAL FORMS                                151

4. The simultaneous reduction of two real quadratic forms

THEOREM 48. Let A(x, x) and C(x, x) be two real quadratic forms in n variables and let C(x, x) be positive-definite. Then there is a real, non-singular transformation that expresses the two forms as

    A(x, x) = λ_1 Y_1² + ... + λ_n Y_n²,   C(x, x) = Y_1² + ... + Y_n²,

where λ_1, ..., λ_n are the roots of |A − λC| = 0 and are all real.

The roots of |A − λC| = 0 are all real (Theorem 44). Let λ_1 be any one root. Then, for an arbitrary λ,

    A(x, x) − λ C(x, x) = A(x, x) − λ_1 C(x, x) + (λ_1 − λ) C(x, x).     (1)

Now A(x, x) − λ_1 C(x, x) is a real quadratic form whose discriminant |A − λ_1 C| is zero, and so (Theorem 47, Corollary), wherein F is the field of real numbers, there is a real non-singular transformation from x_1, ..., x_n to Y_1, ..., Y_n that expresses it as

    A(x, x) − λ_1 C(x, x) = ψ(Y_2, ..., Y_n),     (2)

a form in the variables Y_2, ..., Y_n at most, since its rank is n−1 or less. Let this same transformation, when applied to C(x, x), give

    C(x, x) = Σ γ_ij Y_i Y_j.

Since C(x, x) is positive-definite in the variables x, it is also positive-definite in the variables Y (Theorem 40). Hence γ_11 is positive and we may use the transformation (Chap. XI, 2.2)

    Z_1 = √γ_11 Y_1 + (γ_12 Y_2 + ... + γ_1n Y_n)/√γ_11,   Z_i = Y_i   (i = 2, ..., n),

which enables us to write C(x, x) = Z_1² + φ(Z_2, ..., Z_n). This enables us to write (1) in the form

    A(x, x) − λ C(x, x) = ψ(Z_2, ..., Z_n) + (λ_1 − λ){Z_1² + φ(Z_2, ..., Z_n)}.
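For n = 2 the equation |A − λC| = 0 is an explicit quadratic in λ, so the reality of its roots (Theorem 44) can be checked directly. A small sketch follows; the particular matrices A and C are chosen here for illustration, C being positive-definite:

```python
import math

def lambda_roots(A, C):
    """Roots of det(A - t*C) = 0 for symmetric 2x2 matrices A, C."""
    # det(A - t*C) expands to a2*t^2 + a1*t + a0:
    a2 = C[0][0] * C[1][1] - C[0][1] ** 2
    a1 = -(A[0][0] * C[1][1] + A[1][1] * C[0][0] - 2 * A[0][1] * C[0][1])
    a0 = A[0][0] * A[1][1] - A[0][1] ** 2
    disc = a1 * a1 - 4 * a2 * a0
    # with C positive-definite the discriminant is non-negative
    r = math.sqrt(disc)
    return sorted([(-a1 - r) / (2 * a2), (-a1 + r) / (2 * a2)])

A = [[2.0, 1.0], [1.0, 3.0]]
C = [[2.0, 1.0], [1.0, 2.0]]     # positive-definite
print(lambda_roots(A, C))        # both roots real: 1.0 and 5/3
```

Here det(A − tC) = 3t² − 8t + 5, whose discriminant 64 − 60 = 4 is positive, in agreement with the theorem.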
Since this holds for an arbitrary value of λ, we have

    A(x, x) = λ_1 Z_1² + θ(Z_2, ..., Z_n),   C(x, x) = Z_1² + φ(Z_2, ..., Z_n),     (3)

where θ and φ denote quadratic forms in Z_2, ..., Z_n. This is the first step towards the forms we wish to establish.

Before we proceed to the next step we observe two things:

(i) φ(Z_2, ..., Z_n) is a positive-definite form in the n−1 variables Z_2, ..., Z_n; for Z_1² + φ is obtained by a non-singular transformation from the form (1), which is positive-definite, and φ is what it becomes when Z_1 = 0.

(ii) The roots of |θ − λφ| = 0, together with λ = λ_1, account for all the roots of |A − λC| = 0. For A(x, x) and C(x, x) are obtained from λ_1 Z_1² + θ and Z_1² + φ by a non-singular transformation, and so (Theorem 43) the roots of |A − λC| = 0 are the roots of (λ_1 − λ)|θ − λφ| = 0. In particular, if λ_1 is a repeated root of |A − λC| = 0, then λ_1 is also a root of |θ − λφ| = 0.

Accordingly, let λ_2 be a root of |θ − λφ| = 0; it is also a root of |A − λC| = 0. By using the first step with θ and φ in place of A and C, we reduce θ, φ to forms

    θ = λ_2 U_2² + θ_1(U_3, ..., U_n),   φ = U_2² + φ_1(U_3, ..., U_n),

where the transformation between Z_2, ..., Z_n and U_2, ..., U_n is real and non-singular. When we adjoin to it the equation U_1 = Z_1, the transformation between Z_1, ..., Z_n and U_1, ..., U_n is real and non-singular. Hence there is a real non-singular transformation (the product of all transformations so far used) from x_1, ..., x_n to U_1, ..., U_n such that

    A(x, x) = λ_1 U_1² + λ_2 U_2² + θ_1(U_3, ..., U_n),
    C(x, x) = U_1² + U_2² + φ_1(U_3, ..., U_n).

As at the first stage, we can show that φ_1 is positive-definite and that the roots of |θ_1 − λφ_1| = 0, together with λ_1 and λ_2, account for all the roots of |A − λC| = 0. Thus we may proceed step by step to the reduction of A(x, x) and C(x, x) by real non-singular transformations to the two forms

    A(x, x) = λ_1 Y_1² + λ_2 Y_2² + ... + λ_n Y_n²,   C(x, x) = Y_1² + ... + Y_n²,

wherein λ_1, ..., λ_n, not necessarily distinct, are the roots of |A − λC| = 0. Moreover, all the roots are real. This proves the theorem.

5. Orthogonal transformations

If, in Theorem 48, we take for C(x, x) the positive-definite form x_1² + ... + x_n², the real transformation envisaged by the theorem transforms x_1² + ... + x_n² into X_1² + ... + X_n². Such a transformation is called an ORTHOGONAL TRANSFORMATION. We shall examine such transformations in Chapter XIII; meanwhile, we note an important theorem.

THEOREM 49. A real quadratic form A(x, x) in n variables can be reduced by a real orthogonal transformation to the form

    λ_1 X_1² + λ_2 X_2² + ... + λ_n X_n²,     (1)

where λ_1, ..., λ_n are the roots of |A − λI| = 0, that is, the latent roots of A. [See also Chapter XV.]
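Theorem 49 can be checked numerically when n = 2, where the latent roots and unit latent vectors can be written down explicitly. The sketch below uses an illustrative matrix chosen here, not one from the text; the matrix R has normalized latent vectors for columns, so x = RX is orthogonal and R'AR is diagonal with the latent roots on the diagonal.

```python
import math

def orthogonal_reduction(a, h, b):
    """Latent roots of A = [[a, h], [h, b]] and an orthogonal R with
    R'AR diagonal, for the binary form a*x1^2 + 2h*x1x2 + b*x2^2."""
    s = math.sqrt((a - b) ** 2 + 4 * h * h)
    t1, t2 = (a + b - s) / 2, (a + b + s) / 2   # roots of |A - tI| = 0
    if h == 0:
        # the form is already diagonal; take R = I
        return (t1, t2), [[1.0, 0.0], [0.0, 1.0]]
    cols = []
    for t in (t1, t2):
        n = math.hypot(h, t - a)
        cols.append((h / n, (t - a) / n))        # unit latent vector for t
    R = [[cols[0][0], cols[1][0]],
         [cols[0][1], cols[1][1]]]
    return (t1, t2), R

(t1, t2), R = orthogonal_reduction(2.0, 1.0, 2.0)
A = [[2.0, 1.0], [1.0, 2.0]]
RtAR = [[sum(R[p][i] * A[p][q] * R[q][j] for p in range(2) for q in range(2))
         for j in range(2)] for i in range(2)]
print(t1, t2)    # 1.0 3.0
print(RtAR)      # diagonal, with the latent roots on the diagonal
```

The two latent vectors are automatically perpendicular because A is symmetric, which is why the reducing transformation can be taken orthogonal.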
6. The number of non-zero latent roots

Let B be the matrix of the orthogonal transformation whereby a real quadratic form A(x, x) is reduced to the form (1) of Theorem 49. Then (Theorem 37) B'AB is the matrix of the form (1), and this has λ_1, ..., λ_n in its leading diagonal and zero elsewhere. Its rank is the number of λ's that are not zero. But since B is non-singular, the rank of B'AB is equal to the rank of A. Hence the rank of A is equal to the number of non-zero roots of the characteristic equation |A − λI| = 0.

7. The signature of a quadratic form

7.1. As we proved in Theorem 47, a real quadratic form of rank r can be transformed by a real non-singular transformation into the form

    α_1 x_1² + ... + α_r x_r²,     (1)

wherein α_1, ..., α_r are real and not zero. As a glance at 3.3 will show, there are, in general, many different ways of effecting such a reduction: even at the first step we have a wide choice as to which non-zero coefficient we select to become the b_11 or c_11, as the case may be. The theorem we shall now prove establishes the fact that, starting from the one given form A(x, x), the number of positive α's and the number of negative α's in (1) is independent of the method of reduction.

THEOREM 50. If a given real quadratic form of rank r is reduced by two real, non-singular transformations, B_1 and B_2 say, to the forms

    α_1 X_1² + ... + α_r X_r²,     (2)
    β_1 Y_1² + ... + β_r Y_r²,     (3)

wherein the α's and β's are real and not zero, then the number of positive α's is equal to the number of positive β's and the number of negative α's is equal to the number of negative β's.

Let n be the number of variables x_1, ..., x_n in the initial quadratic form; let μ be the number of positive α's and ν the number of positive β's. Let the variables X, Y be so numbered that the positive α's and β's come first. Then, since (2) and (3) are transformations of the same initial form, we have

    α_1 X_1² + ... + α_r X_r² = β_1 Y_1² + ... + β_r Y_r²,     (4)

where α_1, ..., α_μ and β_1, ..., β_ν are positive and the remaining coefficients negative. Each X and Y is a linear form in x_1, ..., x_n.

Now suppose, contrary to the theorem, that μ > ν. Consider the ν + n − μ equations

    Y_1 = 0, ..., Y_ν = 0,   X_{μ+1} = 0, ..., X_n = 0:     (5)

these are homogeneous equations in the n variables x_1, ..., x_n. There are fewer equations than there are variables, and so (Chap. VIII, 5) the equations have a solution x_r = ξ_r (r = 1, ..., n) in which ξ_1, ..., ξ_n are not all zero. Let X'_1, ..., X'_μ, Y'_{ν+1}, ..., Y'_r be the values of X_1, ..., X_μ, Y_{ν+1}, ..., Y_r when x = ξ. Then, from (4) and (5),

    α_1 X_1'² + ... + α_μ X_μ'² = β_{ν+1} Y_{ν+1}'² + ... + β_r Y_r'²,     (6)

in which the left-hand side cannot be negative and the right-hand side cannot be positive. Hence each side is zero, which is impossible unless each X' and Y' is zero. But X'_1 = 0, ..., X'_μ = 0, taken with (5), means that the n equations

    X_i = Σ_k l_ik x_k = 0   (i = 1, ..., n)

have a solution x_k = ξ_k in which ξ_1, ..., ξ_n are not all zero; that is, the determinant |l_ik| = 0, which is a contradiction of the hypothesis that the transformation B_1 is non-singular. Hence the assumption μ > ν leads to a contradiction. Similarly, the assumption ν > μ leads to a contradiction. Accordingly μ = ν, and the theorem is proved.

7.2. One of the ways of reducing a form of rank r is by the orthogonal transformation of Theorem 49. This gives the form (1) of 7.1, wherein the α's are the non-zero roots of the characteristic equation. Hence the number of positive α's in Theorem 50 is the number of positive latent roots of the form.
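When no leading principal minor D_k of the matrix of the form vanishes, the signs of the successive ratios D_k / D_{k−1} reproduce the signs of the α's, so the counts guaranteed invariant by Theorem 50 can be read off without carrying out a reduction. This use of minors is Jacobi's rule, assumed here rather than proved at this point in the text; the matrix below is that of the ternary form 4x_1² + 9x_2² + 2x_3² + 8x_2x_3 + 6x_3x_1 + 6x_1x_2 treated in the examples (rank 3, signature 1).

```python
def det(M):
    # determinant by Laplace expansion along the first row
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def inertia_by_minors(A):
    """Return (P, N), the numbers of positive and negative squares,
    assuming no leading principal minor D_k is zero (Jacobi's rule)."""
    n = len(A)
    P = N = 0
    prev = 1                       # D_0 = 1 by convention
    for k in range(1, n + 1):
        Dk = det([row[:k] for row in A[:k]])
        if Dk * prev > 0:
            P += 1
        else:
            N += 1
        prev = Dk
    return P, N

A = [[4, 3, 3], [3, 9, 4], [3, 4, 2]]
P, N = inertia_by_minors(A)
print(P, N, P - N)   # 2 1 1: rank P + N = 3, signature P - N = 1
```

Here D_1 = 4, D_2 = 27, D_3 = −19, so the ratios are +, +, −: two positive squares and one negative, whichever reduction is used.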
7.3. We have proved that, associated with every real quadratic form, there are two numbers P and N, the number of positive and of negative coefficients in any canonical form

    α_1 x_1² + ... + α_r x_r²

of the form. The sum of P and N is the rank of the form.

DEFINITION. The number P − N is called the SIGNATURE of the form.

8. We conclude with a theorem which states that any two quadratic forms having the same rank and signature are, in a certain sense, EQUIVALENT.

THEOREM 51. Let A_1(x, x) and A_2(y, y) be two real quadratic forms in n variables having the same rank r and the same signature s. Then there is a real non-singular transformation x = By that transforms A_1(x, x) into A_2(y, y).

When A_1(x, x) is reduced to its canonical form it becomes

    β_1² X_1² + ... + β_μ² X_μ² − β_{μ+1}² X_{μ+1}² − ... − β_r² X_r²,     (1)

where the β's are real and μ = ½(s + r). The real transformation ξ_k = β_k X_k (k = 1, ..., r), ξ_k = X_k (k > r) changes (1) to

    ξ_1² + ... + ξ_μ² − ξ_{μ+1}² − ... − ξ_r².     (2)

Hence there is a real non-singular transformation, x = C_1 ξ say, that changes A_1(x, x) into (2). Equally, since A_2(y, y) has the same rank and signature, there is a real non-singular transformation, y = C_2 ξ say, that changes A_2(y, y) into (2); and, on considering the reciprocal process, ξ = C_2⁻¹ y changes (2) into A_2(y, y). Hence x = C_1 C_2⁻¹ y changes A_1(x, x) into A_2(y, y); that is, B = C_1 C_2⁻¹, which is real and non-singular.

EXAMPLES XII

1. Two forms A(x, x) and C(x, x) in n variables are such that a_rs = c_rs except possibly when both r > k and s > k. Prove that the λ equation of the two forms has k roots equal to unity.
2. Write down the λ equation of the two forms ax² + 2hxy + by² and a'x² + 2h'xy + b'y², and prove, by elementary methods, that the roots are real when a > 0 and ab − h² > 0.

HINT. The condition for real roots in λ becomes

    (ab' + a'b − 2hh')² ≥ 4(ab − h²)(a'b' − h'²).

When ab − h² > 0, the zeros α_1, α_2 of the first form are conjugate complexes and the result can be proved by writing α_1 = γ + iδ.

3. Prove that the latent roots of the matrix A of a positive-definite form A(x, x) are all positive.

4. Prove that when A(x, x) = …, two latent roots are positive and one is negative.

5. With A(x, x) = Σ a_rs x_r x_s (a_rs = a_sr), let D_k denote the leading k-rowed minor of |a_rs| and let Δ_k denote the determinant obtained by bordering D_k with x_1, ..., x_k and a final zero. Prove, by means of Theorem 18 applied to Δ_k, subtracting from the last column multiples of the other columns and then from the last row multiples of the other rows, that when k < n, Δ_k is independent of the variables x_1, ..., x_k.

6. Use the result of Example 5 to prove that, when no D_k is zero, the form may be expressed as a sum of r squares whose coefficients are the successive ratios D_k/D_{k−1} (D_0 = 1), each multiplied into the square of a linear form.

7. With the notation of Example 2, prove that the λ roots are real save when α_1, β_1 and α_2, β_2 are all real and the roots with suffix 1 separate those with suffix 2.

8. Show that

    4x_1² + 9x_2² + 2x_3² + 8x_2x_3 + 6x_3x_1 + 6x_1x_2

is of rank 3 and signature 1, and hence (by the method of Chap. XI. 2) find a canonical form of the form.

9. Verify Theorem 50 for any two independent reductions of this form to a canonical form.

10. Transform 4x_2x_3 + 2x_3x_1 + 6x_1x_2 into a form Σ a_rs x_r x_s with a_11 ≠ 0.

11. A is a Hermitian matrix and A(x, x̄) the associated Hermitian form. Prove that the discriminant of a Hermitian form A(x, x̄) is real.

12. Prove that, when a Hermitian form is transformed to new variables by means of the transformations x = BX, x̄ = B̄X̄, its discriminant is multiplied by |B| × |B̄|.

13. Prove the analogues of Theorems 46 and 47 for Hermitian forms.

14. Let A(x, x̄) and C(x, x̄) be Hermitian forms, of which C is positive-definite. Prove that the roots of |A − λC| = 0 are all real, and prove that there is a non-singular transformation that expresses the two forms as

    λ_1 X_1 X̄_1 + ... + λ_n X_n X̄_n,   X_1 X̄_1 + ... + X_n X̄_n,

where λ_1, ..., λ_n are the roots of |A − λC| = 0.

HINT. Compare Theorem 45 and use C(x, x̄) = Σ x_r x̄_r.

15. Prove that a quadratic form is the product of two linear factors if and only if its rank does not exceed 2.

HINT. Use Theorem 47. Consider what happens when the rank of A is less than n.

16. An example of some importance in analytical dynamics: the ROUTHIAN FUNCTION.† Let

    2T = Σ a_rs x_r x_s   (a_rs = a_sr)

be a quadratic form in m+n variables; it is required to express T in terms of x_{m+1}, ..., x_{m+n} and the new variables ξ_r = ∂T/∂x_r (r = 1, ..., m).

Solution. Use the summation convention: let the range of r and s be 1, ..., m; let the range of u and t be m+1, ..., m+n. Then 2T may be expressed as

    2T = a_rs x_r x_s + 2a_ru x_r y_u + a_ut y_u y_t,

where, as an additional distinguishing mark, we have written y instead of x whenever the suffix exceeds m. We have

    ξ_r = ∂T/∂x_r = a_rs x_s + a_ru y_u.

† Cf. Lamb, Higher Mechanics: the Routhian function. I owe this example to Mr. J. Hodgkinson.
Deduce the analogues of Theorems 43 and 44 for Hermitian forms.

Solution of Example 16 (continued). Multiply ξ_r = a_rs x_s + a_ru y_u by A_sr, the cofactor of a_sr in Δ = |a_rs|, a determinant of order m, and add: this expresses Δ x_s in terms of the ξ's and the y's. Moreover, since a_rs = a_sr, we also have A_sr = A_rs, and, since both r and s are dummy suffixes, the terms that would involve the product of a ξ by a y cancel when the substitution for the x's is carried out in 2T. Hence 2T may be written in the form

    2T = (A_rs/Δ) ξ_r ξ_s + b_ut y_u y_t,

where r, s run from 1 to m and u, t run from m+1 to m+n: the first group of terms is a quadratic form in ξ_1, ..., ξ_m only, the second a quadratic form in the y's only, and there are no terms involving the product of a ξ by a y.
CHAPTER XIII
ORTHOGONAL TRANSFORMATIONS

1. Definition and elementary properties

1.1. We recall the definition of the previous chapter.

DEFINITION 9. A transformation x = AX that transforms x_1² + ... + x_n² into X_1² + ... + X_n² is called an ORTHOGONAL TRANSFORMATION. The matrix A is called an ORTHOGONAL MATRIX.

The best-known example of such a transformation occurs in analytical geometry. When (x, y, z) are the coordinates of a point P referred to rectangular axes Ox, Oy, Oz and (X, Y, Z) are its coordinates referred to rectangular axes OX, OY, OZ, whose direction-cosines with regard to the former axes are (l_1, m_1, n_1), (l_2, m_2, n_2), (l_3, m_3, n_3), the two sets of coordinates are connected by the equations

    x = l_1 X + l_2 Y + l_3 Z,  y = m_1 X + m_2 Y + m_3 Z,  z = n_1 X + n_2 Y + n_3 Z.     (1)

Moreover, x² + y² + z² = OP² = X² + Y² + Z².

1.2. The matrix of an orthogonal transformation must have some special property. This is readily obtained. If A = [a_rs] and x = AX, then, for every set of values of the variables, x_1² + ... + x_n² = X_1² + ... + X_n², and hence

    Σ_i a_ir a_is = 1   (r = s),   = 0   (r ≠ s).     (2)

These are the relations that mark an orthogonal matrix: they are equivalent to the matrix equation A'A = I, as is seen by forming the matrix product A'A.

1.3. We now give four theorems that embody important properties of an orthogonal matrix.

THEOREM 52. A necessary and sufficient condition for a square matrix A to be orthogonal is AA' = I.

When (2) is satisfied, A' is the reciprocal of A (Theorem 25), so that AA' is also equal to the unit matrix; and from this fact follow the relations (by writing AA' in full)

    l_1² + l_2² + l_3² = 1,   l_1 m_1 + l_2 m_2 + l_3 m_3 = 0,

and so on. When n = 3 and the transformation is the change of axes noted in 1.1, the relations (1) and (2) take these well-known forms.

THEOREM 53. The modulus of an orthogonal transformation is either +1 or −1.

If AA' = I, then |A|.|A'| = 1, and, since |A'| = |A|, |A|² = 1, so that |A| = ±1.

COROLLARY. Every orthogonal transformation is non-singular.

THEOREM 54. The product of two orthogonal transformations is an orthogonal transformation.

Let x = AX, X = BY be orthogonal transformations. Then AA' = I, BB' = I. Hence

    (AB)(AB)' = ABB'A'  (Theorem 23)  = AIA' = AA' = I,

and the theorem is proved.

THEOREM 55. If λ is a latent root of an orthogonal transformation, then so is 1/λ.

Let A be an orthogonal matrix, so that AA' = I and A' = A⁻¹. By Theorem 36, Corollary, the characteristic (latent) roots of A' are the reciprocals of those of A. But the characteristic equation of A' is a determinant which, on interchanging its rows and columns, becomes the characteristic equation of A. Hence the n latent roots of A are the reciprocals of the n latent roots of A, and the theorem follows. (An alternative proof is given in Example 3, p. 168.)
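Theorems 52 to 54 are easy to check numerically. The 3 × 3 matrix below, with entries ±1/3 and ±2/3, is chosen here as an illustration: its rows are mutually perpendicular unit vectors, so AA' = I; its modulus is −1; and the product of the matrix with itself is again orthogonal.

```python
def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(P):
    return [list(row) for row in zip(*P)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A = [[2/3, -2/3, 1/3],
     [1/3, 2/3, 2/3],
     [2/3, 1/3, -2/3]]

AAt = matmul(A, transpose(A))   # AA' = I, so A is orthogonal
print(AAt)                      # the unit matrix, up to rounding
print(det3(A))                  # the modulus: -1 (Theorem 53)
B = matmul(A, A)                # product of orthogonal matrices
print(matmul(B, transpose(B)))  # again the unit matrix (Theorem 54)
```

The modulus −1 shows that A is a rotation combined with a reflection; a modulus of +1 would mark a pure rotation.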
2. The standard form of an orthogonal matrix

2.1. In order that A may be an orthogonal matrix the equations (2) of 1.2 must be satisfied. There are

    n + n(n−1)/2 = n(n+1)/2

of these equations, and there are n² elements in A. We may therefore expect† that the number of independent constants necessary to define A completely will be

    n² − n(n+1)/2 = n(n−1)/2.

If the general orthogonal matrix of order n is to be expressed in terms of some other type of matrix, we must look for a matrix that has n(n−1)/2 independent elements. Such a matrix is the skew-symmetric matrix of order n: the number of its independent elements, namely those lying above the leading diagonal, is n(n−1)/2. For example, when n = 3, a skew-symmetric matrix has 1 + 2 = 3 independent elements.

† The method of counting constants indicates what results to expect: it rarely proves those results, and in the crude form we have used above it certainly proves nothing.
2.2. THEOREM 56. If S is a skew-symmetric matrix of order n, and I + S is non-singular, then

    O ≡ (I − S)(I + S)⁻¹

is an orthogonal matrix of order n.

Since, by hypothesis, I + S is non-singular, it has a reciprocal (I + S)⁻¹. Since S is skew-symmetric, S' = −S, and so

    (I + S)' = I − S,   (I − S)' = I + S.

Hence, O being defined as above, we have

    O' = {(I + S)⁻¹}'(I − S)'  (Theorem 23)  = (I − S)⁻¹(I + S),     (5)

and, from (5),

    OO' = (I − S)(I + S)⁻¹(I − S)⁻¹(I + S).

Now I − S and I + S are commutative, and so also are their reciprocals. Hence

    OO' = (I − S)(I − S)⁻¹(I + S)⁻¹(I + S) = I,

and O is an orthogonal matrix (Theorem 52).

NOTE. This fact, here proved as a lemma towards Theorem 57, was well known to Cayley, who discovered Theorem 56.

When the elements of S are real, I + S cannot be singular.

LEMMA. If S is a real skew-symmetric matrix, I + S is non-singular.

Consider the determinant Δ(x) obtained by writing down the determinant of S and replacing the zeros of the principal diagonal by x; e.g., when n = 3,

    Δ(x) = | x    c   −b |
           | −c   x    a |
           | b   −a    x |.

The differential coefficient of Δ with respect to x is the sum of n determinants (Chap. I, 7), each of which is equal to a skew-symmetric determinant of order n−1 when x = 0. Differentiating again, we see that d²Δ/dx² is the sum of n(n−1) determinants, each of which is a skew-symmetric determinant of order n−2 when x = 0; and so on. By Maclaurin's theorem,

    Δ(x) = Δ_0 + x P_1 + x² P_2 + ... + xⁿ,     (7)

where Δ_0 is a skew-symmetric determinant of order n, P_1 is a sum of skew-symmetric determinants of order n−1, and so on. But a skew-symmetric determinant of odd order is equal to zero and one of even order is a perfect square (Theorems 19, 21). Thus Δ_0, P_1, P_2, ... are either squares or sums of squares; they cannot be negative, though they may be zero in special cases. Hence every term of (7) is non-negative when x > 0, while the term xⁿ is positive. The lemma follows on putting x = 1, for |I + S| = Δ(1) > 0.

2.3. THEOREM 57. Every real orthogonal matrix A can be expressed in the form

    A = J(I − S)(I + S)⁻¹,

where S is a skew-symmetric matrix and J is a matrix having ±1 in each diagonal place and zero elsewhere.

As a preliminary to the proof we establish a lemma that is true for all square matrices, orthogonal or not.
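Theorem 56 is easy to verify numerically. With n = 2 and the skew-symmetric matrix S chosen below for illustration, a minimal sketch computes O = (I − S)(I + S)⁻¹ and checks OO' = I:

```python
def inv2(M):
    # reciprocal of a non-singular 2x2 matrix
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def mul2(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[0.0, 1.0], [-1.0, 0.0]]           # skew-symmetric: S' = -S
I_minus_S = [[1.0, -1.0], [1.0, 1.0]]   # I - S
I_plus_S = [[1.0, 1.0], [-1.0, 1.0]]    # I + S, non-singular
O = mul2(I_minus_S, inv2(I_plus_S))
print(O)   # [[0.0, -1.0], [1.0, 0.0]]: a rotation through a right angle
Ot = [[O[0][0], O[1][0]], [O[0][1], O[1][1]]]
print(mul2(O, Ot))   # the unit matrix, so O is orthogonal
```

Note that |O| = +1 here; the factor J in Theorem 57 is needed precisely because the Cayley construction alone can never produce a matrix of modulus −1 when I + S is non-singular.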
LEMMA. Given a matrix A, it is possible to choose a matrix J, having ±1 in each diagonal place and zero elsewhere, so that −1 is not a latent root of the matrix JA.

Multiplication by J merely changes the signs of the elements of A row by row. Now −1 is a latent root of JA only if |JA + I| = 0. Suppose, then, that the lemma is false: that |JA + I| = 0 for every possible combination of row signs; e.g., when n = 3, both

    | a_11 + 1   a_12       a_13     |        | −a_11 + 1   −a_12      −a_13    |
    | a_21       a_22 + 1   a_23     | = 0,   | a_21        a_22 + 1   a_23     | = 0,     (8)
    | a_31       a_32       a_33 + 1 |        | a_31        a_32       a_33 + 1 |

the two forms shown being those with plus and with minus in the first row. But we can show that (8) is impossible. Adding the two forms of (8), we obtain

    | 1       0         0       |
    | a_21    a_22 + 1  a_23    | = 0
    | a_31    a_32      a_33 + 1|

for every possible combination of signs in the remaining rows. We can proceed by a like argument, reducing the order of the determinant by unity at each step, until we arrive at a_nn + 1 = 0 and −a_nn + 1 = 0: but it is impossible to have both. Hence (8) cannot be true for every combination of row signs, and the lemma is proved.

Proof of Theorem 57. Let A be a given orthogonal matrix with real elements. If A has a latent root −1, choose J_1, of the type described in the lemma, so that −1 is not a latent root of A_1 ≡ J_1 A. Then A = J A_1, where J ≡ J_1⁻¹ is also a matrix having ±1 in each diagonal place and zero elsewhere. Moreover A_1, being derived from A by a change of signs of certain rows, is itself orthogonal, and A_1 + I is non-singular. Hence it remains to prove that a real orthogonal matrix A_1 such that A_1 + I is non-singular can be written as (I − S)(I + S)⁻¹ with S a real skew-symmetric matrix. To do this, let

    S = (I − A_1)(I + A_1)⁻¹.     (10)

The matrices on the right of (10) are commutative, and we may write S, without ambiguity, as the product in either order. Then, when S is defined by (10),

    I + S = 2(I + A_1)⁻¹,   I − S = 2A_1(I + A_1)⁻¹,

so that I + S is non-singular and

    (I − S)(I + S)⁻¹ = A_1.     (12)

Further, on using the relation A_1 A_1' = I, so that A_1' = A_1⁻¹ and I and A_1' are commutative with A_1,

    S' = (I + A_1')⁻¹(I − A_1') = (I + A_1)⁻¹ A_1 . A_1⁻¹(A_1 − I) = −(I + A_1)⁻¹(I − A_1) = −S;

that is, S is a skew-symmetric matrix. We have thus proved the theorem.

2.4. Combining Theorems 56 and 57 in so far as they relate to matrices with real elements, we see that if S is a real skew-symmetric matrix, then J(I − S)(I + S)⁻¹ is
a real orthogonal matrix, and every real orthogonal matrix can be so written.

2.5. THEOREM 58. The latent roots of a real orthogonal matrix are of unit modulus.

Let [a_rs] be a real orthogonal matrix. If λ is a latent root of the matrix, the equations

    λ x_r = a_r1 x_1 + ... + a_rn x_n   (r = 1, ..., n)     (13)

have a solution x_1, ..., x_n other than x_1 = ... = x_n = 0 (Chap. VIII, 5). These x are not necessarily real; but since each a_rs is real we have, on taking the conjugate complex of (13),

    λ̄ x̄_r = a_r1 x̄_1 + ... + a_rn x̄_n   (r = 1, ..., n).

Multiplying each equation of (13) by its conjugate and adding, we obtain, by using the orthogonal relations (2) of 1.2,

    λλ̄ Σ_r x_r x̄_r = Σ_r x_r x̄_r.

But not all of x_1, ..., x_n are zero, and therefore Σ x_r x̄_r > 0. Hence λλ̄ = 1, which proves the theorem.

EXAMPLES XIII

1. Prove that the matrix

    ½ |  1   1   1   1 |
      |  1   1  −1  −1 |
      |  1  −1   1  −1 |
      |  1  −1  −1   1 |

is orthogonal. Find its latent roots and verify that Theorems 54, 55, and 58 hold for this matrix.

2. By taking the skew-symmetric matrix

    S = |  0   c  −b |
        | −c   0   a |
        |  b  −a   0 |

in Theorem 56, find the general orthogonal transformation of three variables in terms of the three parameters a, b, c.†

3. Prove Theorem 55 by the following method. Multiply the determinant |A − λI| by |A| and, in the resulting determinant, put A' = A⁻¹.

4. A transformation x = UX which makes Σ x_r x̄_r = Σ X_r X̄_r is called a UNITARY transformation. Prove that the mark of a unitary transformation is the matrix equation U Ū' = I.

5. Prove that the product of two unitary transformations is itself unitary.

6. Prove that the modulus M of a unitary transformation satisfies the equation M M̄ = 1.

7. Prove that each latent root of a unitary transformation is of the form e^{iα}, where α is real.

8. (Harder.) Λ denotes a given symmetric matrix, X'ΛX the quadratic form associated with Λ, and S a skew-symmetric matrix such that Λ + S is non-singular; R = (Λ + S)⁻¹(Λ − S). Prove that R'ΛR = Λ and R'SR = S, so that the transformation X = RY leaves the quadratic form X'ΛX unaltered.‡

† Compare Lamb, Higher Mechanics, chapter i, examples 20 and 21, where the transformation is obtained from kinematical considerations.
‡ These results are proved in Turnbull, The Theory of Determinants, Matrices, and Invariants.
CHAPTER XIV
INVARIANTS AND COVARIANTS

1. Introduction

1.1. A detailed study of invariants and covariants is not possible in a single chapter of a small book. Such a study requires a complete book, and books devoted exclusively to that study already exist. All we attempt here is to introduce the ideas and to develop them sufficiently for the reader to be able to employ them, be it in algebra or in analytical geometry.

1.2. Multiplication by a power of the modulus. We have already encountered certain invariants. In Theorem 39 we proved that when the variables of a quadratic form are changed by a linear transformation, the discriminant of the form is multiplied by the square of the modulus of the transformation. Multiplication by a power of the modulus, not necessarily the square, as a result of a linear transformation of the variables is the mark of what is called an 'invariant'. Strictly speaking, the word should mean something that does not change at all; anything that does not change at all after a linear transformation of the variables is called an 'ABSOLUTE INVARIANT'.

Before we can give a precise definition of 'invariant' we must explain certain technical terms that arise.

1.3. Definition of an algebraic form.

DEFINITION 10. A sum of terms, each of degree k in the n variables x, y, ..., t, namely

    Σ (k!/(α! β! ... λ!)) a_{αβ...λ} x^α y^β ... t^λ,     (1)

wherein the a's are arbitrary constants and the sum is taken over all sets of integer or zero values of α, β, ..., λ which satisfy the conditions

    0 ≤ α ≤ k, ..., 0 ≤ λ ≤ k,   α + β + ... + λ = k,     (2)

is called an ALGEBRAIC FORM of degree k in the variables x, y, ..., t.

For instance, ax² + 2hxy + by² is an algebraic form of degree 2 in x and y, and

    Σ (r = 0, ..., n) {n!/(r!(n−r)!)} a_r x^{n−r} y^r     (3)

is an algebraic form of degree n in x and y. The multinomial coefficients k!/(α! ... λ!) in (1) and the binomial coefficients n!/(r!(n−r)!) in (3) are not essential, but they lead to considerable simplifications in the resulting theory of the forms.

1.4. Notations. We shall use F(a, x) and similar notations to denote an algebraic form. In F(a, x) the single a symbolizes the various constants in (1) and (3) and the single x symbolizes the variables. If we wish to mark the degree k, we shall use notations such as

    (a; x, y)^k,   (a; x_1, ..., x_n)^k,     (4)

while the number of variables is either shown explicitly, as in (4), or is inferred from the context. As in previous chapters, Clarendon type, such as x or X, will be used to denote single-column matrices with n rows, the elements in the rows of x or X being the variables of whatever algebraic forms are under discussion.

The standard linear transformation from variables x_1, ..., x_n to variables X_1, ..., X_n, namely

    x_r = l_r1 X_1 + ... + l_rn X_n   (r = 1, ..., n),     (5)

will be denoted by

    x = MX,     (6)

M denoting the matrix of the coefficients l_rs in (5); |M| will denote the determinant whose elements are the elements of the matrix M.

1.5. Definition of an invariant. On making the substitutions (5) in a form

    F(a, x) = (a; x_1, ..., x_n)^k,     (7)

we obtain a form of degree k in X_1, ..., X_n; this form we denote by

    G(A, X) = (A; X_1, ..., X_n)^k.     (8)

The constants A in (8) depend on the constants a in (7) and on the coefficients l_rs. For example, if

    F(a, x) = a_11 x_1² + 2a_12 x_1 x_2 + a_22 x_2²

and x_1 = l_11 X_1 + l_12 X_2, x_2 = l_21 X_1 + l_22 X_2, then

    G(A, X) = A_11 X_1² + 2A_12 X_1 X_2 + A_22 X_2²,

where A_11 = a_11 l_11² + 2a_12 l_11 l_21 + a_22 l_21², and so on. In the sequel, the last thing we shall wish to do will be to calculate the actual expressions for the A's in terms of the a's: it will be sufficient for us to reflect that they could be calculated if necessary, and to remember, at times, that the A's are linear in the a's.

DEFINITION 11. A function of the coefficients a of an algebraic form F(a, x) is said to be an invariant of the form if, whatever the matrix M of the transformation (6) may be, the same function of the coefficients A of the transformed form G(A, X) is equal to the original function (of the coefficients a) multiplied by a power of the determinant |M|, the power of |M| in question being independent of the particular coefficients in M.
For example, a_11 a_22 − a_12² is an invariant of the algebraic form

    a_11 x_1² + 2a_12 x_1 x_2 + a_22 x_2²,

a result that may be proved either by laborious calculation or by an appeal to Theorem 39.

Again, we may consider not merely one form F(a, x), but several forms F_1(a, x), ..., F_r(a, x). On the transformation x = MX, the substitution being given by x = MX as before, we write G_1(A, X), ..., G_r(A, X) for the results of substituting for x in terms of X in F_1(a, x), ..., F_r(a, x).

DEFINITION 12. A function of the coefficients a of a number of algebraic forms F_1(a, x), ..., F_r(a, x) is said to be an invariant (sometimes a joint-invariant) of the forms if, whatever the matrix M of the transformation x = MX, the same function of the coefficients A of the resulting forms G_1(A, X), ..., G_r(A, X) is equal to the original function (of the coefficients a) multiplied by a certain power of the determinant |M|. An invariant is a function of the coefficients only.

For example, a_1 b_2 − a_2 b_1 is a joint invariant of the two forms

    F_1(a, x) = a_1 x + b_1 y,   F_2(a, x) = a_2 x + b_2 y.     (9)

For if

    x = l_11 X + l_12 Y,   y = l_21 X + l_22 Y,     (10)

so that F_1(a, x) = G_1(A, X) = A_1 X + B_1 Y and F_2(a, x) = G_2(A, X) = A_2 X + B_2 Y, where A_1 = a_1 l_11 + b_1 l_21 and so on, we see that

    A_1 B_2 − A_2 B_1 = (a_1 b_2 − a_2 b_1)|M|.     (11)

This example is but a simple case of the rule for forming the 'product' of two transformations: if we think of (9) as the transformation from variables F_1 and F_2 to variables x and y, and (10) as the transformation from x and y to X and Y, then (11) is merely a statement of the result proved in 3 of Chapter X.

1.6. Covariants. Certain functions which depend both on the coefficients and on the variables of a form F(a, x), or of a number of forms F_1(a, x), ..., F_r(a, x), share with invariants the property of being unaltered,
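The invariance of a_11 a_22 − a_12² can be confirmed numerically. In matrix language the new coefficients form M'AM (Theorem 37), so the new discriminant is |M'AM| = |M|²|A|. A small check, with numbers chosen here for illustration:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def congruent(A, M):
    # coefficient matrix of the transformed quadratic form: M'AM
    n = 2
    MtA = [[sum(M[k][i] * A[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
    return [[sum(MtA[i][k] * M[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [2, 5]]    # the form x1^2 + 4x1x2 + 5x2^2, discriminant 1
M = [[2, 1], [1, 2]]    # a transformation of modulus 3
B = congruent(A, M)
print(det2(B), det2(M) ** 2 * det2(A))   # both equal 9
```

The discriminant of the transformed form is the old discriminant multiplied by |M|² exactly, whatever M is chosen; here the power of the modulus is the square, as Theorem 39 asserts.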
v are expressed in terms of X arid Y. I dv 2 T"~" J \r a^ ? __ ~~ m du du . save for multiplication by a power of \M\. let u and v be forms in the variables x and In these.. y.l Y+. \ Let (13) 9 and.+A n Y n (12) asserts that 9 Then X The function depending on the constants a r br of the forms u. apart from the factor l^m^l^m^ unaltered . v and on the variables x. For example.INVARIANTS AND COVARIANTS 173 unaltered. Then we have. = L X+m Y. l^X+m^Y. u = AtX n +nAiX. y 2 = 2 by the rules of differential calculus.. when expressed in terms of X. make the substitutions x y. du 2 . Such funcvariables are changed by a substitution x = M tions are called COVARIANTS. ^x "dy dy' dY ~ dv " V1 dv l dx ""*dy* The rules for multiplying determinants show at once that du dX dv ~dX dY (12) dv dY (12) is best seen if 1 The import of we write n it in full. Y. . when the X. so that u.. du du T du dv j r> dv 1 . is.
.X). 1... and if x ^ ^U^Vy^jW) i~ . is called the Jacobian of the n forms. of a form F(a. a point of some interest to note that the wider definition can be adopted with the same ultimate restriction on the nature of the factor. namely......6 will show.. in the variables x.x) which is equal to the same function of the coefficients A. an invariant t v. wxz ^yz 2 Then the deter uxx uvx it (2) . Let u be a form . a = JfX. It is. When is defined as a function of the coefficients a. it in question must be a power of the In the present.. z and let uxx uxy >. v. Note on the definitions.. w. denote dujdx. v. when the ingly.. of the transform G(A. . as covariant of the forms u. of invariants and covariants x.. treatment of the subject we have left aside the possibility of the factor being other than a power of \M\... 2. dujdy.7.. z. multiplied by a factor that depends only on the constants of the transformation'. can be proved that the factor modulus of the transformation.2.... Let u. Jacobians.. y.. w.1. 0/ an extension of the argument of 1.. .. denote minant d u/dx 11 uxy uvv 2 2 . determinant y.. d u/dx8y.174 INVARIANTS AND COVARIANTS . Accord B we say that (14) is a co variant of the forms u. y replaced by the variables X. elementary. to be a power of the modulus. where ux uy . IT' ^ \ i 2. 4 constants ar br are replaced by the constants A r r and the variables x. It is usually written as It is. Y... however.. Examples 2. Hessians... . The w be n forms in the n variables wx u..
.... yields H(u'.. on using (4). __ a _ a _ Hence xx YX U ZX U YZ U ZY X UxY UyY as may be seen by multiplying the (5). (3) by considering the Jacobian of the forms ux uyy .X)= \M\*H(u\x).Aj is to say. and them u v u^.. last two determinants by rows and applying That Y\ ..INVARIANTS AND CO VARIANTS is 175 in fact.1 gives (4) But and if n _ 8u y then = m X+w 1 a F+. I . calling = \M\*H(u.... u and is a co variant of u\ we may prove that we H(u\X) For.r..x). we find that 2.. which. if called the Hessian of symbolize (2) as H(u.x).. I Mil M D2>"a(A.) ^ r.. .
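The covariance of the Hessian of a binary cubic, H(u′; X) = |M|^2 H(u; x), can be checked numerically. The sketch below is our own illustration, not the book's (all function names are ours): it substitutes x = lX + mY, y = l′X + m′Y into a cubic, forms the Hessian coefficients on each side, and compares.

```python
from fractions import Fraction
from math import comb

def binpow(p, q, n):
    # coefficients of (p*X + q*Y)**n, by descending power of X
    return [comb(n, i) * p**(n - i) * q**i for i in range(n + 1)]

def substitute(raw, l, m, lp, mp):
    """Apply x = l*X + m*Y, y = lp*X + mp*Y to the binary form with raw
    coefficients [c0, ..., ck], i.e. c0 x^k + c1 x^(k-1) y + ... + ck y^k."""
    k = len(raw) - 1
    out = [0] * (k + 1)
    for i, c in enumerate(raw):
        u, v = binpow(l, m, k - i), binpow(lp, mp, i)
        for s, us in enumerate(u):
            for t, vt in enumerate(v):
                out[s + t] += c * us * vt
    return out

def hessian(raw):
    """Hessian coefficients (a0 a2 - a1^2, a0 a3 - a1 a2, a1 a3 - a2^2) of
    a0 x^3 + 3 a1 x^2 y + 3 a2 x y^2 + a3 y^3, given its raw coefficients."""
    a0, a1, a2, a3 = raw[0], Fraction(raw[1], 3), Fraction(raw[2], 3), raw[3]
    return [a0*a2 - a1**2, a0*a3 - a1*a2, a1*a3 - a2**2]

# u = x^3 + 3 x^2 y + 2 y^3 under x = 2X + Y, y = X + Y (determinant 2*1 - 1*1 = 1)
raw = [1, 3, 0, 2]
l, m, lp, mp = 2, 1, 1, 1
det = l*mp - lp*m
lhs = hessian(substitute(raw, l, m, lp, mp))                        # Hessian of transformed cubic
rhs = [det**2 * c for c in substitute(hessian(raw), l, m, lp, mp)]  # det^2 times transformed Hessian
assert lhs == rhs
```

Exact rational arithmetic (`Fraction`) keeps the comparison free of rounding error; any integer substitution matrix may be tried in place of the one shown.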
2.3. Eliminant of linear forms. When the forms u, v, ..., w are linear in the variables we obtain the result that the determinant |a_rs| of the linear forms

    a_r1 x_1 + ... + a_rn x_n   (r = 1, 2, ..., n)

is an invariant. It is sometimes called the ELIMINANT of the forms. As we have seen in Theorem 11, the vanishing of the eliminant is the necessary and sufficient condition for the equations

    a_r1 x_1 + ... + a_rn x_n = 0   (r = 1, 2, ..., n)

to have a solution other than x_1 = x_2 = ... = x_n = 0. This provides an alternative proof of Theorem 39.

2.4. Discriminants of quadratic forms. When the form u of 2.2 is a quadratic form, its Hessian is independent of the variables and so is an invariant. When u is the quadratic form (6), we have

    ∂²u/∂x_r ∂x_s = 2 a_rs,

so that the Hessian of (6) is, apart from a power of 2, the determinant |a_rs|; that is, the discriminant of the quadratic form.

2.5. Invariants and covariants of a binary cubic. A form in two variables is usually called a binary form; thus the general binary cubic is

    u ≡ a0 x^3 + 3 a1 x^2 y + 3 a2 x y^2 + a3 y^3.   (7)

We can write down one covariant and deduce from it a second covariant and one invariant. The Hessian of (7), apart from numerical factors, is

    (a0 a2 − a1^2) x^2 + (a0 a3 − a1 a2) x y + (a1 a3 − a2^2) y^2.   (8)

The discriminant of (8) is an invariant of (8), and we may expect (we prove a general theorem later) that it is an invariant of (7) also. It is, apart from numerical factors,

    (a0 a3 − a1 a2)^2 − 4 (a0 a2 − a1^2)(a1 a3 − a2^2).   (9)

The reader will clarify his ideas if he writes down the precise meaning of the phrases 'is an invariant of (7)', 'is an invariant of (8)'.
Again, the Jacobian of (7) and (8), which we may expect† to be a covariant of (7), is, apart from numerical factors, a cubic covariant, namely

    (a0 x^2 + 2 a1 x y + a2 y^2){(a0 a3 − a1 a2) x + 2 (a1 a3 − a2^2) y}
        − (a1 x^2 + 2 a2 x y + a3 y^2){2 (a0 a2 − a1^2) x + (a0 a3 − a1 a2) y}
    = (a0^2 a3 − 3 a0 a1 a2 + 2 a1^3) x^3 + 3 (a0 a1 a3 − 2 a0 a2^2 + a1^2 a2) x^2 y
        + 3 (2 a1^2 a3 − a0 a2 a3 − a1 a2^2) x y^2 + (3 a1 a2 a3 − a0 a3^2 − 2 a2^3) y^3.   (10)

At a later stage (§ 4) we shall show that an algebraical relation connects (7), (8), (9), and (10). Meanwhile we note an interesting (and, in the advanced theory, an important) fact concerning the coefficients of the covariants (8) and (10). The coefficients c_r are not a disordered set of numbers, as a first glance at (10) would suggest. If we write

    c0 x^2 + c1 x y + (1/2!) c2 y^2

for a quadratic covariant,

    c0 x^3 + c1 x^2 y + (1/2!) c2 x y^2 + (1/3!) c3 y^3

for a cubic covariant, and so on for covariants of higher degree, then the coefficients after the first are given in terms of c0 by means of the formula

    c_{r+1} = (p a1 ∂/∂a0 + (p−1) a2 ∂/∂a1 + ... + a_p ∂/∂a_{p−1}) c_r,

where p is the degree of the form u. Thus, when we start with the cubic (7), for which p = 3, and take the covariant (8), written as c0 x^2 + c1 x y + (1/2!) c2 y^2, we have c0 = a0 a2 − a1^2,

    c1 = (3 a1 ∂/∂a0 + 2 a2 ∂/∂a1 + a3 ∂/∂a2)(a0 a2 − a1^2) = a0 a3 − a1 a2,

and so on: the whole covariant (8) is thus derived, by differentiations, from its leading term a0 a2 − a1^2.

Exercise. Prove that when (10) is written as c0 x^3 + c1 x^2 y + (1/2!) c2 x y^2 + (1/3!) c3 y^3, its coefficients are derived by the same formula from the leading term c0 = a0^2 a3 − 3 a0 a1 a2 + 2 a1^3.

† Compare 3.7.
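That the discriminant (9) is an invariant of the cubic (7), with multiplying factor |M|^6, can also be checked numerically. The following sketch is our own illustration in Python (names are ours): it transforms the coefficients under x = lX + mY, y = l′X + m′Y and compares the two discriminants.

```python
from fractions import Fraction
from math import comb

def binpow(p, q, n):
    # coefficients of (p*X + q*Y)**n, by descending power of X
    return [comb(n, i) * p**(n - i) * q**i for i in range(n + 1)]

def substitute(raw, l, m, lp, mp):
    # x = l*X + m*Y, y = lp*X + mp*Y applied to c0 x^3 + c1 x^2 y + c2 x y^2 + c3 y^3
    out = [0, 0, 0, 0]
    for i, c in enumerate(raw):
        u, v = binpow(l, m, 3 - i), binpow(lp, mp, i)
        for s, us in enumerate(u):
            for t, vt in enumerate(v):
                out[s + t] += c * us * vt
    return out

def disc(raw):
    # (a0 a3 - a1 a2)^2 - 4 (a0 a2 - a1^2)(a1 a3 - a2^2), i.e. formula (9)
    a0, a1, a2, a3 = raw[0], Fraction(raw[1], 3), Fraction(raw[2], 3), raw[3]
    return (a0*a3 - a1*a2)**2 - 4*(a0*a2 - a1**2)*(a1*a3 - a2**2)

raw = [1, 6, -3, 3]   # i.e. a0, a1, a2, a3 = 1, 2, -1, 3
assert disc(substitute(raw, 2, 1, 1, 1)) == disc(raw)          # determinant 1: unchanged
assert disc(substitute(raw, 2, 0, 0, 1)) == 2**6 * disc(raw)   # determinant 2: factor 2^6
```

The exponent 6 here is the weight ½kq = ½·3·4 established in 3.3 below.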
..... 9 a' )(x l9 ...... v is if = F(a' x) y = = ((*Q 9 ... 9 A m )(X l9 .... l9 it becomes 9 (A that is.aJ.X n )". a m )(x l9 ..X) a'.x) == (a Q . Just as (15) may be regarded as being a mere change of of the form notation in (14)....... Now (12) typifies any form of degree k in the n variables and (14) may be regarded as a statement concerning a?!. Equally.. A'm )(X l9 . (A' .... x n It may be written as y ..178 2. That is to say.x). // \ is an invariant of F(a. Let 9 = F(a... Xn 9 k ) +\(A'.6.. Thus.. xn k ) and.. A m +XA'm )(X Xn k ) . .... x n ) k 9 9 (12) and suppose we know a's. after the transformation x = M X... x n where s is some .. which is A m +XA'm )= m \M\<f>(a Q +Xa^. and \M\\ moreover... when we substitute x = JfX in (12) we obtain a form 0(A X) 9 u the relation  = (Ao 9 .... m the coefficients A' satisfy the relation (15) m 4(A^.. independent of the matrix M. each side of (16) is a polynomial in A whose coefficients are functions of the A. by considering the invariant </> u\Xv 9 we have 9 4(A Q +XA^. . INVARIANTS AND COVARIANTS A method u of forming joint invariants.a'm }. m 9 xn ) k 9 transformed by x v J/X into = G(A'. a m +Xa' ) 9 9 (16) Now a mere change of notation in (14).. the coefficients of any such form... (a Q +Xa' ..r 1 . 9 Xn )" 9 +\A^ 9 .. when A is an arbitrary constant.A' )= \M\^(a' ... u\\v is a form of degree k in the variables x v . ' that a certain polynomial function of the say J 0(a ... (16) is true for . 9 a m +A< l )(.. so.... a...... 9 X n )* A 9 (13) and we suppose that the a of (12) and the of (13) satisfy fixed constant. A m )(X (A l9 .A' )(X 1 ..
... Hence.> is m + Aam) (17).INVARIANTS AND COVARIANTS all 179 values of A and so the coefficients of A.. A'J = s \M ^ r K. (17) The rules of the differential calculus enable us to formulate the coefficients of the powers of A quite simply. Write Then b dX db m dX = and the value of (18) i$ + c)6 +^ db m when A = is given by Hence.... by Maclaurin's theorem. m <.. a m +Xa in ) <f>(a Q +Xa^..... a m .. A. A 2 .. a polynomial in A.. remembering the result we have (a .. AQ. for if this coefficient is > a Qi"> a m %>>  a m)> then (16) gives <f> r (A Q . a m +\a m ) y is a jointinvariant of u and <f>r( v.a the expansion terminating after a certain point since ^(a +Aao.... 1 \ (o oj d +....aJ. Hence the coefficient of each power of A in the expansion of <f>(a Q +Xa^..... Am ... (20) ....../ .+<4 U(a c*a.. on each side of (16) are equal.. a' )..
1 by considering the Jacobian of the is 8. b'c)y 2 is the Jacobian . \M\{b'(ax by)a'(bx + cy)}.cy 2 a'o: f. Prove the result of Example forms ax + by. ^ .. 2. .by. forms. v .2bxy + cy 2 a'x 2 + 2b'xy f c'y 2 f f 6. Prove that ab' + a'b forms ax 2 + 2hxy f by 2 a'x 2 f 2h'xy f 6'?/ 2 4wa. Prove the result of Example 2 by considering the 2 jointinvariant of ax 2 + 2hxy + by 2 and (a'x + b'y)*. i. first Prove the result of Example 3 by . 7. and so \M\ is equal to ^m^ l^m^ forms ax + by. a'x + b'y. v . 2hh' is an invariant of the two quadratic 3.YFf<7F 2 and so for other is taken to be x = 7 v . AB'A'B = ab' 2 Af (a&'a'&). AB' + A'B2HH' = Af 2 (a6 +a'62M ). Prove that 2 2ba'b'+ca' 2 is . are not intended to be proved by sheer = + 5. / /  4. 1. a'x\b'y. 2 ax 2 + 2bxy\cy 2 Ans. Ans.4X 2 f 2J5. showing that abh 2 is an invariant of ax 2 f 2hxy f by 2 10.6.e. . Prove that abc{2fghaf 2 bg ch 2 . liX+niiY. \M\ 2 (ab' 2ba'b'+ca' 2 ). . . 11.. Prove that &'(a:r}fo/) ax 2 f.b'y. ax 2 + 2bxy+cy 2 are transformed into ^Yf#F. exercises on 2. Prove that ab'a'b is an invariant (jointinvariant) of the forms ax}.2fa'2/ j. AB' 2BA'B'{CA' 2 = . 2 Prove that (ab a'b)x + (ac a'c)xy + (bc' of the forms ax 2 f. . In these examples the transformation y = l 2 X + m 2 Y. + 2h'xy. B'(AX + BY)A'(BX+CY) The remaining examples substitution. a'(frrfC2/) is a covariant of the two forms Ans. Prove the result of Example 4 by considering the Jacobian of the two forms ax 2 \2bxy + cy 2 a'x\b'y. an invariant of the forms a'x+b'y. the determinant A =SH a h 9 h b g f f is an invariant of the quadratic form ax 2 + by 2 +cz 2 + 2fyz+2gzx\2hxy. Prove that ^(b l c^ b 2 c l )(ax{hy\gz) the Jacobian (and so a covariant) of the three forms Examples 912 are 9. . and find two joint in variants of this form and of the form a'x 2 + .180 INVARIANTS AND CO VARIANTS EXAMPLES XIV A Examples 14 can bo worked by straightforward algebra and without appeal to any general theorem.
is the geometrical significance of this jointinvariant ? Prove that the second jointinvariant.e. b c c d e . chapter xx. \M 6 . = aA' + bB' + cC' + 2JF' + 2gG'ir 2hH\ a. 13. denote cofactors of some importance in analytical geometry..c. usually denoted by 0.6.2/) . y Compare 1 2. Prove that dx*dy is a covariant of a form u whose degree exceeds 4 and an invariant of a form u of degree 4. Multiplication by the determinant of Example 13 gives a I' determinant whose first row is [x = IX \. are of A'. indicated by 0' of Example 11. the discriminants A.2: here (/ f l'~) A multiplication of the determinant just obtained by that of Example 3 will give a determinant whose first row is Hence x 1 5. d 4 is an invariant of (o.  Prove that ace f 2bcd ad a b c b*e i. = JkfX multiplies the initial 2 determinant by c 3 .INVARIANTS AND COVARIANTS NOTE.. 211' I'* l'm 2mm m'* I'm' 14.mY. in where A. 0' 181 = a'A+ b'B+c'C+2f'F+ 2g'Q + 2h'H. Compare Somerville.d. Find a jointinvariant of the quadratic form ax* + by* + cz* f 2fyz + 2gzx f 2hxy .. e. Prove that I* Im lm f m* = f (Im'l'm)*.)(x. HINT.. What is identically zero.. A'. and the linear form Ix \my\nz.. Analytical Conies. These jointinvariants. a'. 0'. 12.
(2) is to say. q. am A fc .. If the reader does not know the result. xn = AX" n y . a m )(x v .. / is kQ .182 16...aJ... A m )= W/(a M. Let 7(a . . 9 I(A Q .. . tlie values .... . 0> .....+a k y is k the suffix of each coefficient is called its weight of a product of coefficients WEIGHT and the defined to be the sum of t The result is a wellknown one.. Hence That a (1) becomes 7(a A*.. a homogeneous*)* function of its arguments A"*. a result which proves. is a covariant of tl'ie forms ti. a m ) be an invariant of 3. Weight: binary forms.. v. In the binary form a Q x k +ka l x k ~l y+....and then showing that the assumption contradicts (2)... a \k ..A/(a = 0> . if its degree is q. x n k ) .. aj.. and w by s kq/n. u = (a ... The multiplying factor is M 3 Properties of invariants 3.. HINT. whatever M may be. for this particular transformation. ^ is homogeneous and of degree A... a proof can be obtained by assuming that / is the sum of I l9 / 2 . . Let the transformation x = MX change u into Then.. X that s is given in terms of further... INVARIANTS AND CO VARIANTS Prove that xx 'xx v xy v yy <> wxy .. in x v .. a m and. x n A m are. Invariants are homogeneous forms in the coefficients. a m A*) ..... (1) the index s being independent of Consider the transformation xl wherein \M\ Since of ^4 = AJtj.. each homogeneous and of degrees q l9 g 2 ". #2 = AJT 2. 3.1... fc.. =A 71 . w..2...
the weights of its constituent factors. Thus the weight of a product such as a1^2 a2 a3 is 1 + 1 + 2 + 3 = 7.

A polynomial form in the coefficients is said to be ISOBARIC when each term has the same weight. Thus a0 a2 − a1^2 is isobaric, for a0 a2 is of weight 0 + 2 and a1^2 is of weight 2 × 1; we refer to the whole expression a0 a2 − a1^2 as being of weight 2, the phrase implying that each term is of that weight.

3.3. Invariants of binary forms are isobaric. Let I(a0, ..., ak) be an invariant of

    u = (a0, ..., ak)(x, y)^k

and let I be a polynomial of degree q in a0, ..., ak. Let the transformation x = MX change u into

    Σ_r {k!/((k−r)! r!)} A_r X^{k−r} Y^r.   (4)

Then, by 3.1 (there being now two variables, so that n = 2),

    I(A0, ..., Ak) = |M|^{½kq} I(a0, ..., ak)   (5)

whatever the matrix M may be.

Consider the transformation x = X, y = λY, for which |M| = λ. The values of A0, ..., Ak are, for this particular transformation,

    a0, a1 λ, ..., a_r λ^r, ..., ak λ^k.

That is, the power of λ associated with each coefficient A is equal to the weight of the coefficient. Thus, for this particular transformation, the left-hand side of (5) is a polynomial in λ such that the coefficient of a power λ^w is a function of the a's of weight w. But the right-hand side of (5) is merely

    λ^{½kq} I(a0, ..., ak),   (6)

since, for this particular transformation, |M| = λ. Hence the only power of λ that can occur on the left-hand side of (5) is λ^{½kq}, and its coefficient must be a function of the a's of weight ½kq. Moreover, since the left-hand and the right-hand sides of (5) are identical, this coefficient of λ^{½kq} on the left of (5) must be I(a0, ..., ak). Hence I(a0, ..., ak) is of weight ½kq.

3.4. Notice that the weight of a polynomial in the a's must be an integer, so that we have, when I(a0, ..., ak) is a polynomial invariant of a binary form,

    I(A0, ..., Ak) = |M|^s I(a0, ..., ak),   (7)

and

    ½kq = weight of a polynomial = an integer.

Accordingly, the s of (7) must be an integer.

3.5. Weight: forms in more than two variables. In the form (1.3), say

    u = Σ a_{αβ...λ} x1^α x2^β ... xn^λ,   (8)

the suffix λ (corresponding to the power of xn in the term) is called the weight of the coefficient a_{αβ...λ}, and the weight of a product of coefficients is defined to be the sum of the weights of its constituent factors.

Let I(a) denote a polynomial invariant of (8) of degree q in the a's. Let the transformation x = MX change (8) into

    Σ A_{αβ...λ} X1^α X2^β ... Xn^λ.   (9)

Then, by 3.1,

    I(A) = |M|^{kq/n} I(a)   (10)

whatever the matrix M may be.

Consider the particular transformation

    x1 = X1, ..., x_{n−1} = X_{n−1}, xn = λXn,

for which |M| = λ. Then, by a repetition of the argument of 3.3, we can prove that I(a) must be isobaric and of weight kq/n. Moreover, since the weight is necessarily an integer, kq/n must be an integer.
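The argument of 3.3 can be seen in action with a concrete invariant. The discriminant of the binary cubic is of degree q = 4 in the coefficients, so its weight is ½kq = 6, and under x = X, y = λY (which replaces a_r by a_r λ^r) it must pick up exactly λ^6. The short check below is our own illustration, not the book's; the function name is ours.

```python
def disc(a):
    # discriminant of a0 x^3 + 3 a1 x^2 y + 3 a2 x y^2 + a3 y^3;
    # every monomial, e.g. (a0 a3)^2 or (a0 a2)(a1 a3), has weight 6
    a0, a1, a2, a3 = a
    return (a0*a3 - a1*a2)**2 - 4*(a0*a2 - a1**2)*(a1*a3 - a2**2)

a = [1, 2, -1, 3]
lam = 5
scaled = [a[r] * lam**r for r in range(4)]   # effect of x = X, y = lam*Y on the coefficients
assert disc(scaled) == lam**6 * disc(a)      # isobaric, of weight 6
```

Any single monomial of the wrong weight would break the equality for some λ, which is precisely the argument of 3.3.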
EXAMPLES XIV B
Examples 2–4 are the extensions to covariants of results already proved for invariants. The proofs of the latter need only slight modification.
1. If C(a,x) is a covariant of a given form u, and C(a,x) is a sum of algebraic forms of different degrees in the variables, say

    C(a,x) = C1(a,x)^{m1} + ... + Cr(a,x)^{mr},

where the index denotes degree in the variables, then each separate term Ct(a,x) is itself a covariant.

HINT. Since C(a,x) is a covariant, C(A,X) = |M|^s C(a,x). Put t x1, ..., t xn for x1, ..., xn and, consequently, t X1, ..., t Xn for X1, ..., Xn. The result is

    Σ_r Cr(A,X) t^{mr} = |M|^s Σ_r Cr(a,x) t^{mr}.

Each side is a polynomial in t and we may equate coefficients of like powers of t.

2. A covariant of degree w in the variables is homogeneous in the coefficients.

HINT. The particular transformation x1 = λX1, ..., xn = λXn gives, as in 3.1,

    C(aλ^k, X)^w = λ^{ns} C(a,x)^w, i.e. C(aλ^k, X)^w = λ^{ns+w} C(a,X)^w.

This proves the required result and, further, shows that, if C is of degree q in the coefficients a,

    kq = ns + w, or s = (kq − w)/n.

3. If in a covariant of a binary form in x, y we consider x to have weight unity and y to have weight zero (x^2 of weight 2, etc.), then a covariant C(a,x)^m of degree q in the coefficients a is isobaric and of weight ½(kq + m).

HINT. Consider the particular transformation x = X, y = λY, and follow the line of argument of 3.3.

4. If in a covariant of a form in x1, ..., xn we consider x1, ..., x_{n−1} to have unit weight and xn to have zero weight (x1 x2 of weight 2, etc.), then a covariant C(a,x)^w of degree q in the coefficients a is isobaric and of weight {kq + (n−1)w}/n.

HINT. Compare 3.5.

3.6. Invariants of a covariant. Let u(a,x)^k be a given form of degree k in the n variables x1, ..., xn. Let C(a,x)^w be a covariant of u, of degree w in the variables and of degree q in the coefficients a. The coefficients in C(a,x)^w are not the actual a's of u, but homogeneous functions of them.
The transformation x = MX changes u(a,x) into u(A,X) and, since C is a covariant, there is an s such that

    C(A,X)^w = |M|^s C(a,x)^w.

Thus the transformation changes C(a,x)^w into |M|^{−s} C(A,X)^w. But C is a homogeneous function of degree q in the a's, and so also in the A's. Hence

    C(a,x)^w = |M|^{−s} C(A,X)^w = C(A|M|^{−s/q}, X)^w.   (11)

Now suppose that I(b) is an invariant of v(b,x), a form of degree w in n variables. That is, if the coefficients are indicated by B when v is expressed in terms of X, there is a t for which

    I(B) = |M|^t I(b).   (12)

Take v(b,x) to be the covariant C(a,x)^w; then (11) shows that the corresponding B are the coefficients of the terms in C(A|M|^{−s/q}, X). If I(b) is of degree r in the b, then, when expressed in terms of the coefficients a, it is a function I1(a), homogeneous and of degree rq in the a. Moreover, (12) gives

    I1(A|M|^{−s/q}) = |M|^t I1(a),

or, on using the fact that I1 is homogeneous and of degree rq in the a's,

    I1(A) = |M|^{t+rs} I1(a).

That is, I1(a) is an invariant of the original form u(a,x).

The generalities of the foregoing work are not easy to follow. The reader should study them in relation to the particular example which we now give; it is taken from 2.5. The cubic

    u = a0 x^3 + 3 a1 x^2 y + 3 a2 x y^2 + a3 y^3

has a covariant

    C(a,x)^2 = (a0 a2 − a1^2) x^2 + (a0 a3 − a1 a2) x y + (a1 a3 − a2^2) y^2.   (11a)

Being the Hessian of u, this covariant has a multiplying factor |M|^2; that is to say,

    (A0 A2 − A1^2) X^2 + ... = |M|^2 {(a0 a2 − a1^2) x^2 + ...},

which corresponds to (11) above. But if x = MX changes the quadratic form b0 x^2 + 2 b1 x y + b2 y^2 into B0 X^2 + 2 B1 X Y + B2 Y^2, then

    B0 B2 − B1^2 = |M|^2 (b0 b2 − b1^2).   (12a)

Take for the b's the coefficients of (11a) and we see that

    (A0 A2 − A1^2)(A1 A3 − A2^2) − ¼(A0 A3 − A1 A2)^2
        = |M|^6 {(a0 a2 − a1^2)(a1 a3 − a2^2) − ¼(a0 a3 − a1 a2)^2},

which proves that (9) of 2.5 is an invariant of the cubic.

3.7. Covariants of a covariant. A slight extension of the argument of 3.6, using a covariant C(b,x) of a form of degree w in n variables where 3.6 uses I(b), will prove that C(b,x), when the b's are taken to be the coefficients of the covariant C(a,x)^w, gives rise to a covariant of the original form u(a,x).

Equally, if I(a) is an invariant of a form u(a,x), it is immediately obvious, from the definition of an invariant, that the square, cube, ... of I(a) are also invariants of u. Thus there is an unlimited number of invariants, and so for covariants.

3.8. Irreducible invariants and covariants. On the other hand, there is, for a given form u, only a finite number of invariants which cannot be expressed rationally and integrally in terms of invariants of equal or lower degree. Equally, there is only a finite number of covariants which cannot be expressed rationally and integrally in terms of invariants and covariants of equal or lower degree in the coefficients of u. Invariants or covariants which cannot be so expressed are called irreducible. The theorem may then be stated:

'The number of irreducible covariants and invariants of a given form is finite.'

This theorem and its extension to the joint covariants and invariants of any given system of forms is sometimes called the Gordan–Hilbert theorem. Its proof is beyond the scope of the present book.

Even among the irreducible covariants and invariants there may be an algebraical relation of such a kind that no one of
its terms can be expressed as a rational and integral function of the rest. For example, a cubic u has three irreducible covariants, u itself, a covariant H, and a covariant G, and it has one irreducible invariant Δ: there is a relation connecting these four. We shall prove this relation in 4.3.

4. Canonical forms

4.1. Binary cubics. Readers will be familiar with the device of considering ax^2 + 2hxy + by^2 in the form aX^2 + b1 Y^2, obtained by writing it as

    a(x + hy/a)^2 + (b − h^2/a) y^2

and making the substitution X = x + (h/a)y, Y = y. The general quadratic form was considered in Theorem 47, p. 148.

We now show that the cubic

    a0 x^3 + 3 a1 x^2 y + 3 a2 x y^2 + a3 y^3   (1)

may, in general, be written as pX^3 + qY^3, where X and Y are linear in x and y; the exceptions are cubics that contain a squared factor. There are several ways of proving this: the method that follows is, perhaps, the most direct.

We pose the problem 'Is it possible to find p, q, α, β so that (1) is identically equal to

    p(x + αy)^3 + q(x + βy)^3?'   (2)

It is possible if, having chosen α and β, we can then choose p and q to satisfy the four equations

    p + q = a0,  pα + qβ = a1,  pα^2 + qβ^2 = a2,  pα^3 + qβ^3 = a3.   (3)

For general values of a0, a1, a2, a3 these four equations in p, q alone cannot be consistent. We shall proceed, at first, on the assumption α ≠ β.

The third equation of (3) follows from the first two if we choose P, Q so that, simultaneously,

    a2 + Pa1 + Qa0 = 0,  α(α^2 + Pα + Q) = 0,  β(β^2 + Pβ + Q) = 0.

The fourth equation of (3) follows from the second and third if P, Q also satisfy

    a3 + Pa2 + Qa1 = 0.

That is to say, the third and fourth equations are linear consequences of the first two if α, β are the roots of

    t^2 + Pt + Q = 0,

where P, Q are determined from the two equations

    a2 + Pa1 + Qa0 = 0,  a3 + Pa2 + Qa1 = 0.

Hence α, β are the roots of

    (a0 a2 − a1^2) X^2 + (a1 a2 − a0 a3) X + (a1 a3 − a2^2) = 0

and, provided they are distinct, p and q can then be determined from the first two equations of (3). Thus (1) can be expressed in the form (2) provided that

    (4) ≡ (a1 a2 − a0 a3)^2 − 4 (a0 a2 − a1^2)(a1 a3 − a2^2)

is not zero.

We may readily see what cubics are excluded by the provision that (4) is not zero. If (4) is zero, the two quadratics

    a0 x^2 + 2 a1 x y + a2 y^2,  a1 x^2 + 2 a2 x y + a3 y^2

have a common linear factor; hence†

    a0 x^3 + 3 a1 x^2 y + 3 a2 x y^2 + a3 y^3

and a0 x^2 + 2 a1 x y + a2 y^2 have a common linear factor, and so the former has a repeated linear factor. Such a cubic may be written as X^2 Y, where X and Y are linear in x and y.

† The reader will more readily recognize the argument in the form: a0 x^2 + 2 a1 x + a2 = 0 and a1 x^2 + 2 a2 x + a3 = 0 have a common root; hence F(x) ≡ a0 x^3 + 3 a1 x^2 + 3 a2 x + a3 = 0 and F′(x) = 0 have a common root, and so F(x) = 0 has a repeated root.
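The construction of the canonical form pX^3 + qY^3 can be carried out numerically: α and β come from the resolvent quadratic, and p, q from the first two equations of (3); the third and fourth equations are then satisfied automatically. The sketch below is our own illustration (all names are ours); complex arithmetic covers the case where α and β are conjugate.

```python
import cmath

def canonical_cubic(a0, a1, a2, a3):
    """Find p, alpha, q, beta with
    a0 x^3 + 3 a1 x^2 y + 3 a2 x y^2 + a3 y^3 = p(x + alpha y)^3 + q(x + beta y)^3.
    Cubics with a repeated factor (expression (4) zero) are excluded."""
    A, B, C = a0*a2 - a1**2, a1*a2 - a0*a3, a1*a3 - a2**2
    d = B*B - 4*A*C                      # this is expression (4) of the text
    if A == 0 or d == 0:
        raise ValueError("excluded case: repeated factor, or alpha infinite")
    r = cmath.sqrt(complex(d))
    alpha, beta = (-B + r) / (2*A), (-B - r) / (2*A)
    p = (a1 - a0*beta) / (alpha - beta)  # from p + q = a0 and p*alpha + q*beta = a1
    return p, alpha, a0 - p, beta

def binomial_coeffs(p, alpha, q, beta):
    # a0..a3 of p(x + alpha y)^3 + q(x + beta y)^3, for checking the identity
    return [p + q, p*alpha + q*beta, p*alpha**2 + q*beta**2, p*alpha**3 + q*beta**3]

p, alpha, q, beta = canonical_cubic(1, 2, -1, 3)
for got, want in zip(binomial_coeffs(p, alpha, q, beta), [1, 2, -1, 3]):
    # the third and fourth equations of (3) come out automatically
    assert abs(got - want) < 1e-9
```

Only the first two equations of (3) are imposed; that the reconstruction reproduces a2 and a3 as well is exactly the point of the argument above.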
Then. may (6) (5) as the product of 2 two quadratic 2 .5]. Relations among the invariants and covariants of a form are fairly easy to detect when the canonical form is considered. (aga 3 3a a 1 a 2 +2af)a.. we can write these in the forms Moreover. [(8) 3 H= G of 2. and the quartic may be on making a final substitution . write By f combining these in pairs we factors.a 4 )(^?/) (5) has four linear factors. For example. neither A t nor A 2 zero...AJ. 2. The quartic 4 (a 0v . INVARIANTS AND COVARIANTS Binary quartics. without insisting on the transformation being real.5) a covariant a covariant have (a Q a 2 al)x*+. say . we may write the factors (6) in the formsf By applying Theorem 48. [(10) of and an invariant A f = (a a 3 a 1 a 2 ) 2 4(a a 2 af)(a 1 a 3 a 2 ). the cubic u is =a to x 3 +3a l x 2y+3a 2 xy 2 +a 3 y* known (2. at a"x 2 +2h"xy+b"y 2 first. hitherto excluded from our discussion. the two distinct factors of a'x* f 2h'xy f b'y* . preclude quartics that have a squared factor. not neces sarily real. Application of canonical forms. X.A.. = AJ.3.Xl+^X*. by a linear transformation..5]. may be written as is 4. A quartic having a squared factor..190 4.2. aoX +2h'xy+b y Let us. since written as or. X*+X*. We have merely to identify with X + iY and XiY. (8) we may This write the quartic as the canonical form for the general quartic. we are precluding quartics that have is (7) a squared linear factor. linear in x and y. +.
. and 1 let x'. The factor M 3 of (7 most easily determined by considering the transformation x A^Y. A^ If 2 = +4# 3 . be projected into the AEG y referred to A' B'C' metry prove (in one form or another). A 3 ... Let A' B'C' be the projection on TT' of z' be the homogeneous coordinates of P' Then. G and Geometrical invariants 5. .1. or trilinear. n = . u y at once obvious that Now. unless 2 A = 0. in a plane TT. In these while It is = pX*+qY*. and the multiplying factor is A 6 or \M\ A . A% = a 3 A 3 so that y = AF. (9) A = 0.INVARIANTS AND COVARIANTS Unless 191 A Y= x{/3y the cubic can. a form for which both 5. .p*qX*pq*Y* A! = p*q*. to be definite) referred to a triangle point P' in the plane ABC TT'. The factor M 6 of A may bo determined in the same way. = \M 2 #. which gives A =  is .. variables the covariants H and G become #1 ^ PIXY. . 0.  (?! The factor of H is Af a because H is a Hessian. a fa/j y = my'. X 2 Y. Let a point P. by the transformation of 4. having homogeneous coordinates x. \M\ being the modulus of the transformation x fli J/X. be expressed as X = x\oty. there are constants such that z = nz'. '. Accordingly. ra. Projective invariants. . y. the cubic can be written as // are zero.1. we have proved that. z (say areal. as many books on analytical geoI.
since x = X+m 7+n Z. y = Q. y = X+m l 2 2 Y. f wherein. Z x y' z' ^ =X =A 2 X+fjL 2 Y+v 2 Z. We shall not attempt any systematic development of the geometrical approach to invariant theory: we give merely a few isolated examples. z = X+w y+n 3 Z. it becomes corre AX*+2HXY+BY* = 0. 2 a'x*+2h xy+b'y . any invariant or covariant of algebraic forms may be expected to correspond to some projective property of a geometrical figure. other than A'B'C'. leads to the type of transformation we have been considering in the earlier sections of the chapter. say. The crossratio of a pencil of four lines jection: the condition that the is unaltered by pro2 two pairs of lines f ax +2hxy+by*. figure in Thus the projection of a rise to a on to a plane ' IT' gives transformation of the type x y = liX+miY+n^Z. the determinant (/ x ra 2 n3 ) is not zero. Geometrical properties of figures that are unaltered by projection Thus projection (projective properties) may be expected to correspond to in variants or covariants of algebraic forms and. TT 3 *+/i3 7+1/3 Z. in TT'. = 0. The binary transformation x = liX+m^Y. concurrent lines. conversely. after transformation by (2). z ~ are not 2 2 a (1) Z 3 3 . F.192 INVARIANTS AND COVARIANTS be the homogeneous coordinates of P' referred to a triangle. Then there are relations Let X. which is a pair of lines through a vertex of the triangle of reference in the projected figure. for such an equation as ax 2 +2hxy+by 2 sponds to a pair of lines through (7. (2) be considered as the form taken by (1) when only lines through the vertex C of the original triangle of reference are may in question.
i. is an invariant of the two alge Again. 181). Now proper conies project into proper conies. Moreover. The condition that the four lines form a harmonic pencil J ace+2bcdad'b*e~c* = and. Again. Thus all the geometrical marks of a Hessian are unaltered by projection and.INVARIANTS AND COVARIANTS 193 0: this represhould form a harmonic pencil is ab'+a'b2hh' sents a property that is unaltered by projection and. thus geometrical definition turns upon projective properties of the curves and. the invariants and covariants we have hitherto been considering are sometimes called projective invariants and Covariants. and multiple points into multiple points. The condition that the four lines form an equi harmonic pencil. is l ~P P P~ l from the permutations of the order of the Zc 2 = 0. lmes to indicate ** HE we may expect 22 the crossratio of the four ** = is some of the invariants of the quartic. in geometry. points of inflexion into points of inflexion. p. J is an invariant of the quartic. as we should expect in such circumstances. as we = should expect. ab'}a'b2hh' braic forms. 4702 c c . the Jacobian of any three algebraic forms is a covariant. that the first and fourth of the crossratios P (the six values arising lines) are equal. linepairs into linepairs.e. Similarly. the Hessian proves to be a covariant of an algebraic form. the Hessian of a given curve is the locus of a point whose polar conic with respect to the curve is a pair of lines: the locus meets the curve only at the inflexions and multiple points of the curve. as we should expect. as we have seen (Example 15. the Jacobian of three curves is the locus of a point whose polar lines its with respect to the three curves are concurrent. In view of their close connexion with the geometry of projection. / is an invariant of the quartic.
is a particular type of linear transformation. they transformation. the discriminant A is an invariant of the form for all linear transformations. By equating two equations we obtain the coefficients of powers of A in the a+b+c = ai+bi+Cv A + B+C . The orthogonal transformation from one rectangular axes. INVARIANTS AND COVARIANTS Metrical invariants.Ai+Bi+Ct where (A = fee/ 2 . etc.2. since (3) also transforms x +y }z into 2 2 2 the values of A for which (4) X(x +y +z ) is a pair of linear 2 2 2 is factors are the values of A for which (5) \(X ) . bc+ca}ab~f 2 ~g 2 ~h 2 are invariant under orthogonal are not invariant under the general linear transformation. 0. bl h 9 b\ f f A A Cj cA A respectively. and (l B . al X +b Y 2 l +c l Z 2 +2fl YZ+2g 1 ZX+2h l XY.n 3 ) are the directioncosines of three mutually perpendicular lines.). (3) is equivalent to a change set of rectangular axes of reference to another set of It leaves unaltered all properties of a geo .194 5. It leaves unchanged certain functions of the coefficients of ax 2 +by 2 +cz 2 +2fyz+ 2gzx+ 2hxy (4) which are not invariant under the general If (3) transforms (4) into 2 linear transformation. (/ 2 .m 3 . The orthogonal transformation = 2 X+m 2 Y+n l 2 Z. (3) wherein (l^m^n^.ri 2 ).ra 2 . a+6+c. X +Z  +Y +Z a pair of linear a A factors. On the other hand. These values of A are given by h g = 0. A is the discriminant of the form (4). That is to say. (5) we 2 2 2 2 2 2 {Y have.
. under any orthogonal transformation: for (6) is the cosine of the angle between the two planes. m (# m . z m ).. say (a?!. suppose all rows after the rth can be expressed as sums of multiples of the first r rows. for any letter t of x.. Let the components of x be x. the its number of intersections of a curve with Hessian. is an invariant of the linear forms Xx+py+vz.. Then (Theorem 31) we can choose r rows and express the others as sums of multiples of these r rows.. and so on. Then there are constants A lfc . .... z. Let points (or particular values of x) be given. (1) f This also is obvious to anyone with some knowledge of ndimensional geometry. as they are called..1. Such are the variables undergo the transformation x = M degree of the curve..... Then the rank of the matrix is unaltered by any nonsingular linear transformation Suppose the matrix is of rank r (< m)..INVARIANTS AND COVARIANTS metrical figure 195 that depend solely on measurement. r. There are certain integers associated with algebraic forms (or curves) that are obviously unchanged when the X... is the rank of the matrix formed by the coordinates of a number of points. ... 6. A^ such that.. z. Zi). n in number. One of the less obviousf of these arithmetic invariants.. For example. Arithmetic and other invariants 6... Such examples have but little algebraic interest. ?/. For convenience of writing..
INVARIANTS AND COVARIANTS

6. Transvectants. We conclude with an application to binary forms of invariants which are derived from the consideration of particular values of the variables.

Let (x_1, y_1) and (x_2, y_2) be two cogredient pairs of variables (x, y), each subject to the transformation

    x = lX + mY,   y = l'X + m'Y,   (3)

wherein lm' - l'm is not zero. Since

    d/dX_1 = l d/dx_1 + l' d/dy_1,   d/dY_1 = m d/dx_1 + m' d/dy_1,

and so on, the operators d/dx, d/dy being contragredient to (3), we have, when X_1, Y_1, X_2, Y_2 are taken as independent variables,

    d^2/dX_1 dY_2 - d^2/dX_2 dY_1 = (lm' - l'm)(d^2/dx_1 dy_2 - d^2/dx_2 dy_1).   (4)

That is to say, (4) is an invariant operator.

Let u_1, v_1 be the binary forms u, v when we put x = x_1, y = y_1; let u_2, v_2 be the same forms when we put x = x_2, y = y_2; and let U_1, V_1 and U_2, V_2 be the corresponding forms when u, v are expressed in terms of X, Y and the particular values X_1, Y_1 and X_2, Y_2 are introduced. Then, operating on the product U_1 V_2, we have

    (d^2/dX_1 dY_2 - d^2/dX_2 dY_1)(U_1 V_2) = (lm' - l'm)(d^2/dx_1 dy_2 - d^2/dx_2 dy_1)(u_1 v_2),

and so on for repeated applications of the operator. That is, for any integer r,

    (d^2/dX_1 dY_2 - d^2/dX_2 dY_1)^r (U_1 V_2) = (lm' - l'm)^r (d^2/dx_1 dy_2 - d^2/dx_2 dy_1)^r (u_1 v_2).

These results are true for all pairs (x_1, y_1) and (x_2, y_2). We may, then, replace both pairs by (x, y) and so obtain the theorem that

    (d^r u/dx^r)(d^r v/dy^r) - r (d^r u/dx^{r-1} dy)(d^r v/dx dy^{r-1})
        + (1/2)r(r-1) (d^r u/dx^{r-2} dy^2)(d^r v/dx^2 dy^{r-2}) - ...   (5)

is a covariant (possibly an invariant) of the two forms u and v. It is called the rth TRANSVECTANT of u and v. Finally, when we take v = u in (5), we obtain covariants of the single form u.
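Formula (5) is mechanical enough to compute directly. The sketch below is a modern illustration, not part of the original text: a binary form is represented as a dictionary mapping monomials (i, j), meaning x^i y^j, to coefficients, a representation of our own choosing. For r = 2 and v = u it recovers a numerical multiple of the discriminant of a binary quadratic, as the theory predicts.

```python
from math import comb

# A binary form is a dict mapping (i, j) -> coefficient of x^i * y^j.

def deriv(p, var):
    """Formal partial derivative with respect to x (var = 0) or y (var = 1)."""
    out = {}
    for (i, j), c in p.items():
        e = (i, j)[var]
        if e:
            key = (i - 1, j) if var == 0 else (i, j - 1)
            out[key] = out.get(key, 0) + c * e
    return out

def nderiv(p, nx, ny):
    """Apply d^nx/dx^nx then d^ny/dy^ny."""
    for _ in range(nx):
        p = deriv(p, 0)
    for _ in range(ny):
        p = deriv(p, 1)
    return p

def mul(p, q):
    """Product of two forms."""
    out = {}
    for (i, j), c in p.items():
        for (k, l), d in q.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c * d
    return out

def transvectant(u, v, r):
    """The rth transvectant by formula (5):
    sum over k of (-1)^k * C(r, k) * u_{x^(r-k) y^k} * v_{x^k y^(r-k)}."""
    out = {}
    for k in range(r + 1):
        term = mul(nderiv(u, r - k, k), nderiv(v, k, r - k))
        for key, c in term.items():
            out[key] = out.get(key, 0) + (-1) ** k * comb(r, k) * c
    return {key: c for key, c in out.items() if c}

u = {(2, 0): 1, (0, 2): 1}        # the quadratic x^2 + y^2 (a = c = 1, b = 0)
print(transvectant(u, u, 2))       # {(0, 0): 8}, i.e. 8(ac - b^2)
```

With u taken as a x^2 + 2b xy + c y^2 the second transvectant of u with itself always comes out as the constant 8(ac - b^2), a multiple of the discriminant, and transvectant(u, v, 1) is the Jacobian of u and v.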
This method can be made the basis of a systematic attack on the invariants and covariants of a given set of forms.

7. Further reading. The present chapter has done no more than sketch some of the elementary results from which the theory of invariants and covariants starts. The following books deal more fully with the subject:

E. B. ELLIOTT, An Introduction to the Algebra of Quantics (Oxford, 1896).
GRACE and YOUNG, The Algebra of Invariants (Cambridge, 1902).
WEITZENBÖCK, Invariantentheorie (Groningen, 1923).
H. W. TURNBULL, The Theory of Determinants, Matrices, and Invariants (Glasgow, 1928).
EXAMPLES XIV c

1. Write down the most general function of the coefficients in a_0 x^3 + 3a_1 x^2 y + 3a_2 xy^2 + a_3 y^3 which is (i) homogeneous and of degree 2 in the coefficients, isobaric, and of weight 3; (ii) homogeneous and of degree 3 in the coefficients, isobaric, and of weight 4.

HINT. (i) Three is the sum of 3 and 0 or of 2 and 1; the only terms possible are numerical multiples of a_0 a_3 and a_1 a_2. Ans. α a_0 a_3 + β a_1 a_2, where α, β are numerical constants.

2. What is the weight of the invariant I = ae - 4bd + 3c^2 of the quartic (a, b, c, d, e)(x, y)^4?

HINT. Rewrite the quartic as (a_0, ..., a_4)(x, y)^4.
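Example 2 can be checked numerically. The sketch below is a modern illustration and no part of the original text; the helper names and the test substitutions are our own. It carries a quartic through the substitution x = lX + mY, y = l'X + m'Y and verifies that I is multiplied by (lm' - l'm)^4, i.e. that I is an invariant of weight 4.

```python
from math import comb
from fractions import Fraction

def substitute(full, l, m, lp, mp):
    """Full coefficients of f(l*X + m*Y, lp*X + mp*Y);
    full[k] is the coefficient of x^(n-k) y^k."""
    n = len(full) - 1
    new = [0] * (n + 1)
    for k, c in enumerate(full):
        i, j = n - k, k            # expand (lX + mY)^i * (lpX + mpY)^j
        for p in range(i + 1):
            for q in range(j + 1):
                new[p + q] += (c * comb(i, p) * comb(j, q)
                               * l ** (i - p) * m ** p * lp ** (j - q) * mp ** q)
    return new

def invariant_I(full):
    """I = ae - 4bd + 3c^2, written in the full coefficients [a, 4b, 6c, 4d, e]."""
    a, b4, c6, d4, e = (Fraction(x) for x in full)
    return a * e - b4 * d4 / 4 + c6 ** 2 / 12

old = [1, 0, 0, 0, 1]                         # the quartic x^4 + y^4
print(substitute(old, 1, 1, 0, 1))             # x -> X + Y, y -> Y: [1, 4, 6, 4, 2]
```

With the unimodular substitution x = X + Y, y = Y the value of I is unchanged; with determinant lm' - l'm = d it is multiplied by d^4.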
3. What is the weight of the discriminant |a_rs| of the quadratic form

    a_11 x_1^2 + a_22 x_2^2 + a_33 x_3^2 + 2a_12 x_1 x_2 + 2a_23 x_2 x_3 + 2a_31 x_3 x_1?

HINT. Rewrite the form as the sum of terms a_{αβγ} x_1^α x_2^β x_3^γ, summed for α + β + γ = 2. Thus a_11 = a_{2,0,0}, a_23 = a_{0,1,1}.

4. Write down the Jacobian of ax^2 + 2hxy + by^2 and a'x^2 + 2h'xy + b'y^2, and deduce from it (3.6) that

    (ab' - a'b)^2 + 4(ah' - a'h)(bh' - b'h)
is
a joint invariant of the two forms.

5. Verify the theorem of 4.3 concerning an invariant of the form (a_0, ..., a_n)(x, y)^n in so far as it relates to (i) the discriminant of a quadratic form, (ii) the invariant of a cubic, (iii) the invariants I, J (5.1) of a quartic.

6. Show that the general cubic ax^3 + 3bx^2 y + 3cxy^2 + dy^3 can be reduced to the canonical form AX^3 + DY^3 by the substitution X = px + qy, Y = p'x + q'y, where the Hessian of the cubic is

    (ac - b^2)x^2 + (ad - bc)xy + (bd - c^2)y^2 = (px + qy)(p'x + q'y).

HINT. The Hessian of AX^3 + 3BX^2 Y + 3CXY^2 + DY^3 reduces to a multiple of XY; i.e. AC = B^2, BD = C^2. Prove that, if A is not zero, either B = C = 0 or the form is a cube; if A = 0, then B = C = 0 and the Hessian is identically zero.
7. Find the equation of the double lines of the involution determined by the two line-pairs

    ax^2 + 2hxy + by^2 = 0,   a'x^2 + 2h'xy + b'y^2 = 0,

and prove that it corresponds to a covariant of the two forms.

HINT. If the line-pair is a_1 x^2 + 2h_1 xy + b_1 y^2 = 0, then, by the usual condition for harmonic conjugates,

    ab_1 + a_1 b - 2hh_1 = 0,   a'b_1 + a_1 b' - 2h'h_1 = 0.
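The double lines of Example 7 can be computed outright: the two harmonic conditions are linear in (a_1, h_1, b_1), so their common solution is given, up to a factor, by a cross product. The sketch below is a modern illustration with helper names of our own; the sample coefficients are an arbitrary choice.

```python
def double_lines(a, h, b, ap, hp, bp):
    """Solve  a*b1 + a1*b - 2*h*h1 = 0  and  ap*b1 + a1*bp - 2*hp*h1 = 0
    for (a1, h1, b1), up to a common factor, via a cross product."""
    u = (b, -2 * h, a)        # coefficients of (a1, h1, b1) in the first condition
    v = (bp, -2 * hp, ap)     # coefficients of (a1, h1, b1) in the second
    a1 = u[1] * v[2] - u[2] * v[1]
    h1 = u[2] * v[0] - u[0] * v[2]
    b1 = u[0] * v[1] - u[1] * v[0]
    return a1, h1, b1

a, h, b, ap, hp, bp = 1, 2, 3, 2, 1, 1
a1, h1, b1 = double_lines(a, h, b, ap, hp, bp)
# both harmonic-conjugacy conditions are satisfied
assert a * b1 + a1 * b - 2 * h * h1 == 0
assert ap * b1 + a1 * bp - 2 * hp * h1 == 0
```

Working the cross product symbolically gives a_1 = 2(ah' - a'h), h_1 = ab' - a'b, b_1 = 2(hb' - h'b), so the double-line pair a_1 x^2 + 2h_1 xy + b_1 y^2 = 0 is, up to a factor, the Jacobian of the two forms, in agreement with Example 4.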
8. If two conics S, S' are such that a triangle can be inscribed in S' and circumscribed to S, then the invariants Δ, Θ, Θ', Δ' (Example 11, p. 180) satisfy the relation

    Θ^2 = 4ΔΘ'

independently of the choice of triangle of reference.

HINT. Consider the equations of the conics in the forms

    S = 2Fmn + 2Gnl + 2Hlm = 0,   S' = 2f'yz + 2g'zx + 2h'xy = 0.

The general result follows by linear transformation.
9. Prove that the rank of the matrix of the coefficients a_rs of m linear forms

    a_r1 x_1 + ... + a_rn x_n   (r = 1, ..., m)

in n variables is an arithmetic invariant.

10. Prove that if (x_r, y_r, z_r), for r = 1, 2, 3, are transformed into (X_r, Y_r, Z_r) by x = MX, then the determinant |x_1 y_2 z_3| is an invariant.

11. Prove that the nth transvectant of two forms

    (a_0, ..., a_n)(x, y)^n,   (b_0, ..., b_n)(x, y)^n

is linear in the coefficients of each form and is a joint invariant of the two forms. It is called the lineo-linear invariant.

Ans. a_0 b_n - n a_1 b_{n-1} + (1/2)n(n-1) a_2 b_{n-2} - ... .

12. Prove, by Theorems 34 and 37, that the rank of a quadratic form is unaltered by any non-singular linear transformation of the variables.
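The lineo-linear invariant of Example 11 lends itself to a quick numerical check. The sketch below is a modern illustration, not part of the original text; it forms the invariant for two n-ics given in the binomial notation (a_0, ..., a_n)(x, y)^n, transforms both forms by the same substitution, and verifies that the invariant is multiplied by (lm' - l'm)^n, its weight being n.

```python
from math import comb
from fractions import Fraction

def lineolinear(a, b):
    """a0*bn - n*a1*b(n-1) + (1/2)n(n-1)*a2*b(n-2) - ...
    for the forms (a0,...,an)(x,y)^n and (b0,...,bn)(x,y)^n."""
    n = len(a) - 1
    return sum((-1) ** k * comb(n, k) * a[k] * b[n - k] for k in range(n + 1))

def substitute(a, l, m, lp, mp):
    """Coefficients (A0,...,An) of (a0,...,an)(x,y)^n after
    x = l*X + m*Y, y = lp*X + mp*Y (exact rational arithmetic)."""
    n = len(a) - 1
    full = [comb(n, k) * Fraction(a[k]) for k in range(n + 1)]
    new = [Fraction(0)] * (n + 1)
    for k, c in enumerate(full):
        i, j = n - k, k
        for p in range(i + 1):
            for q in range(j + 1):
                new[p + q] += (c * comb(i, p) * comb(j, q)
                               * l ** (i - p) * m ** p * lp ** (j - q) * mp ** q)
    return [new[k] / comb(n, k) for k in range(n + 1)]

a, b = [1, 2, 3], [2, 0, 1]
print(lineolinear(a, b))    # a0*b2 - 2*a1*b1 + a2*b0 = 1 - 0 + 6 = 7
```

For linear forms (n = 1) the invariant reduces to a_0 b_1 - a_1 b_0, the familiar determinant of coefficients.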
CHAPTER XV
LATENT VECTORS

1. Introduction

1.1. Vectors in three dimensions. In three-dimensional geometry, with rectangular axes O1, O2, and O3, a vector ξ = OP has components (ξ_1, ξ_2, ξ_3), these being the coordinates of P with respect to the axes. The length of the vector ξ is

    |ξ| = sqrt(ξ_1^2 + ξ_2^2 + ξ_3^2)   (1)

and the direction-cosines of the vector are

    ξ_1/|ξ|,   ξ_2/|ξ|,   ξ_3/|ξ|.

Two vectors, ξ with components (ξ_1, ξ_2, ξ_3) and η with components (η_1, η_2, η_3), are orthogonal if

    ξ_1 η_1 + ξ_2 η_2 + ξ_3 η_3 = 0.   (2)

Finally, if e_1, e_2, and e_3 have components

    (1, 0, 0),   (0, 1, 0),   (0, 0, 1),

a vector ξ with components (ξ_1, ξ_2, ξ_3) may be written as

    ξ = ξ_1 e_1 + ξ_2 e_2 + ξ_3 e_3.

Also, each of the vectors e_1, e_2, e_3 is of unit length, by (1), and the three vectors are, by (2), mutually orthogonal. They are, of course, unit vectors along the axes.

1.2. Single-column matrices. A purely algebraical presentation of these details is available to us if we use a single-column matrix to represent a vector. A vector OP is represented by a single-column matrix ξ whose elements are ξ_1, ξ_2, ξ_3. The length of the vector is defined to be

    sqrt(ξ_1^2 + ξ_2^2 + ξ_3^2).

The condition for two vectors to be orthogonal now takes a purely matrix form. The transpose ξ' of ξ is a single-row matrix with elements ξ_1, ξ_2, ξ_3, and ξ'ξ is a single-element matrix whose element is ξ_1^2 + ξ_2^2 + ξ_3^2. The condition for ξ and η to be orthogonal is, in matrix notation,

    ξ'η = 0,

which, since ξ'η and η'ξ have the same single element, may be expressed in either of the two forms ξ'η = 0, η'ξ = 0. In the same notation, when ξ is a unit vector,

    ξ'ξ = 1.

Here 1 denotes the unit single-element matrix. Note the order of multiplication: we can form the product AB only when the number of columns of A is equal to the number of rows of B (cf. p. 73); hence we cannot form such products as ξη.

Finally, define the unit vector e_r as the single-column matrix which has unity in the rth row and zero elsewhere. The matrices e_1, e_2, e_3 so defined are mutually orthogonal and

    ξ = ξ_1 e_1 + ξ_2 e_2 + ξ_3 e_3.

The change from three dimensions to n dimensions is immediate. With n variables, a vector ξ is a single-column matrix with elements ξ_1, ξ_2, ..., ξ_n. The length of the vector is DEFINED to be sqrt(ξ'ξ), and when the length is 1 the vector is said to be a unit vector. Two vectors ξ and η are said to be orthogonal when

    ξ'η = 0.
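The single-column-matrix view of a vector translates directly into code. The following is a modern sketch, no part of the original text, with function names of our own: lists stand for single-column matrices, and the single element of ξ'η is the ordinary sum of products.

```python
from math import sqrt

def dot(u, v):
    """The single element of the matrix product u' * v."""
    return sum(a * b for a, b in zip(u, v))

def length(u):
    """The length sqrt(u' * u) of the vector u."""
    return sqrt(dot(u, u))

def unit_vector(r, n):
    """e_r: unity in the rth row (1-indexed), zero elsewhere."""
    return [1 if i == r - 1 else 0 for i in range(n)]

xi = [3, 0, 4]
print(length(xi))                                    # 5.0
print(dot(unit_vector(1, 3), unit_vector(2, 3)))     # 0: e_1 and e_2 are orthogonal
# xi = xi_1*e_1 + xi_2*e_2 + xi_3*e_3
recomposed = [sum(xi[r] * unit_vector(r + 1, 3)[i] for r in range(3))
              for i in range(3)]
assert recomposed == xi
```

Nothing in the code depends on the dimension being three; the same definitions serve for n elements.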
In this notation the n vectors e_r (r = 1, ..., n) are mutually orthogonal unit vectors and

    ξ = ξ_1 e_1 + ... + ξ_n e_n.

1.3. Orthogonal matrices. In this subsection we use x_r to denote a vector, or single-column matrix, with elements x_1r, x_2r, ..., x_nr.

THEOREM 59. Let x_1, x_2, ..., x_n be n mutually orthogonal unit vectors. Then the matrix X whose columns are the n vectors is an orthogonal matrix.

Proof. Let X' be the transpose of X; X is a matrix whose columns are x_1, ..., x_n, while X' is a matrix whose rows are x'_1, ..., x'_n. The element in the rth row and sth column of the product X'X is x'_r x_s (strictly, the numerical value in this single-element matrix). When s = r this is unity, since x_r is a unit vector; when s is not r it is zero, since x_r and x_s are orthogonal. Hence X'X = I, the unit matrix of order n, and so X is an orthogonal matrix.

Aliter. (The same proof in different words.) Let X be a matrix whose columns are x_1, ..., x_n. The element in the rth row and sth column of the product X'X is, using the summation convention, x_αr x_αs. Since x_r and x_s are orthogonal when r is not s, and each x_r is a unit vector, x_αr x_αs = δ_rs. Hence X'X = I and X is an orthogonal matrix.
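Theorem 59 is easily checked numerically: with three mutually orthogonal unit columns, the product X'X comes out as the unit matrix. A modern sketch (the helpers and the sample columns are our own choices):

```python
def transpose(X):
    return [list(row) for row in zip(*X)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Columns: three mutually orthogonal unit vectors,
# (2,2,1)/3, (-2,1,2)/3, (1,-2,2)/3.
c = 1 / 3
X = [[2 * c, -2 * c, 1 * c],
     [2 * c, 1 * c, -2 * c],
     [1 * c, 2 * c, 2 * c]]

P = matmul(transpose(X), X)
for i in range(3):
    for j in range(3):
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```

The element in row r, column s of X'X is exactly the inner product of the rth and sth columns, which is what the proof of the theorem exploits.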
COROLLARY. In the transformation η = X'ξ, that is ξ = Xη [X' = X^{-1}], let ξ = x_r; then η = e_r. For the sth element of X'x_r is x'_s x_r, which is unity when s = r and zero otherwise.

1.4. Linear dependence.

DEFINITION. The vectors x_1, ..., x_m are linearly dependent if there are numbers l_1, ..., l_m, not all zero, for which

    l_1 x_1 + ... + l_m x_m = 0.   (1)

If (1) is true only when l_1, ..., l_m are all zero, the vectors are linearly independent.

LEMMA 1. Of any set of vectors at most n are linearly independent.

Proof. Let there be given n + k vectors and let A be a matrix having these vectors as columns. The rank r of A cannot exceed n and, by Theorem 31 (with columns for rows), we can select r columns of A and express the others as sums of multiples of the selected r columns.

Aliter. Let x_1, ..., x_n be any given set of n linearly independent vectors, so that the determinant |x_rs| is not zero, and let X_rs be the cofactor of x_rs in this determinant. Then each unit vector e_r can be expressed as a sum of multiples of x_1, ..., x_n; the columns of I being e_1, ..., e_n, the result is established, since any vector x_p (p > n) is of the form

    x_p = ξ_1p e_1 + ... + ξ_np e_n

and is therefore a sum of multiples of the vectors x_1, ..., x_n.
LEMMA 2. Any n mutually orthogonal unit vectors are linearly independent.

Proof. Let x_1, ..., x_n be mutually orthogonal unit vectors and suppose there are numerical multiples k_r for which

    k_1 x_1 + ... + k_n x_n = 0.   (4)

Premultiply by x'_s, where s is any one of 1, ..., n. Then, since x'_s x_s = 1 and x'_s x_r = 0 when r is not s, k_s = 0. Hence (4) is true only when k_1 = ... = k_n = 0.

Aliter. By Theorem 59, X'X = I and hence the determinant |X| is not zero. The columns x_1, ..., x_n of X are therefore linearly independent.

2. Latent vectors

2.1. We begin by proving a result indicated, but not worked to its logical conclusion, in an earlier chapter (Chapter X). Let A be a given square matrix of n rows and columns and λ a numerical constant.

THEOREM 60. The matrix equation

    Ax = λx,   (1)

in which x is a single-column matrix of n elements, has a solution with at least one element of x not zero if and only if λ is a root of the equation

    |A - λI| = 0.   (2)

Proof. If the elements of x are ξ_1, ..., ξ_n, the matrix equation (1) is equivalent to the n equations

    sum over t of a_st ξ_t = λ ξ_s   (s = 1, ..., n),   (3)

and (3) is merely (1) written in full. These linear equations in the n variables ξ_1, ..., ξ_n have a non-zero solution if and only if λ is a root of the equation [Theorem 11]

    | a_11 - λ   a_12      ...  |
    | a_21       a_22 - λ  ...  | = 0;
    | ...                       |

that is, if and only if λ is a root of (2).
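Theorem 60 in action for n = 2: the latent roots come from the quadratic |A - λI| = 0, and each root admits a non-zero solution of Ax = λx. A modern sketch, not part of the original text, with helper names of our own:

```python
from math import sqrt

A = [[2, 1], [1, 2]]

# |A - lambda*I| = lambda^2 - (a + d)*lambda + (ad - bc) = 0
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = sqrt(tr * tr - 4 * det)
roots = [(tr + disc) / 2, (tr - disc) / 2]      # latent roots 3 and 1

def apply(M, x):
    """The product M * x for a 2-by-2 matrix and a 2-element column."""
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

# For each latent root a non-zero latent vector exists:
for lam, x in [(3, [1, 1]), (1, [1, -1])]:
    assert lam in roots
    assert apply(A, x) == [lam * x[0], lam * x[1]]
```

For a value of λ that is not a latent root, (A - λI)x = 0 forces x = 0, exactly as the determinant criterion predicts.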
DEFINITION. The roots in λ of |A - λI| = 0 are the LATENT ROOTS of the matrix A; and, when λ is a latent root, a non-zero vector x satisfying Ax = λx is a LATENT VECTOR of A corresponding to the root λ.

2.2. THEOREM 61. Let x_r and x_s be latent vectors that correspond to two distinct latent roots λ_r and λ_s of a matrix A. Then

(i) x_r and x_s are linearly independent vectors;
(ii) when A is symmetrical, x_r and x_s are orthogonal.

Proof. By hypothesis,

    Ax_r = λ_r x_r,   Ax_s = λ_s x_s,   λ_r - λ_s not zero.   (1)

(i) Let k_r, k_s be numbers for which

    k_r x_r + k_s x_s = 0.   (2)

Then, by (1),

    A(k_r x_r + k_s x_s) = k_r λ_r x_r + k_s λ_s x_s = 0,

and so, when (2) holds,

    λ_s (k_r x_r + k_s x_s) - (k_r λ_r x_r + k_s λ_s x_s) = k_r (λ_s - λ_r) x_r = 0.

By hypothesis x_r is a non-zero vector and the numerical multiplier λ_s - λ_r is not zero; accordingly, k_r = 0 and (2) reduces to k_s x_s = 0. But x_s is a non-zero vector and therefore k_s = 0. Hence (2) is true only when k_r = k_s = 0, and x_r and x_s are linearly independent vectors.

(ii) Let A' = A. The transpose of the second equation in (1) gives

    x'_s A' = λ_s x'_s,   that is   x'_s A = λ_s x'_s,

and so

    x'_s A x_r = λ_s x'_s x_r.   (3)

Also, by the first equation in (1),

    x'_s A x_r = λ_r x'_s x_r.   (4)

From (3) and (4), (λ_r - λ_s) x'_s x_r = 0 and, since λ_r - λ_s is not zero, x'_s x_r = 0. That is to say, x_r and x_s are orthogonal vectors.
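Both parts of Theorem 61 can be seen at work on a small symmetrical matrix. A modern sketch, not part of the original text; the matrix and vectors are an arbitrary choice of ours:

```python
def apply(M, x):
    """The product M * x."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def dot(u, v):
    """The single element of u' * v."""
    return sum(a * b for a, b in zip(u, v))

A = [[2, 1], [1, 2]]              # symmetrical, latent roots 3 and 1
xr, xs = [1, 1], [1, -1]          # latent vectors for the distinct roots

assert apply(A, xr) == [3, 3]     # A*xr = 3*xr
assert apply(A, xs) == [1, -1]    # A*xs = 1*xs
assert dot(xr, xs) == 0           # orthogonal, as part (ii) asserts
```

For an unsymmetrical matrix part (i) still holds, but the latent vectors for distinct roots need not be orthogonal.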
3. Application to quadratic forms

3.1. We have already seen (cf. Theorem 49, p. 153) that a real quadratic form ξ'Aξ in the n variables ξ_1, ..., ξ_n can be reduced by a real orthogonal transformation to the form

    λ_1 η_1^2 + λ_2 η_2^2 + ... + λ_n η_n^2,

wherein λ_1, ..., λ_n are the n roots of |A - λI| = 0. We now show that the latent vectors of A provide the columns of the transforming matrix X.

THEOREM 62. Let A be a real symmetrical matrix having n distinct latent roots λ_1, ..., λ_n. Then there are n real unit latent vectors x_1, ..., x_n corresponding to these roots; the matrix X having these vectors as its n columns has real numbers as its elements; and the transformation ξ = Xη, from variables ξ_1, ..., ξ_n to variables η_1, ..., η_n, is a real orthogonal transformation which gives

    ξ'Aξ = λ_1 η_1^2 + ... + λ_n η_n^2.

Proof. The roots λ_1, ..., λ_n are necessarily real (Theorem 45) and so the elements of a latent vector x_r satisfying

    Ax_r = λ_r x_r   (1)

can be found in terms of real numbers†; and, if the length of any one such vector is k, the vector k^{-1} x_r is a real unit vector satisfying (1). Hence there are n real unit latent vectors x_1, ..., x_n. Again, by Theorem 61, x_r is orthogonal to x_s when r is not s, and each x_r is a unit vector; that is,

    x'_r x_s = δ_rs,   (2)

where δ_rr = 1 and δ_rs = 0 when r is not s.

† When x_r is a solution and α is any number, real or complex, α x_r is also a solution. For our present purposes we leave aside all complex values of α.
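Before the proof is completed below, the content of Theorem 62 is easily checked on a small example: with unit latent vectors as the columns of X, the product X'AX is diagonal with the latent roots on the diagonal. A modern sketch (our own helpers and sample matrix):

```python
from math import sqrt

def transpose(M):
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1], [1, 2]]              # real symmetrical; latent roots 3 and 1
s = 1 / sqrt(2)
X = [[s, s],                      # columns: unit latent vectors
     [s, -s]]                     # (1,1)/sqrt(2) and (1,-1)/sqrt(2)

D = matmul(transpose(X), matmul(A, X))
assert abs(D[0][0] - 3) < 1e-12 and abs(D[1][1] - 1) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
# So the form 2x^2 + 2xy + 2y^2 becomes 3*eta1^2 + 1*eta2^2.
```

The quadratic form with discriminant A here is 2x^2 + 2xy + 2y^2, and the orthogonal change of variables ξ = Xη reduces it to 3η_1^2 + η_2^2.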
As in the proof of Theorem 59, the element in the rth row and sth column of the product X'X is δ_rs; whence X'X = I and X is an orthogonal matrix.

Now the columns of AX are Ax_1, ..., Ax_n; that is, by (1), the columns of AX are λ_1 x_1, ..., λ_n x_n. The rows of X' being x'_1, ..., x'_n, the element in the rth row and sth column of X'AX is the numerical value of x'_r (λ_s x_s) = λ_s x'_r x_s, which, by (2), is λ_r δ_rs. Thus X'AX has λ_1, ..., λ_n as its elements in the principal diagonal and zero elsewhere. The transformation ξ = Xη gives

    ξ'Aξ = η'(X'AX)η,   (3)

and the discriminant of the form in the variables η_1, ..., η_n is the matrix X'AX. The form (3) is therefore λ_1 η_1^2 + ... + λ_n η_n^2.

3.2. When A has repeated latent roots. It is in fact true that, whether the equation |A - λI| = 0 has repeated roots or has all its roots distinct, there are always n mutually orthogonal real unit latent vectors x_1, ..., x_n of a real SYMMETRICAL matrix A and, X being the matrix with these vectors as columns, the transformation ξ = Xη is a real orthogonal transformation that gives

    ξ'Aξ = λ_1 η_1^2 + ... + λ_n η_n^2,

wherein λ_1, ..., λ_n are the n roots (some of them possibly equal) of the characteristic equation |A - λI| = 0. The proof will not be given here.‡ The fundamental difficulty is to prove† that, when λ_1 (say) is a k-ple root of |A - λI| = 0, there are k linearly independent latent vectors corresponding to λ_1.

† Quart. J. Math. (Oxford) 18 (1947), 183-5, gives a proof by J. Todd; ibid. 186-92, another treatment is given by W. L. Ferrar.
‡ Numerical examples are easily dealt with by actually finding the vectors; see Examples XV, 11 and 14.

3.3. The setting up of a system of n mutually orthogonal unit vectors is then effected by Schmidt's orthogonalization process† or the equivalent process illustrated below.

Let y_1, y_2, y_3 be three given real linearly independent vectors. Choose k_1 so that k_1 y_1 is a unit vector and let x_1 = k_1 y_1. Next choose the constant α so that y_2 + α x_1 is orthogonal to x_1; that is, since x'_1 x_1 = 1, take

    α = -x'_1 y_2.   (1)

(Here α is, strictly, the numerical value of the element in the single-entry matrix x'_1 y_2; both x'_1 x_1 and x'_1 y_2 are single-entry matrices.) Since y_2 and x_1 are linearly independent, the vector y_2 + α x_1 is not zero and has a non-zero length, k_2 say. Put x_2 = k_2^{-1}(y_2 + α x_1); then x_2 is a unit vector and, by (1), x_1 and x_2 are orthogonal.

Now determine β and γ so that y_3 + β x_1 + γ x_2 is orthogonal to x_1 and x_2; that is, take

    β = -x'_1 y_3,   γ = -x'_2 y_3;   (2), (3)

for then x'_1(y_3 + β x_1 + γ x_2) = x'_1 y_3 + β = 0 and x'_2(y_3 + β x_1 + γ x_2) = x'_2 y_3 + γ = 0. The vector y_3 + β x_1 + γ x_2 is not zero, since the vectors y_1, y_2, y_3 are linearly independent; it has a non-zero length k_3, and with x_3 = k_3^{-1}(y_3 + β x_1 + γ x_2) we have a unit vector which, by (2) and (3), is orthogonal to x_1 and x_2. Thus the three vectors x_1, x_2, x_3 are mutually orthogonal unit vectors.

Moreover, if y_1, y_2, y_3 are latent vectors corresponding to the latent root λ of a matrix A, so that

    Ay_1 = λy_1,   Ay_2 = λy_2,   Ay_3 = λy_3,

then also Ax_1 = λx_1, Ax_2 = λx_2, Ax_3 = λx_3; that is, x_1, x_2, x_3 are mutually orthogonal unit latent vectors corresponding to the root λ.

† W. L. Ferrar, Finite Matrices, p. 139.

4. Collineation

4.1. Let x be a single-column matrix with elements x_1, ..., x_n. Let these elements be the current coordinates of a point in a system of n homogeneous coordinates, and let A be a given square matrix of order n. Then the matrix relation

    y = Ax   (1)

expresses a relation between the variable point x and the variable point y.

Now let the coordinate system be changed from x to X by means of a transformation x = TX, where T is a non-singular square matrix. The new coordinates, X and Y, of the points x and y are then given by x = TX, y = TY. In the new coordinate system the relation (1) is expressed by TY = ATX or, since T is non-singular, by

    Y = T^{-1}ATX.   (2)

The effect of replacing A in (1) by a matrix of the type T^{-1}AT amounts to considering the same geometrical relation expressed in a different coordinate system. The study of such replacements is important in projective geometry. Here we shall prove only one theorem, the analogue of Theorem 62.

THEOREM 63. Let the square matrix A of order n have n distinct latent roots λ_1, ..., λ_n, and let t_1, ..., t_n be corresponding latent vectors. Let T be the square matrix having these vectors as columns. Then†

(i) T is non-singular;
(ii) T^{-1}AT = diag(λ_1, ..., λ_n).

† The NOTATION diag(λ_1, ..., λ_n) indicates a matrix whose elements λ_1, ..., λ_n are in the principal diagonal and zero everywhere else.

Proof. (i) We prove that T is non-singular by showing that t_1, ..., t_n are linearly independent. Let k_1, ..., k_n be numbers for which

    k_1 t_1 + ... + k_n t_n = 0.   (1)

Then, since At_r = λ_r t_r for r = 1, ..., n,

    A(k_1 t_1 + ... + k_n t_n) = k_1 λ_1 t_1 + ... + k_n λ_n t_n = 0,

and so, on subtracting λ_n times (1),

    k_1(λ_1 - λ_n)t_1 + ... + k_{n-1}(λ_{n-1} - λ_n)t_{n-1} = 0,

wherein the multipliers λ_r - λ_n are non-zero, since λ_1, ..., λ_n are all different. A repetition of the same argument, with n - 1 for n, gives

    k_1(λ_1 - λ_n)(λ_1 - λ_{n-1})t_1 + ... + k_{n-2}(λ_{n-2} - λ_n)(λ_{n-2} - λ_{n-1})t_{n-2} = 0,

wherein the products (λ_r - λ_n)(λ_r - λ_{n-1}) are non-zero. Further repetitions lead, step by step, to

    a_1 k_1 t_1 = 0,   a_1 = (λ_1 - λ_2)...(λ_1 - λ_n), a_1 not zero.

Since t_1 is a non-zero vector, k_1 = 0; and the same argument shows that (1) holds only if k_1 = k_2 = ... = k_n = 0. This proves that t_1, ..., t_n are linearly independent; the rank of T is therefore n and T is non-singular.

(ii) Moreover, since At_r = λ_r t_r, the columns of AT are λ_1 t_1, ..., λ_n t_n, and these are the columns of the product of T and diag(λ_1, ..., λ_n). Hence‡

    AT = T diag(λ_1, ..., λ_n),

and it at once follows that T^{-1}AT = diag(λ_1, ..., λ_n).

‡ If necessary, work out in full the matrix product of T and diag(λ_1, ..., λ_n).

4.2. When the latent roots of A are not all distinct. If A is not symmetrical, it does not follow that a k-ple root of |A - λI| = 0 gives rise to k linearly independent latent vectors: not all square matrices of order n have n linearly independent latent vectors. We illustrate by simple examples.

(i) Let

    A = | α  1 |
        | 0  α |   (α not zero).

The characteristic equation is (α - λ)^2 = 0, and a latent vector x, with components x_1 and x_2, must satisfy Ax = αx; that is,

    α x_1 + x_2 = α x_1,   α x_2 = α x_2.

From the first of these, x_2 = 0, and the only non-zero vectors to satisfy Ax = αx are numerical multiples of e_1.

(ii) Let A = αI. A latent vector x must again satisfy Ax = αx, and this now requires merely α x_1 = α x_1, α x_2 = α x_2. The two linearly independent vectors e_1 and e_2, and, indeed, all vectors of the form x_1 e_1 + x_2 e_2, are latent vectors of A.

5. Commutative matrices and latent vectors

Two matrices A and B may or may not commute; whether or no AB = BA, they may or may not have latent vectors in common. We conclude this chapter with two relatively simple theorems that connect the two possibilities. Throughout we take A and B to be square matrices of order n.

5.1. Let A and B have n linearly independent latent vectors in common, say t_1, ..., t_n, these being latent vectors of A corresponding to latent roots λ_1, ..., λ_n of A and latent vectors of B corresponding to latent roots μ_1, ..., μ_n of B. Let T be the square matrix having these vectors as columns. Then, as in 4.1,

    T^{-1}AT = diag(λ_1, ..., λ_n) = L, say;   T^{-1}BT = diag(μ_1, ..., μ_n) = M, say.

It follows, since LM = ML, that

    T^{-1}AT T^{-1}BT = T^{-1}BT T^{-1}AT,   that is   T^{-1}ABT = T^{-1}BAT,

and, on premultiplying by T and postmultiplying by T^{-1}, AB = BA. We have accordingly proved

THEOREM 64. Two matrices of order n with n linearly independent latent vectors in common are commutative.

5.2. Now suppose that AB = BA. Let t be a latent vector of A corresponding to a latent root λ of A. Then At = λt and

    ABt = BAt = Bλt = λBt.   (1)

When B is non-singular, Bt is not zero [Bt = 0 would imply t = B^{-1}Bt = 0], and so Bt is a latent vector of A corresponding to λ. If every latent vector of A corresponding to λ is a numerical multiple of t, then Bt is a multiple of t, say Bt = kt, and t is a latent vector of B corresponding to the latent root k of B.

If there are m, but not more than m, linearly independent latent vectors t_1, ..., t_m of A corresponding to the latent root λ, then every latent vector of A corresponding to λ is of the form

    l_1 t_1 + ... + l_m t_m.   (2)

Then, since At_r = λt_r (r = 1, ..., m),

    ABt_r = BAt_r = λBt_r   (r = 1, ..., m),

so that Bt_r is a latent vector of A corresponding to λ; by our hypothesis that there are not more than m such vectors which are linearly independent, it is of the form (2), and there are constants k_sr such that

    Bt_r = sum over s of k_sr t_s.

For any constants l_1, ..., l_m,

    B(l_1 t_1 + ... + l_m t_m) = sum over r of l_r Bt_r = sum over s of (sum over r of k_sr l_r) t_s.

Let θ be a latent root of the matrix K = [k_sr]. Then there are numbers l_1, ..., l_m, not all zero, for which

    sum over r of k_sr l_r = θ l_s   (s = 1, ..., m),

and, with this choice of the l_r,

    B(sum of l_r t_r) = θ (sum of l_r t_r).

Moreover the vector l_1 t_1 + ... + l_m t_m is not zero, since t_1, ..., t_m are linearly independent and the l_r are not all zero. Hence θ is a latent root of B and this vector is a latent vector of B corresponding to θ; being of the form (2), it is also a latent vector of A corresponding to λ. We have thus proved†

THEOREM 65. Let A and B be square matrices of order n and let AB = BA. Then to each distinct latent root λ of A there corresponds at least one latent vector of A which is also a latent vector of B.

† For a comprehensive treatment of commutative matrices and their latent vectors see S. N. Afriat, Quart. J. Math. (Oxford) (1954), 82-85.

EXAMPLES XV

1. Referred to rectangular axes Oxyz, the direction cosines of OX, OY, OZ are (l_1, m_1, n_1), (l_2, m_2, n_2), (l_3, m_3, n_3). The vectors OX, OY, OZ are mutually orthogonal. Show that

    | l_1  l_2  l_3 |
    | m_1  m_2  m_3 |
    | n_1  n_2  n_3 |

is an orthogonal matrix.

2. The vectors a and b are given. Find k_1 so that b_1 = b + k_1 a is orthogonal to a.

3. Given further vectors c and d, find constants l_2, m_2 so that d_1 = d + l_2 c_1 + m_2 b_1 is orthogonal to the vectors already constructed.

4. Four vectors have elements involving constants a, b, c, d. Verify that they are mutually orthogonal, and find positive values of a, b, c, d that make these vectors the columns of an orthogonal matrix.

5. The columns of a square matrix A are four mutually orthogonal vectors a_1, a_2, a_3, a_4. Show that A'A = diag(a_1^2, a_2^2, a_3^2, a_4^2), where a_r^2 denotes the single element of a'_r a_r.

6. The vectors x and y are orthogonal and T is an orthogonal matrix. Show that X = Tx and Y = Ty are orthogonal vectors.

7. Prove that the vectors with elements (1, 0, 0) and (0, 1, 1) are latent vectors of the given matrix for any values of the constants k_1 and k_2.

8. Prove that, if x is a latent vector of A corresponding to a latent root λ of A and C = TAT^{-1}, then Tx is a latent vector of C corresponding to the latent root λ of C; for CTx = TAx = λTx.

9. Find the latent roots and latent vectors of the two given matrices, showing (i) that the first has only two linearly independent latent vectors, (ii) that the second has three linearly independent latent vectors.

10. Prove that the latent roots of the discriminant of the quadratic form 2x^2 + 3y^2 + 3z^2 + 2yz are 2, 2, 4, and deduce the orthogonal transformation whereby it is reduced to 2X^2 + 2Y^2 + 4Z^2.

11. Prove that the latent roots of the discriminant of the quadratic form 2yz + 2zx + 2xy are 2, -1, -1; that a unit latent vector corresponding to λ = 2 is (1/sqrt 3, 1/sqrt 3, 1/sqrt 3); and that (1/sqrt 2, -1/sqrt 2, 0) and (1/sqrt 6, 1/sqrt 6, -2/sqrt 6) are mutually orthogonal unit latent vectors corresponding to λ = -1. Hence reduce the form, step by step, to 2X^2 - Y^2 - Z^2.

12. With n variables x_1, ..., x_n, prove that

    x_1^2 + ... + x_n^2 = X_1^2 + ... + X_n^2,

where x = TX is an orthogonal transformation.

13. (Harder.†) Find an orthogonal transformation x = TX which gives 2yz + 2zx + 2xy = 2X^2 - Y^2 - Z^2.

† Example 11 is the same sum broken down into a step-by-step solution.

14. Express x'Ax, where A is the given symmetrical matrix, as a sum of multiples of squares by finding the latent roots and latent vectors of A.

15. Show that there is an orthogonal transformation x = TX whereby the given quadratic form is reduced to a sum of multiples of squares. The actual transformation is not asked for: it is too tedious to be worth the pains.

16. The matrix A has mutually orthogonal latent vectors. Find the matrix T whose columns are unit vectors parallel to these, and show that x = TX gives x'Ax = X'(T'AT)X with T'AT diagonal.

17. The matrix A, of order n, has n mutually orthogonal unit latent vectors. Prove that A is symmetrical.

18. Find the latent vectors of the given matrices A and B, and verify that AB = BA and that the latent vectors of A are also latent vectors of B.

19. The square matrices A and B are of order n; A has n distinct latent roots λ_1, ..., λ_n, B has non-zero latent roots μ_1, ..., μ_n, not necessarily distinct, and AB = BA. Prove that there is a matrix T for which

    T^{-1}AT = diag(λ_1, ..., λ_n),   T^{-1}BT = diag(μ_1, ..., μ_n).

HINTS AND ANSWERS

2. k_1 is fixed by a'b_1 = a'b + k_1 a'a = 0.
12. The given transformation gives x'x = X'T'TX = X'X, since T'T = I.
16. Use Theorem 62.
17. Let t_1, ..., t_n be the vectors and let T have the t's as columns; these give unit vectors. Then T^{-1}AT = T'AT = diag(λ_1, ..., λ_n), a diagonal matrix, which is its own transpose; and T'AT = (T'AT)' = T'A'T, so that A = A'.
19. A has n linearly independent latent vectors (Theorem 63); these are also latent vectors of B (Theorem 65). Finish as in the proof of Theorem 64.
INDEX

Numbers refer to pages. Note the entries under 'Definitions'.

Absolute invariant 169; Adjoint matrix 85; Adjugate 55; Algebraic form 169; Alternant 22; Array 50; Barnard and Child 8; Bilinear form 120; Binary cubic 188; Binary form 176; Binary quartic 190; Bôcher, M. 98; Canonical form 146; Characteristic equation of matrix 110; Circulant 23; Cofactor 28; Cogredient 129; Collineation 209; Commutative matrices 211; Complementary minor 36; Contragredient 129; Covariant 172; Definitions: adjoint matrix 85, adjugate determinant 55, bilinear form 120, covariant 172, determinant 8, invariant 171, latent root 110, linear form 119, matrix 69, number field 2, quadratic form 120, rank of matrix 94, unit matrix 77; Determinant 8; Discriminant 122; Division law 77; Eliminant 176; Elliott, E. B. 198; Equations: characteristic 110, consistent 31, homogeneous 32; Expansion of determinant 12, 36, 39; Field, number 2; Fundamental solutions 101; Grace and Young 198; Hermitian form 128; Hermitian matrix 129; Hessian 174; Invariant 171; Isobaric form 183; Jacobian 174; Jacobi's theorem 55; Joint invariant 178; Kronecker delta 84; Laplace expansion 36; Latent roots 110, 204; Latent vectors 204; Linear dependence 94, 203; Lineo-linear invariant 199.

Matrix 69; Minors 28; Modulus of transformation 122; Muir, Sir T. 29; Notation 21, 67; Number field 1; Orthogonal matrix 160, 201; Permutations 24; Pfaffian 61; Positive-definite form 135; Quadratic form 120; Rank of matrix 93; Reciprocal matrix 85; Reversal law 83, 88; Salmon, G. 198; Schmidt's orthogonalization process 208; Signature 154; Sign of term 11; Singular matrix 122; Skew-symmetrical determinant 59; Substitutions, linear 67; Sum of four squares 49; Summation convention 41; Symmetrical determinant 59; Transformation 122; Transpose 83; Transvectant 196; Turnbull, H. W. 198; Unit matrix 77; Vectors as matrices 200; Weight 182; Weitzenböck, R. 198; Winger, R. M. 61.