THE UNIVERSITY OF CHICAGO PRESS, CHICAGO
THE BAKER & TAYLOR COMPANY, NEW YORK
THE CAMBRIDGE UNIVERSITY PRESS, LONDON
THE MARUZEN-KABUSHIKI-KAISHA, TOKYO, OSAKA, KYOTO, FUKUOKA, SENDAI
THE COMMERCIAL PRESS, LIMITED, SHANGHAI
INTRODUCTION TO ALGEBRAIC THEORIES

By A. ADRIAN ALBERT
The University of Chicago

THE UNIVERSITY OF CHICAGO PRESS, CHICAGO, ILLINOIS
COPYRIGHT 1941 BY THE UNIVERSITY OF CHICAGO. ALL RIGHTS RESERVED. PUBLISHED JANUARY 1941. SECOND IMPRESSION APRIL 1942. COMPOSED AND PRINTED BY THE UNIVERSITY OF CHICAGO PRESS, CHICAGO, ILLINOIS, U.S.A.
PREFACE

During recent years there has been an ever increasing interest in modern algebra not only of students in mathematics but also of those in physics, chemistry, psychology, economics, and statistics. My Modern Higher Algebra was intended, of course, to serve primarily the first of these groups. In fact, it would actually be possible for a student with adequate mathematical maturity to grasp the contents, the sole prerequisite being the subject matter of L. E. Dickson's First Course in the Theory of Equations; and its rather widespread use has assured me of the propriety of both its contents and its abstract mode of presentation.

However, I am fully aware of the serious gap in mode of thought between the intuitive treatment of algebraic theory of the First Course and the rigorous abstract treatment of the Modern Higher Algebra, as well as the pedagogical difficulty which is a consequence. The publication recently of more abstract presentations of the theory of equations gives evidence of attempts to diminish this gap. Another such attempt has resulted in a supposedly less abstract treatise on modern algebra which is about to appear as these pages are being written. However, I have the feeling that neither of these compromises is desirable and that it would be far better to make the transition from the intuitive to the abstract by the addition of a new mathematics course in algebra to the undergraduate curriculum, a curriculum which contains at most two courses in algebra and these only partly algebraic in content.

This book is a text for such a course, its only prerequisite material a knowledge of that part of the theory of equations given as a chapter of the ordinary text in college algebra as well as a reasonably complete knowledge of the theory of determinants. I trust that it will find such use elsewhere and that it will serve also to satisfy the great interest in the theory of matrices which has been shown me repeatedly by students of the social sciences whose only training in algebra is a course in college algebra.

I have used the text in manuscript form in a class composed of third- and fourth-year undergraduate and beginning graduate students, and they all seemed to find the material easy to understand. This assurance has been confirmed by its successful use as a text. I wish to express my deep appreciation of the fine critical assistance of Dr. Sam Perlis during the course of publication of this book.

A. A. ALBERT
UNIVERSITY OF CHICAGO
September 9, 1940
TABLE OF CONTENTS

CHAPTER I. POLYNOMIALS
1. Polynomials in x
2. The division algorithm
3. Polynomial divisibility
4. Polynomials in several variables
5. Rational functions
6. A greatest common divisor process
7. Forms
8. Linear forms
9. Equivalence of forms

CHAPTER II. RECTANGULAR MATRICES AND ELEMENTARY TRANSFORMATIONS
1. The matrix of a system of linear equations
2. Submatrices
3. Transposition
4. Elementary transformations
5. Determinants
6. Special matrices
7. Rational equivalence of rectangular matrices

CHAPTER III. EQUIVALENCE OF MATRICES AND OF FORMS
1. Multiplication of matrices
2. The associative law
3. Products by diagonal and scalar matrices
4. Elementary transformation matrices
5. The determinant of a product
6. Nonsingular matrices
7. Equivalence of rectangular matrices
8. Bilinear forms
9. Congruence of square matrices
10. Skew matrices and skew bilinear forms
11. Symmetric matrices and quadratic forms
12. Nonmodular fields
13. Summary of results
14. Addition of matrices
15. Real quadratic forms

CHAPTER IV. LINEAR SPACES
1. Linear spaces over a field
2. Linear subspaces
3. Linear independence
4. The row and column spaces of a matrix
5. The concept of equivalence
6. Linear spaces of finite order
7. Addition of linear subspaces
8. Systems of linear equations
9. Linear mappings and linear transformations
10. Orthogonal linear transformations
11. Orthogonal spaces

CHAPTER V. POLYNOMIALS WITH MATRIC COEFFICIENTS
1. Matrices with polynomial elements
2. Elementary divisors
3. Matric polynomials
4. The characteristic matrix and function
5. Similarity of square matrices
6. Characteristic matrices with prescribed invariant factors
7. Additional topics

CHAPTER VI. FUNDAMENTAL CONCEPTS
1. Groups
2. Additive groups
3. Rings
4. Abstract fields
5. Integral domains
6. Ideals and residue class rings
7. The ring of ordinary integers
8. The ideals of the ring of integers
9. Quadratic extensions of a field
10. Integers of quadratic fields
11. Gauss numbers
12. An integral domain with nonprincipal ideals

INDEX
CHAPTER I

POLYNOMIALS

1. Polynomials in x. There are certain simple algebraic concepts with which the reader is probably well acquainted but not perhaps in the terminology and form desirable for the study of algebraic theories. We shall thus begin our exposition with a discussion of these concepts.

We shall speak of the familiar operations of addition, subtraction, and multiplication as the integral operations. A positive integral power is then best regarded as the result of a finite repetition of the operation of multiplication. A polynomial f(x) in x is any expression obtained as the result of the application of a finite number of integral operations to x and constants. Our definition of a polynomial includes the use of the familiar term constant. By this term we shall mean any complex number or function independent of x. Later on in our algebraic study we shall be much more explicit about the meaning of this term. For the present, however, we shall merely make the unprecise assumption that our constants have the usual properties postulated in elementary algebra. In particular, we shall assume the properties that if a and b are constants such that ab = 0, then either a or b is zero, and that if a is a nonzero constant then a has a constant inverse a^(-1) such that aa^(-1) = 1.

If g(x) is a second such expression and it is possible to carry out the operations indicated in the given formal expressions for f(x) and g(x) so as to obtain two identical expressions, then we shall regard f(x) and g(x) as being the same polynomial. This concept is frequently indicated by saying that f(x) and g(x) are identically equal and by writing f(x) ≡ g(x). However, we shall usually say merely that f(x) and g(x) are equal polynomials and write f(x) = g(x).

We shall designate by 0 the polynomial which is the constant zero and shall call this polynomial the zero polynomial. We observe that the zero polynomial has the properties 0 · g(x) = 0, 0 + g(x) = g(x) for every polynomial g(x). Thus, in the consideration of a conditional equation f(x) = 0 we seek a constant solution c such that f(c) = 0, and the polynomial f(x) is not the zero polynomial. No confusion will arise from this usage, for it will always be clear from the context where, in a discussion of polynomials, f(x) = 0 will mean that f(x) is the zero polynomial.

If the indicated integral operations in any given expression of a polynomial f(x) be carried out, we may express f(x) as a sum of a finite number of terms of the form ax^k. Here k is a nonnegative integer and a is a constant called the coefficient of x^k. The terms with the same exponent k may be combined into a single term whose coefficient is the sum of all their coefficients, and we may then write

(1) f(x) = a_0 x^n + a_1 x^(n-1) + ... + a_(n-1) x + a_n .

The constants a_i are called the coefficients of f(x) and may be zero. But, unless f(x) is the zero polynomial, we may always take a_0 ≠ 0, and the expression (1) of f(x) with a_0 ≠ 0 is unique. If a_0 ≠ 0 we call n the degree of f(x). If g(x) is a second polynomial and we write g(x) in the corresponding form

(2) g(x) = b_0 x^m + b_1 x^(m-1) + ... + b_(m-1) x + b_m

with b_0 ≠ 0, then f(x) and g(x) are equal if and only if m = n and a_i = b_i. In other words, two polynomials are equal if and only if their expressions (1) are identical.

Suppose now that we replace x, wherever it occurs in a particular formal expression of a polynomial f(x), by a constant c. We obtain a corresponding expression in c which we designate by f(c). For example, if f(x) = 2x^2 + 3x, then f(c) = 2c^2 + 3c. If g(x) is any different formal expression of a polynomial in x and f(x) = g(x) in the sense defined above, then it is evident that f(c) = g(c). Thus, if f(x), h(x), q(x), r(x) are polynomials in x such that f(x) = h(x)q(x) + r(x), then f(c) = h(c)q(c) + r(c) for any c. For example, x^3 - x = (x^2 + x)(x - 1), and we are stating that for any c we have c^3 - c = (c^2 + c)(c - 1).

The integer n of any expression (1) of f(x) is called the virtual degree of the expression (1). Clearly, any polynomial of degree n_0 may be written as an expression of the form (1) of virtual degree n for any integer n ≥ n_0. We may thus speak of any such n as a virtual degree of f(x). Then either f(x) has a positive integral degree, or f(x) = a_n is a constant and will be called a constant polynomial in x. If a_n ≠ 0 we say that the constant polynomial f(x) has degree zero. But if a_n = 0, so that f(x) is the zero polynomial, we shall assign to it the degree minus infinity. This will be done so as to imply that certain simple theorems on polynomials shall hold without exception.
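The conventions above lend themselves to a direct computational treatment. The following sketch (modern notation, not part of the original text; the function names are illustrative only) stores an expression (1) as its coefficient list a_0, ..., a_n and shows that a virtual degree is not the degree:

```python
# A polynomial expression (1), f(x) = a0*x^n + ... + an, stored as the
# coefficient list [a0, a1, ..., an]; the list length fixes a virtual degree.

def degree(f):
    """The degree of f; the zero polynomial gets minus infinity, as in the text."""
    for i, a in enumerate(f):
        if a != 0:
            return len(f) - 1 - i   # skip virtual leading zeros
    return float("-inf")            # the zero polynomial

def evaluate(f, c):
    """f(c): replace x by the constant c (Horner's rule)."""
    value = 0
    for a in f:
        value = value * c + a
    return value

# [0, 2, 3] and [2, 3] are two expressions of the same polynomial 2x + 3,
# of virtual degrees 2 and 1; their degrees and values agree.
assert degree([0, 2, 3]) == degree([2, 3]) == 1
assert evaluate([0, 2, 3], 5) == evaluate([2, 3], 5) == 13
assert degree([0]) == float("-inf")   # the zero polynomial
```

The assertions mirror the statement that two polynomials are equal exactly when their expressions (1) with a_0 ≠ 0 coincide, whatever virtual degree is used to write them.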
The coefficient a_0 in (1) will be called the virtual leading coefficient of this expression of f(x) and will be called the leading coefficient of f(x) if and only if it is not zero. We shall call f(x) a monic polynomial if a_0 = 1. We then have the elementary results referred to above, whose almost trivial verification we leave to the reader.

LEMMA 1. A product of two nonzero polynomials is nonzero and is a constant if and only if both factors are constants.

LEMMA 2. The degree of a product of two polynomials f(x) and g(x) is the sum of the degrees of f(x) and g(x). The degree of f(x) + g(x) is at most the larger of the two degrees of f(x) and g(x).

LEMMA 3. The leading coefficient of f(x)g(x) is the product of the leading coefficients of f(x) and g(x).

LEMMA 4. Let f(x) be nonzero and such that f(x)g(x) = f(x)h(x). Then g(x) = h(x).

EXERCISES*

1. State the condition that the degree of f(x) + g(x) be less than the degree of either f(x) or g(x).

2. What can one say about the degree of f(x) + g(x) if f(x) and g(x) have positive leading coefficients?

3. What can one say about the degree of f^2, of f^3, of f^k, for f = f(x) a polynomial and k a positive integer?

4. State a result about the degree and leading coefficient of any polynomial s(x) = f_1^2 + ... + f_t^2 for t > 1, where the f_i = f_i(x) are polynomials in x with real coefficients.

5. Make a corresponding statement about g(x)s(x), where g(x) has odd degree and real coefficients and s(x) is as in Ex. 4.

6. State the relation between the term of least degree in f(x)g(x) and those of least degree in f(x) and g(x).

7. State why it is true that if x is not a factor of f(x) or of g(x) then x is not a factor of f(x)g(x).

8. Use Ex. 7 to prove that if k is a positive integer then x is a factor of [f(x)]^k if and only if x is a factor of f(x).

9. Let f and g be polynomials in x with real coefficients such that the following equations are satisfied (identically). Show, then, that both f and g are zero:

a) f^2 + xg^2 = 0
b) f^2 = xg^2

Hint: Verify first that otherwise both f and g are not zero. Show that f and g may be taken to be nonzero polynomial solutions of these equations of least possible degrees, that x divides f, so that f = xf_1 as well as g = xg_1. But then f_1 and g_1 are also solutions of lower degrees, a contradiction. Use Ex. 8 to give another proof of (a).

* The early exercises in our sets should normally be taken up orally. The author's choice of oral exercises will be indicated by the language employed.
10. Use Ex. 9 to show that if f, g, and h are polynomials in x with real coefficients satisfying any one of the following equations (identically), then they are all zero:

a) f^2 + xg^2 + xh^2 = 0
b) f^2 + g^2 + (x + 2)h^2 = 0
c) f^2 + 2xfg + (x^2 + 2x)g^2 = 0
d) f^4 + 2xf^2g^2 + (x^2 + 2x)g^4 = 0

In parts (c) and (d) complete the squares. Hint: Express each equation in the form a(x)f^2 = b(x)g^2 and apply Ex. 9.

11. Find solutions, not all zero, of the equations of Ex. 10 for polynomials f, g, h with complex coefficients.

2. The division algorithm. The result of the application of the process ordinarily called long division to polynomials is a theorem which we shall call the Division Algorithm for polynomials and shall state as

THEOREM 1. Let f(x) and g(x) be polynomials of respective degrees n and m, g(x) ≠ 0. Then there exist unique polynomials q(x) and r(x) such that

(3) f(x) = q(x)g(x) + r(x) ,

where r(x) has virtual degree m - 1.

For let f(x) and g(x) be defined respectively by (1) and (2) with b_0 ≠ 0. Then either n < m and we have (3) with q(x) = 0, r(x) = f(x), or n ≥ m. In the latter case b_0^(-1) a_0 x^(n-m) g(x) has virtual degree n and virtual leading coefficient a_0, so that f(x) - b_0^(-1) a_0 x^(n-m) g(x) is a polynomial h(x) of virtual degree n - 1. A finite repetition of this process yields a polynomial r(x) = f(x) - q(x)g(x) of virtual degree m - 1, and hence (3), for q(x) of degree n - m.

If also f(x) = q_0(x)g(x) + r_0(x) for r_0(x) of virtual degree m - 1, then s(x) = r(x) - r_0(x) has virtual degree m - 1, while s(x) = t(x)g(x) for t(x) = q_0(x) - q(x). But Lemma 2 states that if t(x) ≠ 0 the degree of t(x)g(x) is the sum of m and the degree of t(x), so that the degree of s(x) is at least m, which is impossible. Thus t(x) = 0, q_0(x) = q(x), r_0(x) = r(x), and the polynomials q(x) and r(x) of (3) are unique.

The Remainder Theorem is the statement obtained when we use the Division Algorithm to write

f(x) = q(x)(x - c) + r(x) .
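The construction in the proof of Theorem 1 is exactly the long-division loop, and it can be sketched directly (a minimal illustration in modern notation; the helper name is mine, not the text's):

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Division Algorithm (Theorem 1): return (q, r) with f = q*g + r and the
    degree of r less than that of g.  Coefficients are listed highest power
    first, and g must have a nonzero leading coefficient b0."""
    r = [Fraction(a) for a in f]
    n, m = len(r) - 1, len(g) - 1
    if n < m:
        return [Fraction(0)], r
    q = [Fraction(0)] * (n - m + 1)
    for k in range(n - m + 1):
        c = r[k] / Fraction(g[0])       # b0^(-1) times the current leading coefficient
        q[k] = c
        for i in range(m + 1):          # subtract c*x^(n-m-k)*g, lowering the degree
            r[k + i] -= c * Fraction(g[i])
    return q, r[n - m + 1:]

# Dividing by x - c leaves the constant remainder f(c), the Remainder Theorem:
q, r = poly_divmod([2, 3, 0], [1, -5])  # f = 2x^2 + 3x, g = x - 5
assert r == [65]                        # f(5) = 2*25 + 15 = 65
```

Exact rational arithmetic (`fractions.Fraction`) keeps the coefficients of q(x) and r(x) exact, mirroring the fact that they are obtained from those of f(x) and g(x) by rational operations alone.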
Here g(x) = x - c has degree one, and r = r(x) is necessarily a constant. The obvious proof of this result is the use of the remark in the fifth paragraph of Section 1 to obtain f(c) = q(c)(c - c) + r, so that r = f(c), as desired. It was for this application that we made the remark.*

3. Polynomial divisibility. Let f(x) and g(x) be polynomials, g(x) ≠ 0. By the statement that g(x) divides f(x) we mean that there exists a polynomial q(x) such that f(x) = q(x)g(x), and we shall say in this case that f(x) has g(x) as a factor. Thus g(x) ≠ 0 divides f(x) if and only if the polynomial r(x) of (3) is the zero polynomial.

The Division Algorithm and Remainder Theorem imply the Factor Theorem, a result obtained and used frequently in the study of polynomial equations. We shall leave the statement of that theorem, and the subsequent definitions and theorems on the roots and corresponding factorizations of polynomials with real or complex coefficients, to the reader.

We shall call two nonzero polynomials f(x) and g(x) associated polynomials if f(x) divides g(x) and g(x) divides f(x). Then g(x) = q(x)f(x) and f(x) = h(x)g(x), so that f(x) = q(x)h(x)f(x). Applying Lemmas 3 and 2, we have q(x)h(x) = 1, and q(x) and h(x) are nonzero constants. Thus f(x) and g(x) are associated if and only if each is a nonzero constant multiple of the other. It is clear that every nonzero polynomial is associated with a monic polynomial. Observe thus that the familiar process of dividing out the leading coefficient in a conditional equation f(x) = 0 is that used to replace this equation by the equation g(x) = 0, where g(x) is the monic polynomial associated with f(x).

EXERCISES

1. Show by formal differentiation that if c is a root of multiplicity m of f(x), that is, f(x) = (x - c)^m q(x) with q(c) ≠ 0, then c is a root of multiplicity m - 1 of the derivative f'(x) of f(x). What then is a necessary and sufficient condition that f(x) have multiple roots?

2. Let c be a root of a polynomial f(x) of degree n and ordinary integral coefficients. Use the Division Algorithm to show that any polynomial h(c) with rational coefficients may be expressed in the form b_0 + b_1 c + ... + b_(n-1) c^(n-1) for rational numbers b_i. Hint: Write h(x) = q(x)f(x) + r(x) and replace x by c.

3. Let f(x) = x^3 + 3x^2 + 4 in Ex. 2. Compute the corresponding b_i for each of the polynomials h(c):

a) c^6 + 10c^3 + 25c^2
b) c^4 + 4c^3 + 6c^2 + 4c + 4
c) c^6 - 2c^4 + c^3 - 2c^2 + 1
d) (2c + 3)(c^2 + 3c)

* If f(x) is a polynomial in x and c is a constant such that f(c) = 0, then we shall call c a root not only of the equation f(x) = 0 but also of the polynomial f(x).
EXERCISES

1. Let f = f(x) be a polynomial in x and define m(f) = x^m f(1/x) for every positive integer m. Show that m(f) is a polynomial in x of virtual degree m if and only if m is a virtual degree of f(x).

2. Show that m(0) = 0, that m(f) ≠ 0 if f ≠ 0, and that m(f) = x^(m-n) · n(f) for every m > n if f is any nonzero polynomial of degree n.

3. Show that, if f ≠ 0 and x is not a factor of f, then m[m(f)] = f.

4. Prove that x is a factor of m(f) for every m which is greater than the degree of f.

5. Let g be a factor of f, f ≠ 0. Show that n(g) is a factor of n(f), where n is in each case the degree of the polynomial operated on.

4. Polynomials in several variables. Some of our results on polynomials in x may be extended easily to polynomials in several variables, and we shall do so. Later, when we discuss the existence of a unique greatest common divisor (abbreviated, g.c.d.) of polynomials in x, we shall obtain a property which may best be described in terms of the concept of rational function. It will thus be desirable to arrange our exposition so as to precede the study of greatest common divisors by a discussion of the elements of the theory of polynomials and rational functions of several variables.

We define a polynomial f = f(x_1, ..., x_q) in x_1, ..., x_q to be any expression obtained as the result of a finite number of integral operations on x_1, ..., x_q and constants. As in Section 1 we may express f(x_1, ..., x_q) as the sum of a finite number of terms of the form

(4) a x_1^(k_1) x_2^(k_2) ... x_q^(k_q) .

We call a the coefficient of the term (4) and define the virtual degree of such a term to be k_1 + ... + k_q, the virtual degree of a particular expression of f as a sum of terms of the form (4) to be the largest of the virtual degrees of its terms. If two terms of f have the same set of exponents k_1, ..., k_q, we may combine them by adding their coefficients and thus write f as the unique sum

(5) f = f(x_1, ..., x_q) = Σ a_(k_1 ... k_q) x_1^(k_1) ... x_q^(k_q)

with unique coefficients.
Here the coefficients a_(k_1 ... k_q) are constants, and f is the zero polynomial if and only if all its coefficients are zero. As before we assign the degree minus infinity to the zero polynomial and have the property that nonzero constant polynomials have degree zero. If f is a nonzero polynomial, then some a_(k_1 ... k_q) ≠ 0, and the degree of f is defined to be the maximum sum k_1 + ... + k_q for a_(k_1 ... k_q) ≠ 0.

Note now that a polynomial may have several different terms of the same degree and that consequently the usual definitions of leading term and leading coefficient do not apply. However, some of the most important simple properties of polynomials in x hold also for polynomials in several variables, and we shall proceed to their derivation.

We observe that a polynomial f in x_1, ..., x_q may be regarded as a polynomial (1) of degree n = n_q in x = x_q with coefficients which are polynomials in x_1, ..., x_(q-1). Here n_j is the degree of f(x_1, ..., x_q) considered as a polynomial in x_j alone. If q = 2 and f and g are nonzero, then f and g are polynomials (1) and (2) in x = x_2 whose leading coefficients a_0 and b_0 are nonzero polynomials in x_1, and a_0 b_0 ≠ 0 by Lemma 1. Then a virtual degree in x_q of fg is m + n, and a virtual leading coefficient of fg is a_0 b_0, so that fg is not zero. We have proved that the product fg of two nonzero polynomials in x_1, x_2 is not zero, and, if we apply the proof above for q = 3, 4, ..., we prove similarly that the product of two nonzero polynomials in x_1, ..., x_q is not zero. We have thus completed the proof of

THEOREM 2. The product of any two nonzero polynomials in x_1, ..., x_q is not zero.

We have the immediate consequence

THEOREM 3. Let f, g, h be polynomials in x_1, ..., x_q with f not zero and fg = fh. Then g = h.

For f(g - h) = 0, and hence g - h = 0 by Theorem 2.

We shall need to consider an important special type of polynomial. Thus we shall call f(x_1, ..., x_q) a homogeneous polynomial, or a form in x_1, ..., x_q, if all terms of (5) have the same degree k. Observe that, if f is given by (5) and we replace x_j in (5) by yx_j, each power product x_1^(k_1) ... x_q^(k_q) is replaced by y^(k_1 + ... + k_q) x_1^(k_1) ... x_q^(k_q). It follows that f(yx_1, ..., yx_q) = y^k f(x_1, ..., x_q) identically in y, x_1, ..., x_q if and only if f(x_1, ..., x_q) is a form of degree k. The product of two forms f and g of respective degrees n and m in the same x_1, ..., x_q is clearly a form of degree m + n and, by Theorem 2, is nonzero if and only if f and g are nonzero.
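Theorem 2 and the remark on forms can be checked computationally. In this sketch (my own representation, not the text's) a polynomial in x_1, ..., x_q is stored as the dict of its unique sum (5), mapping exponent tuples (k_1, ..., k_q) to nonzero coefficients:

```python
def multiply(f, g):
    """Product of two polynomials stored as {(k1,...,kq): coefficient}."""
    h = {}
    for kf, a in f.items():
        for kg, b in g.items():
            k = tuple(i + j for i, j in zip(kf, kg))
            h[k] = h.get(k, 0) + a * b
    return {k: c for k, c in h.items() if c != 0}   # drop cancelled terms

def poly_degree(f):
    """Largest term degree k1 + ... + kq; minus infinity for the zero polynomial."""
    return max(sum(k) for k in f) if f else float("-inf")

# f = x1*x2 + x2^2 is a form of degree 2, g = x1 - x2 a form of degree 1:
f = {(1, 1): 1, (0, 2): 1}
g = {(1, 0): 1, (0, 1): -1}
fg = multiply(f, g)
# fg = x1^2*x2 - x2^3 is again a form, nonzero, of degree 2 + 1 (Theorem 2).
assert fg == {(2, 1): 1, (0, 3): -1}
assert poly_degree(fg) == poly_degree(f) + poly_degree(g)
```

Note that one cross term of the product cancels; Theorem 2 guarantees that such cancellation can never remove every term of the product of two nonzero polynomials.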
Observe first that all the terms of the same degree in a nonzero polynomial (5) may be grouped together into a form of this degree, and we may then express (5) uniquely as the sum

(6) f = f(x_1, ..., x_q) = f_0 + ... + f_n ,

where f_0 is a nonzero form of the same degree n as the polynomial f and f_i is a form of degree n - i. We thus call f_0 the leading form of f. If also

(7) g = g(x_1, ..., x_q) = g_0 + ... + g_m

for forms g_i of degree m - i and such that g_0 ≠ 0, then clearly

(8) fg = h_0 + ... + h_(m+n) ,

where the h_i are forms of degree m + n - i and h_0 = f_0 g_0. By Theorem 2, f_0 g_0 ≠ 0, and we may use Theorem 2 to obtain

THEOREM 4. Let f and g be nonzero polynomials in x_1, ..., x_q. Then the degree of fg is the sum of the degrees of f and g, and the leading form of fg is the product of the leading forms of f and g.

The result above is evidently fundamental for the study of polynomials in several variables, a study which we shall discuss only briefly in these pages.

5. Rational functions. The integral operations together with the operation of division by a nonzero quantity form a set of what are called the rational operations. A rational function of x_1, ..., x_q is now defined to be any function obtained as the result of a finite number of rational operations on x_1, ..., x_q and constants. The postulates of elementary algebra were seen by the reader in his earliest algebraic study to imply that every rational function of x_1, ..., x_q may be expressed as a quotient

(9) a(x_1, ..., x_q) / b(x_1, ..., x_q)

for polynomials a(x_1, ..., x_q) and b(x_1, ..., x_q) with b(x_1, ..., x_q) ≠ 0. The coefficients of a(x_1, ..., x_q) and b(x_1, ..., x_q) are then called coefficients of f. This may be seen to be due to the definitions

a/b + c/d = (ad + bc)/(bd) ,   (a/b)(c/d) = (ac)/(bd) .
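The leading-form statement of Theorem 4 is likewise easy to exercise numerically. The following sketch reuses the dict representation of the unique sum (5) (again my own notation, not the text's):

```python
def multiply(f, g):
    """Product of two polynomials stored as {(k1,...,kq): coefficient}."""
    h = {}
    for kf, a in f.items():
        for kg, b in g.items():
            k = tuple(i + j for i, j in zip(kf, kg))
            h[k] = h.get(k, 0) + a * b
    return {k: c for k, c in h.items() if c != 0}

def leading_form(f):
    """f0 in the decomposition (6): the terms of top degree."""
    n = max(sum(k) for k in f)
    return {k: c for k, c in f.items() if sum(k) == n}

# f = x1^2 + x2, g = x1 - 1: the leading form of fg is the product of the
# leading forms f0 = x1^2 and g0 = x1 (Theorem 4).
f = {(2, 0): 1, (0, 1): 1}
g = {(1, 0): 1, (0, 0): -1}
assert leading_form(multiply(f, g)) == multiply(leading_form(f), leading_form(g))
```

By Theorem 2 the right-hand side above is never the zero polynomial, which is exactly why the degree of a product is the sum of the degrees.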
Here b and d are necessarily not zero, so that bd ≠ 0. Observe also that fg = 0 if and only if f = 0 or g = 0, and that if f ≠ 0 then f^(-1) exists such that ff^(-1) = 1. Let us observe then that the set of all rational functions in x_1, ..., x_q with complex coefficients has a property which we describe by saying that the set is closed with respect to rational operations. By this we mean that every rational function of the elements in this set is in the set. Observe, then, that the set of rational functions satisfies the properties we assumed in Section 1 for our constants.

6. A greatest common divisor process. The existence of a g.c.d. of two polynomials and the method of its computation are essential in the study of what are called Sturm's functions and so are well known to the reader who has studied the Theory of Equations. We shall repeat this material here because of its importance for algebraic theories.

We define the g.c.d. of polynomials f_1(x), ..., f_t(x) not all zero to be any monic polynomial d(x) which divides all the f_i(x) and is such that if g(x) divides every f_j(x) then g(x) divides d(x). If d_0(x) is a second such polynomial, then d(x) and d_0(x) divide each other, and hence d(x) and d_0(x) are associated monic polynomials and are equal. Thus the g.c.d. of f_1(x), ..., f_t(x), according to our definition, is a unique polynomial. Moreover, if g(x) divides all the f_i(x), then g(x) divides d(x), and hence the degree of d(x) is at least that of g(x). Thus d(x) is a common divisor of the f_i(x) of largest possible degree and is clearly the unique monic common divisor of this degree.

Let d_j(x) be the g.c.d. of f_1(x), ..., f_j(x), and d_0(x) be the g.c.d. of d_j(x) and f_(j+1)(x). Then d_0(x) divides f_1(x), ..., f_(j+1)(x). For every common divisor h(x) of f_1(x), ..., f_(j+1)(x) divides d_j(x) and f_(j+1)(x), and hence h(x) divides d_0(x). Thus d_0(x) is the g.c.d. of f_1(x), ..., f_(j+1)(x). The result above evidently reduces the problems of the existence and construction of a g.c.d. of any number of polynomials in x not all zero to the case of two nonzero polynomials.

We shall now study this latter problem and state the result we shall prove as

THEOREM 5. Let f(x) and g(x) be polynomials not both zero. Then there exist polynomials a(x) and b(x) such that

(10) d(x) = a(x)f(x) + b(x)g(x)

is a monic common divisor of f(x) and g(x); d(x) is then the unique g.c.d. of f(x) and g(x).

For if f(x) = 0, then a(x) = 1 and b(x) = b_0^(-1) is a solution of (10) if g(x) is given by (2). Hence there is no loss of generality if we assume that both f(x) and g(x) are nonzero and that the degree of g(x) is not greater than the degree of f(x). For consistency of notation we put

(11) h_0(x) = f(x) ,   h_1(x) = g(x) .
By Theorem 1,

(12) h_0(x) = q_1(x)h_1(x) + h_2(x) ,

where the degree of h_2(x) is less than the degree of h_1(x). If h_2(x) ≠ 0, we may apply Theorem 1 to obtain

(13) h_1(x) = q_2(x)h_2(x) + h_3(x) ,

where the degree of h_3(x) is less than that of h_2(x). Thus our division process yields a sequence of equations of the form

(14) h_(i-2)(x) = q_(i-1)(x)h_(i-1)(x) + h_i(x) ,

where, if n_i is the degree of h_i(x), then n_1 > n_2 > ... We conclude that our sequence must terminate with

(15) h_(r-2)(x) = q_(r-1)(x)h_(r-1)(x) + h_r(x) ,   h_r(x) ≠ 0 ,

and

(16) h_(r-1)(x) = q_r(x)h_r(x)

for some r ≥ 1. Equation (16) implies that h_r(x) divides h_(r-1)(x), and (15) may then be replaced by h_(r-2)(x) = [q_(r-1)(x)q_r(x) + 1]h_r(x). If, now, we assume that h_r(x) divides h_(i-1)(x) and h_i(x), then (14) implies that h_r(x) divides h_(i-2)(x). An evident proof by induction shows that h_r(x) divides both h_0(x) = f(x) and h_1(x) = g(x).

Clearly h_0(x) = a_0(x)f(x) + b_0(x)g(x) with a_0(x) = 1, b_0(x) = 0, and h_1(x) = a_1(x)f(x) + b_1(x)g(x) with a_1(x) = 0, b_1(x) = 1. If

h_(i-2)(x) = a_(i-2)(x)f(x) + b_(i-2)(x)g(x) ,   h_(i-1)(x) = a_(i-1)(x)f(x) + b_(i-1)(x)g(x) ,

then (14) implies that h_i(x) = a_i(x)f(x) + b_i(x)g(x). Thus we obtain

h_r(x) = a_r(x)f(x) + b_r(x)g(x) .

The polynomial h_r(x) is a common divisor of f(x) and g(x) and is associated with a monic common divisor d(x) = c h_r(x). Then d(x) has the form (10) for a(x) = c a_r(x), b(x) = c b_r(x). We have already shown that d(x) is unique.

The process used above was first discovered by Euclid, who utilized it in his geometric formulation of the analogous result on the g.c.d. of integers. It is therefore usually called Euclid's process. We observe that it not only enables us to prove the existence of d(x) but gives us a finite process by means of which d(x) may be computed.
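Euclid's process, with the running multipliers a_i(x), b_i(x) of equations (11)-(14), can be sketched as follows (an illustration only; the helper names are mine, and exact rational arithmetic stands in for the text's constants):

```python
from fractions import Fraction

def trim(p):
    """Drop virtual leading zeros; the zero polynomial becomes [0]."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:] if p else [Fraction(0)]

def poly_divmod(f, g):
    """Theorem 1: (q, r) with f = q*g + r, deg r < deg g (highest power first)."""
    r = [Fraction(a) for a in f]
    n, m = len(r) - 1, len(g) - 1
    if n < m:
        return [Fraction(0)], r
    q = [Fraction(0)] * (n - m + 1)
    for k in range(n - m + 1):
        c = r[k] / Fraction(g[0])
        q[k] = c
        for i in range(m + 1):
            r[k + i] -= c * Fraction(g[i])
    return q, trim(r[n - m + 1:])

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * Fraction(b)
    return trim(out)

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = [Fraction(0)] * (n - len(p)) + [Fraction(a) for a in p]
    q = [Fraction(0)] * (n - len(q)) + [Fraction(a) for a in q]
    return trim([a - b for a, b in zip(p, q)])

def gcd_bezout(f, g):
    """Euclid's process: the monic d(x) of (10) together with a(x) and b(x)."""
    h0, h1 = trim([Fraction(a) for a in f]), trim([Fraction(a) for a in g])
    a0, a1 = [Fraction(1)], [Fraction(0)]   # h0 = 1*f + 0*g
    b0, b1 = [Fraction(0)], [Fraction(1)]   # h1 = 0*f + 1*g
    while h1 != [0]:
        q, r = poly_divmod(h0, h1)          # h_{i-2} = q_{i-1} h_{i-1} + h_i
        h0, h1 = h1, r
        a0, a1 = a1, poly_sub(a0, poly_mul(q, a1))
        b0, b1 = b1, poly_sub(b0, poly_mul(q, b1))
    c = h0[0]                               # leading coefficient of h_r
    return [x / c for x in h0], [x / c for x in a0], [x / c for x in b0]

# f = (x-1)(x+1), g = (x-1)(x+2): the g.c.d. is the monic x - 1,
# and d = a*f + b*g is the relation (10).
d, a, b = gcd_bezout([1, 0, -1], [1, 1, -2])
assert d == [1, -1]
assert poly_sub(poly_mul(a, [1, 0, -1]), poly_sub(d, poly_mul(b, [1, 1, -2]))) == [0]
```

Each pass of the loop is one application of the Division Algorithm, so the whole computation is the finite process the text describes.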
Notice that d(x) is computed by a repetition of the Division Algorithm on f(x), g(x), and polynomials secured from f(x) and g(x) as remainders in the application of the Division Algorithm. But this implies the result we state as

THEOREM 6. Let the coefficients of f(x) and g(x) be rational numbers. Then the polynomials a(x), b(x), and hence the greatest common divisor d(x) of Theorem 5, all have coefficients which are rational functions, with rational number coefficients, of the coefficients of f(x) and g(x).

We thus have the

COROLLARY. Let the coefficients of f(x) and g(x) be rational numbers. Then f(x) and g(x) are relatively prime if and only if their only common divisors are constants.

When d(x) = 1 we shall call f(x) and g(x) relatively prime polynomials and shall indicate this at times by saying that f(x) is prime to g(x), and hence also that g(x) is prime to f(x). When f(x) and g(x) are relatively prime, we use (10) to obtain polynomials a(x) and b(x) such that

(17) a(x)f(x) + b(x)g(x) = 1 .

We now obtain

THEOREM 7. Let f(x), g(x), and h(x) be nonzero polynomials such that f(x) is prime to g(x) and f(x) divides g(x)h(x). Then f(x) divides h(x).

For we may write g(x)h(x) = f(x)q(x) and use (17) to obtain

h(x) = [a(x)f(x) + b(x)g(x)]h(x) = [a(x)h(x) + b(x)q(x)]f(x) ,

as desired.

It is interesting to observe that the polynomials a(x) and b(x) in (17) are not unique and that it is possible to define a certain unique pair and then determine all others in terms of this pair. To do this we first prove

LEMMA 5. Let f(x) of degree n and g(x) of degree m be relatively prime. Then there exist unique polynomials a_0(x) of degree at most m - 1 and b_0(x) of degree at most n - 1 such that

a_0(x)f(x) + b_0(x)g(x) = 1 .

For, if a(x) is any solution of (17), we apply Theorem 1 to obtain a(x) = c(x)g(x) + a_0(x) with a_0(x) of degree at most m - 1, define b_0(x) = b(x) + c(x)f(x), and see that a_0(x)f(x) + b_0(x)g(x) = [a(x) - c(x)g(x)]f(x) + [b(x) + c(x)f(x)]g(x) = 1. Then a_0(x)f(x) = 1 - b_0(x)g(x) has degree at most m + n - 1, and by Lemma 2 the degree of b_0(x) is at most n - 1.

We thus have the

COROLLARY. Every pair of polynomials a(x) and b(x) satisfying (17) has the form

(18) a(x) = a_0(x) + c(x)g(x) ,   b(x) = b_0(x) - c(x)f(x)

for a polynomial c(x).
12 INTRODUCTION TO ALGEBRAIC THEORIES

f(x) and a(x)f(x). Thus a(x) = a(x)[a_0(x)f(x) + b_0(x)g(x)]. Suppose also that a_1(x)f(x) + b_1(x)g(x) = 1, so that

(18) [a_1(x) - a_0(x)]f(x) = [b_0(x) - b_1(x)]g(x).

By Lemma 5 the polynomial b_0(x) - b_1(x) of virtual degree n - 1 is divisible by g(x) of degree n and so must be zero. But the definition above of a_0(x) as the remainder on division of a(x) by g(x) shows that then (18) holds, so that a_0(x) and b_0(x) are unique.

There is also a result which is somewhat analogous to Theorem 7 for the case where f(x) and g(x) are not relatively prime. We state it as

THEOREM 8. Let f(x) ≠ 0 and g(x) ≠ 0 have respective degrees n and m. Then polynomials a(x) ≠ 0 of degree at most m - 1 and b(x) ≠ 0 of degree at most n - 1 such that

(19) a(x)f(x) + b(x)g(x) = 0

exist if and only if f(x) and g(x) are not relatively prime.

For if f(x) and g(x) are not relatively prime, the g.c.d. of f(x) and g(x) is a nonconstant polynomial d(x), and we have f(x) = f_1(x)d(x), g(x) = g_1(x)d(x), where g_1(x) has degree less than m and f_1(x) has degree less than n. Hence g_1(x)f(x) + [-f_1(x)]g(x) = 0. Conversely, let (19) hold with f(x) prime to g(x). Then g(x) divides a(x)f(x) = -b(x)g(x) and so must divide a(x) ≠ 0 of degree at most m - 1, which is impossible. Hence f(x) and g(x) are not relatively prime.

EXERCISES

1. Extend Theorems 5 and 6 and the corollary to a set of polynomials f_1(x), ..., f_t(x).

2. Show that f(x) either divides g(x) or is prime to g(x) if f(x) is of the first degree.

3. Let f_1(x), ..., f_t(x) be polynomials of the first degree. State their possible g.c.d.'s and the conditions on the f_i(x) for each such possible g.c.d.

4. Prove that the g.c.d. of f(x) and g(x) is the monic polynomial d(x) of least possible degree of the form (10). Hint: Show that if d(x) is this polynomial, then f(x) = q(x)d(x) + r(x), where r(x) has the form (10) as well as degree less than that of d(x), which is impossible unless r(x) = 0.

5. State the results corresponding to those above for polynomials of virtual degree two.

6. A polynomial f(x) is called rationally irreducible if f(x) has rational coefficients and is not the product of two nonconstant polynomials with rational coefficients. What are the possible g.c.d.'s of a set of rationally irreducible polynomials f_1(x), ..., f_t(x)?
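The g.c.d. computations called for in these exercises are instances of the division process of the text. A brief Python sketch may make the mechanics concrete; the coefficient-list representation and the function names are my own conventions, not the book's:

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Divide f by g.  Polynomials are lists of rational coefficients,
    highest-degree coefficient first; g must not be the zero polynomial."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = []
    while len(f) >= len(g):
        coef = f[0] / g[0]                       # next quotient coefficient
        q.append(coef)
        shifted = g + [Fraction(0)] * (len(f) - len(g))
        f = [a - coef * b for a, b in zip(f, shifted)]
        f.pop(0)                                 # leading term cancels by construction
    return q, f                                  # quotient, remainder (degree < deg g)

def poly_gcd(f, g):
    """Euclidean algorithm; the result is made monic, matching the
    text's convention that the g.c.d. is a monic polynomial."""
    while any(c != 0 for c in g):
        _, r = poly_divmod(f, g)
        while r and r[0] == 0:                   # strip leading zero coefficients
            r.pop(0)
        f, g = g, r
    return [c / f[0] for c in f]
```

For example, the g.c.d. of x^2 - 1 and x - 1 comes out as the monic polynomial x - 1.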
POLYNOMIALS 13

7. Let f_1, ..., f_t be polynomials in x of virtual degree n and f_1 ≠ 0. Show that if d(x) is the g.c.d. of f_1, ..., f_t, then for an integer k > 0 the g.c.d. of x^k f_1, ..., x^k f_t has the form x^k d(x).

8. Use Ex. 1 of Section 2 together with the results above to show that a rationally irreducible polynomial has no multiple roots.

9. Find the g.c.d. of each of the following sets of polynomials as well as of all possible pairs of polynomials in each case:

a) f_1 = …, f_2 = …, f_3 = …
b) f_1 = …, f_2 = …, f_3 = …
c) f_1 = …, f_2 = …, f_3 = …
d) f_1 = …, f_2 = …, f_3 = …
e) f_1 = …, f_2 = …
f) f_1 = …, f_2 = …

10. Let f(x) be a rationally irreducible polynomial and c be a complex root of f(x) = 0. Show that, if g(x) is a polynomial in x with rational coefficients and g(c) ≠ 0, there then exists a polynomial h(x) of degree less than that of f(x) and with rational coefficients such that g(c)h(c) = 1.

11. Let f(x) be a rationally irreducible quadratic polynomial and c be a complex root of f(x) = 0. Show that every rational function of c with rational coefficients is uniquely expressible in the form a + bc with a and b rational numbers.

7. Forms. A polynomial of degree n is frequently spoken of as an n-ic polynomial. The reader is already familiar with the terms linear, quadratic, and cubic, as well as quartic and quintic polynomial, in the respective cases n = 1, 2, 3, 4, 5. In a similar fashion a polynomial in x_1, ..., x_q is called a q-ary polynomial. As above, we specialize the terminology in the cases q = 1, ..., 5 to be unary, binary, ternary, quaternary, and quinary. The terminology just described is used much more frequently in connection with theorems on forms than in the study of arbitrary polynomials. Thus, we shall find that our principal interest is in n-ary quadratic forms.

There are certain special forms which are quadratic in a set of variables x_1, ..., x_m, y_1, ..., y_n and which have special importance because they are linear in both x_1, ..., x_m and y_1, ..., y_n separately. We shall call such forms bilinear forms. They may be expressed as

(20) f = Σ x_i a_ij y_j (i = 1, ..., m; j = 1, ..., n),

so that we may thus write

(21) f = Σ x_i (Σ a_ij y_j)

and see that f may be regarded as a linear form in y_1, ..., y_n whose coefficients are linear forms in x_1, ..., x_m.

A bilinear form f is called symmetric if it is unaltered by the interchange of correspondingly labeled members of its two sets of variables. This statement clearly has meaning only if m = n. But then f = Σ y_i a_ij x_j if a_ij = a_ji, and hence f is symmetric if and only if

(22) a_ij = a_ji (i, j = 1, ..., n).

A quadratic form f is evidently a sum of terms of the type a_ii x_i^2 and of the type c_ij x_i x_j for i ≠ j. We may write c_ij = a_ij + a_ji with a_ij = a_ji = c_ij/2 for i ≠ j, and have c_ij x_i x_j = a_ij x_i x_j + a_ji x_j x_i, so that

(23) f = Σ a_ij x_i x_j, a_ij = a_ji (i, j = 1, ..., n).

We compare this with (22) and conclude that a quadratic form may be regarded as the result of replacing the variables y_1, ..., y_n by x_1, ..., x_n in a symmetric bilinear form. Later we shall obtain a theory of equivalence of quadratic forms and shall use the result just derived to obtain a parallel theory of symmetric bilinear forms.

A final type of form of considerable interest is the skew bilinear form. Here again m = n, and we call a bilinear form f skew if f = f(x_1, ..., x_n; y_1, ..., y_n) = -f(y_1, ..., y_n; x_1, ..., x_n). Thus skew bilinear forms are forms of the type

(24) f = Σ a_ij x_i y_j (i, j = 1, ..., n),
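The passage from the coefficients c_ij of a quadratic form to a symmetric matrix of coefficients a_ij = a_ji, and the recovery of the form by setting each y_i equal to x_i in the bilinear form, can be sketched in Python. The dictionary representation of the c_ij and the function names below are my own conventions:

```python
from fractions import Fraction

def symmetrize(coeffs, n):
    """Given f as a dict mapping 0-indexed pairs (i, j) to c_ij,
    return the symmetric matrix (a_ij): a_ij = a_ji = c_ij/2 for i != j,
    while a_ii = c_ii (the diagonal entry is added twice below)."""
    a = [[Fraction(0)] * n for _ in range(n)]
    for (i, j), c in coeffs.items():
        a[i][j] += Fraction(c, 2)
        a[j][i] += Fraction(c, 2)
    return a

def bilinear(a, x, y):
    """The bilinear form: the sum of x_i * a_ij * y_j over all i, j."""
    n = len(a)
    return sum(a[i][j] * x[i] * y[j] for i in range(n) for j in range(n))
```

Replacing y by x in `bilinear` then reproduces the value of the original quadratic form, which is exactly the observation made in the text.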
POLYNOMIALS 15

where

(25) a_ij = -a_ji (i, j = 1, ..., n).

It follows that each a_ii = -a_ii, so that every a_ii = 0. It is important for the reader to observe thus that, while (22) may be associated with both quadratic and symmetric bilinear forms, we must associate (25) only with skew bilinear forms.

ORAL EXERCISES

1. Use the language above to describe the following forms:

a) …  b) …  c) …  d) …  e) …

2. Express the following quadratic forms as sums of the kind given by (23):

a) …  b) …

8. Linear forms. The concept of linear combination has already been used without the name in several instances. Thus any polynomial in x is a linear combination of a finite number of nonnegative integral powers of x with constant coefficients; any polynomial in x_1, ..., x_q is a linear combination of a finite number of power products x_1^{e_1} · · · x_q^{e_q} with constant coefficients; and the g.c.d. of f(x) and g(x) is a linear combination (10) of f(x) and g(x) with polynomials in x as coefficients. A linear form

(27) f = a_1x_1 + ... + a_nx_n

is expressible as a linear combination of x_1, ..., x_n with constant coefficients a_1, ..., a_n. If

(28) g = b_1x_1 + ... + b_nx_n

is a second form, we define

(29) f + g = (a_1 + b_1)x_1 + ... + (a_n + b_n)x_n.

The form (27) with a_1 = ... = a_n = 0 is the zero form.
16 INTRODUCTION TO ALGEBRAIC THEORIES

Also, if c is any constant, we have

(30) cf = (ca_1)x_1 + ... + (ca_n)x_n.

We define -f to be the form such that f + (-f) = 0 and see that

(31) -f = (-1)f = (-a_1)x_1 + ... + (-a_n)x_n.

Then f = g if and only if f + (-g) = 0, that is,

(32) a_i = b_i (i = 1, ..., n).

The properties just set down are only trivial consequences of the usual properties of polynomials and, as such, may seem to be relatively unimportant. The reader is already familiar with these properties, which he has used in the computation of determinants by operations on their rows and columns. They may be formulated abstractly, however, as properties of sequences of constants (which may be thought of, if we so desire, as the coefficients of linear forms), and in this formulation they will be very important in Chapter IV and for all algebraic theory.

Let, then, u be a sequence u = (a_1, ..., a_n) of n constants a_i called the elements of the sequence u. If a is any constant, we define

(33) au = ua = (aa_1, ..., aa_n)

and call (34) au the scalar product of u by a. We now consider a second sequence v = (b_1, ..., b_n) and define the sum of u and v by

(35) u + v = (a_1 + b_1, ..., a_n + b_n).

Then the linear combination

(36) au + bv = (aa_1 + bb_1, ..., aa_n + bb_n)

has been uniquely defined for all constants a and b and all sequences u and v. The sequence all of whose elements are zero will be called the zero sequence and designated by 0. It is clearly the unique sequence z with the
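The sequence operations just defined, the scalar product (33), the sum (35), and the linear combination (36), translate directly into Python; the list representation and function names are my own conventions:

```python
def scalar_product(a, u):
    """(33): au = (a*a_1, ..., a*a_n)."""
    return [a * x for x in u]

def seq_sum(u, v):
    """(35): u + v, formed element by element."""
    return [x + y for x, y in zip(u, v)]

def linear_combination(a, u, b, v):
    """(36): the linear combination au + bv."""
    return seq_sum(scalar_product(a, u), scalar_product(b, v))
```

Adding the zero sequence leaves any sequence unaltered, as the text observes.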
POLYNOMIALS 17

property that u + z = u for every sequence u. We define the negative -u of a sequence u to be the sequence v such that u + v = 0. Evidently -u is the unique such sequence, and

(37) -u = (-1)u = (-a_1, ..., -a_n).

We see that the unique solution of the equation u + x = v is the sequence

(38) v - u = (b_1 - a_1, ..., b_n - a_n),

and we shall evidently call this sequence v minus u. The reader should observe now that the definitions and properties derived for linear combinations of sequences are precisely those which hold for the sequences of coefficients of corresponding linear combinations of linear forms, and that the usual laws of algebra for addition and multiplication hold for addition of sequences and multiplication of sequences by constants.

9. Equivalence of forms. We now consider two forms f = f(x_1, ..., x_q) and g = g(y_1, ..., y_r) of the same degree n. If f = f(x_1, ..., x_q) is a form of degree n in x_1, ..., x_q and we replace every x_i in f by a corresponding linear form

(39) x_i = a_{i1}y_1 + ... + a_{ir}y_r (i = 1, ..., q),

we obtain a form g = g(y_1, ..., y_r) of the same degree n in y_1, ..., y_r. Then we shall say that f is carried into g (or that g is obtained from f) by the linear mapping (39). If q = r and the determinant

(40) |a_ij|

is not zero, we shall say that the linear mapping (39) is nonsingular. In this case it is easily seen that we may solve (39) for y_1, ..., y_q as linear forms in x_1, ..., x_q and so obtain a linear mapping which we may call the inverse of (39). Then we shall say that f is equivalent to g if f is carried into g by a nonsingular linear mapping. This terminology is justified by the fact that the equation f(x_1, ..., x_q) = g(y_1, ..., y_q) is an identity, and thus if we replace y_1, ..., y_q in g(y_1, ..., y_q) by the corresponding linear forms in x_1, ..., x_q of the inverse mapping we obtain the original form f(x_1, ..., x_q). The statements above easily imply that if f is equivalent to g then g is also equivalent to f.
18 INTRODUCTION TO ALGEBRAIC THEORIES

We shall usually say simply that f and g are equivalent. We shall not study the equivalence of forms of arbitrary degree but only of the special kinds of forms described in Section 7, and even of those forms only under restricted types of linear mappings. We have now obtained the background needed for a clear understanding of matrix theory and shall proceed to its development.

EXERCISES

1. The linear mapping (39) of the form x_i = y_i for i = 1, ..., q is called the identical mapping. What is its effect on any form f?

2. Apply a nonsingular linear mapping to carry each of the following forms to an expression of the type a_1y_1^2 + cy_2^2. Hint: Write f = a_1(x_1 + cx_2)^2 + ... by completing the square on the term in x_1^2, and put x_1 + cx_2 = y_1, x_2 = y_2.

a) 2x_1^2 + 3x_1x_2
b) x_1^2 + 14x_1x_2 + 9x_2^2
c) 3x_1^2 + 18x_1x_2 + 24x_2^2
d) 2x_1^2 - x_1x_2
e) -3x_1^2 + 2x_1x_2 - x_2^2

3. Find the inverses of the following linear mappings:

a) …  b) …

4. Apply the linear mappings of Ex. 3 to the following forms f to obtain equivalent forms g, and apply their inverses to g to obtain f:

a) f = …
b) f = 4x_1^2 - 4x_1x_2 + 3x_2^2
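The hint to Exercise 2 can be carried out mechanically for a binary quadratic form a·x_1^2 + b·x_1x_2 + c·x_2^2 provided the coefficient a of x_1^2 is not zero. A Python sketch, with naming of my own choosing:

```python
from fractions import Fraction

def complete_square(a, b, c):
    """Carry f = a*x1^2 + b*x1*x2 + c*x2^2 (a != 0) into a*y1^2 + d*y2^2
    by the nonsingular mapping y1 = x1 + (b/(2a))*x2, y2 = x2.
    Returns the pair of new coefficients (a, d)."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    assert a != 0, "the hint requires a nonzero coefficient of x1^2"
    return a, c - b * b / (4 * a)   # d = c - b^2/(4a)
```

For item b) above, x_1^2 + 14x_1x_2 + 9x_2^2 is carried into y_1^2 - 40y_2^2.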
CHAPTER II

RECTANGULAR MATRICES AND ELEMENTARY TRANSFORMATIONS

1. The matrix of a system of linear equations. The concept of a rectangular matrix may be thought of as arising first in connection with the study of the solution of a system

(1) a_{i1}y_1 + ... + a_{in}y_n = k_i (i = 1, ..., m)

of m linear equations in n unknowns y_1, ..., y_n. We shall henceforth speak of the coefficients a_ij and k_i in (1) as scalars and shall derive our theorems with the understanding that they are constants (with respect to the variables y_1, ..., y_n) according to the usual definitions and hence satisfy the properties usually assumed in algebra for rational operations. In a later chapter we shall make a completely explicit statement about the nature of these quantities.

The array of coefficients arranged as they occur in (1) has the form

(2)  A = ( a_11  a_12  ...  a_1n )
         ( a_21  a_22  ...  a_2n )
         ( ..................... )
         ( a_m1  a_m2  ...  a_mn )

and is called the coefficient matrix of the system (1). It is not only true that the concept of a matrix arises as above in the study of systems of linear equations, but many matrix properties are obtainable by observing the effect, on the matrix of a system, of certain natural manipulations on the equations themselves with which the reader is very familiar. We shall devote this beginning chapter on matrices to that study.

Let us now recall some terminology with which the reader is undoubtedly familiar. The line

(3) u_i = (a_{i1}, a_{i2}, ..., a_{in})
20 INTRODUCTION TO ALGEBRAIC THEORIES

of coefficients in the ith equation of (1) occurs in (2) as its ith horizontal line. Thus it is natural to call u_i the ith row of the matrix A. Similarly, the coefficients of the unknown y_j in (1) form a vertical line

(4)  ( a_1j )
     ( a_2j )
     ( .... )
     ( a_mj )

which we call the jth column of A. We may now speak of A as a matrix of m rows and n columns, as an m-rowed and n-columned matrix or, briefly, as an m by n matrix. If m = n, then A is a square matrix, and we shall speak of A as an n-rowed square matrix or simply as a square matrix. This, too, is a concept and terminology which we shall use very frequently.

We shall speak of the scalars a_ij as the elements of A. The notation a_ij which we adopt for the element of A in its ith row and jth column will be used consistently, and this usage will be of some importance in the clarity of our exposition. To avoid bulky displayed equations we shall usually not use the notation (2) for a matrix but shall write instead

(5) A = (a_ij) (i = 1, ..., m; j = 1, ..., n).

Then the rows of A are 1 by n matrices and its columns are m by 1 matrices, and they may be regarded as one-rowed and one-columned matrices respectively.

ORAL EXERCISES

1. Read off the elements a_11, a_24, a_32 in the following matrices:

a) …  b) …  c) …

2. Read off the second row and the third column in each of the matrices of Ex. 1.

3. State the systems of equations (1) with constants k_i all zero and matrices of coefficients as in Ex. 1.
RECTANGULAR MATRICES 21

2. Submatrices. In solving the system (1) by the usual methods the reader is led to study subsystems of s < m of the equations in certain t < n of the unknowns. The corresponding coefficient matrix has s rows and t columns, and its elements lie in certain s of the rows and t of the columns of A. We call such a matrix an s by t submatrix B of A. If s < m and t < n, the elements in the remaining m - s rows and n - t columns of A form an m - s by n - t submatrix C of A, and we shall call C the complementary submatrix of B.

It will be desirable from time to time to regard a matrix as being made up of certain of its submatrices. Thus we write

(6) A = (A_ij) (i = 1, ..., s; j = 1, ..., t),

where now the symbols A_ij themselves represent rectangular matrices. We have thus accomplished what we shall call the partitioning of A, by what amounts to drawing lines mentally parallel to the rows and columns of A and between them and designating the arrays of elements in the smallest rectangles so formed by A_ij. We assume that for any fixed i the matrices A_i1, ..., A_it all have the same number of rows, and that for fixed k the matrices A_1k, ..., A_sk all have the same number of columns. It is then clear how each row of A is made up of rows of the A_i1, ..., A_it in adjacent positions, and similarly for columns.

Our principal use of (6) will be the case where we shall regard A as a two by two matrix

(7)  A = ( A_1  A_2 )
         ( A_3  A_4 ),

where the symbols A_1, A_2, A_3, A_4 themselves represent rectangular matrices. Then A_1 and A_2 have the same number of rows, A_3 and A_4 have the same number of rows, A_1 and A_3 have the same number of columns, and A_2 and A_4 have the same number of columns. Every row of A consists partially of a row of A_1 and of a corresponding row of A_2, or of a row of A_3 and a corresponding row of A_4.

We shall always mean that two matrices are equal if and only if they are identical, that is, have the same size and equal corresponding elements. Note our usage in (2), (5), (6), and (7) of the symbol of equality for matrices.

EXERCISES

1. State how the columns of A of (7) are connected with the columns of A_1, A_2, A_3, and A_4.

2. Introduce a notation for an arbitrary six-rowed square matrix A and partition A into a three-rowed square matrix whose elements are two-rowed square matrices.
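The partitioning (6) can be carried out mechanically once the sizes of the row and column strips are chosen. A Python sketch, with a list-of-rows representation and names of my own choosing:

```python
def partition(A, row_cuts, col_cuts):
    """Partition the matrix A (a list of rows) into the blocks A_ij of (6).
    row_cuts and col_cuts give the numbers of rows and columns in the
    successive strips; they must sum to the dimensions of A."""
    blocks, r = [], 0
    for m in row_cuts:
        row_of_blocks, c = [], 0
        for n in col_cuts:
            # the block lying in rows r..r+m-1 and columns c..c+n-1
            row_of_blocks.append([row[c:c + n] for row in A[r:r + m]])
            c += n
        blocks.append(row_of_blocks)
        r += m
    return blocks
```

With `row_cuts = [2, 1]` and `col_cuts = [2, 1]` a three-rowed square matrix is partitioned in the two by two manner of (7).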
22 INTRODUCTION TO ALGEBRAIC THEORIES

Also partition A into a two-rowed square matrix whose elements are three-rowed square matrices.

3. Read off A_1, A_2, A_3, and A_4 and state their sizes when the matrices of Ex. 5 are partitioned in the form (7) such that A_1 is a two by three matrix.

4. Which of the submatrices in Ex. 3 occur in some partitioning of A as a matrix of submatrices?

5. Write out all submatrices of the matrix

( 2 1 1 3 4 5 )
( 0 1 2 3 6 7 )
( 0 2 1 3 2 1 )
( 2 1 … )

and, if they exist, the complementary submatrices.

6. Partition the following matrices so that they become three-rowed square matrices whose elements are two-rowed square matrices, and state the results in the notation (6):

a) …  b) …

7. Partition the matrices of Ex. 6 into two-rowed square matrices whose elements are three-rowed square matrices.

3. Transposition. The theory of determinants arose in connection with the solution of the system (1). The reader will recall that many of the properties of determinants were proved only as properties of the rows of a determinant, and then the corresponding column properties were merely stated as results obtained by the process of interchanging rows and columns. Let us call the induced process transposition and define it as follows for matrices. Let A be an m by n matrix, a notation for which is given by (5), and define the matrix

(8) A' = (a_ji) (j = 1, ..., n; i = 1, ..., m).

It is an n by m matrix obtained from A by interchanging its rows and columns, so that the element a_ij in the ith row and jth column of A occurs in A' as the element in its jth row and ith column. We shall call A' the transpose of A.
RECTANGULAR MATRICES 23

Note then that in accordance with our conventions (8) could have been written as

(9) A' = (a'_ij) (a'_ij = a_ji; i = 1, ..., n; j = 1, ..., m).

We also observe the evident theorem which we state simply as

(10) (A')' = A.

The operation of transposition may be regarded as the result of a certain motion of the matrix which we shall now describe. If A is our m by n matrix and m < n, we put q = m and write

(11) A = (A_1, A_2),

where A_1 is a q-rowed square matrix. On the other hand, if n < m, we put q = n and have

(12)  A = ( A_1 )
          ( A_2 ),

where the matrix A_1 is again a q-rowed square matrix. The line of elements a_11, a_22, ..., a_qq of A and hence of A_1 is called the diagonal of A; it is a diagonal of the square matrix A_1 and is its principal diagonal. We shall call the a_ii the diagonal elements of A. Notice now that A' is obtained from A by using the diagonal of A as an axis for a rigid rotation of A, so that each row of A becomes a column of A'.

We should also observe that if A has been partitioned so that it has the form (6), then A' is the t by s matrix of matrices given by

(13) A' = (G_ij) (G_ij = A'_ji; i = 1, ..., t; j = 1, ..., s).

EXERCISES

1. Let A have the form (7). Give the corresponding notation for A'. Give also the analogous result for any matrix of Ex. 5 of Section 2.

2. Let A be a three-rowed square matrix. Find the form (2) of A if A = A', and also if A is the matrix whose elements are the negatives of those of A'.

3. In Ex. 1 assume that A = A'. What then is the form (7) of A? Obtain the analogous result if A' = -A.
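Transposition as defined in (8) amounts to a single comprehension in Python (the list-of-rows representation is my own convention), and property (10) can be checked directly:

```python
def transpose(A):
    """(8): the element in row i, column j of A appears in row j,
    column i of the transpose A'."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
```

Transposing twice returns the original matrix, which is the content of (10).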
24 INTRODUCTION TO ALGEBRAIC THEORIES

4. Prove that the determinant of every three-rowed square matrix A with the property that A' = -A is zero.

5. Solve the system (1) with the three-rowed square matrix A = … for y_1, y_2, y_3 in terms of k_1, k_2, k_3. Write the results as a system (1) with a matrix B and thus compute B. Do this also for the system (1) with matrix A', and compare the results.

4. Elementary transformations. The system (1) may be solved by the method of elimination, and the reader is familiar with the operations on equations which are permitted in this method and which yield systems said to be equivalent to (1). The resulting operations on the rows of the matrix of the system, and the corresponding operations on the columns of A, are called elementary transformations on A and will turn out to be very useful tools in the theory of matrices.

The first of our transformations is the result on the rows of A of the interchange of two equations of the defining system. We define this and the corresponding column transformation in the

DEFINITION 1. Let i ≠ r and B be the matrix obtained from A by interchanging its ith and rth rows (columns). Then B is said to be obtained from A by an elementary row (column) transformation of type 1.

The left members of (1) are linear forms. The rows (columns) of an m by n matrix are sequences of n (of m) elements, and the operations of addition and scalar multiplication (i.e., multiplication by a scalar) of such sequences were defined in Section 1.8.* The addition of a scalar multiple of one equation of (1) to another results in the addition of a corresponding multiple of a corresponding linear form to another and hence in a corresponding result on the rows of A. Thus we make the following

DEFINITION 2. Let i and r be distinct integers, c be a scalar, and B be the matrix obtained by the addition to the ith row (column) of A of the multiple by c of its rth row (column). Then B is said to be obtained from A by an elementary row (column) transformation of type 2.

Our final type of transformation is induced by the multiplication of an

* We shall use a corresponding notation henceforth when we make references anywhere in our text to results in previous chapters. Thus, for example, by Section 4.7, Theorem 4.8, Lemma 4.9, equation (4.10) we shall mean Section 7, Theorem 8, Lemma 9, equation (10) in Chapter IV. However, if the prefix is omitted, as, for example, in Theorem 8, we shall mean that theorem of the chapter in which the reference is made.
RECTANGULAR MATRICES 25

equation of the system (1) by a scalar a ≠ 0. The restriction a ≠ 0 is made so that A will be obtainable from B by the like transformation for a^(-1). Later we shall discuss matrices whose elements are polynomials in x and shall use elementary transformations with polynomial scalars a. We shall then evidently require a to be a polynomial with a polynomial inverse and hence to be a constant not zero. In view of this fact we shall phrase the definition in our present environment so as to be usable in this other situation, and hence state it as

DEFINITION 3. Let the scalar a possess an inverse a^(-1) and the matrix B be obtained as the result of the multiplication of the ith row (column) of A by a. Then B is said to be obtained from A by an elementary row (column) transformation of type 3.

The fundamental theorems in the theory of matrices are connected with the study of the matrices obtained from a given matrix A by the application of a finite sequence of elementary transformations, restricted by the particular results desired. However, it is of basic importance to study first what occurs if we make no restriction whatever on the elementary transformations allowed. For convenience in our discussion we first make the

DEFINITION. Let A and B be m by n matrices and let B be obtainable from A by the successive application of finitely many arbitrary elementary transformations. Then we shall say that A is rationally equivalent to B and indicate this by writing A ≅ B.

First, observe some simple consequences of our definition. Every m by n matrix is rationally equivalent to itself, for the elementary transformations of type 2 with c = 0 and of type 3 with a = 1 are identical transformations leaving all matrices unaltered. Observe next that the inverse of any transformation of type 2 defined for c is that defined for -c, the inverse of the transformation of type 3 defined by a is that defined for a^(-1), and the inverse of a transformation of type 1 is that transformation itself. Thus, if an elementary transformation carries A to B, there is an inverse transformation of the same type carrying B to A. But then A is rationally equivalent to B if and only if B is rationally equivalent to A. Finally, we verify the fact that if A is rationally equivalent to B and B is rationally equivalent to C, then the combination of the elementary transformations which carry A to B with those which carry B to C will carry A to C, so that A is rationally equivalent to C.*

* The reader should also verify the fact that if we apply any elementary row transformation to A and then any column transformation to the result, the matrix obtained is the same as that which we obtain by applying first the column transformation and then the row transformation.
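The three types of elementary row transformations of Definitions 1-3 can be sketched in Python; the list-of-rows representation and the function names are my own, not the text's:

```python
def etype1(A, i, r):
    """Definition 1: interchange rows i and r (i != r)."""
    B = [row[:] for row in A]
    B[i], B[r] = B[r], B[i]
    return B

def etype2(A, i, r, c):
    """Definition 2: add c times row r to row i (i != r)."""
    B = [row[:] for row in A]
    B[i] = [a + c * b for a, b in zip(B[i], B[r])]
    return B

def etype3(A, i, a):
    """Definition 3: multiply row i by an invertible scalar a."""
    assert a != 0, "a must possess an inverse"
    B = [row[:] for row in A]
    B[i] = [a * x for x in B[i]]
    return B
```

Each transformation is undone by one of the same type, exactly as observed above: type 1 by itself, type 2 by the multiple -c, and type 3 by a^(-1).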
26 INTRODUCTION TO ALGEBRAIC THEORIES

We have now shown that in order to prove that A and B are rationally equivalent it suffices to prove them both rationally equivalent to the same matrix C. Thus we may, and shall, use the terminology "A and B are rationally equivalent" in place of "A is rationally equivalent to B" in the definition above. It is important also to observe that we may arbitrarily permute the rows of A by a sequence of elementary row transformations of type 1, and similarly we may permute its columns; for any permutation results from some properly chosen sequence of interchanges.

As a tool in such proofs we then prove the following

LEMMA 1. Let r < m, s < n, A_1 and B_1 be r by s rationally equivalent matrices, and A_2 and B_2 be rationally equivalent m - r by n - s matrices. Then the m by n matrices

A = ( A_1  0  )        B = ( B_1  0  )
    ( 0    A_2 ),           ( 0    B_2 )

are rationally equivalent.

For it is clear that any elementary transformation on the first r rows and s columns of A induces a corresponding transformation on A_1 and leaves A_2 and the zero matrices bordering it unaltered. Clearly, the sequence of such transformations induced by the transformations carrying A_1 to B_1 will replace A by the matrix

( B_1  0  )
( 0    A_2 ).

We similarly follow this sequence of elementary transformations by elementary transformations on the last m - r rows and n - s columns, which carry A_2 to B_2, and obtain B.

5. Determinants. Before continuing further with the study of rational equivalence we shall introduce the familiar properties of determinants in the language of matrix theory and shall also define some important special types of matrices. We shall then discuss another result used for the types of proofs mentioned above. Let B be the square matrix

(14) B = (b_ij) (i, j = 1, ..., t).

The corresponding symbol

(15)  |B| = | b_11 ... b_1t |
            | ............. |
            | b_t1 ... b_tt |

is called a t-rowed determinant or determinant of order t. It is defined as the sum

(16) Σ (-1)^k b_{1 i_1} b_{2 i_2} · · · b_{t i_t}

of t! terms, where the sequence of subscripts i_1, ..., i_t ranges over all permutations of 1, 2, ..., t, and k is the number of interchanges by which the permutation i_1, ..., i_t may be carried into 1, 2, ..., t. That the sign (-1)^k is unique is proved in L. E. Dickson's First Course in the Theory of Equations, and we shall assume this result as well as all the consequent properties of determinants derived there.
RECTANGULAR MATRICES 27

The quantity D so defined will be spoken of here as the determinant of the matrix B, and we shall indicate this by writing

(17) D = |B|

(read: D equals determinant B). Nonsquare matrices A do not have determinants, but their square submatrices have determinants, called the minors of A. If A is a square matrix of n > t rows, the complementary submatrix of any t-rowed square submatrix B of A is an (n - t)-rowed square matrix whose determinant and that of B are minors of A called complementary minors of A. In particular, every element a_ij of a matrix A defines a one-rowed square submatrix of A whose determinant is the element itself. Thus we have seen that the elements of a matrix may be regarded either as its one-rowed square submatrices or as its one-rowed minors.

We now pass to a statement of some of the most important results on determinants. The result on the interchange of rows and columns of determinants mentioned in Section 3 may now be stated as

LEMMA 2. Let A be a square matrix. Then

(18) |A'| = |A|.

The next three properties of determinants are those frequently used in the computation of determinants, and we shall state them now in the language we have just introduced.

LEMMA 3. Let B be the matrix obtained from a square matrix A by an elementary transformation of type 1. Then |B| = -|A|.

LEMMA 4. Let B be the matrix obtained from a square matrix A by an elementary transformation of type 2. Then |B| = |A|.

LEMMA 5. Let B be the matrix obtained from a square matrix A by an elementary transformation of type 3 defined for a scalar a. Then |B| = a|A|.
28 INTRODUCTION TO ALGEBRAIC THEORIES

Lemma 3 may be used in a simple fashion to prove

LEMMA 6. If a square matrix has two equal rows or columns, its determinant is zero.

Another result of this type is

LEMMA 7. If a square matrix has a zero row or column, its determinant is zero.

Finally, we have

LEMMA 8. Let A, B, C be n-rowed square matrices such that the ith row (column) of C is the sum of the ith row (column) of A and that of B, while all other rows (columns) of B and C are the same as the corresponding rows (columns) of A. Then

(19) |C| = |A| + |B|.

There are, of course, many other properties of determinants, and of these we shall use only very few. Those we shall use are, of course, also well known to the reader. Of particular importance is that result which might be used to define determinants by an induction on order and which does yield the actual process ordinarily used in the expansion of a determinant. Let A = (a_ij) be an n-rowed square matrix and define A_ij to be the complementary minor of a_ij. Then the result we refer to states that if we define the cofactors c_ji = (-1)^(i+j) A_ij, that is, the properly signed and labeled minors, then

(20) |A| = Σ_{k=1}^{n} a_ik c_ki = Σ_{k=1}^{n} c_ik a_ki (i = 1, ..., n).

Thus the determinant of A is obtainable as the sum of the products of the elements a_ij in any row (column) of A by their cofactors c_ji.

Let B be the matrix obtained from a square matrix A by replacing the ith row of A by its qth row, q ≠ i. Then B has two equal rows, and by Lemma 6 |B| = 0. We expand |B| as above according to the elements of its ith row and obtain as its vanishing determinant the sum of the products of all elements in the qth row of A by the cofactors of the elements in the ith row of A. Combining this result with the corresponding property about columns, we have

(21) Σ_{k=1}^{n} a_ik c_kq = 0, Σ_{k=1}^{n} c_sk a_kj = 0 (i ≠ q; s ≠ j).
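The expansion (20) along a row can be turned into a recursive computation of the determinant, which is exactly the induction on order mentioned above. A Python sketch, with representation and names of my own choosing:

```python
def minor(A, i, j):
    """The complementary submatrix of a_ij: delete row i and column j."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Cofactor expansion (20) along the first row of the square matrix A."""
    if len(A) == 1:
        return A[0][0]          # a one-rowed minor is the element itself
    return sum((-1) ** k * A[0][k] * det(minor(A, 0, k))
               for k in range(len(A)))
```

A matrix with two equal rows then gives the value zero, in agreement with Lemma 6.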
RECTANGULAR MATRICES 29

The equations (20) and (21) exhibit certain relations between the arbitrary square matrix A and the matrix we define as

(22) adj A = (c_ij).

These relations will have important later consequences. We call the matrix (22) the adjoint of A and see that, if A = (a_ij) is an n-rowed square matrix, its adjoint is the n-rowed square matrix with the cofactor of the element which appears in the jth row and ith column of A as the element in its own ith row and jth column. Clearly, if A = 0, then adj A = 0.

EXERCISES

1. Compute the adjoint of each of the following matrices:

a) …  b) …  c) …

2. Expand the determinants below and verify the following instances of Lemma 8:

a) …  b) …

6. Special matrices. There are certain square matrices which have special forms but which occur so frequently in the theory of matrices that they have been given special names. The most general of these is the triangular matrix, that is, a square matrix having the property that either all its elements to the right or all its elements to the left of its diagonal are zero. Thus a square matrix A = (a_ij) is triangular if it is true that either a_ij = 0 for all j > i or a_ij = 0 for all j < i. It is clear that A is triangular if and only if A' is triangular. Moreover, we have

THEOREM 1. The determinant of a triangular matrix is the product a_11 a_22 · · · a_nn of its diagonal elements.

The result above is clearly true if n = 1, so that A = (a_11). We assume it true for square matrices of order n - 1 and complete our induction by expanding |A| according to the elements of its first column or its first row in the respective cases above.
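A sketch of (22) in Python may help with Exercise 1: the element in the ith row and jth column of adj A is the cofactor of the element in the jth row and ith column of A. The relations (20) and (21) then say that the product of A by adj A has |A| down its diagonal and zeros elsewhere, which a small example confirms. The representation and names below are my own conventions:

```python
def det(A):
    """Recursive cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k]
               * det([r[:k] + r[k + 1:] for r in A[1:]])
               for k in range(len(A)))

def adjoint(A):
    """(22): the (i, j) element is (-1)^(i+j) times the determinant of
    the complementary submatrix of the element in row j, column i of A."""
    n = len(A)
    return [[(-1) ** (i + j)
             * det([r[:i] + r[i + 1:] for k, r in enumerate(A) if k != j])
             for j in range(n)]
            for i in range(n)]
```

For a two-rowed matrix the adjoint simply swaps the diagonal elements and negates the others.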
Clearly, a diagonal matrix is triangular, and so its determinant is the product of its diagonal elements. If all the diagonal elements of a diagonal matrix are equal, we call the matrix a scalar matrix. Any scalar matrix may be indicated by aI, where a = a_11 is the common value of its diagonal elements; such a matrix has determinant a^n. The scalar matrix for which a = 1 is called the n-rowed identity matrix and will usually be designated by I. If there be some question as to the order of I, or if we are discussing several identity matrices of different orders, we shall indicate the order by a subscript and thus shall write either I_n or I, as is convenient, for the n-rowed identity matrix.

In any discussion of matrices we shall use the notation 0 to represent not only the number zero but any zero matrix, for it is natural to call any m by n matrix all of whose elements are zeros a zero matrix. The reader will find that this usage will cause neither difficulty nor confusion.

We shall frequently find it desirable to consider square matrices of either of the forms

(23)   A = ( A_1  0  )      or      A = ( A_1  A_2 )
           ( A_3  A_4 )                 (  0   A_4 ) ,

where A_1 is a square matrix, so that A_4 is necessarily square. Then the reader should verify the fact that the Laplace expansion of determinants implies that

(24)  |A| = |A_1| · |A_4| .

The property above and that of Theorem 1 are special instances of a more general situation. Let A be a square matrix partitioned as in (6), and suppose that s = t and that the A_ii are all square matrices. Then the Laplace expansion clearly implies that if all the A_ij are zero matrices for either all i > j or all i < j, then |A| is the product of the determinants of the A_ii. Evidently Theorem 1 is the case where the A_ii are one-rowed square matrices, and the result considered in (24) is the case where t = 2. In connection with the discussion just completed we shall define a notation which is quite useful.
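Theorem 1 and equation (24) can both be checked on small matrices. The following sketch is my own illustration (the sample matrices are arbitrary), reusing a Laplace-expansion determinant:

```python
# Theorem 1: the determinant of a triangular matrix is the product of
# its diagonal elements.  Equation (24): for a matrix in one of the
# partitioned forms (23), |A| = |A_1| |A_4|.

def minor(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

# a triangular matrix: zeros below the diagonal
T = [[2, 7, 1],
     [0, 3, 5],
     [0, 0, 4]]
assert det(T) == 2 * 3 * 4                   # Theorem 1

# the first form of (23): A_1 two-rowed, A_4 one-rowed, A_2 = 0
A1 = [[1, 2], [3, 4]]
A4 = [[5]]
A = [[1, 2, 0],
     [3, 4, 0],
     [7, 8, 5]]          # the block A_3 = (7 8) is arbitrary
assert det(A) == det(A1) * det(A4)           # (24)
print(det(A))
```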
Let A be a square matrix partitioned as in (6) with s = t, the A_ii all square matrices, and every A_ij = 0 for i ≠ j. Then A is composed of zero matrices and the matrices A_i = A_ii, which are what we may call its diagonal blocks, and we shall indicate this by writing

(25)  A = diag {A_1, . . . , A_t} .

Clearly, the determinant of A is the product of the determinants of its diagonal blocks A_1, . . . , A_t.

In closing we note the following result, which we referred to at the close of Section 4.

LEMMA 9. Every nonzero m by n matrix A is rationally equivalent to a matrix

(26)   ( I  0 )      or      I ,
       ( 0  0 )

where I is an identity matrix.

For by elementary transformations of type 1 we may carry any nonzero element a_pq of A into the element b_11 of a matrix B which is rationally equivalent to A. By an elementary transformation of type 3 defined for a = b_11^(−1) we replace B by a rationally equivalent matrix C with c_11 = 1. We then apply elementary row transformations of type 2 with c = −c_r1 to replace C by the rationally equivalent matrix D = (d_ij) such that d_11 = 1 and d_r1 = 0 for r > 1, and then use elementary column transformations of type 2 with c = −d_1r to replace D by a matrix whose first row and column consist of the element 1 followed by zeros. Thus A is rationally equivalent to a matrix

(29)   ( 1  0  )
       ( 0  A_1 ) ,

where A_1 is an m − 1 by n − 1 matrix. But then either A_1 = 0, and we have (26) for I = I_1, or our proof shows that A_1 is rationally equivalent to a matrix of the form (29), and, by Lemma 1, so is A.
After finitely many such steps we obtain (26). We shall show later that the number of rows in the matrix I of (26) is uniquely determined by A. Not every matrix may be carried into the form (26) by row transformations only — take, e.g., the one-rowed matrix A = (1 1).

EXERCISES

1. Carry the following matrices into rationally equivalent matrices of the form (26).

2. Apply elementary row transformations only and carry each of the following matrices into a matrix of the form (26).

3. Apply elementary column transformations only and carry each of the following matrices into a matrix of the form (26).

4. Show that if the determinant of a square matrix A is not zero, then A can be carried into the identity matrix by elementary row transformations alone. Hint: The property |A| ≠ 0 is preserved by elementary transformations. Some element in the first column must not be zero, and by row transformations we may carry A into a matrix with ones on the diagonal and zeros below, and then into I.

7. Rational equivalence of rectangular matrices. The largest order of any nonvanishing minor of a rectangular matrix A is called the rank of A. Note that we are assigning zero as the rank of the zero matrix and that the rank of any nonzero matrix is at least 1.

The result of (20) states that every (t + 1)-rowed minor of A is a sum of numerical multiples of its t-rowed minors, so that if all the latter are zero so are the former. We thus clearly have

LEMMA 10. Let all (r + 1)-rowed minors of A vanish. Then the rank of A is at most r.

We may also state this result as

LEMMA 11. Let A have a nonzero r-rowed minor, and let all (r + 1)-rowed minors of A vanish. Then r is the rank of A.

The problem of computing the rank of a matrix A would seem from our definition and lemmas to involve as a minimum requirement the computation of at least one r-rowed minor of A and of all (r + 1)-rowed minors. The number of determinants to be computed would then normally be rather large, and the computations themselves generally quite complicated. However, the problem may be tremendously simplified by the application of elementary transformations. We are thus led to study the effect of such transformations on the rank of a matrix.

Let A_0 first result from the application of an elementary row transformation of either type 1 or type 3 to A. By Lemmas 3 and 5 every t-rowed minor of A_0 is the product by a nonzero scalar of a uniquely corresponding t-rowed minor of A, and it follows that A and A_0 have the same rank.

Next let A_0 result when we add to the ith row of A the product by c ≠ 0 of its qth row, and let B_0 be a t-rowed square submatrix of A_0. The correspondingly placed submatrix B of A is equal to B_0 if no row of B_0 is a part of the ith row of A_0. If a row of B_0 is in the ith row of A_0 but no row is in the qth row, then by Lemma 8 we have |B_0| = |B| + c|C|, where C is a t-rowed square matrix all but one of whose rows coincide with those of B, this remaining row being obtained by replacing the elements of B in the ith row of A by the correspondingly columned elements in its qth row. But then it is easy to see that C is a minor of A as well as of A_0. If, finally, a row of B_0 is in the ith row of A_0 and a row of B_0 is in the qth row of A_0, we have |B_0| = |B|.

If A has rank r, we put t = r + 1 and see that every (r + 1)-rowed minor of A vanishes, so that |B_0| = 0 for every (r + 1)-rowed minor B_0 of A_0, and by Lemma 10 the rank of A_0 is at most r. But A results from the application of an elementary row transformation of type 2 to A_0, so that the rank of A is likewise at most that of A_0, and A and A_0 have the same rank.

We observe, finally, that if an elementary row transformation be applied to the transpose A' of A to obtain A_1, and the corresponding column transformation be applied to A to obtain A_0, then A_1' = A_0.
By the above proof A' and A_1 have the same rank. But, by Lemma 2, A and A' have the same minors and hence the same rank, and so have A_1 and A_1' = A_0. It follows that A and A_0 have the same rank. Thus any two rationally equivalent matrices have the same rank.

Conversely, let the rank r of two m by n matrices A and B be the same, and use Lemma 9 to carry A into a rationally equivalent matrix (26). It is clear that the rank of the matrix in (26) is the number of rows in the matrix I, so that I = I_r and this matrix is determined by r alone. Similarly, B is rationally equivalent to this same matrix, and then Theorem 2 implies, as in Ex. 4 of Section 6, that A is rationally equivalent to B. We have thus proved what we regard as the principal result of this chapter,

Theorem 3. Two m by n matrices are rationally equivalent if and only if they have the same rank.

A matrix is called nonsingular if it is a square matrix and its determinant is not zero. Clearly, an n-rowed nonsingular matrix has rank n, and we have also the consequent

COROLLARY. Every n-rowed nonsingular matrix is rationally equivalent to the n-rowed identity matrix.

In closing let us observe a result of the application to a matrix of either the row transformations only or the column transformations only. We shall prove

Theorem 4. Every m by n matrix of rank r > 0 may be carried, by a sequence of elementary row or column transformations only, into a matrix of the respective forms

(31)   ( G )      ,      ( H  0 ) ,
       ( 0 )

where G is an r-rowed matrix, H is an r-columned matrix, and both G and H have rank r.

For an m by n matrix A of rank r is rationally equivalent to

(30)   ( I_r  0 )
       (  0   0 ) ,

and we may obtain (30) by first applying all the row transformations and then all the column transformations of a sequence carrying A into (30). If we then apply the inverses of the column transformations in reverse order to (30), we obtain the result of the application of the row transformations alone to A. But column transformations applied to (30) clearly carry it into a matrix of the form given by the first matrix of (31). By the above proof A and this matrix have the same rank; and, since its bottom m − r rows are zero, it is evident that its rank is that of G, so that G has rank r. The result for column transformations is obtained similarly.

EXERCISES

1. Compute the rank r of the following matrices by using elementary transformations to carry each into a matrix with all but r rows (or columns) of zeros and an obvious r-rowed nonzero minor.

2. Carry the first of each of the following pairs of matrices into the second by elementary transformations. Hint: If necessary carry A and B into the form (30), and then apply the inverses of those transformations which carry B into (30) to carry (30) into B.
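The reduction asked for in these exercises can be carried out mechanically. The sketch below is my own illustration in modern notation (the function name and sample matrix are not from the text); it uses exact rational arithmetic so the type-3 transformations introduce no rounding, carries a matrix into the form (26), and reports the number of rows of the identity block — that is, the rank.

```python
# Lemma 9 as an algorithm: elementary row and column transformations
# carry a nonzero matrix into an identity block bordered by zeros.
from fractions import Fraction

def canonical_form(A):
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    r = 0
    while r < m and r < n:
        # type 1: bring a nonzero element into the (r, r) position
        piv = next(((i, j) for i in range(r, m) for j in range(r, n)
                    if A[i][j] != 0), None)
        if piv is None:
            break
        i, j = piv
        A[r], A[i] = A[i], A[r]
        for row in A:
            row[r], row[j] = row[j], row[r]
        # type 3: make the pivot 1
        A[r] = [x / A[r][r] for x in A[r]]
        # type 2: clear the rest of the pivot column, then the pivot row
        for i in range(m):
            if i != r and A[i][r] != 0:
                c = A[i][r]
                A[i] = [a - c * b for a, b in zip(A[i], A[r])]
        for j in range(r + 1, n):
            c = A[r][j]
            if c != 0:
                for row in A:
                    row[j] -= c * row[r]
        r += 1
    return A, r

C, rank = canonical_form([[2, 4, 2],
                          [1, 2, 3]])
print(rank)        # the number of rows of I in (26)
```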
CHAPTER III

EQUIVALENCE OF MATRICES AND OF FORMS

1. Multiplication of matrices. Suppose that x_1, . . . , x_m and y_1, . . . , y_n are variables related by a system of linear equations

(1)  x_i = Σ_j a_ij y_j   (i = 1, . . . , m) .

In Section 1.9 this system was said to define a linear mapping carrying the x_i to the y_j, and we call the m by n matrix A = (a_ij) the matrix of the mapping (1). Suppose now that z_1, . . . , z_q are variables related to y_1, . . . , y_n by a second linear mapping

(2)  y_j = Σ_k b_jk z_k   (j = 1, . . . , n) ,

with n by q matrix B = (b_jk). Then, if we substitute (2) in (1), we obtain a third linear mapping

(3)  x_i = Σ_k c_ik z_k   (i = 1, . . . , m) ,

carrying the x_i to the z_k, with m by q matrix C = (c_ik), and it is easily verified by substitution that

(4)  c_ik = Σ_j a_ij b_jk   (i = 1, . . . , m ; k = 1, . . . , q) .

The linear mapping (3) is usually called the product of the mappings (1) and (2), and we shall also write C = AB and call the matrix C the product of the matrix A by the matrix B. We have now defined the product AB of an m by n matrix A and an n by q matrix B to be a certain m by q matrix C. Moreover, we have defined C so that the element c_ik in its ith row and kth column is obtained
as the sum c_ik = a_i1 b_1k + · · · + a_in b_nk of the products of the elements a_ij in the ith row

(5)  (a_i1, a_i2, . . . , a_in)

of A by the corresponding elements b_jk in the kth column

(6)  (b_1k, b_2k, . . . , b_nk)'

of B. Thus we have stated what we shall speak of as the row by column rule for multiplying matrices.

We have not defined, and shall not define, the product AB of two matrices in which the number of columns in A is not the same as the number of rows in B. Thus the fact that AB is defined need not imply that BA is defined. But when both are defined, they are generally not equal and may not even be matrices of the same size; this latter fact is clearly so if, for example, A is m by n, B is n by m, and m ≠ n. If A and B are n-rowed square matrices, we shall say that A and B are commutative if AB = BA. Note the examples of noncommutative square matrices in the exercises below.

Observe that if either A or B is a zero matrix, the product AB is also. Moreover, if A is an m by n matrix, evidently

(7)  I_m A = A I_n = A ,

where I_r represents the r-rowed identity matrix defined for every r as in Section 2.6. Observe also that I_m is the matrix of the identical mapping x_i = y_i for i = 1, . . . , m, and thus in this case the product of (1) by (2) is immediately (2); hence (7) is trivially true.

Finally, let us observe the following

Theorem 1. The transpose of a product of two matrices is the product of their transposes in reverse order. In symbols we state this result as

(8)  (AB)' = B'A' .

Here A is an m by n matrix and B an n by q matrix, so that (AB)' is a q by m matrix which is the product of the q by n matrix B' and the n by m matrix A'.
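The row by column rule (4) and the transpose rule (8) can be checked directly. The following sketch is my own illustration; the rectangular sample matrices are arbitrary, and note that AB and BA here are both defined but are not even of the same size.

```python
# The row by column rule (4) and the transpose rule (8): (AB)' = B'A'.

def matmul(A, B):
    """Product of an m-by-n matrix by an n-by-q matrix via rule (4)."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 0],
     [3, 1, 4]]          # 2 by 3
B = [[2, 1],
     [0, 1],
     [5, 2]]             # 3 by 2

assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))  # (8)
print(matmul(A, B))      # a 2 by 2 matrix
print(matmul(B, A))      # a 3 by 3 matrix
```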
We leave the direct application of the row by column rule to prove this result as an exercise for the reader.

EXERCISES

1. Compute (3) and hence the product C = AB for the following linear changes of variables (mappings), and afterward compute C by the row by column rule.

2. Compute the following matrix products AB; compute also BA in the cases where the product is defined.

3. Let the symbol E_ij represent the three-rowed square matrix with unity in the ith row and jth column and zeros elsewhere. Verify by explicit computation that E_ij E_jk = E_ik and that if j ≠ g then E_ij E_gk = 0.

2. The associative law. It is important to know that matrix multiplication has the property

(9)  (AB)C = A(BC)

for every m by n matrix A, n by q matrix B, and q by s matrix C. This result is known as the associative law for matrix multiplication, and it may be shown as a consequence that no matter how we group the factors in forming a product A_1 A_2 · · · A_t the result is the same. In particular, the powers A^t of any square matrix are unique. We shall assume these two consequences of (9) without further proof and refer the reader to treatises on the foundations of mathematics for general discussions of such questions.

To prove (9) we write A = (a_ij), B = (b_jk), C = (c_kl), where in all cases i = 1, . . . , m; j = 1, . . . , n; k = 1, . . . , q; l = 1, . . . , s. Then it is clear that the element in the ith row and lth column of G = (AB)C is

(10)  g_il = Σ_k ( Σ_j a_ij b_jk ) c_kl ,

while that in the same position in H = A(BC) is

(11)  h_il = Σ_j a_ij ( Σ_k b_jk c_kl ) .

Each of these expressions is a sum of nq terms which are respectively of the forms (a_ij b_jk) c_kl and a_ij (b_jk c_kl). But those terms in the respective sums with the same sets of subscripts are equal, since we have already assumed that the elements of our matrices satisfy the associative law a(bc) = (ab)c. Hence g_il = h_il for all i and l, G = H, and (9) is proved.

EXERCISES

Compute the products (AB)C and A(BC) in the following cases.

3. Products by diagonal and scalar matrices. Let A = (a_ij) be an m by n matrix and B be a diagonal matrix. If B is m-rowed and we designate its ith diagonal element by b_i, our definition of product implies that

(12)  BA = (b_i a_ij)   (i = 1, . . . , m ; j = 1, . . . , n) ,

while if B is n-rowed, then

(13)  AB = (a_ij b_j)   (i = 1, . . . , m ; j = 1, . . . , n) .

Thus the product of a matrix A by a diagonal matrix B on the left is obtained as the result of multiplying the rows of A in turn by the corresponding diagonal elements of B; the product of A by a diagonal matrix on the right is the result of multiplying the columns of A in turn by the corresponding diagonal elements of B. As an immediate consequence of the simple case b_1 = . . . = b_n we have the very simple

Theorem 2. Every n-rowed scalar matrix is commutative with all n-rowed square matrices.

We next prove

Theorem 3. Let the diagonal elements of an n-rowed diagonal matrix B be all distinct. Then the only n-rowed square matrices commutative with B are the n-rowed diagonal matrices.

For let A = (a_ij) and B be as stated. Then from (12) and (13) we see that

(14)  AB = BA if and only if (b_i − b_j) a_ij = 0   (i, j = 1, . . . , n) .

Thus, if i ≠ j and b_i ≠ b_j, then (14) implies that a_ij = 0. This gives the result we stated.

We may now prove the converse of Theorem 2, a result which is the inspiration of the name scalar matrix.

Theorem 4. The only n-rowed square matrices which are commutative with every n-rowed square matrix are the scalar matrices.

For let

(15)  B = (b_ij)   (i, j = 1, . . . , n)

be commutative with every n-rowed square matrix A. We shall select A in various ways to obtain our theorem. First we let E_j be the matrix with unity in its jth row and jth column and zeros elsewhere and put BE_j = E_jB. Then the jth column of BE_j is the same as the jth column of B, while all other columns of BE_j are zero, and the jth row of E_jB is the same as that of B, while all other rows are zero. Since these two products are equal, the elements b_ij with i ≠ j must all be zero, and B is a diagonal matrix. Now let D_j be the matrix with unity in its first row and jth column and zeros elsewhere. The product BD_j has b_11 in its first row and jth column and is equal to the matrix D_jB, which has b_jj in this same place. Hence b_jj = b_11 = b for j = 2, . . . , n, and B is the scalar matrix bI_n.

Let us now observe that if A is any m by n matrix and a is any scalar, then

(16)  (aI_m)A = A(aI_n)

is the m by n matrix whose element in the ith row and jth column is the product by a of the corresponding element of A. This is then a type of product

(17)  aA = Aa

like that defined in Chapter I for sequences, and we shall call such a product the scalar product of a by A. However, we have defined (17) as the instances (16) of our matrix product (4).

EXERCISES

1. Compute the products AB and BA by the use of (12) and (13), as well as by the row by column rule.

2. Find all three-rowed square matrices B such that AB = BA for each of the matrices A given.
3. Prove by direct multiplication that BA = −AB if i² = −1 and

B = ( i   0 )      ,      A = ( 0  1 )
    ( 0  −i )                 ( 1  0 ) .

4. Let w be a primitive cube root of unity. Prove that BA = wAB if

B = ( 1  0  0  )          ( 0  0  1 )
    ( 0  w  0  )  ,   A = ( 1  0  0 )
    ( 0  0  w² )          ( 0  1  0 ) .

5. Compute BAB if

B = ( 0  1 )      ,      A = ( c   ic )
    ( 1  0 )                 ( ic  d  ) ,

where c and d are any ordinary complex numbers.

6. Show that the matrix B of Ex. 4 has the property B³ = I and that the matrix A of Ex. 3 has the property A² = I. Obtain similar results for the matrices of Ex. 4.

4. Elementary transformation matrices. We shall show that the matrix which is the result of the application of any elementary row (column) transformation to an m by n matrix A is a corresponding product EA (the product AN), where E (where N) is a uniquely determined square matrix. We might, of course, give a formula for each E and verify the statement, but it is simpler to describe E and to obtain our results by a device which is a consequence of the following

Theorem 5. Let A be an m by n matrix and B be an n by q matrix, so that C = AB is an m by q matrix. Apply an elementary row (column) transformation to A (to B), resulting in what we shall designate by A_0 (by B^(0)), and then the same elementary transformation to C, resulting in C_0 (in C^(0)). Then

(18)  C_0 = A_0 B ,   C^(0) = A B^(0) .

For proof we see first that if we replace a_ij by a_qj in the right members of (4) we obtain the result of an elementary row transformation of type 1 on AB by applying it to A. Our definitions also imply that for elementary row transformations of type 3 the result stated as (18) follows from

(19)  Σ_j (c a_ij) b_jk = c ( Σ_j a_ij b_jk )   (i = 1, . . . , m ; k = 1, . . . , q) .

Finally, we see that for those of type 2 it follows from

(20)  Σ_j (a_ij + c a_qj) b_jk = Σ_j a_ij b_jk + c ( Σ_j a_qj b_jk )   (i = 1, . . . , m ; k = 1, . . . , q) ,

and we have proved the first equation in (18). The corresponding column result C^(0) = AB^(0) has an obvious parallel proof, or may be thought of as obtained by the process of transposition.

We now write A = IA, where I is the m-rowed identity matrix, and apply Theorem 5 to obtain A_0 = I_0 A, in which I_0 is the matrix called E above. Here we define E_ij to be the matrix obtained by interchanging the ith and jth rows of I, P_ij(c) to be the matrix obtained by adding c times the jth row of I to its ith row, and R_i(a) to be the matrix obtained by multiplying the ith row of I by a ≠ 0. We shall call E_ij, P_ij(c), and R_i(a) elementary transformation matrices of types 1, 2, and 3, respectively. Then we see that to apply any elementary row transformation to A we simply multiply A on the left by the corresponding E_ij, P_ij(c), or R_i(a).

Similarly, if E_ij is n-rowed, the product AE_ij is the result of an elementary column transformation of type 1 on A; if P_ij(c) is n-rowed, then AP_ji(c) is the result of an elementary column transformation of type 2 on A; and if R_i(a) is n-rowed, then AR_i(a) is the result of an elementary column transformation of type 3 on A. Thus the elementary column transformations give rise to exactly the same set of elementary transformation matrices as were obtained from the row transformations. It is surely unnecessary to supply further details.

Observe, first of all, that

(21)  (E_ij)' = E_ij ,   E_ij E_ij = I ,

(22)  [P_ij(c)]' = P_ji(c) ,   P_ij(c) P_ij(−c) = P_ij(−c) P_ij(c) = I ,

(23)  [R_i(a)]' = R_i(a) ,   R_i(a) R_i(a^(−1)) = R_i(a^(−1)) R_i(a) = I .
We shall now interpret the results of Section 2.2 in terms of elementary transformation matrices.

LEMMA 1. Let A and B be m by n matrices. Then A and B are rationally equivalent if and only if there exist elementary transformation matrices P_1, . . . , P_s, Q_1, . . . , Q_t such that

B = (P_s · · · P_1) A (Q_1 · · · Q_t) .

The corollary to Theorem 2.3 is the case of Lemma 1 in which A is nonsingular and B is the n-rowed identity matrix. Since, by (21), (22), and (23), the inverse of an elementary transformation matrix is an elementary transformation matrix of the same type, we thus obtain

LEMMA 2. Every nonsingular matrix is a product of elementary transformation matrices.

We shall close with an important consequence of Theorem 2.4,

Theorem 6. The rank of a product of two matrices does not exceed the rank of either factor.

For, if the rank of A is r, there exists by Theorem 2.4 a sequence of elementary transformations carrying A into an m by n matrix A_0 whose bottom m − r rows are all zero. If we apply these transformations to C = AB we obtain, by Theorem 5, a rationally equivalent matrix C_0 = A_0B. Then the bottom m − r rows of the m-rowed matrix C_0 are all zero, so that the rank of C_0, which is the rank of C, is at most r. The corresponding relation between the ranks of C and B is obtained similarly.

In the theory of determinants it is shown that the symbol for the product of two determinants may be computed as the row by column product of the symbols for its factors. In our present terminology this result may be stated as

Theorem 7. The determinant of a product of two square matrices is the product of the determinants of the factors, that is,

(24)  |AB| = |A| · |B| .

EXERCISES

1. Express the following matrices as products of elementary transformation matrices.

2. Find the elementary transformation matrices corresponding to the elementary row transformations used in Ex. 2 of Section 2.6, and carry the matrices of that exercise into the form (2.30) by matrix multiplication.

The usual proof in determinant theory of Theorem 7 is quite complicated, and it is interesting to note that it is possible to derive the theorem as a
simple consequence of our theorems, which were obtained independently and which we should have wished to derive even had Theorem 7 been assumed. We shall, therefore, give such a derivation.

We thus let A and B be n-rowed square matrices and see that Theorem 6 states that if |A| = 0, then the rank of AB is less than n and |AB| = 0, so that (24) holds when A is singular. From our definitions we see that if E is an n-rowed elementary transformation matrix of type 1, 2, or 3, then |E| = −1, 1, or a ≠ 0, respectively; moreover, if G is any n-rowed square matrix and G_0 = EG, then our lemmas on the effect of elementary row transformations on determinants imply that |G_0| = |E| · |G|. Now let A be nonsingular, so that, by Lemma 2, A = P_1 · · · P_s, where the P_i are elementary transformation matrices. We apply the result above first to A to obtain |A| = |P_1| · · · |P_s| and then to AB to obtain |AB| = |P_1| · · · |P_s| · |B| = |A| · |B|, as desired.

5. Nonsingular matrices. An n-rowed square matrix A is said to have an inverse if there exists a matrix B such that

(25)  AB = BA = I ,

where I is the n-rowed identity matrix. If it exists, B is an n-rowed square matrix which we shall designate by A^(−1), and we thus write AA^(−1) = A^(−1)A = I. We have

Theorem 8. A square matrix A has an inverse if and only if A is nonsingular.

For if (25) holds, we apply Theorem 7 to obtain |A| · |B| = |I| = 1, so that |A| ≠ 0. The converse may be shown to follow from (21), (22), (23), and Lemma 2, but we shall prove instead the result that, if |A| ≠ 0, then A^(−1) is uniquely determined as the matrix

(26)  A^(−1) = |A|^(−1) adj A ,

where adj A is our symbol for the adjoint matrix defined in (2.22). This formula follows by multiplication by the scalar |A|^(−1) from the matrix equation

(27)  A (adj A) = (adj A) A = |A| · I ,

and we observe that (27) is the interpretation of (2.20) and (2.21) in terms of matrix products.
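Both Theorem 7 and the inverse formula (26) can be checked numerically. The sketch below is my own illustration with arbitrary sample matrices, using exact rational arithmetic so that (26) is verified without rounding.

```python
# Theorem 7, |AB| = |A||B|, and formula (26), A^{-1} = |A|^{-1} adj A.
from fractions import Fraction

def minor(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adj(A):
    """Adjoint (22): cofactor of the (j, i) element in position (i, j)."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
B = [[1, 0, 2], [0, 1, 1], [3, 0, 1]]
assert det(matmul(A, B)) == det(A) * det(B)          # Theorem 7

d = det(A)
A_inv = [[Fraction(x, d) for x in row] for row in adj(A)]   # formula (26)
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
assert matmul(A, A_inv) == I and matmul(A_inv, A) == I
print(d)
```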
We now prove A^(−1) unique by showing that if either AB = I or BA = I, then B is necessarily the matrix A^(−1) of (26). This is clear since in either case |A| ≠ 0, the matrix A^(−1) of (26) exists, and if AB = I then

B = IB = (A^(−1)A)B = A^(−1)(AB) = A^(−1)I = A^(−1) ,

and similarly when BA = I.

We note also that, if A and B are nonsingular,

(28)  (AB)^(−1) = B^(−1)A^(−1) .

For (AB)(B^(−1)A^(−1)) = A(BB^(−1))A^(−1) = AA^(−1) = I. Finally, if A is nonsingular,

(29)  (A')^(−1) = (A^(−1))' .

For I = I' = (AA^(−1))' = (A^(−1))'A', so that (A^(−1))' is the inverse of A'.

A linear mapping (1) with m = n has an inverse mapping (2) if and only if the product (3) is the identical mapping, that is, C = AB is the identity matrix. But this is possible if and only if |A| ≠ 0, that is, if (1) is what we called a nonsingular linear mapping in Section 1.9. We shall use this concept later in studying the equivalence of forms.

EXERCISES

1. Show that if A is an n-rowed square matrix of rank n − 1, then adj A has rank 1. Hint: By (27) we have A(adj A) = 0, so that PAQ(Q^(−1) adj A) = 0 for nonsingular P and Q with PAQ in the form (2.30). Then the first n − 1 rows of Q^(−1) adj A must be zero, and adj A has rank at most 1.

2. Use the result above and (28) to prove that adj (adj A) = |A|^(n−2) A if n > 2 and |A| ≠ 0, and that adj (adj A) = A if n = 2 and |A| ≠ 0.

3. Use Ex. 1 and (27) to show that |adj A| = |A|^(n−1).

4. Compute the inverses of the following matrices by the use of (26).

5. Let A be the matrix of (d) above and B be the matrix of (e). Compute C = B^(−1)A^(−1) by (28) and verify that (AB)C = I by direct multiplication.
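The rules (28) and (29) can be verified on sample matrices. The sketch below is my own illustration (arbitrary nonsingular matrices, exact rational arithmetic, with the inverse computed from the adjoint formula (26)):

```python
# (28): (AB)^{-1} = B^{-1} A^{-1}, and (29): (A')^{-1} = (A^{-1})'.
from fractions import Fraction

def minor(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def inverse(A):
    """Inverse via (26): each cofactor of A' divided by |A|."""
    d = Fraction(det(A))
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) / d for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 5]]
B = [[2, 1], [1, 1]]
assert inverse(matmul(A, B)) == matmul(inverse(B), inverse(A))   # (28)
assert inverse(transpose(A)) == transpose(inverse(A))            # (29)
print("verified")
```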
7. Equivalence of rectangular matrices. Our considerations thus far have been devised as relatively simple steps toward a goal which we may now attain. We first make the

DEFINITION. Two m by n matrices A and B are called equivalent if there exist nonsingular matrices P and Q such that

(30)  PAQ = B .

Observe that P and Q are necessarily square matrices of m and n rows, respectively. By Lemma 2 both P and Q are products of elementary transformation matrices, and therefore A and B are equivalent if and only if A and B are rationally equivalent. The reader should notice that the definition of rational equivalence is to be regarded here as simply another form of the above definition of equivalence and that, while the previous definition is more useful for proofs, that above is the one which has always been given in previous expositions of matrix theory. We may now apply Lemma 1 and have the principal result of the present chapter,

Theorem 9. Two m by n matrices are equivalent if and only if they have the same rank.

We emphasize in closing that, if A and B are equivalent, the proof of Theorem 2.2 shows that the elements of P and Q in (30) may be taken to be rational functions, with rational coefficients, of the elements of A and B.
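The matrices P and Q of (30) can be produced constructively by accumulating the elementary transformations used in the reduction. The following sketch is my own illustration (function name and sample matrix are not from the text); it applies each row transformation to a copy of I_m and each column transformation to a copy of I_n, so that PAQ comes out in the canonical form (2.30).

```python
# Accumulate P (row transformations) and Q (column transformations)
# while reducing A, so that P A Q has the form (2.30).
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def reduce_with_PQ(A):
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    P = [[Fraction(i == j) for j in range(m)] for i in range(m)]
    Q = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    r = 0
    while r < min(m, n):
        piv = next(((i, j) for i in range(r, m) for j in range(r, n)
                    if A[i][j] != 0), None)
        if piv is None:
            break
        i, j = piv
        A[r], A[i] = A[i], A[r]; P[r], P[i] = P[i], P[r]      # type 1 row
        for M in (A, Q):                                      # type 1 column
            for row in M:
                row[r], row[j] = row[j], row[r]
        c = A[r][r]                                           # type 3 row
        A[r] = [x / c for x in A[r]]; P[r] = [x / c for x in P[r]]
        for i in range(m):                                    # type 2 rows
            if i != r and A[i][r] != 0:
                c = A[i][r]
                A[i] = [a - c * b for a, b in zip(A[i], A[r])]
                P[i] = [a - c * b for a, b in zip(P[i], P[r])]
        for j in range(r + 1, n):                             # type 2 columns
            c = A[r][j]
            if c != 0:
                for M in (A, Q):
                    for row in M:
                        row[j] -= c * row[r]
        r += 1
    return P, Q, r

A0 = [[2, 4, 2], [1, 2, 3]]
P, Q, r = reduce_with_PQ(A0)
print(r)
```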
EXERCISES

1. Compute matrices P and Q for each of the matrices A of Ex. 1 of Section 2.6 such that PAQ has the form (2.30). Hint: If A is m by n, we may obtain P by applying those elementary row transformations used in that exercise to I_m, and similarly for Q. (The details of an illustrative instance of this method are given in the example at the end of Section 8.)

2. Show that the product AB of any three-rowed square matrices A and B of rank 2 is not zero. Hint: There exist nonsingular matrices P and Q such that A_0 = PAQ has the form (2.30) for r = 2. Then, if AB = 0, we have A_0 B_0 = 0, where B_0 = Q^(−1)B has the same rank as B and may be shown to have two rows with elements all zero.

3. Compute the ranks of A, B, and AB for the following matrices. Hint: Carry A into a simpler matrix A_0 = PA by row transformations alone and B into B_0 = BQ by column transformations alone, and thus compute the rank of A_0B_0 = P(AB)Q instead of that of AB.

8. Bilinear forms. As we indicated at the close of Chapter I, the problem of determining the conditions for the equivalence of two forms of the same restricted type is customarily modified by the imposition of corresponding restrictions on the linear mappings which are allowed. We now precede the introduction of those restrictions which are made for the case of bilinear forms by the presentation of certain notations which will simplify our discussion.
The one-rowed matrices

(31)  x' = (x_1, . . . , x_m) ,   y' = (y_1, . . . , y_n)

have one-columned matrices x and y as their respective transposes. We let A be the m by n matrix of the system of equations (1) and see that this system may be expressed as either of the matrix equations

(32)  x = Ay ,   x' = y'A' .

We have called (1) a nonsingular linear mapping if m = n and A is nonsingular. But then the solution of (1) for y_1, . . . , y_n as linear forms in x_1, . . . , x_n is the solution

(33)  y = A^(−1)x ,   y' = x'(A')^(−1) = x'(A^(−1))'

of (32) for y in terms of x (or of y' in terms of x').

We shall again consider m by n matrices A and variables x_i and y_j and shall now introduce the notation

(34)  x = P'u

for a nonsingular linear mapping carrying the x_i to new variables u_k, so that the transpose P of P' is a nonsingular m-rowed square matrix and u' = (u_1, . . . , u_m). Similarly, we write

(35)  y = Qv

for a nonsingular n-rowed square matrix Q and v' = (v_1, . . . , v_n).

We now return to the study of bilinear forms. A bilinear form f = Σ x_i a_ij y_j in x_1, . . . , x_m and y_1, . . . , y_n, for i = 1, . . . , m and j = 1, . . . , n, is a scalar which may be regarded as a one-rowed square matrix and is then the matrix product

(36)  f = x'Ay .

Here x' and y are given by (31), and we call the m by n matrix A the matrix of f and the rank of A the rank of f. Also let g = u'Bv be a bilinear form in u_1, . . . , u_m and v_1, . . . , v_n with m by n matrix B. Then we shall say that f and g are equivalent if there exist nonsingular linear mappings (34) and (35) such that the matrix of the form in u_1, . . . , u_m and v_1, . . . , v_n into which f is carried by these mappings is B. But if (34) and (35) hold, then x' = u'P and

f = (u'P)A(Qv) = u'(PAQ)v ,

so that B = PAQ, and two bilinear forms f and g are equivalent if and only if their matrices are equivalent. By Theorem 9 we see that two bilinear forms are equivalent if and only if they have the same rank. It follows also that every bilinear form of rank r is equivalent to the form

(37)  x_1y_1 + . . . + x_ry_r .

These results complete the study of the equivalence of bilinear forms.

ILLUSTRATIVE EXAMPLE

We shall find nonsingular linear mappings (34) and (35) which carry a given bilinear form f in x_1, x_2, x_3 and y_1, y_2, y_3 into a form of the type (37). We begin with the matrix A of f and apply elementary transformations to
We first reduce the matrix A of f by elementary row transformations. We interchange the first and second rows of A, add 6 times the new first row to the third row, multiply the second row by −1, and add twice the new first row to the new second row; we then add the second row to the third row and obtain

PA = [[1, 0, −5], [0, 1, 7], [0, 0, 0]] ,

which evidently has rank 2. The matrix P is obtained by performing the transformations above on the three-rowed identity matrix. We continue and carry PA into a PAQ of the form (2.30) for r = 2 by adding 5 times the first column and −7 times the second column of PA to its third column, so that

Q = [[1, 0, 5], [0, 1, −7], [0, 0, 1]] .

The corresponding linear mappings (34) and (35) are given by x = P'u and by y1 = v1 + 5v3, y2 = v2 − 7v3, y3 = v3, and we verify by direct substitution that these mappings carry f into u1v1 + u2v2, as desired.
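The reduction in the example above is entirely mechanical, and can be sketched in code. The following Python sketch (the function names and the sample matrix are ours, not the text's; exact rational arithmetic via `fractions.Fraction`) tracks the row transformations in P and the column transformations in Q so that PAQ has the canonical form diag {Ir, 0} of (2.30):

```python
from fractions import Fraction

def matmul(X, Y):
    # plain matrix product, used below to verify PAQ
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def reduce_bilinear(A):
    """Find nonsingular P, Q over the rationals with PAQ = diag{1,...,1,0,...,0}."""
    m, n = len(A), len(A[0])
    A = [[Fraction(x) for x in row] for row in A]
    P = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    Q = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    r = 0
    while r < min(m, n):
        # locate a nonzero pivot in the untouched lower-right block
        pos = next(((i, j) for i in range(r, m) for j in range(r, n)
                    if A[i][j] != 0), None)
        if pos is None:
            break
        i0, j0 = pos
        A[r], A[i0] = A[i0], A[r]; P[r], P[i0] = P[i0], P[r]   # row swap
        for M in (A, Q):                                       # column swap
            for row in M:
                row[r], row[j0] = row[j0], row[r]
        d = A[r][r]
        A[r] = [x / d for x in A[r]]; P[r] = [x / d for x in P[r]]
        for i in range(m):                                     # clear column r
            if i != r and A[i][r] != 0:
                f = A[i][r]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
                P[i] = [a - f * b for a, b in zip(P[i], P[r])]
        for j in range(n):                                     # clear row r
            if j != r and A[r][j] != 0:
                f = A[r][j]
                for M in (A, Q):
                    for row in M:
                        row[j] -= f * row[r]
        r += 1
    return P, Q, r

A0 = [[2, 3, 1], [1, 0, 2]]
P, Q, r = reduce_bilinear(A0)
print(r, matmul(matmul(P, A0), Q))   # PAQ = [[1, 0, 0], [0, 1, 0]], rank 2
```

Applied to the 2 by 3 sample shown, the routine reports rank 2 and the canonical matrix with two ones on the diagonal, exactly the matrix of the form (37) for r = 2.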
EXERCISES

1. Use the method above to find nonsingular linear mappings which carry the following bilinear forms into forms of the type (37).
a) 2x1y1 + 3x1y2 − x2y1 − 2x2y2
b) x1y1 − 4x1y2 + x1y3 − x2y1 + x2y2
c) 2x1y1 − x1y3 + 2x3y4 + 4x4y1
d) 2x1y2 + x2y1 − 3x3y3 + x4y2 + x4y3 + 5x4y4

2. Use the method above to find a nonsingular linear mapping on x1, x2, x3 such that it and the identity mapping on y1, y2, y3 carry the following forms into forms of the type (37).
a) x1y1 + 2x1y2 + x2y2 + 3x3y1 + x3y3
b) 2x1y2 + x2y1 + x2y3 + x3y2 + x3y3

3. Find a nonsingular linear mapping on y1, y2, y3 such that it and the identical mapping on x1, x2, x3 carry the forms of Ex. 2 into forms of the type (37). Hint: The matrices A of the forms of Ex. 2 are nonsingular, so that the corresponding matrices P have the property PA = I and P = A⁻¹; similarly, AQ = I has the unique solution Q = A⁻¹.

4. Use elementary row transformations as above to compute the inverses of the matrices of Section 5.

5. Use elementary column transformations to compute the inverses of the following matrices.
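The inversion method of Exs. 4 and 5 can be sketched as follows (a Python sketch with exact rational arithmetic; the function name and the sample matrix are ours). Row transformations carrying A to I, performed simultaneously on the identity matrix, produce A⁻¹:

```python
from fractions import Fraction

def inverse_by_rows(A):
    """Invert a nonsingular matrix by elementary row transformations on [A | I]."""
    n = len(A)
    # augment A with the n-rowed identity matrix
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)   # assumes A nonsingular
        M[c], M[p] = M[p], M[c]                            # type 1: interchange rows
        M[c] = [x / M[c][c] for x in M[c]]                 # type 3: scale the pivot row
        for i in range(n):                                 # type 2: clear the column
            if i != c and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

A = [[2, 1], [5, 3]]
print(inverse_by_rows(A))   # A^(-1) = [[3, -1], [-5, 2]]
```

The three types of elementary row transformation used here are exactly those of the text; performing them on I records the product of the corresponding elementary matrices.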
9. Congruence of square matrices. There are some important problems regarding special types of bilinear forms, as well as the theory of equivalence of quadratic forms, which arise when we restrict the linear mappings we use. We let m = n, so that the matrices A of forms f = x'Ay are square matrices, and call (34) and (35) cogredient mappings if Q = P'. Then f and g are clearly equivalent under cogredient mappings if and only if

(38) B = PAP' .

We shall call A and B congruent if there exists a nonsingular matrix P satisfying (38). A square matrix A is called symmetric if A' = A, and skew if A' = −A. If B = PAP' is congruent to A, then B' = PA'P'; hence B is symmetric if and only if A is symmetric, and B is skew if and only if A is skew. We shall not study the complicated question as to the conditions that two arbitrary square matrices be congruent but shall restrict our attention to these two special cases.

Before passing to this study we observe that Lemma 2 and Section 7 imply that A and B are congruent if and only if A may be carried into B by a sequence of operations, each of which consists of an elementary row transformation followed by the corresponding column transformation. For B = PAP', where P = Ps · · · P1 for elementary transformation matrices Pi, and thus B = Ps( · · · (P1AP1') · · · )Ps'. We shall speak of such operations on a matrix A as cogredient elementary transformations of the three types and shall use them in our study of the congruence of matrices.

10. Skew matrices and skew bilinear forms. If A = (aij) is a skew matrix, then aij = −aji for all i and j, and consequently every diagonal element of A is zero. If A is a skew matrix, the corresponding bilinear form x'Ay is a skew bilinear form. We use this result in the proof of

Theorem 10. Two n-rowed skew matrices are congruent if and only if they have the same rank. Moreover, the rank r of a skew matrix is an even integer 2t, and every skew matrix is thus congruent to a matrix

(39) diag {E, . . . , E, 0, . . . , 0} , E = [[0, 1], [−1, 0]] ,

with t two-rowed blocks E.

For either A = 0, so that PAP' = 0 for every P and our result is trivial, or some aij ≠ 0. We may interchange the ith row and first row, the jth row and second row, and thus also the corresponding columns, by cogredient elementary transformations of type 1. We thus obtain a skew matrix H = (hij) congruent to A and with h12 = −h21 ≠ 0. We multiply the first row and column of H by h12⁻¹ and obtain a skew matrix C = (cij) congruent to A and with c12 = −c21 = 1, c11 = c22 = 0.
We now apply a sequence of cogredient elementary transformations of type 2 in which we add cj1 times the second row of C to its jth row, as well as −cj2 times the first row of C to its jth row, for j = 3, . . . , n, and thus obtain the skew matrix

(40) A0 = [[E, 0], [0, A1]] , E = [[0, 1], [−1, 0]] .

The matrix A0 is congruent to A, and A1 is necessarily a skew matrix of n − 2 rows. Moreover, any cogredient elementary transformation on the last n − 2 rows and columns of A0 induces a corresponding transformation on A1. Hence, after a finite number of such steps, we may replace A by a congruent skew matrix

(41) G = diag {E1, . . . , Et, 0, . . . , 0}

with each Ek = E. Then G and A have rank 2t. If B also is a skew matrix of rank 2t, then B is congruent to (41) and to A; that is, both A and B are congruent to (39). Thus two skew matrices are congruent if and only if they have the same rank. Moreover, if f is a skew bilinear form of rank 2t, it is equivalent under cogredient nonsingular linear mappings to

(x1y2 − x2y1) + . . . + (x_{2t−1} y_{2t} − x_{2t} y_{2t−1}) ,

and two such forms are equivalent under cogredient nonsingular linear mappings if and only if they have the same rank.

EXERCISES

Use a method analogous to that of the exercises of Section 8 to find cogredient linear mappings carrying skew bilinear forms such as the following to forms of the type above.
a) x1y2 − x2y1 + 3x1y3 − 3x3y1
b) 2x1y2 − 2x2y1 + x2y3 − x3y2 + 8x2y4 − 8x4y2
c) x1y4 − x4y1 + x2y3 − x3y2
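Theorem 10 can be checked numerically. A minimal sketch (Python with exact rational arithmetic; the sample matrices are ours, not the text's) verifies that a skew matrix has zero diagonal and even rank:

```python
from fractions import Fraction

def rank(M):
    """Rank over the rationals, by Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        p = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# a 4-rowed skew matrix: A' = -A, so every diagonal element is zero
A = [[ 0,  2, -1, 3],
     [-2,  0,  4, 0],
     [ 1, -4,  0, 5],
     [-3,  0, -5, 0]]
assert all(A[i][j] == -A[j][i] for i in range(4) for j in range(4))
r = rank(A)
print(r, r % 2 == 0)   # the rank is the even integer 4
```

A skew matrix of odd size can never be nonsingular, since its rank 2t is even; the degenerate example diag {E, 0} of rank 2 passes the same parity check.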
11. Symmetric matrices and quadratic forms. The theory of symmetric matrices is considerably more extensive and complicated than that of skew matrices, and we shall obtain only some of its most elementary results. Our principal conclusion may be stated as

Theorem 11. Every symmetric matrix A is congruent to a diagonal matrix of the same rank as A.

We may evidently assume that A = A' ≠ 0, and we shall prove first that A is congruent to a symmetric matrix H = (hij) with some diagonal element hii ≠ 0. This is true with H = A if some diagonal element of A is not zero. Otherwise there is some aij ≠ 0 with aii = ajj = 0 and aji = aij, and we obtain H as the result of adding the jth row of A to its ith row and the jth column of A to its ith column, so that hii = 2aij ≠ 0, as desired. We next permute the rows and corresponding columns of H to obtain a congruent symmetric matrix C with c11 = hii ≠ 0. Then we add −c11⁻¹ ck1 times the first row of C to its kth row for k = 2, . . . , n, follow with the corresponding column transformations, and obtain a symmetric matrix

(42) [[c11, 0], [0, A1]] ,

in which A1 is a symmetric matrix with n − 1 rows. As in the proof of Theorem 10 we carry out a finite number of such steps, and it is clear that we ultimately obtain a diagonal matrix congruent to A. It is a matrix equivalent to A and must have the same rank.

The result above may be applied to obtain a corresponding result on symmetric bilinear forms f = x'Ay of rank r defined by a symmetric matrix A of rank r. Theorem 11 then states that f is equivalent under cogredient transformations to a form

(43) a1x1y1 + . . . + arxryr .

Theorem 11 may also be applied to quadratic forms f = x'Ax. As we saw in Section 1.9, a quadratic form f = Σ aij xi xj in x1, . . . , xn may be regarded as the one-rowed square matrix product

(44) f = x'Ax

for a symmetric matrix A. We call the uniquely determined symmetric matrix A the matrix of f and its rank the rank of f. Now in Section 1.9 we defined the equivalence of any quadratic form f and any second quadratic form g. For if our nonsingular linear mapping is represented by the matrix equation x = P'u, then x' = u'P is a consequence, and f = u'(PAP')u. We may then use the notations developed above and see that f and g are equivalent if and only if A and B are congruent. Thus Theorem 11 states that every quadratic form of rank r is equivalent to

(45) a1x1² + . . . + arxr² .

The form (45) is of course to be regarded as a form in x1, . . . , xn with matrix diag {a1, . . . , ar, 0, . . . , 0}, but it may be regarded also as a form in x1, . . . , xr with nonsingular matrix diag {a1, . . . , ar}. We call a quadratic form f = x'Ax in x1, . . . , xn with nonsingular matrix A a nonsingular form, and we have shown that every quadratic form of rank r in n variables may be written as a nonsingular form in r variables whose matrix is a diagonal matrix.

EXERCISES

1. What is the symmetric matrix A of each of the following quadratic forms?
a) 3x1² + 2x1x2 − 3x2² + x3²
b) 2x1x2 − 8x3x4 + x4²
c) x1² − 3x2² + 2x1x2 − 4x2x3

2. Find a nonsingular matrix P with rational elements for each of the following matrices A such that PAP' is a diagonal matrix. Hint: Determine P by writing A = IAI' and by applying cogredient elementary transformations.

3. Write the symmetric bilinear forms whose matrices are those of (a), (b), (c), and (d) of Ex. 2, and use the cogredient linear mappings obtained from that exercise to obtain equivalent forms with diagonal matrices.

4. Apply the process of Ex. 3 to (e), (f), (g), and (h) of Ex. 2.

5. Show that the forms f = x1² + x2² and g = x1² − x2² are not equivalent under linear mappings with real coefficients. Hint: Consider the possible signs of the values of f and of g.

6. Which of the matrices of Ex. 2 are congruent if we allow any complex numbers as elements of P?
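The proof of Theorem 11 is constructive, and the cogredient reduction it describes can be sketched in code (Python with exact rational arithmetic; the function names and the sample matrix are ours, not the text's):

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def congruent_diagonal(A):
    """Cogredient elementary transformations carrying symmetric A to diagonal PAP'."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

    def add_row_to(i, j, f):
        # cogredient type-2 step: row_i += f*row_j, then col_i += f*col_j
        A[i] = [a + f * b for a, b in zip(A[i], A[j])]
        P[i] = [a + f * b for a, b in zip(P[i], P[j])]
        for row in A:
            row[i] += f * row[j]

    def swap(i, j):
        # cogredient type-1 step: interchange rows i, j and columns i, j
        A[i], A[j] = A[j], A[i]; P[i], P[j] = P[j], P[i]
        for row in A:
            row[i], row[j] = row[j], row[i]

    for r in range(n):
        if A[r][r] == 0:
            k = next((i for i in range(r + 1, n) if A[i][i] != 0), None)
            if k is not None:
                swap(r, k)
            else:
                k = next((j for j in range(r + 1, n) if A[r][j] != 0), None)
                if k is None:
                    continue                 # the remaining block is zero here
                add_row_to(r, k, Fraction(1))  # diagonal of the block is zero,
                                               # so A[r][r] becomes 2*A[r][k] != 0
        for i in range(r + 1, n):
            if A[i][r] != 0:
                add_row_to(i, r, -A[i][r] / A[r][r])
    return P, A

A0 = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
P, D = congruent_diagonal(A0)
print([D[i][i] for i in range(3)])   # diagonal entries 2, -1/2, -12
```

Since each step pairs a row transformation with the corresponding column transformation, the invariant D = P·A0·P' holds throughout, which is exactly the congruence (38).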
12. Nonmodular fields. In our discussion of the congruence and the equivalence of two matrices A and B, the elements of the transformation matrices P and Q have thus far always been rational functions, with rational coefficients, of the elements of A and B. While we have mentioned this fact before, it has not, until now, been necessary to emphasize it. But the reader will observe that we have not, as yet, given conditions that two symmetric matrices be congruent, and our reason is that it is not possible to do so without some statement as to the nature of the quantities which we allow as elements of the transformation matrices P. We shall thus introduce an algebraic concept which is one of the most fundamental concepts of our subject, the concept of a field.

A field of complex numbers is a set F of at least two distinct complex numbers such that a + b, a − b, ab, and a/c are in F for every a, b, c of F such that c ≠ 0. Examples of such fields are the set of all rational numbers, the set of all real numbers, the set of all complex numbers, and the set of all rational functions, with rational coefficients, of any fixed complex number c. Now it is true that even if one were interested only in the study of matrices whose elements are ordinary complex numbers, there would be a stage of this study where one would be forced to consider also matrices whose elements are rational functions of x. If K is any field of complex numbers, the set F = K(x) of all rational functions in x with coefficients in K is a mathematical system having properties with respect to rational operations just like those of K. Thus we shall find it desirable to define the concept of a field in such a general way as to include systems like the field F = K(x) just defined. We shall do so and shall assume henceforth that what we called constants in Chapter I, and scalars thereafter, are elements of a fixed field F.

The fields we have already mentioned all contain the complex number unity and are closed with respect to rational operations. But it is clearly possible to obtain every rational number by the application to unity of a finite number of rational operations. Thus all our fields contain the field of all rational numbers and are what are called nonmodular fields; the modular fields will be defined in Chapter VI. We now make the following brief

DEFINITION. A set of elements F is said to form a nonmodular field if F contains the set of all rational numbers and is closed with respect to rational operations such that the following properties hold for every a, b, c of F:

I. a + b = b + a , (a + b) + c = a + (b + c) ;
II. a(b + c) = ab + ac ;
III. ab = ba , (ab)c = a(bc) .

The difference a − b is always defined in elementary algebra to be a solution x in F of the equation

(46) x + b = a ,

and the quotient a/b to be a solution y in F of the equation

(47) yb = a .

Thus our hypothesis that F is closed with respect to rational operations should be interpreted to mean that any two elements a and b of F determine a unique sum a + b and a unique product ab in F, such that (46) has a solution in F and (47) has a solution in F if b ≠ 0. In the author's Modern Higher Algebra it is shown that the solutions of (46) and (47) are unique, and that there exists a unique solution x = −b of x + b = 0 as well as a unique solution y = b⁻¹ of yb = 1 for b ≠ 0. Then the solutions of (46) and (47) are uniquely determined by x = a + (−b) and y = ab⁻¹. In fact, it may be concluded that the rational numbers 0 and 1 have the properties

(48) a + 0 = a·1 = a , a·0 = 0

for every a of F. We also see that the rational number −1 has the property (−1) + 1 = 0, so that (−1)a + a = [(−1) + 1]a = 0·a = 0, and thus (−1)a = −a. Note that in view of III the property II implies that (b + c)a = ba + ca, that the existence of a solution of (46) is equivalent to that of b + x = a, and that of (47) to that of by = a. But there are mathematical systems called quasi-fields in which the law ab = ba does not hold, and for these systems the properties just mentioned must be made as additional assumptions.

EXERCISES

1. Let a and b range over the set of all rational numbers, and let F consist of all matrices [[a, 2b], [b, a]]. Prove that F is a field. (Use the definition of addition of matrices in (52).)

2. Show that the set of all matrices [[a + bi, c + di], [−c + di, a − bi]], where a, b, c, and d range over all rational numbers and i² = −1, forms a quasi-field which is not a field.
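The rules (46), (47), and (48) are exactly the arithmetic of the rational field, which Python models exactly with `fractions.Fraction`. A quick check with sample values of our own choosing:

```python
from fractions import Fraction

# two elements of the rational field F
a, b = Fraction(3, 4), Fraction(-2, 5)

x = a - b          # the unique solution of (46): x + b = a
y = a / b          # the unique solution of (47): y*b = a, since b != 0
assert x + b == a and y * b == a

# the distinguished elements 0 and 1 of (48)
assert a + 0 == a * 1 == a and a * 0 == 0

print(x, y)        # 23/20 -15/8
```

Closure under the four rational operations is automatic here: every sum, difference, product, and quotient of two `Fraction` values is again a `Fraction`.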
13. Summary of results. The theory completed thus far on polynomials with constant coefficients and matrices with scalar elements may now be clarified by restating our principal results in terms of the concept of a field.

We observe first that if f(x) and g(x) are nonzero polynomials in x with coefficients in a field F, then they have a greatest common divisor d(x) with coefficients in F, and d(x) = a(x)f(x) + b(x)g(x) for polynomials a(x), b(x) with coefficients in F.

We next assume that A and B are two m by n matrices with elements in F and say that A and B are equivalent in F if there exist nonsingular matrices P and Q with elements in F such that PAQ = B. Then A and B are equivalent in F if and only if they have the same rank. Since the rank of a matrix (and of a corresponding bilinear form) is defined without reference to the nature of the field F containing its elements, the particular F chosen to contain the elements of the matrices is relatively unimportant for the theory. Correspondingly, two bilinear forms with coefficients in F are equivalent in F if and only if they have the same rank.

If A and B are square matrices with elements in a field F, then we call A and B congruent in F if there exists a nonsingular matrix P with elements in F such that PAP' = B. When A' = −A the matrix A is skew, and every matrix B congruent in F to A is skew. Here the precise nature of F is again unimportant, and two skew matrices with elements in F are congruent in F if and only if they have the same rank.

Let A = A' be a symmetric matrix with elements in F, so that any matrix B congruent in F to A also is a symmetric matrix with elements in F. We say, correspondingly, that the bilinear forms x'Ay and x'By are equivalent in F under cogredient transformations if A and B are congruent in F; we leave to the reader the explicit formulation of the definitions of equivalence in F of two linear forms, of two bilinear forms, and of two bilinear forms under cogredient mappings, where all the forms considered have coefficients in F. We have shown that every symmetric matrix of rank r and elements in F is congruent in F to a diagonal matrix diag {a1, . . . , ar, 0, . . . , 0} with ai ≠ 0 in F, and that correspondingly every quadratic form x'Ax of rank r is equivalent in F to a1x1² + . . . + arxr². Moreover, two quadratic forms x'Ax and x'Bx are equivalent in F if and only if A and B are congruent in F. The problem of finding necessary and sufficient conditions for two quadratic forms with coefficients in a field F to be equivalent in F is one involving the nature of F in a fundamental way, and no simple solution of this problem exists for F an arbitrary field. In fact, we can obtain results only after rather complete specialization of F, and these results may be seen to vary as we change our assumptions on F.
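The statement that d(x) = a(x)f(x) + b(x)g(x) is effective: the extended Euclidean algorithm produces d, a, and b. A sketch over the rational field follows (the helper names are ours; polynomials are coefficient lists written lowest degree first):

```python
from fractions import Fraction

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def pmul(f, g):
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return trim(out)

def psub(f, g):
    out = [Fraction(0)] * max(len(f), len(g))
    for i, a in enumerate(f):
        out[i] += a
    for i, b in enumerate(g):
        out[i] -= b
    return trim(out)

def pdivmod(f, g):
    q = [Fraction(0)] * max(1, len(f) - len(g) + 1)
    r = [Fraction(c) for c in f]
    g = trim([Fraction(c) for c in g])
    while True:
        r = trim(r)
        if len(r) < len(g) or r == [0]:
            return trim(q), r
        c = r[-1] / g[-1]
        d = len(r) - len(g)
        q[d] += c
        r = psub(r, pmul([Fraction(0)] * d + [c], g))

def ext_gcd(f, g):
    """Return monic d and a, b with d = a*f + b*g."""
    r0, r1 = trim([Fraction(c) for c in f]), trim([Fraction(c) for c in g])
    a0, a1, b0, b1 = [Fraction(1)], [Fraction(0)], [Fraction(0)], [Fraction(1)]
    while r1 != [0]:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        a0, a1 = a1, psub(a0, pmul(q, a1))
        b0, b1 = b1, psub(b0, pmul(q, b1))
    lc = r0[-1]
    return [c / lc for c in r0], [c / lc for c in a0], [c / lc for c in b0]

f = [-1, 0, 1]    # x^2 - 1
g = [-2, 1, 1]    # x^2 + x - 2
d, a, b = ext_gcd(f, g)
print(d)          # coefficients of d(x) = x - 1
```

Here d(x) = x − 1 divides both polynomials, and the Bezout identity d = af + bg can be verified by multiplying out af + bg with the same helpers.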
The simplest conditions are those given in

Theorem 12. Let F be a field with the property that for every a of F there exists a quantity b of F such that b² = a. Then two quadratic forms with coefficients in F are equivalent in F if and only if they have the same rank. Hence every such form of rank r is equivalent in F to

(49) x1² + . . . + xr² .

For every A = A' of rank r is congruent in F to B = diag {a1, . . . , ar, 0, . . . , 0} with ai ≠ 0, and ai = bi² for bi in F. Then P = diag {b1⁻¹, . . . , br⁻¹, 1, . . . , 1} is nonsingular, and

(50) PBP' = diag {Ir, 0} ,

so that A is congruent in F to diag {Ir, 0}. If also A0 = A0' has rank r, then A0 is congruent in F to diag {Ir, 0} and hence to A. The converse follows from Theorem 2. We then have the obvious consequences:

COROLLARY I. Two symmetric matrices whose elements are complex numbers are congruent in the field of all complex numbers if and only if they have the same rank.

COROLLARY II. Let F be the field of either Theorem 12 or Corollary I. Then two symmetric matrices with elements in F are congruent in F if and only if they have the same rank.

14. Addition of matrices. There is one other result on symmetric matrices which will be seen to have evident interest when we state it. Its proof involves the computation of the product of two matrices, over an arbitrary field, which have been partitioned into two-rowed square matrices whose elements are rectangular matrices. If these matrices were one-rowed square matrices, we should have the formula

(51) [[A1, A2], [A3, A4]] · [[B1, B2], [B3, B4]] = [[A1B1 + A2B3, A1B2 + A2B4], [A3B1 + A4B3, A3B2 + A4B4]] .

But it is also true that if the partitioning of any A and B is carried out so that the products in (51) have meaning, and if we define the sum of two matrices appropriately, then (51) will still hold. Thus (51) will have major importance as a formula for representing matrix computations.

We now let A = (aij) and B = (bij), where i = 1, . . . , m and j = 1, . . . , n, so that A and B are m by n matrices. Then we define

(52) S = A + B = (sij) , sij = aij + bij (i = 1, . . . , m; j = 1, . . . , n) .
The elements of our matrices are in a field F. We have thus defined addition for any two matrices of the same shape, such that A + B is the matrix whose elements are the sums of correspondingly placed elements in A and B. Since aij + bij = bij + aij, we have the properties

(53) A + B = B + A , (A + B) + C = A + (B + C) ,

where the second equation is clearly a consequence of the corresponding property (aij + bij) + cij = aij + (bij + cij) of the elements of F. If 0 is the m by n zero matrix, then

(54) A + 0 = A , A + (−1·A) = 0 .

Observe, however, that if A and B are n-rowed square matrices, then |A| + |B| and |A + B| are usually not equal; for example, if A and B are equal, then |A| + |B| = 2|A|, while |A + B| = |2A| = 2ⁿ|A|.

Now let A = (aij) and B = (bij) be m by n matrices and C = (cjk) be any n by q matrix. Then

(55) (A + B)C = AC + BC ,

which we call the distributive law for matrix addition and multiplication. Clearly, if D is a matrix such that DA is defined, then similarly we have

(56) D(A + B) = DA + DB .

Let us now apply our definitions to derive (51). We let A = (aij) be an m by n matrix, B = (bjk) be an n by q matrix, and let A1 be an s by t matrix in the partitioning of A. Then (51) has meaning only if B1 has t rows, and we thus assume that B1 is a matrix of t rows and g columns. Our partitioning is now completely determined: necessarily A2 has s rows and n − t columns, A3 has m − s rows and t columns, and A4 has m − s rows and n − t columns, while B2 has t rows and q − g columns, B3 has n − t rows and g columns, and B4 has n − t rows and q − g columns.

The element in the ith row and kth column of AB may clearly be expressed as

(57) Σ_{j=1}^{n} aij bjk = Σ_{j=1}^{t} aij bjk + Σ_{j=t+1}^{n} aij bjk (i = 1, . . . , m; k = 1, . . . , q) .

But this equation is equivalent to the matrix equation

(59) AB = D1E1 + D2E2 ,

where we define

(58) D1 = [[A1], [A3]] , D2 = [[A2], [A4]] , E1 = [B1, B2] , E2 = [B3, B4] .

We may then obtain (51) from (57) by simply using the ranges 1, . . . , s and s + 1, . . . , m for i separately, the ranges 1, . . . , g and g + 1, . . . , q for k separately, as well as 1, . . . , t and t + 1, . . . , n for j. In matrix language, we have used (58) and have computed the result of addition and multiplication of partitioned matrices in (59) to give (51).

We shall now apply the process above to prove the following theorem on the symmetric matrices mentioned above.

Theorem 13. Let A1 and B1 be r-rowed nonsingular symmetric matrices with elements in F, and let

(60) A = [[A1, 0], [0, 0]] , B = [[B1, 0], [0, 0]]

be the corresponding n-rowed symmetric matrices of rank r. Then A and B are congruent in F if and only if A1 and B1 are congruent in F.

For if A1 and B1 are congruent in F, there exists a nonsingular matrix P1 such that P1A1P1' = B1. Then P = diag {P1, I_{n−r}} is nonsingular, and PAP' = B. Conversely, if A and B are congruent in F, we may write PAP' = B for a nonsingular matrix

P = [[P1, P2], [P3, P4]] ,

where P1 is an r-rowed square matrix, and computation by the use of (51) gives

PAP' = [[P1A1P1', P1A1P3'], [P3A1P1', P3A1P3']] = [[B1, 0], [0, 0]] .

Hence B1 = P1A1P1'. But then |B1| = |P1| |A1| |P1'| ≠ 0, so that P1 must be nonsingular, and A1 and B1 are congruent in F.

ORAL EXERCISES

1. Compute A + B for the following pairs of matrices A and B.

2. Verify that (A + B)' = A' + B' for any m by n matrices A and B.

3. Show that every n-rowed square matrix A is expressible uniquely as the sum of a symmetric matrix B and a skew matrix C. Hint: Put A = B + C with B = B' and C = −C', compute A' = B − C, and solve for B and C.
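Both the distributive law (55) and the partitioned-product formula (51) can be checked numerically. A small Python sketch (the helper names and random sample matrices are ours, not the text's):

```python
import random

def madd(X, Y):
    # (52): elementwise sum of two matrices of the same shape
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def block(M, r0, r1, c0, c1):
    # submatrix with rows r0..r1-1 and columns c0..c1-1
    return [row[c0:c1] for row in M[r0:r1]]

random.seed(1)
A = [[random.randint(-5, 5) for _ in range(4)] for _ in range(4)]
B = [[random.randint(-5, 5) for _ in range(4)] for _ in range(4)]
C = [[random.randint(-5, 5) for _ in range(4)] for _ in range(4)]

# the distributive law (55): (A + B)C = AC + BC
print(mmul(madd(A, B), C) == madd(mmul(A, C), mmul(B, C)))  # True

# formula (51): the top-left block of AB equals A1*B1 + A2*B3
A1, A2 = block(A, 0, 2, 0, 2), block(A, 0, 2, 2, 4)
B1, B3 = block(B, 0, 2, 0, 2), block(B, 2, 4, 0, 2)
print(madd(mmul(A1, B1), mmul(A2, B3)) == block(mmul(A, B), 0, 2, 0, 2))  # True
```

The same check works for any partition in which the products have meaning, which is precisely the compatibility condition on the block shapes derived above.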
15. Real quadratic forms. We shall close our study of symmetric matrices, and hence of quadratic forms as well, by considering the case where F is the field of all real numbers. If f and g are quadratic forms with coefficients in F of the same rank r, then f may be written as a nonsingular form f0 in r variables, g may be written as a nonsingular form g0 in r variables, and the original forms f and g are equivalent in F if and only if f0 and g0 are equivalent in F. Let then f = a1x1² + . . . + arxr² have rank r for real ai ≠ 0. We now call the number of positive ai the index t of f and prove

Theorem 14. Two quadratic forms with real coefficients are equivalent in the field of all real numbers if and only if they have both the same rank and the same index.

For proof we observe, first, that by Section 14 there is no loss of generality if we assume that f = x'Ax, where A is a nonsingular diagonal matrix, that is, r = n. Moreover, there is clearly no loss of generality if we take A = diag {d1, . . . , dt, −d_{t+1}, . . . , −dr} for positive di. Then there exist real numbers pi with pi² = di, and if P is the nonsingular matrix diag {p1⁻¹, . . . , pr⁻¹}, then PAP' is the matrix diag {1, . . . , 1, −1, . . . , −1} with t elements 1 and r − t elements −1. Thus f is equivalent in F to

(61) (x1² + . . . + xt²) − (x_{t+1}² + . . . + xr²) .

Now, if g has the same rank and index as f, it is equivalent in F to (61) and hence to f. Conversely, let g have rank r = n and index s, so that g is equivalent in F to

(62) (x1² + . . . + xs²) − (x_{s+1}² + . . . + xn²) .

We propose to show that s = t. There is clearly no loss of generality if we assume that s ≥ t and show that if s > t we arrive at a contradiction. Hence, let s > t. Our hypothesis that the form f defined by (61) and the form g defined by (62) are equivalent in the real field implies that there exist real numbers dij such that if we substitute the linear forms

(63) xi = di1 y1 + . . . + din yn (i = 1, . . . , n)

in f = f(x1, . . . , xn), we obtain as a result

(y1² + . . . + ys²) − (y_{s+1}² + . . . + yn²) .

Put x1 = x2 = . . . = xt = 0 and y_{s+1} = . . . = yn = 0 in (63) and consider the resulting t equations

(64) di1 y1 + . . . + dis ys = 0 (i = 1, . . . , t) .

These are t linear homogeneous equations in s > t unknowns, and there exist real numbers v1, . . . , vs not all zero satisfying these equations. The remaining n − t equations of (63) then determine the values of xj as certain numbers uj for j = t + 1, . . . , n, and we have the result

f(0, . . . , 0, u_{t+1}, . . . , un) = v1² + . . . + vs² > 0 .

But clearly f(0, . . . , 0, u_{t+1}, . . . , un) = −(u_{t+1}² + . . . + un²) ≤ 0, a contradiction.

We have now shown that two quadratic forms with real coefficients and the same rank are equivalent in the field of all complex numbers but that, if their indices are distinct, they are inequivalent in the field of all real numbers. We shall next study in some detail the important special case t = r of our discussion.

A symmetric matrix A and the corresponding quadratic form f = x'Ax are called semidefinite of rank r if A is congruent in F to a matrix [[aIr, 0], [0, 0]] for a ≠ 0 in F, that is, if f is equivalent to a form a(x1² + . . . + xr²). We call A and f definite if r = n, that is, if A is both semidefinite and nonsingular. If F is the field of all real numbers, we may take a = 1 and call A and f positive, or take a = −1 and call A and f negative. Then A and f are negative if and only if −A and −f are positive. Thus we may and shall restrict our attention to positive symmetric matrices and positive quadratic forms without loss of generality.

If f(x1, . . . , xn) is any real quadratic form, we have seen that there exists a nonsingular transformation (63) with real dij such that f = y1² + . . . + yt² − (y_{t+1}² + . . . + yr²). If c1, . . . , cn are any real numbers and we put xi = ci in (63), there exist unique solutions yj = dj of the resulting system of linear equations, and the dj may readily be seen to be all zero if and only if the ci are all zero. Now if t < r, we have f(c1, . . . , cn) = −1 < 0 for the ci determined by y1 = . . . = yt = 0, y_{t+1} = 1, and all other yj = 0. Hence, if f(c1, . . . , cn) ≥ 0 for all real ci, then t = r and f is positive semidefinite. If t = r < n, we put y_{r+1} = 1 and all other yj = 0 and obtain c1, . . . , cn not all zero such that f(c1, . . . , cn) = 0. Hence, if f(c1, . . . , cn) > 0 for all real ci not all zero, the form f is positive definite. Conversely, if f is positive definite, then f = y1² + . . . + yn², so that f(c1, . . . , cn) = d1² + . . . + dn² > 0 for all dj not all zero and hence for all ci not all zero. We have proved

Theorem 15. A real quadratic form f(x1, . . . , xn) is positive semidefinite if and only if f(c1, . . . , cn) ≥ 0 for all real ci, and is positive definite if and only if f(c1, . . . , cn) > 0 for all real ci not all zero.

As a consequence of this result we shall prove

Theorem 16. Every principal submatrix of a positive semidefinite matrix is positive semidefinite; every principal submatrix of a positive definite matrix is positive definite.

For a principal submatrix B of a symmetric matrix A is defined as any m-rowed symmetric submatrix whose rows are the i1th, . . . , imth rows of A and whose columns are the corresponding columns of A. Put x0 = (x_{i1}, . . . , x_{im}), so that g = x0'Bx0 is the quadratic form with B as matrix, and we obtain g from f by putting xj = 0 in f for j ≠ ik. Clearly, if f ≥ 0 for all values of the x_{ik} and for all xj = 0, then g ≥ 0, and hence B is positive semidefinite by Theorem 15. If A is positive definite and B is singular, then g = 0 for values of the x_{ik} not all zero, and hence f = 0 for the xj above all zero and the x_{ik} not all zero, a contradiction; hence B is positive definite.

The converse of Theorem 16 is also true, and we refer the reader to the author's Modern Higher Algebra for its proof. We shall use the result just obtained to prove

Theorem 17. Let A be an m by n matrix of rank r and with real elements. Then AA' is a positive semidefinite real symmetric matrix of rank r.

For we may write

A = P [[Ir, 0], [0, 0]] Q , AA' = P [[Ir, 0], [0, 0]] QQ' [[Ir, 0], [0, 0]]' P' ,

where P and Q are nonsingular matrices of m and n rows, respectively. Then QQ' is a positive definite symmetric matrix, and we partition QQ' so that Q1 is an r-rowed principal submatrix of QQ'. By Theorem 16 the matrix Q1 is positive definite, and AA' = P [[Q1, 0], [0, 0]] P' is congruent to the positive semidefinite matrix [[Q1, 0], [0, 0]] of rank r and hence has the property of our theorem.

EXERCISE

What are the ranks and indices of the real symmetric matrices of Ex. 2, Section 11?
= 1 u . and we have w + = . over F. set .is that vector of whose coordinates are u. . = Q. A subset L of V n is called a linear subspace of V n if bv is in L for every a and b of F. and (5) of Vn to the reader. and we leave (2). . + v) = aw + aw for all a. zero. . . Note that the zero vector. The (2) properties of a field (u F may u be easily seen to imply that + v) +w= . Vn . . and w^ in L. . 1 Then u of F. . u . of Vn We suppose also that the quantities Ci are in a fixed field F and call c the ith coordinate of u. We assume the laws of combination of such sequences of Section 1. + amum . w of Fn The vector which we designate all by (4) O.CHAPTER The (d. +v=v+u a(w (3) a(bu) = (o6)w (a + V)u = + bu . We the verification of the properties 2. Then it is clear that L contains all linear combinations au + (6) u = a in F. of all sequences (1) U = . 6 in F and all vectors w. for . . (4). Linear spaces over a field. w = first . The entire set V n will then be called the ndimensional linear space .8 and call u a point or vector. where (5) u = ( Ci. Linear subspaces. u+(u) u = u . Cn) may be thought of as a geometric ndimensional space. of (5) is the quantity and the second zero is the shall use the properties just noted somewhat later in an abstract definition of the mathematical concept of linear space. v. . aiUi + . every u and v of L. c n the coordinates of u. cn ). . (3). the quantities Ci. + (v + w) aw . IV LINEAR SPACES 1.
3. Linear independence. Our definition of a linear space L which is a subspace of V_n implies that every subspace L contains the zero vector. The space spanned by the zero vector consists of the zero vector alone and may be called the zero space and designated by L = {0}. In what follows we shall restrict our attention to the nonzero subspaces L of V_n and shall indicate, for the time being, when we call L a linear space over F, that L is a linear subspace over F of some V_n over F.

We now make the

DEFINITION. A set of vectors u_1, ..., u_m of V_n over F will be said to be linearly independent in F if it is true that a linear combination a_1 u_1 + ... + a_m u_m = 0 only for a_1 = a_2 = ... = a_m = 0. Otherwise we shall say that u_1, ..., u_m are linearly dependent in F.

Let u_1, ..., u_m be linearly independent in F. Then every vector u of the space

(7)  L = {u_1, ..., u_m}

spanned by u_1, ..., u_m is expressible in the form

(6)  u = a_1 u_1 + ... + a_m u_m   (a_i in F) ,

and there is no other such expression of u. For if u = a_1 u_1 + ... + a_m u_m = b_1 u_1 + ... + b_m u_m, then (a_1 - b_1) u_1 + ... + (a_m - b_m) u_m = 0, so that a_i = b_i, as desired. Conversely, suppose that the expression of every u of L in the form (6) is unique. The zero vector of L is expressible as 0 = 0 u_1 + ... + 0 u_m in at least this one way, and since there is no other such expression, u_1, ..., u_m are linearly independent.

Observe that, if e_j is the vector whose jth coordinate is unity and whose other coordinates are zero, then e_1, ..., e_n are linearly independent in F, and e_1, ..., e_n span V_n; that is, V_n = e_1 F + ... + e_n F.

We observe also that the set of all linear combinations of any finite number of given vectors u_1, ..., u_m of V_n is a subspace L = {u_1, ..., u_m} of V_n. If u_1, ..., u_m are a set of linearly independent vectors (of V_n over F), we shall call u_1, ..., u_m a basis over F of L and indicate this by writing

(8)  L = u_1 F + ... + u_m F .

But we may actually show that every subspace spanned by a finite number of vectors of V_n has a basis in the above sense.
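The defining test above can be carried out mechanically for numerical vectors: u_1, ..., u_m are linearly independent exactly when the m by n matrix whose rows are the vectors has rank m. A minimal sketch (the helper name and the sample vectors are my own, and NumPy is assumed; this is an illustration, not part of the text):

```python
import numpy as np

def linearly_independent(vectors):
    """True when no nontrivial combination of the given rows vanishes."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

# e1, e2, e3 span V_3 and are independent; adding e1 + e2 creates dependence.
e = np.eye(3)
print(linearly_independent([e[0], e[1], e[2]]))         # True
print(linearly_independent([e[0], e[1], e[0] + e[1]]))  # False
```

The rank comparison is exactly the uniqueness statement for (6): a rank deficit means some nontrivial combination of the rows is zero.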
Observe first that the definition of linear independence in the case m = 1 is that au = 0 only if a = 0, and thus that u is not the zero vector.

Then let u_1, ..., u_r be linearly independent vectors of V_n and u not zero be another vector. Then either u_1, ..., u_r, u are linearly independent in F or u is in {u_1, ..., u_r}. For suppose that a_1 u_1 + ... + a_r u_r + a_0 u = 0 for a_i not all zero. If a_0 = 0, then a_1 u_1 + ... + a_r u_r = 0 with the a_i not all zero, a contradiction. But then u = (-a_0^{-1} a_1) u_1 + ... + (-a_0^{-1} a_r) u_r is in {u_1, ..., u_r}.

It follows that, if u_1, ..., u_m are any m distinct nonzero vectors, we may choose some largest number r of vectors in this set, 1 <= r <= m, which are linearly independent in F, and we will then have the property that all remaining vectors in the set are linear combinations with coefficients in F of these r. We state this result as

Theorem 1. Let L be spanned by distinct nonzero vectors u_1, ..., u_m. Then L has a basis consisting of certain r of these vectors.

EXERCISES

1. Determine which of the following sets of three vectors form a basis of the subspace they span. Hint: It is easy to see whether some two of the vectors, say u_1 and u_2, are linearly independent. To see if u_1, u_2, u_3 are linearly dependent we write u_3 = x u_1 + y u_2 and solve for x and y.

a) u_1 = (1, 1, 1), u_2 = (2, 1, 3), u_3 = (0, 1, 2)
b) u_1 = (3, 1, 2), u_2 = (1, 2, 3), u_3 = (4, 2, 2)
c) u_1 = (2, 1, 1), u_2 = (1, 1, 3), u_3 = (1, 0, 2)

2. Show that the space L = {u_1, u_2, u_3} spanned by each of the following sets of vectors is V_3. Hint: In every case one of the vectors, say u_1, has first coordinate not zero. Some linear combination of u_1 and u_2 has the form (0, b_2, c_2), and it is easy to show then that L contains e_2 = (0, 1, 0); similarly, L contains e_3 = (0, 0, 1) and e_1 = (1, 0, 0).

a) u_1 = (1, 1, 2), u_2 = (3, 1, 2), u_3 = (0, 1, 2)
b) u_1 = (2, 1, 3), u_2 = (1, 2, 2), u_3 = (1, 1, 1)

3. Determine whether or not the spaces L_1 spanned by the following sets of vectors u_i coincide with the spaces L_2 spanned by the corresponding vectors v_i. Hint: L_1 = {u_1, u_2} coincides with L_2 = {v_1, v_2} if v_1 and v_2 are in L_1 and are linearly independent.

a) u_1 = (1, 1, 0), u_2 = (2, 1, 3); v_1 = (3, 2, 3), v_2 = (1, 0, 3)
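The proof of Theorem 1 is effectively an algorithm: scan the spanning vectors in order and keep each one that is independent of those already kept. A rough numerical sketch (function name and data are mine, NumPy assumed):

```python
import numpy as np

def extract_basis(vectors):
    """Keep each vector that raises the rank; the kept rows form a basis."""
    rank = np.linalg.matrix_rank
    basis = []
    for v in vectors:
        if rank(np.array(basis + [v], dtype=float)) > len(basis):
            basis.append(v)
    return basis

# The second and fourth vectors are redundant combinations of the others.
span = [[1, 1, 0], [2, 2, 0], [0, 1, 1], [1, 2, 1]]
basis = extract_basis(span)
print(len(basis))  # 2
```

Every discarded vector is, by construction, a linear combination of the retained ones, which is exactly the situation described before Theorem 1.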
4. The row and column spaces of a matrix. We shall obtain the principal theorems on the elementary properties of linear spaces by connecting this theory with certain properties of matrices which we have already derived. Let us consider a set of m vectors

(9)  u_i = (a_i1, ..., a_in)   (i = 1, ..., m)

of V_n over F. Then we may regard u_i as being the ith row of the corresponding m by n matrix A = (a_ij), and the space L = {u_1, ..., u_m} as being what we shall call the row space of A. Thus every m by n matrix defines a linear subspace of V_n; conversely, every subspace of V_n spanned by m vectors defines a corresponding m by n matrix.

If P = (p_ki) is any q by m matrix, the product PA is a q by n matrix. The jth coordinate of the vector

(10)  w_k = p_k1 u_1 + ... + p_km u_m

is p_k1 a_1j + ... + p_km a_mj, that is, the element in the kth row and jth column of PA. Hence w_k is that linear combination of the rows of A whose coefficients form the kth row of P. We have now shown that every row of PA is in the row space of A. It follows that the row space of PA is contained in that of A. If P is nonsingular, then the result just derived implies that the row space of A = P^{-1}(PA) is contained in the row space of PA and therefore that these two linear spaces are the same. Thus we have

Lemma 1. Let P be an m-rowed nonsingular matrix. Then the row spaces of PA and A coincide.

In the proof of Theorem 3.6 we used the matrix product equivalent of the elementary transformation theorem. Let A be an m by n matrix of rank r. Then there exist nonsingular matrices P and Q of m and n rows, respectively, such that

(11)  PA = ( G )      AQ = ( H  0 ) ,
           ( 0 )

where G is an r by n matrix, H is an m by r matrix, and G and H have rank r. We shall use this result in the proof of the useful

Lemma 2. The r rows of G form a basis of the row space of A.

We use (11) and note that the rows of G differ from those of PA only in zero rows. Then by Lemma 1 the rows of G span the row space of PA and hence of A. It remains to show that they are linearly independent in F, so that they are a basis. Designate the rows of G by v_1, ..., v_r and suppose that b_1 v_1 + ... + b_r v_r = 0 for b_i in F not all zero; we may assume for convenience that b_1 is not zero. Consider the r-rowed square matrix

(12)  P_0

whose first row is (b_1, b_2, ..., b_r) and whose remaining rows are the corresponding rows of the identity matrix. The determinant of P_0 is b_1, which is not zero, and hence P_0 is nonsingular. Then P_0 G is r-rowed and of rank r. But this is impossible, since b_1 v_1 + ... + b_r v_r = 0 is the first row of P_0 G. Hence the rows of G are linearly independent in F, and, as desired, they are a basis of the row space of A.

Theorem 2. Let A be an m by n matrix of rank r. Then any r + 1 vectors of the row space of A are linearly dependent in F.

For let w_1, ..., w_{r+1} be any r + 1 vectors of the row space L = v_1 F + ... + v_r F of A, so that

(13)  w_k = b_k1 v_1 + ... + b_kr v_r   (k = 1, ..., r + 1)

for b_kj in F. Define B = (b_kj) and obtain w_k as the kth row of the r + 1 by n matrix BG. By Theorem 3.6 the rank of BG is at most r, and there exists a nonsingular (r + 1)-rowed matrix D = (d_gk) such that the (r + 1)st row of D(BG) is the zero vector. But then

d_{r+1,1} w_1 + ... + d_{r+1,r+1} w_{r+1} = 0 ,

and the d_{r+1,k}, which form a row of the nonsingular matrix D, cannot all be zero. Hence w_1, ..., w_{r+1} are linearly dependent in F. This proves Theorem 2.

We also observe the following

Lemma 3. The matrix G of Lemma 2 may be taken to be a submatrix of A. For if u_1, ..., u_r are a basis of the row space of A (after a permutation of the rows of A, if necessary, we may assume the basal rows come first), it is clear that we may take

(14)  G = ( u_1 )
          ( ... )
          ( u_r ) ,

and PA is then given by (11).

We now apply Theorem 1 to obtain

Theorem 3. The integer r of Theorem 1 is in fact the rank of the m by n matrix whose ith row is u_i.

For let A be the m by n matrix whose ith row is u_i, let r be the integer of Theorem 1, and let the rank of A be s. By Lemma 2 the row space of A has a basis of s vectors, so that by Theorem 2 the r linearly independent vectors u_1, ..., u_r satisfy r <= s; similarly, the s basal vectors lie in a space spanned by r vectors, and s <= r. Hence r = s.

We may now obtain the principal result on linear subspaces of V_n.

Theorem 4. Every linear subspace L over F of V_n over F has a basis. Any two bases of L have the same number r <= n of vectors, and we shall call r the order of L over F.

For V_n = e_1 F + ... + e_n F, and by Theorem 2 any n + 1 vectors of V_n are linearly dependent in F. Thus any linear subspace L over F of V_n contains at most n linearly independent vectors, and there exists a maximum number r <= n of linearly independent vectors u_1, ..., u_r in L. By the result preceding Theorem 1 the vector u is in {u_1, ..., u_r} for every u of L, so that L = u_1 F + ... + u_r F. But if also L = v_1 F + ... + v_s F, then Theorem 2 implies that s <= r, and similarly that r <= s. Hence r = s is unique. This completes our proof.
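Lemma 1 and Theorem 3 can both be checked numerically: multiplying on the left by a nonsingular P leaves the row space (and hence the rank) unchanged, and the order of the spanned space is the rank of the matrix of spanning rows. A small sketch (data and NumPy usage are my own illustration):

```python
import numpy as np

rank = np.linalg.matrix_rank
A = np.array([[1, 2, 0],
              [0, 1, 1]], dtype=float)
P = np.array([[2, 1],
              [1, 1]], dtype=float)   # det = 1, so P is nonsingular
PA = P @ A

# Each row of PA is a combination of rows of A, so stacking PA on A cannot
# raise the rank; nonsingularity of P gives the reverse inclusion (Lemma 1).
assert rank(np.vstack([A, PA])) == rank(A) == rank(PA) == 2

# Theorem 3: the order of {u1, u2, u3} is the rank of the stacked matrix.
u3 = A[0] + A[1]                      # a dependent third spanning vector
print(rank(np.vstack([A, u3])))       # 2
```

The stacked-rank test is the computational form of "the row space of PA is contained in that of A."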
In closing this section we note that the rows of the transpose A' of A are uniquely determined by the columns of A and are, indeed, their transposes. Thus we shall call the row space of A' the column space of A; it is a linear subspace of V_m. It is natural to call the orders of the row and column spaces of A its row and column ranks, respectively. By Theorem 3 the row rank of A is its rank; A and A' have the same rank, and we have proved

Theorem 5. The row and column ranks of a matrix are equal to its rank.

EXERCISES

1. Find a basis of the row space of each of the following matrices, the basis to consist actually of rows of the corresponding matrix.

a)  ( 1 2 2 4 )        b)  ( 2 3 1 4 )        c)  ( 2 3 0 1 )
    ( 1 0 2 0 )            ( 7 2 1 1 )            ( 5 1 0 0 )
    ( 2 3 0 1 )            ( 0 4 1 2 )            ( 1 2 3 4 )
    ( 2 1 1 1 )            ( 2 1 2 2 )            ( 1 0 0 1 )

2. Solve Exs. 1 and 2 of Section 3 by the use of elementary transformations to compute the rank of the matrices whose rows are the given vectors.

3. Form the four-rowed matrices A whose rows are the vectors u_1, u_2, v_1, v_2 of Ex. 3 of Section 3. Show that L_1 = {u_1, u_2} coincides with L_2 = {v_1, v_2} if and only if the ranks of the matrices formed from the first two rows of each A are equal to the rank of A and hence to the order of the subspace L_1.

4. Let A be a rectangular matrix of the form

A = ( A_1  A_2 )
    ( A_3  A_4 ) ,

where A_1 is nonsingular. Show that then there exists a nonsingular matrix P such that

PA = ( A_1  A_2 )
     (  0   A_5 ) .

Give also a simple form of A_5. Hint: take P with first row of blocks (I, 0) and second row of blocks (-A_3 A_1^{-1}, I).

5. Let the matrix A of Ex. 4 be either symmetric or skew. Show that the choice of P then implies a corresponding symmetric or skew form for the transformed matrix.

6. Show also that, if the order of A_1 is the rank of A, then A_5 = 0.
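Theorem 5 is easy to see numerically: the column space of A is the row space of A', so the two ranks are computed the same way. A quick sketch (illustrative data, NumPy assumed):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1]], dtype=float)   # second row = 2 * first row
row_rank = np.linalg.matrix_rank(A)
col_rank = np.linalg.matrix_rank(A.T)    # rank of the transpose
assert row_rank == col_rank == 2
print(row_rank)  # 2
```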
5. The concept of equivalence. In discussing the properties of mathematical systems such as fields and linear spaces over F, it becomes desirable quite frequently to identify in some fashion those systems behaving exactly the same with respect to the given set of definitive properties under consideration. We shall call such systems equivalent and shall now proceed to make this notion precise. We define it in terms of the concept of function.

Let G and G' be two systems and define a single-valued function f on G to G'. Then f is given by

(15)  g -> g' = f(g)   (read: g corresponds to g') ,

such that every element g of G determines a unique corresponding element g' of G'. In elementary mathematics it is customary to call G the range of the independent variable and G' the range of the dependent variable. However, it is more convenient in general situations to say that f is a function on G to G', or that f is a correspondence on G to G'. In elementary algebra and analysis the systems G and G' are usually taken to be the field of all real or all complex numbers, and (15) is then given by a formula y = f(x). The concept may be seen to be sufficiently general as to permit its extension in many directions, but the basic idea is that given above of a correspondence (15) on G to G'.

Suppose now that f defines a second correspondence

(16)  g' -> g ,

that is, a correspondence such that every element g' of G' is the corresponding element f(g) of one and only one g of G. Then we call (15) a one-to-one correspondence on G to G'. Note that, in general, the functions (15) and (16) are distinct. However, if G and G' are distinct it is not particularly important whether we use (15) or (16) to define our correspondence, and we may thus call (15) a one-to-one correspondence between G and G' and indicate this by writing

(17)  g <-> g' .

Thus we may let G be the field of all real numbers and (15) be the function x -> 2x, so that (16) is the function x -> (1/2)x.

Let us consider two mathematical systems G, G' of the same kind, such as two fields or two linear spaces over a fixed field F. These systems consist of sets of elements closed with respect to certain operations; thus we might have g + h in G for every g and h of G and also g' + h' in G' for every g' and h' of G'. We then call G and G' equivalent if there exists a one-to-one correspondence between them which is preserved under the operations of their definition. We have now, for example, defined two fields F and F' to be equivalent if there exists a one-to-one correspondence (15) between them such that (g + h)' = g' + h' and (gh)' = g'h' for every g and h of F. The reader should observe that under our definition every mathematical system G is equivalent to itself; that, if G is equivalent to a system G', then G' is equivalent to G; and that, if also G' is equivalent to G'', then G is equivalent to G''.

Let us then pass to the second case which we require for our further discussion, that of linear spaces. Let F be a fixed field and V consist of a set of elements such that u + v and au are unique elements of V for every u and v of V and a of F. Then we shall call V a general linear space over F. If V_0 is a second* such space and there is a one-to-one correspondence u <-> u_0 between V and V_0 such that

(u + v)_0 = u_0 + v_0 ,   (au)_0 = a u_0

for every u and v of V and a of F, then we shall say that V and V_0 are equivalent over F. We have thus introduced two instances of what is a very important concept in all algebra, that of equivalence.

* We use the notation u_0 instead of u' for the arbitrary vector of V_0 to avoid confusion in the consequent usage, as well as with the notation u' for the transpose of u.

EXERCISES

1. Verify the statement that the field of all rational functions with rational coefficients of the number sqrt(2) is equivalent to the field of all matrices

( a  2b )
( b   a )

for rational a and b, under the correspondence A <-> a + b sqrt(2).
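Exercise 1 can be spot-checked numerically: the claimed correspondence must carry sums to sums and products to products. A sketch (the helper M and the sample values are mine; NumPy assumed):

```python
import math
import numpy as np

def M(a, b):
    """The matrix corresponding to a + b*sqrt(2)."""
    return np.array([[a, 2 * b], [b, a]], dtype=float)

a, b, c, d = 1.0, 2.0, 3.0, -1.0
# (a + b sqrt 2)(c + d sqrt 2) = (ac + 2bd) + (ad + bc) sqrt 2, and the
# matrix correspondence must reproduce exactly these coefficients.
assert np.allclose(M(a, b) + M(c, d), M(a + c, b + d))
assert np.allclose(M(a, b) @ M(c, d), M(a * c + 2 * b * d, a * d + b * c))
print("correspondence preserves addition and multiplication")
```

The same pattern verifies the remaining equivalence exercises below.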
2. Verify the statement that the field of all complex numbers is equivalent to the field of all matrices

(  a  b )
( -b  a )

for real a and b.

3. Verify the statement that the set of all scalar matrices with elements in a field F forms a field equivalent to F.

4. Verify the statement that the mathematical system consisting of all two-rowed square matrices A with elements in F, defined with respect to addition and multiplication, is equivalent to the set of all matrices

( A  0 )
( 0  A )

under the correspondence indicated by the notation.

6. Linear spaces of finite order. We shall restrict all further study of linear spaces to linear spaces of finite order, which we now define. Let V be a general linear space over a given field F. Then we define V to be a linear space of order n over F if V is equivalent over F to V_n over F. Clearly, our definition implies that every two linear spaces of the same order n over F are equivalent over F. It may be shown very easily that, if the properties (2), (3), (4), and (5) hold for every u, v of V and a, b of F, and every quantity of V is uniquely expressible in the form

(18)  c_1 v_1 + ... + c_n v_n   (c_i in F) ,

then V is equivalent over F to V_n, where the equivalence between V and V_n is given by

(19)  c_1 v_1 + ... + c_n v_n <-> (c_1, ..., c_n) .

However, we prefer instead to define V by its equivalence to V_n. This preference then requires the (somewhat trivial) proof of

Theorem 6. Every linear subspace L of order r over F of V_n is equivalent over F to V_r.

Thus we justify the use of the term linear subspace L of order r over F by proving that L is indeed what we have called a linear space of order r over F. For L = u_1 F + ... + u_r F, so that every vector u of L is uniquely expressible in the form

(20)  u = c_1 u_1 + ... + c_r u_r   (c_i in F) .

Thus u in L uniquely determines the c_i in F, and conversely. It follows that

(21)  u <-> (c_1, ..., c_r)

is a one-to-one correspondence between L and V_r, and it is trivial to verify that it defines an equivalence of L and V_r.

We have now seen that every linear space L of order n over F may be regarded as a linear subspace of a space M of order m over F for any m > n. Moreover, L = M if and only if m = n. Theorem 3 should now be interpreted for arbitrary linear spaces of order n, and we have a result which we state as

Theorem 7. Let L = u_1 F + ... + u_n F and v_1, ..., v_m be in L, so that there exist quantities a_ij in F for which

v_i = a_i1 u_1 + ... + a_in u_n   (i = 1, ..., m) ,

and the coefficient matrix A = (a_ij) is defined. Then the number of the v_k which are linearly independent in F is the rank of the matrix A. Moreover, v_1, ..., v_m form a basis of L over F if and only if m = n and A is nonsingular.

EXERCISES

1. Verify the statement that the following sets of matrices are linear spaces of finite order over F, and find a basis for each.
a) The set of all n-rowed diagonal matrices.
b) The set of all n-rowed scalar matrices.
c) The set of all m by n matrices with elements in F.
d) The set of all m by n matrices whose elements not in the first row are zero.

2. Find bases for the following linear spaces of polynomials.
a) All polynomials in x of degree at most three, with coefficients in F.
b) All polynomials in independent variables x and y of degree at most two (in x and y together), with coefficients in F.
c) All polynomials of degree at most two in x = t^2 + t and y = t^3 + t, with coefficients in F.
d) All polynomials in i, with F the field of all rational numbers.
e) All polynomials in a primitive cube root of unity, with F the field of all rational numbers.
. i. Let A = diag {1. of the orders of LI 2 and L 2 and in fact . . . . . we may ask . 1.Li} . all 3. For only if it is clear that aiVi . A* form a basis of the set of 4. A. are not all zero and therefore that the vectors and Wj spanning L if not form a basis of L . A. . + and +bw = + OmVm + Jwi + a\v\ + = (bi)wi + + (bq)w is in both LI + am vm q . . . . 5. q q if . . 77 numbers + F m g) The polynomials of (/) but with F the field of all real numbers. B. . w F. 1. . . Li and (22) If the L 2 and will be designated generally by Lo= only vector which is {Li. I/ 2 2 A. Show that 7. . v . . 2}. in both LI and L 2 is that LI and (23) L 2 are complementary subspaces Lo of their . . . w q ] of L will be called the sum of vm w\. then Lo = v^ = + + . . . we shall show that. then the v> and w. Conversely. B AB A J5 2 form . w q ) are linear subspaces over F of a space L of order n over F. . if . . q and L 2 Thus v ^ t\ implies that the a* and 6. Show that. Show if that 7. vm } and L 2 = Addition of linear subspaces. . we shall say sum and write  L! + L. {wi.LINEAR SPACES the field of all rational 1) with /) All polynomials in w = u(i and i* = v? = 2. are a basis. AB. the zero vector. . Hint: Prove that 1. 1 (0 \0 2 01. A 2 . = viF + vm F + WiF + + . A 2 . and Lo has order m + If LI and Lo are linear subspaces of L and if L contains LI. . L = wiF + + w F. threerowed diagonal matrices. In this case the order of L LI is the sum . w. . . a basis of the set of all threerowed square matrices. . are linearly dependent in F and do = 0. . . If LI = (vi. . . 7. . if /O 0\ A = then 7. necessarily t> <? do form a basis of Lo. . AB form a basis of the set of all tworowed square mat rices 5. + vmF. the subspace Lo = {wi. q .
Theorem 8. Let L_1 of order m over F be a linear subspace of L of order n over F. Then there exists a linear subspace L_2 of L such that L_1 and L_2 are complementary subspaces of L.

The result above may be proved by the method we used to prove Theorem 1, where we apply this method to the set of vectors v_1, ..., v_m, u_1, ..., u_n. However, let us give, instead, a proof using matrix theory. We put L = V_n = u_1 F + ... + u_n F and let G be the m by n matrix whose rows are the basal vectors v_1, ..., v_m of L_1. Then G has rank m, and there exists a nonsingular matrix Q such that the columns of GQ are a permutation of those of G and GQ = (G_1 G_2), where G_1 is an m-rowed nonsingular matrix. Let H be the n - m by n matrix for which HQ = (0 I_{n-m}); since Q permutes columns, it is clear that the rows of H are certain of the vectors e_j which we defined in Section 2. But then

A_0 = ( G_1    G_2     )
      (  0   I_{n-m}  )

is nonsingular, and A = A_0 Q^{-1}, which is obtained by permuting the columns of A_0, is also nonsingular. Moreover, the rows of A are the vectors v_1, ..., v_m together with the rows of H. Hence the rows of A span V_n, and the rows of H span the space L_2 which we have been seeking.

EXERCISES

1. Let L_1 be the row space of each of the m by n matrices of Ex. 1 of Section 4. Use the method above to find a basis of the corresponding V_n consisting of a basis of L_1 and of a basis of a complementary space L_2.

2. Let L_1 be spanned by the vectors u_i of Ex. 1 of Section 3. Find, in each case, a complement L_2 of L_1 in the space {L_1, L_2} = V_3.
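The proof of Theorem 8 amounts to appending unit vectors e_j to a basis of L_1 until the rank reaches n; the appended vectors span a complement. A rough sketch of that idea (the function and data are my own, NumPy assumed):

```python
import numpy as np

rank = np.linalg.matrix_rank

def complement(G, n):
    """Rows of G: a basis of L1 in V_n. Return unit rows spanning a complement."""
    rows = [np.asarray(g, dtype=float) for g in G]
    extra = []
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        # keep e_j exactly when it is independent of everything kept so far
        if rank(np.vstack(rows + extra + [e])) > len(rows) + len(extra):
            extra.append(e)
    return np.array(extra)

L2 = complement([[1, 1, 0], [0, 1, 1]], 3)
print(L2)  # one unit vector completing the basis of V_3
assert rank(np.vstack([[1, 1, 0], [0, 1, 1], *L2])) == 3
```

Because only unit vectors are appended, the result mirrors the matrix proof, in which the rows of H are certain of the e_j.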
8. Systems of linear equations. The set of all linear forms in x_1, ..., x_n with coefficients in F is a linear space

(24)  L = x_1 F + ... + x_n F

of order n over F. The left members

(25)  f_i = f_i(x_1, ..., x_n) = a_i1 x_1 + ... + a_in x_n   (i = 1, ..., m)

of a system

(26)  a_i1 x_1 + ... + a_in x_n = c_i   (i = 1, ..., m)

of m linear equations in n unknowns are such forms and are in L. If r is the rank of the matrix A = (a_ij) of coefficients of (26), we see by Theorem 7 that certain r of the forms f_i are linearly independent in F and the remaining m - r forms are linear combinations of these r. We may assume without loss of generality that the equations (26) have been labeled so that f_1, ..., f_r are linearly independent in F. Then

(27)  f_k = b_k1 f_1 + ... + b_kr f_r   (k = r + 1, ..., m)

for b_kj in F. Let us write

(28)  u_i = (a_i1, ..., a_in)   (i = 1, ..., m)

for the rows of A, so that u_1, ..., u_r are linearly independent in F and, by (27),

(29)  u_k = b_k1 u_1 + ... + b_kr u_r   (k = r + 1, ..., m) .

If the system (26) is consistent, there exist quantities d_1, ..., d_n in F such that f_i(d_1, ..., d_n) = c_i for i = 1, ..., m, and (27) implies that

(30)  c_k = f_k(d_1, ..., d_n) = b_k1 c_1 + ... + b_kr c_r   (k = r + 1, ..., m) .

Define the augmented matrix A* of the system (26) to be the m by n + 1 matrix whose rows are

(31)  u*_i = (a_i1, ..., a_in, c_i)   (i = 1, ..., m) .

If (26) is consistent, then (29) and (30) imply that

(32)  u*_k = b_k1 u*_1 + ... + b_kr u*_r   (k = r + 1, ..., m) ,

so that the rank of A* is at most r. But A* has A as a submatrix and thus has rank at least r, so that A* has rank r.

Conversely, we have already shown that, if A* has the same rank r as A, the vectors u*_1, ..., u*_r are clearly linearly independent, and we may apply elementary row transformations of type 1 to A* which add -(b_k1 u*_1 + ... + b_kr u*_r) to u*_k for k = r + 1, ..., m. These replace A* by a matrix of the form

(33)  ( G_0  C_1 )
      (  0   C_2 ) ,

where the (m - r)-rowed and one-columned matrix C_2 has the elements

(34)  c_k0 = c_k - (b_k1 c_1 + ... + b_kr c_r)   (k = r + 1, ..., m) .

If any c_k0 is not zero, the matrix (33) has a nonzero (r + 1)-rowed minor. This is impossible if A* is a matrix of rank r. Hence every c_k0 = 0, and (30) holds. We now see that the system (26) is consistent if and only if A* has the same rank r as A. Moreover, if A* does have the same rank as A,
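The rank criterion just derived is directly computable: compare the rank of A with that of the augmented matrix A*. A numerical sketch (data mine, NumPy assumed):

```python
import numpy as np

rank = np.linalg.matrix_rank
A = np.array([[1, 2],
              [2, 4]], dtype=float)        # rank 1

c_good = np.array([[1], [2]], dtype=float)  # consistent right-hand side
c_bad = np.array([[1], [3]], dtype=float)   # inconsistent right-hand side

# Consistent exactly when rank(A*) = rank(A).
print(rank(np.hstack([A, c_good])) == rank(A))  # True
print(rank(np.hstack([A, c_bad])) == rank(A))   # False
```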
then m - r of the equations (26) may be regarded as linear combinations of the remaining r equations and are satisfied by the solutions of these equations. It thus remains to show that any system of r equations in x_1, ..., x_n with matrix of rank r has solutions. Before solving we prove

Lemma 4. Let G be an r by n matrix of rank r. Then G = (I_r 0)Q for a nonsingular n-rowed matrix Q.

For Lemma 2 states that G = (H 0)Q_1, where Q_1 is nonsingular and H is an r-rowed nonsingular matrix. Then (H 0) = (I_r 0) diag{H, I_{n-r}}, so that

(36)  G = (I_r 0) Q ,   Q = diag{H, I_{n-r}} Q_1 ,

and Q is nonsingular, as desired.

We may write x = (x_1, ..., x_n), v = (c_1, ..., c_r) and see that such a system of r equations may be regarded as a matrix equation

(35)  G x' = v' .

By Lemma 4 the system (35) may now be written as

(37)  (I_r 0) y' = v' ,   y = (y_1, ..., y_n) = x Q' .

But then y = x Q' is a nonsingular linear transformation, and the y_i are linearly independent linear forms in x_1, ..., x_n. Evidently, (37) has the solution y_i = c_i for i = 1, ..., r, with y_{r+1}, ..., y_n arbitrary; the solution of (35) is then given by x = y (Q')^{-1}. Observe that, if we choose the notation of the x_i so that G = (G_1 G_2) with G_1 an r-rowed nonsingular matrix, then

(38)  G x' = G_1 x_1' + G_2 x_2' = v' ,   x_1 = (x_1, ..., x_r) ,  x_2 = (x_{r+1}, ..., x_n) ,

so that

(39)  x_1' = G_1^{-1} (v' - G_2 x_2') .

But then we have solved the system for x_1, ..., x_r as linear functions of x_{r+1}, ..., x_n. For exercises on linear equations we refer the reader to the First Course in the Theory of Equations.
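The split (38)-(39) can be carried out numerically: choose the free unknowns x_{r+1}, ..., x_n arbitrarily and solve the nonsingular block for the rest. A sketch under my own example data (NumPy assumed):

```python
import numpy as np

# G is r by n of rank r; here r = 2, n = 3, and the first two columns
# form the nonsingular block G1 of (38).
G = np.array([[1, 0, 2],
              [0, 1, 1]], dtype=float)
v = np.array([3, 4], dtype=float)

G1, G2 = G[:, :2], G[:, 2:]
for x3 in (0.0, 1.0, -2.0):                          # free unknown, arbitrary
    x12 = np.linalg.solve(G1, v - G2.flatten() * x3)  # (39): x1' = G1^{-1}(v' - G2 x2')
    x = np.append(x12, x3)
    assert np.allclose(G @ x, v)                      # every choice is a solution
print("each choice of the free unknown yields a solution")
```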
x r as linear functions of Xr+i. ym) . u is a vector of Vm over F v is a vector of V n over F. as well. v = (xi. (3. . where (41) A is an m by n matrix and u = (pi. u and U Q of L. . .32) as a matrix equation y'A'. + M M . to u upper S) wherein every vector u of Suppose also that L determines a unique (au + 6tto) 5 = au8 + bu for every of the space linear. . A and A.' in this equaThen we see that a linear mapping may be expressed as a matrix Linear mappings and linear transformations. so that S is the function u*us u goes us M . Clearly. . The system of equations a linear mapping was expressed in (3. .82 INTRODUCTION TO ALGEBRAIC THEORIES (35) is given and our solution of (39) by l X l . . .( But then we have solved the for xi. v n F.1) of x' = tion. . . Designate the correspondence more M m F and by the symbol S: (read in (42) 5. For exercises on linear equations we Theory of Equations. . . . + ain vn (i = 1. X. Let us interchange the roles of m and n. equation (40) v = uA. . . . m) . refer the reader to the First Course in 9. .(Y . . xn . . and (40) may be regarded as a correspondence u > uA defined by A whereby every u of Vm determines a unique vector uA of F n We now proceed to formulate . . . . be linear spaces of respective orders and n over Let L and consider a correspondence on L to M. + uf = anVi + . . . Then we shall call S a linear mapping L on the space and describe (42) as the property that S is M Suppose now that so that we are given not only the spaces Then a (43) linear = viF + um F and L and but fixed bases mapping S uniquely determines u$ in M. this concept abstractly. . xw) . . a and 6 of F. . and hence u\F L= + .
conversely. Define new bases of L and M . .LINEAR SPACES for 7 83 an in F. then the property that S is linear uniquely determines the mapping S. by (47) 4 0) = ii yi . This is true since every u of L is uniquely expressible in the form (44) u = in yiUi + . for 2/< F and (42) implies that (45) us It follows that to every linear and M mapping S of L on M and 0wen bases of L We there corresponds a unique by n matrix A and conversely. Thus every linear mapping which is a change of be regarded as a linear mapping of the space Vm Let us next observe the effect on the matrix defined by a linear mapping of a change of bases of the linear spaces. shall call A the matrix of S with respect to the given bases of L and M. . respectively. w We now observe that where  1 But v then. But. Thus S determines also an m by n matrix A = (a ).us = uA for the given matrix variable as in (3. may A. we see that S is the linear mapping if M Vn and put (46) u .1) on the space Vn . . = u s in = we assume temporarily that L = Vm and (40) and (41). + ymum . if A is an m by n matrix and we define uf by (43) for the given elements an of A.
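The correspondence u -> u^S = uA, together with the linearity property (42), is easy to exhibit numerically (matrix and vectors below are assumed examples; NumPy assumed):

```python
import numpy as np

A = np.array([[1, 0, 2],
              [1, 1, 0]], dtype=float)   # maps V_2 into V_3
u = np.array([2, 1], dtype=float)
v = np.array([0, 3], dtype=float)
a, b = 5.0, -1.0

# (42): the image of a combination is the combination of the images.
assert np.allclose((a * u + b * v) @ A, a * (u @ A) + b * (v @ A))
print(u @ A)  # the image u^S, a vector of V_3
```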
Let us next observe the effect on the matrix defined by a linear mapping of a change of bases of the linear spaces. Define new bases of L and M, respectively, by

(47)  u_k^(0) = p_k1 u_1 + ... + p_km u_m   (k = 1, ..., m) ,
      v_l^(0) = q_l1 v_1 + ... + q_ln v_n   (l = 1, ..., n) ,

where P = (p_ki) and Q = (q_lj) are nonsingular. We may also write the second set of equations of (47) in the form

(48)  v_j = r_j1 v_1^(0) + ... + r_jn v_n^(0) ,   where R = (r_jl) = Q^{-1} .

We apply the linearity of S to (47) and obtain, substituting (43) and (48),

(49)  (u_k^(0))^S = b_k1 v_1^(0) + ... + b_kn v_n^(0) ,   b_kl = sum over i, j of p_ki a_ij r_jl .

Hence the matrix B = (b_kl) of the linear mapping S with respect to our new bases is given by

(50)  B = P A Q^{-1} .

Clearly, P and Q^{-1} are arbitrary nonsingular matrices of m and n rows, respectively, and we see that changes of basis in L and M replace A by an equivalent matrix. Thus any two equivalent m by n matrices define the same mapping of L into M.

If L and M are the same space, we shall henceforth call a linear mapping S of L on L a linear transformation of L. Since we are now considering only a single space, the only possible meaning of the u_i and v_i in (43) can be that of a fixed basis u_1, ..., u_n of L. Then we define the matrix A of a linear transformation S on L with respect to a fixed basis u_1, ..., u_n of L to be the matrix A determined by (43) with v_i = u_i. We have defined thereby a one-to-one correspondence between the set of all n-rowed square matrices with elements in F and the set of all linear transformations on L of order n over F.

If L = V_n, such a correspondence is given by u -> u^S = uA. Since we may then solve for u = u^S A^{-1} when A is nonsingular, we see, as we saw in (3.33), that a nonsingular linear transformation defines a one-to-one correspondence of L and itself; accordingly, we should and do call S a nonsingular linear transformation if A is nonsingular.

We now observe the effect on A of a change of basis of L. We use (47) but note that v_l^(0) = u_l^(0), so that we have P = Q. Hence a change of basis* of L with matrix P replaces the matrix A of a linear transformation by

(51)  B = P A P^{-1} .

We shall call two matrices A and B similar if (51) holds for a nonsingular matrix P, and we shall obtain necessary and sufficient conditions that A and B be similar in our next chapter. It is now clear that, in order to pass to the study of those properties of a linear transformation on L which do not depend on the basis of L over F, we need only study those properties of square matrices A which are unchanged when we replace A by any similar matrix P A P^{-1}.

* Observe that a change of bases u_i -> u_i^(0) as in (47) itself defines a linear transformation of L; thus we may regard a change of basis as being induced by a nonsingular linear transformation when we put L = V_n.

EXERCISES

1. Find the vectors of V_3 into which the vectors (2, 1, 0), (1, 1, 0), (0, 1, 0) of V_3 are mapped by the transformations u -> uA determined by the following matrices A.

a)  ( 2 1 3 )      b)  ( 2 1 3 )      c)  ( 2 0 1 )
    ( 4 2 0 )          ( 1 2 1 )          ( 0 1 0 )
    ( 1 2 4 )              ( 3 2 1 )          ( 1 1 0 )

2. Show that those of the linear transformations of Ex. 1 whose matrices are nonsingular are nonsingular linear transformations, and find their inverse transformations.

3. Let S be a linear transformation of Ex. 1, and let C be one or the other of the curves whose coordinates (x_1, x_2, x_3) satisfy the following equations. Find the equation of the curves C^S into which each S carries each C.

a) 2 x_1^2 + 4 x_1 x_2 - 10 x_1 x_3 + 4 x_2^2 + 11 x_3^2 = 1
b) 6 x_1 x_2 - 18 x_2 x_3 + x_3^2 = 1
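Similarity (51) can be illustrated numerically: B = P A P^{-1} represents the same transformation in a new basis, so basis-independent quantities such as the determinant and the characteristic roots agree for A and B. A sketch with matrices of my own choosing (NumPy assumed):

```python
import numpy as np

A = np.array([[2, 1],
              [0, 3]], dtype=float)
P = np.array([[1, 1],
              [0, 1]], dtype=float)        # a nonsingular change of basis
B = P @ A @ np.linalg.inv(P)               # the similar matrix of (51)

assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.allclose(np.sort(np.linalg.eigvals(A).real),
                   np.sort(np.linalg.eigvals(B).real))
print(B)
```

Properties that survive every such replacement A -> P A P^{-1} are exactly the basis-free properties of the transformation.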
= us (us )' = uAAV. that (/ A be a skew matrix such that / + A is nonsinguis + A)""^/ A) orthogonal. f(us ) = 2 + 26 b pq = for a p 5^ g. We call matrices A clearly. Write B = AA' . . f(u) = uu'. then BB' and (PAP')(PA'P') A . . Show that every orthogonal tworowed matrix / A has the form a =1 V (b .(ft... EXERCISES 1. all = /(w) only if other c/ = 0.c/. We may by square Then. P'P = /. B is also orthogonal. sional linear space of vectors u = (ci. INTRODUCTION TO ALGEBRAIC THEORIES Orthogonal linear transformations. cn) of the quadratic (52) form /fe. define . have seen that S determines basis of Fn Now in . Let I be the identity matrix. S on V n which are said We propose to study those linear transformations to be length preserving and define this concept as the property that f(u) 8 f(u ) for every u of n Such transformations S will be called orthogonal. that is. for which the matrix P of (51) is orthogonal.^^ + . The final topic of our study of linear spaces will be a brief introduction to those linear transformations permitted in what is called Euclidean geometry.. those We is an orthogonal matrix.) = 2Ci6i. f(u s s ) pfl satisfying (53) AA' = I is orthogonal matrices and have shown that S orthogonal if and only if its matrix A uniquely only in terms of a fixed Euclidean geometry the only changes of basis allowed are those obtained by an orthogonal linear transformation. = V . so that if S is a linear transformation with orthogonal matrix and we replace A by PAP"" 1 = B. B = / is the identity matrix. . 3. But then PP = 1 7. . We . c/ = for j ^ p and and see that/(w ) have f(u) = /(w5 ) only if 6 = 1 for every i. Let then Vn be the ndimenc n ) over a field F. * . What Show are the possible values of the determinant of an orthogonal matrix? 2. Put c p = 1. lar. Then take c p c q = 1.+4.86 10. from which f(u) = 2.. /(w) = 2c?.PAA'P' = I .. the norm of u (square of the length of u) to be the value f(u) = /(ci.. . us = uA for an nrowed define S matrix A. P' = P 1 .
4. The equations x = x_0 cos h + y_0 sin h, y = -x_0 sin h + y_0 cos h are those for a rotation of axes in a real Euclidean plane through an angle h. Show that the corresponding matrix is orthogonal, and that every real orthogonal two-rowed matrix is either the matrix of a rotation or its product by the matrix of the reflection x = x_0, y = -y_0. Hint: By Ex. 2 we have a^2 + b^2 = 1 with a_21 = -b, a_22 = a or a_21 = b, a_22 = -a, and in the real case we may write a = cos h, b = sin h.

5. Find a real orthogonal matrix P for each of the following symmetric matrices A such that PAP' is a diagonal matrix. Hint: Compute PAP' = D = diag {d_1, d_2}.

a) A = ( ... )   b) A = ( 3  4 )   c) A = ( ... )
                         ( 4  3 )

11. Orthogonal spaces. We shall call two vectors u and v of V_n orthogonal (that is, in a geometric sense, perpendicular) if uv' = 0. If A is any orthogonal matrix and we define a linear transformation S of V_n by u^S = uA, then

u^S (v^S)' = uAA'v' = uv' ,

so that u^S (v^S)' = 0 if and only if uv' = 0. Thus orthogonal transformations on V_n preserve orthogonality of vectors of V_n.

If L = v_1 F + ... + v_m F is a linear subspace of V_n, we define the set O(L) to be the set of all vectors w in V_n such that vw' = 0 for every v of L. Then O(L) is a linear subspace of V_n which we shall call the space orthogonal in V_n to L. For, if w_1 and w_2 are in O(L) and a and b are in F, we have v(aw_1 + bw_2)' = a(vw_1') + b(vw_2') = 0, so that aw_1 + bw_2 is in O(L). We now prove

Theorem 9. Let L be a linear subspace of order m of V_n. Then the order of O(L) is n - m.

In fact we shall prove

Theorem 10. Let L be the row space of the m by n matrix G of rank m, so that G = (I_m 0)Q for a nonsingular n-rowed matrix Q. Then O(L) is the row space of the matrix H = (0 I_{n-m})(Q')^{-1} of rank n - m.

For proof we first note that the elements of GH' are the products v_i w_j' of the rows v_i of G
by the transposes of the rows w_j of H. But

GH' = (I_m 0)Q[(0 I_{n-m})(Q')^{-1}]' = (I_m 0)QQ^{-1}(0 I_{n-m})' = (I_m 0)(0 I_{n-m})' = 0 ,

so that every row of H is orthogonal to every row of G, and the row space of H is contained in O(L). Evidently H has rank n - m. Note that the row space of H is the set of all solutions x = (x_1, ..., x_n) of the homogeneous linear system Gx' = 0.

Let then O(L) = y_1 F + ... + y_q F be of order q over F, so that O(L) contains the row space of H and we must have q ≥ n - m. We let H_0 be the q by n matrix whose kth row is y_k, so that GH_0' = 0. Write D = QH_0'. Then GH_0' = (I_m 0)D = 0 states that the first m rows of D are zero, so that D has at most n - m nonzero rows. But the rank of the n by q matrix H_0' is q, and the rank of D = QH_0' is also q since Q is nonsingular. Hence q ≤ n - m. This proves that q = n - m, so that O(L) coincides with the row space of H, as stated.

The sum of the orders of L and O(L) is thus n, and it is natural to ask whether or not L and O(L) are complementary in V_n, that is, whether V_n = L + O(L). This is not true in general, since if F is the field of all complex numbers and L = vF in V_2 for v = (1, i), i^2 = -1, then vv' = 1 + i^2 = 0. Hence v is in O(L), O(L) has order 2 - 1 = 1, and O(L) = L. However, we may prove

Theorem 11. Let F be a field whose quantities are real numbers. Then V_n = L + O(L).

For by Theorem 9 it suffices to prove that the only vector w in both L and O(L) is the zero vector. Hence let w be in both L and O(L), so that w = dG, where d = (d_1, ..., d_m) has elements in F and G is an m by n matrix of rank m whose row space is L. Then Gw' = GG'd' = 0. But by Theorem 3.17 the matrix GG' has rank m and is nonsingular, so that d' = 0, d = 0, and, clearly, w = dG = 0 as desired.

EXERCISE

Let L be the space over the field of all real numbers spanned by the following vectors u_i. Find a basis of the space O(L) in the corresponding V_n.

a) u_1 = ( ... ), u_2 = ( ... )   b) ...   c) ...   d) ...
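Since O(L) is the solution space of the homogeneous system Gx' = 0, a basis of O(L) may be computed mechanically, as the exercise above requires. The following sketch is an editorial addition (assuming Python with SymPy; the matrix G is a hypothetical rank-2 example in V_4, not one of the exercise matrices).

```python
from sympy import Matrix

# L is the row space of G; O(L) is the solution space of Gx' = 0
G = Matrix([[1, 2, 2, 1],
            [0, 1, 1, 0]])
m = G.rank()                  # the order of L
basis_OL = G.nullspace()      # a basis of O(L), as column vectors
q = len(basis_OL)             # the order of O(L)

# every basis vector of O(L) is orthogonal to every row of G
orthogonal = all(all(e == 0 for e in (G * w)) for w in basis_OL)
```

By Theorem 9 the orders satisfy m + q = n, which the test below confirms for this example.
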
CHAPTER V

POLYNOMIALS WITH MATRIC COEFFICIENTS

1. Matrices with polynomial elements. Let F be a field and designate by

(1)  F[x]

(read: F bracket x) the set of all polynomials in x with coefficients in F. We shall consider m by n matrices with elements in F[x] and define elementary transformations of three types on such matrices as in Section 2.4. As we stated in that section, we assume that in the elementary transformations of type 2 the quantities c are permitted to be any quantities of F[x]. But those of type 3 are restricted so that the quantity a in F[x] shall have an inverse in F[x]. Then a must be a constant polynomial; that is, a may be any nonzero quantity of F.

We now let A and B be m by n matrices with elements in F[x] and call A and B equivalent in F[x] if there exists a sequence of elementary transformations carrying A into B. The field F(x) of all rational functions of x with coefficients in F contains F[x], and it is thus clear that if A and B are equivalent in F[x] they are also equivalent in F(x). Hence we see that A and B are equivalent in F[x] only if they have the same rank.

We may then prove

LEMMA 1. Every nonzero matrix A of rank r with elements in F[x] is equivalent in F[x] to

(2)  diag {f_1, ..., f_r, 0, ..., 0}

for monic polynomials f_i = f_i(x) such that f_i divides f_{i+1}, i = 1, ..., r - 1.

For the elements of all matrices equivalent in F[x] to A are polynomials, and in the set of all such polynomials there is a nonzero polynomial f_1 = f_1(x) of lowest degree. Using elementary transformations of types 3 and 1, we may assume that f_1 is monic and is the element in the first row and column of a matrix C = (c_ij) equivalent in F[x] to A. By the Division
Algorithm for polynomials we may write c_i1 = q_i f_1 + r_i for q_i and r_i in F[x] and r_i of degree less than the degree of f_1. But if we add -q_i times the first row of C to its ith row, we pass to a matrix equivalent in F[x] to A with r_i as the element in its ith row and first column. Our definition of f_1 thus implies that every r_i is zero. We have now shown A equivalent in F[x] to a matrix D = (d_ij) with d_i1 = 0 for i ≠ 1, d_11 = f_1. Similarly, we see that every d_1j is divisible by f_1 and hence that A is equivalent in F[x] to a matrix

(3)  ( f_1   0  )
     (  0   A_2 ) ,

where A_2 has m - 1 rows and n - 1 columns. Then either A_2 = 0, or we may apply the same process to A_2. After a finite number of such steps we ultimately show that A is equivalent in F[x] to a matrix (2) such that every f_i = f_i(x) is monic and is a polynomial of least degree in the set of all elements of all matrices equivalent in F[x] to

(4)  A_i = diag {f_i, f_{i+1}, ..., f_r, 0, ..., 0} .

It remains only to prove the divisibility property of our lemma. Write f_{i+1} = f_i s_i + t_i, where s_i and t_i are in F[x] and the degree of t_i is less than the degree of f_i. Then we add the first row of A_i to its second row, so that the submatrix in the first two rows and columns of the result is

(5)  ( f_i     0    )
     ( f_i  f_{i+1} ) .

We then add -s_i times the first column to the second column to obtain a matrix equivalent in F[x] to A_i and with corresponding submatrix

(6)  ( f_i  -s_i f_i )
     ( f_i    t_i    ) .

Our definition of f_i thus implies that t_i = 0, and f_i divides f_{i+1} as described.

We observe now that the elements of every t-rowed minor of a matrix A with elements in F[x] are polynomials in x, so that these minors are also polynomials in x. By Section 2.7, if B is obtained from A by an elementary transformation, then every t-rowed minor of B is either a t-rowed minor of A, the product of a t-rowed minor of A by a quantity a of F, or a sum M_1 + fM_2, where M_1 and M_2 are t-rowed minors of A and f is a polynomial of F[x]. If d in F[x] divides every t-rowed minor of A, it then also divides every t-rowed minor of all m by n matrices B equivalent in F[x] to A. But A is also equivalent in F[x] to B, and therefore the t-rowed minors of equivalent matrices have the same common divisors. We may state this result as

LEMMA 2. Let A be an m by n matrix with elements in F[x] and d_t be the greatest common divisor of all t-rowed minors of A. Then d_t is also the greatest common divisor of the t-rowed minors of every matrix B equivalent in F[x] to A.

Every (k + 1)-rowed minor M_{k+1} of A may be expanded according to a row and is then a linear combination, with coefficients in F[x], of k-rowed minors of A. Hence d_k divides every M_{k+1}, so that d_k divides d_{k+1}. We observe also that in (2) the only nonzero k-rowed minors are the k-rowed minors diag {f_{i_1}, ..., f_{i_k}} for i_1 < i_2 < ... < i_k, and since every f_i divides f_{i+1} we see that the g.c.d. of all k-rowed minors of (2) is f_1 ... f_k. We thus have

(7)  d_k = f_1 f_2 ... f_k   (k = 1, ..., r) ,

whence f_k = d_k d_{k-1}^{-1} if we define d_0 = 1.

It is customary to relabel the polynomials f_i and thus to write f_r = g_1, f_{r-1} = g_2, ..., f_1 = g_r. We call g_j the jth invariant factor of A and see that we have the formula

(8)  g_j = d_{r-j+1} d_{r-j}^{-1}   (j = 1, ..., r) .

Moreover, g_j is now divisible by g_{j+1} for j = 1, ..., r - 1. We apply elementary transformations of type 1 to (2) and see that if A has invariant factors g_1, ..., g_r, then A is equivalent in F[x] to

(9)  diag {g_1, ..., g_r, 0, ..., 0} .

If, then, B is equivalent in F[x] to A, it has the same greatest common divisors d_k and hence the same g_j of (8), while, if the converse holds, B is equivalent in F[x] to (9) and to A. We have proved
Theorem 1. Two m by n matrices with elements in F[x] are equivalent in F[x] if and only if they have the same invariant factors. Every m by n matrix of rank r with invariant factors g_1, ..., g_r is equivalent in F[x] to (9).

As in our theory of the equivalence of matrices over a field, we may obtain a theory of equivalence in F[x] using matrix products instead of elementary transformations. We thus define a matrix P to be an elementary matrix if both P and P^{-1} have elements in F[x]. Then, if a = |P| and b = |P^{-1}|, the quantities a and b are in F[x], ab = |PP^{-1}| = |I| = 1, so that a and b are nonzero quantities of F. But if P has elements in F[x], so does adj P, and hence so will P^{-1} = |P|^{-1} adj P if |P| is in F. Thus we have proved that a square matrix P with polynomial elements is elementary if and only if its determinant is a constant (that is, in F) and not zero.

We now observe that, in particular, the determinant of any elementary transformation matrix is in F. Hence, if A and B are square matrices with elements in F[x] and are equivalent in F[x], their determinants differ only by a factor in F. Moreover, if |A| and |B| are monic polynomials, then the equivalence of A and B in F[x] implies that |A| = |B| and, in fact, that when A is a nonsingular matrix with g_1, ..., g_n as invariant factors its determinant is the product g_1 ... g_n.

It is clear now that a square matrix P with elements in F[x] is elementary if and only if P is equivalent in F[x] to the identity matrix, and thus if and only if the invariant factors of P are all unity.

We may now redefine equivalence. We call two m by n matrices A and B with elements in F[x] equivalent in F[x] if there exist elementary matrices P and Q such that PAQ = B. Then we again have the result that A and B are equivalent in F[x] if and only if they have the same invariant factors. For under our first definition P and Q are equivalent in F[x] to identity matrices and hence may be expressed as products of elementary transformation matrices. But, if P_0 and Q_0 are elementary transformation matrices, the products P_0 A and A Q_0 are the matrices resulting from the application of the corresponding elementary transformations to A. Hence PAQ must have the same invariant factors as A. The converse is proved similarly, and we have the result desired.

In closing let us note a rather simple polynomial property of invariant factors. The invariant factors of a matrix A with elements in F[x] and rank r are certain monic polynomials g_i(x) such that g_{i+1}(x) divides g_i(x) for i = 1, ..., r - 1. If g_k(x) = 1, then g_j(x) = 1 for all larger j = k + 1, ..., r. Let us then call those g_i(x) ≠ 1 the nontrivial invariant factors
of A. Thus there exists an integer t such that g_i(x) has positive degree for i = 1, ..., t, while the remaining g_j(x) = 1; these we call the trivial invariant factors of A.

EXERCISES

1. Use elementary transformations to carry the following matrices into the form (9).

a) ...   b) ...   c) ...   d) ...

2. Express the following matrices as polynomials in x whose coefficients are matrices with constant elements.

a) ...   b) ...   c) ...

3. Let A be an m by n matrix whose elements are in F[x]. Describe a process by means of which we may use elementary row transformations involving only the ith and kth rows of A to replace A by a matrix with the g.c.d. of a_ij and a_kj in its ith row and jth column. Hint: Use the g.c.d. process of Section 1.6 with f = a_ij, g = a_kj.

4. Describe a process for reducing a matrix A with elements in F[x] to an equivalent diagonal matrix. How, then, may we use the process of Ex. 3 to carry this preliminary diagonal matrix into the form (9)?

5. Reduce the matrices of Ex. 1 to the form (9) by the use of Ex. 4 rather than elementary transformations.

6. Determine the invariant factors of the matrices of Ex. 1 by the use of (8).
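The computation of invariant factors by (8), as called for in Ex. 6, may be sketched mechanically: d_k is the g.c.d. of the k-rowed minors (Lemma 2), and the diagonal polynomials of (2) are f_k = d_k / d_{k-1}. The following is an editorial addition (assuming Python with SymPy; the 2-by-2 matrix is a hypothetical example, not one of the exercise matrices).

```python
from functools import reduce
from sympy import symbols, gcd, cancel, expand, Matrix

x = symbols('x')

# a hypothetical 2-by-2 matrix over F[x]
A = Matrix([[x, x],
            [0, x - 1]])

minors_1 = [A[0, 0], A[0, 1], A[1, 0], A[1, 1]]
d1 = reduce(gcd, minors_1)          # g.c.d. of the 1-rowed minors
d2 = expand(A.det())                # the single 2-rowed minor

f1 = d1                             # (7): f_1 = d_1
f2 = cancel(d2 / d1)                # (7): f_2 = d_2 / d_1
g1, g2 = f2, f1                     # relabeled invariant factors of (8)
```
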
7. Use elementary transformations to reduce the following matrices to the form (9).

a) ...   b) ...

2. Elementary divisors. Let K be the field of all complex numbers, so that every monic polynomial of K[x] of positive degree is a product of powers of distinct monic linear factors. In particular, if A is a matrix with elements in K[x], we may write its invariant factors in the form

(10)  g_i(x) = (x - c_1)^{e_i1} (x - c_2)^{e_i2} ... (x - c_s)^{e_is}   (i = 1, ..., r) ,

where c_1, ..., c_s are the distinct complex roots of the first invariant factor g_1(x). Here r is the number of invariant factors of A. Since g_{i+1}(x) divides g_i(x), every g_i(x) divides g_1(x), and the e_ij are integers satisfying

e_1j ≥ e_2j ≥ ... ≥ e_rj ≥ 0   (j = 1, ..., s) .

We shall call the rs polynomials

(x - c_j)^{e_ij}

the elementary divisors of A. Those for which e_ij > 0 will be called the nontrivial elementary divisors of A.
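Over the complex field the factorization (10) which yields the elementary divisors may be carried out mechanically. The following sketch is an editorial addition (assuming Python with SymPy; the invariant factors g_1, g_2 below are hypothetical examples chosen so that g_2 divides g_1).

```python
from sympy import symbols, factor_list

x = symbols('x')

# hypothetical nontrivial invariant factors with g2 | g1
g1 = (x - 1)**2 * (x + 2)
g2 = (x - 1)

def elementary_divisors(g):
    # the prime-power factors (x - c)^e of g, as in (10)
    _, factors = factor_list(g)
    return [base**e for base, e in factors]

ed1 = elementary_divisors(g1)
ed2 = elementary_divisors(g2)
```
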
The invariant factors of A clearly determine its elementary divisors uniquely as certain rs powers f_ij = (x - c_j)^{e_ij} of linear functions x - c_j, where r is the rank of A and s is the number of distinct roots of the invariant factors of A. Conversely, the elementary divisors of A uniquely determine its invariant factors. For let us consider a set of q polynomials, each a power with positive integral exponent of a monic linear factor with a complex root. Label the distinct roots occurring in our set c_1, ..., c_s; for each j let t_j be the number of polynomials of our set which are powers of x - c_j, and let t be the maximum of the t_j. Then q = t_1 + ... + t_s, and our set may be extended to a set of exactly ts polynomials by adjoining, for each j, t - t_j polynomials (x - c_j)^0 = 1. The exponents so obtained may be labeled as integers e_ij satisfying e_1j ≥ e_2j ≥ ... ≥ e_tj ≥ 0, so that our ts polynomials have the form (x - c_j)^{e_ij}. Define g_i(x) as in (10) for i = 1, ..., t. Then g_{i+1}(x) divides g_i(x), and the g_i(x) are the nontrivial invariant factors of any matrix A whose nontrivial elementary divisors are the given polynomials. If A has rank r, we have r ≥ t, and we adjoin (r - t)s trivial elementary divisors (x - c_j)^0 to complete the set determining the r invariant factors g_i(x) of A. It is now evident that two m by n matrices with elements in K[x] are equivalent in K[x] if and only if they have the same elementary divisors.

The matrix (9) has prescribed invariant factors and hence prescribed elementary divisors. However, it is desirable to obtain a matrix of a form exhibiting the elementary divisors explicitly. We shall do this, and first prove

Theorem 2. Let f_1, ..., f_s be monic polynomials of F[x] which are relatively prime in pairs. Then the only nontrivial invariant factor of the matrix A = diag {f_1, ..., f_s} is its determinant g = f_1 ... f_s.

The result is trivial for s = 1. If s = 2, the g.c.d. of the elements f_1 and f_2 of A = diag {f_1, f_2} is unity, so that d_1 = 1 while d_2 = f_1 f_2, and f_1 f_2 is the only nontrivial invariant factor of A; that is, diag {f_1, f_2} is equivalent in F[x] to diag {f_1 f_2, 1}. Assume, then, that A_{s-1} = diag {f_1, ..., f_{s-1}} is equivalent in F[x] to diag {g_{s-1}, 1, ..., 1} for g_{s-1} = f_1 ... f_{s-1}. Then A = diag {A_{s-1}, f_s} is equivalent in F[x] to diag {g_{s-1}, f_s, 1, ..., 1}. But g_{s-1} is prime to f_s, so that by the case s = 2 the matrix diag {g_{s-1}, f_s} is equivalent in F[x] to diag {g_{s-1} f_s, 1}, and A is equivalent in F[x] to diag {g, 1, ..., 1} for g = g_{s-1} f_s = f_1 ... f_s. Then g is the only nontrivial invariant factor of A, and our theorem is proved.

We see now that if the g_i(x) are defined by (10) for distinct complex numbers c_j, the nontrivial elementary divisors (x - c_j)^{e_ij} occurring in any one g_i(x) are relatively prime in pairs, and we may prove

Theorem 3. Let c_1, ..., c_r be complex numbers and f_i = (x - c_i)^{n_i} for n_i ≥ 0. Then the matrix A = diag {f_1, ..., f_r} has the f_i ≠ 1 as its elementary divisors.

For we examine the form of A and see, by Theorem 2 and the construction above, that A is equivalent in F[x] to diag {g_1, ..., g_t, 1, ..., 1}, where the g_i are formed from the f_i ≠ 1 as in (10); hence the nontrivial invariant factors of A are g_1, ..., g_t, and its nontrivial elementary divisors are the f_i ≠ 1.

EXERCISES

1. The following polynomials are the nontrivial invariant factors of a matrix. What are its nontrivial elementary divisors?

a) ...   b) ...   c) ...   d) ...   e) ...

2. The following polynomials are the nontrivial elementary divisors of a matrix whose rank is six. What are its invariant factors?

a) ...   b) ...   c) ...

3. Find elementary transformations which carry the following matrices into the form (9).

a) ...   b) ...

3. Matric polynomials. Let the elements a_ij of an m by n matrix A be in F[x], and let s be a positive integer not less than the degree of any of the a_ij. Then every a_ij has s as a virtual degree, and we may write

(11)  a_ij = a_ij^(0) x^s + a_ij^(1) x^{s-1} + ... + a_ij^(s)

for a_ij^(k) in F. Define A_k = (a_ij^(k)) and obtain an expression of A in the form

(12)  f(x) = A_0 x^s + A_1 x^{s-1} + ... + A_s

for m by n matrices A_k. Thus f(x) is a polynomial in x of virtual degree s with m by n matrix coefficients A_k and virtual leading coefficient A_0. If all the A_k in (12) are zero matrices, the polynomial f(x) is the zero polynomial, and we again designate it by 0. Otherwise we say that f(x) has degree s and leading coefficient A_0 if A_0 ≠ 0. Let us call these polynomials matric polynomials.

In order to be able to multiply as well as to add our matric polynomials, we shall henceforth restrict our attention to n-rowed square matrices with elements in F[x], and thus to the set of all polynomials in x whose coefficients are n-rowed square matrices having elements in F. Evidently we have

LEMMA 3. The degree of f(x) + g(x) is not greater than the larger of the degrees of f(x) and g(x).

LEMMA 4. Let f(x) have degree m and leading coefficient A_0, and let g(x) have degree n and leading coefficient B_0 such that A_0 B_0 ≠ 0. Then the degree of f(x)g(x) is m + n, and the leading coefficient of f(x)g(x) is A_0 B_0.

As in Chapter I, we use Lemma 4 in the derivation of the Division Algorithm for matric polynomials, which we state as

Theorem 4. Let f(x) and g(x) be n-rowed matric polynomials of respective degrees s and t, such that the leading coefficient of g(x) is nonsingular. Then there exist unique polynomials q(x), r(x), Q(x), R(x) such that r(x) and R(x) have virtual degree t - 1 and

(13)  f(x) = q(x)g(x) + r(x) = g(x)Q(x) + R(x) ,

while if s < t, then q(x) = Q(x) = 0 and r(x) = R(x) = f(x).

While the proof is very much the same as that for polynomials with coefficients in a field, we give it in some detail. We assume first that s ≥ t and let A_0 and B_0 be the respective leading coefficients of f(x) and g(x). Then B_0^{-1} exists, and if q_1(x) = A_0 B_0^{-1} x^{s-t}, the polynomial f_1(x) = f(x) - q_1(x)g(x) has virtual degree s - 1. This implies a finite process by means of which we begin with a polynomial f_i(x) of virtual degree s - i and virtual leading coefficient A_i^(0), and then form f_{i+1}(x) = f_i(x) - q_{i+1}(x)g(x) of virtual degree s - i - 1 for q_{i+1}(x) = A_i^(0) B_0^{-1} x^{s-t-i}. The process terminates when we obtain an f_i(x) of virtual degree t - 1. Thus we have the first equation of (13) with q(x) = q_1(x) + q_2(x) + ..., r(x) = f(x) - q(x)g(x) of virtual degree t - 1, and with q(x) = 0, r(x) = f(x) if s < t.

If also f(x) = q_0(x)g(x) + r_0(x) for polynomials q_0(x) and r_0(x) as in our theorem, then [q(x) - q_0(x)]g(x) = r_0(x) - r(x) has virtual degree t - 1. But if q(x) - q_0(x) ≠ 0, its leading coefficient C_0 ≠ 0, and C_0 B_0 ≠ 0 since B_0 is nonsingular; then by Lemma 4 the degree of [q(x) - q_0(x)]g(x) is at least t, a contradiction. This proves the uniqueness of q(x) and r(x). The existence and uniqueness of Q(x) and R(x) are proved in exactly the same way, except that we begin by forming f(x) - g(x)[B_0^{-1} A_0 x^{s-t}], which likewise has virtual degree s - 1.

Let us regard the first equation of (13) as the right division of f(x) by g(x) and the second as the left division of f(x) by g(x). Then we shall speak correspondingly of q(x) and r(x) as right quotient and remainder, respectively, and of Q(x) and R(x) as left quotient and remainder.

It is natural now to try to prove a Remainder Theorem for matric polynomials. However, the theorem in its usual form is ambiguous, since if C is an n-rowed square matrix and, for example, f(x) is the polynomial A_0 x^2 = x A_0 x = x^2 A_0, then f(C) might mean any one of A_0 C^2, C A_0 C, C^2 A_0, and these matrices might all be different. Thus we must first define what we shall mean by f(C). Let f(x) be the matric polynomial (12) and C be an n-rowed square matrix with elements in F. Define f_R(C) (read: f right of C) and f_L(C) (read: f left of C) by

(14)  f_R(C) = A_0 C^s + A_1 C^{s-1} + ... + A_{s-1} C + A_s ,

(15)  f_L(C) = C^s A_0 + C^{s-1} A_1 + ... + C A_{s-1} + A_s .

We shall then obtain a Remainder Theorem which we state as

Theorem 5. Let f(x) be an n-rowed square matric polynomial (12) and C be an n-rowed square matrix with elements in F. Then the right and left remainders on division of f(x) by xI - C are f_R(C) and f_L(C).

To make our proof we use Theorem 4, as in the case of polynomials with coefficients in a field, and have f(x) = q(x)(xI - C) + r(x), where q(x) has degree s - 1 and r(x) = B has elements in F. We write q(x) = C_0 x^{s-1} + C_1 x^{s-2} + ... + C_{s-1} and form

h(x) = q(x)(xI - C) = C_0 x^s + (C_1 - C_0 C)x^{s-1} + ... + (C_{s-1} - C_{s-2}C)x - C_{s-1}C .

Then, if D is any n-rowed square matrix with elements in F, we have

h_R(D) = C_0 D^s + (C_1 - C_0 C)D^{s-1} + ... + (C_{s-1} - C_{s-2}C)D - C_{s-1}C ,

while

q_R(D)(D - C) = (C_0 D^{s-1} + C_1 D^{s-2} + ... + C_{s-1})(D - C) .

These matrices are not equal in general; they are equal if D and C are commutative, that is, if DC = CD, and in particular if D = C. Hence h_R(C) = q_R(C)(C - C) = 0, and f_R(C) = h_R(C) + B = B is the right remainder. The results on the left follow similarly.

As a consequence we have the Factor Theorem for matric polynomials, which we state as

Theorem 6. The matric polynomial f(x) has xI - C as a right divisor if and only if f_R(C) = 0, and as a left divisor if and only if f_L(C) = 0.

For Theorem 5 implies that, if f_R(C) = 0, then in (13) the polynomial r(x) = 0 and f(x) = q(x)(xI - C). Conversely, if f(x) = q(x)(xI - C), then f_R(C) = 0. This statement is correct, but it is nontrivial: it follows only from the study above, where we showed that if D is any square matrix such that DC = CD, then h(x) = q(x)(xI - C) implies h_R(D) = q_R(D)(D - C); the substitution of C for x may not, in general, be carried out factor by factor. Our principal use of the result above is thus precisely what is usually called the Factor Theorem: f(x) has xI - C as a factor if and only if C is a root of f(x).
A) are the cof actors of the elements of (in general. Every square matrix is a root of its characteristic equation. Let gi(x) be the first invariant factor of the characteristic ma Then g(x) = gi(x)I is the minimum function of A. Observe also that the g.d. the polynomials //?(A) and/i. nonscalar). Hence B(x) A were defined in (8). q(x). g(A) = 0. A) is a matric polynomial A. where B(x) is an nrowed square matrix with elements in F[x\. A We shall call the matrix xl  A the characteristic matrix of A. the de terminant \xl nomial (18) A the characteristic determinant of A. By the and by (19) \xl uniqueness of quotient in Theorem 4 we have of the elements of adj (xl (20) The g. Hence. The invariant factors of xl .c. g(x) = 0i(x)7 = (xl  A)J5(x) .A)B(x)d n _i(x).or a lefthand factor. . We now apply (3. is a matric polynomial.A\I = gi(x)di(x)I = (xl . We now say that either the polynomial /(x) as a root /(A) only if /(x) has xl if = or the equation /(x) = has 0. then q(x) We now define the minimum function of a square matrix A to be the monic scalar polynomial of made above then implies least degree with A as a root. and we shall designate their common value by is A (17) /(A) = aoA + .d.(A) are equal for every scalar polynomial /(x). and adj (xl By the argument above we have Theorem 7. of the elements of B(x) is unity so that if B(x) = Q(x)q(x) for a monic scalar polynomial = 1.c. A) is clearly the polynomial d n i(x) defined f or xl A and d n (x)= \xl A \. 8.POLYNOMIALS WITH MATRIC COEFFICIENTS If 101 any nrowed square matrix. But then adj (x/ A) =d n _i(x)B(x)._iA + aj . . By Theorem 6 the matrix A is a root of /(x) if and A as either a right.A Then xl the elements of adj (xl  /. + a. The remark just Theorem trix of A. . clearly. and the corresponding equation /(x) = A to obtain the characteristic equation of A.27) to xl (19) (xl  A)[adj (xl  A)] = [adj (xl  A)](xl  A) = x/. the scalar poly f(x) = \xl  A\ I the characteristic function of A.
(22) c \xIA\ \Q(x)\ in F. A"* it is = o^ (A^ l + M. .and0(A) = Osothatr(A) = 0. distinct roots of g\(x) as well as of teristic roots F is a root of But A in any field K conwe have already seen that if F is \ xl  A if . gn ] for elementary matrices P(x) and Q(x). \A \ ^ 2 0. Moreover. r  \ (21) P(x) (si  A) .r(z) = 0. A A~ = A. If. then . from which g(x) = h(x) as desired. . the elementary divisors of xl Then the c. then/(0) = A if 7 \ = (l) A /. . Q(x) = diag {jfi. . fc . d = gl . . so that \A\ (23) = (l) nan  It follows that. . q(x) = 1. 6m _i/ (24) is a nonzero matrix with the property AG = GA = . But&(A) = 0. . A are the field of all complex numbers. Hence g(x) = h(x)q(x). By Theorem 6 we have h(x) = (xl A)Q(x). . and d  = Hence cd = 1. we have m > 1.gn for c = P (a:) 9. Since A 9* 0.A. But then. It follows that Theorem jPAe characteristic function of a square matrix A is /&# every root of \xl in fact taining g\(x). This result implies that \xl A\ is the product of g\(x) by divisors gi(x) of g\(x}. \ They are called the charac of A. . In closing this section we note that (x we  write f(x) = \xl n  A\ / = + aa". l M 2 . The uniqueness in Theorem 4 then (xl A)Q(x)q(x) states that B(x) = Q(x)q(x). and we have proved product by I o/ tfta product of the invariant factors of the characteristic matrix of A.102 INTRODUCTION TO ALGEBRAIC THEORIES is For if h(x) the minimum function of A we may write g(x) = and h(x)q(x) + r(x) for scalar polynomials h(x) r(x) such that the degree of r(x) is less than that of h(x). We see now that xl A is a monic polynomial of degree n and is not = m = n in (9). + a*) /. then. if \A\ =0. and since g(x) and h(x) are monic so is q(x).. . as we have already ob served in an earlier discussion. + an! /) . . and zero. and by (20) we have g(x) = = (x/ 4)5(z). are the whose product is \xl c/)*w polynomials (x A\.+ 1 . then ~ A~~ does not exist. . 
A 7* and gi(x) = x m + bixm + + m the polynomial gf(a:) = g\(x)I can be the minimum function of A only if ~ b m = 0. and G = A m + m~ + + and hence l obvious that l ..+ l 1 .
EXERCISES

1. When is the minimum function of a matrix linear?

2. What are the minimum functions of the following matrices?

a) ...   b) ...   c) ...

3. Compute the characteristic functions of the following matrices.

a) ...   b) ...   c) ...

4. Let A = diag {A_1, A_2}, where A_1 and A_2 are square matrices of m and n rows, respectively. Show that if f(x) is any polynomial and we define f_t(x) = f(x)I_t for every t, then f_{m+n}(A) = diag {f_m(A_1), f_n(A_2)}. Hint: Prove by induction that A^k = diag {A_1^k, A_2^k}.

5. Let A have the form of Ex. 4, and let g(x)I_{m+n}, g_1(x)I_m, g_2(x)I_n be the respective minimum functions of A, A_1, and A_2. Prove that g(x) is the least common multiple of g_1(x) and g_2(x).

6. Apply Ex. 4 and 5 in the case where A_1 is nonsingular and A_2 = 0.

7. It may be shown that the characteristic function f(x) = [x^n - a_1 x^{n-1} + ... + (-1)^i a_i x^{n-i} + ... + (-1)^n a_n] I of an n-rowed square matrix A has the property that a_i is the sum of all i-rowed principal minors of A. Verify this for the matrices of Ex. 3.

5. Similarity of square matrices. We have defined two n-rowed square matrices A and B with elements in a field F to be similar in F if there exists a nonsingular matrix P with elements in F such that PAP^{-1} = B. The principal result on the similarity of square matrices is then given by

Theorem 10. Two matrices are similar in F if and only if their characteristic matrices have the same invariant factors.

For if PAP^{-1} = B, then P(xI - A)P^{-1} = xPIP^{-1} - PAP^{-1} = xI - B. But P has elements in F and |P| ≠ 0, so that P is elementary, and xI - A and xI - B are equivalent in F[x] and have the same invariant factors.

Conversely, let xI - A and xI - B have the same invariant factors, so that

(25)  P(x)(xI - A)Q(x) = xI - B

for elementary matrices P(x) and Q(x). We define

(26)  P_0 = P_L(B) ,   Q_0 = Q_R(B)

as in Theorem 5 and have

(27)  P(x) = (xI - B)P_1(x) + P_0 ,   Q(x) = Q_1(x)(xI - B) + Q_0 .

We now use (25) and the fact that P(x) and Q(x) are elementary to write [P(x)]^{-1} = C(x) for a matric polynomial C(x), so that

(28)  (xI - A)Q(x) = C(x)(xI - B) .

Substituting (27) in (25) and using (28), we obtain

xI - B = [(xI - B)P_1(x) + P_0](xI - A)Q(x)
       = (xI - B)P_1(x)C(x)(xI - B) + P_0(xI - A)[Q_1(x)(xI - B) + Q_0] ,

so that

(29)  P_0(xI - A)Q_0 = (xI - B) - R(x)(xI - B) ,   R(x) = (xI - B)P_1(x)C(x) + P_0(xI - A)Q_1(x) .

But the degree in x of the left member of (29) is at most one, while by Lemma 4 the degree in x of the right member is at least two unless

(30)  R(x) = 0 .

Hence R(x) = 0 and P_0(xI - A)Q_0 = xI - B. Comparing the coefficients of x, we obtain P_0 I Q_0 = I, so that P_0 and Q_0 are nonsingular and Q_0 = P_0^{-1}. It follows that P_0 A Q_0 = B, that is, P_0 A P_0^{-1} = B, as desired.

Observe that the degree of |xI - A| is n, and hence, if n_i is the degree of the ith nontrivial invariant factor g_i(x), the property g_1(x) ... g_t(x) = |xI - A| of Theorem 9 implies that

(31)  n_1 + n_2 + ... + n_t = n ,   n_1 ≥ n_2 ≥ ... ≥ n_t ≥ 1 .

Obviously, this is an important restriction on the possible degrees of the invariant factors of the characteristic matrix of an n-rowed square matrix.
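The necessity half of Theorem 10 is easy to check in an example: if B = PAP^{-1}, the characteristic matrices xI - A and xI - B share their greatest common divisors d_k and hence their invariant factors (8). The following sketch is an editorial addition (assuming Python with SymPy; A and P are hypothetical).

```python
from sympy import Matrix, symbols, eye, expand, gcd

x = symbols('x')

A = Matrix([[0, 1], [-2, 3]])     # hypothetical
P = Matrix([[1, 1], [0, 1]])      # nonsingular, elements in F
B = P * A * P.inv()

def d1_d2(M):
    # d_1 = g.c.d. of the elements, d_2 = the characteristic determinant
    xM = x*eye(2) - M
    entries = [xM[i, j] for i in range(2) for j in range(2)]
    d1 = entries[0]
    for e in entries[1:]:
        d1 = gcd(d1, e)
    return d1, expand(xM.det())

d1A, d2A = d1_d2(A)
d1B, d2B = d1_d2(B)
```
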
EXERCISES

1. What are all possible types of invariant factors of the characteristic matrices of square matrices having 1, 2, 3, or 4 rows? Give the possible elementary divisors of such matrices.

2. Use the proof of Theorem 10 to show that if A1, A2, B1, B2 are n-rowed square matrices such that A1 and B1 are nonsingular, then A1x + A2 and B1x + B2 are equivalent in F[x] if and only if there exist nonsingular matrices P and Q with elements in F such that PA1Q = B1 and PA2Q = B2. Hint: Take A = A1^(-1)A2, B = B1^(-1)B2.

3. Show that the hypothesis that A1 is nonsingular in Ex. 2 is essential by exhibiting matrix polynomials A1x + A2 and B1x + B2 which are equivalent in F[x] and for which the matrices P and Q of Ex. 2 do not exist. [The particular matrices of this exercise are not legible in this copy.]

8. Characteristic matrices with prescribed invariant factors. If g_1(x), ..., g_t(x) are the prescribed nontrivial invariant factors and n_i is the degree of g_i(x), then B = diag {B1, ..., Bt} will have xI - B with invariant factors g_1(x), ..., g_t(x) if B_i is an n_i-rowed square matrix such that the only nontrivial invariant factor of the characteristic matrix of B_i is g_i(x). For then xI - B = diag {xI_{n_1} - B1, ..., xI_{n_t} - Bt} is equivalent in F[x] to diag {G1, ..., Gt}, where G_i = diag {g_i(x), 1, ..., 1}, and we conclude that xI - B has the prescribed invariant factors. Thus the problem of constructing an n-rowed square matrix whose characteristic matrix has prescribed invariant factors is completely solved by the result we state as

Theorem 9. Let

(32) g(x) = x^n - (b_1 x^(n-1) + ... + b_{n-1} x + b_n)

and A be the n-rowed square matrix

(33)
    0    1        0     ...   0
    0    0        1     ...   0
    .    .        .           .
    0    0        0     ...   1
    b_n  b_{n-1}  b_{n-2} ... b_1

Then g(x) is both the characteristic and the minimum function of A; that is, g(x) is the only nontrivial invariant factor of xI - A.
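Theorem 9 says in particular that the matrix (33) is annihilated by its own g(x). A Python sketch, not part of the original text, with the coefficients b1 = 2, b2 = 3, b3 = 5 chosen only as an illustration:

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scal(c, X):
    return [[c * v for v in row] for row in X]

b1, b2, b3 = 2, 3, 5
A = [[0, 1, 0],
     [0, 0, 1],
     [b3, b2, b1]]            # the last row holds b_n, ..., b_1 as in (33)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A2 = mat_mul(A, A)
A3 = mat_mul(A2, A)
# g(A) = A^3 - b1*A^2 - b2*A - b3*I should be the zero matrix.
gA = mat_add(mat_add(A3, scal(-b1, A2)), mat_add(scal(-b2, A), scal(-b3, I)))
is_zero = all(v == 0 for row in gA for v in row)
```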
Observe first that d_{n-1}(x) = 1 for the matrix (33). For the complementary submatrix of the element b_n, in the nth row and first column, is an (n-1)-rowed square triangular matrix with diagonal elements all -1, and its determinant is (-1)^(n-1); thus the cofactor of b_n in xI - A is (-1)^(n+1)(-1)^(n-1) = 1, a nonzero (n-1)-rowed minor, and hence d_{n-1}(x) = 1. It remains to prove that d_n(x) = |xI - A| = g(x). This is true if n = 1, since then A = A1 = (b_1). Let it be true for matrices A_{n-1} of the form (33) and of n - 1 rows, so that the complementary minor of the element x in the first row and column of xI - A is x^(n-1) - (b_1 x^(n-2) + ... + b_{n-1}). We now expand |xI - A| according to its first column and obtain

|xI - A| = x[x^(n-1) - (b_1 x^(n-2) + ... + b_{n-1})] - b_n = g(x),

as desired, the cofactor of b_n being 1. This proves our theorem.

The construction of square matrices whose characteristic matrices have prescribed elementary divisors also has a simple solution. Let c be a complex number and A be the n-rowed square matrix

(34)
    c 0 0 ... 0 0
    1 c 0 ... 0 0
    0 1 c ... 0 0
    . . .     . .
    0 0 0 ... 1 c

Then xI - A is a triangular matrix with diagonal elements all x - c, and |xI - A| = (x - c)^n. The complementary minor of the element in the first row and nth column of xI - A is triangular with diagonal elements all -1, so that d_{n-1}(x) = 1 and d_n(x) = (x - c)^n is the only nontrivial invariant factor of xI - A.

Thus, if c_1, ..., c_t are complex numbers and n_1, ..., n_t are positive integers, we construct matrices A_j of the form (34) for c = c_j and with n_j rows. The matrix A = diag {A_1, ..., A_t} then has n = n_1 + ... + n_t rows, and its characteristic matrix xI - A = diag {B_1, ..., B_t}, where B_i = xI_{n_i} - A_i is equivalent in F[x] to diag {f_i, 1, ..., 1} with f_i = (x - c_i)^{n_i}. But then xI - A is equivalent in F[x] to diag {f_1, ..., f_t, 1, ..., 1}, and by Theorem 3 the nontrivial elementary divisors of xI - A are f_1, ..., f_t, as desired.
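That (x - c)^n is also the minimum function of the block (34) amounts to (A - cI)^(n-1) being nonzero while (A - cI)^n = 0. A Python check, ours rather than the book's, with the illustrative values n = 4, c = 3:

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, c = 4, 3
# c on the diagonal, 1 directly below the diagonal, as in (34)
A = [[c if i == j else (1 if i == j + 1 else 0) for j in range(n)]
     for i in range(n)]
N = [[A[i][j] - (c if i == j else 0) for j in range(n)] for i in range(n)]

P = [[int(i == j) for j in range(n)] for i in range(n)]
powers = []                 # is N^k nonzero, for k = 1, ..., n?
for _ in range(n):
    P = mat_mul(P, N)
    powers.append(any(v != 0 for row in P for v in row))
```

The expected pattern is nonzero for k = 1, ..., n-1 and zero for k = n.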
EXERCISES

1. Compute the invariant factors and elementary divisors of the characteristic matrices of the following matrices. [The matrices of this exercise are not legible in this copy.]

2. Find a matrix B = diag {B1, ..., Bt} similar in F to each of the matrices of Ex. 1, where B_i has the form (33) and the characteristic function of B_i is the ith nontrivial invariant factor of A.

3. Solve Ex. 2 with B_i of the form (34), the characteristic function of B_i now the ith nontrivial elementary divisor of A.

9. Additional topics. There are many important topics of the theory of matrices other than those we have discussed, and we leave their exposition to more advanced texts. Let us mention some of these topics here.

The quantities of the field K of all complex numbers have the form c = a + bi (a, b in R, i^2 = -1), where R is the field of all real numbers, a subfield of K. The complex conjugate of c = a + bi is c̄ = a - bi, and the correspondence c <-> c̄ defines a self-equivalence or automorphism of K. This automorphism of K leaves the elements of its subfield R unaltered, that is, ā = a for every real a. We then may call it an automorphism over R.
If A is any m by n matrix with elements a_ij in K, we define Ā to be the m by n matrix whose element in its ith row and jth column is ā_ij. It is a simple matter to verify that, for every A and B,

(A + B)‾ = Ā + B̄, (AB)‾ = Ā B̄, (Ā)' = (A')‾.

Two matrices A and B are said to be conjunctive in K if there exists a nonsingular matrix P with elements in K such that P̄'AP = B. We now define a matrix to be Hermitian if Ā' = A, skew-Hermitian if Ā' = -A. The results of this theory are almost exactly the same as those of the theory of congruent matrices.*

Similarly, we call a matrix P with complex elements such that PP̄' = P̄'P = I a unitary matrix. Then we say that two Hermitian matrices A and B are unitary equivalent if PAP̄' = B, where P is a unitary matrix. But then P̄' = P^(-1), so that B is both similar and conjunctive to A. Analogously, two symmetric matrices A and B with elements in a field F are said to be orthogonally equivalent in F if there exists an orthogonal matrix P with elements in F such that PAP' = B. But PP' = P'P = I, so that B = PAP^(-1) is both similar and congruent to A. Both of these concepts may be shown to be special cases of a more general concept. We shall not state any of the results here.

Finally, let us mention the topic of the equivalence and congruence of pairs of matrices. Let A, B, C, D be matrices of the same numbers of rows and columns. Then we call the pairs A, B and C, D equivalent pairs if there exist nonsingular square matrices P and Q such that simultaneously PAQ = C and PBQ = D. Analogously, if A, B, C, D are n-rowed square matrices, we call A, B and C, D congruent pairs if simultaneously PAP' = C and PBP' = D for a nonsingular matrix P. It is, in fact, possible to obtain a general theory including both of the theories above as special cases. References to treatments of the topics mentioned above, as well as others, will be found in the final bibliographical section of Chapter VI.

* In this connection see Exs. 3 and 4 of Section 5.
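The conjugate-transpose rules above are easy to check numerically. In the Python sketch below (our illustration; the entries and the particular unitary matrix are arbitrary choices, not the book's), `conj_t` forms the conjugate transpose Ā', and the product rule takes the reversed form (AB)̄' = B̄'Ā':

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def conj_t(X):
    """Conjugate transpose: entry (j, i) is the conjugate of X[i][j]."""
    return [[X[i][j].conjugate() for i in range(len(X))]
            for j in range(len(X[0]))]

A = [[1 + 2j, 0], [3j, 1 - 1j]]
B = [[2, 1j], [1, 4 - 1j]]
rule_holds = (conj_t(mat_mul(A, B)) == mat_mul(conj_t(B), conj_t(A)))

s = 2 ** -0.5
P = [[s, s * 1j], [s * 1j, s]]          # a unitary matrix: P conj(P)' = I
PPbar = mat_mul(P, conj_t(P))
is_unitary = all(abs(PPbar[i][j] - (1 if i == j else 0)) < 1e-12
                 for i in range(2) for j in range(2))
```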
CHAPTER VI

FUNDAMENTAL CONCEPTS

1. Groups. The objective of our exposition has now been attained, for our study of matrices with constant elements led us naturally to introduce the concepts of field, linear space, correspondence, and equivalence. These pages were written in order to bridge the gap between the intuitive function-theoretic study of algebra, as presented in the usual course on the theory of equations, and the abstract approach of the author's Modern Higher Algebra. However, we believe it desirable to precede the serious study of material such as that of the first two chapters of the Modern Higher Algebra by a brief discussion, without proofs (or exercises), of this subject matter. We shall give this discussion here and shall therewith not only leave our readers with an acquaintance with the basic concepts of algebraic theory but with a lead into those branches of mathematics called the Theory of Numbers and the Theory of Algebraic Numbers. We are ready now to begin the study of abstract algebra.

Our first new concept is that of a set G of elements closed with respect to a single operation. It should be clear that if we do not state the nature either of the elements of G or of the operation, it will not matter if we indicate the operation as multiplication. If we wish later to consider special sets of elements with specified operations, we shall then replace "product" in our definition by the operation desired. We wish to define the concept that G forms a group with respect to this operation. Thus we shall state the

DEFINITION. A set G of elements a, b, c, ... is said to form a group with respect to multiplication if

I. The product ab is in G.
II. The associative law a(bc) = (ab)c holds.
III. There exist solutions x and y in G of the equations ax = b, ya = b.

The reader is already familiar with the groups (with respect to ordinary multiplication) of all nonzero rational numbers, all nonzero real numbers,
and all nonzero elements of any field. These are all examples of groups G such that for every a and b of G we have

IV. ab = ba.

Such groups are called commutative or abelian groups. An example of a nonabelian (noncommutative) group is the group, with respect to matrix multiplication, of all nonsingular n-rowed square matrices with elements in a field. A simple example of a finite abelian group is the set of all nth roots of unity.

The number of elements in a group G is called its order. This number is either infinity, and we call G an infinite group, or it is a finite number n, and we call G a finite group of order n.

LEMMA 1. Every group G contains a unique element e, called its identity element, such that for every a of G

(1) ae = ea = a.

Moreover, every element a of G has a unique inverse a^(-1) in G such that

(2) aa^(-1) = a^(-1)a = e.

Then the solutions of the equations of Axiom III are the unique elements

(3) x = a^(-1)b, y = ba^(-1).

The concept of equivalence of two mathematical systems of the same kind, as well as the concept of subsystem (e.g., subgroup of a group, subfield of a field, linear subspace over F of a linear space over F), are two concepts of evident fundamental importance which are given in algebraic theory whenever any new mathematical system is defined. The equivalence of two groups is defined as an instance of the general definition of equivalence which we gave in Section 4. A set H of elements of a group G is called a subgroup of G if the product of any two elements of H is in H and H contains the identity element of G and the inverse element h^(-1) of every h of H. Then H forms a group with respect to the same operation as does G.

For finite groups we have the important result, which we shall state without proof.*

Let H be a subgroup of a finite group G. Then the order of H divides the order of G.

The reader may verify that an example of a finite nonabelian group

* The proof is given in Chapter VI of my Modern Higher Algebra.
. Tften an = e /or et. f T ' A / 0\ (4) Mo. a n = m ) q = eq = e. Moreover. a m = e. it can be shown that a* = e if where e is m and only since if m divides m (a t. a'TO 1 1 the identity element of [a]. In any field and in the set of all nrowed square matThus we have said that the set of all nonzero elements of a field and the set of all nonsingular nrowed square matrices form multiplicative groups. rices there are two operations. and [a] consists of the m powers e. The order m is of an element of a finite group G divides the order of G. 2. Its order is called the order of the element a and it can be shown that either a has finite all the powers (5) are distinct and a has distinct infinite order. Then the order of a is the least integer t such that a* = e. Additive groups. AB. or order ra. the elements of any linear space. D B /O 1\ o)' (i [AB. all form additive groups. B. The reader will observe that the axioms for an additive abelian group G are those axioms . .FUNDAMENTAL CONCEPTS is 111 given by the quaternion group of order 8 whose elements are the tworowed matrices with complex elements. that is. . a. or 1 a2 cr2 . We therefore have LEMMA (8) 2. and we may apply Lemma 1. (7) a2 .en/ a of G. . groups with respect to the operation of addition as defined for each of these mathematical systems. Thus n = mq. the order of the subgroup [a]. a. of an element a of a group forms a subgroup of G which we shall desig nate by (6) [a] and shall call the cyclic group generated by a. But the elements of any field. .)' i. . the set of all m by n matrices with elements in a field. Let e 66 2/ie identity element of a group G o/ order n. set of all powers a = G e. . . The (5) A. .
for addition which we gave in Section 3.12 for a field. Additive groups are normally assumed to be abelian; that is, the use of addition to designate the operation with respect to which a group G is defined is usually taken to connote the fact that G is abelian. The identity element of an additive group is usually called its zero element and is designated by 0, so that a + 0 = 0 + a = a. The inverse with respect to addition of a is designated by -a and is such that

(9) a + (-a) = (-a) + a = 0.

The solutions of the equations a + x = b, y + a = b of our Axiom III are

(10) x = (-a) + b, y = b + (-a).

When G is abelian we have x = y, and we designate their common value by b - a. Thus we define the operation of subtraction in terms of that of addition.

In a cyclic additive group [a] the elements are always designated by

(11) 0, a, -a, 2a, -(2a), ...,

where ma does not mean a product but means the sum a + a + ... + a with m summands, and we define (-m)a = -(ma) for any positive integer m. If [a] is a finite group of order m, the elements of [a] are 0, a, 2a, ..., (m - 1)a, and m is the least positive integer such that the sum of m summands all equal to a is zero. If [a] is infinite, then it may be seen that na = qa for integers n and q if and only if n and q are equal.

3. Rings. The set consisting of all n-rowed square matrices with elements in a field F is an instance of a certain type of mathematical system called a ring. Many other systems which are known to the reader are rings, and we shall make the

DEFINITION. A ring R is an additive abelian group of at least two distinct elements such that, for every a, b, c of R:

I. The product ab is in R.
II. The associative law a(bc) = (ab)c holds.
III. The distributive laws a(b + c) = ab + ac, (b + c)a = ba + ca hold.
We leave to the reader the explicit formulation of the definitions of subring and equivalence of rings. They may be found in the first chapter of the Modern Higher Algebra.

A ring R is said to be commutative if ab = ba for every a and b of R. The ring of all ordinary integers is a commutative ring; the ring of all n-rowed square matrices with elements in a field F is a noncommutative ring. A ring is said to possess a unity element e if e in R has the property ea = ae = a for every a of R. The element e then has the properties of the ordinary number 1 and is usually designated by that symbol. The unity element of the ring of n-rowed square matrices is the n-rowed identity matrix, and the unity element was always the number 1 in the other rings we have studied. However, the set of all two-rowed square matrices of the form

(12)
    0 r
    0 0

with r rational, may easily be seen to be a ring without a unity element. The zero element of a ring R is its identity element with respect to addition.

Rings may now be seen to be mathematical systems R whose elements have all the ordinary properties of numbers except that possibly the products ab and ba might be different elements of R, and the equations ax = b, ya = b might not have solutions in R if a ≠ 0. A ring may also contain divisors of zero, that is, elements a ≠ 0, c ≠ 0 such that ac = 0. The nonzero elements of the ring (12) are divisors of zero, and, in fact, the ring of all n-rowed square matrices has already been seen to have such elements as well as the other properties just mentioned.

If R is any ring, we shall designate by R* the set of all nonzero elements of R. Then we shall call R a division ring if R* is a multiplicative group. This occurs clearly if and only if the equations ax = b, ya = b have solutions in R* for every a and b of R*. The set of all n-rowed square matrices is not a division ring. The reader should verify that all nonmodular fields are rings. Observe that by making the hypothesis that R contains at least two elements we exclude the mathematical system consisting of zero alone from the systems we have called rings.

4. Abstract fields. Let c and d range
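The ring (12) makes a compact machine example: every product of two of its elements is the zero matrix, which exhibits the divisors of zero and shows that no unity element can exist. A Python sketch (ours, for illustration):

```python
from fractions import Fraction

def elem(r):
    """The matrix (0 r; 0 0) of the ring (12)."""
    return ((0, Fraction(r)), (0, 0))

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

x, y = elem(2), elem(Fraction(1, 3))
zero = elem(0)
product_is_zero = (mul(x, y) == zero)   # nonzero x, y with xy = 0

# No e with e*x = x can exist, since every product in the ring is zero.
no_unity = all(mul(elem(r), x) != x for r in range(-5, 6))
```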
over all complex numbers, and let c̄ and d̄ be the complex conjugates of c and d. Then the set Q of all two-rowed matrices

(13)
    A =  c   d
        -d̄   c̄

is a noncommutative division ring. For every A ≠ 0 is nonsingular, since A ≠ 0 implies that |A| = cc̄ + dd̄ > 0. The ring Q is a linear space of order 4 over the field of all real numbers and has the matrices I, A, B, AB of (4) as a basis over that field. It is usually called the ring of real quaternions.

We now define fields in general.

DEFINITION. A field is a ring F such that F* is a multiplicative abelian group.

The identity element of the multiplicative group F* is then the unity element 1 of F. The whole set F is an additive group with identity element 0, and the unity element 1 generates an additive cyclic subgroup [1]. If this cyclic group has infinite order, it may be shown to be equivalent to the set of all ordinary integers. But F is closed with respect to rational operations, and [1] then generates a subfield of F equivalent to the field of all rational numbers; every subfield of F, being itself closed under rational operations, contains this subfield. We call all such fields nonmodular fields. Until the present we have restricted the term "field" to mean a field containing the field of all rational numbers, that is, a nonmodular field.

The group [1] might, however, be a finite group. Its elements are then the sums

(14) 0, 1, 1 + 1, ..., with p summands at most,

where p is the order of this group, and we have the property that the sum 1 + 1 + ... + 1 with p summands is zero. It is easy to show that p is a prime. It follows that if a is in F, then the sum a + a + ... + a with p summands is equal to the product (1 + 1 + ... + 1)a = 0. We call all such fields modular fields of characteristic p. It may easily be shown that the characteristic of any subfield of a field F is the same as that of F; in fact, every subfield of F contains the subfield generated by the unity element 1 of F.

5. Integral domains. A commutative ring with a unity element and without divisors of zero is called an integral domain. Any field forms a somewhat trivial example of an integral domain. Less trivial examples are the set F[x] of all polynomials in x and the set F[x1, ..., xq] of all polynomials in x1, ..., xq, with coefficients (in both cases) in a field F, as well as the set of all ordinary
integers.
The set of all integers may be extended to the field of all rational numbers by adjoining quotients a/b, b ≠ 0. In a similar fashion we adjoin quotients a/b for a and b ≠ 0 in J to imbed any integral domain J in a field.

When we studied polynomials in Chapter I we studied questions about divisibility and greatest common divisor. We may also study such questions about arbitrary integral domains. Let us then formulate such a study.

Let J be an integral domain and a and b be in J. Then we say that a is divisible by b (or that b divides a) if there exists an element c in J such that a = bc. If u in J divides its unity element 1, then u is called a unit of J. Thus u is a unit of J if it has an inverse in J. The inverse of a unit is clearly also a unit. Two quantities b and b' are said to be associated if each divides the other. Then b and b' are associated if and only if b' = bu for a unit u. Moreover, if b divides a so does every associate of b. Every unit of J divides every a of J. Thus we are led to one of the most important problems about an integral domain, that of determining its units.

A quantity p of an integral domain J is called a prime or irreducible quantity of J if p ≠ 0, p is not a unit of J, and the only divisors of p in J are units and associates of p. Every associate of a prime is a prime. A composite quantity of J is an a ≠ 0 which is neither a prime nor a unit of J. It is natural then to ask whether or not every composite a of J may be written in the form

(15) a = p1 ... pr

for a finite number of primes p_i of J. We may also ask if it is also true that whenever a = q1 ... qs for primes q_i, then necessarily s = r and the q_i are associates of the p_j in some order. When these properties hold we may call J a unique factorization integral domain. The reader is familiar with the fact that the set of all ordinary integers is such an integral domain. This fact, as well as the corresponding property for the set of all polynomials in x1, ..., xn with coefficients in a field F, are derived in Chapter II of the author's Modern Higher Algebra.

The problem of determining a g.c.d. (greatest common divisor) of two elements a and b of a unique factorization domain is solvable in terms of the factorizations of a and b. However, we saw that in the case of the set F[x] the g.c.d. may be found by a Euclidean process. Let us then formulate the problem regarding g.c.d.'s. We define a g.c.d. of two elements a and b, not both zero, of J to be a common divisor d of a and b such that every common divisor of a and b divides d. Then all g.c.d.'s of a and b are associates. We call a and b relatively prime if their only common divisors are units of J;
that is, they have the unity element of J as a g.c.d. We now see that one of the questions regarding an integral domain J is the question as to whether every a and b of J have a g.c.d. Moreover, we must ask whether there exist x and y in J such that

d = ax + by

is a common divisor and hence a g.c.d. of a and b, and finally whether or not d may be found by the use of a Euclidean Division Algorithm.
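For the ordinary integers (and for F[x]) all three questions have affirmative answers, as Theorems 1 and 2 below show. The Euclidean process that produces d = ax + by can be sketched in Python (our illustration, normalizing d to the positive associate):

```python
def ext_gcd(a, b):
    """Return (d, x, y) with d = a*x + b*y, d the positive g.c.d."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r     # Euclidean remainder step
        old_x, x = x, old_x - q * x     # carry the Bezout coefficients
        old_y, y = y, old_y - q * y
    if old_r < 0:                       # replace d by its positive associate
        old_r, old_x, old_y = -old_r, -old_x, -old_y
    return old_r, old_x, old_y

d, x, y = ext_gcd(252, 198)             # g.c.d. 18 = 252*x + 198*y
```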
6. Ideals and residue class rings. A subset M of a ring R is called an ideal of R if M contains g - h, ag, and ga for every g and h of M and a of R. Then either M consists of zero alone and will be called the zero ideal of R, or M may be seen to be a subring of R with the property that the products am, ma of any element m of M and any element a of R are in M.

If H is any set of elements of a ring R, we may designate by {H} the set consisting of all finite sums of elements of the form xmy for x and y in R, m in H. It is easy to show that {H} is an ideal. If H consists of finitely many elements m1, ..., mt of R, we write {H} = {m1, ..., mt}, and if H consists of only one element m of R we write

(16) M = {m}

for the corresponding ideal. This most important type of an ideal is called a principal ideal. It consists of all finite sums of elements of the form amb for a and b in R. When R is a commutative ring, M = {m} consists of all products am for a in R.

The ring R itself is an ideal of R called the unit ideal. This term is derived from the fact that in the case where R has a unity quantity 1, R = {1}. Evidently {0} is the zero ideal.

Let M be an ideal of R and define

(17) a ≡ b (M)

(read: a congruent b modulo M) if a - b is in M. We may then define what we shall call a residue class ā for every a of R. We put into the class ā every b in R such that a ≡ b (M). Clearly a ≡ b (M) if and only if b ≡ a (M). Moreover, if a - b and b - c are in M, then (a - b) + (b - c) = a - c is in M. It follows that ā = b̄ (that is, ā and b̄ are the same residue class) if and only if b is in ā.

Let us now define the sum and product of residue classes by

(18) ā + b̄ = (a + b)‾, ā b̄ = (ab)‾.
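In the ring of integers with M = {m}, the definitions (18) amount to arithmetic on remainders, and the essential point, checked next in a Python sketch of ours, is that the result does not depend on which representatives of the classes are used:

```python
# Residue classes modulo M = {m}: sums and products of classes are
# independent of the representatives chosen.
m = 12
a, b = 7, 10
a1, b1 = a + 5 * m, b - 3 * m          # other members of the same classes
sum_ok = ((a + b) % m == (a1 + b1) % m)
prod_ok = ((a * b) % m == (a1 * b1) % m)
```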
It may be verified readily that if a1 is in ā and b1 is in b̄, then (a1 + b1)‾ = (a + b)‾ and (a1b1)‾ = (ab)‾. It follows that our definitions of sum and product of residue classes are unique. Then it is easy to show that if M is not R the set of all the residue classes forms a ring with respect to the operations just defined. We call this ring the residue class or difference ring R - M (read: R minus M). When M = R the residue classes are all the zero class, and we have not called this set a ring.

When the residue class ring R - M is an integral domain, we call the ideal M a prime ideal of R. We call an ideal M of R divisorless if R - M is a field. These concepts coincide in the case where R - M has only a finite number of elements, since it may be shown that any integral domain with a finite number of elements is a field. This coincidence occurs in most of the topics of mathematics (in particular the Theory of Algebraic Numbers) where ideals are studied.
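For the ring of integers the coincidence is easy to see by machine: E - {m} is a field exactly when m is a prime, while a composite m yields divisors of zero. A brute-force Python check (our illustration):

```python
def is_field(m):
    """Does every nonzero residue class mod m have an inverse?"""
    return all(any(a * b % m == 1 for b in range(1, m))
               for a in range(1, m))

field_flags = {m: is_field(m) for m in (2, 3, 5, 6, 7, 8, 9, 11)}
zero_divisors_mod_6 = ((2 * 3) % 6 == 0)   # 2 and 3 are divisors of zero
```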
7. The ring of ordinary integers. The set of all ordinary integers is a ring which we shall designate henceforth by E. It is easily seen to be an integral domain, and we shall first prove that it has the property of unique factorization. We observe that the units of E are those ordinary integers u such that uv = 1 for an integer v. But then 1 and -1 are the only units of E. Thus the primes of E are the ordinary positive prime integers 2, 3, 5, etc., and their associates -2, -3, -5, etc. Every integer a is associated with its absolute value |a| ≥ 0.

We note now that if b is any integer not zero, the multiples

(19) 0, ±b, ±2b, ...

are clearly a set of integers one of which exceeds* any given integer a. Then let (g + 1)|b| be the least multiple of |b| which exceeds a, so that (g + 1)|b| > a, g|b| ≤ a, and a - g|b| = r with 0 ≤ r < |b|. We put q = -g if b < 0, q = g otherwise, and have g|b| = qb, a = bq + r. If also a = bq1 + r1 with 0 ≤ r1 < |b|, then b(q - q1) = r1 - r is divisible by b, whereas |r1 - r| < |b|. This is possible only if r1 = r and q1 = q. We have thus proved the Division Algorithm for E, a result we state as

Theorem 1. Let a and b ≠ 0 be integers. Then there exist unique integers q and r such that a = bq + r, 0 ≤ r < |b|.

We now leave for the reader the application of the Euclidean process, which we used to prove Theorem 1.5, to our present case. The process yields

* We use the concept of magnitude of integers throughout our study of the ring E.
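Theorem 1 requires 0 ≤ r < |b| even when a or b is negative, which differs slightly from the built-in behavior of some languages. A Python sketch of ours that normalizes the remainder accordingly:

```python
def div_alg(a, b):
    """Theorem 1: q, r with a = b*q + r and 0 <= r < |b|."""
    q, r = divmod(a, b)      # Python already gives 0 <= r < b when b > 0
    if r < 0:                # for b < 0, shift into the required range
        q, r = q + 1, r - b
    return q, r

examples = {(a, b): div_alg(a, b)
            for a in (17, -17) for b in (5, -5)}
```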
We may now conclude our proof of what mental Theorem of Arithmetic. Theorem Let m be an integer. and we have gi. l stages the sequence of decreasing positive integers a .
does not divide q1 and by Theorem 4 must divide q2 ... qs. By our hypothesis it then does not divide q2 and must divide q3 ... qs. This finite process leads to a contradiction unless p1 is equal to some q_j, and we may arrange the q_j so that p1 = q1. Then a2 = q2 ... qs, and the proof just completed may be repeated; we may take p2 = q2 and, proceeding similarly, we ultimately obtain r = s, p_i = q_i for an appropriate ordering* of the q_i.

8. The ideals of the ring of integers. We have proved

Theorem 7. The ideals of E are principal ideals {m}.

For let M be a nonzero ideal of the ring E of all integers and m be the least positive integer in M. Then M contains every element qm of the principal ideal {m}. If h is in M, we may use Theorem 1 to write h = mq + r, 0 ≤ r < m. But mq and h are in M, so that r = h - mq is in M. Our definition of m implies that r = 0, and thus every element of M is in {m}, M = {m}.

The residue classes of E modulo {m} defined for m > 1 are now the classes 0̄, 1̄, ..., (m - 1)‾. For if a is any integer, we have a = mq + r for r = 0, 1, ..., m - 1; then a - r is in {m} and ā = r̄. The residue class ring E - {m} defined for m > 1 is a ring whose zero element is the class 0̄ of all integers divisible by m and whose unity element is the class 1̄.

If m is a composite integer cd with m > c > 1, m > d > 1, then c̄ and d̄ are both not the zero class, whereas c̄ d̄ = (cd)‾ = m̄ = 0̄. Thus E - {m} has divisors of zero and is not an integral domain. We next have

Theorem 8. The residue classes

(22) ā, a prime to m,

form a multiplicative abelian group. For let a be prime to m. By Theorem 2 there exist integers c and d such that

(23) ac + md = 1.

If b is in ā, then b = a + mq and bc = ac + mqc = 1 + m(qc - d), so that b̄ c̄ = 1̄. Moreover, every element b of ā is prime to m, since a common divisor of b and m divides a = b - mq; thus ā consists of all integers whose remainder on division by m is an integer prime to m. The classes (22) are closed with respect to multiplication by Theorem 5, the class 1̄ is their identity element, and c̄ is the inverse of ā.

* We may order the p_i so that p1 ≤ p2 ≤ ... ≤ pr and similarly assume that q1 ≤ q2 ≤ ... ≤ qs. Then we obtain p_i = q_i for i = 1, ..., r.
If m is a prime p, every ā ≠ 0̄ is one of the residue classes 1̄, ..., (p - 1)‾ with a prime to p, and Theorem 8 states that every nonzero class has an inverse, so that E - {p} is a field. An ideal M ≠ E of E is thus a prime ideal if and only if M = {p} for a positive prime p, and every such ideal is a divisorless ideal. We have proved

Theorem 9. The ring E - {p} defined by a positive prime p is a field* of characteristic p.

We observe now that a ≡ b (M) for M = {m} means that a - b = mq, that is, a - b is divisible by m. Thus it is customary in the Theory of Numbers to write

(24) a ≡ b (mod m)

(read: a congruent b modulo m) if a - b is divisible by m. Hence if

(25) a ≡ b (mod m), c ≡ d (mod m)

hold, we have

(26) a + c ≡ b + d (mod m), ac ≡ bd (mod m).

For if ā = b̄ and c̄ = d̄, we will have ā + c̄ = b̄ + d̄ as well as ā c̄ = b̄ d̄. Thus the rules (26) for combining congruences are equivalent to the definitions of addition and multiplication in E - {m}.

Let φ(m) be the number of positive integers not greater than m and prime to m. We next state the number-theoretic consequence of Theorem 8 and Lemma 2 which is called Euler's theorem.

Theorem 10. Let a be prime to m. Then

(27) a^φ(m) ≡ 1 (mod m).

For φ(m) is clearly the order of the multiplicative group defined in Theorem 8, and our result then follows from Lemma 2.

We next have the Fermat theorem.

Theorem 11. Let p be a prime. Then

(28) a^p ≡ a (mod p)

for every integer a. For (28) holds if a is divisible by p. Otherwise a is prime to p, φ(p) = p - 1, and by Theorem 10 a^(p-1) - 1 is divisible by p. Thus a(a^(p-1) - 1) = a^p - a is also divisible by p.

* This field is equivalent to the subfield generated by the unity quantity of any field of characteristic p.
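Theorems 10 and 11 are easily confirmed by exhaustive machine check for small moduli. A Python sketch of ours, computing φ(m) by its definition:

```python
from math import gcd

def phi(m):
    """Count of positive integers not greater than m and prime to m."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

# Euler's theorem (27) for several composite moduli
euler_ok = all(pow(a, phi(m), m) == 1
               for m in (4, 9, 10, 12, 15)
               for a in range(1, m) if gcd(a, m) == 1)

# The Fermat theorem (28) for every integer a, including negatives
fermat_ok = all(pow(a, p, p) == a % p
                for p in (2, 3, 5, 7, 11, 13)
                for a in range(-10, 25))
```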
The nonzero elements of the field P = E − {p} form an abelian group P* of order p − 1. Then it may be shown that P* is a cyclic multiplicative group [r], where r is an integer such that 1, r, r², . . . , r^(p−2) form a complete set of nonzero residue classes modulo {p}. Such an integer r is called a primitive root modulo p. The elements of P* are the distinct roots of the equation x^(p−1) = 1 and are not all roots of any equation of lower degree. We then have

Theorem 12. Let p be a prime of the form 4n + 1. Then there exists an integer t such that

(29) t² ≡ −1 (mod p) .

For let r be a primitive root modulo p and put t = r^n. Then (t²)² = r^(4n) = r^(p−1) = 1, while t² = r^(2n) ≠ 1 since r has order p − 1 = 4n. But the only roots in the field P of the equation x² = 1 are 1 and −1, and hence t² = −1 as desired.

There is a number of other results on congruences which are corollaries of theorems on rings and fields. However, we shall not mention them here.

9. Quadratic extensions of a field. If F is a subfield of a field K which is a linear space u1F + . . . + unF over F, we say that K is a field of degree n over F. Clearly, if n = 1, the field K is F. We call K a quadratic, cubic, quartic, or quintic field over F according as n = 2, 3, 4, or 5.

Let n = 2, so that K has a basis u1, u2 over F. The theory of linear spaces of Chapter IV implies that u1 may be taken to be any nonzero element of K. Hence we may take u1 to be the unity element 1 of F. The quantities of K are then uniquely expressible in the form k = c1 + c2u2 for c1 and c2 in F, and k = k · 1 + 0 · u2 is in F if and only if c2 = 0.

We now say that a quantity u in K generates K over F if 1, u are a basis of K over F. Then u generates K over F if and only if u is not in F. For if 1, u are linearly dependent in F, we have a1 + a2u = 0 for a2 ≠ 0, and u = −a2⁻¹a1 is in F; while if u is in F, then 1, u do not form a basis of K over F.

The elements k of a quadratic field have the property that 1, k, k² are linearly dependent in F, so that c0k² + c1k + c2 = 0 for c0, c1, c2 not all zero in F. If k is not in F, then c0 cannot be zero, and k is a root of a monic polynomial of degree two with coefficients in F. If k is in F, then k is a root of the monic polynomial (x − k)² of degree two. Thus every element k of a quadratic field K is a root of an equation

(30) f(x, k) = x² − T(k)x + N(k) = 0

with

(31) T(k) and N(k) in F .
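Theorem 12 becomes effective as soon as a primitive root is found by search. A minimal Python sketch (primitive_root is our own naive helper) illustrates the argument for p = 13 = 4 · 3 + 1:

```python
# Illustrative sketch of Theorem 12: for p = 4n + 1, t = r^n satisfies
# t^2 ≡ -1 (mod p), where r is a primitive root modulo p.  The naive
# search helper primitive_root is our own construction.
def primitive_root(p):
    # r is a primitive root mod p when 1, r, ..., r^(p-2) exhaust all
    # nonzero residues, i.e. r has order p - 1
    for r in range(2, p):
        seen = {pow(r, e, p) for e in range(p - 1)}
        if len(seen) == p - 1:
            return r
    raise ValueError("no primitive root found")

p = 13                       # 13 = 4*3 + 1
r = primitive_root(p)
t = pow(r, (p - 1) // 4, p)  # t = r^n with n = (p - 1)/4
print(pow(t, 2, p) == p - 1)  # t^2 ≡ -1 (mod 13): True
```

For p = 13 the search returns r = 2, whose powers run through all twelve nonzero residues, and t = 2³ = 8 with 8² = 64 ≡ −1 (mod 13).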
Define a correspondence S on K to K by

(32) k = k1 + k2u → k^S = k1 + k2u^S

for all k1 and k2 in F, where u^S is in K. Then (32) is a one-to-one correspondence of K and itself if and only if u^S generates K over F. In either case S leaves the elements of F unaltered. If u^S = u, then S is the identity correspondence k ↔ k. If (32) is one-to-one and also

(33) (k + k0)^S = k^S + k0^S ,

(34) (kk0)^S = k^S k0^S ,

then S defines a self-equivalence of K and is called an automorphism of K over F.

If S and T are automorphisms over F of a field K of degree n over F, we define ST as the result k → (k^S)^T of applying first S and then T. It is easy to see that the set of all automorphisms over F of K is a group G with respect to the operation just defined. In case G has order equal to the degree n of K over F, it is called the Galois group of K over F, and that branch of algebra called the Galois Theory is concerned with the relative properties of K and G.

In our present case n = 2 and, if u generates K over F, we have u² − bu + c = 0 for b and c in F. Let S be an automorphism of K over F. Then (u^S)² = bu^S − c, that is, u^S is a root in K of the quadratic equation

(35) x² − bx + c = 0 .

But the quadratic equation (35) can have only the two roots u and b − u in a field, so that u^S = u or u^S = b − u. In the former case S is the identity automorphism. But b − u = u if and only if 2u = b, which is not possible unless K is a modular field in which u + u = 0, since u is not in F. Hence if K is nonmodular, the correspondence S defined by u^S = b − u is distinct from the identity. If k = k1 + k2u and k0 = k3 + k4u for k3 and k4 in F, we have (k + k0)^S = (k1 + k3) + (k2 + k4)u^S = k^S + k0^S. Also

kk0 = (k1 + k2u)(k3 + k4u) = k1k3 + (k1k4 + k2k3)u + k2k4u² ,

while u² = bu − c and (u^S)² = bu^S − c, so that

(kk0)^S = k1k3 + (k1k4 + k2k3)u^S + k2k4(u^S)² = k^S k0^S .

Thus S is an automorphism of K over F. Moreover, u^(S²) = (u^S)^S = (b − u)^S = b − (b − u) = u, so that S² is the identity automorphism. Hence, if K is a nonmodular quadratic field over F, the automorphism group of K over F is the cyclic group [S] of order 2 and is the Galois group of K over F.

We now define

(36) T(k) = k + k^S , N(k) = k k^S .

Both k + k^S and k k^S are unaltered by S and so are in F, and we propose to find the values of T(k) and N(k) as polynomials in b, c and the coordinates k1 and k2 in F of k = k1 + k2u. We have k^S = k1 + k2(b − u), and a simple computation gives

(37) T(k) = 2k1 + k2b , N(k) = k1² + k1k2b + k2²c .

The function T(k) is called the trace of k, and N(k) is called the norm of k. Note that the polynomial of (30) is

(38) f(x, k) = (x − k)(x − k^S) ,

and that if k is in F, then k^S = k, T(k) = 2k, and N(k) = k². Moreover,

(39) T(a1k + a2k0) = a1T(k) + a2T(k0) , N(kk0) = N(k) N(k0)

for every a1 and a2 of F and k and k0 of K. For

T(a1k + a2k0) = (a1k + a2k0) + (a1k + a2k0)^S = a1(k + k^S) + a2(k0 + k0^S)

as desired, and N(kk0) = kk0(kk0)^S = k k0 k^S k0^S = (k k^S)(k0 k0^S). The trace function is thus called a linear function and the norm function a multiplicative function.

Let us now assume that the field K is nonmodular. If u0 generates K over F and u0² = bu0 − c, we may replace u0 by u = u0 − b/2, and then u² = u0² − bu0 + b²/4 = b²/4 − c = a is in F. Let us then assume without loss of generality that u is a root of

(40) x² = a (a in F) ,

so that in (35) we have b = 0, c = −a. Then (37) has the simplified form

(41) T(k) = 2k1 , N(k) = k1² − k2²a .

Since u is not in F, a is not the square of any quantity of F. For if a = d² for d in F, we would have u² − d² = (u + d)(u − d) = 0 with u + d ≠ 0 and u − d ≠ 0, which is impossible in a field.

The quantities of K = F + uF consist of all polynomials k(u) in u with coefficients in F. For if k(x) is any polynomial in F[x], we may write k(x) = q(x)(x² − a) + c1 + c2x, so that k(u) = c1 + c2u. Thus K is the ring F[u] of all polynomials in u with coefficients in F; but K is a field, and it is actually the field F(u) of all rational functions in u with coefficients in F.

Conversely, given a nonmodular field F and a quantity a of F which is not a square in F, there exists a quadratic field K = F(u) over F in which u² = a. For we may take K to be the set of all two-rowed matrices with first row (k1, k2) and second row (ak2, k1) for k1 and k2 in F, identify F with the set of all two-rowed scalar matrices with elements in F (so as to make K contain F), and take u to be the matrix of this set with k1 = 0, k2 = 1, so that u² = a. Thus every nonmodular quadratic field over F is generated by an algebraic root u of an equation x² = a, where a is in F and F is nonmodular.
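The formulas (41) and the rules (39) may be checked by direct computation. The following Python sketch (the class name Quad and its methods are ours) models K = F + uF with u² = a over the rational field F:

```python
# A minimal sketch of arithmetic in K = F + uF with u^2 = a, the
# field F being the rationals; class and method names are our own.
from fractions import Fraction

class Quad:
    def __init__(self, k1, k2, a):
        self.k1, self.k2, self.a = Fraction(k1), Fraction(k2), a
    def __mul__(self, o):
        # (k1 + k2 u)(o1 + o2 u) = (k1 o1 + k2 o2 a) + (k1 o2 + k2 o1) u
        return Quad(self.k1*o.k1 + self.k2*o.k2*self.a,
                    self.k1*o.k2 + self.k2*o.k1, self.a)
    def conj(self):
        # the automorphism S of (32) with b = 0: u -> -u
        return Quad(self.k1, -self.k2, self.a)
    def trace(self):   # (41): T(k) = k + k^S = 2 k1
        return 2 * self.k1
    def norm(self):    # (41): N(k) = k k^S = k1^2 - k2^2 a
        return self.k1**2 - self.k2**2 * self.a

k, h = Quad(3, 2, 2), Quad(1, 1, 2)   # a = 2, so u plays the role of √2
print(k.norm())                            # 9 - 4*2 = 1
print((k*h).norm() == k.norm()*h.norm())   # norm is multiplicative: True
```

Exact Fraction entries are used so that the identities of (39) hold without rounding error.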
Observe in closing that if k = k1 + k2u ≠ 0 is in K, then N(k) = k1² − k2²a ≠ 0, for otherwise a = (k1/k2)², contrary to hypothesis. Hence every k ≠ 0 of K has the inverse

(42) k⁻¹ = (k1 − k2u)(k1² − k2²a)⁻¹

in K. Observe also that if K = F(u), then K = F(v) for every

(43) v = bu (b ≠ 0 in F) ,

and v is a root of x² = b²a. Thus we may replace the defining quantity a in F by any multiple b²a for b ≠ 0 in F. It is also shown easily that if K is defined by a and K0 by a0, then K and K0 are fields equivalent over F if and only if a0 = b²a.

We have thus constructed all quadratic fields K over a nonmodular field F in terms of equations x² = a for a in F.

10. Integers of quadratic fields. The Theory of Algebraic Numbers is concerned principally with the integral domain consisting of the elements called algebraic integers in a field of degree n over the field of all rational numbers. We shall discuss somewhat briefly the case n = 2.

Let, then, K be a quadratic field over the rational field, so that K is generated by a root u of u² = a, where a is rational and not a rational square. By the use of (43) we may multiply a by the square of an integer and hence take a integral. Write a = c²d, where d has no square factor and c and d are ordinary integers; then we may replace a by d. Hence every quadratic field is generated by a root u of the quadratic equation

(44) u² = a = ±p1 . . . pr ,

where the pi are distinct positive primes and r ≥ 1 for a > 0, while if a = −1 we interpret (44) as the case a negative and r = 0.

The quantities k of K have the form k = k1 + k2u, where k1 and k2 are ordinary rational numbers. We call k an integer of K if and only if the coefficients of (30) are integers. Thus k is an integer of K if and only if T(k) = 2k1 and N(k) = k1² − k2²a are both ordinary integers. We shall determine all integers of K, stating our final results as

Theorem 13. The integers of a quadratic field K form an integral domain J containing the ring E of all ordinary integers. The elements of J are all linear combinations

(45) c1 + c2w
for c1 and c2 in E, where

(46) w = u if a ≡ 2, 3 (mod 4) ; w = (1 + u)/2 if a ≡ 1 (mod 4) .

Note that a ≢ 0 (mod 4), since a has no square factor.

To prove this we write k = (b1 + b2u)/b0 for ordinary integers b0, b1, b2 with b0 the positive least common denominator of k1 and k2, so that the g.c.d. of b0, b1, b2 is 1. Then 2k1 = 2b1/b0 and k1² − k2²a = (b1² − b2²a)/b0² must be ordinary integers, so that b0 divides 2b1 and b0² divides b1² − b2²a. If an odd prime p divided b0, it would divide 2b1 only if it divided b1, and then p² would divide b1² − b2²a as well as b1², so that p² would divide b2²a. Since a has no square factors, this is possible only if p divides b2. But then p would divide b0, b1, and b2, contrary to the definition of their g.c.d. as 1. Hence b0 is a power of 2. If b0 ≥ 4, then 4 divides 2b1, 2 divides b1, 4 divides b1² and b1² − b2²a, and thus 4 divides b2²a; since a has no square factor, 2 divides b2, a contradiction. This proves that b0 = 1 or 2.

If b0 = 2 and b1 were even, then 4 would divide b1² and b1² − b2²a, hence would divide b2²a, and 2 would divide b2, a contradiction. Hence b1 = 2m1 + 1 and, similarly, b2 = 2m2 + 1 for ordinary integers m1 and m2. Now

b1² − b2²a = 4[m1² + m1 − a(m2² + m2)] + 1 − a ,

and for ordinary integers this is divisible by 4 if and only if a ≡ 1 (mod 4). Thus we have proved that J consists of the elements (45) with w = u if a ≡ 2, 3 (mod 4) and w = (1 + u)/2 if a ≡ 1 (mod 4).

It remains to show that J is an integral domain. The elements of J are in the field K, and thus it suffices to show that k − h and kh are in J for every k and h of J. But the difference of two elements of the form (45) is clearly of that form, and the product is of the form (45) if w² is of that form. If w = u, then w² = a is in E, while otherwise u² = a = 4m + 1 for an integer m, and

(47) w² = m + w , T(w) = 1 , N(w) = ¼(1 − a) = −m .

This completes the proof.
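The criterion that k be an integer of K, namely that T(k) = 2k1 and N(k) = k1² − k2²a be ordinary integers, may be tested mechanically. A Python sketch (is_integer_of_K is our own name) shows that w = (1 + u)/2 passes exactly when a ≡ 1 (mod 4):

```python
# Sketch of the integrality test of Theorem 13's proof: k = k1 + k2 u
# is an integer of K exactly when T(k) = 2 k1 and N(k) = k1^2 - k2^2 a
# are ordinary integers.  The helper name is ours.
from fractions import Fraction

def is_integer_of_K(k1, k2, a):
    k1, k2 = Fraction(k1), Fraction(k2)
    trace, norm = 2 * k1, k1**2 - k2**2 * a
    return trace.denominator == 1 and norm.denominator == 1

# w = (1 + u)/2 is an integer of K precisely when a ≡ 1 (mod 4):
print(is_integer_of_K(Fraction(1, 2), Fraction(1, 2), 5))    # a = 5: True
print(is_integer_of_K(Fraction(1, 2), Fraction(1, 2), -1))   # a = -1: False
```

For a = 5 the norm of w is (1 − 5)/4 = −1, an ordinary integer, while for a = −1 it is ½, which excludes w from J.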
The units of J are the elements k such that kh = 1 for h also in J. Then N(k)N(h) = N(kh) = 1 for the ordinary integers N(k) and N(h), so that

(48) N(k) = ±1 .

Conversely, if (48) holds, then k has the property k k^S = N(k) = 1 or −1, so that kk0 = 1 for k0 = k^S or −k^S. But k^S is in J when k is in J, since T(k^S) = T(k) and N(k^S) = N(k). Thus (48) is a necessary and sufficient condition that k be a unit of J.

The units of J form a multiplicative group, and we shall show that if a is negative this group is a finite group. Let then a = −g for g > 0, and recall that a has no square factors.

If a ≡ 2, 3 (mod 4), then k = c1 + c2u and (48) is equivalent to

(49) c1² − c2²a = ±1 .

For negative a the norm c1² + c2²g is positive, so that (49) becomes c1² + c2²g = 1. If g = 1, then one of c1 and c2 is zero and the other ±1, and the units of J are

(51) 1, −1, u, −u (a = −1) ,

while if g ≥ 2, (49) gives c2 = 0, c1 = ±1, and the units of J are 1, −1.

If a ≡ 1 (mod 4), then k = c1 + c2w, and by (47) we have 4N(k) = 4c1² + 4c1c2 + c2² − c2²a, so that (48) becomes

(50) (2c1 + c2)² − c2²a = ±4 .

For a = −g this is (2c1 + c2)² + c2²g = 4, which for g > 4 forces c2 = 0, c1 = ±1. Since a ≡ 1 (mod 4) and a has no square factors, the only remaining negative case is a = −3. Then we use (50) and have (2c1 + c2)² + 3c2² = 4, so that c2 = 0 or ±1. If c2 = 0, then c1 = ±1, while c2 = ±1 gives (2c1 + c2)² = 1, 2c1 + c2 = ±1 with any choice of signs. Hence we have proved that the units of J for a = −3 are

(52) 1, −1, w, −w, w − 1, 1 − w ,

and that for every other negative a the units of J are 1, −1, save only a = −1 with the units (51).

Now let a be positive. In case a = 2 we may regard u as the ordinary positive real number √2. Then h = 3 + 2u has norm 9 − 8 = 1 and is a unit of J,
and so is h^t for every integer t. If h^s = h^t for s > t, we would have h^(s−t) = 1. But h > 1, so that h^(s−t) > 1, a contradiction. Hence the powers h^s are all distinct, and the multiplicative group of the units of J is an infinite group. It may similarly be shown to be infinite for every positive a. We shall not study the units of quadratic fields further but shall pass on to some results on primes and prime ideals in two special cases.

11. Gauss numbers. The complex numbers of the form x + yi with rational x and y are called the Gauss complex numbers, and those for which x and y are integers are called the Gauss integers. Then the Gauss complex numbers are the elements of a quadratic field K of our study with u = i, u² = a = −1, and the Gauss integers comprise its integral domain J. We have determined the units ±1, ±u of J and shall now study its divisibility properties.

Our first result is the Division Algorithm, which we state as

Theorem 14. Let f and g ≠ 0 be in J. Then there exist elements h and r in J such that

(53) f = gh + r , N(r) < N(g) .

Our proof will be different from and simpler than that we gave in the case of polynomials and indicated in the case of integers, but it has the defect of being merely existential and not constructive. We have fg⁻¹ = k1 + k2u with rational k1 and k2. Every rational number t lies in an interval s ≤ t ≤ s + 1 for an ordinary integer s; if s ≤ t ≤ s + ½, then |t − s| ≤ ½, while if s + ½ ≤ t ≤ s + 1, then |t − (s + 1)| ≤ ½. Hence there exist ordinary integers h1 and h2 such that

(54) k1 = h1 + s1 , k2 = h2 + s2 , |s1| ≤ ½ , |s2| ≤ ½ .

Put h = h1 + h2u and s = s1 + s2u. Then N(s) = s1² + s2² ≤ ½, fg⁻¹ = h + s, and r = f − gh = sg, so that N(r) = N(s)N(g) ≤ ½N(g) < N(g) and f = gh + r as desired.

We observe that the quotient h and the remainder r need not be unique in our present case. For example, if f = 2 − u and g = 1 − u, we have f = g · 1 + 1 = g · 2 + u with N(1) = N(u) = 1 < N(g) = 2.

We first prove

Theorem 15. Every ideal M of J is a principal ideal.

For the norms of the nonzero elements of M are positive integers, and there exists an m ≠ 0 in M such that N(m) ≤ N(f) for every f ≠ 0 of M.
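The existence proof of Theorem 14 is easily made concrete: round the rational coordinates of fg⁻¹ to the nearest ordinary integers. A Python sketch (gauss_divmod and N are our names) using complex numbers:

```python
# The argument of Theorem 14 made concrete: divide f by g in the
# Gauss integers by rounding the coordinates of f g^{-1} to the
# nearest ordinary integers.  Helper names are our own.
def gauss_divmod(f, g):
    # f, g are Python complex numbers with integer parts
    q = f / g
    h = complex(round(q.real), round(q.imag))
    r = f - g * h        # r = sg with N(s) <= 1/2, so N(r) < N(g)
    return h, r

def N(z):  # the norm x^2 + y^2
    return int(z.real) ** 2 + int(z.imag) ** 2

f, g = complex(7, 5), complex(2, 1)
h, r = gauss_divmod(f, g)
print(f == g * h + r and N(r) < N(g))  # True
```

Here f g⁻¹ = 3.8 + 0.6i rounds to h = 4 + i, leaving the remainder r = −i of norm 1 < N(g) = 5.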
By Theorem 14 we have f = mh + r, so that r = f − mh is in M and N(r) < N(m). But then r must be zero, and it follows that f = mh is in {m}. Thus M = {m}, and every ideal of J is a principal ideal.

We now apply this result to obtain

Theorem 16. Let f and g be nonzero elements of J. Then there exists a g.c.d. d of f and g of the form

(55) d = bf + cg (b and c in J) .

For the set M of all elements of the form xf + yg with x and y in J is an ideal of J, and by Theorem 15 we have M = {d}; the element d is in M and has the form (55). Every common divisor of f and g divides d = bf + cg, while d divides f = 1 · f + 0 · g and g = 0 · f + 1 · g, so that d is a common divisor of f and g. Thus d is a g.c.d. of f and g.

The above result may now be seen to imply that if f divides gh and is prime to g, then f divides h; in particular, if a prime p of J divides gh, then p divides g or h. Moreover, if f is a composite integer of J, then f = gh for nonunits g and h, N(f) = N(g)N(h) with N(g) ≠ 1 and N(h) ≠ 1, so that 1 < N(g) < N(f). Then if p is a divisor of f of least positive norm greater than 1, it is a prime divisor of f, and the proof of Theorem 6 may be repeated to obtain

Theorem 17. Every composite Gauss integer f is expressible as a product

(56) f = p1 . . . pr

of primes pi which are determined uniquely by f apart from their order and unit multipliers.

We observe that Theorem 17 implies that an ideal M of J is a prime ideal (and, in fact, a divisorless ideal) if and only if M = {d} for a prime d of J.

Let us then determine the prime quantities d = d1 + d2u of J. We see first that if d is a prime of J, so is d^S. Now c = N(d) = d d^S is a positive integer of E with c > 1, and c is either a prime or composite. If c = p is a prime of E, then p = d d^S is composite in J. Otherwise, since a prime factorization of c in J can contain at most the two prime factors d and d^S, we have c = p p0 for positive primes p and p0 of E
such that p is associated with d and p0 with d^S. Then p = dk for a unit k, and N(p) = p² = N(d)N(k) = ±N(d) = ±p p0. But p and p0 are positive and must be equal, so that p0 = p, N(d) = p², and d is associated with the prime p of E, which is then a prime of J. We have proved

Theorem 18. Every prime d of J is either associated with a positive prime of E which is a prime of J, or arises from the factorization in J of a positive prime p = d d^S of E which is composite in J.

We now complete our study of the primes of J by proving

Theorem 19. A positive prime p of E is a prime of J if and only if p has the form 4n + 3.

We note first that 2 = (1 + u)(1 − u) is composite in J, and it remains to consider odd p. If p = 4n + 1, we know by Theorem 12 that there exists an integer b in E such that b² + 1 is divisible by p. But a prime p of J cannot divide the product b² + 1 = (b + u)(b − u) without dividing one of its factors, and if p divided b + u or b − u, we would have b ± u = p(k1 + k2u) and pk2 = ±1, which is impossible. Hence p is not a prime of J. Finally, let p = 4n + 3 and assume that p = gh for nonunits g and h in J. Then p² = N(p) = N(g)N(h) with N(g) > 1 and N(h) > 1, so that N(g) = N(h) = p = x² + y² for the ordinary integers x and y of g = x + yu. But we shall show that no integer of the form 4n + 3 is a sum of two squares, so that p is a prime of J.

We call a positive integer c a sum of two squares if c = x² + y² for x and y in E. To prove the result just used, we observe first that if t = 2s + 1 is an odd integer of E, then t² = 4s(s + 1) + 1; one of s and s + 1 is even, so that t² ≡ 1 (mod 8). If t is even, then t² ≡ 0 or 4 (mod 8). Thus a sum of two squares is congruent to 0, 1, 2, 4, or 5 modulo 8, while 4n + 3 is congruent to 3 or 7 modulo 8. It follows that no p = 4n + 3 is a sum x² + y². This completes the proof.

We use the result above to derive an interesting theorem of the Theory of Numbers. Write c = f²g, where f and g are positive integers and g has no square factors. Then we have

Theorem 20. c is a sum of two squares if and only if no prime factor of g has the form 4n + 3.

For if g = p1 . . . pr for positive primes pi of E not of the form 4n + 3, then by Theorem 19 each pi is composite in J, pi = N(di) for di in J, and c = N(f d1 . . . dr) = x² + y² for f d1 . . . dr = x + yu. Conversely, let c = x² + y² = (x + yu)(x − yu), and let p be a prime factor of c of the form 4n + 3. Then p is a prime of J and divides (x + yu)(x − yu) if and only if it divides x + yu or x − yu, that is, if and only if p divides both x and y. But then x = pk1, y = pk2, and c = p²(k1² + k2²), and the argument may be repeated. Thus the prime factors of c of the form 4n + 3 occur to even powers and are not factors of g.

Note in closing that a positive prime p of the form 4n + 3 divides x² + y² if and only if p divides both x and y.
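Theorem 20 gives a purely arithmetic test. The following Python sketch (all helper names are ours) extracts the square-free part g of c and inspects its prime factors:

```python
# Sketch checking Theorem 20: c = f^2 g (g square-free) is a sum of
# two squares iff no prime factor of g has the form 4n + 3.
def squarefree_part(c):
    g, d = c, 2
    while d * d <= g:
        while g % (d * d) == 0:
            g //= d * d          # strip square factors
        d += 1
    return g

def prime_factors(n):
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d); n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def is_sum_of_two_squares(c):
    return all(p % 4 != 3 for p in prime_factors(squarefree_part(c)))

print([c for c in range(1, 20) if is_sum_of_two_squares(c)])
# -> [1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18]
```

The list agrees with direct enumeration of x² + y² (squares of zero allowed), e.g. 18 = 9 + 9 while 21 = 3 · 7 is excluded by its factor 3.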
For further results of this kind see L. E. Dickson's Introduction to the Theory of Numbers, pages 40–42, 49. We apply our result to the equation

(57) x² + y² = z² (x, y, z in E) .

Then let x, y, z have g.c.d. unity. We may show readily that x and y cannot both be even or both be odd and that z must be odd. Moreover, if a prime divisor p of z had the form 4n + 3, it would, by the result above, divide both x and y, contrary to our assumption. We have proved

Theorem 21. Let x, y, z be ordinary integers with g.c.d. unity such that x² + y² = z². Then the prime divisors of z have the form 4n + 1.

12. An integral domain with nonprincipal ideals. We shall close our exposition with a discussion of some properties of the ring J of integers of the field K defined by a = −5, u² = −5. Since −5 ≡ 3 (mod 4), the elements of J have the form k1 + k2u for k1 and k2 in the set E of all integers. If g = g1 + g2u ≠ 0, we have N(g) = g1² + 5g2² > 0, and the units of J satisfy g1² + 5g2² = 1, so that the units of J are 1 and −1.

We now prove

Theorem 22. The elements 3, 7, 1 + 2u, 1 − 2u are primes of J, no two of which are associated.

The norms of the integers of our theorem are 9, 49, 21, 21, respectively. For if k is a composite of J, we have k = gh with N(k) = N(g)N(h) for ordinary integral proper divisors N(g) and N(h) of N(k). The only positive proper divisors of these norms greater than unity are 3, 7, and 21, and no integer g of J has norm g1² + 5g2² = 3 or 7. Evidently g2 ≠ 0 gives g1² + 5g2² ≥ 5 with g1² + 5 ≠ 7, while g2 = 0 gives g1² = 3 or 7, which is impossible. Thus 1 + 2u and 1 − 2u, as well as 3 and 7, are primes of J. Since the units of J are ±1, no two of these four primes are associated.

We see now that

21 = 3 · 7 = (1 + 2u)(1 − 2u) ,

and we have factored 21 into prime factors in J in two distinct ways. Moreover, the principal ideal {3} of J defined for the prime 3 is not a prime ideal. For 3 does not divide 1 + 2u and 1 − 2u in J yet does divide their product, and therefore the residue class ring J − {3} contains the classes of 1 + 2u and 1 − 2u as divisors of zero.

The ring J contains nonprincipal ideals, one of which we shall exhibit. We let M be the ideal of J consisting of all elements of J of the form

(58) 3x + y(1 + 2u) (x, y in J) .

If this ideal were a principal ideal {d}, there would exist a common divisor d of 3 and 1 + 2u. Since 3 and 1 + 2u are nonassociated primes of J,
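The computations behind Theorem 22 may be verified directly. A Python sketch, using our own tuple representation of k1 + k2u with u² = −5:

```python
# Numerical check of Theorem 22's example in J = E + Eu with u^2 = -5:
# N(k1 + k2 u) = k1^2 + 5 k2^2, and 21 = 3*7 = (1 + 2u)(1 - 2u) gives
# two distinct factorizations into primes.  Representation is our own.
def mul(x, y):
    # (x1 + x2 u)(y1 + y2 u) with u^2 = -5
    return (x[0]*y[0] - 5*x[1]*y[1], x[0]*y[1] + x[1]*y[0])

def norm(x):
    return x[0]**2 + 5*x[1]**2

print(mul((1, 2), (1, -2)))   # (21, 0), i.e. the ordinary integer 21
print([norm(k) for k in [(3, 0), (7, 0), (1, 2), (1, -2)]])  # [9, 49, 21, 21]
# No element of J has norm 3 or 7, so each factor above is a prime of J:
print([(a, b) for a in range(-3, 4) for b in range(-2, 3)
       if norm((a, b)) in (3, 7)])   # []
```

The empty search over |k1| ≤ 3, |k2| ≤ 2 suffices, since any larger coordinates give norms exceeding 7.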
d must be a unit, {d} = {1} = J, and 1 would be in M. Then 1 = 3b + (1 + 2u)c for b and c in J, and

1 − 2u = 3b(1 − 2u) + (1 + 2u)(1 − 2u)c = 3[b(1 − 2u) + 7c] .

But 3 does not divide 1 − 2u in J, a contradiction. Hence M is not a principal ideal, and M does not contain 1.

The ideal M contains 3 and 1 + 2u and so contains 2u − 2 = (1 + 2u) − 3 and 3u − 3, hence their difference u − 1. Write

(59) w1 = 3 , w2 = u − 1 .

Then 1 + 2u = w1 + 2w2, uw1 = w1 + 3w2, and uw2 = u² − u = −5 − u = −2w1 − w2, so that the set of all quantities

(60) h = h1w1 + h2w2 (h1, h2 in E)

contains 3 and 1 + 2u and is carried into itself by multiplication by the quantities of J. We have thus proved that M consists of all quantities of the form (60) for h1 and h2 ordinary integers, and we may call w1, w2 a basis of M over E. It may be shown that every ideal of J has a basis of two elements over E; note that J itself has the basis 1, u over E in this sense.

We now proceed to prove that M is a divisorless ideal of J and hence a prime ideal of J. Every integer of J has the form k = k1 + k2u = (k1 + k2) + k2w2. Write k1 + k2 = 3c + r for c in E and r = 0, 1, or 2. Then k = cw1 + k2w2 + r, so that k ≡ r (M). Since M contains 3 but not 1, it cannot contain 2, for otherwise 1 = 3 − 2 would be in M. Hence the elements of J − M are the distinct residue classes 0, 1, 2; the ring J − M is equivalent to E − {3} and is a field. We have proved that M is a divisorless ideal of J. This completes our discussion of ideals and of quadratic fields.

We shall conclude our text with the following brief bibliographical summary. Let us begin with references to standard topics on matrices not covered in our introductory exposition. The theory of orthogonal and unitary equivalence of symmetric matrices is contained, with generalizations, in Chapter V of the Modern Higher Algebra and is further generalized and connected with the theory of equivalence of pairs of matrices in the author's paper entitled "Symmetric and Alternate Matrices in an Arbitrary Field," Transactions of the American Mathematical Society, XLIII (1938), 386–436. See also pages 74–76 and Chapter VI of L. E. Dickson's Modern Algebraic Theories, and Chapter VI of J. H. M. Wedderburn's Lectures on Matrices. Both of these texts, as well as Chapters III, IV, and V of the Modern Higher Algebra, include, of course, all the material of our Chapters I–IV. For a discussion, without details but with complete references, of many other topics on matrices, see C. C. MacDuffee, The Theory of Matrices ("Ergebnisse der Mathematik und ihrer Grenzgebiete," Vol. II, No. 5 [pp. 1–110]).

The theory of rings as given here is contained in the detailed discussion of Chapters I and II of the Modern Higher Algebra, and the theory of ideals in Chapter XI. See also the much more extensive treatment in Van der Waerden's Moderne Algebra. The units of the ring of integers of a quadratic field are discussed on page 233 of R. Fricke's Algebra, Volume III, and in E. Hecke's Theorie der algebraischen Zahlen. We close with a reference to the only recent book in English on algebraic number theory, H. Weyl's Algebraic Theory of Numbers (Princeton, 1940); the ring of integers of a quadratic field and its ideals are discussed there on pages 106–10.
Diagonal : matrix. 49 rank of. 49 skew.. 2 Dependent vectors. 51 quantities. 29 rank of. 115 : of a matrix. 111 Adjoint matrix.. 72 space. 20 rank. 67 Determinant. 7. 57 Difference ring. 107. 37 ring. 131 Bilinear form. 64 Bibliography. 122 Coordinate. 8 Coefficient. 77 : Division algorithm for: Gauss numbers. 56 of Gauss.. 113 Complementary minor. right. 48 Degree of: matrix of. 31 Cofactor. 108 Constant. 110 Addition of matrices. 110 inverse. 115 Congruent: matrices. 27 spaces. 115 Complex numbers. 117 matric polynomials. 38 matrix. 27 of a product. 97 a term. 127 Composite quantity. 52 mappings. 60 Divisibility. 72 Division. 27 equation. 54 types of. 107 identity. 101 Closure. 23 matrix. 19 Augmented Basis. 2.INDEX Abelian group. 113 Element : submatrix. 117 Distributive law. 1. 127 integers. 97 polynomials. 2. 26 order of. 44 trowed. 46 Algebraic integers. 59 Additive group. 67 Associative law. 4 Division ring. 29 of a matrix. 97 blocks. 124 Associated quantities. 75 field of. 6 zero polynomial. 80 Automorphism. 51 : Column elements. 116 Conjunctive matrices. 110 matrices. 66 Correspondence.. 23 Difference. 114 Diag {Ai. 28 Cogredient elementary transformations. 110 133 . 14. 73 Definite symmetric matrix. 21 Complex conjugate. 98 Commutative : group. 53 symmetric. 101 of a field. left. 14 Characteristic : a polynomial. 113 Divisors of zero.
111 commutative. Linear: Gauss integers. 120 Field. 15 independence. 92 Groups. 122 of Arithmetic. 67 form. 118 97 Leading form. 9 process. systems of. 116 zero. 10 Euler's theorem. Length preserving transformation. 59 Euclid's process. 128 of integers. 101 Leading coefficient. 5. 46 matrix of. 74 Greatest common divisor. 79 Equivalence. 114 characteristic of. 92 quadratic forms. 51 inverse of. 110 matrix. 7 Ideal. 127 Gauss numbers. 73. 1 complex numbers. 111 order of a. 56 of. 17 groups. 8 Left division. 24. 114 Integral operations. 15 dependence. 19. 7. 114 of congruent. 98 virtual. 117 principal. 74 subspace. 47. 38 Linear mappings. 82 cogredient. 110 additive. 3.134 Element INTRODUCTION TO ALGEBRAIC THEORIES continued unity. 110 combination. 127 General linear space. 45 Irreducible quantities. 91 Inverse element. 74 element. 113 zero. 56. 116 bilinear forms. 9 of Gauss numbers. 83 nonsingular. 117 Integral domains. 118 of polynomials. 97 minimum. 13. equivalence of. 17 product of. 73 characteristic. 116 unit. 46 matrix. 112 Elementary: divisors. see also Bilinear forms Function. 89. 124 matrices. 110 cyclic. 117 Elementary transformations. 17. 43 Equations. 99 Fermat theorem. 66. 67 space. 113 order of a subgroup of. 110 subgroup of. 112. 67 Integers: algebraic. 25 cogredient. 110 linear spaces. 101 Fundamental Theorem Galois group. 110 zero element of. 121 generating quantity 121 mapping. 109 abelian. 17. 36. 120 Factor theorem. 10 Group. 110 Homogeneous polynomial. 36 . 120 ordinary. 52 matrices of. 3. 110 : degree of. 116 divisorless. 49 Identity: forms. Invariant factors. 71 Linear change of variables. 94 matrix. 115 Forms. 30 Independent vectors. 66. 74 Equivalence of: prime.
71 Linear transformations. 103 linear transformation. Minors. 25. 47. monic. 20 equal. 29 augmented. 108 equivalent. 92 of an elementary transformation. 86 of. 6 rank of. 71. 87 23 of. minimum 101 equal. 45 function divisibility of. 100 108 form. 84 orthogonal. 30 invariant factors inverse of a of. 110 a group element. 9 homogeneous. 27 subspaces. 111 a linear space. 59. 1 of. 80 of a bilinear form. 86 columns of. 6 97 constant. Multiple roots. 108 zero. 19 nic. 49 characteristic. 85. 108 mapping. 5 g. 101 Norm Norm quadratic form. 66. 3. 100 13 . 108 identity. of. 64 transpose of. 29 diagonal of.c. 5 nary qic 3. 20 of. 37 symmetric. 52. 123 of a vector. 29 unitary. 66 Polynomials : associated. 13 : orthogonally equivalent. 19 Order of: a group. 2 factors of. 92 equivalent pairs orthogonal. 34 partitioning of. transformations. 86 135 of. 32 rectangular. 5 mapping. 59 commutative. Matrices: addition see Linear mappings of. similar. 66 products of.d. rows skewHermitian. 101 congruent pairs of. 84 matrix. 36 34. 20 Mappings. 36. 52 scalar. 67 order of. 43 elements of. determinant diagonal. 108 square. 84 unitary equivalent. 101 of coefficients. 20 30 skew. 7. 2. 74 basis of. 5 coefficients of. 21 Point. 51 Minimum function. 53. 21 principal diagonal of. 34 Matrix : adjoint of. 87 elementary. 23 identically equal. 22 triangular. 86 diagonal elements 23 vectors. 89. Hermitian. 91 degree of. 108 conjunctive. 27 Monic polynomial. 47 Nonsingular rationally equivalent. 71 Orthogonal: matrices. 7 nonsingular. 86 characteristic function of a. 108 7idimensional space. 30 congruent.INDEX Linear space. 55 in a field. 83 of.
117 division. 100 in several variables. 62 Quadratic forms. 54 definite. 6 quadratic form. 16. 98 Ring. 16 Rank a a a a a of: an adjoint matrix. 4. 41 Scalars. 25 functions. 21 . 3 gary. 98 Residue classes. 11 irreducible. matrices. 117 Roots. 64 Prime to a polynomial. 72 right. 11 quantities. 13 Quasifield. 12 relatively prime. 64 Reflection. 113 residue class. 98 matrix. 115 relatively prime. 1 Positive form. 64 index of. 5 multiple. 55 real. 113 difference. 30 polynomial. 87 virtual degree of. 124 Quadratic form. 100 product. 107 Semidefinite forms. 97 zero. 121 integers of. 114 symmetric matrix. 6. 121 Quartic form. 112 commutative. 46 bilinear form. 115 Primitive root. 115 composite. 87 Row. 67 Subfield. 85. equivalent. 11 roots of. 110 Submatrix. 103 Skew forms. 62 negative. 116 Right division. 121 Principal diagonal. 2. 59 Quantities: associated. 69. 52 matrix. 23 Relatively prime: polynomials. 64 Sequence: elements zero. 13 Quaternions. 64 nonsingular. 69 rank. Row by Scalar: column rule. subgroup. 115 98 theorem. 5 scalar. left. 8 Real: Polynomials continued products of. 37 Quartic field. 13 rationally irreducible. 62 quaternions. 44 quadratic form. 115 prime. 57 right. see Linear space Spanning a space. 54 row space. 5 Rotation of axes. 21 complementary. 49 Similar matrices. 32 product. 114 Quinary form.136 INTRODUCTION TO ALGEBRAIC THEORIES Rational: equivalence. left. 16 of. 72 Space. 19 Selfequivalence. 72 space. 53 Skew matrices. 115 congruent. 115 Remainder : Quadratic field. 13 Quotient. 11 Prime quantity. 57 Quaternary form. 6. 8 operations.
13 Trace. 116 matrix. 13 Unique factorization. 115 Unity element. 16 space. 113 Transformations. 77 Subsystems. 66 linearly independent. 62 element.INDEX Subring. 84 Transpose. 2. 37 of a sum. Unit 116 Subspaces. 67 Symmetric bilinear forms. 75. 97 Virtual leading coefficient. 54 Symmetric matrices. of. 113 Variables. norm zero. 115. 66. 112. 6. 77 Units. 86 59 orthogonal. 30 Unary form. 123 Virtual degree. 1 sequence. 113 ideal. change of. 77 complementary. 52. 53 congruence of. 118. 22 of a product. 67 . 127 polynomial. 3. 110 Subtraction. 38 sum of. 112 Vectors. Zero: divisors of. 87 66 97 definiteness of. 113 137 ideal. 64 Ternary forms.