Jin Ho Kwak · Sungpyo Hong

Linear Algebra

Birkhäuser
Boston · Basel · Berlin

Jin Ho Kwak, Sungpyo Hong
Department of Mathematics
Pohang University of Science and Technology
Pohang, The Republic of Korea

Printed in the U.S.A.

Preface

Linear algebra is one of the most important subjects in the study of science and engineering because of its widespread applications in social or natural science, computer science, physics, or economics. As one of the most useful courses in undergraduate mathematics, it has provided essential tools for industrial scientists. The basic concepts of linear algebra are vector spaces, linear transformations, matrices and determinants, and they serve as an abstract language for stating ideas and solving problems.

This book is based on the lectures delivered over several years in a sophomore-level linear algebra course designed for science and engineering students. The primary purpose of this book is to give a careful presentation of the basic concepts of linear algebra as a coherent part of mathematics, and to illustrate its power and usefulness through applications to other disciplines. We have tried to emphasize the computational skills along with the mathematical abstractions, which have also an integrity and beauty of their own.
The book includes a variety of interesting applications, with many examples, not only to help students understand new concepts but also to practice wide applications of the subject to such areas as differential equations, statistics, geometry, and physics. Some of those applications may not be central to the mathematical development and may be omitted or selected in a syllabus at the discretion of the instructor. Most basic concepts and introductory motivations begin with examples in Euclidean space or with solving a system of linear equations, and are gradually examined from different points of view to derive general principles.

For those students who have completed a year of calculus, linear algebra may be the first course in which the subject is developed in an abstract way, and we often find that many students struggle with the abstraction and miss the applications. Our experience is that, to understand the material, students should practice with many problems, which are sometimes omitted because of a lack of time. To encourage the students to do repeated practice, we placed in the middle of the text not only many examples but also some carefully selected problems, with answers or helpful hints. We have tried to make this book as easily accessible and clear as possible, but certainly there may be some awkward expressions in several ways. Any criticism or comment from the readers will be appreciated.

We are very grateful to many colleagues in Korea, especially to the faculty members in the mathematics department at Pohang University of Science and Technology (POSTECH), who helped us over the years with various aspects of this book. For their valuable suggestions and comments, we would like to thank the students at POSTECH, who have used photocopied versions of the text over the past several years. We would also like to acknowledge the invaluable assistance we have received from the teaching assistants who have checked and added some answers or hints for the problems and exercises in this book.
Our thanks also go to Mrs. Kathleen Roush, who made this book much more legible with her grammatical corrections in the final manuscript. Our thanks finally go to the editing staff of Birkhäuser for gladly accepting our book for publication.

Jin Ho Kwak, Sungpyo Hong
E-mail: jinkwak@postech.ac.kr, sungpyo@postech.ac.kr
April 1997, in Pohang, Korea

"Linear algebra is the mathematics of our modern technological world of complex multivariable systems and computers."
— Alan Tucker —

"We [Halmos and Kaplansky] share a love of linear algebra. I think it is our conviction that we'll never understand infinite-dimensional operators properly until we have a decent mastery of finite matrices. And we share a philosophy about linear algebra: we think basis-free, we write basis-free, but when the chips are down we close the office door and compute with matrices like fury."
— Irving Kaplansky —

Contents

Preface

1 Linear Equations and Matrices
1.1 Introduction
1.2 Gaussian elimination
1.3 Matrices
1.4 Products of matrices
1.5 Block matrices
1.6 Inverse matrices
1.7 Elementary matrices
1.8 LDU factorization
1.9 Application: Linear models
1.10 Exercises

2 Determinants
2.1 Basic properties of determinants
2.2 Existence and uniqueness
2.3 Cofactor expansion
2.4 Cramer's rule
2.5 Application: Area and volume
2.6 Exercises

3 Vector Spaces
3.1 Vector spaces and subspaces
3.2 Bases
3.3 Dimensions
3.4 Row and column spaces
3.5 Rank and nullity
3.6 Bases for subspaces
3.7 Invertibility
3.8 Application: Interpolation
3.9 Application: The Wronskian
3.10 Exercises

4 Linear Transformations
4.1 Introduction
4.2 Invertible linear transformations
4.3 Application: Computer graphics
4.4 Matrices of linear transformations
4.5 Vector spaces of linear transformations
4.6 Change of bases
4.7 Similarity
4.8 Dual spaces
4.9 Exercises

5 Inner Product Spaces
5.1 Inner products
5.2 The lengths and angles of vectors
5.3 Matrix representations of inner products
5.4 Orthogonal projections
5.5 The Gram-Schmidt orthogonalization
5.6 Orthogonal matrices and transformations
5.7 Relations of fundamental subspaces
5.8 Least square solutions
5.9 Application: Polynomial approximations
5.10 Orthogonal projection matrices
5.11 Exercises

6 Eigenvectors and Eigenvalues
6.1 Introduction
6.2 Diagonalization of matrices
6.3 Application: Difference equations
6.4 Application: Differential equations I
6.5 Application: Differential equations II
6.6 Exponential matrices
6.7 Application: Differential equations III
6.8 Diagonalization of linear transformations
6.9 Exercises

7 Complex Vector Spaces
7.1 Introduction
7.2 Hermitian and unitary matrices
7.3 Unitarily diagonalizable matrices
7.4 Normal matrices
7.5 The spectral theorem
7.6 Exercises

8 Quadratic Forms
8.1 Introduction
8.2 Diagonalization of a quadratic form
8.3 Congruence relation
8.4 Extrema of quadratic forms
8.5 Application: Quadratic optimization
8.6 Definite forms
8.7 Bilinear forms
8.8 Exercises

9 Jordan Canonical Forms
9.1 Introduction
9.2 Generalized eigenvectors
9.3 Computation of …
9.4 Cayley-Hamilton theorem
9.5 Exercises

Selected Answers and Hints

Index

Linear Algebra

Chapter 1

Linear Equations and Matrices

1.1 Introduction

One of the central motivations for linear algebra is solving systems of linear equations. We thus begin with the problem of finding the solutions of a system of m linear equations in n unknowns of the following form:

    a11 x1 + a12 x2 + ··· + a1n xn = b1
    a21 x1 + a22 x2 + ··· + a2n xn = b2
                   ···
    am1 x1 + am2 x2 + ··· + amn xn = bm,

where x1, ..., xn are the unknowns and the aij's and bi's denote constants (real or complex). A sequence of numbers (s1, s2, ..., sn) is called a solution of the system if xj = sj for j = 1, ..., n satisfies each equation in the system simultaneously. When b1 = ··· = bm = 0, we say that the system is homogeneous.

The central topic of this chapter is to examine whether or not a given system has a solution, and to find a solution if it has one. For instance, any homogeneous system always has at least one solution x1 = ··· = xn = 0, called the trivial solution.
A natural question is whether such a homogeneous system has a nontrivial solution. If so, we would like to have a systematic method of finding all the solutions. A system of linear equations is said to be consistent if it has at least one solution, and inconsistent if it has no solution. The following example gives us an idea of how to answer the above questions.

Example 1.1 When m = n = 2, the system reduces to two equations in two unknowns x and y:

    a1 x + b1 y = c1
    a2 x + b2 y = c2.

Geometrically, each equation in the system represents a straight line when we interpret x and y as coordinates in the xy-plane. Therefore, a point P = (x, y) is a solution if and only if the point P lies on both lines. Hence there are three possible types of solution sets:

(1) the empty set, if the lines are parallel;
(2) only one point, if they intersect;
(3) a straight line, i.e., infinitely many solutions, if they coincide.

(The diagrams illustrating the three cases are omitted here.)

To decide whether the given system has a solution, and to find a general method of solving the system when it has a solution, we repeat here a well-known elementary method of elimination and substitution.

Suppose first that the system consists of only one equation ax + by = c. Then the system has infinitely many solutions, namely the points on the line it represents.

Next assume that the system has two equations representing two lines in the plane. Then clearly the two lines are parallel, with the same slope, if and only if a2 = λa1 and b2 = λb1 for some λ ≠ 0, or a1b2 = a2b1. Furthermore, the two lines either coincide (infinitely many solutions) or are distinct and parallel (no solutions), according to whether c2 = λc1 holds or not.

Suppose now that the lines are not parallel, or a1b2 − a2b1 ≠ 0. In this case, the two lines cross at a point, and hence there is exactly one solution: for instance, if the system is homogeneous, then the lines cross at the origin, so (0, 0) is the only solution.
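The case analysis above can be turned into a short computational check. The following Python sketch (the function name `classify` and the sample systems are ours, not the text's) classifies a 2×2 system by the quantity a1b2 − a2b1 and by whether the constant terms are proportional in the same ratio as the coefficients:

```python
def classify(a1, b1, c1, a2, b2, c2):
    """Classify the solution set of a1*x + b1*y = c1, a2*x + b2*y = c2.
    Assumes each equation has at least one nonzero coefficient."""
    if a1 * b2 - a2 * b1 != 0:
        return "one point"            # lines cross at exactly one point
    # lines are parallel: they coincide iff the c's are proportional too
    if a1 * c2 - a2 * c1 == 0 and b1 * c2 - b2 * c1 == 0:
        return "a line"               # infinitely many solutions
    return "empty"                    # distinct parallel lines

assert classify(1, 1, 1, 1, 1, 3) == "empty"      # x+y=1, x+y=3
assert classify(1, 1, 3, 1, -1, 1) == "one point" # x+y=3, x-y=1
assert classify(1, 2, 3, 2, 4, 6) == "a line"     # second is twice the first
```

The proportionality test `a1*c2 - a2*c1 == 0 and b1*c2 - b2*c1 == 0` avoids dividing by a possibly zero coefficient when checking c2 = λc1.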
For a nonhomogeneous system, we may find the solution as follows: express x in terms of y from the first equation, and then substitute it into the second equation (i.e., eliminate the variable x from the second equation) to find the value of y; then substitute this value of y back into the first equation to find x. This gives a complete solution of the system. In detail, the process can be summarized as follows:

(1) We may first assume that a1 ≠ 0; otherwise, we can interchange the two equations, or the variables x and y can be interchanged. The variable x can then be eliminated from the second equation by adding −a2/a1 times the first equation to the second, which gives

    (a1b2 − a2b1) y = a1c2 − a2c1.

(2) Since a1b2 − a2b1 ≠ 0, y can be found by multiplying this equation by the nonzero number 1/(a1b2 − a2b1), so that

    y = (a1c2 − a2c1) / (a1b2 − a2b1).

(3) Now x is found by substituting the value of y into the first equation, and we obtain the solution to the problem:

    x = (c1b2 − c2b1) / (a1b2 − a2b1),    y = (a1c2 − a2c1) / (a1b2 − a2b1).

Note that the condition a1b2 − a2b1 ≠ 0 is necessary for the system to have only one solution.

In this example, we have changed the original system of equations into a simpler one using certain operations, from which we can get the solution of the given system. That is, if (x, y) satisfies the original system of equations, then x and y must satisfy the simpler system in (3), and vice versa.

It is suggested that the readers examine a system of three equations in three unknowns, each equation representing a plane in the 3-dimensional space R³, and consider the various possible cases in a similar way.

Problem 1.1 For a system of three equations in three unknowns

    a1 x + b1 y + c1 z = d1
    a2 x + b2 y + c2 z = d2
    a3 x + b3 y + c3 z = d3,

describe all the possible types of the solution set in R³.

1.2 Gaussian elimination

As we have seen in Example 1.1, a basic idea for solving a system of linear equations is to change the given system into a simpler system, keeping the solutions unchanged; the example showed how to change a general system into a simpler one. In fact, the main operations used in Example 1.1 are the following three operations, called elementary operations:

(1) multiply a nonzero constant throughout an equation,
(2) interchange two equations,
(3) change an equation by adding a constant multiple of another equation.
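As a quick check, the closed-form solution derived in Example 1.1 can be coded directly, and one can then verify that each of the three elementary operations leaves the solution unchanged. A minimal Python sketch (the helper name `solve2x2` and the sample system are ours):

```python
def solve2x2(a1, b1, c1, a2, b2, c2):
    """Unique solution of a1*x + b1*y = c1, a2*x + b2*y = c2,
    assuming a1*b2 - a2*b1 != 0, using the formulas from the text."""
    d = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

eq1, eq2 = (1, 1, 3), (1, -1, 1)        # x + y = 3,  x - y = 1
sol = solve2x2(*eq1, *eq2)
assert sol == (2.0, 1.0)

# the three elementary operations do not change the solution:
scaled = tuple(5 * t for t in eq1)                    # (1) scale an equation
assert solve2x2(*scaled, *eq2) == sol
assert solve2x2(*eq2, *eq1) == sol                    # (2) interchange
added = tuple(t + 2 * s for t, s in zip(eq1, eq2))    # (3) add a multiple
assert solve2x2(*added, *eq2) == sol
```
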
After applying a finite sequence of these elementary operations to the given system, one can obtain a simpler system from which the solution can be derived directly.

Note also that each of the three elementary operations has an inverse operation, which is also an elementary operation:

(1) divide the equation by the same nonzero constant,
(2) interchange the two equations again,
(3) change the equation by subtracting the same constant multiple of the same equation.

By applying these inverse operations in reverse order to the simpler system, one can recover the original system. This means that a solution of the original system must also be a solution of the simpler one, and vice versa.

These arguments can be formalized in mathematical language. Observe that in performing any of these basic operations, only the coefficients of the variables are involved in the calculations, while the variables x1, ..., xn and the equal sign "=" are simply repeated. Thus, keeping the order of the variables and "=" in mind, we just extract the coefficients from the equations in the given system and make a rectangular array of numbers:

    [ a11 a12 ··· a1n b1 ]
    [ a21 a22 ··· a2n b2 ]
    [          ⋮         ]
    [ am1 am2 ··· amn bm ].

This matrix is called the augmented matrix for the system. The term matrix means just any rectangular array of numbers, and the numbers in this array are called the entries of the matrix.

To explain the above operations in terms of matrices, we first introduce some terminology, even though in the following sections we shall study matrices in more detail. Within a matrix, the horizontal and vertical subarrays

    [ ai1 ai2 ··· ain bi ]    and    [ a1j ]
                                     [ a2j ]
                                     [  ⋮  ]
                                     [ amj ]

are called the i-th row (matrix) and the j-th column (matrix) of the augmented matrix, respectively. Note that the entries in the j-th column are just the coefficients of the j-th variable xj, so there is a correspondence between the columns of the matrix and the variables of the system.
Since each row of the augmented matrix contains all the information of the corresponding equation of the system, we may deal with this augmented matrix instead of handling the whole system of linear equations.

The elementary operations on a system of linear equations are rephrased as the elementary row operations for the augmented matrix, as follows:

(1) multiply a nonzero constant throughout a row,
(2) interchange two rows,
(3) change a row by adding a constant multiple of another row.

The inverse operations are

(1) divide the row by the same constant,
(2) interchange the two rows again,
(3) change the row by subtracting the same constant multiple of the other row.

Definition 1.1 Two augmented matrices (or systems of linear equations) are said to be row-equivalent if one can be transformed to the other by a finite sequence of elementary row operations.

If a matrix B can be obtained from a matrix A in this way, then we can obviously recover A from B by applying the inverse elementary row operations in reverse order. Note again that an elementary row operation does not alter the solutions of the system, and we can formalize the above argument in the following theorem:

Theorem 1.1 If two systems of linear equations are row-equivalent, then they have the same set of solutions.

The general procedure for finding the solutions is illustrated in the following example:

Example 1.2 Solve the system of linear equations:

         2y + 4z =  2
     x + 2y + 2z =  3
    3x + 4y + 6z = −1.
Solution: We could work with the augmented matrix alone. However, to compare the operations on systems of linear equations with those on the augmented matrix, we work on the system and the augmented matrix in parallel. Note that the associated augmented matrix of the system is

    [ 0 2 4 |  2 ]
    [ 1 2 2 |  3 ]
    [ 3 4 6 | −1 ].

(1) Since the coefficient of x in the first equation is zero while that in the second equation is not, we interchange these two equations (rows):

     x + 2y + 2z =  3        [ 1 2 2 |  3 ]
         2y + 4z =  2        [ 0 2 4 |  2 ]
    3x + 4y + 6z = −1        [ 3 4 6 | −1 ]

(2) Add −3 times the first equation (row) to the third equation (row):

     x + 2y + 2z =  3        [ 1  2 2 |   3 ]
         2y + 4z =  2        [ 0  2 4 |   2 ]
        −2y      = −10       [ 0 −2 0 | −10 ]

The coefficient 1 of the first unknown x in the first equation (row) is called the pivot in this first elimination step. Now the second and the third equations involve only the two unknowns y and z. Leave the first equation (row) alone; the same elimination procedure can be applied to the second and the third equations (rows). The pivot for this step is the coefficient 2 of y in the second equation (row). To eliminate y from the last equation,

(3) Add 1 times the second equation (row) to the third equation (row):

     x + 2y + 2z =  3        [ 1 2 2 |  3 ]
         2y + 4z =  2        [ 0 2 4 |  2 ]
              4z = −8        [ 0 0 4 | −8 ]

The elimination process done so far to obtain this result is called a forward elimination: i.e., elimination of x from the last two equations (rows) and then elimination of y from the last equation (row). Now the pivots of the second and third rows are 2 and 4, respectively. To make these entries 1,

(4) Divide each row by its pivot:

     x + 2y + 2z =  3        [ 1 2 2 |  3 ]
          y + 2z =  1        [ 0 1 2 |  1 ]
               z = −2        [ 0 0 1 | −2 ]

The resulting matrix on the right side is called a row-echelon form of the matrix, and the 1's at the leftmost nonzero entries of the rows are called the leading 1's. The process so far is called a Gaussian elimination.

We now want to eliminate the numbers above the leading 1's:

(5) Add −2 times the third row to the second and the first rows:

     x + 2y      =  7        [ 1 2 0 |  7 ]
          y      =  5        [ 0 1 0 |  5 ]
               z = −2        [ 0 0 1 | −2 ]

(6) Add −2 times the second row to the first row:

     x           = −3        [ 1 0 0 | −3 ]
          y      =  5        [ 0 1 0 |  5 ]
               z = −2        [ 0 0 1 | −2 ]

This matrix is called the reduced row-echelon form.
The procedure that goes from a row-echelon form to the reduced row-echelon form is called the back substitution. The whole process of obtaining the reduced row-echelon form is called a Gauss-Jordan elimination. Notice that the system corresponding to this reduced row-echelon form is row-equivalent to the original one and is essentially in solved form: i.e., the solution is x = −3, y = 5, z = −2.

In general, a matrix in row-echelon form satisfies the following properties:

(1) The first nonzero entry of each row is 1, called a leading 1.
(2) A row containing only 0's comes after all rows with some nonzero entries.
(3) The leading 1's appear from left to right in successive rows. That is, the leading 1 in a lower row occurs farther to the right than the leading 1 in a higher row.

Moreover, a matrix in the reduced row-echelon form satisfies, in addition to the above three properties,

(4) Each column that contains a leading 1 has zeros everywhere else.

Note that an augmented matrix has only one reduced row-echelon form, while it may have many row-echelon forms. In any case, the number of nonzero rows containing leading 1's is equal to the number of columns containing leading 1's. The variables in the system corresponding to the columns with leading 1's in a row-echelon form are called the basic variables. In general, a row-echelon form U may have columns that do not contain leading 1's. The variables in the system corresponding to the columns without leading 1's are called free variables.
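The whole Gauss-Jordan procedure is easy to sketch in code. The following Python function (the name `rref` is ours) repeats the steps described above (find a pivot, interchange rows, scale the pivot row to get a leading 1, clear the rest of the column) and reproduces the solution x = −3, y = 5, z = −2 of Example 1.2:

```python
def rref(M):
    """Gauss-Jordan elimination: return the reduced row-echelon form of M,
    given and returned as a list of row lists."""
    A = [row[:] for row in M]              # work on a copy
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pivot is None:
            continue                       # no pivot: a free-variable column
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]   # interchange rows
        p = A[pivot_row][col]
        A[pivot_row] = [x / p for x in A[pivot_row]]      # make a leading 1
        for r in range(rows):              # clear the rest of the column
            if r != pivot_row and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

# the augmented matrix of Example 1.2
R = rref([[0, 2, 4, 2],
          [1, 2, 2, 3],
          [3, 4, 6, -1]])
assert R == [[1.0, 0.0, 0.0, -3.0],
             [0.0, 1.0, 0.0, 5.0],
             [0.0, 0.0, 1.0, -2.0]]
```

With exact rational input a symbolic type would avoid rounding; floats are enough for this illustration.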
Thus the sum of the number of basic variables and the number of free variables is precisely the total number of variables.

For example, the first two matrices below are in reduced row-echelon form, and the last two are just in row-echelon form:

    [ 1 0 0 ]   [ 0 1 5 0 5 ]   [ 1 2 8 2 ]   [ 1 7 2 5 ]
    [ 0 1 0 ] , [ 0 0 0 1 1 ] , [ 0 1 4 5 ] , [ 0 1 1 9 ]
    [ 0 0 0 ]   [ 0 0 0 0 0 ]   [ 0 0 1 1 ]   [ 0 0 1 5 ].

Notice that in an augmented matrix [A b], the last column b does not correspond to any variable. Hence, if we consider the four matrices above as augmented matrices for some systems, then the systems corresponding to the first and the last two augmented matrices have only basic variables but no free variables. In the system corresponding to the second augmented matrix, the second and the fourth variables, x2 and x4, are basic, and the first and the third variables, x1 and x3, are free variables. These ideas will be used in later chapters.

In summary, by applying a finite sequence of elementary row operations, the augmented matrix for a system of linear equations can be changed to its reduced row-echelon form, which is row-equivalent to the original one. From the reduced row-echelon form, we can decide whether the system has a solution, and find the solution of the given system if it has one.

Example 1.3 Solve the following system of linear equations by Gauss-Jordan elimination:

     x1 + 3x2 − 2x3        =  8
    2x1 + 6x2 − 2x3 + 4x4  = 28
          x2 +  x3 + 3x4   = 10.

Solution: The augmented matrix for the system is

    [ 1 3 −2 0 |  8 ]
    [ 2 6 −2 4 | 28 ]
    [ 0 1  1 3 | 10 ].

The Gaussian elimination begins with:

(1) Adding −2 times the first row to the second produces

    [ 1 3 −2 0 |  8 ]
    [ 0 0  2 4 | 12 ]
    [ 0 1  1 3 | 10 ].

(2) Note that the coefficient of x2 in the second equation is zero and that in the third equation is not. Thus, interchanging the second and the third rows produces

    [ 1 3 −2 0 |  8 ]
    [ 0 1  1 3 | 10 ]
    [ 0 0  2 4 | 12 ].

(3) The pivot in the third row is 2.
Thus, dividing the third row by 2 produces a row-echelon form

    [ 1 3 −2 0 |  8 ]
    [ 0 1  1 3 | 10 ]
    [ 0 0  1 2 |  6 ].

This is a row-echelon form, and we now continue the back substitution:

(4) Adding −1 times the third row to the second, and 2 times the third row to the first, produces

    [ 1 3 0 4 | 20 ]
    [ 0 1 0 1 |  4 ]
    [ 0 0 1 2 |  6 ].

(5) Finally, adding −3 times the second row to the first produces the reduced row-echelon form

    [ 1 0 0 1 | 8 ]
    [ 0 1 0 1 | 4 ]
    [ 0 0 1 2 | 6 ].

The corresponding system of equations is

    x1           +  x4 = 8
         x2      +  x4 = 4
              x3 + 2x4 = 6.

Since x1, x2, and x3 correspond to the columns containing leading 1's, they are the basic variables, and x4 is the free variable. Thus, by solving this system for the basic variables in terms of the free variable x4, we have the system of equations in solved form:

    x1 = 8 −  x4
    x2 = 4 −  x4
    x3 = 6 − 2x4.

By assigning an arbitrary value t to the free variable x4, the solutions can be written as

    (x1, x2, x3, x4) = (8 − t, 4 − t, 6 − 2t, t),  for any t ∈ R,

where R denotes the set of real numbers. □

Remark: Consider a homogeneous system

    a11 x1 + a12 x2 + ··· + a1n xn = 0
                   ···
    am1 x1 + am2 x2 + ··· + amn xn = 0,

with the number of unknowns greater than the number of equations: that is, m < n. Such a system always has a nontrivial solution.

1.4 Products of matrices

(1) For a 1×n row matrix a = [a1 a2 ··· an] and an n×1 column matrix x = [x1 x2 ··· xn]^T, the product ax is a 1×1 matrix (i.e., just a number):

    ax = a1x1 + a2x2 + ··· + anxn.

Note that the number of columns of the first matrix must be equal to the number of rows of the second matrix to have entrywise multiplications of the entries.

(2) For an m×n matrix A whose rows are the row vectors a1, a2, ..., am, and for an n×1 column matrix x = [x1 ··· xn]^T, the product Ax is by definition the m×1 matrix

    Ax = [ a1 x ]   [ a11 x1 + ··· + a1n xn ]
         [   ⋮  ] = [            ⋮           ]
         [ am x ]   [ am1 x1 + ··· + amn xn ],

which is an m×1 column matrix of the form b = [b1 ··· bm]^T. Therefore, for a system of m linear equations in n unknowns, by writing the unknowns as an n×1 column matrix x and the coefficients as an m×n matrix A, the system may be expressed as the matrix equation Ax = b. Notice that this looks just like the usual linear equation in one variable: ax = b.
(3) Product of matrices: Let A be an m×n matrix and B an n×r matrix. The product AB is defined to be the m×r matrix whose columns are the products of A and the columns of B in corresponding order. Thus, if A is m×n and B is n×r, then B has r columns, each of which is an n×1 matrix. If we denote them by b¹, ..., bʳ, or B = [b¹ b² ··· bʳ], then

    AB = [ Ab¹ Ab² ··· Abʳ ],

which is an m×r matrix. Therefore, the (i, j)-entry [AB]_ij of AB is the i-th entry of the j-th column matrix Ab^j; i.e., for i = 1, ..., m and j = 1, ..., r, it is the product of the i-th row of A and the j-th column of B:

    [AB]_ij = ai b^j = Σ_{k=1}^{n} aik bkj.

Example 1.4 Consider the matrices

    A = [ 2 3 ]        B = [ −1 2 5 ]
        [ 4 0 ],           [  2 0 1 ].

Then the columns of AB are

    Ab¹ = [ 2 3 ][ −1 ] = [  4 ],   Ab² = [ 4 ],   Ab³ = [ 13 ].
          [ 4 0 ][  2 ]   [ −4 ]          [ 8 ]          [ 20 ]

Therefore,

    AB = [  4 4 13 ]
         [ −4 8 20 ].

Since A is a 2×2 matrix and B is a 2×3 matrix, the product AB is a 2×3 matrix. If we concentrate, for example, on the (2,1)-entry of AB, we single out the second row from A and the first column from B, and then we multiply corresponding entries together and add them up: 4·(−1) + 0·2 = −4.

Note that the product AB of A and B is not defined if the number of columns of A and the number of rows of B are not equal.

Remark: In step (2), we could instead have defined the product of a 1×n row matrix and an n×r matrix B by the same rule as in step (1), one column of B at a time. Then in step (3) an appropriate modification produces the same definition of the product of matrices. We suggest the readers verify this (see Example 1.6).

The identity matrix of order n, denoted by I_n (or I if the order is clear from the context), is the diagonal matrix whose diagonal entries are all 1, i.e.,

    I_n = [ 1 0 ··· 0 ]
          [ 0 1 ··· 0 ]
          [   ⋮     ⋮ ]
          [ 0 0 ··· 1 ].

By a direct computation, one can easily see that A I_n = A = I_n A for any n×n matrix A. Many, but not all, of the rules of arithmetic for real or complex numbers also hold for matrices, with the operations of scalar multiplication, the sum, and the product of matrices. The matrix 0_{m×n} plays the role of the number 0, and I_n that of the number 1, in the set of real numbers.
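The entrywise formula [AB]_ij = Σ_k aik bkj can be sketched directly in code. The Python helper below (the name `matmul` and the sample matrices are ours) also checks the column-by-column description of AB given above:

```python
def matmul(A, B):
    """Product AB of two matrices given as lists of rows; the (i, j)-entry
    is the product of the i-th row of A and the j-th column of B."""
    assert len(A[0]) == len(B)   # columns of A must match rows of B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[2, 3], [4, 0]]
B = [[-1, 2, 5], [2, 0, 1]]
AB = matmul(A, B)
assert AB == [[4, 4, 13], [-4, 8, 20]]

# column rule: the j-th column of AB is A times the j-th column of B
for j in range(3):
    col = matmul(A, [[row[j]] for row in B])
    assert [r[0] for r in col] == [row[j] for row in AB]
```
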
"The rule that does not bold for matrices in general a the commutativity AB = BA of the product, while the commutativity of the matric sum A+B =B-+A does bold in georal. The following example iusteates the onoommutativity of the product of matrices o a wo-{4 3]. o4-[2 3] = Ind for any ramets it a= [3 9] oa 2=[ » CHAPTER 1. LINEAR EQUATIONS AND MATRICES ‘Thos the matrices A and B in thie example satly AB # BA. 5. ‘The falling theorem lists some rules of cxdinary arithmetic that do bold for matrix operations. ‘Theorem 1.4 Let A, B, C be arbitrary matrices for wih the matisop- erations below are defined, and lt be an arbitrary salar. Tn (2) A(BC) = (AB)C, (written as ABC) (Aasocitvig), (2) A{B4+C)= AB+ AC, and (A+ B)C = AC + BO, (Distributviy), (@) [A= A=Al, (4) MBC) = (kB)o = BOC), () (4B) = BPA”. Proof Each eqsity can be sown by direct ealatlons of each entry of Teh ale tte equalities, We strate ths by proving (1) oly, and ae ‘he oer tothe onde ‘Amue thet A= [og] an mx mse, = (ian mxp mates, sand © = fea) a per tute. We now computa th (henry ofeach Se ofthe eqution. Note hat BC ian nxr mae whan (6,)}-eay is (By = Es bac. 
Ths (ABOy = SoaulBWs= Sa $s dno = Fo Eouooy Sillary, AB i an mxp matrix withthe (,i}entry [Bly = Tay ei nd (ane = Diashasy = Edens = ES eubuey ‘Ths lay shows that [A(BO)y ((AB)Oly forall, j, and eonsoquently (AB)C as desired, 2 Prolem 18 Prove or dpm: IEA not a nro matrix and AB = AC, then Protlem 1.9 Show tht any thnguls mari A satising AAT = ATA i ag cal ate L4, PRODUCTS OP MATRICES a ProSlem 510 Bora aqure mati A how that (0) AAT and 4+ AT ae oyna, (@) A-AT Se dhow-eymmeti, and (9) A canbe expres a the sum of ymntse past B= (A+ AP) and arm Gomtsic wt C= HA A), that A= BE As en application of our results on matic operations, wo shall prove the folowing important theorem: ‘Theorem 1.5 Any system of linear option hes either no solution, eactly tone solution, or infitaly many solutions Prooft We have slready soen that a system of linoar equations may be ‘written as Ax=b, which may have no solution or exactly one solution. [Now assume thatthe systan ax of linear equations has mare than one colton and let x; aud xp be two diferent eolutions so that Ax = b and fxg = b. Let %9 =>, —%2 #0. Sinco Av is just a partiular ease of = matrix product, Theorem 1 gives ws “Als + va) = Ans + alo = B+ Ade = Axa) =b, for any real number b. This says that 2+ bx is ls a solution of x= b for any k Since thane are infinitely maay choiees fork, Ax = bss inftaly sary soltions. a Proilem 121 Foe which valu of does ech ofthe allowing stems have 20 ‘elation, exactly one ston or infiately roan eltions? zt y- ued oye er} af: % ete (@-1 = a42 1 : 2 eRe hee 2 CHAPTER 1. LINEAR EQUATIONS AND MATRICES 1.5 Block matrices 1a this tion we introduc soma lachniqu that wil ten be vary blpil in manipulating matrices. A woboontsix of «mati Ais antec obtained from A by deleting etain rows and/or eolmos of A. 
Using a system of horizontal and vertical lines, we can partition a matrix A into submatrices, called blocks, of A as follows. Consider a matrix

    A = [ a11 a12 a13 | a14 ]
        [ a21 a22 a23 | a24 ]
        [ ------------------]
        [ a31 a32 a33 | a34 ]

divided up into four blocks by the dotted lines shown. Now, if we write

    A11 = [ a11 a12 a13 ]    A12 = [ a14 ]
          [ a21 a22 a23 ],         [ a24 ],

    A21 = [ a31 a32 a33 ],   A22 = [ a34 ],

then A can be written as

    A = [ A11 A12 ]
        [ A21 A22 ],

called a block matrix.

The product of matrices partitioned into blocks also follows the matrix product formula, as if the Aij were numbers:

    AB = [ A11 A12 ][ B11 B12 ] = [ A11B11 + A12B21   A11B12 + A12B22 ]
         [ A21 A22 ][ B21 B22 ]   [ A21B11 + A22B21   A21B12 + A22B22 ],

provided that the number of columns in each Aik is equal to the number of rows in Bkj. This will be true only if the columns of A are partitioned in the same way as the rows of B.

It is not hard to see that the matrix product by blocks is correct. Suppose, for example, that we have a 3×3 matrix A partitioned as

    A = [ a11 a12 | a13 ]   = [ A11 A12 ]
        [ a21 a22 | a23 ]     [ A21 A22 ]
        [ ---------------]
        [ a31 a32 | a33 ]

and a 3×2 matrix B, which we partition as

    B = [ b11 b12 ]   = [ B11 ]
        [ b21 b22 ]     [ B21 ].
        [ ---------]
        [ b31 b32 ]

Then the entries of C = [cij] = AB are

    cij = (ai1 b1j + ai2 b2j) + ai3 b3j.

The quantity ai1 b1j + ai2 b2j is simply the (i, j)-entry of A11B11 if i ≤ 2, and the (i, j)-entry of A21B11 if i = 3. Similarly, ai3 b3j is the (i, j)-entry of A12B21 if i ≤ 2, and of A22B21 if i = 3. Thus AB can be written as

    AB = [ C11 ] = [ A11B11 + A12B21 ]
         [ C21 ]   [ A21B11 + A22B21 ].

In particular, if an m×n matrix A is partitioned into blocks of column vectors, i.e., A = [a¹ a² ··· aⁿ], where each block aʲ is the j-th column, then the product Ax with x = [x1 ··· xn]^T is the sum of the column vectors with coefficients xj:

    Ax = x1 a¹ + x2 a² + ··· + xn aⁿ.

Example 1.6 Let A be an m×n matrix partitioned into its row vectors a1, a2, ..., am as blocks, and let B be an n×r matrix, so that their product AB is well-defined. By considering the matrix B as a single block, the product AB can be written as

    AB = [ a1 ]     [ a1 B ]   [ a1b¹ a1b² ··· a1bʳ ]
         [ a2 ] B = [ a2 B ] = [          ⋮          ]
         [  ⋮ ]     [   ⋮  ]   [ amb¹ amb² ··· ambʳ ],
         [ am ]     [ am B ]

where b¹, b², ..., bʳ denote the columns of B. Hence, the row vectors of AB are the products of the row vectors of A with B. □
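The block-product formula can be checked numerically. The sketch below (assuming NumPy is available; the particular 3×3 and 3×2 matrices are arbitrary choices of ours) partitions A and B as in the text and compares the blockwise product with the ordinary one:

```python
import numpy as np

# partition a 3x3 matrix A as [[A11, A12], [A21, A22]] and a 3x2 matrix B
# as [[B11], [B21]]; the column split of A matches the row split of B
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
B = np.array([[1., 0.],
              [2., 1.],
              [0., 3.]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B21 = B[:2, :], B[2:, :]

top    = A11 @ B11 + A12 @ B21   # upper block of AB
bottom = A21 @ B11 + A22 @ B21   # lower block of AB
blockwise = np.vstack([top, bottom])
assert np.array_equal(blockwise, A @ B)
```
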
Problem 1.12 Compute AB using block multiplication, where

    A = [ … ],    B = [ … ].

1.6 Inverse matrices

As we saw in Section 1.4, a system of linear equations can be written as Ax = b in matrix form. This form resembles one of the simplest linear equations in one variable, ax = b, whose solution is simply x = a⁻¹b when a ≠ 0. Thus it is tempting to write the solution of the system as x = A⁻¹b. However, in the case of matrices we first have to give a precise meaning to A⁻¹. To discuss this, we begin with the following definition.

Definition 1.7 For an m×n matrix A, an n×m matrix B is called a left inverse of A if BA = I_n, and an n×m matrix C is called a right inverse of A if AC = I_m.

Example 1.7 From a direct calculation for the two matrices

    A = [ 1 0 1 ]        B = [ 1 0 ]
        [ 0 1 1 ],           [ 0 1 ]
                             [ 0 0 ],

we have AB = I2 and

    BA = [ 1 0 1 ]
         [ 0 1 1 ]  ≠  I3.
         [ 0 0 0 ]

Thus, the matrix B is a right inverse but not a left inverse of A, while A is a left inverse but not a right inverse of B. Since (AB)^T = B^T A^T and I^T = I, a matrix A has a right inverse if and only if A^T has a left inverse.

If A is a square matrix and has a left inverse, then we prove later (Theorem 1.8) that it also has a right inverse, and vice versa. Moreover, the following lemma shows that the left inverses and the right inverses of a square matrix are all equal. (This is not true for nonsquare matrices, of course.)

Lemma 1.6 If an n×n square matrix A has a left inverse B and a right inverse C, then B and C are equal, i.e., B = C.
However, wo shall ee in Chapter 3 that aay mxn matrie A with m 2 cannot have both aright inverse and ‘left iaveree: that i, = onquare matrix may have only a left inverse or aly ight inverae. To this eas, the matric may’ have many lf nverous or ‘many right inverse 10 Example 1.8 A nonsquare metric A= | 01 | can have more than one Do lef inverse, In fhe, for any =, y € R, one can ely choc thatthe matrix be |p? sfesueummcea Definition 1.8 An nxn equate matric A f said to be invertible (or nonsingular) if there exists square mire B ofthe same sna auch that AB=1= BA. ‘Such matr Bin called the Inverse of A and is denoted by A. A mateix ‘Assad tobe singular if ite not invertible, Note that Lenama 1.6 shows that if square mate A has bots oft and ght inverse, then It must be unique, That er call B tho" inverse nS PY eer, than i is easy to verify that : eats of A. Ror instance, consider a 2x2 matic A = wae 6 (CHAPTER 1. LINEAR EQUATIONS AND MATRICES since AAW! = Jp = AMA. (Check this product of matics for practee!) [Note that any zero matic i singular, Probe 1.19 Tat Abe aa verb matrix and ary none sea. Show that (0) Ae taveribleand (A) = A (0) he zich eevee ad (RA (9) AF invertible and (a7)=! = 2) ‘Thoorem 1.7 The product of inverible matrices is also invertible, whose Sncorae isthe product of the sndeldual inverse i reverse onder: (apy te Bact Proof: Suppose that A and B are invertible matrices ofthe same sie. Then (AB\(BA) = ABBA = ALA) = AA! wT, and silly (BA-1(AB) =. Thus AB bes the inverse BVA! o ‘We have written the inverse of A as “A to the power ~1", so we can xe the meaning of A! for any Integer k: Let A bo a equate matrix. Deine (3° = 1. Then, for any positive imager F, we define the power A of A Inductively as aba aca, ‘Moreover, if A ie invertible, then the negative integer power is defined as AHS (AYP for k>O. Its easy to check that with these rules we have AM = ANAC whenever the right hana side ie deine. 
(If A is not invertible, A^k is defined for k >= 0, but A^{-k} is not.)

Problem 1.14 Prove:
(1) If A has a zero row, so does AB.
(2) If B has a zero column, so does AB.
(3) Any matrix with a zero row or a zero column cannot be invertible.

Problem 1.15 Let A be an invertible matrix. Is it true that (A^k)^T = (A^T)^k for any integer k? Justify your answer.

1.7 Elementary matrices

We now return to the system of linear equations Ax = b. If A has a right inverse B such that AB = I_m, then x = Bb is a solution of the system since

Ax = A(Bb) = (AB)b = b.

In particular, if A is an invertible square matrix, then it has only one inverse A^{-1} by Lemma 1.6, and x = A^{-1}b is the only solution of the system. In this section, we discuss how to compute A^{-1} when A is invertible.

Recall that Gaussian elimination is a process in which the augmented matrix is transformed into its row-echelon form by a finite number of elementary row operations. In the following, we will show that each elementary row operation can be expressed as a nonsingular matrix, called an elementary matrix, and hence the process of Gaussian elimination is simply multiplying a finite sequence of corresponding elementary matrices to the augmented matrix.

Definition 1.9 A matrix E obtained from the identity matrix I_n by executing only one elementary row operation is called an elementary matrix.

For example, the following matrices are three elementary matrices, one corresponding to each type of the three elementary row operations:

[ 1  0 ]    multiplies the second row of I_2 by -5;
[ 0 -5 ]

[ 1 0 3 ]
[ 0 1 0 ]   adds 3 times the third row of I_3 to the first row;
[ 0 0 1 ]

[ 0 1 ]     interchanges the two rows of I_2.
[ 1 0 ]

It is an interesting fact that, if E is an elementary matrix obtained by executing a certain elementary row operation on the identity matrix I_m, then for any m x n matrix A, the product EA is exactly the matrix that is obtained when the same elementary row operation in E is executed on A. The following example illustrates this argument. (Note that AE is not what we want. For this, see Problem 1.17.)

Example 1.9 For simplicity, we work on a 3 x 1 column matrix b.
Suppose that we want to do the operation "adding (-2) x the first row to the second row" on the matrix b. Then, we execute this operation on the identity matrix I_3 first to get an elementary matrix E:

E = [  1 0 0 ]
    [ -2 1 0 ]
    [  0 0 1 ].

Multiplying the elementary matrix E to b on the left produces the desired result:

Eb = [  1 0 0 ] [ b_1 ]   [ b_1        ]
     [ -2 1 0 ] [ b_2 ] = [ b_2 - 2b_1 ].
     [  0 0 1 ] [ b_3 ]   [ b_3        ]

Similarly, the operation "interchanging the first and third rows" on the matrix b can be achieved by multiplying a permutation matrix P, which is an elementary matrix obtained from I_3 by interchanging two rows, to b on the left:

Pb = [ 0 0 1 ] [ b_1 ]   [ b_3 ]
     [ 0 1 0 ] [ b_2 ] = [ b_2 ].
     [ 1 0 0 ] [ b_3 ]   [ b_1 ]

Recall that each elementary row operation has an inverse operation, which is also an elementary operation, that brings the matrix back to the original one. Thus, suppose that E denotes an elementary matrix corresponding to an elementary row operation, and let E' be the elementary matrix corresponding to its "inverse" elementary row operation. Then:
(1) if E multiplies a row by c not equal to 0, then E' multiplies the same row by 1/c;
(2) if E interchanges two rows, then E' interchanges them again;
(3) if E adds a multiple of one row to another, then E' subtracts it back from the same row.

Thus, for any m x n matrix A, E'EA = A, and E'E = I = EE'. That is, every elementary matrix is invertible, so that E^{-1} = E', which is also an elementary matrix. For instance, if E adds (-2) x the first row to the second row, then E^{-1} adds 2 x the first row to the second row.

Definition 1.10 A permutation matrix is a square matrix obtained from the identity matrix by permuting the rows.

Problem 1.16 Prove:
(1) A permutation matrix is the product of a finite number of elementary matrices, each of which corresponds to a "row-interchanging" elementary row operation.
(2) Any permutation matrix P is invertible and P^{-1} = P^T.
(3) The product of any two permutation matrices is a permutation matrix.
(4) The transpose of a permutation matrix is also a permutation matrix.

Problem 1.17 Define the elementary column operations for a matrix by just replacing "row"
by "column" in the definition of the elementary row operations. Show that if E is an n x n elementary matrix obtained by executing an elementary column operation on I_n, then, for any m x n matrix A, AE is exactly the matrix obtained from A when the same column operation is executed on A.

The next theorem establishes some fundamental relationships between n x n square matrices and systems of n linear equations in n unknowns.

Theorem 1.8 Let A be an n x n matrix. The following are equivalent:
(1) A has a left inverse;
(2) Ax = 0 has only the trivial solution x = 0;
(3) A is row-equivalent to I_n;
(4) A is a product of elementary matrices;
(5) A is invertible;
(6) A has a right inverse.

Proof: (1) => (2): Let x be a solution of the homogeneous system Ax = 0, and let B be a left inverse of A. Then

x = I_n x = (BA)x = B(Ax) = B0 = 0.

(2) => (3): Suppose that the homogeneous system Ax = 0 has only the trivial solution x = 0. This means that the augmented matrix [A 0] of the system is reduced to [I_n 0] by Gauss-Jordan elimination. Hence A is row-equivalent to I_n.

(3) => (4): Assume A is row-equivalent to I_n, so that A can be reduced to I_n by a finite sequence of elementary row operations. Thus, we can find elementary matrices E_1, E_2, ..., E_k such that

E_k ... E_2 E_1 A = I_n.

Since E_1, E_2, ..., E_k are invertible, by multiplying both sides of this equation on the left successively by E_k^{-1}, ..., E_2^{-1}, E_1^{-1}, we obtain

A = E_1^{-1} E_2^{-1} ... E_k^{-1} I_n = E_1^{-1} E_2^{-1} ... E_k^{-1},

which expresses A as a product of elementary matrices.

(4) => (5) is trivial, because any elementary matrix is invertible. In fact, A^{-1} = E_k ... E_2 E_1.

(5) => (1) and (5) => (6) are trivial.

(6) => (5): If B is a right inverse of A, then A is a left inverse of B, and we can apply (1) => (2) => (3) => (4) => (5) to B and conclude that B is invertible, with A as its unique inverse. That is, A is the inverse of B, and so A is invertible. []

This theorem shows that a square matrix is invertible if it has a one-sided inverse. In particular, if a square matrix A is invertible, then x = A^{-1}b is the unique solution of the system Ax = b.
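The correspondence between row operations and left multiplication by elementary matrices can be sketched directly; the helper names below are ours, and the sample matrix is chosen only for illustration.

```python
import numpy as np

def row_scale(n, i, c):
    """Elementary matrix that multiplies row i of I_n by c != 0."""
    E = np.eye(n)
    E[i, i] = c
    return E

def row_swap(n, i, j):
    """Elementary (permutation) matrix that interchanges rows i and j of I_n."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def row_add(n, i, j, c):
    """Elementary matrix that adds c times row j to row i of I_n."""
    E = np.eye(n)
    E[i, j] = c
    return E

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
E = row_add(3, 1, 0, -3.0)   # add (-3) x row 1 to row 2
print(E @ A)                 # row 2 of A becomes [0, -2]
```

Note that each of these matrices is invertible by the corresponding inverse operation: for example, `row_add(3, 1, 0, 3.0)` undoes `row_add(3, 1, 0, -3.0)`, in line with E'E = I above.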
Problem 1.18 Find the inverse of the given product of matrices.

As an application of the preceding theorem, we give a practical method for finding the inverse A^{-1} of an invertible n x n matrix A. If A is invertible, there are elementary matrices E_1, E_2, ..., E_k such that E_k ... E_2 E_1 A = I_n. Hence,

A^{-1} = E_k ... E_2 E_1 = E_k ... E_2 E_1 I_n.

It follows that the sequence of row operations that reduces an invertible matrix A to I_n will also reduce I_n to A^{-1}. In other words, let [A | I] be the augmented matrix with the columns of A on the left half and the columns of I on the right half. A Gaussian elimination, applied to both sides by some elementary row operations, reduces the augmented matrix [A | I] to [U | K], where U is a row-echelon form of A. Next, the back substitution process by another series of elementary row operations reduces [U | K] to [I | A^{-1}]:

[A | I] -> [E_k ... E_1 A | E_k ... E_1] = [U | K] -> [F_l ... F_1 U | F_l ... F_1 K] = [I | A^{-1}],

where E_k ... E_1 represents the Gaussian elimination and F_l ... F_1 represents the back substitution. The following example illustrates the computation of an inverse matrix.

Example 1.10 Find the inverse of

A = [ 1 2 3 ]
    [ 2 3 5 ]
    [ 1 0 2 ].

We apply Gauss-Jordan elimination to

[A | I] = [ 1 2 3 | 1 0 0 ]   (-2) x row1 + row2
          [ 2 3 5 | 0 1 0 ]   (-1) x row1 + row3
          [ 1 0 2 | 0 0 1 ]

        -> [ 1  2  3 |  1 0 0 ]
           [ 0 -1 -1 | -2 1 0 ]
           [ 0 -2 -1 | -1 0 1 ]   (-2) x row2 + row3

        -> [ 1  2  3 |  1  0 0 ]
           [ 0 -1 -1 | -2  1 0 ]
           [ 0  0  1 |  3 -2 1 ].

This is [U | K] obtained by Gaussian elimination. Now continue the back substitution to reduce [U | K] to [I | A^{-1}]:

[ 1  2  3 |  1  0 0 ]   (-3) x row3 + row1
[ 0 -1 -1 | -2  1 0 ]   row3 + row2
[ 0  0  1 |  3 -2 1 ]

-> [ 1  2 0 | -8  6 -3 ]   2 x row2 + row1
   [ 0 -1 0 |  1 -1  1 ]
   [ 0  0 1 |  3 -2  1 ]

-> [ 1  0 0 | -6  4 -1 ]
   [ 0 -1 0 |  1 -1  1 ]   (-1) x row2
   [ 0  0 1 |  3 -2  1 ]

-> [ 1 0 0 | -6  4 -1 ]
   [ 0 1 0 | -1  1 -1 ]
   [ 0 0 1 |  3 -2  1 ].

Therefore, we get

A^{-1} = [ -6  4 -1 ]
         [ -1  1 -1 ]
         [  3 -2  1 ].

(The reader should verify that AA^{-1} = I = A^{-1}A.) []

Note that if A is not invertible, then, at some step in the Gaussian elimination, a zero row will show up on the left side of [U | K]. For example, the matrix

A = [  1 6  4 ]
    [  2 4 -1 ]
    [ -1 2  5 ]

is row-equivalent to

[ 1  6  4 ]
[ 0 -8 -9 ]
[ 0  0  0 ],

which is a noninvertible matrix.

Problem 1.19 Write A as a product of elementary matrices for the matrix A in Example 1.10, and find the inverse of A by using Gaussian elimination.
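The [A | I] -> [I | A^{-1}] procedure can be sketched as a short routine. The function name is ours, and the sketch adds partial pivoting (not used in the text's hand computation) so that a zero pivot does not stop the reduction prematurely.

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by reducing the augmented matrix [A | I] to [I | A^{-1}]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])
    for col in range(n):
        # Partial pivoting: bring the largest available pivot into position.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]            # scale the pivot row so the pivot is 1
        for r in range(n):               # eliminate the pivot column elsewhere
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]

A = [[1, 2, 3], [2, 3, 5], [1, 0, 2]]    # the matrix of Example 1.10
print(inverse_gauss_jordan(A))           # should agree with Example 1.10
```

Running it on the matrix of Example 1.10 reproduces the inverse obtained there by hand.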
Problem 1.20 Find the inverse of each of the given matrices.

By Theorem 1.8, a square matrix A is nonsingular if and only if Ax = 0 has only the trivial solution. That is, a square matrix A is singular if and only if Ax = 0 has a nontrivial solution, say x_0. Now, for any column vector b, if x_1 is a solution of Ax = b, then so is x_1 + kx_0 for any k, since

A(x_1 + kx_0) = Ax_1 + kAx_0 = b + 0 = b.

This argument strengthens Theorem 1.5 as follows when A is a square matrix.

Theorem 1.9 If A is an invertible n x n matrix, then for any column vector b = [b_1 ... b_n]^T the system Ax = b has exactly one solution x = A^{-1}b. If A is not invertible, then the system has either no solution or infinitely many solutions, according to whether or not the system is consistent.

Problem 1.21 Write the given system of linear equations in matrix form Ax = b, and solve it by finding A^{-1}.

1.8 LDU factorization

Recall that a basic method of solving a linear system Ax = b is by Gauss-Jordan elimination. For a fixed matrix A, if we want to solve more than one system Ax = b for various values of b, then the same Gaussian elimination on A has to be repeated over and over again. However, this repetition may be avoided by expressing Gaussian elimination as an invertible matrix which is a product of elementary matrices.

We first assume that no permutations of rows are necessary throughout the whole process of Gaussian elimination on [A b]. Then the forward elimination is just to multiply finitely many elementary matrices E_1, ..., E_k to the augmented matrix [A b]: that is,

[E_k ... E_1 A   E_k ... E_1 b] = [U c],

where each E_i is a lower triangular elementary matrix whose diagonal entries are all 1's, and [U c] is the augmented matrix of the system obtained after the forward elimination on Ax = b. (Note that U need not be an upper triangular matrix if A is not a square matrix.) Therefore, if we set L = (E_k ... E_1)^{-1} = E_1^{-1} ... E_k^{-1}, then A = LU and

c = E_k ... E_1 b = L^{-1}b.

Note that L is a lower triangular matrix whose diagonal entries are all 1's (see Problem 1.24).
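The bookkeeping just described, recording the forward-elimination multipliers in L while reducing A to U, can be sketched as follows. The helper names and the sample matrix are ours, and no row interchanges are attempted, matching the assumption above.

```python
import numpy as np

def lu_no_pivot(A):
    """A = LU by forward elimination, assuming no row interchanges are needed.
    L is unit lower triangular; its entries are the elimination multipliers."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), A.copy()
    for col in range(n - 1):
        for row in range(col + 1, n):
            m = U[row, col] / U[col, col]   # multiplier of the row operation
            U[row] -= m * U[col]
            L[row, col] = m                 # L records how to undo the operation
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution Lc = b, then back substitution Ux = c."""
    n = len(b)
    c = np.array(b, dtype=float)
    for i in range(n):                      # forward: c = L^{-1} b
        c[i] -= L[i, :i] @ c[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # backward: solve Ux = c
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 1.0, 0.0],
              [-2.0, 2.0, 1.0]])
L, U = lu_no_pivot(A)
print(lu_solve(L, U, [1.0, -2.0, 7.0]))
```

Once L and U are computed, each additional right-hand side b costs only the two substitution passes, which is the point of the factorization.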
Now, for any column matrix b, the system Ax = LUx = b can be solved in two steps: first compute c = L^{-1}b, which is forward elimination, and then solve Ux = c by back substitution. This means that, to solve the l systems Ax = b_i for i = 1, ..., l, we first find the matrices L and U such that A = LU by performing forward elimination on A, and then compute c_i = L^{-1}b_i for i = 1, ..., l. The solutions of Ax = b_i are now those of Ux = c_i.

Example 1.11 Consider the system of linear equations

[  2 1 1 ] [ x_1 ]   [  1 ]
[  4 1 0 ] [ x_2 ] = [ -2 ] = b.
[ -2 2 1 ] [ x_3 ]   [  7 ]

The elementary matrices for the Gaussian elimination of A are easily found to be

E_1 = [  1 0 0 ]    E_2 = [ 1 0 0 ]    E_3 = [ 1 0 0 ]
      [ -2 1 0 ],         [ 0 1 0 ],         [ 0 1 0 ]
      [  0 0 1 ]          [ 1 0 1 ]          [ 0 3 1 ],

so that

E_3 E_2 E_1 A = U = [ 2  1  1 ]
                    [ 0 -1 -2 ].
                    [ 0  0 -4 ]

Note that U is the matrix obtained from A after the forward elimination, and A = LU with

L = E_1^{-1} E_2^{-1} E_3^{-1} = [  1  0 0 ]
                                 [  2  1 0 ],
                                 [ -1 -3 1 ]

which is a lower triangular matrix with 1's on the diagonal. Now, the system Lc = b gives c = (1, -4, -4) by forward substitution, and back substitution in Ux = c gives the solution x = (-1, 2, 1). []

1.22 Write the given systems of equations as matrix equations Ax = b, and solve them by computing A^{-1}b.

1.23 Find the LDU factorization of each of the given matrices.

1.24 Solve Ax = b with A = LU, where L, U and b are given; that is, first find c satisfying Lc = b, and then x satisfying Ux = c.

1.25 For the given matrix A and column vector b:
(8) Show that i Aisa iiotent with A" =O, then IA is ier with einen IS 44 at 1L2Y, Aequre msi Aol to be gto AY A (0) Pind an example of an dempetent matrix oer than OZ (2) Show tat fame Ale both dempotent and inverse then A = I 1.28, Deteioe whether he fallow stetements te tue ofl, genera, and sostify your ensrers (0) Lit A nd B be row-ogiten square matrices Then A is invert sod only Bi verb (2) Lat Aba aquare mati och tat AA A. Then A i he ety (@) Ie and Bare ivetbe acm euch that A? = T and ~ , then (AB = Ba, (4) IFA and B sce invertible matron, A+ B ie elo iver, (6) 4, Band AB ere syameti then AB = BA. (0) Wand sresymmesric andthe ame sas, then AB ie ln symmet (7) Let AB = 1, Than Avert and oly iB inverse (8) fa square maze Ae not inverse then alter e AB or ay B. (0) 15) apd By are elementary mas then EBs = FE (20) The vere on invertible wer tanga east oper triangular. (0) Any invetible matise A canbe written as A= LU, where ZS lover tangle and Ue upper langle (02) Ife avertible and eymmeti, then Ai alin eye Chapter 2 Determinants 2.1 Basic properties of determinant (Our primary interest in Chapter 1 was Inthe saltailty or aoltions of system Ax = bof linger equations. Foran inectibla matrix A, Theorem L ‘shoms thatthe stem has-a unique olution x= 4h fo ab. ‘Now the question is how to decide whether of aot a aguare matric A le tavertbie. "Ta this section, we introduce the notion of determinant os ‘¢seal-valued function of square matriowe thet satisfies certain axiomatic ules, and then show that a aquace matrix Ais invertible and any ifthe eterminaat of A isnot zero. 
In fact, we saw in Chapter 1 that a 2 x 2 matrix

A = [ a b ]
    [ c d ]

is invertible if and only if ad - bc is not 0. This number is called the determinant of A, and is defined formally as follows.

Definition 2.1 For a 2 x 2 matrix A = [ a b ; c d ], the determinant of A is defined as

det A = ad - bc.

In fact, it turns out that geometrically the determinant of a 2 x 2 matrix A represents, up to sign, the area of the parallelogram in the xy-plane whose edges are constructed by the row vectors of A (see Theorem 2.9), so it would be very nice if we could have the same idea of determinant for higher order matrices. However, the formula itself in Definition 2.1 does not provide any clue of how to extend this idea of the determinant to higher order matrices. Hence, we first examine some fundamental properties of the determinant function defined in Definition 2.1:

(R1) det I_2 = det [ 1 0 ; 0 1 ] = 1.

(R2) Interchanging the two rows changes the sign of the determinant:

det [ c d ; a b ] = cb - da = -(ad - bc) = -det [ a b ; c d ].

(R3) The determinant is linear in the first row:

det [ a+a' b+b' ; c d ] = (a+a')d - (b+b')c = det [ a b ; c d ] + det [ a' b' ; c d ],
det [ ka kb ; c d ] = kad - kbc = k det [ a b ; c d ].

It turns out that these three properties characterize the determinant completely, and they can be used to define a determinant function for square matrices of any order.

Definition 2.2 A real-valued function f : M_{n x n}(R) -> R of all n x n square matrices is called a determinant if it satisfies the following three rules:
(R1) the value of f at the identity matrix is 1, i.e., f(I_n) = 1;
(R2) the value of f changes sign if any two rows are interchanged;
(R3) f is linear in the first row; that is, by definition,

f [ kr_1 + k'r'_1 ]       [ r_1 ]         [ r'_1 ]
  [ r_2          ]  = k f [ r_2 ]  + k' f [ r_2  ]
  [ ...          ]        [ ... ]         [ ...  ]
  [ r_n          ]        [ r_n ]         [ r_n  ],

where the r_i's denote the row vectors of a matrix.

It is already shown that det on 2 x 2 matrices satisfies these rules. We will show later that for each positive integer n there always exists such a function f : M_{n x n}(R) -> R satisfying the three rules in the definition, and, moreover, it is unique. Therefore, we say "the" determinant and designate it "det" in any order.

Let us first derive some direct consequences of the three rules in the definition (the readers are suggested to verify that det of 2 x 2 matrices also satisfies the following properties).

Theorem 2.1 The determinant satisfies the following properties.
(1) The determinant is linear in each row; i.e., for each row, the rule (R3) also holds.
(2) If A has either a zero row or two identical rows, then det A = 0.
(3) The elementary row operation that adds a constant multiple of one row to another row leaves the determinant unchanged.

Proof: (1) Any row can be placed in the first row with a change of sign in the determinant by rule (R2); then use rules (R3) and (R2) again.

(2) If A has a zero row, then that row is zero times the zero row, so det A = 0 by (1) with k = 0. If A has two identical rows, then interchanging these identical rows changes only the sign of the determinant, but not A itself. Thus we get det A = -det A, and hence det A = 0.

(3) By a direct computation using (1), we get

det [ ... ; r_i + kr_j ; ... ; r_j ; ... ] = det [ ... ; r_i ; ... ; r_j ; ... ] + k det [ ... ; r_j ; ... ; r_j ; ... ],

in which the second term on the right side is zero by (2). []

It is now easy to see the effect of the elementary row operations on evaluations of the determinant. The elementary row operation that "multiplies a row by a constant k" changes the determinant to k times the determinant, by (1) of Theorem 2.1. The rule (R2) in the definition explains the effect of the elementary row operation that "interchanges two rows". The last elementary row operation, which "adds a constant multiple of a row to another", is explained in (3) of Theorem 2.1.

Example 2.1 Consider the matrix

A = [ 1   1   1   ]
    [ a   b   c   ]
    [ b+c c+a a+b ].

If we add the second row to the third, then the third row becomes [a+b+c, a+b+c, a+b+c], which is a scalar multiple of the first row. Thus, det A = 0. []

Problem 2.1 Show that, for an n x n matrix A and k in R, det(kA) = k^n det A.

Problem 2.2 Explain why det A = 0 for each of the given matrices.

Recall that any square matrix can be transformed to an upper triangular matrix by forward elimination. Further properties of the determinant are obtained in the following theorem.

Theorem 2.2 The determinant satisfies the following properties.
(1) The determinant of a triangular matrix is the product of the diagonal entries.
(2) The matrix A is invertible if and only if det A is not 0.
(3) For any two n x n matrices A and B, det(AB) = det A det B.
(4) det A^T = det A.

Proof: (1) If A is a diagonal matrix, then it is clear that det A = a_11 a_22 ... a_nn by (1) of Theorem 2.1 and rule (R1). Suppose that A is a lower triangular matrix. Then a forward elimination, which does not change the determinant, produces a zero row if A has a zero diagonal entry, or makes A row-equivalent to the diagonal matrix D whose diagonal entries are exactly those of A if the diagonal entries are all nonzero. Thus, in the former case, det A = 0 = a_11 ... a_nn, and in the latter case, det A = det D = a_11 ... a_nn. Similar arguments apply when A is an upper triangular matrix.

(2) Note again that forward elimination reduces a square matrix A to an upper triangular matrix U, which has a zero row if A is singular and has no zero row if A is nonsingular (see Theorem 1.8). Since the operations involved change the determinant only by nonzero factors, det A is not 0 if and only if A is nonsingular.

(3) If A is not invertible, then AB is not invertible, and so det(AB) = 0 = det A det B. By the properties of the elementary matrices, it is clear that for any elementary matrix E, det(EB) = det E det B. If A is invertible, it can be written as a product of elementary matrices, say A = E_1 E_2 ... E_k. Then by induction on k, we get

det(AB) = det(E_1 E_2 ... E_k B)
        = det E_1 det E_2 ... det E_k det B
        = det(E_1 E_2 ... E_k) det B = det A det B.

(4) Clearly, A is not invertible if and only if A^T is not. Thus for a singular matrix A we have det A^T = 0 = det A. If A is invertible, then there is a factorization PA = LDU for a permutation matrix P. By (3), we get

det P det A = det L det D det U.

Note that the transpose of PA = LDU is A^T P^T = U^T D^T L^T, and that for any triangular matrix B, det B = det B^T by (1). In particular, since L, U, L^T and U^T are triangular with 1's on the diagonal, their determinants are all equal to 1. Therefore, we have

det A^T det P^T = det U^T det D^T det L^T = det D = det P det A.

By the definition, a permutation matrix P is obtained from the identity matrix by a sequence of row interchanges: that is, P = E_k ... E_1 I_n for some k, where each E_i is an elementary matrix obtained from the identity matrix by interchanging two rows. Thus, det E_i = -1, and clearly E_i^T = E_i = E_i^{-1}.
Therefore, det P^T = det P, and hence det A^T = det A. []

Remark: From the equality det A = det A^T, we could define the determinant in terms of columns instead of rows in Definition 2.2, and Theorem 2.1 is also true with "columns" instead of "rows".

Example 2.2 To evaluate the determinant of a given 4 x 4 matrix A, transform A by forward elimination into an upper triangular matrix U. Since the forward elimination does not change the determinant (except for sign changes caused by row interchanges), the determinant of A is simply the product of the diagonal entries of U, up to sign. []

Problem 2.3 Prove that if A is invertible, then det A^{-1} = 1/det A.

Problem 2.4 Evaluate the determinant of each of the given matrices.

2.2 Existence and uniqueness

Recall that det A = ad - bc defined in the previous section satisfies the three rules of Definition 2.2. Conversely, the following lemma shows that any function of M_{2x2}(R) into R satisfying the three rules (R1)-(R3) must be det, which implies the uniqueness of the determinant function on M_{2x2}(R).

Lemma 2.3 If f : M_{2x2}(R) -> R satisfies the three rules in Definition 2.2, then f(A) = ad - bc.

Proof: Write the rows of A = [ a b ; c d ] in terms of e_1 = (1, 0) and e_2 = (0, 1): the first row is ae_1 + be_2 and the second is ce_1 + de_2. Since f is linear in each row (this follows from the three rules, as in Theorem 2.1(1)),

f(A) = ac f [ e_1 ; e_1 ] + ad f [ e_1 ; e_2 ] + bc f [ e_2 ; e_1 ] + bd f [ e_2 ; e_2 ].

The first and the last terms vanish, since a matrix with two identical rows has f-value zero by (R2). Also, f [ e_1 ; e_2 ] = f(I_2) = 1 by (R1), and f [ e_2 ; e_1 ] = -1 by (R2). Therefore, f(A) = ad - bc. []

Therefore, when n = 2 there is only one function f on M_{2x2}(R) which satisfies the three rules: i.e., f = det.

Now for n = 3, the same calculation as in the case of n = 2 can be applied. That is, by repeated use of the three rules (R1)-(R3) as in the proof of Lemma 2.3, we can obtain the explicit formula for the determinant function on M_{3x3}(R) as follows:

det A = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32
      - a_11 a_23 a_32 - a_12 a_21 a_33 - a_13 a_22 a_31.

This expression of det A for a matrix A in M_{3x3}(R) satisfies the three rules. Therefore, for n = 3 it shows both the uniqueness and the existence of the determinant function on M_{3x3}(R).
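The six-term formula just derived is easy to check against a library routine; the function name and the sample matrix below are our own.

```python
import numpy as np

def det3(A):
    """The six-term formula for a 3 x 3 determinant derived above."""
    a = A
    return (a[0][0]*a[1][1]*a[2][2] + a[0][1]*a[1][2]*a[2][0]
          + a[0][2]*a[1][0]*a[2][1] - a[0][0]*a[1][2]*a[2][1]
          - a[0][1]*a[1][0]*a[2][2] - a[0][2]*a[1][1]*a[2][0])

A = [[2, 1, 3], [0, -1, 4], [1, 2, 1]]
print(det3(A))                                    # -11
print(np.linalg.det(np.array(A, dtype=float)))    # agrees up to rounding
```

Each of the six terms picks one entry from every row and every column, which is exactly the pattern the following discussion generalizes.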
Problem 2.5 Show that the above formula of the determinant for 3 x 3 matrices satisfies the three rules in Definition 2.2.

To get the formula of the determinant for matrices of order n > 3, the same computational process can be repeated using the three rules again, but the computation is going to be more complicated as the order gets higher. To derive the explicit formula for det A of order n > 3, we examine the above case in detail. In the process of deriving the explicit formula for det A of a 3 x 3 matrix A, we can observe the following three steps.

(1st) By using the linearity of the determinant function in each row, det A of a 3 x 3 matrix A is expanded as the sum of the determinants of 3^3 = 27 matrices. Except for exactly six matrices, all of them have zero columns, so that their determinants are zero (see the proof of Lemma 2.3).

(2nd) In each of these remaining six matrices, all entries are zero except for exactly three entries that came from the given matrix A. Indeed, no two of the three entries came from the same column or from the same row of A. In other words, in each row there is only one entry that came from A, and at the same time in each column there is only one entry that came from A. Actually, in each of the six matrices the three entries from A are chosen as follows: if the first entry is chosen from the first row and the third column of A, say a_13, then the other two entries in the product must be chosen from the second or the third row and the first or the second column. Thus, if the second entry is taken from the second row, the column it belongs to must be either the first or the second, i.e., either a_21 or a_22. If a_22 is taken, then the third entry must be, without option, a_31. Thus, the entries from A in the chosen matrix are a_13, a_22 and a_31.
Therefore, the three entries in each of the six remaining matrices are determined as follows: when the row indices (the first indices i of a_ij) are arranged in the order 1, 2, 3, the assignment of the column indices 1, 2, 3 (the second indices j of a_ij) to each of the row indices is simply a rearrangement of 1, 2, 3 without repetitions or omissions. In this way, one can recognize that the number 6 = 3! is simply the number of ways in which the three column indices 1, 2, 3 can be rearranged.

(3rd) The determinant of each of the six matrices may be computed by converting it into a diagonal matrix using suitable "column interchanges" (see the Remark after Theorem 2.2), so its determinant becomes plus or minus the product of its three nonzero entries, where the sign depends on the number of column interchanges. For example, for the matrix having the entries a_13, a_22 and a_31 from A, one can convert this matrix into a diagonal matrix in a couple of ways: for instance, one can take just one interchange of the first and the third columns, or take three interchanges: the first and the second, then the second and the third, and then the first and the second. In any case,

det [ 0    0    a_13 ]        [ a_13 0    0    ]
    [ 0    a_22 0    ] = -det [ 0    a_22 0    ] = -a_13 a_22 a_31.
    [ a_31 0    0    ]        [ 0    0    a_31 ]

Note that an interchange of two columns is the same as an interchange of the two corresponding column indices. As mentioned above, there may be several ways of column interchanges to convert the given matrix to a diagonal matrix. However, it is very interesting that, whatever ways of column interchanges we take, the parity of the number of column interchanges remains the same all the time.

In this example, the given arrangement of the columns is expressed in the arrangement of column indices, which is 3, 2, 1. Thus, to arrive at the order 1, 2, 3, which represents the diagonal matrix, we can take either just one interchange of 3 and 1, or three interchanges: 3 and 2, then 3 and 1, and then 2 and 1. In either case, the parity is odd, so that the "-" sign in the computation of the determinant came from (-1)^1
= (-1)^3, where the exponents mean the numbers of interchanges of the column indices.

In summary, in the expansion of det A for A in M_{3x3}(R), the number 6 = 3! of the determinants which contribute to the computation of det A is simply the number of ways in which the three numbers 1, 2, 3 can be rearranged without repetitions or omissions. Moreover, the sign of each of the six determinants is determined by the parity (even or odd) of the number of column interchanges required to arrive at the order 1, 2, 3 from the given arrangement of the column indices.

These observations can be used to derive the explicit formula of the determinant for matrices of order n > 3. We begin with the following definition.

Definition 2.3 A permutation of the set of integers N_n = {1, 2, ..., n} is a one-to-one function from N_n onto itself.

A permutation sigma of N_n assigns a number sigma(i) in N_n to each number i in N_n, and this permutation sigma is commonly denoted by the list of its values,

sigma = (j_1, j_2, ..., j_n),   where j_i = sigma(i).

Here, the positions 1, 2, ..., n give the usual lay-out of N_n as the domain set, and the list (j_1, j_2, ..., j_n) is just an arrangement, in a certain order without repetitions or omissions, of the numbers in N_n as the image set. A permutation that interchanges only two numbers in N_n, leaving the rest of the numbers fixed, such as sigma = (3, 2, 1, 4, ..., n), is called a transposition. Note that the composition of any two permutations is also a permutation. Moreover, the composition of a transposition with a permutation sigma produces an interchange of two numbers in the permutation sigma. In particular, the composition of a transposition with itself always produces the identity permutation.

It is not hard to see that if S_n denotes the set of all permutations of N_n, then S_n has exactly n! permutations.

Once we have listed all the permutations, the next step is to determine the sign of each permutation. A permutation sigma = (j_1, j_2, ..., j_n) is said to have an inversion if j_s > j_t for s < t (i.e., a larger number precedes a smaller number). For example, the permutation sigma = (3, 1, 2) has two inversions, since 3 precedes both 1 and 2.
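Counting inversions is mechanical, and the sign of a permutation is (-1) raised to its number of inversions. A minimal sketch (function names are ours):

```python
def inversions(perm):
    """Count pairs (s, t) with s < t but perm[s] > perm[t]."""
    return sum(1 for s in range(len(perm))
                 for t in range(s + 1, len(perm))
                 if perm[s] > perm[t])

def sgn(perm):
    """+1 for an even permutation, -1 for an odd one."""
    return 1 if inversions(perm) % 2 == 0 else -1

print(inversions((3, 1, 2)))   # 2, as in the text
print(sgn((3, 2, 1)))          # 3 inversions, so -1
```

The quadratic double loop is fine for the small cases considered here; an O(n log n) merge-sort count exists but is not needed.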
An inversion in a permutation can be eliminated by composing it with a suitable transposition: for example, if sigma = (3, 2, 1), with three inversions, then by composing a transposition with it we get (2, 3, 1), with two inversions, which is the same as interchanging the first two numbers 3, 2 in sigma. Therefore, given a permutation sigma = (sigma(1), sigma(2), ..., sigma(n)) in S_n, one can convert it to the identity permutation (1, 2, ..., n), the only one with no inversions, by composing it with a certain number of transpositions. For example, by composing three (which is the number of inversions in sigma) transpositions with sigma = (3, 2, 1), we get the identity permutation. However, the number of transpositions necessary to convert a given permutation into the identity permutation need not be unique, as we have seen in the third step. Notice that even if the number of necessary transpositions is not unique, its parity (even or odd) is always consistent with that of the number of inversions.

Recall that all we need in the computation of the determinant is just the parity (even or odd) of the number of column interchanges, which is the same as that of the number of inversions in the permutation of the column indices.

A permutation is said to be even if it has an even number of inversions, and it is said to be odd if it has an odd number of inversions. For example, when n = 3, the permutations (1, 2, 3), (2, 3, 1) and (3, 1, 2) are even, while the permutations (1, 3, 2), (2, 1, 3) and (3, 2, 1) are odd. In general, for a permutation sigma in S_n, the sign of sigma is defined as

sgn(sigma) = 1 if sigma is an even permutation, and -1 if sigma is an odd permutation.

It is not hard to see that the number of even permutations is equal to that of odd permutations, so each is n!/2. In the case n = 3, one can notice that there are three terms with the + sign and three terms with the - sign.

Problem 2.6 Show that the number of even permutations and the number of odd permutations in S_n are equal.

Now, we repeat the three steps to get an explicit formula for det A of a square matrix A = [a_ij] of order n.
First, the determinant det A can be expressed as the sum of the determinants of n! matrices, each of which has zero entries except for the n entries a_{1 sigma(1)}, a_{2 sigma(2)}, ..., a_{n sigma(n)} taken from A, where sigma is a permutation of the set {1, 2, ..., n} of column indices. The n entries a_{1 sigma(1)}, a_{2 sigma(2)}, ..., a_{n sigma(n)} are chosen from A in such a way that no two of them come from the same row or the same column. Such a matrix can be converted to a diagonal matrix. Hence, its determinant is equal to plus or minus a_{1 sigma(1)} a_{2 sigma(2)} ... a_{n sigma(n)}, where the sign is determined by the parity of the number of column interchanges needed to convert the matrix to a diagonal matrix, which is equal to that of the number of inversions in sigma: sgn(sigma). Therefore, the determinant of the matrix whose entries are all zero except for the a_{i sigma(i)}'s is equal to

sgn(sigma) a_{1 sigma(1)} a_{2 sigma(2)} ... a_{n sigma(n)},

which is called a signed elementary product of A. Now, our discussion can be summarized as follows:

Theorem 2.4 For an n x n matrix A,

det A = SUM over sigma in S_n of sgn(sigma) a_{1 sigma(1)} a_{2 sigma(2)} ... a_{n sigma(n)}.

That is, det A is the sum of all signed elementary products of A.

It is not difficult to see that this explicit formula for det A satisfies the three rules in the definition of the determinant. Therefore, we have both existence and uniqueness for the determinant function of square matrices of any order n >= 1.

Example 2.3 Consider the permutation sigma = (3, 4, 2, 5, 1) in S_5; i.e., sigma(1) = 3, sigma(2) = 4, ..., sigma(5) = 1. Then sigma has in total 2 + 4 = 6 inversions: two inversions caused by the position of sigma(1) = 3, which precedes 1 and 2, and four inversions in the permutation tau = (4, 2, 5, 1), which is a permutation of the set {1, 2, 4, 5}. Thus, sgn(sigma) = (-1)^{2+4} = (-1)^2 sgn(tau). Note that the permutation tau can be considered as a permutation of N_4 by replacing the numbers 4 and 5 by 3 and 4, respectively.

Moreover, sigma = (3, 4, 2, 5, 1) can be converted to (1, 3, 4, 2, 5) by shifting the number 1 forward by four transpositions, and then (1, 3, 4, 2, 5) can be converted to the identity permutation (1, 2, 3, 4, 5) by two transpositions. Hence, sigma can be converted to the identity permutation by six transpositions. []
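Theorem 2.4 can be taken literally as a computation; the sketch below (function names ours) sums all n! signed elementary products, so it is useful only as a check on small matrices, not as a practical algorithm.

```python
from itertools import permutations

def inversions(perm):
    """Count pairs (s, t) with s < t but perm[s] > perm[t]."""
    return sum(1 for s in range(len(perm))
                 for t in range(s + 1, len(perm))
                 if perm[s] > perm[t])

def det_by_permutations(A):
    """det A as the sum of all signed elementary products (Theorem 2.4)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        sign = -1 if inversions(sigma) % 2 else 1
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]       # one entry from each row and column
        total += sign * prod
    return total

print(det_by_permutations([[2, 1, 3], [0, -1, 4], [1, 2, 1]]))   # -11
```

For n = 3 this reproduces the six-term formula of Section 2.2 exactly.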
Problem 2.7 For a fixed j, 1 <= j <= n, let [v_1 v_2 ... v_n] denote an n x n matrix with the column vectors v_i, and let e_j denote the j-th standard basis vector written as a column. Show that

det [ e_j v_2 ... v_n ] = (-1)^{j+1} det M,

where M is the matrix obtained by deleting the j-th row and the first column. Note that the same kind of equality holds when the matrix is written in terms of its row vectors.

2.3 Cofactor expansion

Recall that the determinant of an n x n matrix A is the sum of all signed elementary products of A:

det A = SUM over sigma in S_n of sgn(sigma) a_{1 sigma(1)} a_{2 sigma(2)} ... a_{n sigma(n)}.

The first factor a_{1 sigma(1)} in each term can be any one of a_11, a_12, ..., a_1n in the first row of A. Among the n! terms in this sum, there are precisely (n-1)! permutations such that a_{1 sigma(1)} = a_11, i.e., sigma(1) = 1. The sum of those terms such that sigma(1) = 1 can be written as a_11 A_11, where

A_11 = (-1)^{1+1} SUM over tau of sgn(tau) a_{2 tau(2)} ... a_{n tau(n)},

summing over all permutations tau of the numbers {2, 3, ..., n}. The factor (-1)^{1+1} = 1 reflects that there is no extra inversion other than those of tau, since sigma(1) = 1 is in the first place. Note that each term in A_11 contains no entries from the first row or from the first column of A. Hence, all the terms of the sum in A_11 are the signed elementary products of the submatrix M_11 of A obtained by deleting the first row and the first column of A. Thus A_11 = (-1)^{1+1} det M_11.

Similarly, if a_{1 sigma(1)} is chosen to be a_1j, i.e., sigma(1) = j, then the sum of those terms can be written as a_1j A_1j with A_1j = (-1)^{1+j} det M_1j, since the position of sigma(1) = j causes j - 1 extra inversions. Collecting these sums, we get

det A = a_11 A_11 + a_12 A_12 + ... + a_1n A_1n,

and the same argument applied to any row gives

det A = a_i1 A_i1 + a_i2 A_i2 + ... + a_in A_in,

where A_ij = (-1)^{i+j} det M_ij and M_ij is the submatrix of A obtained by deleting the row and the column containing a_ij. Also, we can do the same with the column vectors because det A^T = det A. This gives the following theorem.

Theorem 2.6 Let A be an n x n matrix. Then,
(1) for each i, 1 <= i <= n,

det A = a_i1 A_i1 + a_i2 A_i2 + ... + a_in A_in,

called the cofactor expansion of det A along the i-th row;
(2) for each j, 1 <= j <= n,

det A = a_1j A_1j + a_2j A_2j + ... + a_nj A_nj,

called the cofactor expansion of det A along the j-th column.
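Theorem 2.6 translates directly into a recursive procedure. The sketch below (function name ours) expands along the first row; the singular sample matrix is our own choice.

```python
def det_cofactor(A):
    """Cofactor expansion along the first row (Theorem 2.6 with i = 1)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_{1j}: delete row 1 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        cofactor = (-1) ** j * det_cofactor(minor)   # (-1)**(1+j) in 1-based indices
        total += A[0][j] * cofactor
    return total

print(det_cofactor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # 0: the matrix is singular
```

Like the permutation expansion, this costs on the order of n! operations, which is why elimination-based evaluation is preferred in practice.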
The submatrix M_ij is called the minor of the entry a_ij, and the number A_ij = (-1)^{i+j} det M_ij is called the cofactor of a_ij. Therefore, the determinant of an n x n matrix A can be computed by multiplying the entries in any one row (or column) by their cofactors and adding the resulting products. As a matter of fact, the determinant could also be defined inductively by the cofactor expansion.

Example 2.4 Let

A = [ 1 2 3 ]
    [ 4 5 6 ]
    [ 7 8 9 ].

Then the cofactors of a_11, a_21 and a_31 are

A_11 = (-1)^{1+1} det [ 5 6 ; 8 9 ] = -3,
A_21 = (-1)^{2+1} det [ 2 3 ; 8 9 ] = 6,
A_31 = (-1)^{3+1} det [ 2 3 ; 5 6 ] = -3,

respectively. Hence, the expansion of det A along the first column is

det A = a_11 A_11 + a_21 A_21 + a_31 A_31 = 1*(-3) + 4*6 + 7*(-3) = 0. []

For a 3 x 3 matrix A, the cofactor expansion of det A along the second column has the following form:

det A = a_12 A_12 + a_22 A_22 + a_32 A_32
      = -a_12 det [ a_21 a_23 ; a_31 a_33 ] + a_22 det [ a_11 a_13 ; a_31 a_33 ] - a_32 det [ a_11 a_13 ; a_21 a_23 ].

As this formula suggests, in the cofactor expansion of det A along a row or a column, the evaluation of A_ij can be avoided whenever a_ij = 0, because the product a_ij A_ij is zero regardless of the value of A_ij. Therefore, it is beneficial to make the cofactor expansion along a row or a column that contains as many zero entries as possible. Moreover, by using the elementary row (or column) operations, a matrix A may be simplified into another one having more zero entries in a row or in a column. This kind of simplification generally gives the most efficient way to evaluate the determinant of a matrix. The next examples illustrate this method.

Example 2.5 To evaluate the determinant of a given 4 x 4 matrix A, first apply elementary row operations of the type "add a constant multiple of one row to another", which do not change the determinant, so as to produce zeros in the first column; then expand along the first column and repeat the process on the resulting 3 x 3 determinant until a 2 x 2 determinant is reached. []

Problem 2.9 Use cofactor expansions along a row or a column to evaluate the determinants of the given matrices.

Example 2.6 Show that

det A = (x-y)(x-z)(x-w)(y-z)(y-w)(z-w)   for   A = [ 1 x x^2 x^3 ]
                                                   [ 1 y y^2 y^3 ]
                                                   [ 1 z z^2 z^3 ]
                                                   [ 1 w w^2 w^3 ].

Solution: Use Gaussian elimination. To begin with, add (-1) x row 1 to rows 2, 3 and 4 of A:
Example 2.6 Show that

    det | 1  x  x^2  x^3 |
        | 1  y  y^2  y^3 |  =  (x - y)(x - z)(x - w)(y - z)(y - w)(z - w).
        | 1  z  z^2  z^3 |
        | 1  w  w^2  w^3 |

Solution: Use Gaussian elimination. To begin with, add (-1) x row 1 to rows 2, 3, and 4 of A, and then factor (y - x), (z - x), (w - x) from rows 2, 3, 4, respectively:

    det A = (y - x)(z - x)(w - x) det | 1  x  x^2  x^3              |
                                      | 0  1  y+x  y^2 + xy + x^2   |
                                      | 0  1  z+x  z^2 + xz + x^2   |
                                      | 0  1  w+x  w^2 + xw + x^2   |.

Now add (-1) x row 2 to rows 3 and 4, and factor (z - y) and (w - y) from rows 3 and 4:

    det A = (y - x)(z - x)(w - x)(z - y)(w - y) det | 1  x  x^2  x^3            |
                                                    | 0  1  y+x  y^2 + xy + x^2 |
                                                    | 0  0   1   x + y + z      |
                                                    | 0  0   1   x + y + w      |.

Finally, add (-1) x row 3 to row 4; the matrix becomes upper triangular with diagonal entries 1, 1, 1, w - z, and hence

    det A = (y - x)(z - x)(w - x)(z - y)(w - y)(w - z)
          = (x - y)(x - z)(x - w)(y - z)(y - w)(z - w),

since the two products differ by an even number (six) of sign changes. □

Problem 2.10 Let A be the Vandermonde matrix

    A = | 1  a1  a1^2  ...  a1^(n-1) |
        | 1  a2  a2^2  ...  a2^(n-1) |
        | .   .    .          .      |
        | 1  an  an^2  ...  an^(n-1) |.

Show that det A = prod over 1 <= i < j <= n of (a_j - a_i).

2.4 Cramer's rule

The cofactor expansion of the determinant gives a method for computing the inverse of an invertible matrix A. For i ≠ j, let A* be the matrix A with the j-th row replaced by the i-th row. Then the determinant of A* must be zero, because the entries of the i-th and j-th rows are the same. Moreover, the cofactors of A* with respect to the j-th row are the same as those of A: i.e., A*_jk = A_jk for all k = 1, ..., n. Therefore, we have

    0 = det A* = a_i1 A*_j1 + a_i2 A*_j2 + ... + a_in A*_jn
               = a_i1 A_j1 + a_i2 A_j2 + ... + a_in A_jn.

This proves the following lemma.

Lemma 2.7 a_i1 A_j1 + a_i2 A_j2 + ... + a_in A_jn = det A if i = j, and = 0 if i ≠ j.

Definition 2.4 If A is an n x n matrix and A_ij is the cofactor of a_ij, then the new matrix

    | A11 A12 ... A1n |
    | A21 A22 ... A2n |
    |  .   .        . |
    | An1 An2 ... Ann |

is called the matrix of cofactors of A. The transpose of this matrix is called the adjoint of A and is denoted by adj A.

It follows from Lemma 2.7 that

    A adj A = | det A    0   ...    0   |
              |   0    det A ...    0   |  =  (det A) I.
              |   .      .          .   |
              |   0      0   ...  det A |

If A is invertible, then det A ≠ 0 and we may write A ((1/det A) adj A) = I. Thus

    A^(-1) = (1/det A) adj A,   and   A = (1/det A^(-1)) adj A^(-1)

by replacing A with A^(-1).

Example 2.7 For a matrix A = | a b ; c d |, the adjoint is adj A = | d -b ; -c a |, and if det A = ad - bc ≠ 0, then

    A^(-1) = 1/(ad - bc) |  d -b |
                         | -c  a |.

Problem 2.11 Compute adj A and A^(-1) for A = | 2 1 ; 5 3 |.

Problem 2.12 Show that A is invertible if and only if adj A is invertible, and that if A is invertible, then (adj A)^(-1) = A / det A = adj(A^(-1)).

Problem 2.13 Let A be an n x n matrix with n >= 2. Show that
(1) det(adj A) = (det A)^(n-1);
(2) adj(adj A) = (det A)^(n-2) A, if A is invertible.
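As a quick sanity check on the adjoint formula, here is a small sketch (ours, not the book's) that builds adj A from cofactors and verifies A adj A = (det A)I on a 2 x 2 example.

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([r[:j] + r[j+1:] for r in A[1:]]) for j in range(len(A)))

def cofactor(A, i, j):
    """Cofactor A_ij = (-1)^(i+j) times the (i, j) minor of A."""
    M = [r[:j] + r[j+1:] for k, r in enumerate(A) if k != i]
    return (-1) ** (i + j) * det(M)

def adj(A):
    """Adjoint: the transpose of the matrix of cofactors (Definition 2.4)."""
    n = len(A)
    return [[cofactor(A, j, i) for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
print(adj(A))  # [[4, -2], [-3, 1]]
print(det(A))  # -2, so A^{-1} = -(1/2) * adj(A)
```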
Theorem 2.8 (Cramer's rule) Let Ax = b be a system of n linear equations in n unknowns such that det A ≠ 0. Then the system has the unique solution given by

    x_j = det C_j / det A,   j = 1, 2, ..., n,

where C_j is the matrix obtained from A by replacing the j-th column with the column vector b = [b1 b2 ... bn]^T.

Proof: If det A ≠ 0, then A is invertible and x = A^(-1) b = (1/det A)(adj A) b is the unique solution of Ax = b. Since the j-th entry of (adj A) b is b1 A_1j + b2 A_2j + ... + bn A_nj, it follows that

    x_j = (b1 A_1j + b2 A_2j + ... + bn A_nj) / det A = det C_j / det A,

because the numerator is precisely the cofactor expansion of det C_j along its j-th column. □

Example 2.8 Use Cramer's rule to solve

    x1 + 2x2 +  x3 = 4
    2x1 + x2 +  x3 = 3
    x1 +  x2 + 2x3 = 5.

Solution: Here

    det A = det | 1 2 1 |  = -4,
                | 2 1 1 |
                | 1 1 2 |

and det C1 = 0, det C2 = -4, det C3 = -8. Therefore,

    x1 = 0/(-4) = 0,   x2 = (-4)/(-4) = 1,   x3 = (-8)/(-4) = 2. □

Cramer's rule provides a convenient method for writing down the solution of a system of n linear equations in n unknowns in terms of determinants. To find the solution, however, one must evaluate n + 1 determinants of order n. Evaluating even two of these determinants generally involves more computations than solving the system by Gauss-Jordan elimination.

Problem 2.14 Use Cramer's rule to solve the system

    2x +  y      = 3
     x -  y +  z = 0
          y + 2z = 4.

Problem 2.15 Let A be the matrix obtained from the identity matrix I_n by replacing its k-th column with the column vector x = [x1 x2 ... xn]^T. Compute det A.

Problem 2.16 Prove that if A_ij is the cofactor of a_ij in A = [a_ij] and i ≠ j, then

    a_1i A_1j + a_2i A_2j + ... + a_ni A_nj = 0

(the column version of Lemma 2.7).
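Cramer's rule is easy to put into code. The sketch below (our illustration, using exact rational arithmetic from Python's fractions module) forms each C_j by replacing the j-th column of A with b.

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([r[:j] + r[j+1:] for r in A[1:]]) for j in range(len(A)))

def cramer(A, b):
    """Solve Ax = b via x_j = det(C_j) / det(A); requires det(A) != 0."""
    d = det(A)
    sol = []
    for j in range(len(A)):
        # C_j: replace column j of A with the vector b
        Cj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        sol.append(Fraction(det(Cj), d))
    return sol

# A sample 3 x 3 system with integer data:
print(cramer([[1, 2, 1], [2, 1, 1], [1, 1, 2]], [4, 3, 5]))
# [Fraction(0, 1), Fraction(1, 1), Fraction(2, 1)], i.e. x = (0, 1, 2)
```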
2.5 Application: Area and Volume

In this section, we restrict our attention to the cases n = 2 or 3 in order to visualize the geometric figures conveniently, even if the same argument can be applied for n > 3. For an n x n square matrix A, the row vectors r1, ..., rn of A can be considered as elements in R^n. The set

    P(A) = { k1 r1 + k2 r2 + ... + kn rn : 0 <= ki <= 1 }

is called a parallelogram if n = 2, or a parallelepiped if n >= 3. Note that the row vectors of A form the edges of P(A), and a different order of the row vectors does not alter the shape of P(A). A geometrical meaning of the determinant is that it represents the volume (or the area for n = 2) of the parallelepiped P(A).

Theorem 2.9 The determinant det A of an n x n matrix A is the volume of P(A) up to sign. In fact, the volume of P(A) is equal to |det A|.

Proof: We present here a geometrical sketch, since this way seems more intuitive and more convincing. We give only the proof of the case n = 2, and leave the case n = 3 to the readers. Let

    A = | r1 |  =  | a b |
        | r2 |     | c d |,

where r1, r2 are the row vectors of A, and write Area(r1, r2) for the area of the parallelogram P(r1, r2) (see the figure below).

(1) It is quite clear that if A = I2, then Area(e1, e2) = 1, since P(A) is the unit square.

(2) Since the shape of the parallelogram P(A) does not depend on the order of placing the row vectors, i.e., P(r1, r2) = P(r2, r1), we have Area(r1, r2) = Area(r2, r1). On the other hand, det(r2, r1) = -det(r1, r2). Thus the area can agree with the determinant only up to sign, which explains why we say "up to sign."

(3) From the figure above, if we replace r1 by k r1 in A, then the bottom edge r1 of P(A) is elongated by the factor |k| while the height remains unchanged. Thus Area(k r1, r2) = |k| Area(r1, r2).

(4) The additivity in the first row is an easy consequence of examining the following figure: if we replace r1 by r1 + r1', then, as the figure shows,

    Area(r1 + r1', r2) = Area(r1, r2) + Area(r1', r2).

(5) Thus the function Area on the 2 x 2 matrices satisfies, up to sign, the rules that characterize the determinant function. Therefore, by the uniqueness of the determinant, Area(A) = |det A|. □

Remark: (1) Note that if we had constructed the parallelepiped P(A) using the column vectors of A, then its shape could be totally different from the one constructed using the row vectors. However, since det A = det A^T, the volumes in the two cases are the same, which is not at all obvious geometrically.

(2) For n >= 3, the volume of P(A) can be defined by induction on n, and exactly the same argument as in the proof can be applied to show that the volume is the determinant up to sign. However, there is another way of looking at this fact. Let {c1, ..., cn} be the n column vectors of an m x n matrix A.
They constitute an n-dimensional parallelepiped in the m-space R^m:

    P(A) = { k1 c1 + ... + kn cn : 0 <= ki <= 1 }.

A formula for the volume of this parallelepiped may be expressed as follows. We first consider a two-dimensional parallelepiped (a parallelogram) determined by two column vectors c1 and c2 of A = [c1 c2] in R^m. The area of this parallelogram is simply Area(P(A)) = ||c1|| h, where h = ||c2|| sin θ and θ is the angle between c1 and c2. Therefore, we have

    Area(P(A))^2 = ||c1||^2 ||c2||^2 sin^2 θ = ||c1||^2 ||c2||^2 (1 - cos^2 θ)
                 = ||c1||^2 ||c2||^2 - (c1 · c2)^2
                 = (c1 · c1)(c2 · c2) - (c1 · c2)^2
                 = det | c1 · c1   c1 · c2 |
                       | c2 · c1   c2 · c2 |
                 = det(A^T A).

In general, let c1, ..., cn be the n column vectors of an m x n matrix A. Then one can show (for a proof, see Exercise 8.16) that the volume of the n-dimensional parallelepiped P(A) determined by those n column vectors ci in R^m is

    vol(P(A)) = sqrt(det(A^T A)).

In particular, if A is an n x n square matrix, then

    vol(P(A)) = sqrt(det(A^T A)) = sqrt(det(A^T) det(A)) = |det A|,

as expected.

Problem 2.17 Show that the area of a triangle ABC in the plane R^2, where A = (x1, y1), B = (x2, y2), C = (x3, y3), is equal to the absolute value of

    (1/2) det | x1 y1 1 |
              | x2 y2 1 |
              | x3 y3 1 |.
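The formula vol(P(A)) = sqrt(det(A^T A)) is straightforward to compute. In this sketch (ours, not the book's), the Gram matrix A^T A is formed from dot products of the column vectors, so it works for n vectors in R^m with n <= m.

```python
import math

def gram_volume(cols):
    """Volume of the parallelepiped spanned by the given vectors in R^m,
    computed as sqrt(det(G)) where G[i][j] = c_i . c_j is the Gram matrix."""
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] *
                   det([r[:j] + r[j+1:] for r in M[1:]]) for j in range(len(M)))
    n = len(cols)
    G = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(n)]
         for i in range(n)]
    return math.sqrt(det(G))

# Area of the parallelogram in R^3 with edges (3, 0, 0) and (0, 4, 0):
print(gram_volume([(3, 0, 0), (0, 4, 0)]))  # 12.0
```

For a square matrix this agrees with |det A|; for example, gram_volume([(1, 2), (3, 4)]) returns 2.0 = |1·4 - 2·3|.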
2.6 Exercises

2.1. Determine the values of k for which det | k 2 ; 3 k-1 | = 0.

2.2. Evaluate det(A^2 B A^(-1)) and det(B^(-1) A^3) for the matrices

    A = | 1 0 2 |        B = | 2 0 0 |
        | 0 1 1 |,           | 1 1 0 |.
        | 1 1 0 |            | 0 3 1 |

2.3. Evaluate the determinant of

    | 1  a  b+c |
    | 1  b  c+a |.
    | 1  c  a+b |

2.4. Evaluate det A for the n x n matrix A = [a_ij] with a_ij = 1 for i ≠ j and a_ii = 0.

2.5. Find all values of x for which det(AB) = 0, where

    A = | x 1 |        B = | x 0 |
        | 2 x |,           | 1 1 |.

2.6. Prove that if A is an n x n skew-symmetric matrix and n is odd, then det A = 0.

2.7. Use the determinant function to find
(1) the area of the parallelogram with edges determined by (2, 3) and (7, 8);
(2) the volume of the parallelepiped with edges determined by the vectors (1, 0, 4), (0, -2, 2) and (3, 1, 1).

2.8. Use Cramer's rule to solve each system:
(1) x + 2y = 1, 3x - y = 2;
(2) 2x + y + z = 4, x - y + 2z = 2, x + 2y - z = 1.

2.9. Use Cramer's rule to solve the given system:

    | 1 2 | | x |  =  | 3 |
    | 2 4 | | y |     | 6 |.

If the system has more than one solution, is it possible to apply Cramer's rule?

2.10. Solve the following system of linear equations by using Cramer's rule:

     x +  y -  z = 0
    2x -  y +  z = 3
     x + 2y +  z = 3.

2.11. Calculate the cofactors A11, A12, A21 and A33 for the matrices

    (1) A = | 1 0 2 |    (2) A = | 2 1 0 |    (3) A = | 1 2 3 |
            | 0 1 1 |,           | 1 0 1 |,           | 0 1 2 |.
            | 2 1 0 |            | 0 1 2 |            | 3 0 1 |

2.12. Let A be the n x n matrix whose entries are all 1. Show that
(1) det(A - n I_n) = 0;
(2) (A - n I_n)_ij = (-1)^(n-1) n^(n-2) for all i, j, where (A - n I_n)_ij denotes the cofactor of the (i, j)-entry of A - n I_n.

2.13. Show that if A is symmetric, then so is adj A. Moreover, if A is an invertible symmetric matrix, then the inverse of A is symmetric.
2.14. Use the adjoint formula to compute the inverses of each of the following matrices:

    A = | cos θ  0  -sin θ |        B = | 1 0 1 |
        |   0    1    0    |,           | 2 1 0 |.
        | sin θ  0   cos θ |            | 0 1 1 |

2.15. Compute adj A, det A, det(adj A) and A^(-1), and verify A adj A = (det A) I, for

    (1) A = | 1 2 |    (2) A = | 2 1 |
            | 3 4 |,           | 5 3 |.

2.16. Let A, B be invertible matrices. Show that adj(AB) = (adj B)(adj A). (The reader may also try to prove this equality for noninvertible matrices.)

2.17. For an m x n matrix A and an n x m matrix B, show that

    det |  O   A  |  =  det(AB),
        | -B  I_n |

where O is the m x m zero matrix.

2.18. Find
(1) the area of the triangle with vertices (0, 0), (1, 3) and (4, 2);
(2) the volume of the tetrahedron with vertices (0, 0, 0), (0, 0, 2), (2, 2, 1) and (1, 0, 0).

2.19. For square matrices A and B of the same size, show that

    det | A B |  =  det(A + B) det(A - B).
        | B A |

2.20. Determine whether the following statements are true or false, and justify your answers.
(1) For A, B, C, D in M_nxn(R), det | A B ; C D | = det A det D - det B det C.
(2) For any square matrices A and B of the same size, det(AB) = det(BA).
(3) If A is an n x n square matrix, then for any scalar c, det(c I_n - A) = c^n - det A.
(4) If A is a square matrix, then for any scalar c, det(c I_n - A^T) = det(c I_n - A).
(5) If E is an elementary matrix, then det E = ±1.
(6) There is no matrix A of order 3 such that A^2 = -I_3.
(7) Let A be a nilpotent matrix, i.e., A^k = 0 for some natural number k. Then det A = 0.
(8) det(kA) = k det A for any square matrix A.
(9) Any system Ax = b has a solution if and only if det A ≠ 0.
(10) For any column vectors u and v in R^n, n >= 2, det(u v^T) = 0.
(11) If A is a square matrix with det A = 1, then adj(adj A) = A.
(12) If the entries of A are all integers and det A = 1 or -1, then the entries of A^(-1) are all integers.
(13) If the entries of A are 0's or 1's, then det A = 1, 0, or -1.
(14) Every system of n linear equations in n unknowns can be solved by Cramer's rule.
(15) If A is a permutation matrix, then A^T = A.

Chapter 3 Vector Spaces

3.1 Vector spaces and subspaces

We discussed how to solve a system Ax = b of linear equations, and we saw that the basic questions of the existence or uniqueness of the solution were much easier to answer after Gaussian elimination. In this chapter, we introduce the notion of a vector space, which is an abstraction of the usual algebraic structure of the 3-space R^3, and then elaborate our study of a system of linear equations in this framework.

Usually, many physical quantities, such as length, area, mass, and temperature, are described by real numbers as magnitudes. Other physical quantities, like force or velocity, have directions as well as magnitudes. Such quantities with direction are called vectors, while the numbers are called scalars. For instance, an element (or point) x in the 3-space R^3 is usually represented as a triple of real numbers

    x = (x1, x2, x3),

where the xi in R, i = 1, 2, 3, are called the coordinates of x. This expression provides a rectangular coordinate system in a natural way. On the other hand, pictorially such a point in the 3-space R^3 can also be represented by an arrow from the origin to x. In this way, a point in the 3-space R^3 can be understood as a vector. The direction of the arrow specifies the direction of the vector, and the length of the arrow describes its magnitude.

In order to have a more general definition of vectors, we extract the most basic properties of those arrows in R^3. Note that for all vectors (or points) in R^3, there are two algebraic operations: the addition of any two vectors and the multiplication of a vector by a scalar. That is, for two vectors x = (x1, x2, x3), y = (y1, y2, y3) in R^3 and a scalar k, we define

    x + y = (x1 + y1, x2 + y2, x3 + y3),
    kx = (kx1, kx2, kx3).
The addition of vectors and the scalar multiplication of a vector in the 3-space R^3 are illustrated by the familiar arrow diagrams: the parallelogram law for x + y, and stretching by the factor k for kx.

Even though our geometric visualization of vectors does not go beyond the 3-space R^3, it is possible to extend the above algebraic operations of vectors in the 3-space R^3 to the general n-space R^n for any positive integer n. R^n is defined to be the set of all ordered n-tuples (x1, x2, ..., xn) of real numbers, called vectors: i.e.,

    R^n = { (x1, x2, ..., xn) : xi in R, i = 1, 2, ..., n }.

For any two vectors x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn) in the n-space R^n, and a scalar k, the sum x + y and the scalar multiplication kx of them are vectors in R^n defined by

    x + y = (x1 + y1, x2 + y2, ..., xn + yn),
    kx = (kx1, kx2, ..., kxn).

It is easy to verify the following list of arithmetic rules of these operations.

Theorem 3.1 For any scalars k and l, and vectors x = (x1, ..., xn), y = (y1, ..., yn) and z = (z1, ..., zn) in the n-space R^n, the following rules hold:
(1) x + y = y + x,
(2) x + (y + z) = (x + y) + z,
(3) x + 0 = x = 0 + x,
(4) x + (-1)x = 0,
(5) k(x + y) = kx + ky,
(6) (k + l)x = kx + lx,
(7) k(lx) = (kl)x,
(8) 1x = x,
where 0 = (0, ..., 0) is the zero vector.

We usually write a vector (a1, a2, ..., an) in the n-space R^n as an n x 1 column matrix

    (a1, a2, ..., an) = [a1 a2 ... an]^T,

also called a column vector. Then the two operations of the matrix sum and the scalar multiplication of column matrices coincide with those of vectors in R^n, and the above theorem is just Theorem 1.2.

These rules of arithmetic of vectors are the most important ones, because they are the only rules that we need to manipulate vectors in the n-space R^n. Hence, an (abstract) vector space can be defined with respect to these rules of operations of vectors in the n-space R^n, so that R^n itself becomes a vector space. In general, a vector space is defined to be a set with two operations, an addition and a scalar multiplication, which satisfy the above rules of operations in R^n.
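The rules of Theorem 3.1 can be spot-checked numerically. The snippet below (our own aside, with vectors in R^n modeled as Python tuples) verifies several of the rules on sample data; of course, a few numerical checks do not replace the proof.

```python
def add(x, y):
    """Vector addition in R^n."""
    return tuple(a + b for a, b in zip(x, y))

def scale(k, x):
    """Scalar multiplication in R^n."""
    return tuple(k * a for a in x)

x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 9)
k, l = 2, 5

assert add(x, y) == add(y, x)                                # rule (1)
assert add(x, add(y, z)) == add(add(x, y), z)                # rule (2)
assert scale(k, add(x, y)) == add(scale(k, x), scale(k, y))  # rule (5)
assert scale(k + l, x) == add(scale(k, x), scale(l, x))      # rule (6)
assert scale(k, scale(l, x)) == scale(k * l, x)              # rule (7)
print("all checks passed")
```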
Definition 3.1 A (real) vector space is a nonempty set V of elements, called vectors, with two algebraic operations that satisfy the following rules.

(A) There is an operation called vector addition that associates to every pair x and y of vectors in V a unique vector x + y in V, called the sum of x and y, so that the following rules hold for all vectors x, y, z in V:
(1) x + y = y + x (commutativity of addition),
(2) x + (y + z) = (x + y) + z (associativity of addition),
(3) there is a unique vector 0 in V such that x + 0 = x for all x in V (it is called the zero vector),
(4) for any x in V, there is a vector -x in V, called the negative of x, such that x + (-x) = (-x) + x = 0.

(B) There is an operation called scalar multiplication that associates to each vector x in V and each scalar k a unique vector kx in V, called the multiplication of x by the (real) scalar k, so that the following rules hold for all vectors x, y in V and all scalars k, l in R:
(5) k(x + y) = kx + ky (distributivity with respect to vector addition),
(6) (k + l)x = kx + lx (distributivity with respect to scalar addition),
(7) k(lx) = (kl)x (associativity of scalar multiplication),
(8) 1x = x.

Clearly, the n-space R^n is a vector space by Theorem 3.1. A complex vector space is obtained if, instead of real numbers, we take complex numbers for scalars. For example, the set C^n of all ordered n-tuples of complex numbers is a complex vector space. In Chapter 7 we shall discuss complex vector spaces, but until then we will discuss only real vector spaces unless otherwise stated.

Example 3.1 (1) For any positive integers m and n, the set M_mxn(R) of all m x n matrices forms a vector space under the matrix sum and scalar multiplication defined in Chapter 1. The zero vector in this space is the zero matrix O_mxn, and -A is the negative of a matrix A.

(2) Let C(R) denote the set of real-valued continuous functions defined on the real line R. For two functions f(x) and g(x), and a real number k, the sum f + g and the scalar multiplication kf of them are defined by

    (f + g)(x) = f(x) + g(x),   (kf)(x) = k f(x).

Then one can easily verify, as an exercise, that the set C(R) is a vector space under these operations.
The zero vector in this space is the constant function whose value at each point is zero.

(3) Let A be an m x n matrix. Then it is easy to show that the set of solutions of the homogeneous system Ax = 0 is a vector space (under the sum and scalar multiplication of matrices).

Theorem 3.2 Let V be a vector space and let x, y be vectors in V. Then
(1) x + y = y implies x = 0,
(2) 0x = 0,
(3) k0 = 0 for any k in R,
(4) -x is unique and -x = (-1)x,
(5) if kx = 0, then k = 0 or x = 0.

Proof: (1) By adding -y to both sides of x + y = y, we have

    x = x + y + (-y) = y + (-y) = 0.

(2) 0x = (0 + 0)x = 0x + 0x implies 0x = 0 by (1).
(3) This is an easy exercise.
(4) The uniqueness of the negative -x of x can be shown by a simple modification of Lemma 1.6. In fact, if x' is another negative of x such that x + x' = 0, then

    x' = x' + 0 = x' + (x + (-x)) = (x' + x) + (-x) = 0 + (-x) = -x.

On the other hand, the equation

    x + (-1)x = 1x + (-1)x = (1 - 1)x = 0x = 0

shows that (-1)x is also a negative of x, and hence -x = (-1)x by the uniqueness of -x.
(5) Suppose kx = 0 and k ≠ 0. Then

    x = 1x = ((1/k) k)x = (1/k)(kx) = (1/k)0 = 0. □

Problem 3.1 Let V be the set of all pairs (x, y) of real numbers. Suppose that an addition and scalar multiplication of pairs are defined by

    (x, y) + (x', y') = (x + x', y + y'),   k(x, y) = (kx, y).

Is the set V a vector space under these operations? Justify your answer.

A subset W of a vector space V is called a subspace of V if W is itself a vector space under the addition and scalar multiplication defined in V. Usually, in order to show that a subset W is a subspace, it is not necessary to verify all the rules of the definition of a vector space, because certain rules satisfied in the larger space V are automatically satisfied in every subset, provided that vector addition and scalar multiplication are closed in the subset.

Theorem 3.3 A nonempty subset W of a vector space V is a subspace if and only if x + y and kx are contained in W (or, equivalently, x + ky in W) for any vectors x and y in W and any scalar k.

Proof: We need only prove the sufficiency.
Assume both conditions hold, and let x be any vector in W. Since W is closed under scalar multiplication, 0 = 0x and -x = (-1)x are in W, so rules (3) and (4) for a vector space hold in W. All the other rules for a vector space are clear, since they hold in V. □

A vector space V itself and the zero space {0} are trivially subspaces. Some nontrivial subspaces are given in the following examples.

Example 3.2 Let W = { (x, y, z) in R^3 : ax + by + cz = 0 }, where a, b, c are constants. If x = (x1, x2, x3) and y = (y1, y2, y3) are points in W, then clearly x + y = (x1 + y1, x2 + y2, x3 + y3) is also a point in W, because it satisfies the equation defining W. Similarly, kx also lies in W for any scalar k. Hence, W is a subspace of R^3, and it is a plane passing through the origin in R^3.

Example 3.3 Let A be an m x n matrix. Then, as we have seen in Example 3.1(3), the set

    W = { x in R^n : Ax = 0 }

of solutions of the homogeneous system Ax = 0 is a vector space. Moreover, since the operations in W and in R^n coincide, W is a subspace of R^n.

Example 3.4 For a nonnegative integer n, let P_n(R) denote the set of all real polynomials in x with degree less than or equal to n. With the usual sum and scalar multiplication of polynomials, P_n(R) is a vector space, and it is a subspace of the space of all real polynomials.

Problem 3.2 Determine which of the following subsets W are subspaces of the spaces indicated:
(1) W is the set of all continuous functions f on R with f(x) >= 0 for all x in R, in C(R);
(2) W is the set of all continuous odd functions on R, i.e., f(-x) = -f(x) for all x in R, in C(R);
(3) W is the set of all polynomials with integer coefficients, in the space of all polynomials.

3.2 Bases

Recall that any vector in the 3-space R^3 is of the form (x1, x2, x3), which can also be written as

    (x1, x2, x3) = x1(1, 0, 0) + x2(0, 1, 0) + x3(0, 0, 1).

That is, any vector in R^3 can be expressed as a sum of scalar multiples of e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1), which are also denoted by i, j and k, respectively. The following definition gives a name to such expressions.

Definition 3.2 Let V be a vector space, and let x1, x2, ..., xm be a set of vectors in V. Then a vector y in V of the form

    y = c1 x1 + c2 x2 + ... + cm xm,

where c1, ..., cm are scalars, is called a linear combination of the vectors x1, x2, ..., xm.

The next theorem shows that the set of all linear combinations of a finite set of vectors in a vector space forms a subspace.
Theorem 3.4 Let x1, x2, ..., xm be vectors in a vector space V. Then the set

    W = { c1 x1 + c2 x2 + ... + cm xm : ci in R }

of all linear combinations of x1, x2, ..., xm is a subspace of V, called the subspace of V spanned by x1, x2, ..., xm.

Proof: We want to show that W is closed under addition and scalar multiplication. Let u and w be any two vectors in W. Then

    u = a1 x1 + a2 x2 + ... + am xm,   w = b1 x1 + b2 x2 + ... + bm xm

for some scalars ai and bi. Therefore,

    u + w = (a1 + b1) x1 + (a2 + b2) x2 + ... + (am + bm) xm,

and, for any scalar k,

    ku = (k a1) x1 + (k a2) x2 + ... + (k am) xm.

Thus, u + w and ku are linear combinations of x1, ..., xm, and are consequently contained in W. Therefore, W is a subspace of V. □

Suppose that {x1, x2, ..., xm} is a set of m vectors in a vector space V. If any vector in V can be written as a linear combination of these vectors, we say that it is a spanning set of V.

Example 3.6 (1) For a nonzero vector v in a vector space V, a linear combination of v is simply a scalar multiple of v. Thus the subspace W of V spanned by v is W = { kv : k in R }.

(2) Consider three vectors v1 = (1, 1, 1), v2 = (1, -1, 1) and v3 = (1, 0, 1) in R^3. The subspace W1 spanned by v1 and v2 is written as

    W1 = { a1 v1 + a2 v2 = (a1 + a2, a1 - a2, a1 + a2) : ai in R },

and the subspace W2 spanned by v1, v2 and v3 is written as

    W2 = { a1 v1 + a2 v2 + a3 v3 = (a1 + a2 + a3, a1 - a2, a1 + a2 + a3) : ai in R }.

Then a1 v1 + a2 v2 = a1 v1 + a2 v2 + 0 v3 implies W1 ⊆ W2. On the other hand, since v3 = (1/2)(v1 + v2), any linear combination of v1, v2 and v3 can be rewritten as a linear combination of v1 and v2 alone. This means that W2 ⊆ W1, and thus W1 = W2, which is a plane in R^3 containing the vectors v1, v2 and v3. In general, a subspace in a vector space can have many different spanning sets. □

Example 3.7 Let

    e1 = (1, 0, 0, ..., 0),  e2 = (0, 1, 0, ..., 0),  e3 = (0, 0, 1, 0, ..., 0)

be the first three standard vectors in the n-space R^n (n >= 3). Then a linear combination of e1, e2, e3 is of the form a1 e1 + a2 e2 + a3 e3. Hence, the set

    W = { (a1, a2, a3, 0, ..., 0) in R^n : ai in R }

is the subspace of the n-space R^n spanned by the vectors e1, e2, e3. Note that the subspace W can be identified with the 3-space R^3 through the identification (a1, a2, a3) <-> (a1, a2, a3, 0, ..., 0), with ai in R. In general, for m < n, the m-space R^m can be identified in this way with the subspace { (a1, ..., am, 0, ..., 0) : ai in R } of the n-space R^n. □
Tn genera, for m " enKon ‘snd see whether any vector In V ean be wrttn in tis form in exsely one ‘way. This problem cxn be replated as to whether or not nontivil linear Combinetion produces the x0 vector, while the trivial combination, with al scalars = 0, obviously produces the zero vector. Definition 8.8 A st of vectors (xa) #2, «5 Xm} In a vector space V ‘uid to be linearly Independent ifthe vectar equation, cllod the linear ependence of, ep ena tb emen 0 ‘hs only the trivial elution o, = op = ++ 6g = 0. Others, it sad to be linearly dependent. ‘Therefore, a sot of wectors (x3, +» Xm) 8 linearly dependent ifand, ‘only if there ie linear dependence em tam te beata 0 ‘with a nontrivial solution (ey, m5 Gq). Tn this ease, we may assume ‘that cm 0. Then the equation can be rowriten as ‘Thetis, 0 set of vectors is Unearly dependent if end only if at last one of the iectors im the eetcan le writen as elincar combination ofthe others. Example 8.9 Let x = (1,23) and y = (8,2,1) be two vetors In the 3 space B®. ‘Then clearly yx for any A € B (or ax+ by = 0 only when ‘), This means that (x,y) Is Uinariy independent in 3 If w = (8.6), then (x) 58 lnearly dependent sins w'~ Sx = 0. Tn 32. BASES & general, fy re noncolinear vectors in the 3space B®, the st ofl linear ‘combinations of x andy determines a plane W through the origin in ie, W = (ox by :0,b€R}. Lot 2 bo another nonzero vector in the space B®. Ifz.€ W, then there aro some numbers 0, b R, uot all of them are taro, such thai 2= ox +by, that i, the set (x, 2} is ieely dependent, Ia ¢ W, then ox-+ by +ca= 0 is posbleoniy when om b= c= 0 (pro%e 1). Therefore, these (x,y, 2} i inoenly independent if and only if 2 does ot le in W. 
By abuse of language, it is sometimes convenient to say that "the vectors x1, x2, ..., xm are linearly independent," although this is really a property of the set {x1, ..., xm}.

Example 3.10 The columns of the matrix

    A = | 1  2  3  0 |
        | 4  2  6  8 |
        | 2 -1  1  3 |

are linearly dependent in the 3-space R^3, since the third column is the sum of the first and the second. As this example shows, the concept of linear dependence can be applied to the row or column vectors of any matrix.

Example 3.11 Consider the upper triangular matrix

    A = | 2 3 5 |
        | 0 1 6 |
        | 0 0 4 |.

A linear dependence of the column vectors of A may be written as

    c1 | 2 |  +  c2 | 3 |  +  c3 | 5 |  =  | 0 |
       | 0 |        | 1 |        | 6 |     | 0 |
       | 0 |        | 0 |        | 4 |     | 0 |,

which, in matrix notation, may be written as a homogeneous system:

    | 2 3 5 | | c1 |     | 0 |
    | 0 1 6 | | c2 |  =  | 0 |.
    | 0 0 4 | | c3 |     | 0 |

From the third row, c3 = 0; from the second row, c2 = 0; and substitution of these into the first row forces c1 = 0. The homogeneous system has only the trivial solution, so the column vectors are linearly independent. □

The following theorem can be proven by the same argument.

Theorem 3.5 The nonzero rows of a matrix in row-echelon form are linearly independent, and so are the columns that contain leading 1's.

In particular, the rows of any triangular matrix with nonzero diagonals are linearly independent, and so are the columns.

In general, if V = R^m and v1, v2, ..., vn are vectors in R^m, then they form an m x n matrix A = [v1 v2 ... vn]. On the other hand, Example 3.8 shows that the linear dependence c1 v1 + c2 v2 + ... + cn vn = 0 of the vi's is nothing but the homogeneous equation Ax = 0, where x = (c1, c2, ..., cn). Thus, the column vectors vi of A are linearly independent in R^m if and only if the homogeneous system Ax = 0 has only the trivial solution, and they are linearly dependent if and only if Ax = 0 has a nontrivial solution. If U is the reduced row-echelon form of A, then we know that Ax = 0 and Ux = 0 have the same set of solutions. Moreover, a homogeneous system Ax = 0 with more unknowns than equations always has a nontrivial solution (see the remark on page 11).
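The criterion just stated (the columns are independent if and only if Ax = 0 has only the trivial solution) gives a mechanical test. Here is a sketch of it (ours, not the book's), using Gaussian elimination over exact rationals: the vectors are independent exactly when the rank equals their number.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix, by Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def independent(vectors):
    """Vectors are linearly independent iff Ax = 0 has only x = 0,
    i.e. iff the rank equals the number of vectors."""
    return rank([list(v) for v in vectors]) == len(vectors)

print(independent([(1, 2, 3), (3, 2, 1)]))             # True
print(independent([(1, 2, 3), (3, 2, 1), (4, 4, 4)]))  # False
```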
This proves the following lemma.

Lemma 3.6 (1) Any set of n vectors in the m-space R^m is linearly dependent if n > m.
(2) If U is the reduced row-echelon form of A, then the columns of U are linearly independent if and only if the columns of A are linearly independent.

Example 3.12 Consider the vectors e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1) in the 3-space R^3. The vector equation c1 e1 + c2 e2 + c3 e3 = 0 becomes

    c1(1, 0, 0) + c2(0, 1, 0) + c3(0, 0, 1) = (0, 0, 0)

or, equivalently, (c1, c2, c3) = (0, 0, 0). Thus c1 = c2 = c3 = 0, so the set of vectors {e1, e2, e3} is linearly independent, and it also spans R^3. □

Example 3.13 In general, it is quite clear that the vectors e1, e2, ..., en in R^n are linearly independent. Moreover, they span the n-space R^n. In fact, when we write a vector x in R^n as (x1, ..., xn), it means just the linear combination

    x = x1 e1 + x2 e2 + ... + xn en

of the vectors ei. However, if any one of the ei is missing, then the remaining ones cannot span R^n. Thus, this kind of set plays a special role in the vector space. □

Definition 3.4 Let V be a vector space. A basis for V is a set of linearly independent vectors that spans V.

For example, as we saw in Example 3.13, the set {e1, e2, ..., en} forms a basis, called the standard basis, for the n-space R^n. Of course, there are many other bases for R^n.

Example 3.14 (1) The set of vectors (1, 1, 0), (0, -1, 1) and (1, 0, 1) is not a basis for the 3-space R^3, since this set is linearly dependent (the third is the sum of the first two), and cannot span R^3. (The vector (0, 0, 1) cannot be obtained as a linear combination of them; prove it!) This set does not have enough vectors to span R^3.

(2) The set of vectors (1, 0, 0), (0, 1, 1), (1, 0, 1) and (0, 1, 0) is not a basis either, since these vectors are not linearly independent (the sum of the first two minus the third makes the fourth), even though they span R^3. This set has some redundant vectors spanning R^3.

(3) The set of vectors (1, 1, 1), (0, 1, 1) and (0, 0, 1) is linearly independent and also spans R^3. That is, it is a basis for R^3 different from the standard basis.
This set has the proper number of vectors spanning R^3, since it cannot be reduced to a smaller spanning set, nor does it need any additional vector to span R^3. □

By definition, in order to show that a set of vectors in a vector space is a basis, one needs to show two things: it is linearly independent, and it spans the whole space. The following theorem shows that a basis for a vector space represents a coordinate system, just like the rectangular coordinate system given by the standard basis for R^n.

Theorem 3.7 Let β = {v1, v2, ..., vn} be a basis for a vector space V. Then each vector x in V can be uniquely expressed as a linear combination of v1, v2, ..., vn: i.e., there are unique scalars ci, i = 1, ..., n, such that

    x = c1 v1 + c2 v2 + ... + cn vn.

In this case, the column vector [c1 c2 ... cn]^T is called the coordinate vector of x with respect to the basis β, and is denoted by [x]_β.

Proof: If x can also be expressed as x = b1 v1 + b2 v2 + ... + bn vn, then we have

    0 = (c1 - b1) v1 + (c2 - b2) v2 + ... + (cn - bn) vn.

By the linear independence of β, ci = bi for i = 1, ..., n. □

Example 3.15 Let α = {e1, e2, e3} be the standard basis for R^3, and let β = {v1, v2, v3} with

    v1 = (1, 1, 1) = e1 + e2 + e3,   v2 = (0, 1, 1) = e2 + e3,   v3 = (0, 0, 1) = e3.

Then β is also a basis for R^3, and

    [v1]_α = [1 1 1]^T,   while   [v1]_β = [1 0 0]^T.

For x = (1, 2, 3), we have x = e1 + 2e2 + 3e3 = v1 + v2 + v3, so that

    [x]_α = [1 2 3]^T   and   [x]_β = [1 1 1]^T.

Problem 3.6 Show that the vectors v1 = (1, 2, 1), v2 = (2, 9, 0) and v3 = (3, 3, 4) in the 3-space R^3 form a basis.

Problem 3.7 Show that the set {1, x, x^2, ..., x^n} is a basis for P_n(R), the vector space of all polynomials of degree less than or equal to n.

Problem 3.8 In the 3-space R^3, determine whether or not the set {e1 + e2, e2 + e3, e3 + e1} is linearly dependent.

Problem 3.9 Let xk denote the vector in R^n whose first k - 1 coordinates are zero and whose last n - k + 1 coordinates are 1. Show that the set {x1, x2, ..., xn} is a basis for R^n.
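Finding the coordinate vector [x]_β of Theorem 3.7 amounts to solving a linear system. The following sketch (our illustration, with exact arithmetic via fractions.Fraction) applies Gauss-Jordan elimination to the augmented matrix [v1 ... vn | x].

```python
from fractions import Fraction

def coordinates(basis, x):
    """Coordinate vector [x]_beta: solve c_1 v_1 + ... + c_n v_n = x
    by Gauss-Jordan elimination on the augmented matrix [v_1 ... v_n | x]."""
    n = len(basis)
    # Row i of the augmented matrix holds the i-th coordinate of each basis vector.
    M = [[Fraction(basis[j][i]) for j in range(n)] + [Fraction(x[i])]
         for i in range(n)]
    for c in range(n):
        pivot = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[pivot] = M[pivot], M[c]
        M[c] = [a / M[c][c] for a in M[c]]
        for i in range(n):
            if i != c:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [M[i][n] for i in range(n)]

beta = [(1, 1, 1), (0, 1, 1), (0, 0, 1)]
print([int(c) for c in coordinates(beta, (1, 2, 3))])  # [1, 1, 1]
```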
Noto that for a vector space i general there is no unique way to choose « bass. Honore, there is some ‘hing common to all bass, and this ls velstd to tho notion of dimension. ‘Weft uoed the following important lemma from which eno ean define the Ainetsion of a vector spac. Lemma 8.8 Let V be a vector space and let «= (81, 25 «5 Xn} be seb of r-vectors in V. 33, DIMENSIONS 80 (2) He spans V, then every sot of ectore with more Un’ vectors cannot 1 lnearly independent (2) Tete lnsarly independent, thon any sat of ectors with fewer than ‘ert cannot span V Proof Since (2) follows from (1) dretly/ we prove only (1). Let 8 (a, Yn vss Yn) beat of vectors in V with n > me. We wil show that Bis linecly dependent. Inde, since each vector yi inexe combination ofthe vectors in the spanning st 4, for j= Dy oom yy ait tana +b anin = S05, webave ayvtamteteys = elon tone ++ Onn) + alone Henpea Hanan) eeanx Hengen +++ On) = (ener tone teste) + (nen once + ame) 4 (@piea + angen ++ on ‘Thos, Uinerly dependent if and only if the system of linear equations ay tar tt eyn=0 Jnas a nonteivial solution (een). ea) (0,0,40)- ‘This trv ial the coefclent of ' are 2ro but not al of es are zero This means that ‘the homogeneous eystam of lineer equations in teealth hs & nontrivial solution, ‘This fe guarentend by Lemma 86, slce A i an mn mate with m < a 0 (CHAPTER 3. VECTOR SPACES tis lar by Lemma 3.8 that if set 0 = {xy X05 5X) fm vectors {is a buss fora vector space V, then no other set 8 = (¥2, Yn s-1 Ye) of ‘ects can be a basis for V ifr fn. This means that all bases fora vector space V have the same numberof vectors, even if there are many diferent base for a vetor space. Therefore, we obtain te following mmpertant real: ‘Theorem 3.0 If «lass fora vector epee V eonsate of vectors, then so doce every other Bass Definition 8.5 The dimension of «vector space V Is the numb, say n, of vetor in a bass for V, denoted by dim =. 
When V has a basis of a finite number of vectors, V is said to be finite dimensional.

Example 3.16 The following can be easily verified:
(1) If V has only the zero vector, V = {0}, then dim V = 0.
(2) If V = R^n, then dim R^n = n, since V has the standard basis {e1, ..., en}.
(3) If V = P_n(R), the space of all polynomials of degree less than or equal to n, then dim P_n(R) = n + 1, since {1, x, x^2, ..., x^n} is a basis for V.
(4) If V = M_mxn(R), the space of all m x n matrices, then dim M_mxn(R) = mn, since {E_ij : i = 1, ..., m; j = 1, ..., n} is a basis for V, where E_ij is the m x n matrix whose (i, j)-th entry is 1 and all of whose other entries are zero. □

If V = C(R), the space of all real-valued continuous functions defined on the real line, then one can show that V is not finite dimensional. A vector space V is infinite dimensional if it is not finite dimensional. In this book, we are concerned only with finite dimensional vector spaces unless otherwise stated.

Theorem 3.10 Let V be a finite dimensional vector space.
(1) Any linearly independent set in V can be extended to a basis by adding more vectors if necessary.
(2) Any set of vectors that spans V can be reduced to a basis by discarding vectors if necessary.

Proof: We prove (1) only and leave (2) as an exercise. Let α = {x1, ..., xk} be a linearly independent set in V. If α spans V, then α is a basis. If it does not span V, then there exists a vector x_{k+1} in V that is not contained in the subspace spanned by the vectors in α. Now the set {x1, ..., xk, x_{k+1}} is linearly independent (check why!).
Corollary 811 Let V bea wector apace of dimension m, Then (A) ony set of vectors thet apone V is bass for Vy and (2) ony set of ineary independent vectors isa basis for V. Proof: Agsin we prove (1) only. Ifa spenning set of n vectors were not lineacly independeat, thea the set would be duced to’ bass that has = smaller number of votor than n vectors. a Corley 8.11 menos that if ti known that dim V =n and fast of» vectors ethers linearly independent or spans V, then it Is already © basis forthe space V. Bxample 3.17 Lat W be tho subspace of Rt spanned by the vectors M12 (1-2 8, -9), = 0,1, 1,4), =A, 1, 0) Find » basis for W and extend it to a basis for Rt Solution: Note that im W < 3 sinco W is spanned by three vectors x's Let d be the 3% 4 matrix whoge rows atx), 7 ad 125-9 oui 1 010 125-9 yeloria 5 o oi § 2 CHAPTER 3. VECTOR SPACES ‘The three nonzero row wetors of U are clearly nearly independent, and ‘thay aio span W beste the vectors %:, 2 and xy can be expressed as ‘linear cotoblnstion of thee thre nonzero row vers of U. Hence, U provides a bass for W7. (Note that thie impor dim W = 8 and hence, Sey Xa alo a basis for W by Corollary 3.11, The linear independence of sade by-product of thle fet). "To exiand this bass, just add any nonzero vector of the form x¢ = (0, 0,0, #) tothe rome of U to gt m bass fr the space Rt Fy Prolem 810 Let W be a autepace of vostr apace V. Show tet if ia W” = ‘din¥, hen WV. Proton 8.11 Fin s ba and the dimension of ach of th owing spaces of Mees) of lnm tees (Gh apace fall n> agonal matrices whee trace ar 2; (2) ee epace fall nn symeteie matrices, (9) eh pce ofall n> weymmeti tin [Now consider two subepeces U and W' ofa vector space V. The sum of ‘these rubepnces U and Wis defined by UsWe usw weu, weW), ‘en not ad too that this ea subepace of V- rote 848 Let U and W be subepacs of econ space V (0) Show tht U4 1 isthe smallest eibepace of V containing U and W. (2) Prove that UW lls a sutepae of V. 
(3) Is U ∪ W a subspace of V? Justify your answer.

Definition 3.6 A vector space V is called the direct sum of two subspaces U and W, written V = U ⊕ W, if V = U + W and U ∩ W = {0}.

For example, one can easily show that R^3 = R^2 ⊕ R^1 = R^1 ⊕ R^1 ⊕ R^1.

Theorem 3.12 A vector space V is the direct sum of subspaces U and W, i.e., V = U ⊕ W, if and only if for any v ∈ V there exist unique u ∈ U and w ∈ W such that v = u + w.

Proof: Suppose that V = U ⊕ W. Then, for any v ∈ V, there exist vectors u ∈ U and w ∈ W such that v = u + w, since V = U + W. To show the uniqueness, suppose that v is also expressed as a sum u' + w' for u' ∈ U and w' ∈ W. Then u + w = u' + w' implies u - u' = w' - w ∈ U ∩ W = {0}. Hence, u = u' and w = w'. Conversely, if there exists a nonzero vector v in U ∩ W, then v can be written as a sum of vectors in U and W in many different ways:
v = v + 0 = 0 + v = (1/2)v + (1/2)v ∈ U + W. □

Example 3.18 Consider the three unit vectors e1, e2 and e3 in R^3. Let U = {a1 e1 + a2 e2 : a1, a2 ∈ R} be the subspace spanned by e1 and e2 (the xy-plane), and let W = {b2 e2 + b3 e3 : b2, b3 ∈ R} be the subspace of R^3 spanned by e2 and e3 (the yz-plane). Then a vector in U + W is of the form
(a1 e1 + a2 e2) + (b2 e2 + b3 e3) = a1 e1 + c2 e2 + b3 e3,
where c2 = a2 + b2 and a1, c2, b3 are arbitrary numbers. Thus U + W = R^3. However, R^3 ≠ U ⊕ W, since clearly e2 ∈ U ∩ W ≠ {0}. In fact, a vector such as (1, 1, 1) ∈ R^3 can be written as many different sums of vectors in U and W:
(1, 1, 1) = (e1 + e2) + e3 = e1 + (e2 + e3) = (e1 + (1/2)e2) + ((1/2)e2 + e3).
Note that if we had taken W to be the subspace spanned by e3 alone, then it would be easy to see that R^3 = U ⊕ W. Note also that there are many choices for such a W. □

As a direct consequence of Theorem 3.10 and the definition of the direct sum, one can show the following.

Corollary 3.13 If U is a subspace of V, then there is a subspace W in V such that V = U ⊕ W.

Proof: Choose a basis {u1, ..., uk} for U, and extend it to a basis {u1, ..., uk, v_{k+1}, ..., v_n} for V. Then the subspace W spanned by {v_{k+1}, ..., v_n} satisfies the requirement. □
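As a numerical illustration of a direct sum, every square matrix A splits as a sum of a symmetric matrix and a skew-symmetric matrix, and by Theorem 3.12 the splitting is unique when the two subspaces form a direct sum. A short numpy check (the matrix entries below are arbitrary choices of ours):

```python
import numpy as np

# A = S + K with S symmetric and K skew-symmetric:
# S = (A + A^T)/2 and K = (A - A^T)/2 give the two components.
A = np.array([[1., 2., 0.],
              [4., 3., -1.],
              [5., 0., 2.]])
S = (A + A.T) / 2
K = (A - A.T) / 2
assert np.allclose(S, S.T)     # S lies in the subspace of symmetric matrices
assert np.allclose(K, -K.T)    # K lies in the subspace of skew-symmetric ones
assert np.allclose(S + K, A)   # the sum recovers A
```

The only matrix that is both symmetric and skew-symmetric is the zero matrix, so the intersection of the two subspaces is {0} and the decomposition above is the unique one guaranteed by Theorem 3.12.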
Problem 3.13 Let U and W be the subspaces of the vector space M_nxn(R) consisting of all symmetric matrices and all skew-symmetric matrices, respectively. Show that M_nxn(R) = U ⊕ W. Is the decomposition of a square matrix given in Problem 1.10 unique?

Problem 3.14 Let {v1, v2, ..., vn} be a basis for a vector space V, and let W_i = {r v_i : r ∈ R} be the subspace of V spanned by v_i. Show that V = W1 ⊕ W2 ⊕ ··· ⊕ Wn.

3.4 Row and column spaces

In this section, we go back to systems of linear equations and study them in terms of the concepts introduced in the previous sections. Note that an m x n matrix A can be abbreviated by its row vectors or its column vectors as follows:
    A = [r1; r2; ...; rm] = [c1 c2 ··· cn],
where the ri's are the row vectors of A, which are in R^n, and the cj's are the column vectors of A, which are in R^m.

Definition 3.7 Let A be an m x n matrix with row vectors {r1, ..., rm} and column vectors {c1, ..., cn}.
(1) The row space of A is the subspace in R^n spanned by the row vectors {r1, ..., rm}, denoted by R(A).
(2) The column space of A is the subspace in R^m spanned by the column vectors {c1, ..., cn}, denoted by C(A).
(3) The solution set of the homogeneous equation Ax = 0 is called the null space of A, denoted by N(A).

Note that the null space N(A) is a subspace of the n-space R^n, whose dimension is called the nullity of A. Since the row vectors of A are just the column vectors of its transpose A^T, and the column vectors of A are the row vectors of A^T, the row space of A is the column space of A^T, and vice versa; that is,
    R(A) = C(A^T) and C(A) = R(A^T).
Since Ax = x1 c1 + x2 c2 + ··· + xn cn for any vector x = (x1, x2, ..., xn) ∈ R^n, we get
    C(A) = {Ax : x ∈ R^n}.
Thus, for a vector b ∈ R^m, the system Ax = b has a solution if and only if b ∈ C(A) ⊆ R^m. In other words, the column space C(A) is the set of vectors b ∈ R^m for which Ax = b has a solution.

It is quite natural to ask what the dimensions of these subspaces are, and how one can find bases for them. This will help us to understand the structure of all the solutions of the equation Ax = b.
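The criterion "Ax = b is solvable if and only if b ∈ C(A)" can be tested numerically: b lies in the column space exactly when appending it as an extra column does not increase the rank. A small numpy sketch (the helper name in_column_space is ours):

```python
import numpy as np

def in_column_space(A, b):
    """b lies in C(A) exactly when Ax = b is consistent, i.e. when
    adjoining b as a column does not raise the rank of A."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)

A = [[1, 0], [0, 1], [1, 1]]
print(in_column_space(A, [2, 3, 5]))   # True:  (2, 3, 5) = 2c1 + 3c2
print(in_column_space(A, [2, 3, 6]))   # False: not a combination of the columns
```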
Since the set of the row vectors and the set of the column vectors of A are spanning sets for the row space and the column space, respectively, a minimally spanning subset of each of them will be a basis for that space. This is not difficult to find for a matrix in (reduced) row-echelon form.

Example 3.19 Let U be in reduced row-echelon form, given as
    U = [ 1  0  0   2   2
          0  1  0  -1   3
          0  0  1  -4  -4
          0  0  0   0   0 ].
Clearly, the three nonzero row vectors containing the leading 1's are linearly independent, and they form a basis for the row space R(U), so that dim R(U) = 3. On the other hand, note that the first three columns, containing the leading 1's, are linearly independent (see Theorem 3.5), and that the last two column vectors can be expressed as linear combinations of them. Hence, the first three columns form a basis for C(U), and dim C(U) = 3. To find a basis for the null space N(U), we first solve the system Ux = 0 by assigning arbitrary values s and t to the free variables x4 and x5, and get the solution
    x = (x1, x2, x3, x4, x5) = s(-2, 1, 4, 1, 0) + t(-2, -3, 4, 0, 1) = s n1 + t n2,
where n1 = (-2, 1, 4, 1, 0) and n2 = (-2, -3, 4, 0, 1). This shows that the two vectors n1 and n2 span the null space N(U), and they are clearly linearly independent. Hence, the set {n1, n2} is a basis for the null space N(U). □

In the following, the row, the column and the null spaces of a matrix A will be discussed in relation to the corresponding spaces of its (reduced) row-echelon form. We first investigate the row space R(A) and the null space N(A) of A by comparing them with those of the reduced row-echelon form U of A. Since Ax = 0 and Ux = 0 have the same solution set by Theorem 1.1, we have N(A) = N(U).

Let A = [r1; ...; rm] be an m x n matrix, where the ri's are the row vectors of A. The three elementary row operations change A into three types of matrices: A1, obtained by multiplying a row of A by a nonzero scalar k; A2, obtained by interchanging two rows of A; and A3, obtained by adding a scalar multiple of one row of A to another. It is clear that the row vectors of the three matrices A1, A2 and A3 are linear combinations of the row vectors of A.
On the other hand, by the inverse elementary row operations, these matrices can be changed back into A. Thus, the row vectors of A can also be written as linear combinations of those of the Ai's. This means that if matrices A and B are row equivalent, then their row spaces must be equal, i.e., R(A) = R(B). Now, the nonzero row vectors in the reduced row-echelon form U are always linearly independent and span the row space of U (see Theorem 3.5). Thus they form a basis for the row space R(A) of A. We have the following theorem.

Theorem 3.14 Let U be a (reduced) row-echelon form of a matrix A. Then
    R(A) = R(U) and N(A) = N(U).
Moreover, if U has r nonzero row vectors containing leading 1's, then they form a basis for the row space R(A), so that the dimension of R(A) is r.

The following example shows how to find bases for the row and the null spaces, and at the same time how to find a basis for the column space C(A).

Example 3.20 Let A be the 4 x 5 matrix given as
    A = [  1  2  0  2  5        [ r1
          -2 -5  1 -1 -8    =     r2
           0 -3  3  4  1          r3
           3  6  0 -7  2 ]        r4 ].
Find bases for the row space R(A), the null space N(A), and the column space C(A) of A.

Solution: (1) Find a basis for R(A): By Gauss-Jordan elimination on A, we get the reduced row-echelon form
    U = [ 1  0  2  0  1
          0  1 -1  0  1
          0  0  0  1  1
          0  0  0  0  0 ].
Since the three nonzero row vectors
    (1, 0, 2, 0, 1), (0, 1, -1, 0, 1), (0, 0, 0, 1, 1)
of U are linearly independent, they form a basis for the row space R(U) = R(A), so dim R(A) = 3. (Note that in the process of Gaussian elimination, we did not use a permutation matrix. This means that the three nonzero rows of U were obtained from the first three row vectors r1, r2, r3 of A, and the fourth row r4 of A turned out to be a linear combination of them. Thus the first three row vectors of A also form a basis for the row space.)

(2) Find a basis for N(A): It is enough to solve the homogeneous system Ux = 0, since N(A) = N(U). That is, neglecting the fourth zero equation, the equation Ux = 0 gives the following system of equations:
    x1 + 2x3 + x5 = 0,
    x2 - x3 + x5 = 0,
    x4 + x5 = 0.
Since the first, the second and the fourth columns of U contain the leading 1's, we see that the basic variables are x1, x2, x4, and the free variables are x3, x5. By assigning arbitrary values s and t to the free variables x3 and x5, we find the solution x of Ux = 0 as
    x = (x1, x2, x3, x4, x5) = s(-2, 1, 1, 0, 0) + t(-1, -1, 0, -1, 1) = s n1 + t n2,
where n1 = (-2, 1, 1, 0, 0) and n2 = (-1, -1, 0, -1, 1). In fact, the two vectors n1 and n2 are the solutions when the values of (x3, x5) = (s, t) are (1, 0) and (0, 1), respectively. They must be linearly independent, since (1, 0) and (0, 1), as the (x3, x5)-coordinates of n1 and n2 respectively, are clearly linearly independent. Since any solution of Ux = 0 is a linear combination of them, the set {n1, n2} is a basis for the null space N(U) = N(A). Thus dim N(A) = 2 = the number of free variables in Ux = 0.

(3) Find a basis for C(A): Let {c1, c2, c3, c4, c5} denote the column vectors of A in the given order. Since these column vectors of A span C(A), we only need to discard those columns that can be expressed as linear combinations of the other column vectors. But the linear dependence
    x1 c1 + x2 c2 + x3 c3 + x4 c4 + x5 c5 = 0, i.e., Ax = 0,
holds if and only if x = (x1, ..., x5) ∈ N(A). By taking x = n1 = (-2, 1, 1, 0, 0) or x = n2 = (-1, -1, 0, -1, 1), the basis vectors of N(A) given in (2), we obtain two nontrivial linear dependencies:
    -2c1 + c2 + c3 = 0 and -c1 - c2 - c4 + c5 = 0,
respectively. Hence the column vectors c3 and c5, corresponding to the free variables in Ax = 0, can be written as
    c3 = 2c1 - c2, c5 = c1 + c2 + c4.
That is, the column vectors c3, c5 of A are linear combinations of the column vectors c1, c2, c4, which correspond to the basic variables in Ax = 0. Hence, {c1, c2, c4} spans the column space C(A).
We claim that {c1, c2, c4} is linearly independent. Let A' = [c1 c2 c4] and U' = [u1 u2 u4] be the submatrices of A and U, respectively, where uj is the j-th column vector of the reduced row-echelon form U of A obtained in (1). Then clearly U' is the reduced row-echelon form of A', so that N(A') = N(U'). Since the vectors u1, u2, u4 are just the columns of U containing leading 1's, they are linearly independent by Theorem 3.5, and U'x = 0 has only the trivial solution. This means that A'x = 0 also has only the trivial solution, so {c1, c2, c4} is linearly independent. Therefore, it is a basis for the column space C(A), and dim C(A) = 3 = the number of basic variables. That is, the column vectors of A corresponding to the basic variables in Ux = 0 form a basis for the column space C(A). □

In summary, given a matrix A, we first find the (reduced) row-echelon form U of A by Gauss-Jordan elimination. Then a basis for R(A) = R(U) is the set of nonzero row vectors of U, and a basis for N(A) = N(U) can be found by solving Ux = 0, which is easy. On the other hand, one has to be careful, for C(U) ≠ C(A) in general, since the column space of A is not preserved by Gauss-Jordan elimination. However, we have dim C(A) = dim C(U), and a basis for C(A) can be selected from the column vectors of A, not of U, as those corresponding to the basic variables (or the leading 1's in U). To show that these column vectors indeed form a basis for C(A), we used a basis for the null space N(A) to eliminate the redundant columns. Note that a basis for the column space C(A) can also be found with the elementary column operations, which is the same as finding a basis for the row space R(A^T) of A^T.

Problem 3.15 Let A be the matrix given in Example 3.20. Find a relation on b1, b2, b3, b4 so that the vector b = (b1, b2, b3, b4) belongs to C(A).

Problem 3.16 Find bases for R(A) and N(A) of the matrix
    A = [ 1 -2  0  0  3
          2 -5 -3 -2  6
          0  5 15 10  0
          2  6 18  8  6 ].
Also find a basis for C(A) by finding a basis for R(A^T).

Problem 3.17 Let A and B be two n x n matrices. Show that AB = 0 if and only if the column space of B is a subspace of the null space of A.

Problem 3.18 Find an example of a matrix A and its row-echelon form U such that C(A) ≠ C(U).

3.5 Rank and nullity

The argument in Example 3.20 is so general that it can be used to prove the following theorem, which is one of the most fundamental results in linear algebra.
The proof given here is just a repetition of the argument in Example 3.20 in general form, and so may be skipped at the reader's discretion.

Theorem 3.15 (The first fundamental theorem) For any m x n matrix A, the row space and the column space of A have the same dimension; that is,
    dim R(A) = dim C(A).

Proof: Let dim R(A) = r and let U be the reduced row-echelon form of A. Then r is the number of nonzero row vectors of U containing leading 1's, which is equal to the number of basic variables in Ux = 0 or Ax = 0. We shall prove that the r columns of A corresponding to the r leading 1's (or basic variables) form a basis for C(A), so that dim C(A) = r = dim R(A).

(1) They are linearly independent: Let A' denote the submatrix of A whose columns are those of A corresponding to the r basic variables (or leading 1's) in U, and let U' denote the submatrix of U containing the r leading 1's. Then it is quite clear that U' is the reduced row-echelon form of A', so that A'x = 0 if and only if U'x = 0. However, U'x = 0 has only the trivial solution, since the columns of U containing the leading 1's are linearly independent by Theorem 3.5. Therefore, A'x = 0 also has only the trivial solution, so the columns of A' are linearly independent.

(2) They span C(A): Note that the columns of A corresponding to the free variables are not contained in A', and each of these column vectors of A can be written as a linear combination of the column vectors of A' (see Example 3.20). In fact, if {x_{j1}, ..., x_{jk}} is the set of free variables whose corresponding columns are not in A', then, for an assignment of the value 1 to x_{ji} and 0 to all the other free variables, one can always find a nontrivial solution of
    x1 c1 + x2 c2 + ··· + xn cn = 0.
When this solution is substituted into the equation, one can see that the column c_{ji} of A corresponding to x_{ji} = 1 is written as a linear combination of the columns of A'. This can be done for each i = 1, ..., k, so the columns of A corresponding to these free variables are redundant in the spanning set of C(A). □
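Theorem 3.15 is easy to check numerically. For the 4 x 5 matrix of Example 3.20, numpy (not part of the text, used here as an illustrative aid) reports the same rank for A and A^T; its matrix_rank routine computes dim C(A) via the singular value decomposition:

```python
import numpy as np

# The matrix of Example 3.20; rank A = dim R(A) = dim C(A) = rank A^T.
A = np.array([[1, 2, 0, 2, 5],
              [-2, -5, 1, -1, -8],
              [0, -3, 3, 4, 1],
              [3, 6, 0, -7, 2]], dtype=float)
r = np.linalg.matrix_rank(A)
print(r)                               # 3 = dim R(A)
print(np.linalg.matrix_rank(A.T))      # 3 = dim C(A): the ranks agree
```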
Remark: In the proof of Theorem 3.15, once we have shown that the columns in A' are linearly independent as in (1), one may replace step (2) by the following argument. One can easily see that dim C(A) ≥ dim R(A) by Theorem 3.10. On the other hand, since this inequality holds for an arbitrary matrix, in particular for A^T, we get dim C(A^T) ≥ dim R(A^T). Moreover, R(A^T) = C(A) and C(A^T) = R(A) imply dim C(A) ≤ dim R(A), which means dim C(A) = dim R(A). This also means that the column vectors of A' span C(A), and so form a basis.

In summary, the following equalities are now clear from Theorems 3.14 and 3.15:
    dim R(A) = dim R(U)
             = the number of nonzero row vectors of U
             = the maximal number of linearly independent row vectors of A
             = the number of basic variables in Ux = 0
             = the maximal number of linearly independent column vectors of A
             = dim C(A).

Definition 3.8 For an m x n matrix A, the rank of A is defined to be the dimension of its row space (or, equivalently, of its column space), denoted by rank A.

If the data contains more than n + 1 distinct points xi, then there need not be any interpolating polynomial of degree ≤ n, since the system could be inconsistent. In this case, the best we can do is to find a polynomial of degree ≤ n to which the data is closest. We will review this statement again in Section 5.8.

3.9 Application: The Wronskian

Let y1, y2, ..., yn be n vectors in an n-dimensional vector space V. To check the linear independence of these vectors, consider a linear dependence
    a1 y1 + a2 y2 + ··· + an yn = 0.
Let α = {x1, x2, ..., xn} be a basis for V. By expressing each yi as a linear combination of the basis vectors xj, the linear dependence of the yi's can be written as a linear combination of the basis vectors, so that all of the coefficients (which are themselves linear combinations of the ai's) must be zero. This gives a homogeneous system of linear equations in the ai's, say Ac = 0 with an n x n matrix A, as in the proof of Lemma 3.8.
Recall that the vectors yi are linearly independent if and only if the system Ac = 0 has only the trivial solution. Hence the linear independence of a set of vectors in a finite dimensional vector space can be tested by solving a homogeneous system of linear equations. But, if V is not finite dimensional, this test for the linear independence of a set of vectors cannot be applied.

In this section, we introduce a test for the linear independence of a set of functions. For our purpose, let V be the vector space of all functions on R which are differentiable infinitely many times. Then one can easily see that V is not finite dimensional. Let f1(x), f2(x), ..., fn(x) be functions in V. The n functions are linearly independent in V if the linear equation
    c1 f1(x) + c2 f2(x) + ··· + cn fn(x) = 0 for all x ∈ R
implies that all ci = 0. By differentiating n - 1 times, we obtain n equations:
    c1 f1^(j)(x) + c2 f2^(j)(x) + ··· + cn fn^(j)(x) = 0, 0 ≤ j ≤ n - 1,
for all x ∈ R. Or, in matrix form:
    [ f1(x)         f2(x)         ···  fn(x)         ] [ c1 ]   [ 0 ]
    [ f1'(x)        f2'(x)        ···  fn'(x)        ] [ c2 ] = [ 0 ]
    [ ···                                            ] [ ·· ]   [ · ]
    [ f1^(n-1)(x)   f2^(n-1)(x)   ···  fn^(n-1)(x)   ] [ cn ]   [ 0 ].
The determinant of the coefficient matrix is called the Wronskian for {f1(x), f2(x), ..., fn(x)} and denoted by W(x). Therefore, if there is a point x0 ∈ R such that W(x0) ≠ 0, then the coefficient matrix is nonsingular at x = x0, and so all ci = 0. That is, if the Wronskian is nonzero at some point in R, then {f1(x), f2(x), ..., fn(x)} is linearly independent.

Example 3.25 For the sets of functions F1 = {x, cos x, sin x} and F2 = {e^x, x e^x}, the Wronskians are
    W1(x) = det [ x   cos x   sin x
                  1  -sin x   cos x
                  0  -cos x  -sin x ] = x,
    W2(x) = det [ e^x      x e^x
                  e^x  (x+1)e^x ] = e^{2x}.
Since Wi(x) ≠ 0 for x ≠ 0, both Fi are linearly independent. □

Problem 3.30 Show that {1, x, x², ..., x^n} is linearly independent in the vector space C(R) of continuous functions.

3.10 Exercises

3.1. Let V be the set of all pairs (x, y) of real numbers. Define
    (x, y) + (x1, y1) = (x + x1, y + y1), k(x, y) = (kx, y).
Is V a vector space with these operations?

3.2. For x, y ∈ R² and k ∈ R, define two operations as
    x ⊕ y = x - y, k ⊙ x = -kx.
The operations on the right side are the usual ones. Which of the rules in the definition of a vector space are satisfied for (R², ⊕, ⊙)?

3.3.
Determine whether the given set is a vector space with the usual addition and scalar multiplication of functions.
(1) The set of all functions f defined on the interval [-1, 1] such that f(0) = 0.
(2) The set of all functions f defined on R such that f(1) = 1.
(3) The set of all twice differentiable functions f defined on R such that f''(x) + f(x) = 0.

3.4. Let C²[-1, 1] be the vector space of all functions with continuous second derivatives on the interval [-1, 1]. Which of the following subsets of C²[-1, 1] are subspaces?
(1) W = {y(x) ∈ C²[-1, 1] : y''(x) + y(x) = 0, -1 ≤ x ≤ 1};
(2) W = {y(x) ∈ C²[-1, 1] : y''(x) + y(x) = x², -1 ≤ x ≤ 1}.

3.5. Which of the following subsets of C[-1, 1] are subspaces of the vector space C[-1, 1] of continuous functions on [-1, 1]?
(1) W = {f(x) ∈ C[-1, 1] : f(-1) = -f(1)};
(2) W = {f(x) ∈ C[-1, 1] : f(x) ≥ 0 for all x in [-1, 1]};
(3) W = {f(x) ∈ C[-1, 1] : f(-1) = -2 and f(1) = 2};
(4) W = {f(x) ∈ C[-1, 1] : f(1) = 0}.

3.6. Does the vector (3, -1, 0, -1) belong to the subspace of R^4 spanned by the vectors (2, -1, 3, 2), (-1, 1, 1, -3) and (1, 1, 9, -5)?

3.7. Express the given polynomial as a linear combination of the polynomials in Q = {p1(x), p2(x), p3(x)}, where p1(x) = 2 + x + 4x², p2(x) = 1 - x + 3x², p3(x) = 3 + 2x + 5x²:
(1) p(x) = -9 - 7x - 15x²;
(2) p(x) = 6 + 11x + 6x².

3.8. Is {cos x, sin x, 1, e^x} linearly independent in the vector space C(R)?

3.9. Show that the given sets of functions are linearly independent in the vector space C[-π, π]:
(1) {1, x, x², ..., x^n};
(2) {1, cos x, sin x, cos 2x, sin 2x, ..., cos kx, sin kx}.

3.10. Are the vectors v1 = (1, 1, 2), v2 = (1, 0, 1) and v3 = (2, 1, 3) linearly dependent in the space R³?

3.11. In the space R^4, let W be the set of all vectors (x1, x2, x3, x4) that satisfy the equation x1 - x2 - 3x3 = 0. Prove that W is a subspace of R^4, and find a basis for the subspace W.

3.12. With respect to the basis α = {1, x, x²} for the vector space P2(R), find the coordinate vectors of the following polynomials:
(1) f(x) = x² - x + 1;
(2) g(x) = x² + x - 1;
(3) h(x) = 2x + 5.

3.13. Let W be the subspace of C[-π, π] consisting of functions of the form f(x) = a sin x + b cos x. Determine the dimension of W.

3.14. Let V denote the set of all infinite sequences of real
numbers, x = (xk) = (x1, x2, ...), xk ∈ R. If x = (xk) and y = (yk) are in V, then x + y is the sequence (xk + yk), and if c is a real number, then cx is the sequence (cxk).
(1) Prove that V is a vector space.
(2) Prove that V is not finite dimensional.

3.15. For two matrices A and B for which AB can be defined, prove the following:
(1) If both A and B have linearly independent column vectors, then the column vectors of AB are also linearly independent.
(2) If both A and B have linearly independent row vectors, then the row vectors of AB are also linearly independent.
(3) If the column vectors of B are linearly dependent, then the column vectors of AB are also linearly dependent.
(4) If the row vectors of A are linearly dependent, then the row vectors of AB are also linearly dependent.

3.16. Let U = {(x, y, z) ∈ R³ : x = z} and V = {(x, y, z) ∈ R³ : 2x + y - 2z = 0} be subspaces of R³.
(1) Find a basis for U ∩ V.
(2) Determine the dimension of U + V.
(3) Describe U, V, U ∩ V and U + V geometrically.

3.17. How many 5 x 5 permutation matrices are there? Are they linearly independent? Do they span the vector space M_5x5(R)?

3.18. Find bases for the row space, the column space, and the null space of the matrix
    A = [ 0 1 0 0
          1 0 0 0 ].

3.19. For any nonzero column vectors u and v, show that the matrix A = uv^T has rank 1. Conversely, show that every matrix of rank 1 can be written as uv^T for some u and v.

3.20. Determine whether the following statements are true or false, and justify your answers.
(1) The set of all n x n matrices A such that A^T = A is a subspace of the vector space M_nxn(R).
(2) If α and β are linearly independent subsets of a vector space V, then so is the union α ∪ β.
(3) If U and W are subspaces of a vector space V with bases α and β respectively, then the intersection α ∩ β is a basis for U ∩ W.
(4) Let U be the row-echelon form of a square matrix A. If the first k columns of U are linearly dependent, then so are the first k columns of A.
(5) Any two row-equivalent matrices have the same column space.
(6) Let A be an m x n matrix with rank m. Then the column vectors of A span R^m.
(7) Let A be an m x n matrix with rank n. Then Ax = b has at most one solution.
(8) If U is a subspace of V and x, y are vectors in V such that x + y is contained in U, then x ∈ U and y ∈ U.
(9) Let U and V be vector spaces. Then U is a subspace of V if and only if dim U ≤ dim V.
(10) For any m x n matrix A, dim C(A) + dim N(A^T) = m.

Chapter 4

Linear Transformations

4.1 Introduction

As we saw in Chapter 3, there are many vector spaces. Naturally, one can ask whether or not two vector spaces are the same. To say whether two vector spaces are the same or not, one has to compare them first as sets, and then see whether or not their arithmetical rules are preserved. A usual way of comparing two sets is by defining a function between them. Recall that a function from a set X into another set Y is a rule which assigns a unique element y in Y to each element x in X. Such a function is denoted by f : X → Y and is sometimes referred to as a transformation or a mapping. We say that f transforms (or maps) X into Y. When the given sets are vector spaces, one can compare their arithmetical rules also by a transformation f, provided f preserves the arithmetical rules: that is, f(x + y) = f(x) + f(y) and f(kx) = kf(x) for any vectors x, y and any scalar k. In this chapter, we discuss this kind of transformation between vector spaces via the linear equation Ax = b.

For an m x n matrix A, the equation Ax = b means that to every vector x = [x1 x2 ··· xn]^T in R^n the matrix multiplication Ax assigns a vector b (= Ax) in R^m. That is, the matrix A transforms every vector x in R^n into a vector b in R^m by the matrix multiplication Ax = b. Moreover, the distributive law A(x + ky) = Ax + kAy, for k ∈ R and x, y ∈ R^n, of matrix multiplication means that A preserves the sum of vectors and scalar multiplication.

Definition 4.1 Let V and W be vector spaces. A function T : V → W is called a linear transformation from V to W if for all x, y ∈ V and any scalar k the following conditions hold:
Goometrically, this is just the requzement fora straight line to be trans formed into a straight line, since x-+ ky represents straight line through x in the direction yin V, and its image P(x)+KP(y) also rprecentea straight line through T(x) in the direction of T(y) in W. ‘The following theorem is 1 direct consequence ofthe defitin, and the proof i lft for an exerci, ‘Theorem 4.1 LetT:V—W be a linear tmnsformation, Then @) T0)=0, (2) For any xy 45 225g € V ad sealer hy hay he Tay Baa +o hen) = hPa) +b) boo Ba) Example 4.1 Consider the following functions: (2) f: RR defiad by f(a) = 285 (2) 9:8 1B dofned by o(2) = 2? 2; (9) Asm? RF defined by fa, y) = (ey 20) (@) mt RP defined by h(x, v) = (aus 28+) ‘One can easily ce tht g nd are not linear, while f and h are ines, ‘Example 4.2 (1) Foran mn matrix A the transformation 7" — RO ‘defined by the matrix multiplication Te) = axe Io tear transformation by the distributive law A(x} hy) = Ac ky for ‘any x, ¥ €R® and for any salar & GR. Therefore a mats A, identified vith 7, may be considered to be linear tenaformation of R to R™. (2) Fora vector space V, th identity transformation Id: V -> V is efi by Idx) =x forall x € V. IFW is another vector space, the nero ‘transformation Ty : V+ W is defined Wy To(x) = (the aero vector) for allxce V. Clearly, oth transformations ae inxs a 44, INTRODUCTION us [Nontival important examples of linear trasformations are the rota- tions, reflections, and projection in gometry defined in the fllowing ex sample ‘Bxample 4.8 (1) Let @ denote the angle between the axis and o fixed vector in R Then the matric w-[ Sr i] find oreo Asfins a liner transformation on B? that rotates aay vootor in R? Urough the angle # about the org, and ls called a rotation by tho ange 0. (2) The projection onthe z-axis the linear transformation T 32 — 1 define by, or x= (2,3) ©, reo-[0 8][5]-(2] (8) The tnear transformation T=? +R? defined by, for x= (2,3), els ‘IE) [4] . 
Problem 4.1 Find the matrix of the reflection about the line y = x in the plane R².

Example 4.4 The transformation tr : M_nxn(R) → R defined as the sum of the diagonal entries,
    tr(A) = a11 + a22 + ··· + ann for A = (aij) ∈ M_nxn(R),
is called the trace. It is easy to show that tr(A + B) = tr(A) + tr(B) and tr(kA) = k tr(A) for any matrices A and B in M_nxn(R), which means that tr is a linear transformation. In particular, one can easily show that the set of all n x n matrices with trace 0 is a subspace of M_nxn(R). □

Problem 4.2 Let W = {A ∈ M_2x2(R) : tr(A) = 0}. Show that W is a subspace of M_2x2(R), and then find a basis for it.

Problem 4.3 Show that, for any matrices A and B in M_nxn(R), tr(AB) = tr(BA).

Example 4.5 From calculus, it is well known that the two transformations
    D : P_n(R) → P_{n-1}(R), I : P_n(R) → P_{n+1}(R)
defined by differentiation and integration,
    D(p(x)) = p'(x), I(p(x)) = ∫₀^x p(t) dt,
are linear, and so they are linear transformations. Many problems related to differential and integral equations may be reformulated in terms of linear transformations. □

Definition 4.2 Let V and W be two vector spaces, and let T : V → W be a linear transformation from V into W.
(1) Ker(T) = {v ∈ V : T(v) = 0} ⊆ V is called the kernel of T.
(2) Im(T) = {T(v) ∈ W : v ∈ V} = T(V) ⊆ W is called the image of T.

Example 4.6 Let V and W be vector spaces, and let Id : V → V and T0 : V → W be the identity and the zero transformations, respectively. Then it is easy to see that Ker(Id) = {0}, Im(Id) = V, Ker(T0) = V, and Im(T0) = {0}. □

Theorem 4.2 Let T : V → W be a linear transformation from a vector space V to a vector space W. Then the kernel Ker(T) and the image Im(T) are subspaces of V and W, respectively.

Proof: Since T(0) = 0, each of Ker(T) and Im(T) is nonempty, containing 0.
(1) For any x, y ∈ Ker(T) and for any scalar k,
    T(x + ky) = T(x) + kT(y) = 0 + k0 = 0.
Hence x + ky ∈ Ker(T), so that Ker(T) is a subspace of V.
(2) If v, w ∈ Im(T), then there exist x and y in V such that T(x) = v and T(y) = w. Thus, for any scalar k,
    v + kw = T(x) + kT(y) = T(x + ky).
Thus v + kw ∈ Im(T), so that Im(T) is a subspace of W. □
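The linearity of the trace (Example 4.4) and the identity tr(AB) = tr(BA) of Problem 4.3 can be spot-checked in numpy (the particular matrices below are arbitrary choices of ours):

```python
import numpy as np

A = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]])
B = np.array([[2., 0., 1.], [1., 3., 0.], [0., 1., 1.]])
k = 2.5

# Linearity of the trace: tr(A + kB) = tr(A) + k tr(B).
assert np.isclose(np.trace(A + k * B), np.trace(A) + k * np.trace(B))

# The identity of Problem 4.3: tr(AB) = tr(BA).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```

A numerical check on two matrices is of course no substitute for the short algebraic proofs asked for in the text; it merely illustrates the statements.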
Example 4.7 Let A : R^n → R^m be the linear transformation defined by an m x n matrix A as in Example 4.2 (1). The kernel Ker(A) of A consists of all solution vectors of the homogeneous system Ax = 0. Therefore, the kernel Ker(A) of A is nothing but the null space N(A) of the matrix A, and the image Im(A) of A is just the column space C(A) = Im(A) = A(R^n) ⊆ R^m of the matrix A. Recall that Ax is a linear combination of the column vectors of A. □

One of the most important properties of linear transformations is that they are completely determined by their values on a basis.

Theorem 4.3 Let V and W be vector spaces. Let {v1, ..., vn} be a basis for V, and let w1, ..., wn be any vectors (possibly repeated) in W. Then there exists a unique linear transformation T : V → W such that T(vi) = wi for i = 1, ..., n.

Proof: Let x ∈ V. Then x has a unique expression x = Σ ai vi as a linear combination of the basis vectors. Define
    T(x) = Σ ai wi.
In particular, T(vi) = wi for i = 1, ..., n. This T is linear: for y = Σ bi vi ∈ V and a scalar k, we have x + ky = Σ (ai + k bi) vi, so
    T(x + ky) = Σ (ai + k bi) wi = Σ ai wi + k Σ bi wi = T(x) + kT(y).
To show the uniqueness, suppose that S : V → W is linear and S(vi) = wi for i = 1, ..., n. Then, for any x = Σ ai vi ∈ V,
    S(x) = Σ ai S(vi) = Σ ai wi = T(x).
Hence, S = T. □

Therefore, from an assignment T(vi) = wi of an arbitrary vector wi in W to each vector vi in a basis for V, one can extend it uniquely to a linear transformation T from the vector space V into W. The uniqueness in the above theorem may be rephrased as the following corollary.

Corollary 4.4 Let V and W be vector spaces, and let {v1, ..., vn} be a basis for V. If S, T : V → W are linear transformations and S(vi) = T(vi) for i = 1, ..., n, then S = T, i.e., S(x) = T(x) for all x ∈ V.

Example 4.8 Let w1 = (1, 0), w2 = (2, -1), w3 = (4, 3) be three vectors in R².
(1) Let α = {e1, e2, e3} be the standard basis for the 3-space R³, and let T : R³ → R²
be the linear transformation defined by
    T(e1) = w1, T(e2) = w2, T(e3) = w3.
Find a formula for T(x1, x2, x3), and then use it to compute T(2, -3, 5).
(2) Let β = {v1, v2, v3} be another basis for R³, where v1 = (1, 1, 1), v2 = (1, 1, 0), v3 = (1, 0, 0), and let T : R³ → R² be the linear transformation defined by
    T(v1) = w1, T(v2) = w2, T(v3) = w3.
Find a formula for T(x1, x2, x3), and then use it to compute T(2, -3, 5).

Solution: (1) For x = (x1, x2, x3) = x1 e1 + x2 e2 + x3 e3 ∈ R³,
    T(x) = Σ xi T(ei) = Σ xi wi = x1(1, 0) + x2(2, -1) + x3(4, 3)
         = (x1 + 2x2 + 4x3, -x2 + 3x3).
Thus, T(2, -3, 5) = (16, 18). In matrix notation, this can be written as
    T(x) = [ 1  2  4 ; 0 -1  3 ][ x1 ; x2 ; x3 ].
(2) In this case, we need to express x = (x1, x2, x3) as a linear combination of v1, v2, v3; i.e.,
    (x1, x2, x3) = k1 v1 + k2 v2 + k3 v3 = k1(1, 1, 1) + k2(1, 1, 0) + k3(1, 0, 0)
                 = (k1 + k2 + k3, k1 + k2, k1).
By equating corresponding components, we obtain a system of equations
    k1 + k2 + k3 = x1,
    k1 + k2 = x2,
    k1 = x3.
The solution is k1 = x3, k2 = x2 - x3, k3 = x1 - x2. Therefore,
    (x1, x2, x3) = x3 v1 + (x2 - x3) v2 + (x1 - x2) v3,
and
    T(x1, x2, x3) = x3 T(v1) + (x2 - x3) T(v2) + (x1 - x2) T(v3)
                  = x3(1, 0) + (x2 - x3)(2, -1) + (x1 - x2)(4, 3)
                  = (4x1 - 2x2 - x3, 3x1 - 4x2 + x3).
Thus, T(2, -3, 5) = (9, 23). □
(In general, how would you find an expression of T(x) for x = (x1, x2, x3) ∈ R³?)

Problem 4.5 Let V and W be vector spaces, and let T : V → W be linear. Let {w1, ..., wk} be a linearly independent subset of the image Im(T) ⊆ W. Suppose α = {v1, ..., vk} is a subset of V such that T(vi) = wi for i = 1, ..., k. Prove that α is linearly independent.

4.2 Invertible linear transformations

Note that a function f from a set X to a set Y is said to be invertible if there is a function g, called the inverse function of f and denoted by g = f^{-1}, such that g ∘ f is the identity on X and f ∘ g is the identity on Y. An invertible function from a set X into another set Y gives a one-to-one correspondence between these two sets, so that they can be identified as sets. A useful criterion for a function between two given sets to be invertible is that it is one-to-one and onto. Recall that a function f : X → Y is said to be one-to-one (or
LINBAR TRANSFORMATIONS Anjective) if f(u) = f(o) in ¥ impli w =v in X, and ead tobe onto (or surjective) if for ead elamenty in ¥ there ison clement» in X such that ‘le) = y- A function Ie sald to be bijective if tls both onetoone and ‘onto, that ii foreach element y in ¥ there is a unique element =n X uch that J) Lemma 4.5 A function {2X ¥ is invertible if ond only ts bietve (orone-t-one ond ont) Prooft Suppose f:X + is invertible, and let g: ¥ +X be its iverse HE F(o) m fo), then w= (f(s) = o(J()) = v. Thus f i one-to-one. For each y €¥, gly) = = € X. Then f(2) = flay) = y. Thus itis ono, Conversely, suppose fi bijective. Then, foreach y € Y, thee ie unique © X such that f(s) = y. Now for each y © Y define o(y) = =. Then Ce can eal check that g : ¥—~ X lea welldeined function such that a, Le, g eth inverse of J 3 ‘The following lemma shows that fa gven fonction i an invertible linear ‘waneformation ftom a tector gpce into another, then the lsearity i also reserved bythe aversion, Lemme 4.6 Let V ond W be vector space If: VW isan invertible ‘inear transformation, then it inverse T=! :W —» V salto linear Prooft Let wi, ws € W and let be any scalar, Since ie invertible, ‘ls onetocone and onto 20 there exist unique vectors vy and vs in V such ‘that T(vy) = wy and T(vy) = Wo. Then TH, +h) = TTI) +e (02)) raw +i) = yeh Ym) + kom). ° Deflultion 4.8 A lncar transformation T: V ~ W from a vetor space V 42. INVERTIBLE LINEAR TRANSFORMATIONS 120 Lemma 48 shows that if 8 an lsomorphlom, tho its inverse 7! is lso ‘en ieomorphiem with (T-!)-2 = 7. 
Therefore, if V and W are isomorphic to each other, then they look the same as vector spaces.

If T : V → W and S : W → Z are linear transformations, then it is quite easy to show that their composition S ∘ T is also a linear transformation from V to Z. In particular, if they are given by matrices A : R^n → R^m and B : R^m → R^k, then their composition is nothing but multiplication by the matrix product BA; i.e., (B ∘ A)(x) = B(Ax) = (BA)x. Hence, if a linear transformation is given by an invertible n x n square matrix A : R^n → R^n, then the inverse matrix A^{-1} plays the role of the inverse linear transformation, so that A is an isomorphism of R^n. That is, a linear transformation given by an n x n square matrix A : R^n → R^n is an isomorphism if and only if rank A = n.

Problem 4.6 Suppose that S and T are linear transformations whose composition S ∘ T is well defined. Prove that
(1) if S ∘ T is one-to-one, so is T;
(2) if S ∘ T is onto, so is S;
(3) if S and T are isomorphisms, then so is S ∘ T;
(4) if A and B are two n x n matrices of rank n, then so is AB.

Theorem 4.7 Two vector spaces V and W are isomorphic if and only if dim V = dim W.

Proof: Let T : V → W be an isomorphism, and let {v1, ..., vn} be a basis for V. Then we show that the set {T(v1), ..., T(vn)} is a basis for W, so that dim W = n = dim V.
(1) It is linearly independent: Since T is one-to-one, the equation
    0 = c1 T(v1) + ··· + cn T(vn) = T(c1 v1 + ··· + cn vn)
implies that 0 = c1 v1 + ··· + cn vn. Since the vi's are linearly independent, we have ci = 0 for all i = 1, ..., n.
(2) It spans W: Since T is onto, for any y ∈ W there exists an x ∈ V such that T(x) = y. Write x = Σ ai vi. Then
    y = T(x) = T(a1 v1 + ··· + an vn) = a1 T(v1) + ··· + an T(vn),
so y is a linear combination of T(v1), ..., T(vn).
Conversely, suppose that dim V = dim W. Then one can choose bases {v1, ..., vn} and {w1, ..., wn} for V and W, respectively. By Theorem 4.3 there exists a linear transformation T : V → W such that T(vi) = wi for i = 1, ..., n. It is not hard to show that T is invertible, so that T is an isomorphism. Hence V and W are isomorphic. □

Problem 4.7 Let T : V → W be a linear transformation. Prove that
(1) T is one-to-one if and only if Ker(T) = {0};
(2) if dim V = dim W, then T is one-to-one if and only if it is onto.
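Since a square matrix defines an isomorphism of R^n exactly when its rank is n, these facts are easy to test numerically; a brief numpy sketch (the particular matrices are arbitrary choices of ours):

```python
import numpy as np

A = np.array([[2., 1.], [1., 1.]])   # rank 2: an isomorphism of R^2
B = np.array([[1., 2.], [2., 4.]])   # rank 1: neither one-to-one nor onto

print(np.linalg.matrix_rank(A))      # 2
print(np.linalg.matrix_rank(B))      # 1

# For the full-rank matrix, A^{-1} is the inverse linear transformation.
Ainv = np.linalg.inv(A)
assert np.allclose(A @ Ainv, np.eye(2))
assert np.allclose(Ainv @ A, np.eye(2))
```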
Corollary 4.8 Any $n$-dimensional vector space $V$ is isomorphic to the $n$-space $\mathbb{R}^n$.

An ordered basis for a vector space is a basis endowed with a specific order. Let $V$ be a vector space of dimension $n$ with an ordered basis $\alpha = \{v_1, \ldots, v_n\}$, and let $\{e_1, \ldots, e_n\}$ be the standard basis for $\mathbb{R}^n$ in this order. Then clearly the linear transformation $\Phi$ defined by $\Phi(v_i) = e_i$ is an isomorphism from $V$ to $\mathbb{R}^n$, called the natural isomorphism with respect to the basis $\alpha$. Now for any $x = \sum_{i=1}^n a_i v_i \in V$, the image of $x$ under this natural isomorphism is written as
$$\Phi(x) = \sum_{i=1}^n a_i \Phi(v_i) = \sum_{i=1}^n a_i e_i = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \in \mathbb{R}^n,$$
which is called the coordinate vector of $x$ with respect to the basis $\alpha$, and is denoted by $[x]_\alpha$. Clearly $[v_i]_\alpha = e_i$.

Example 4.9 Recall that the rotation by the angle $\theta$ is given by the matrix
$$R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.$$
Clearly, it is invertible and hence an isomorphism of $\mathbb{R}^2$. In fact, one can easily check that the inverse of $R_\theta$ is simply $R_{-\theta}$.

Let $\alpha = \{e_1, e_2\}$ be the standard basis, and let $\beta = \{v_1, v_2\}$, where $v_i = R_\theta(e_i)$, $i = 1, 2$. Then $\beta$ is also a basis for $\mathbb{R}^2$. The coordinate vectors of $e_i$ with respect to $\alpha$ are themselves:
$$[e_1]_\alpha = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad [e_2]_\alpha = \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$
and likewise
$$[v_1]_\beta = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad [v_2]_\beta = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \quad \square$$

Example 4.10 The reflection about the line $y = x$ (see Problem 4.1) may be obtained as the composition of the rotation by $-\frac{\pi}{4}$, the reflection about the $x$-axis, and the rotation by $\frac{\pi}{4}$; i.e., it is a product of the matrices given in (1) and (3) of Example 4.3. If we denote the rotation by $\frac{\pi}{4}$ by
$$R_{\pi/4} = \begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix},$$
then the reflection about the line $y = x$ is
$$R_{\pi/4} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} R_{-\pi/4} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$
The reflection about any line $\ell$ through the origin in the plane can be obtained in this way. $\square$

Problem 4.8 Find the matrix of the reflection about the line $y = \sqrt{3}\,x$ in $\mathbb{R}^2$.

Problem 4.9 Find the coordinate vector of $5 + 2x + 3x^2$ with respect to the given ordered basis $\alpha$ for $P_2(\mathbb{R})$:
(1) $\alpha = \{1,\ x,\ x^2\}$;
(2) $\alpha = \{1,\ 1 + x,\ 1 + x + x^2\}$.

Example 4.11 Let $A$ be an $n \times n$ matrix, i.e., a linear transformation on the $n$-space $\mathbb{R}^n$ defined by the matrix multiplication $Ax$ for any $x \in \mathbb{R}^n$.
Suppose that $r_1, \ldots, r_n$ are linearly independent vectors in $\mathbb{R}^n$ constituting a parallelepiped (see Remark (2) on page 70). Then $A$ transforms this parallelepiped into another parallelepiped determined by $Ar_1, \ldots, Ar_n$. Hence, if we denote by $B$ the $n \times n$ matrix whose $j$-th column is $r_j$, and by $C$ the $n \times n$ matrix whose $j$-th column is $Ar_j$, then clearly $C = AB$, so
$$\mathrm{vol}(P(C)) = |\det(AB)| = |\det A|\,|\det B| = |\det A|\,\mathrm{vol}(P(B)).$$
This means that, for a square matrix $A$ considered as a linear transformation, the absolute value of the determinant of $A$ is the ratio between the volume of a parallelepiped $P(B)$ and that of its image parallelepiped $P(C)$ under the transformation by $A$. If $\det A = 0$, then the image $P(C)$ is a degenerate parallelepiped in a subspace of dimension less than $n$. $\square$

Problem 4.10 Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be the linear transformation given by $T(x, y, z) = (x + y,\ y + z,\ z + x)$. Let $C$ denote the unit cube determined by the standard basis $\{e_1, e_2, e_3\}$. Find the volume of the image parallelepiped $T(C)$ of $C$ under $T$.

4.3 Application: Computer graphics

One of the simple applications of a linear transformation is to animation, or the graphical display of pictures, on a computer screen. For a simple display of the idea, let us consider a picture in the plane $\mathbb{R}^2$. Note that a picture or an image on a screen usually consists of a number of points, lines or curves connecting some of them, and information about how to fill the regions bounded by the lines and curves. Assuming that the computer has information about how to connect the points and curves, a figure can be defined by a list of points.

For example, consider the capital letter "L". It can be represented by a matrix whose columns are the coordinates of its vertices. The coordinates of the 6 vertices form a matrix:

    vertex:          1     2     3     4     5     6
    x-coordinate  [ 0.0   0.0   0.5   0.5   2.0   2.0 ]  = A.
    y-coordinate  [ 0.0   2.0   2.0   0.5   0.5   0.0 ]

Of course, we assume that the computer knows which vertices are connected to which by lines via some other algorithm.
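A vertex list of this kind is just a $2 \times n$ array, and multiplying it by any $2 \times 2$ matrix moves every vertex at once. A sketch (the shear factor 0.25 below is an arbitrary choice, not a value from the text):

```python
import numpy as np

# The six vertices of the letter "L", one per column; the connectivity
# (which vertex is joined to which) is assumed to be stored separately.
vertices = np.array([[0.0, 0.0, 0.5, 0.5, 2.0, 2.0],
                     [0.0, 2.0, 2.0, 0.5, 0.5, 0.0]])

# A shear slants the letter: x -> x + 0.25*y, y unchanged.
shear = np.array([[1.0, 0.25],
                  [0.0, 1.0]])
slanted = shear @ vertices      # column j is the image of vertex j

# det(shear) = 1, so by Example 4.11 the letter's area is unchanged.
area_ratio = abs(np.linalg.det(shear))
```

Because line segments map to line segments, redrawing the connections between the new columns reproduces the whole transformed figure.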
We know that line segments are transformed to other line segments by a matrix, considered as a linear transformation. Thus, by multiplying a matrix to $A$, the vertices are transformed to another set of vertices, and the line segments connecting the vertices are preserved. For example, a shear matrix such as
$$B = \begin{bmatrix} 1 & \tfrac14 \\ 0 & 1 \end{bmatrix}$$
transforms $A$ to $BA$, which represents the new coordinates of the vertices: each $x$-coordinate becomes $x + \tfrac14 y$ while the $y$-coordinates are unchanged. Now, the computer connects these vertices properly by lines according to the given algorithm and displays on the screen the changed figure, an italicized "L". The multiplication of the matrix
$$C = \begin{bmatrix} \tfrac12 & 0 \\ 0 & 1 \end{bmatrix}$$
to $BA$ shrinks the width of $BA$ by half. Thus, one can change the shape of a figure as desired by composing appropriate linear transformations; such successive transformations, applied step by step, show how the shape of the figure changes, and this is the basic idea behind computer animation.

Remark: In many cases, a motion of a figure that preserves its shape is a composition of a rotation and a translation. Note, however, that a translation is not a linear transformation, and that translations and rotations do not commute in general.

The above argument applies to figures in any dimension. For instance, a $3 \times 3$ matrix may be used to convert a figure in $\mathbb{R}^3$, since each point has 3 components.

Example 4.12 It is easy to see that the matrices
$$R_{x,\alpha} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}, \quad R_{y,\beta} = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix}, \quad R_{z,\gamma} = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
are the rotations about the $x$-, $y$-, $z$-axes by the angles $\alpha$, $\beta$ and $\gamma$, respectively. In general, the matrix that rotates $\mathbb{R}^3$ about a given axis is useful in many applications.
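The basic rotations of Example 4.12 are easy to tabulate and check numerically. A sketch of two of them (the test angle $\pi/2$ is an arbitrary choice):

```python
import numpy as np

def Rx(a):
    # Rotation about the x-axis by angle a.
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

def Rz(g):
    # Rotation about the z-axis by angle g.
    return np.array([[np.cos(g), -np.sin(g), 0.0],
                     [np.sin(g),  np.cos(g), 0.0],
                     [0.0, 0.0, 1.0]])

g = np.pi / 2
image_e1 = Rz(g) @ np.array([1.0, 0.0, 0.0])   # e1 rotates to ~(0, 1, 0)
fixed_e3 = Rz(g) @ np.array([0.0, 0.0, 1.0])   # the z-axis stays fixed
```

As expected, $R_{z,\gamma}$ fixes the $z$-axis and rotates the $xy$-plane, and $R_{z,\gamma}R_{z,-\gamma} = I$.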
One can easily express such a general rotation as a composition of the basic rotations $R_{x,\alpha}$, $R_{y,\beta}$ and $R_{z,\gamma}$.

Suppose that the axis of a rotation is the line determined by the vector $a = (\cos\alpha\cos\beta,\ \cos\alpha\sin\beta,\ \sin\alpha)$, with $-\frac{\pi}{2} \le \alpha \le \frac{\pi}{2}$ and $0 \le \beta \le 2\pi$ in spherical coordinates, and we want to find the matrix $R_{a,\theta}$ of the rotation about the $a$-axis by the angle $\theta$. For this, we first rotate the $a$-axis about the $z$-axis into the $xz$-plane by $R_{z,-\beta}$, and then rotate it about the $y$-axis into the $z$-axis by $R_{y,-(\frac{\pi}{2}-\alpha)}$. The rotation about the $a$-axis is now the same as the rotation $R_{z,\theta}$ about the $z$-axis. After this, we get back to the rotation about the $a$-axis via $R_{y,\frac{\pi}{2}-\alpha}$ and $R_{z,\beta}$. In summary,
$$R_{a,\theta} = R_{z,\beta}\, R_{y,\frac{\pi}{2}-\alpha}\, R_{z,\theta}\, R_{y,-(\frac{\pi}{2}-\alpha)}\, R_{z,-\beta}. \quad \square$$

Problem 4.11 Find the matrix of the rotation about the line determined by the vector $a$ above by a given angle $\theta$, using the factorization of $R_{a,\theta}$.

4.4 Matrices of linear transformations

We saw that multiplication of an $m \times n$ matrix $A$ with an $n \times 1$ column matrix $x$ gives rise to a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$. In this section, we show that for any vector spaces $V$ and $W$ (not necessarily the $n$-spaces), a linear transformation $T : V \to W$ can be represented by a matrix.

Recall that, for any $n$-dimensional vector space $V$ with an ordered basis, there is a natural isomorphism from $V$ to the $n$-space $\mathbb{R}^n$, which depends on the choice of a basis for $V$. Let $T : V \to W$ be a linear transformation from an $n$-dimensional vector space $V$ to an $m$-dimensional vector space $W$. Take ordered bases $\alpha = \{v_1, \ldots, v_n\}$ for $V$ and $\beta = \{w_1, \ldots, w_m\}$ for $W$, and fix them in the following discussion. Then each vector $T(v_j)$ in $W$ is expressed uniquely as a linear combination of the vectors $w_1, \ldots, w_m$ in the basis $\beta$ for $W$, say
$$T(v_j) = a_{1j} w_1 + a_{2j} w_2 + \cdots + a_{mj} w_m = \sum_{i=1}^m a_{ij} w_i, \quad j = 1, \ldots, n,$$
for some scalars $a_{ij}$ ($i = 1, \ldots, m$; $j = 1, \ldots, n$). Notice the indexing order of $a_{ij}$ in this expression: the coordinate vector $[T(v_j)]_\beta$ of $T(v_j)$ with respect to the basis $\beta$ can be written as the column vector
$$[T(v_j)]_\beta = \begin{bmatrix} a_{1j} \\ \vdots \\ a_{mj} \end{bmatrix}.$$
Now, for any vector $x = \sum_{j=1}^n x_j v_j \in V$,
$$T(x) = \sum_{j=1}^n x_j T(v_j) = \sum_{j=1}^n x_j \sum_{i=1}^m a_{ij} w_i = \sum_{i=1}^m \Big( \sum_{j=1}^n a_{ij} x_j \Big) w_i.$$
Therefore, the coordinate vector of $T(x)$ with respect to the basis $\beta$ is
$$[T(x)]_\beta = \begin{bmatrix} \sum_{j} a_{1j} x_j \\ \vdots \\ \sum_{j} a_{mj} x_j \end{bmatrix} = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = A[x]_\alpha,$$
where $[x]_\alpha = [x_1\ \cdots\ x_n]^T$
is the coordinate vector of $x$ with respect to the basis $\alpha$ in $V$. In this sense, we say that matrix multiplication by $A$ represents the transformation $T$. Note that $A = [a_{ij}]$ is the matrix whose column vectors are just the coordinate vectors $[T(v_j)]_\beta$ of $T(v_j)$ with respect to the basis $\beta$. Moreover, for the fixed bases $\alpha$ for $V$ and $\beta$ for $W$, the matrix associated with the linear transformation $T$ with respect to these bases is unique, because the coordinate expression of a vector with respect to a basis is unique. Thus, the assignment of the matrix $A$ to a linear transformation $T$ is well-defined.

Definition 4.4 The matrix $A$ is called the associated matrix for $T$ (or the matrix representation of $T$) with respect to the bases $\alpha$ and $\beta$, and is denoted by $A = [T]^\beta_\alpha$.

Now the above argument can be summarized in the following theorem.

Theorem 4.9 Let $T : V \to W$ be a linear transformation from an $n$-dimensional vector space $V$ to an $m$-dimensional vector space $W$. For fixed ordered bases $\alpha$ for $V$ and $\beta$ for $W$, the coordinate vector $[T(x)]_\beta$ of $T(x)$ with respect to $\beta$ is given as a matrix product of the associated matrix $[T]^\beta_\alpha$ and $[x]_\alpha$; i.e.,
$$[T(x)]_\beta = [T]^\beta_\alpha [x]_\alpha.$$
The associated matrix $[T]^\beta_\alpha$ is given as
$$[T]^\beta_\alpha = \big[\ [T(v_1)]_\beta \ \ [T(v_2)]_\beta \ \ \cdots \ \ [T(v_n)]_\beta \ \big].$$

This situation can be incorporated in the following commutative diagram:

      V  ----T---->  W          x  |-----> T(x)
      |              |          |          |
     Phi            Psi         v          v
      v              v        [x]_a |--> A[x]_a = [T(x)]_b
     R^n ----A----> R^m

where $\Phi$ and $\Psi$ denote the natural isomorphisms, defined in Section 4.2, from $V$ to $\mathbb{R}^n$ with respect to $\alpha$, and from $W$ to $\mathbb{R}^m$ with respect to $\beta$, respectively. Note that the commutativity of the above diagram means that $A \circ \Phi = \Psi \circ T$. When $V = W$ and $\alpha = \beta$, we simply write $[T]_\alpha$ for $[T]^\alpha_\alpha$.

Remark: (1) Note that an $m \times n$ matrix $A$ is the matrix representation of $A$ itself with respect to the standard bases $\alpha$ for $\mathbb{R}^n$ and $\beta$ for $\mathbb{R}^m$; i.e., $A = [A]^\beta_\alpha$. In particular, if $A$ is an invertible $n \times n$ square matrix, then the column vectors $c_1, \ldots,$
$c_n$ form another basis $\beta$ for $\mathbb{R}^n$. Thus, $A$ is simply the linear transformation on $\mathbb{R}^n$ that takes the standard basis to $\beta$; in fact, $Ae_j$ is the $j$-th column of $A$.

(2) Let $V$ and $W$ be vector spaces with bases $\alpha$ and $\beta$, respectively, and let $T : V \to W$ be a linear transformation with the matrix representation $A$. Then it is quite clear that $\mathrm{Ker}(T)$ and $\mathrm{Im}(T)$ are isomorphic to $\mathcal{N}(A)$ and $\mathcal{C}(A)$, respectively, via the natural isomorphisms. In particular, if $V = \mathbb{R}^n$ and $W = \mathbb{R}^m$ with the standard bases, then $\mathrm{Ker}(T) = \mathcal{N}(A)$ and $\mathrm{Im}(T) = \mathcal{C}(A)$. Therefore, from Corollary 3.17, we have
$$\dim(\mathrm{Ker}(T)) + \dim(\mathrm{Im}(T)) = \dim V.$$

The following examples illustrate the computation of matrices associated with linear transformations.

Example 4.13 Let $Id : V \to V$ be the identity transformation on a vector space $V$. Then for any ordered basis $\alpha$ for $V$, the matrix $[Id]_\alpha = I$, the identity matrix. $\square$

Example 4.14 Let $T : P_3(\mathbb{R}) \to P_2(\mathbb{R})$ be the linear transformation defined by differentiation, $T(p(x)) = p'(x)$. Then, with the bases $\alpha = \{1,\ x,\ x^2,\ x^3\}$ and $\beta = \{1,\ x,\ x^2\}$ for $P_3(\mathbb{R})$ and $P_2(\mathbb{R})$, respectively, the associated matrix for $T$ is
$$[T]^\beta_\alpha = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix}. \quad \square$$

Example 4.15 Let $T : \mathbb{R}^2 \to \mathbb{R}^3$ be the linear transformation defined by $T(x, y) = (x + 2y,\ 0,\ 2x + 3y)$, with respect to the standard bases $\alpha$ and $\beta$ for $\mathbb{R}^2$ and $\mathbb{R}^3$, respectively. Then
$$T(e_1) = T(1, 0) = (1,\ 0,\ 2) = 1 \cdot e_1 + 0 \cdot e_2 + 2 \cdot e_3,$$
$$T(e_2) = T(0, 1) = (2,\ 0,\ 3) = 2 \cdot e_1 + 0 \cdot e_2 + 3 \cdot e_3.$$
Thus
$$[T]^\beta_\alpha = \begin{bmatrix} 1 & 2 \\ 0 & 0 \\ 2 & 3 \end{bmatrix}. \quad \square$$

Example 4.16 Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be a linear transformation given by $T(1, 1) = (0, 1)$ and $T(-1, 1) = (2, 3)$. Find the matrix representation $[T]_\alpha$ of $T$ with respect to the standard basis $\alpha = \{e_1, e_2\}$.

Solution: Note that $(a, b) = a e_1 + b e_2$ for any $(a, b) \in \mathbb{R}^2$. Thus the definition of $T$ shows
$$T(e_1) + T(e_2) = T(e_1 + e_2) = T(1, 1) = (0, 1) = e_2,$$
$$-T(e_1) + T(e_2) = T(-e_1 + e_2) = T(-1, 1) = (2, 3) = 2e_1 + 3e_2.$$
By solving these equations, we obtain
$$T(e_1) = -e_1 - e_2, \quad T(e_2) = e_1 + 2e_2.$$
Hence,
$$[T]_\alpha = \big[\ [T(e_1)]_\alpha\ \ [T(e_2)]_\alpha\ \big] = \begin{bmatrix} -1 & 1 \\ -1 & 2 \end{bmatrix}. \quad \square$$

Example 4.17 Let $T$ be the linear transformation in Example 4.16. Find $[T]_\beta$ for the basis $\beta = \{v_1, v_2\}$, where $v_1 = (0, 1)$ and $v_2 = (2, 3)$.
Solution: From Example 4.16,
$$T(v_1) = \begin{bmatrix} -1 & 1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad T(v_2) = \begin{bmatrix} -1 & 1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 1 \\ 4 \end{bmatrix}.$$
Writing these vectors with respect to $\beta$, we get
$$\begin{bmatrix} 1 \\ 2 \end{bmatrix} = a v_1 + b v_2 = \begin{bmatrix} 2b \\ a + 3b \end{bmatrix}, \qquad \begin{bmatrix} 1 \\ 4 \end{bmatrix} = c v_1 + d v_2 = \begin{bmatrix} 2d \\ c + 3d \end{bmatrix}.$$
Solving these equations, we obtain
$$[T(v_1)]_\beta = \begin{bmatrix} 1/2 \\ 1/2 \end{bmatrix}, \quad [T(v_2)]_\beta = \begin{bmatrix} 5/2 \\ 1/2 \end{bmatrix}, \quad\text{so}\quad [T]_\beta = \begin{bmatrix} 1/2 & 5/2 \\ 1/2 & 1/2 \end{bmatrix}. \quad \square$$

Problem 4.12 Find the matrix representation of each of the following linear transformations $T$ of $\mathbb{R}^2$ with respect to the standard basis $\{e_1, e_2\}$: the rotation by a fixed angle $\theta$, and the reflections about the coordinate axes and about the line $y = x$.

Problem 4.13 Let $T : \mathbb{R}^3 \to \mathbb{R}^2$ be the linear transformation defined by $T(x, y, z) = (x + y,\ 2z - x)$. With the standard bases $\alpha$ and $\beta$ of $\mathbb{R}^3$ and $\mathbb{R}^2$, respectively, find $[T]^\beta_\alpha$.

Problem 4.14 Let $Id : \mathbb{R}^n \to \mathbb{R}^n$ be the identity transformation. Let $x_k$ denote the vector whose first $k$ coordinates are 1 and whose remaining $n - k$ coordinates are all 0. Then clearly $\beta = \{x_1, \ldots, x_n\}$ is a basis for $\mathbb{R}^n$ (see Problem 3.9). Let $\alpha = \{e_1, \ldots, e_n\}$ be the standard basis for $\mathbb{R}^n$. Find the matrix representations $[Id]^\beta_\alpha$ and $[Id]^\alpha_\beta$.

4.5 Vector spaces of linear transformations

Let $V$ and $W$ be two vector spaces, and let $\mathcal{L}(V; W)$ denote the set of all linear transformations from $V$ to $W$; i.e.,
$$\mathcal{L}(V; W) = \{T : V \to W \mid T \text{ is a linear transformation from } V \text{ into } W\}.$$
For $S, T \in \mathcal{L}(V; W)$ and $\lambda \in \mathbb{R}$, define the sum $S + T$ and the scalar multiplication $\lambda S$ by
$$(S + T)(v) = S(v) + T(v), \qquad (\lambda S)(v) = \lambda(S(v))$$
for any $v \in V$. Then clearly $S + T$ and $\lambda S$ belong to $\mathcal{L}(V; W)$, so that $\mathcal{L}(V; W)$ becomes a vector space. In particular, if $V = \mathbb{R}^n$ and $W = \mathbb{R}^m$, then the set $M_{m \times n}(\mathbb{R})$ is precisely the vector space of the linear transformations of $\mathbb{R}^n$ into $\mathbb{R}^m$ with respect to the standard bases. Hence, by fixing the standard bases, we have identified $\mathcal{L}(\mathbb{R}^n; \mathbb{R}^m) = M_{m \times n}(\mathbb{R})$ via the matrix representation.

In general, for any vector spaces $V$ and $W$ of dimensions $n$ and $m$ with ordered bases $\alpha$ and $\beta$, respectively, there is a one-to-one correspondence between $\mathcal{L}(V; W)$ and $M_{m \times n}(\mathbb{R})$ via the matrix representation. Let us first define a transformation $\phi : \mathcal{L}(V; W) \to M_{m \times n}(\mathbb{R})$ by $\phi(T) = [T]^\beta_\alpha \in M_{m \times n}(\mathbb{R})$ for any $T \in \mathcal{L}(V; W)$ (see Section 4.4). If $[S]^\beta_\alpha = [T]^\beta_\alpha$ for $S$ and $T \in \mathcal{L}(V; W)$, then we have $S = T$ by Corollary 4.4. This means that $\phi$ is one-to-one.

On the other hand, an $m \times n$ matrix $A$, considered as a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$, gives rise to a linear
transformation $T$ from $V$ to $W$ via the composition of $A$ with the natural isomorphisms $\Phi$ and $\Psi$, namely $T = \Psi^{-1} \circ A \circ \Phi$, which satisfies $[T]^\beta_\alpha = A$. This means that $\phi$ is onto. Therefore, $\phi$ gives a one-to-one correspondence between $\mathcal{L}(V; W)$ and $M_{m \times n}(\mathbb{R})$. Furthermore, the following theorem shows that $\phi$ is linear, so that it is in fact an isomorphism from $\mathcal{L}(V; W)$ to $M_{m \times n}(\mathbb{R})$.

Theorem 4.10 Let $V$ and $W$ be vector spaces with ordered bases $\alpha$ and $\beta$, respectively, and let $S, T : V \to W$ be linear. Then we have
$$[S + T]^\beta_\alpha = [S]^\beta_\alpha + [T]^\beta_\alpha.$$

Proof: Let $\alpha = \{v_1, \ldots, v_n\}$ and $\beta = \{w_1, \ldots, w_m\}$. Then we have $S(v_j) = \sum_{i=1}^m a_{ij} w_i$ and $T(v_j) = \sum_{i=1}^m b_{ij} w_i$ for each $1 \le j \le n$, so that $(S + T)(v_j) = \sum_{i=1}^m (a_{ij} + b_{ij}) w_i$. Hence
$$[S + T]^\beta_\alpha = [a_{ij} + b_{ij}] = [S]^\beta_\alpha + [T]^\beta_\alpha. \quad \square$$

Theorem 4.11 For linear transformations $T : V \to W$ and $S : W \to Z$ with ordered bases $\alpha$, $\beta$ and $\gamma$ for $V$, $W$ and $Z$, respectively,
$$[S \circ T]^\gamma_\alpha = [S]^\gamma_\beta\, [T]^\beta_\alpha.$$

Theorem 4.12 If $T : V \to W$ is an isomorphism, then $[T]^\beta_\alpha$ is an invertible matrix for any basis $\alpha$ for $V$ and $\beta$ for $W$.

Problem 4.17 For the vector spaces $P_1(\mathbb{R})$ and $\mathbb{R}^2$, choose the bases $\alpha = \{1,\ x\}$ of $P_1(\mathbb{R})$ and $\beta = \{e_1,\ e_2\}$ of $\mathbb{R}^2$, respectively. Let $T : P_1(\mathbb{R}) \to \mathbb{R}^2$ be the linear transformation defined by $T(a + bx) = (a,\ a + b)$.
(1) Show that $T$ is an isomorphism.
(2) Find $[T]^\beta_\alpha$ and $[T^{-1}]^\alpha_\beta$.

4.6 Change of bases

In Section 4.2, we saw that any vector space $V$ of dimension $n$ with an ordered basis $\alpha$ is isomorphic to the $n$-space $\mathbb{R}^n$ via the natural isomorphism $\Phi$, which assigns the coordinate vector in $\mathbb{R}^n$ to each $x \in V$: $\Phi(x) = [x]_\alpha$. Of course, we obtain a different isomorphism if we take another basis $\beta$ instead of $\alpha$; that is, the coordinate expression of $x$ with respect to $\beta$ may differ from that with respect to $\alpha$. The matrix $[Id]^\alpha_\beta$ of the identity transformation on $V$ with respect to any two bases $\beta$ and $\alpha$ is called the transition matrix, or the coordinate change matrix, from $\beta$ to $\alpha$. Since the identity transformation $Id : V \to V$ is invertible, the transition matrix $Q = [Id]^\alpha_\beta$ is also invertible, by Theorem 4.12. If we had taken the expressions of the vectors in the basis $\alpha$ with respect to the basis $\beta$, $Id(v_j) = \sum_{i=1}^n b_{ij} w_i$ for $j = 1, 2, \ldots, n$, then we would have $[b_{ij}] = [Id]^\beta_\alpha = Q^{-1}$ and $[x]_\beta = [Id]^\beta_\alpha [x]_\alpha = Q^{-1}[x]_\alpha$.

Example 4.18 Let the 3-space $\mathbb{R}^3$ be equipped with the standard $xyz$-coordinate system, i.e., with the standard basis $\alpha = \{e_1, e_2, e_3\}$. Take a new $x'y'z'$-coordinate system by rotating the $xy$-system around the $z$-axis counterclockwise through an angle $\theta$; i.e., we take a new basis $\beta = \{e_1', e_2', e_3'\}$ obtained by rotating the basis $\alpha$ about the $z$-axis through $\theta$. Then
$$e_1' = (\cos\theta,\ \sin\theta,\ 0), \quad e_2' = (-\sin\theta,\ \cos\theta,\ 0), \quad e_3' = (0,\ 0,\ 1).$$
Hence, the transition matrix from $\beta$ to $\alpha$ is
$$Q = [Id]^\alpha_\beta = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix},$$
so that
$$[x]_\alpha = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = Q[x]_\beta.$$
Moreover, $Q = [Id]^\alpha_\beta$ is invertible, and the transition matrix from $\alpha$ to $\beta$ is
$$Q^{-1} = [Id]^\beta_\alpha = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix},$$
so that
$$[x]_\beta = \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}. \quad \square$$

Problem 4.18 Find the transition matrix from a given basis $\alpha$ to another given basis $\beta$ for the 3-space $\mathbb{R}^3$.

4.7 Similarity

The coordinate expression of a vector in a vector space $V$ depends on the choice of an ordered basis, so the matrix representation of a linear transformation also depends on the choice of bases.

Let $V$ and $W$ be two vector spaces of dimensions $n$ and $m$, with two ordered bases $\alpha$ and $\beta$, respectively, and let $T : V \to W$ be a linear transformation. In Section 4.4, we showed how to find $[T]^\beta_\alpha$. If we choose different bases $\alpha'$ and $\beta'$ for $V$ and $W$, respectively, then we get another matrix representation $[T]^{\beta'}_{\alpha'}$ of $T$. In fact, we have two different expressions $[x]_\alpha$ and $[x]_{\alpha'}$ for each $x \in V$, and $[T(x)]_\beta$ and $[T(x)]_{\beta'}$ for $T(x) \in W$. They are related by the transition matrices in the following equations:
$$[x]_{\alpha'} = [Id_V]^{\alpha'}_\alpha [x]_\alpha \quad\text{and}\quad [T(x)]_{\beta'} = [Id_W]^{\beta'}_\beta [T(x)]_\beta.$$
On the other hand, by Theorem 4.9, we have
$$[T(x)]_\beta = [T]^\beta_\alpha [x]_\alpha \quad\text{and}\quad [T(x)]_{\beta'} = [T]^{\beta'}_{\alpha'} [x]_{\alpha'}.$$
Therefore, we get
$$[T]^{\beta'}_{\alpha'}[x]_{\alpha'} = [T(x)]_{\beta'} = [Id_W]^{\beta'}_\beta [T(x)]_\beta = [Id_W]^{\beta'}_\beta [T]^\beta_\alpha [x]_\alpha = [Id_W]^{\beta'}_\beta [T]^\beta_\alpha [Id_V]^\alpha_{\alpha'} [x]_{\alpha'}$$
for all $x \in V$. Actually, from Theorem 4.11, this relation can be obtained directly as
$$[T]^{\beta'}_{\alpha'} = [Id_W \circ T \circ Id_V]^{\beta'}_{\alpha'} = [Id_W]^{\beta'}_\beta\, [T]^\beta_\alpha\, [Id_V]^\alpha_{\alpha'},$$
since $T = Id_W \circ T \circ Id_V$. Note that $[T]^\beta_\alpha$ and $[T]^{\beta'}_{\alpha'}$ are $m \times n$ matrices, $[Id_V]^\alpha_{\alpha'}$ is an $n \times n$ matrix, and $[Id_W]^{\beta'}_\beta$ is an $m \times m$ matrix.

This relation can also be incorporated in the following commutative diagram:

    (V, a') ----T----> (W, b')
       |                  |
     Id_V               Id_W
       v                  v
    (V, a)  ----T----> (W, b)

Theorem 4.13 Let $T : V \to W$ be a linear transformation from a vector space $V$ with bases $\alpha$ and $\alpha'$ to another vector space $W$ with bases $\beta$ and $\beta'$. Then
$$[T]^{\beta'}_{\alpha'} = P^{-1}\, [T]^\beta_\alpha\, Q,$$
where $Q = [Id_V]^\alpha_{\alpha'}$ and $P = [Id_W]^\beta_{\beta'}$ are the transition matrices.

In particular, if we take $W = V$, $\beta = \alpha$ and $\beta' = \alpha'$, then $P = Q$ and we get the following corollary.
Corollary 4.14 Let $T : V \to V$ be a linear transformation on a vector space $V$, and let $\alpha$ and $\beta$ be ordered bases for $V$. Let $Q = [Id]^\alpha_\beta$ be the transition matrix from $\beta$ to $\alpha$. Then
(1) $Q$ is invertible, and $Q^{-1} = [Id]^\beta_\alpha$;
(2) for any $x \in V$, $[x]_\alpha = Q[x]_\beta$;
(3) $[T]_\beta = Q^{-1} [T]_\alpha Q$.

The relation (3) between $[T]_\beta$ and $[T]_\alpha$ in Corollary 4.14 is called a similarity. In general, we have the following definition.

Definition 4.6 For any square matrices $A$ and $B$, $A$ is said to be similar to $B$ if there exists a nonsingular matrix $Q$ such that $B = Q^{-1} A Q$.

Note that if $A$ is similar to $B$, then $B$ is also similar to $A$. Thus we simply say that $A$ and $B$ are similar matrices. We saw in Corollary 4.14 that if $A$ and $B$ are $n \times n$ matrices representing the same linear transformation $T$, then $A$ and $B$ are similar.

Example 4.19 Let $\beta = \{v_1, v_2, v_3\}$ be a basis for $\mathbb{R}^3$ consisting of $v_1 = (1, 1, 0)$, $v_2 = (1, 0, 1)$ and $v_3 = (0, 1, 1)$. Let $T$ be the linear transformation on $\mathbb{R}^3$ given by a matrix $[T]_\alpha$ with respect to the standard basis $\alpha = \{e_1, e_2, e_3\}$. Find the transition matrix $[Id]^\alpha_\beta$ and $[T]_\beta$.

Solution: Since $v_1 = e_1 + e_2$, $v_2 = e_1 + e_3$ and $v_3 = e_2 + e_3$, we have
$$[Id]^\alpha_\beta = \big[\ [v_1]_\alpha\ \ [v_2]_\alpha\ \ [v_3]_\alpha\ \big] = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix},$$
and then $[T]_\beta = \big([Id]^\alpha_\beta\big)^{-1} [T]_\alpha\, [Id]^\alpha_\beta$. $\square$

Example 4.20 Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be the linear transformation defined by
$$T(x_1, x_2, x_3) = (2x_1 + x_2,\ x_1 + x_2 + 3x_3,\ -x_2).$$
Let $\alpha = \{e_1, e_2, e_3\}$ be the standard ordered basis. Then we clearly have
$$[T]_\alpha = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 1 & 3 \\ 0 & -1 & 0 \end{bmatrix}.$$
Let $\beta = \{v_1, v_2, v_3\}$ be another ordered basis for $\mathbb{R}^3$ consisting of $v_1 = (1, 0, 0)$, $v_2 = (2, 1, 0)$ and $v_3 = (1, 1, 1)$. Let $Q = [Id]^\alpha_\beta$ be the transition matrix from $\beta$ to $\alpha$. Since $\alpha$ is the standard ordered basis for $\mathbb{R}^3$, the columns of $Q$ are simply the vectors in $\beta$ written in the same order, and its inverse is easily calculated. Thus
$$Q = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}, \qquad Q^{-1} = \begin{bmatrix} 1 & -2 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}.$$
A straightforward multiplication shows that
$$[T]_\beta = Q^{-1}[T]_\alpha Q = \begin{bmatrix} 0 & -2 & -8 \\ 1 & 4 & 6 \\ 0 & -1 & -1 \end{bmatrix}.$$
To check that this is correct, we can verify that the image under $T$ of the $j$-th vector of $\beta$ is the linear combination of the vectors of $\beta$ with the entries of the $j$-th column of $[T]_\beta$ as coefficients. For example, for $j = 2$ we have $T(v_2) = T(2, 1, 0) = (5, 3, -1)$. On the other hand, the coefficients of $[T(v_2)]_\beta$ are just the entries of the second column of $[T]_\beta$.
Therefore,
$$T(v_2) = -2v_1 + 4v_2 - v_3 = -2(1, 0, 0) + 4(2, 1, 0) - (1, 1, 1) = (5, 3, -1),$$
as expected. $\square$

The next theorem shows that two similar matrices are matrix representations of the same linear transformation.

Theorem 4.15 Suppose that $A$ represents a linear transformation $T : V \to V$ on a vector space $V$ with respect to an ordered basis $\alpha = \{v_1, \ldots, v_n\}$, i.e., $[T]_\alpha = A$. If $B = Q^{-1} A Q$ for some nonsingular matrix $Q$, then there exists a basis $\beta$ for $V$ such that $B = [T]_\beta$ and $Q = [Id]^\alpha_\beta$.

Proof: Let $Q = [q_{ij}]$, and let $w_1, \ldots, w_n$ be the vectors in $V$ defined by
$$w_j = q_{1j} v_1 + q_{2j} v_2 + \cdots + q_{nj} v_n, \quad j = 1, \ldots, n.$$
Then the nonsingularity of $Q = [q_{ij}]$ implies that $\beta = \{w_1, \ldots, w_n\}$ is an ordered basis for $V$ and $Q = [Id]^\alpha_\beta$, so Corollary 4.14(3) shows that
$$[T]_\beta = Q^{-1}[T]_\alpha Q = Q^{-1} A Q = B, \quad\text{with}\quad Q = [Id]^\alpha_\beta. \quad \square$$

Example 4.21 Let $D$ be the differential operator on the vector space $P_2(\mathbb{R})$. Given two ordered bases $\alpha = \{1,\ x,\ x^2\}$ and $\beta = \{1,\ 2x,\ 4x^2 - 2\}$ for $P_2(\mathbb{R})$, we first note that
$$D(1) = 0 \cdot 1 + 0 \cdot x + 0 \cdot x^2, \quad D(x) = 1 \cdot 1 + 0 \cdot x + 0 \cdot x^2, \quad D(x^2) = 0 \cdot 1 + 2 \cdot x + 0 \cdot x^2.$$
Hence, the matrix representation of $D$ with respect to $\alpha$ is given by
$$[D]_\alpha = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}.$$
Applying $D$ to $1$, $2x$ and $4x^2 - 2$, one obtains
$$D(1) = 0, \quad D(2x) = 2 \cdot 1, \quad D(4x^2 - 2) = 8x = 4 \cdot (2x),$$
so that
$$[D]_\beta = \begin{bmatrix} 0 & 2 & 0 \\ 0 & 0 & 4 \\ 0 & 0 & 0 \end{bmatrix}.$$
The transition matrix $Q$ from $\beta = \{1,\ 2x,\ 4x^2 - 2\}$ to $\alpha = \{1,\ x,\ x^2\}$ and its inverse are easily calculated as
$$Q = \begin{bmatrix} 1 & 0 & -2 \\ 0 & 2 & 0 \\ 0 & 0 & 4 \end{bmatrix}, \qquad Q^{-1} = \begin{bmatrix} 1 & 0 & 1/2 \\ 0 & 1/2 & 0 \\ 0 & 0 & 1/4 \end{bmatrix}.$$
A simple computation shows that $[D]_\beta = Q^{-1}[D]_\alpha Q$. $\square$

Problem 4.19 Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be the linear transformation defined by
$$T(x_1, x_2, x_3) = (x_1 + 2x_2 + x_3,\ x_1 + x_3,\ x_1 + x_2).$$
Let $\alpha$ be the standard basis, and let $\beta = \{v_1, v_2, v_3\}$ be another basis for $\mathbb{R}^3$ consisting of $v_1 = (1, 0, 0)$, $v_2 = (1, 1, 0)$ and $v_3 = (1, 1, 1)$. Find the associated matrix of $T$ with respect to $\alpha$ and the associated matrix of $T$ with respect to $\beta$. Are they similar?
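The similarity relation $[T]_\beta = Q^{-1}[T]_\alpha Q$ is easy to experiment with numerically. A sketch using the matrices of Example 4.20 (similar matrices share trace, determinant and rank, which serves as a quick sanity check):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 3.0],
              [0.0, -1.0, 0.0]])     # [T]_alpha in the standard basis
Q = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])      # columns: the basis beta
B = np.linalg.inv(Q) @ A @ Q         # [T]_beta = Q^{-1} [T]_alpha Q

same_trace = np.isclose(np.trace(A), np.trace(B))
same_det = np.isclose(np.linalg.det(A), np.linalg.det(B))
```

Running the same computation with any other nonsingular `Q` produces another matrix similar to `A`, which by Theorem 4.15 represents the same transformation in some basis.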
Problem 4.20 Suppose that $A$ and $B$ are similar $n \times n$ matrices. Show that
(1) $\det A = \det B$ and $\mathrm{tr}\,A = \mathrm{tr}\,B$;
(2) $\mathrm{rank}\,A = \mathrm{rank}\,B$.

Problem 4.21 Let $A$ and $B$ be $n \times n$ matrices. Show that if $A$ is similar to $B$, then $A^2$ is similar to $B^2$.

4.8 Dual spaces

In this section, we are concerned exclusively with linear transformations from a vector space $V$ to the one-dimensional vector space $\mathbb{R}^1$. Such a linear transformation is called a linear functional on $V$. The definite integral of continuous functions is one of the most important examples of a linear functional in mathematics.

For a matrix $A$ regarded as a linear transformation $A : \mathbb{R}^n \to \mathbb{R}^m$, we saw that the transpose $A^T$ of $A$ is another linear transformation $A^T : \mathbb{R}^m \to \mathbb{R}^n$. For a linear transformation $T : V \to W$ from a vector space $V$ to $W$, one can naturally ask what its transpose is and how it is defined. This section will answer these questions.

Example 4.22 Let $V$ be the vector space of all continuous real-valued functions on the interval $[a, b]$. The definite integral of $f \in V$, defined by
$$T(f) = \int_a^b f(x)\,dx,$$
is a linear functional on $V$. $\square$

Example 4.23 The trace function $\mathrm{tr} : M_{n \times n}(\mathbb{R}) \to \mathbb{R}$ is a linear functional on $M_{n \times n}(\mathbb{R})$. $\square$

Note that, as we saw in Section 4.5, the set of all linear functionals on $V$ is the vector space $\mathcal{L}(V; \mathbb{R})$, whose dimension equals the dimension of $V$.

Definition 4.7 For a vector space $V$, the vector space of all linear functionals on $V$ is called the dual space of $V$ and is denoted by $V^*$.

Recall that such a linear transformation $T : V \to \mathbb{R}$ is completely determined by its values on a basis for $V$. Thus, if $\alpha = \{v_1, v_2, \ldots, v_n\}$ is a basis for a vector space $V$, then the functions $v_i^* : V \to \mathbb{R}$ defined by $v_i^*(v_j) = \delta_{ij}$ for each $i, j = 1, \ldots, n$ are clearly linear functionals on $V$, called the $i$-th coordinate functions with respect to the basis $\alpha$.
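Concretely, for a basis of $\mathbb{R}^n$ the coordinate functions can be computed from a matrix inverse. A sketch, with the basis $\{(1, 2),\ (1, 3)\}$ chosen arbitrarily:

```python
import numpy as np

# Put the basis vectors into the columns of P.  Since P^{-1} P = I
# encodes v_i^*(v_j) = delta_ij, row i of P^{-1} represents the
# coordinate function v_i^* as a row vector.
P = np.array([[1.0, 1.0],
              [2.0, 3.0]])        # columns: v1 = (1, 2), v2 = (1, 3)
dual = np.linalg.inv(P)
f, g = dual[0], dual[1]           # v1^*(x, y) = 3x - y,  v2^*(x, y) = -2x + y
```

Evaluating `f` or `g` on a vector is then just a dot product, e.g. `f @ np.array([1.0, 2.0])` gives `v1^*(v1) = 1`.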
In particular, for any $x = \sum_{j=1}^n a_j v_j \in V$, $v_i^*(x) = a_i$, the $i$-th coordinate of $x$ with respect to $\alpha$.

Theorem 4.16 The set $\alpha^* = \{v_1^*, v_2^*, \ldots, v_n^*\}$ forms a basis for the dual space $V^*$, and for any $T \in V^*$ we have
$$T = \sum_{i=1}^n T(v_i)\, v_i^*.$$

Proof: Clearly, the set $\alpha^* = \{v_1^*, v_2^*, \ldots, v_n^*\}$ is linearly independent, and since $\dim V^* = n$, it is a basis. Moreover, for any $x = \sum_{j=1}^n a_j v_j \in V$,
$$\Big(\sum_{i=1}^n T(v_i)\, v_i^*\Big)(x) = \sum_{i=1}^n T(v_i)\, v_i^*(x) = \sum_{i=1}^n a_i\, T(v_i) = T\Big(\sum_{i=1}^n a_i v_i\Big) = T(x).$$
Hence, by Corollary 4.4, we get $T = \sum_{i=1}^n T(v_i)\, v_i^*$. $\square$

This theorem says that, for a fixed basis $\alpha$, the transformation $\iota : V \to V^*$ given by $\iota(v_i) = v_i^*$ is an isomorphism between $V$ and $V^*$. Therefore, we have the following corollary.

Corollary 4.17 Any finite-dimensional vector space is isomorphic to its dual space.

Example 4.24 Let $\alpha = \{(1, 2),\ (1, 3)\}$ be a basis for $\mathbb{R}^2$. To determine the dual basis $\alpha^* = \{f, g\}$ of $\alpha$, we consider the equations
$$1 = f(1, 2) = f(e_1) + 2f(e_2), \qquad 0 = f(1, 3) = f(e_1) + 3f(e_2).$$
Solving these equations, we obtain $f(e_1) = 3$ and $f(e_2) = -1$; i.e., $f(x, y) = 3x - y$. Similarly, it can be shown that $g(x, y) = -2x + y$. $\square$

Example 4.25 Consider $V = \mathbb{R}^n$ with the standard basis $\alpha = \{e_1, \ldots, e_n\}$ and its dual basis $\alpha^* = \{e_1^*, \ldots, e_n^*\}$ for $\mathbb{R}^{n*}$. Then for a vector $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$, we have
$$e_i^*(x) = e_i^*(x_1 e_1 + \cdots + x_n e_n) = x_i.$$
On the other hand, when we write a vector in $\mathbb{R}^n$ as $x = (x_1, \ldots, x_n)$ in coordinate functions (or unknowns) $x_i$, it means that given a point $a = (a_1, \ldots, a_n) \in \mathbb{R}^n$, each $x_i$ gives us the $i$-th coordinate of $a$; that is, $x_i(a) = x_i(a_1, \ldots, a_n) = a_i$. In this way, we have identified $e_i^* = x_i$ for $i = 1, \ldots, n$; i.e., $\mathbb{R}^{n*} = \mathbb{R}^n$. Thus, the actual meaning of the usual coordinate expression $(x_1, \ldots, x_n)$ of a vector $x$ in $\mathbb{R}^n$ is such that $(x_1, \ldots, x_n)(a) = (a_1, \ldots, a_n)$ for any point $a \in \mathbb{R}^n$. $\square$

Now, consider two vector spaces $V$ and $W$ with fixed bases $\alpha$ and $\beta$, respectively. Let $S : V \to W$ be a linear transformation from $V$ to $W$.
Then for any linear functional $g \in W^*$, i.e., $g : W \to \mathbb{R}$, it is easy to see that the composition $(g \circ S)(x) = g(S(x))$ for $x \in V$ defines a linear functional on $V$, i.e., $g \circ S \in V^*$. Thus we have a transformation $S^* : W^* \to V^*$ defined by $S^*(g) = g \circ S$ for $g \in W^*$.

Theorem 4.18 The mapping $S^* : W^* \to V^*$ defined by $S^*(g) = g \circ S$ for $g \in W^*$ is a linear transformation, and $[S^*]^{\alpha^*}_{\beta^*} = \big([S]^\beta_\alpha\big)^T$.

Proof: The mapping $S^*$ is clearly linear by the definition of composition of functions. Let $\alpha = \{v_1, \ldots, v_n\}$ and $\beta = \{w_1, \ldots, w_m\}$ be bases for $V$ and $W$, and let $[S]^\beta_\alpha = [a_{ij}]$, i.e., $S(v_j) = \sum_{k=1}^m a_{kj} w_k$. Then, for $1 \le i \le m$ and $1 \le j \le n$,
$$S^*(w_i^*)(v_j) = (w_i^* \circ S)(v_j) = w_i^*(S(v_j)) = w_i^*\Big(\sum_{k=1}^m a_{kj} w_k\Big) = a_{ij}.$$
Hence, by Theorem 4.16, $S^*(w_i^*) = \sum_{j=1}^n a_{ij} v_j^*$, and we get $[S^*]^{\alpha^*}_{\beta^*} = \big([S]^\beta_\alpha\big)^T$. $\square$

Remark: Theorem 4.18 shows that the matrix representation of $S^*$ is just the transpose of that of $S$. Hence, the linear transformation $S^*$ is called the transpose (or adjoint) of $S$, denoted also by $S^T$.

Example 4.26 With the identification $\mathbb{R}^{m*} = \mathbb{R}^m$ in Example 4.25, the transpose $A^*$ of a matrix $A$ is actually $A^T : \mathbb{R}^m \to \mathbb{R}^n$. $\square$

For two linear transformations $S : U \to V$ and $T : V \to W$, it is quite easy to show (the readers may try) that
$$(T \circ S)^* = S^* \circ T^*.$$
Thus, if $S : V \to W$ is an isomorphism, then so is its transpose $S^* : W^* \to V^*$. In particular, since $\iota : V \to V^*$ is an isomorphism, so is its transpose $\iota^* : V^{**} \to V^*$. Note that even though the isomorphism $\iota : V \to V^*$ depends on a choice of a basis for $V$, there is an isomorphism between $V$ and $V^{**}$ that does not depend on a choice of bases for the two vector spaces. We first define, for each $x \in V$, $\hat{x} : V^* \to \mathbb{R}$ by $\hat{x}(f) = f(x)$ for every $f \in V^*$. It is easy to verify that $\hat{x}$ is a linear functional on $V^*$, so $\hat{x} \in V^{**}$. We will show below that the mapping $\Phi : V \to V^{**}$ defined by $\Phi(x) = \hat{x}$ is the desired isomorphism between $V$ and $V^{**}$.

Lemma 4.19 If $\hat{x}(f) = 0$ for all $f \in V^*$, i.e., $\hat{x} = 0$ in $V^{**}$, then $x = 0$.

Proof: Suppose that $x \neq 0$. Choose a basis $\alpha = \{v_1, v_2, \ldots, v_n\}$ for $V$ with $v_1 = x$, and let $\alpha^* = \{v_1^*, v_2^*, \ldots, v_n^*\}$ be the dual basis of $\alpha$. Then
$$\hat{x}(v_1^*) = v_1^*(x) = v_1^*(v_1) = 1 \neq 0,$$
which contradicts the hypothesis. $\square$

Theorem 4.20 The mapping $\Phi : V \to V^{**}$ defined by $\Phi(x) = \hat{x}$ is an isomorphism from $V$ to $V^{**}$.

Proof: To show the linearity of $\Phi$, let $x, y \in V$ and $k$ a scalar.
Then, for any $f \in V^*$,
$$\widehat{x + ky}(f) = f(x + ky) = f(x) + k f(y) = \hat{x}(f) + k\,\hat{y}(f),$$
so $\Phi(x + ky) = \Phi(x) + k\,\Phi(y)$; that is, $\Phi$ is linear. By Lemma 4.19, $\Phi$ is one-to-one, and since $\dim V = \dim V^* = \dim V^{**}$, $\Phi$ is an isomorphism. $\square$

Problem 4.22 Let $\alpha = \{(1, 0, 3),\ (2, 2, 1),\ (0, 1, 2)\}$ be a basis for $\mathbb{R}^3$. Find the dual basis of $\alpha$.

Problem 4.23 Let $V = \mathbb{R}^3$ and define $f_1, f_2, f_3 \in V^*$ as follows:
$$f_1(x, y, z) = x - 2y, \qquad f_2(x, y, z) = x + y + z, \qquad f_3(x, y, z) = y - 3z.$$
Prove that $\{f_1, f_2, f_3\}$ is a basis for $V^*$, and then find a basis for $V$ for which it is the dual basis.

4.9 Exercises

4.1 Which of the following functions $T$ are linear transformations?
(1) $T(x, y) = (y,\ x)$
(2) $T(x, y) = (xy,\ y)$
(3) $T(x, y) = (x - y,\ 0)$
(4) $T(x, y) = (x + 1,\ 2y + x)$
(5) $T(x, y) = (|x|,\ 0)$

4.2 Let $T : P_2(\mathbb{R}) \to P_2(\mathbb{R})$ be a linear transformation such that $T(1) = 1$, $T(x) = x^2$ and $T(x^2) = x + 2$. Find $T(a + bx + cx^2)$.

4.3 Find $S \circ T$ and/or $T \circ S$ whenever it is defined:
(1) $T(x, y) = (x - y,\ x + 2y)$, $S(x, y) = (y,\ x - y)$;
(2) $T(x, y, z) = (x + y,\ y - z)$, $S(x, y) = (x,\ 2x - y,\ y)$.

4.4 Let $S : \mathcal{C}(\mathbb{R}) \to \mathcal{C}(\mathbb{R})$ be the function on the vector space $\mathcal{C}(\mathbb{R})$ of continuous functions defined by, for $f \in \mathcal{C}(\mathbb{R})$,
$$S(f)(x) = \int_0^x f(t)\,dt.$$
Show that $S$ is a linear transformation.

4.5 Let $T$ be a linear transformation on a vector space $V$ such that $T^2 = Id$. Let $U = \{v \in V : T(v) = v\}$ and $W = \{v \in V : T(v) = -v\}$. Show that $U$ and $W$ are subspaces of $V$ and that $V = U \oplus W$.

4.6 If $T : \mathbb{R}^3 \to \mathbb{R}^3$ is defined by $T(x, y, z) = (2x - z,\ 3x - 2y,\ x - 2y + z)$,
(1) determine the null space $\mathcal{N}(T)$ of $T$;
(2) determine whether $T$ is one-to-one;
(3) find a basis for $\mathcal{N}(T)$.

4.7 Show that each of the following linear transformations $T$ on $\mathbb{R}^3$ is invertible, and find a formula for $T^{-1}$:
(1) $T(x, y, z) = (x,\ 2x - y,\ 3x + y + z)$;
(2) $T(x, y, z) = (3x,\ x - y,\ 2x + y + z)$.

4.8 Let $S, T : V \to V$ be linear transformations on a finite-dimensional vector space $V$.
(1) Show that if $S \circ T$ is one-to-one, then $T$ is an isomorphism.
(2) Show that if $T \circ S$ is onto, then $T$ is an isomorphism.
(3) Show that if $T^k$ is an isomorphism for some positive integer $k$, then $T$ is an isomorphism.

4.9 Let $T$ be a linear transformation from $\mathbb{R}^3$ to $\mathbb{R}^2$, and let $S$ be a linear transformation from $\mathbb{R}^2$ to $\mathbb{R}^3$. Prove that the composition $S \circ T$ is not invertible.

4.10 Let $T$ be a linear transformation on a vector space $V$ satisfying $T - T^2 = Id$.
Show that $T$ is invertible.

4.11 Let $T : P_3(\mathbb{R}) \to P_3(\mathbb{R})$ be the linear transformation defined by $T(f)(x) = f'(x) + f(x)$. Find the matrix $[T]_\alpha$ for the basis $\alpha = \{1,\ x,\ x^2,\ x^3\}$.

4.12 Let $T$ be the linear transformation on $\mathbb{R}^2$ defined by $T(x, y) = (-y,\ x)$.
(1) What is the matrix of $T$ with respect to the ordered basis $\alpha = \{v_1, v_2\}$, where $v_1 = (1, 2)$ and $v_2 = (1, -1)$?
(2) Show that for every real number $c$ the linear transformation $T - c\,Id$ is invertible.

4.13 Find the matrix representation of each of the following linear transformations on $\mathbb{R}^2$ with respect to the standard basis $\{e_1, e_2\}$:
(1) $T(x, y) = (3x,\ 2x - y)$;
(2) $T(x, y) = (3x - 4y,\ x + 5y)$.

4.14 Find the unique linear transformation $T : \mathbb{R}^2 \to \mathbb{R}^2$ whose matrix with respect to a given ordered basis of $\mathbb{R}^2$ is a given $2 \times 2$ matrix.

4.15 Find the matrix representation of each of the following linear transformations $T$ on $P_2(\mathbb{R})$ with respect to the basis $\{1,\ x,\ x^2\}$:
(1) $T(p)(x) = p(x + 1)$;
(2) $T(p)(x) = p'(x)$;
(3) $T(p)(x) = p(0) + p(1)\,x$.

4.16 Consider the following ordered bases of $\mathbb{R}^3$: $\alpha = \{e_1, e_2, e_3\}$, the standard basis, and $\beta = \{u_1 = (1, 1, 1),\ u_2 = (1, 1, 0),\ u_3 = (1, 0, 0)\}$.
(1) Find the transition matrix $P$ from $\alpha$ to $\beta$.
(2) Find the transition matrix $Q$ from $\beta$ to $\alpha$.
(3) Verify that $Q = P^{-1}$.
(4) Show that $[v]_\beta = P[v]_\alpha$ for any vector $v \in \mathbb{R}^3$.
(5) Show that $[T]_\beta = P[T]_\alpha P^{-1}$ for any linear transformation $T$ on $\mathbb{R}^3$.

4.17 Show that there are no matrices $A$ and $B$ in $M_{n \times n}(\mathbb{R})$ such that $AB - BA = I_n$.

4.18 Let $T : \mathbb{R}^3 \to \mathbb{R}^2$ be the linear transformation defined by
$$T(x, y, z) = (3x + 2y - 4z,\ x - 5y + 3z),$$
and let $\alpha = \{(1, 1, 1),\ (1, 1, 0),\ (1, 0, 0)\}$ and $\beta = \{(1, 3),\ (2, 5)\}$ be bases for $\mathbb{R}^3$ and $\mathbb{R}^2$, respectively.
(1) Find the associated matrix $[T]^\beta_\alpha$ for $T$.
(2) Verify $[T(v)]_\beta = [T]^\beta_\alpha [v]_\alpha$ for any $v \in \mathbb{R}^3$.

4.19 Find the transition matrix $[Id]^\beta_\alpha$ from $\alpha$ to $\beta$, where
(1) $\alpha = \{(1, 3),\ (2, 0)\}$ and $\beta = \{(6, 4),\ (3, 2)\}$;
(2) $\alpha = \{(5, 1),\ (1, 2)\}$ and $\beta = \{(1, 0),\ (0, 1)\}$;
(3) $\alpha = \{(1, 1, 1),\ (1, 1, 0),\ (1, 0, 0)\}$ and $\beta = \{(2, 0, 3),\ (-1, 4, 1),\ (3, 2, 5)\}$.

4.20 Determine whether the two given matrices are similar.

4.21 Show that the given matrix $A$ is similar to a diagonal matrix by producing a nonsingular matrix $Q$ such that $Q^{-1}AQ$ is diagonal.

4.22 For a linear transformation $T$ on a vector space $V$, show that $T$ is one-to-one if and only if its transpose $T^*$ is onto.

4.23 Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be a given linear transformation. Compute $[T]_\alpha$ and $[T]_\beta$ for the standard basis $\alpha$ and a given basis $\beta$ of $\mathbb{R}^3$, and verify that they are similar.

4.24 Let $T$ be the linear transformation from $\mathbb{R}^3$
to $\mathbb{R}^2$ defined by $T(x_1, x_2, x_3) = (x_1 + x_2,\ 2x_3 - x_1)$.
(1) For the standard ordered bases $\alpha$ and $\beta$ for $\mathbb{R}^3$ and $\mathbb{R}^2$, respectively, find the associated matrix for $T$ with respect to $\alpha$ and $\beta$.
(2) For the bases $\alpha' = \{x_1, x_2, x_3\}$ and $\beta' = \{y_1, y_2\}$, where $x_1 = (1, 0, -1)$, $x_2 = (1, 1, 1)$, $x_3 = (1, 0, 0)$, $y_1 = (0, 1)$ and $y_2 = (1, 0)$, find the associated matrix $[T]^{\beta'}_{\alpha'}$.

4.25 Let $T$ be the linear transformation from $\mathbb{R}^3$ to $\mathbb{R}^4$ defined by
$$T(x, y, z) = (2x + y + 4z,\ x + y + 2z,\ y + 2z,\ x + y + z).$$
Find the range and the kernel of $T$. What is the dimension of $\mathrm{Im}(T)$? Find $[T]^\beta_\alpha$ and $[T]^{\beta'}_{\alpha'}$, where $\alpha$ and $\beta$ are the standard bases of $\mathbb{R}^3$ and $\mathbb{R}^4$, $\alpha' = \{(1, 1, 0),\ (0, 1, 1),\ (0, 0, 1)\}$, and $\beta' = \{(1, 0, 0, 0),\ (1, 1, 0, 0),\ (1, 1, 1, 0),\ (1, 1, 1, 1)\}$.

4.26 Let $T$ be the linear transformation on $V = \mathbb{R}^3$ for which the associated matrix with respect to the standard ordered basis is a given $3 \times 3$ matrix $A$. Find bases for $\mathrm{Ker}(T)$ and $\mathrm{Im}(T)$.

4.27 Define three linear functionals on the vector space $V = P_2(\mathbb{R})$ by
$$f_1(p) = \int_0^1 p(x)\,dx, \qquad f_2(p) = \int_0^2 p(x)\,dx, \qquad f_3(p) = \int_0^{-1} p(x)\,dx.$$
Show that $\{f_1, f_2, f_3\}$ is a basis for $V^*$ by finding its dual basis for $V$.

4.28 Determine whether or not the following statements are true in general, and justify your answers.
(1) For a linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$, $\mathrm{Ker}(T) = \{0\}$ if $m > n$.
(2) For a linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$, $\mathrm{Ker}(T) \neq \{0\}$ if $m < n$.
(3) A linear transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ is one-to-one if and only if the null space of $[T]^\beta_\alpha$ is $\{0\}$, for any bases $\alpha$ and $\beta$ of $\mathbb{R}^n$.
(4) For a linear transformation $T$ on $\mathbb{R}^n$, the dimension of the image of $T$ is equal to that of the row space of $[T]_\alpha$, for any basis $\alpha$ of $\mathbb{R}^n$.
(5) Any polynomial $p(x)$ is linear if and only if the degree of $p(x)$ is 1.
(6) Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be a function given as $T(x) = (f_1(x),\ f_2(x))$ for any $x \in \mathbb{R}^2$. Then $T$ is linear if and only if its coordinate functions $f_1$, $f_2$ are linear.
(7) If, for a linear transformation $T : \mathbb{R}^n \to \mathbb{R}^n$, $[T]^\beta_\alpha = I_n$ for some bases $\alpha$ and $\beta$ of $\mathbb{R}^n$, then $T$ must be the identity transformation.
(8) If a linear transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ is one-to-one, then any matrix representation of $T$ is nonsingular.
(9) Any $m \times n$ matrix $A$ can be a matrix representation of a linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$.

Chapter 5

Inner Product Spaces

5.1 Inner products

In order to study the geometry of a vector space, we go back to the case of the Euclidean 3-space $\mathbb{R}^3$. Recall that
the dot (or Euclidean inner) product of two vectors $x = (x_1, x_2, x_3)$ and $y = (y_1, y_2, y_3)$ in $\mathbb{R}^3$ is defined by the formula
$$x \cdot y = x_1 y_1 + x_2 y_2 + x_3 y_3 = x^T y,$$
where $x^T y$ is the matrix product of $x^T$ and $y$. Using the dot product, the length (or magnitude) of a vector $x = (x_1, x_2, x_3)$ is defined by
$$\|x\| = (x \cdot x)^{1/2} = \sqrt{x_1^2 + x_2^2 + x_3^2},$$
and the distance between two vectors $x$ and $y$ in $\mathbb{R}^3$ is defined by
$$d(x, y) = \|x - y\|.$$
In this way, the dot product can be considered to be a ruler for measuring the length of a line segment in $\mathbb{R}^3$. Furthermore, it can also be used to measure the angle between two vectors: in fact, the angle $\theta$ between two nonzero vectors $x$ and $y$ in $\mathbb{R}^3$ is measured by a formula involving the dot product, since the dot product satisfies
$$x \cdot y = \|x\|\,\|y\| \cos\theta.$$
In particular, two vectors $x$ and $y$ are orthogonal (i.e., they form a right angle $\theta = \pi/2$) if and only if the Pythagorean theorem holds:
$$\|x\|^2 + \|y\|^2 = \|x + y\|^2.$$
By rewriting this formula in terms of the dot product, we obtain the equivalent condition
$$x \cdot y = 0.$$
In fact, this dot product is one of the most important structures with which $\mathbb{R}^3$ is equipped. Euclidean geometry begins with the vector space $\mathbb{R}^3$ together with the dot product, because the Euclidean distance can be defined by this dot product.

The dot product has a direct extension to the $n$-space $\mathbb{R}^n$ for any positive integer $n$: for vectors $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$ in $\mathbb{R}^n$, the dot product, also called the Euclidean inner product, and the length (or magnitude) of a vector are defined similarly as
$$x \cdot y = x_1 y_1 + \cdots + x_n y_n = x^T y, \qquad \|x\| = (x \cdot x)^{1/2} = \sqrt{x_1^2 + \cdots + x_n^2}.$$

In order to extend this notion of dot product to vector spaces in general, we extract the most essential properties that the dot product in $\mathbb{R}^n$ satisfies and take those properties as axioms for an inner product on a vector space $V$.
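In numpy these quantities are one-liners; a sketch with arbitrarily chosen vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, 1.0, 2.0])

dot = x @ y                                # x . y = x^T y
length_x = np.sqrt(x @ x)                  # ||x||
length_y = np.sqrt(y @ y)                  # ||y||
theta = np.arccos(dot / (length_x * length_y))   # angle between x and y

# Orthogonality <=> Pythagorean theorem:
u, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 3.0, 4.0])
pythagoras = np.isclose((u @ u) + (v @ v), (u + v) @ (u + v))
```

Here `u @ v` is 0, so the Pythagorean identity holds exactly, matching the equivalence stated above.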
First of all, we note that it is a rule that assigns a real number x · y to each pair of vectors x and y in R^n, and the essential rules it satisfies are those in the following definition.

Definition 5.1 An inner product on a real vector space V is a function that associates a real number ⟨x, y⟩ to each pair of vectors x and y in V in such a way that the following rules are satisfied for all vectors x, y and z in V and all scalars k in R:
(1) ⟨x, y⟩ = ⟨y, x⟩ (symmetry),
(2) ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩ (additivity),
(3) ⟨kx, y⟩ = k⟨x, y⟩ (homogeneity),
(4) ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0 (positive definiteness).

A pair (V, ⟨ , ⟩) of a (real) vector space V and an inner product ⟨ , ⟩ is called a (real) inner product space. In particular, the pair (R^n, ·) is called the Euclidean n-space.

Note that by symmetry (1), additivity (2) and homogeneity (3) also hold for the second variable:
(2') ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩,
(3') ⟨x, ky⟩ = k⟨x, y⟩.
Now it is easy to show that ⟨0, y⟩ = 0 = ⟨x, 0⟩.

Example 5.1 For vectors x = (x_1, x_2) and y = (y_1, y_2) in R^2, define
    ⟨x, y⟩ = a x_1 y_1 + c x_1 y_2 + c x_2 y_1 + b x_2 y_2,
where a, b and c are arbitrary real numbers. Then this function clearly satisfies the first three rules of the inner product. Moreover, if a > 0 and ab − c² > 0 hold, then it also satisfies rule (4), the positive definiteness of the inner product. (Hint: ⟨x, x⟩ = a x_1² + 2c x_1 x_2 + b x_2² > 0 for all x ≠ 0 if and only if either x_2 = 0 or the discriminant of ⟨x, x⟩/x_2², as a quadratic in x_1/x_2, is negative.)
Note that the equation can be written as a matrix product:
    ⟨x, y⟩ = [x_1 x_2] [a c; c b] [y_1; y_2] = a x_1 y_1 + c x_1 y_2 + c x_2 y_1 + b x_2 y_2.
In the case of a = b = 1 and c = 0, it is just the dot product in R^2.

Example 5.2 Let V = C[0, 1] be the vector space of all real-valued continuous functions on [0, 1]. For any two functions f(x) and g(x) in V, define
    ⟨f, g⟩ = ∫_0^1 f(x)g(x) dx.
Then it is not hard to show that ⟨ , ⟩ is an inner product on V. For instance, take
    f(x) = { 1 − 2x for 0 ≤ x ≤ 1/2, 0 for 1/2 ≤ x ≤ 1 },
    g(x) = { 0 for 0 ≤ x ≤ 1/2, 2x − 1 for 1/2 ≤ x ≤ 1 }.
Then f ≠ 0 ≠ g, but ⟨f, g⟩ = 0.

By a subspace W of an inner product space V, we mean a subspace of the vector space V together with the inner product that is the restriction of the inner product of V to W.
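The positive-definiteness condition of Example 5.1 (a > 0 and ab − c² > 0) can be probed numerically; a sketch with NumPy, using our own sample values for a, b, c:

```python
import numpy as np

a, b, c = 2.0, 3.0, 1.0                 # a > 0 and ab - c^2 = 5 > 0
A = np.array([[a, c], [c, b]])          # <x, y> = x^T A y as in Example 5.1

def ip(x, y):
    return x @ A @ y

rng = np.random.default_rng(0)
sample = rng.standard_normal((1000, 2))
print(all(ip(x, x) > 0 for x in sample))          # positive definiteness holds
x, y = sample[0], sample[1]
print(abs(ip(x, y) - ip(y, x)) < 1e-12)           # symmetry holds
```

Replacing c by a value with c² ≥ ab (say c = 3) makes ⟨x, x⟩ ≤ 0 for some x, so rule (4) fails.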
Example 5.3 The set W = C¹[0, 1] of all continuously differentiable functions on [0, 1] is a subspace of V = C[0, 1]. The restriction to W of the inner product on V defined in Example 5.2 makes W an inner product subspace of V. However, suppose we define another inner product on W by the following formula: for any two functions f(x) and g(x) in W,
    ⟨f, g⟩ = ∫_0^1 f(x)g(x) dx + ∫_0^1 f'(x)g'(x) dx.
Then ⟨ , ⟩ is also an inner product on W, but it is not defined on V. This means that this inner product is quite different from the restriction of the inner product of V to W, and hence W with this new inner product is not a subspace of the space V as an inner product space.

5.2 The lengths and angles of vectors

The following inequality will enable us to define an angle between two vectors in an inner product space V.

Theorem 5.1 (Cauchy-Schwarz inequality) If x and y are vectors in an inner product space V, then
    ⟨x, y⟩² ≤ ⟨x, x⟩⟨y, y⟩.

Proof: If x = 0, it is clear. Assume x ≠ 0. For any scalar t, we have
    0 ≤ ⟨tx + y, tx + y⟩ = ⟨x, x⟩t² + 2⟨x, y⟩t + ⟨y, y⟩.
This inequality implies that the polynomial ⟨x, x⟩t² + 2⟨x, y⟩t + ⟨y, y⟩ in t has either no real roots or a repeated real root. Therefore, its discriminant must be nonpositive:
    ⟨x, y⟩² − ⟨x, x⟩⟨y, y⟩ ≤ 0,
which implies the inequality. □

Problem 5.1 Prove that equality in the Cauchy-Schwarz inequality holds if and only if the vectors x and y are linearly dependent.

The lengths and angles of vectors in an inner product space are defined similarly to the case of the Euclidean space.

Definition 5.2 Let V be an inner product space. Then the magnitude, or the length, of a vector x, denoted by ||x||, is defined by
    ||x|| = √⟨x, x⟩.
The distance between two vectors x and y, denoted by d(x, y), is defined by
    d(x, y) = ||x − y||.

From the Cauchy-Schwarz inequality, we have
    −1 ≤ ⟨x, y⟩ / (||x|| ||y||) ≤ 1,
so there is a unique number θ ∈ [0, π] such that cos θ = ⟨x, y⟩ / (||x|| ||y||).

Definition 5.3 The real number θ in the interval [0, π] that satisfies
    cos θ = ⟨x, y⟩ / (||x|| ||y||),  i.e.,  ⟨x, y⟩ = ||x|| ||y|| cos θ,
is called the angle between x and y.

Example 5.4 In R²
equipped with an inner product ⟨x, y⟩ = x_1 y_1 + 4 x_2 y_2, the angle θ between x = (1, 2) and y = (1, 0) is computed as
    cos θ = ⟨x, y⟩ / (||x|| ||y||) = 1/√17,  so  θ = cos⁻¹(1/√17). □

Problem 5.2 Prove the following properties of length in an inner product space V. For any vectors x, y ∈ V and any scalar k,
(1) ||x|| ≥ 0,
(2) ||x|| = 0 if and only if x = 0,
(3) ||kx|| = |k| ||x||,
(4) ||x + y|| ≤ ||x|| + ||y|| (triangle inequality).

Problem 5.3 Let V be an inner product space. Show that for any vectors x, y and z in V,
(1) d(x, y) ≥ 0,
(2) d(x, y) = 0 if and only if x = y,
(3) d(x, y) = d(y, x),
(4) d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality).

Therefore, an inner product in the n-space R^n may play the roles of a ruler and a protractor in our physical world.

Definition 5.4 Two vectors x and y in an inner product space are said to be orthogonal (or perpendicular) if ⟨x, y⟩ = 0.

Note that for nonzero vectors x and y, ⟨x, y⟩ = 0 if and only if θ = π/2.

Lemma 5.2 Let V be an inner product space, and let x ∈ V. Then the vector x is orthogonal to every vector y in V (i.e., ⟨x, y⟩ = 0 for all y in V) if and only if x = 0.

Proof: Clearly ⟨0, y⟩ = 0 for all y in V. Suppose that ⟨x, y⟩ = 0 for all y ∈ V; then ⟨x, x⟩ = 0 in particular. The positive definiteness of the inner product gives x = 0. □

Corollary 5.3 Let V be an inner product space, and let α = {v_1, ..., v_n} be a basis for V. Then a vector x in V is orthogonal to every basis vector v_i in α if and only if x = 0.

Proof: If ⟨x, v_i⟩ = 0 for i = 1, ..., n, then ⟨x, y⟩ = Σ_i y_i ⟨x, v_i⟩ = 0 for any y = Σ_i y_i v_i ∈ V. □

Example 5.5 (Pythagorean theorem) Let V be an inner product space, and let x and y be any two vectors in V with the angle θ. Then ⟨x, y⟩ = ||x|| ||y|| cos θ gives the equality
    ||x + y||² = ||x||² + ||y||² + 2 ||x|| ||y|| cos θ.
Hence we get the Pythagorean theorem ||x + y||² = ||x||² + ||y||² for any orthogonal vectors x and y. □

Theorem 5.4 If nonzero vectors x_1, x_2, ..., x_k in an inner product space V are mutually orthogonal (i.e., each vector is orthogonal to every other vector), then they are linearly independent.

Proof: Suppose a_1 x_1 + a_2 x_2 + ··· + a_k x_k = 0. Then for each i,
    0 = ⟨0, x_i⟩ = ⟨a_1 x_1 + ··· + a_k x_k, x_i⟩ = a_1⟨x_1, x_i⟩ + ··· + a_k⟨x_k, x_i⟩ = a_i ||x_i||²,
because x_1, ..., x_k are mutually orthogonal.
Since each x_i is not the zero vector, ||x_i|| ≠ 0; so a_i = 0 for i = 1, ..., k. □

Problem 5.4 Let f(x) and g(x) be continuous real-valued functions on [0, 1]. Prove:
(1) [∫_0^1 f(x)g(x) dx]² ≤ [∫_0^1 f²(x) dx][∫_0^1 g²(x) dx],
(2) [∫_0^1 (f(x) + g(x))² dx]^{1/2} ≤ [∫_0^1 f²(x) dx]^{1/2} + [∫_0^1 g²(x) dx]^{1/2}.

5.3 Matrix representations of inner products

As we saw at the end of Example 5.1, the inner product on an inner product space (V, ⟨ , ⟩) can be expressed in terms of a symmetric matrix. In fact, let α = {v_1, ..., v_n} be a fixed ordered basis for V. Then for any x = Σ_i x_i v_i and y = Σ_j y_j v_j in V,
    ⟨x, y⟩ = Σ_i Σ_j x_i y_j ⟨v_i, v_j⟩
holds. If we set a_ij = ⟨v_i, v_j⟩ for i, j = 1, ..., n, then these numbers constitute a symmetric matrix A = [a_ij], since ⟨v_i, v_j⟩ = ⟨v_j, v_i⟩. Thus, in matrix notation, the inner product may be written as
    ⟨x, y⟩ = Σ_i Σ_j x_i a_ij y_j = [x]_α^T A [y]_α.
The matrix A is called the matrix representation of the inner product with respect to α.

Example 5.6 (1) With respect to the standard basis {e_1, e_2, ..., e_n} of the Euclidean n-space R^n, the matrix representation of the dot product is the identity matrix, since e_i · e_j = δ_ij. Thus, for x = Σ x_i e_i, y = Σ y_i e_i ∈ R^n, the dot product is the matrix product
    x · y = x^T I_n y = x^T y.
(2) Let V = P_2(R), and define an inner product on V as
    ⟨f, g⟩ = ∫_0^1 f(x)g(x) dx.
Then for a basis α = {f_1(x) = 1, f_2(x) = x, f_3(x) = x²} for V, one can easily find A = [a_ij]; for instance, a_12 = ⟨f_1, f_2⟩ = ∫_0^1 x dx = 1/2.

The expression of the dot product as a matrix product is very useful in stating and proving theorems in the Euclidean space. On the other hand, for any symmetric matrix A and for a fixed basis α, the formula
    ⟨x, y⟩ = [x]_α^T A [y]_α
seems to give rise to an inner product on V. In fact, the formula clearly satisfies the first three rules in the definition of the inner product, but not necessarily the fourth rule, positive definiteness. The following theorem gives a necessary condition for a symmetric matrix A to give rise to an inner product. Some necessary and sufficient conditions will be discussed in Chapter 8.

Theorem 5.5 The matrix representation A of an inner product (with respect to any basis) on a vector space V is invertible.

Proof: It is enough to show that the column vectors of A are linearly independent. Let α = {v_1, ..., v_n} be a basis for an inner product space V.
We denote the column vectors of A = [a_ij] = [⟨v_i, v_j⟩] by c_1, ..., c_n. Consider a linear dependence of the column vectors of A: suppose x_1 c_1 + ··· + x_n c_n = 0. Then for each i = 1, ..., n,
    0 = Σ_j a_ij x_j = Σ_j ⟨v_i, v_j⟩ x_j = ⟨v_i, Σ_j x_j v_j⟩ = ⟨v_i, x⟩,
where x = Σ_j x_j v_j. Thus, by Corollary 5.3, we get x = 0, so x_j = 0 for all j, and the columns of A are linearly independent. □

Recall that the conditions a > 0 and ab − c² > 0 in Example 5.1 are sufficient for A to give rise to an inner product on R².

The standard basis of the Euclidean n-space R^n has a special property: the basis vectors are mutually orthogonal and are of length 1. In this sense, it is called the rectangular coordinate system for R^n. In an inner product space, a vector with length 1 is called a unit vector. If x is a nonzero vector in an inner product space V, the vector
    u = x / ||x||
is a unit vector. The process of obtaining a unit vector from a nonzero vector by multiplying by the inverse of its length is called a normalization. Thus, if there is a set of vectors (or a basis) in an inner product space consisting of mutually orthogonal vectors, then the vectors can be converted to unit vectors by normalizing them, without losing their mutual orthogonality.

Problem 5.5 Normalize the vector (1/2, 1/3, 1/6) in the Euclidean space R³.

Definition 5.5 A set of vectors x_1, x_2, ..., x_k in an inner product space V is said to be orthonormal if
    ⟨x_i, x_j⟩ = δ_ij = { 1 if i = j, 0 if i ≠ j }.
A set {x_1, x_2, ..., x_n} of vectors is called an orthonormal basis for V if it is a basis and orthonormal.

It will be shown later that any inner product space has an orthonormal basis, just like the standard basis for the Euclidean n-space.

Problem 5.6 Determine whether each of the given sets of vectors in R² is orthogonal, orthonormal, or neither with respect to the Euclidean inner product.
Theorem 5.6 If {v_1, v_2, ..., v_n} is an orthonormal basis for an inner product space V and x is any vector in V, then
    x = ⟨x, v_1⟩v_1 + ⟨x, v_2⟩v_2 + ··· + ⟨x, v_n⟩v_n.

Proof: For any vector x ∈ V, we can write x = x_1 v_1 + x_2 v_2 + ··· + x_n v_n as a linear combination of basis vectors. However, for each i = 1, ..., n,
    ⟨x, v_i⟩ = ⟨x_1 v_1 + ··· + x_n v_n, v_i⟩ = x_1⟨v_1, v_i⟩ + ··· + x_n⟨v_n, v_i⟩ = x_i,
because {v_1, v_2, ..., v_n} is orthonormal. □

In an inner product space, the coordinate expression of a vector depends on the choice of an ordered basis, and the inner product is just a matrix product of the coordinate vectors with respect to an ordered basis involving some symmetric matrix between them, as we have seen already. Actually, we will show in Theorem 5.12 in the following section that every inner product space V has an orthonormal basis, say α = {v_1, v_2, ..., v_n}. Then the matrix representation A = [a_ij] of the inner product with respect to the orthonormal basis α is the identity matrix, since a_ij = ⟨v_i, v_j⟩ = δ_ij. Thus, for any vectors x = Σ x_i v_i and y = Σ y_i v_i in V,
    ⟨x, y⟩ = [x]_α^T [y]_α = Σ_i x_i y_i.
This expression looks like the dot product in the Euclidean space R^n. Thus any inner product on V can be written just like the dot product in R^n, if V is equipped with an orthonormal basis.

5.4 Orthogonal projections

Let U be a subspace of a vector space V. Then by Corollary 3.18 there is another subspace W of V such that V = U ⊕ W, so that any x ∈ V has a unique expression as x = u + w for u ∈ U and w ∈ W. As an easy exercise, one can show that the function T : V → V defined by T(x) = T(u + w) = u is a linear transformation, whose image Im(T) = T(V) is the subspace U and whose kernel Ker(T) is the subspace W.

Definition 5.6 Let U and W be subspaces of a vector space V. A linear transformation T : V → V is called the projection of V onto the subspace U along W if V = U ⊕ W and T(x) = u for x = u + w ∈ U ⊕ W.

Note that for a given subspace U of V, there exist many projections T depending on the choice of a complementary subspace W of U.
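This dependence on the complementary subspace can be seen numerically. The sketch below (our own choice of subspaces in R², not the book's) projects the same vector onto the x-axis along two different complements by splitting it over the corresponding bases.

```python
import numpy as np

# project x onto U = span{(1,0)} along W = span{(0,1)} and along V = span{(1,1)}
u = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])
v = np.array([1.0, 1.0])
x = np.array([2.0, 1.0])

coords_uw = np.linalg.solve(np.column_stack([u, w]), x)   # x = a*u + b*w
coords_uv = np.linalg.solve(np.column_stack([u, v]), x)   # x = a*u + b*v
proj_along_w = coords_uw[0] * u     # keep the U-component of each splitting
proj_along_v = coords_uv[0] * u

print(proj_along_w, proj_along_v)   # different answers for the same U
```

The two projections of x = (2, 1) onto the x-axis are (2, 0) and (1, 0): the projection onto U is not determined by U alone.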
However, if we fix a complementary subspace W of U, then the projection T onto U along W is uniquely determined, and by definition T(u) = u for any u ∈ U and for any choice of W. In other words, T ∘ T = T for any projection T of V.

Example 5.7 Let U, W and V be the 1-dimensional subspaces of the Euclidean 2-space R² spanned by the vectors u = e_1, w = e_2, and v = (1, 1), respectively. Since the pairs {u, w} and {u, v} are linearly independent, the space R² can be expressed as a direct sum in two ways: R² = U ⊕ W = U ⊕ V. Thus a vector x = (2, 1) ∈ R² may be written in two ways:
    x = (2, 1) = 2(1, 0) + (0, 1) = (1, 0) + (1, 1).
Let T_W and T_V denote the projections of R² onto W and V along U, respectively. Then
    T_W(x) = (0, 1) ∈ W,  and  T_V(x) = (1, 1) ∈ V.
Similarly, the projections of R² onto U along W and along V take x to (2, 0) and (1, 0), respectively, which shows that a projection of R² onto the subspace U (= the x-axis) depends on a choice of complementary subspace of U. □

The following shows an algebraic characterization of a projection.

Theorem 5.7 A linear transformation T : V → V is a projection onto a subspace U if and only if T² = T (= T ∘ T, by definition).

Proof: The necessity is clear, because T ∘ T = T for any projection T. For the sufficiency, let T² = T. We want to show V = Im(T) ⊕ Ker(T) and T(u + w) = u for u + w ∈ Im(T) ⊕ Ker(T). For the first one, we need to prove Im(T) ∩ Ker(T) = {0} and V = Im(T) + Ker(T). Indeed, if u ∈ Im(T) ∩ Ker(T), then there is x ∈ V such that T(x) = u and T(u) = 0. But u = T(x) = T²(x) = T(T(x)) = T(u) = 0, which proves Im(T) ∩ Ker(T) = {0}. Note that this also shows T(u) = u for u ∈ Im(T). Then,
    dim V = dim(Im(T)) + dim(Ker(T))
by the dimension theorem, which implies V = Im(T) ⊕ Ker(T). Now it is clear that T(u + w) = T(u) = u for any u + w ∈ Im(T) ⊕ Ker(T). □

Let T : V → V be a projection of V, so that V = Im(T) ⊕ Ker(T). It is not difficult to show that Im(Id_V − T) = Ker(T) and Ker(Id_V − T) = Im(T).

Corollary 5.8 A linear transformation T : V → V is a projection if and only if Id_V − T is a projection. Moreover, if T is the projection of V onto a subspace U along W, then Id_V − T is the projection of V onto W along U.
Proof: It is enough to show that (Id_V − T) ∘ (Id_V − T) = Id_V − T. But
    (Id_V − T) ∘ (Id_V − T) = Id_V − T − T + T² = Id_V − T. □

Problem 5.7 Let V = U ⊕ W. Let T_U denote the projection of V onto U along W, and let T_W denote the projection of V onto W along U. Prove the following:
(1) For any x ∈ V, x = T_U(x) + T_W(x).
(2) T_U ∘ (Id_V − T_U) = 0.
(3) T_U ∘ T_W = T_W ∘ T_U = 0.
(4) For any projection T : V → V, Im(Id_V − T) = Ker(T) and Ker(Id_V − T) = Im(T).

Now, let V be an inner product space and let U be a subspace of V. Recall that there exist many kinds of projections of V onto U depending on the choice of a complementary subspace W of U. However, in an inner product space V, there is a particular choice of W, called the orthogonal complement of U, along which the projection onto U is called the orthogonal projection. Almost all projections used in linear algebra are orthogonal projections. In an inner product space V, the orthogonality of two vectors can be extended to subspaces of V.

Definition 5.7 Let U and W be subspaces of an inner product space V.
(1) Two subspaces U and W are said to be orthogonal, written U ⊥ W, if ⟨u, w⟩ = 0 for each u ∈ U and w ∈ W.
(2) The set of all vectors in V that are orthogonal to every vector in U is called the orthogonal complement of U, denoted by U⊥, i.e.,
    U⊥ = {v ∈ V : ⟨v, u⟩ = 0 for all u ∈ U}.

One can easily show that U⊥ is a subspace of V, and v ∈ U⊥ if and only if ⟨v, u⟩ = 0 for every u ∈ β, where β is a basis for U. Therefore, clearly W ⊥ U if and only if W ⊆ U⊥.

Problem 5.8 Show: (1) If U ⊥ W, then U ∩ W = {0}. (2) U ⊆ W if and only if W⊥ ⊆ U⊥.

Theorem 5.9 Let U be a subspace of an inner product space V. Then
(1) dim U + dim U⊥ = dim V,
(2) (U⊥)⊥ = U,
(3) V = U ⊕ U⊥; that is, for each x ∈ V, there are unique vectors x_U ∈ U and x_{U⊥} ∈ U⊥ such that x = x_U + x_{U⊥}. This is called the orthogonal decomposition of V (or of x) by U.

Proof: (1) Suppose that dim U = k.
Choose a basis {v_1, ..., v_k} for U, and then extend it to a basis {v_1, ..., v_k, v_{k+1}, ..., v_n} for V. Then x = Σ_j x_j v_j ∈ U⊥ if and only if
    0 = ⟨v_i, x⟩ = Σ_j a_ij x_j  for 1 ≤ i ≤ k,
where a_ij = ⟨v_i, v_j⟩. The latter equations form a homogeneous system of k linear equations in n unknowns, whose solution set is precisely the null space of the k × n matrix B = [a_ij], which is a submatrix of the matrix representation of the inner product. Thus, by Theorem 5.5, the rows of B are linearly independent, so B is of rank k. Therefore, the null space has dimension n − k, i.e., dim U⊥ = n − k = n − dim U.

(2) By definition, every vector in U is orthogonal to U⊥, so U ⊆ (U⊥)⊥. On the other hand, by (1), dim(U⊥)⊥ = n − dim U⊥ = dim U. This proves that (U⊥)⊥ = U.

(3) For a basis {v_1, ..., v_k} for U, take a basis {v_{k+1}, ..., v_n} for U⊥. Since U ∩ U⊥ = {0}, the set {v_1, ..., v_k, v_{k+1}, ..., v_n} is linearly independent, so it is a basis for V. Therefore, every vector x ∈ V has a unique expression
    x = Σ_{i=1}^k a_i v_i + Σ_{j=k+1}^n b_j v_j.
Now take x_U = Σ_{i=1}^k a_i v_i ∈ U and x_{U⊥} = Σ_{j=k+1}^n b_j v_j ∈ U⊥. To show uniqueness, let x = u + w be another expression with u ∈ U and w ∈ U⊥. Then x_U − u = w − x_{U⊥} ∈ U ∩ U⊥ = {0}, so x_U = u and x_{U⊥} = w. □

Definition 5.8 Let V be an inner product space, and let U be a subspace of V, so that V = U ⊕ U⊥. Then the projection of V onto U along U⊥ is called the orthogonal projection of V onto U, denoted by Proj_U. For x ∈ V, the component vector Proj_U(x) ∈ U is called the orthogonal projection of x into U.

Example 5.8 As in Example 5.7, let U, V and W be the subspaces of the Euclidean 2-space R² generated by the vectors u = e_1, v = (1, 1), and w = e_2, respectively. Then clearly W = U⊥ and V ≠ U⊥. Hence, for the projections T_V and T_W given in Example 5.7, the projection T_W is the orthogonal projection, but the projection T_V is not, so that T_W = Proj_W and T_V ≠ Proj_V. □

Theorem 5.10 Let U be a subspace of an inner product space V, and let x ∈ V. Then the orthogonal projection Proj_U(x) of x satisfies x − Proj_U(x) ∈ U⊥, and for all y ∈ U,
    ||x − y|| ≥ ||x − Proj_U(x)||.

Proof: For all y ∈ U,
    ||x − y||² = ||x − Proj_U(x) + Proj_U(x) − y||²
              = ||x − Proj_U(x)||² + ||Proj_U(x) − y||²
              ≥ ||x − Proj_U(x)||²,
where the second equality comes from the Pythagorean theorem for x − Proj_U(x) ⊥ Proj_U(x) − y.
□

The theorem means that the orthogonal projection Proj_U(x) of x is the unique vector in U that is closest to x, in the sense that it minimizes the distance to x from the vectors in U. Geometrically, x decomposes as Proj_U(x) + Proj_{U⊥}(x), with x − Proj_U(x) = Proj_{U⊥}(x).

Problem 5.9 Let U and W be subspaces of an inner product space V. Show that
(1) (U + W)⊥ = U⊥ ∩ W⊥,
(2) (U ∩ W)⊥ = U⊥ + W⊥.

Problem 5.10 Let U ⊆ R⁴, with the Euclidean inner product, be the subspace spanned by (1, 1, 0, 0) and (1, 0, 1, 0), and let W ⊆ R⁴ be the subspace spanned by (0, 1, 0, 1) and (0, 0, 1, 1). Find a basis for, and the dimension of, each of the following subspaces: (1) U + W, (2) U⊥, (3) U⊥ + W⊥, (4) U ∩ W.

Lemma 5.11 Let U be a subspace of an inner product space V, and let {u_1, u_2, ..., u_m} be an orthonormal basis for U. Then, for any x ∈ V, the orthogonal projection Proj_U(x) of x into U is
    Proj_U(x) = ⟨x, u_1⟩u_1 + ⟨x, u_2⟩u_2 + ··· + ⟨x, u_m⟩u_m.

Proof: Let z = ⟨x, u_1⟩u_1 + ··· + ⟨x, u_m⟩u_m ∈ U. It is enough to show that y = x − z is orthogonal to U, because if y = x − z ∈ U⊥, then x = z + y ∈ U ⊕ U⊥, so the uniqueness of the orthogonal decomposition gives z = Proj_U(x). However, for each j = 1, ..., m,
    ⟨y, u_j⟩ = ⟨x, u_j⟩ − ⟨z, u_j⟩ = ⟨x, u_j⟩ − ⟨x, u_j⟩ = 0,
since {u_1, u_2, ..., u_m} is an orthonormal basis for U. That is, the vector y = x − Σ_j ⟨x, u_j⟩u_j is orthogonal to U. □

In particular, if U = V in Lemma 5.11, then Proj_V(x) = x, and we recover Theorem 5.6.

A unit vector u in an inner product space V determines a 1-dimensional subspace U = {ru : r ∈ R}. Then, for a vector x in V, the orthogonal projection of x into U is simply
    Proj_U(x) = ⟨x, u⟩u,
where ⟨x, u⟩ = ||x|| cos θ. On the other hand, it is quite clear that y = x − ⟨x, u⟩u is a vector orthogonal to u. Thus
    x = ⟨x, u⟩u + y ∈ U ⊕ U⊥,
so that ||x||² = |⟨x, u⟩|² + ||y||², which is just the Pythagorean theorem. In particular, if V = R^n is the Euclidean space with the dot product, then
    Proj_U(x) = (u · x)u = (u^T x)u = u(u^T x) = (u u^T)x
(here the third equality comes from the matrix products). This equation shows that the matrix u u^T is the matrix representation of the orthogonal projection Proj_U with respect to the standard basis for R^n. Further discussions about matrix representations of the orthogonal projections will be given later.

Example 5.9 Let P(x_0, y_0) be a point and ax + by + c = 0 a line in the plane R². One might know already from calculus that the nonzero vector n = (a, b) is perpendicular to the line ax + by + c = 0. In fact, for any two points Q(x_1, y_1) and R(x_2, y_2) on the line, the dot product
    QR · n = a(x_2 − x_1) + b(y_2 − y_1) = 0;
that is, QR ⊥ n. For any point P(x_0, y_0) in the plane R², the distance d between the point P(x_0, y_0) and the line ax + by + c = 0 is simply the length of the orthogonal projection of QP into n, for any point Q(x_1, y_1) on the line. Thus,
    d = ||Proj_n(QP)|| = |QP · n| / ||n|| = |a(x_0 − x_1) + b(y_0 − y_1)| / √(a² + b²) = |a x_0 + b y_0 + c| / √(a² + b²).
Note that the last equality is due to the fact that the point Q is on the line (i.e., a x_1 + b y_1 + c = 0). □

Problem 5.11 Let V = P_3(R), the vector space of polynomials of degree ≤ 3, equipped with the inner product ⟨f, g⟩ = ∫_0^1 f(x)g(x) dx for any f and g in V. Let W be the subspace of V spanned by {1, x}, and define f(x) = x². Find the orthogonal projection Proj_W(f) of f into W.

5.5 The Gram-Schmidt orthogonalization

The construction of the orthogonal projection onto a subspace described in Section 5.4 can be used to find an orthonormal basis from any given basis, as the following example shows.

Example 5.10 Let A be a 4 × 3 matrix with column vectors c_1, c_2, c_3. Find an orthonormal basis for the column space C(A) of A.

Solution: Let c_1, c_2 and c_3 be the column vectors of A in order from left to right. It is easily verified that they are linearly independent, so they form a basis for the 3-dimensional subspace C(A) of the Euclidean space R⁴, i.e., the column space; but this basis is not orthonormal. To make an orthonormal basis, first let
    v_1 = c_1 / ||c_1||,
which is a unit vector. Clearly v_1, c_2 and c_3 span the column space C(A). Let W_1 denote the subspace spanned by v_1. Then
    Proj_{W_1}(c_2) = ⟨c_2, v_1⟩v_1 = 2v_1,
and c_2 − Proj_{W_1}(c_2) = c_2 − 2v_1 = (0, 1, −1, 0) is a nonzero vector orthogonal to v_1.
To convert it to a unit vector, we set
    v_2 = (c_2 − 2v_1) / ||c_2 − 2v_1|| = (1/√2)(0, 1, −1, 0).
Since c_2 = 2v_1 + √2 v_2, we still have a spanning set {v_1, v_2, c_3} of the column space C(A), and thus a basis. Let W_2 denote the subspace spanned by v_1 and v_2. Then
    Proj_{W_2}(c_3) = ⟨c_3, v_1⟩v_1 + ⟨c_3, v_2⟩v_2 = 4v_1 − √2 v_2,
so c_3 − Proj_{W_2}(c_3) = c_3 − 4v_1 + √2 v_2 = (0, 1, 1, 2) is a nonzero vector orthogonal to both v_1 and v_2. In fact,
    ⟨c_3 − 4v_1 + √2 v_2, v_1⟩ = ⟨c_3, v_1⟩ − 4⟨v_1, v_1⟩ + √2⟨v_2, v_1⟩ = 0,
    ⟨c_3 − 4v_1 + √2 v_2, v_2⟩ = ⟨c_3, v_2⟩ − 4⟨v_1, v_2⟩ + √2⟨v_2, v_2⟩ = 0,
since v_1 and v_2 are orthonormal. Thus we can normalize the vector c_3 − Proj_{W_2}(c_3) and set
    v_3 = (c_3 − 4v_1 + √2 v_2) / ||c_3 − 4v_1 + √2 v_2|| = (1/√6)(0, 1, 1, 2).
The set {v_1, v_2, v_3} obtained in this way clearly spans C(A) and forms an orthonormal basis for it. □

In fact, the orthonormalization process in Example 5.10 indicates how to prove the following general version, called the Gram-Schmidt orthogonalization.

Theorem 5.12 Every inner product space has an orthonormal basis.

Proof: (Gram-Schmidt orthogonalization process) Let {x_1, x_2, ..., x_n} be a basis for an n-dimensional inner product space V. Let v_1 = x_1 / ||x_1||, and
    v_2 = (x_2 − ⟨x_2, v_1⟩v_1) / ||x_2 − ⟨x_2, v_1⟩v_1||.
Of course, x_2 − ⟨x_2, v_1⟩v_1 ≠ 0, because {x_1, x_2} is linearly independent. Generally, we define by induction on k = 1, 2, ..., n
    v_k = (x_k − ⟨x_k, v_1⟩v_1 − ··· − ⟨x_k, v_{k−1}⟩v_{k−1}) / ||x_k − ⟨x_k, v_1⟩v_1 − ··· − ⟨x_k, v_{k−1}⟩v_{k−1}||.
Thus, v_k is the normalized vector of x_k − Proj_{W_{k−1}}(x_k), where W_{k−1} is the subspace of V spanned by {x_1, ..., x_{k−1}} (or equivalently by {v_1, ..., v_{k−1}}). Then, the vectors v_1, v_2, ..., v_n are orthonormal in the n-dimensional vector space V. Since every orthonormal set is linearly independent, it is an orthonormal basis for V. □

Here is a simpler proof of Theorem 5.9. Suppose that U is a subspace of an inner product space V. Then clearly we have U ⊆ (U⊥)⊥ by definition. To show (U⊥)⊥ = U, take an orthonormal basis, say α = {v_1, ..., v_k}, for U by the Gram-Schmidt orthonormalization, and then extend it to an orthonormal basis for V, say β = {v_1, ..., v_k, v_{k+1}, ..., v_n}, which is always possible. Then clearly γ = {v_{k+1}, ..., v_n} forms an (orthonormal) basis for U⊥, which means that (U⊥)⊥ = U and V = U ⊕ U⊥.

Problem 5.12 Find an orthonormal basis for the subspace of the Euclidean space R³ given by x + 2y − 3z = 0, which is the orthogonal complement of the vector (1, 2, −3) in R³.

We can now identify an n-dimensional inner product space V with the Euclidean n-space R^n via the Gram-Schmidt orthogonalization. In fact, if (V, ⟨ , ⟩) is an inner product space, then by the Gram-Schmidt orthogonalization we can choose an orthonormal basis α = {v_1, v_2, ..., v_n} for V. With this orthonormal basis α, the natural isomorphism Φ : V → R^n given by Φ(v_i) = [v_i]_α = e_i, i = 1, ..., n (see the last remark of Section 4.4), preserves the inner product of vectors: every vector x ∈ V has a unique expression x = Σ_i x_i v_i with x_i = ⟨x, v_i⟩. Thus the coordinate vector of x with respect to α is the column matrix
    [x]_α = (x_1, ..., x_n)^T,
which is a vector in R^n. Moreover, if y = Σ_i y_i v_i is another vector in V, then
    ⟨x, y⟩ = ⟨Σ_i x_i v_i, Σ_j y_j v_j⟩ = Σ_i x_i y_i.
The right-hand side of the equation is just the dot product of the coordinate vectors in the Euclidean n-space R^n. That is,
    ⟨x, y⟩ = Φ(x) · Φ(y) = [x]_α · [y]_α
for any x, y ∈ V. Hence, the natural isomorphism Φ preserves the inner product, and identifies the inner product on V with the dot product on R^n. In this sense, we may reduce the study of an inner product space to the study of the Euclidean n-space R^n with the dot product. A special kind of linear transformation that preserves the inner product, such as the natural isomorphism from V to R^n, plays an important role in linear algebra, and we will study this kind of transformation in Section 5.6.

Problem 5.14 Use the Gram-Schmidt orthogonalization to transform the given basis of the Euclidean space into an orthonormal basis.

Problem 5.15 Find the point on the plane x − y − z = 0 that is closest to a given point p.

5.6 Orthogonal matrices and transformations

In Chapter 4, we saw that a linear transformation can be associated with a matrix, and vice versa. In this section, we are mainly interested in those linear transformations (or matrices) that preserve the lengths of vectors in an inner product space.
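The Gram-Schmidt process of Theorem 5.12 translates directly into code, and its output already illustrates the matrices studied in this section: stacking the orthonormal vectors as columns gives a matrix Q with Q^T Q = I. A sketch with NumPy (function name and input vectors are our own):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthonormal list (Theorem 5.12)."""
    basis = []
    for x in vectors:
        # subtract the projection onto the span of the vectors found so far
        w = x - sum((x @ v) * v for v in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

xs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
vs = gram_schmidt(xs)
Q = np.column_stack(vs)
print(Q.T @ Q)          # the identity matrix: the columns are orthonormal
```

Each step normalizes x_k − Proj_{W_{k−1}}(x_k), exactly as in the proof; the final check Q^T Q = I is the orthonormality of {v_1, v_2, v_3}.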
Let A = [c_1 ··· c_n] be an n × n square matrix, where c_1, ..., c_n ∈ R^n are the column vectors of A. Then a simple computation shows that
    A^T A = [c_i^T c_j] = [c_i · c_j].
Hence, if the column vectors are orthonormal, c_i · c_j = δ_ij, then A^T A = I_n; that is, A^T is a left inverse of A, and vice versa. Since A is a square matrix, this left inverse must also be the right inverse of A, A A^T = I_n; equivalently, the row vectors of A are also orthonormal. This argument can now be summarized as follows.

Lemma 5.13 Let A be an n × n matrix. The following are equivalent:
(1) The column vectors of A are orthonormal.
(2) A^T A = I_n.
(3) A^{−1} = A^T.
(4) A A^T = I_n.
(5) The row vectors of A are orthonormal.

Definition 5.9 A square matrix A is called an orthogonal matrix if A satisfies one (and hence all) of the statements in Lemma 5.13.

Therefore, A is orthogonal if and only if A^T is orthogonal.

Example 5.11 It is easy to see that the matrices
    A = [cos θ  −sin θ; sin θ  cos θ],   B = [cos θ  sin θ; sin θ  −cos θ]
are orthogonal, and satisfy
    A^{−1} = A^T = [cos θ  sin θ; −sin θ  cos θ],   B^{−1} = B^T = B.
Note that the linear transformation T : R² → R² defined by T(x) = Ax is the rotation through the angle θ, while S : R² → R² defined by S(x) = Bx is the reflection about the line passing through the origin that forms an angle θ/2 with the positive x-axis. □

Example 5.12 Show that every 2 × 2 orthogonal matrix must be of one of the following forms:
    [cos θ  −sin θ; sin θ  cos θ]  or  [cos θ  sin θ; sin θ  −cos θ].

Solution: Suppose that A = [a b; c d] is an orthogonal matrix, so that A^T A = I_2. Comparing entries, we get a² + c² = 1 and b² + d² = 1 from the diagonal, and ab + cd = 0 from the off-diagonal. Since a² + c² = 1, we may write a = cos θ and c = sin θ for some θ. Substituting these into ab + cd = 0 and b² + d² = 1 then forces (b, d) = ±(−sin θ, cos θ), which gives the two forms above. □

Problem 5.16 Find the inverse of each of the given orthogonal matrices. What are the associated linear transformations on R³?

Intuitively, any rotation or reflection on the Euclidean space preserves both the lengths of vectors and the angle between two vectors.
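A quick numerical check of Example 5.11, assuming NumPy (the angle θ is our own choice):

```python
import numpy as np

th = 0.7
A = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])    # rotation through th
B = np.array([[np.cos(th),  np.sin(th)],
              [np.sin(th), -np.cos(th)]])    # reflection

print(np.allclose(A.T @ A, np.eye(2)))       # A^T A = I: A is orthogonal
print(np.allclose(np.linalg.inv(A), A.T))    # A^{-1} = A^T
print(np.allclose(np.linalg.inv(B), B.T) and np.allclose(B, B.T))  # B^{-1} = B^T = B
```

The last line also confirms B² = I, as expected of a reflection.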
In general, any orthogonal matrix A preserves the lengths of vectors:
    ||Ax||² = Ax · Ax = (Ax)^T (Ax) = x^T A^T A x = x^T x = ||x||².

Definition 5.10 Let V and W be two inner product spaces. A linear transformation T : V → W is called an isometry, or an orthogonal transformation, if it preserves the lengths of vectors; that is, for every vector x ∈ V,
    ||T(x)|| = ||x||.

Clearly, any orthogonal matrix is an isometry as a linear transformation. If T : V → W is an isometry, then T is one-to-one, since the kernel of T is trivial: T(x) = 0 implies ||x|| = ||T(x)|| = 0. Thus, if dim V = dim W, then an isometry is also an isomorphism. The following is an interesting characterization of an isometry.

Theorem 5.14 Let T : V → W be a linear transformation from an inner product space V to W. Then T is an isometry if and only if T preserves inner products; that is,
    ⟨T(x), T(y)⟩ = ⟨x, y⟩
for any vectors x, y in V.

Proof: Let T be an isometry. Then ||T(x)||² = ||x||² for any x ∈ V. Hence,
    ⟨T(x + y), T(x + y)⟩ = ||T(x + y)||² = ||x + y||² = ⟨x + y, x + y⟩
for any x, y ∈ V. On the other hand,
    ⟨T(x + y), T(x + y)⟩ = ⟨T(x), T(x)⟩ + 2⟨T(x), T(y)⟩ + ⟨T(y), T(y)⟩,
    ⟨x + y, x + y⟩ = ||x||² + 2⟨x, y⟩ + ||y||²,
from which we get ⟨T(x), T(y)⟩ = ⟨x, y⟩. The converse is quite clear by choosing y = x. □

Corollary 5.15 Let A be an n × n matrix. Then A is an orthogonal matrix if and only if A : R^n → R^n, as a linear transformation, preserves the dot product; that is, for any vectors x, y ∈ R^n,
    Ax · Ay = x · y.

Proof: One way is clear. Suppose that A preserves the dot product. Then for any vectors x, y ∈ R^n,
    x^T A^T A y = Ax · Ay = x · y = x^T y.
Take x = e_i and y = e_j. Then this equation says [A^T A]_{ij} = δ_{ij}, i.e., A^T A = I_n. □

Since d(x, y) = ||x − y|| for any x and y in V, one can easily derive the following corollary.

Corollary 5.16 A linear transformation T : V → W is an isometry if and only if
    d(T(x), T(y)) = d(x, y)
for any x and y in V.
Recall that if θ is the angle between two nonzero vectors x and y in an inner product space V, then for any isometry T : V → V,
    cos θ = ⟨x, y⟩ / (||x|| ||y||) = ⟨T(x), T(y)⟩ / (||T(x)|| ||T(y)||).
Hence, we have

Corollary 5.17 An isometry preserves the angle.

The following problem shows that the converse of Corollary 5.17 is not true in general.

Problem 5.17 Find an example of a linear transformation on the Euclidean space R² that preserves the angles but not the lengths of vectors (i.e., it is not an isometry). Such a linear transformation is called a dilation.

We have seen that any orthogonal matrix A is an isometry as the linear transformation T(x) = Ax. The following theorem says that the converse is also true; that is, the matrix representation of an isometry with respect to orthonormal bases is an orthogonal matrix.

Theorem 5.18 Let T : V → W be an isometry from an inner product space V to W of the same dimension. Let α = {v_1, ..., v_n} and β = {w_1, ..., w_n} be orthonormal bases for V and W, respectively. Then the matrix [T]_α^β for T with respect to the bases α and β is an orthogonal matrix.

Proof: Note that the k-th column vector of the matrix [T]_α^β is just [T(v_k)]_β. Since T preserves inner products and β is orthonormal, we get
    [T(v_k)]_β · [T(v_l)]_β = ⟨T(v_k), T(v_l)⟩ = ⟨v_k, v_l⟩ = δ_{kl},
which shows that the column vectors of [T]_α^β are orthonormal. □

Therefore, a linear transformation T : V → V is an isometry if and only if [T]_α is an orthogonal matrix for an orthonormal basis α. Moreover, a square matrix A preserves the dot product if and only if it preserves the lengths of vectors.

Problem 5.18 Find numbers r > 0 and s > 0 such that each of the given matrices is orthogonal.

Problem 5.19 (Bessel's inequality) Let V be an inner product space, and let {v_1, ..., v_k} be a set of orthonormal vectors in V (not necessarily a basis). Prove that for any x ∈ V,
    ||x||² ≥ Σ_{i=1}^k ⟨x, v_i⟩².

Problem 5.20 Determine whether each of the given linear transformations on the Euclidean space is orthogonal.
5.7 Relations of fundamental subspaces

We now go back to the study of the system Ax = b of linear equations. One of the more important applications of the orthogonal projection of vectors onto a subspace is to study the relations, or structures, of the four fundamental subspaces N(A), R(A), C(A), and N(A^T) of an m × n matrix A.

Lemma 5.19 For any m × n matrix A, the null space N(A) and the row space R(A) are orthogonal in R^n. Similarly, the null space N(A^T) of A^T and the column space C(A) = R(A^T) are orthogonal in R^m.

Proof: Note that w ∈ N(A) if and only if Aw = 0, i.e., for every row vector r in A, r · w = 0. For the second statement, do the same with A^T. □

This lemma shows that N(A) ⊥ R(A) and C(A) ⊥ N(A^T), and hence N(A) ⊆ R(A)⊥ (or R(A) ⊆ N(A)⊥) and N(A^T) ⊆ C(A)⊥ (or C(A) ⊆ N(A^T)⊥), but the equalities between them do not follow immediately. The next theorem shows that we do have equalities in both inclusions; that is, the row space R(A) and the null space N(A) are orthogonal complements of each other, and the column space C(A) and the null space N(A^T) of A^T are orthogonal complements of each other. Note that the above lemma also shows that N(A) ∩ R(A) = {0} and C(A) ∩ N(A^T) = {0}.

Theorem 5.20 (The second fundamental theorem) For any m × n matrix A,
(1) N(A) ⊕ R(A) = R^n,
(2) N(A^T) ⊕ C(A) = R^m.

Proof: (1) Since both the row space R(A) and the null space N(A) of A are subspaces of R^n, we have N(A) + R(A) ⊆ R^n in general. However,
    dim(N(A) + R(A)) = dim N(A) + dim R(A) − dim(N(A) ∩ R(A))
                     = dim N(A) + dim R(A)
                     = dim N(A) + rank A = n = dim R^n,
since dim(row space) + dim(null space) = n. It follows that N(A) + R(A) = R^n. Actually, we have N(A) ⊕ R(A) = R^n, since N(A) ∩ R(A) = {0}. A similar argument applies to A^T to get (2).
□

Corollary 5.21 (1) N(A) = R(A)⊥, and hence R(A) = N(A)⊥.
(2) N(A^T) = C(A)⊥, and hence C(A) = N(A^T)⊥.

For an m × n matrix A considered as a linear transformation A : R^n → R^m, the decompositions R^n = R(A) ⊕ N(A) and R^m = C(A) ⊕ N(A^T) given in Theorem 5.20 may be depicted as a figure with r = rank A: the row space R(A) ⊆ R^n is carried onto the column space C(A) ⊆ R^m, while the null space N(A) is carried to 0, and N(A^T) ⊆ R^m is the orthogonal complement of the range.

Note that if rank A = r, then dim R(A) = r = dim C(A), dim N(A) = n − r and dim N(A^T) = m − r. The figure shows that for any b_c in the column space C(A), which is the range of A, there is an x ∈ R^n such that Ax = b_c. Now, there exist unique x_r ∈ R(A) and x_n ∈ N(A) such that x = x_r + x_n. Then
    b_c = Ax = A(x_r + x_n) = Ax_r.
Moreover, for any x' ∈ N(A), A(x_r + x') = Ax_r = b_c, since Ax' = 0. Therefore, the set of all solutions to Ax = b_c is precisely x_r + N(A), which is the (n − r)-dimensional plane parallel to the null space N(A) and passing through x_r.

In particular, if rank A = m, then N(A^T) = {0} and hence C(A) = R^m. Thus, for any b ∈ R^m, the system Ax = b has solutions of the form x_r + x_n, where x_n ∈ N(A) is arbitrary and x_r ∈ R(A) is unique (this is the case of the existence Theorem 3.23).

On the other hand, if rank A = n < m, then N(A) = {0} and hence R(A) = R^n. Therefore, the system Ax = b has at most one solution; that is, it has a unique solution x_r in the row space if b ∈ C(A), and has no solution (that is, the system is inconsistent) if b ∉ C(A) (this is the case of the uniqueness Theorem 3.24). The latter case occurs when m > r = rank A, that is, when N(A^T) is a nontrivial subspace of R^m.

Problem 5.21 Show that (1) if Ax = b and A^T y = 0, then y^T b = 0, and (2) if Ax = 0 and A^T y = c, then x^T c = 0.

Problem 5.22 Given two vectors in R⁴, one of them (1, 2, 1, 2), find all vectors in R⁴ that are perpendicular to both of them.

Problem 5.23 Find a basis for the orthogonal complement of the row space of a given matrix A.

5.8 Least square solutions

We consider again a system Ax = b of linear equations. Recall that the system Ax = b has at least one solution if and only if b belongs to the column space C(A) of A. In this case, such a solution is unique if and only if the null space N(A) of A is trivial.
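The structure of the solution set x_r + N(A) can be verified numerically. In the sketch below (a rank-1 matrix of our own choosing), np.linalg.pinv is used to produce the particular solution lying in the row space; the row-space solution is orthogonal to the null space, as Corollary 5.21 asserts.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank 1, so dim N(A) = 2
b = np.array([3.0, 6.0])                 # b lies in C(A)

x_r = np.linalg.pinv(A) @ b              # particular solution in the row space R(A)
ns = np.array([[-2.0, 1.0, 0.0],
               [-3.0, 0.0, 1.0]])        # rows span N(A): A @ ns.T = 0

print(np.allclose(A @ x_r, b))           # x_r solves the system
print(np.allclose(ns @ x_r, 0))          # R(A) is orthogonal to N(A)
print(np.allclose(A @ (x_r + ns.T @ np.array([1.0, -2.0])), b))  # all of x_r + N(A) solves it
```

Here x_r = (3, 6, 9)/14, a multiple of the row (1, 2, 3), and adding any null-space vector leaves Ax = b intact.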
"Now the problem i “uhat hoppens if ¢ C(A) CRM so that x= b i inconsistent?" Nota that for any 2 € R®, Ax € (A). Thus the best we can {dois to find vector xo © RY such that Axp is closest to the given vector ‘bin B®, ce, [xy — bi is es amall ae posible. Soch a voltion vector 1p gives the best approximation Axp to by and ip called least square Solution of Ax = b. However, since we have the orthogonal decomposition 1B = CLAJON(AT), we know tht for any b ER", Projea(b) = Be€ CCA) 188 (CHAPTER 5. INNER PRODUCT SPACES {nthe closest vector to b among the vectors in (A). Therefore lest square solution xp € R" satisfies the flowing: Any = be = Proje), [xq = bl = Ax= bh foc any vector xin B, Sines by € CA), there aware ese as sae futon xp € Hash ha Aig ~ Det qu ey to sow tal her ins oqute atone ae te wt ln x0 P08). ‘In summary, & least square solution of Ax = b, when b ¢ C(A), is simply sltion of dx = by where be = Proc) © CCA nt ne Srthogonl deorpaiton of Da bet by ECA ONT) =, with by = b=b. CV (A"), That to find such a east square solution, we frst have to find be and then solve Axy = be Practically, the computation of by fom b could be quit compcatd, since we fst have to find an orthonormal basis for C(A) by using the Gram Seimidt orthogorslisation (whose computation is cumbersome) and thes ‘expres be with respect to this orthonormal basis forgiven b, "To find an easier method, let us examine sear sate solution nite ‘more detail” If € RY it lest aquare solution of Ax = b, te, «scution of Axy = bey then Axg—b = Axi (bet ba) = by €.N(AE) bol, Tho, ‘by applying AT to the equation, we get A? 
Ax_0 = A^T b; i.e., x_0 is a solution of the equation A^T Ax = A^T b. This equation is very interesting, because it also gives a sufficient condition for a least square solution, as the following theorem shows, and so it is defined to be the normal equation of Ax = b.

Theorem 5.22 Let A be an m x n matrix, and let b ∈ R^m be any vector. Then a vector x_0 ∈ R^n is a least square solution of Ax = b if and only if x_0 is a solution of the normal equation A^T Ax = A^T b.

Proof: We only need to show the sufficiency of the normal equation: if x_0 is a solution of the equation A^T Ax = A^T b, then A^T(Ax_0 - b) = 0, i.e., Ax_0 - b = Ax_0 - (b_c + b_n) ∈ N(A^T). This means that, as a vector in C(A),

    Ax_0 - b_c ∈ N(A^T) ∩ C(A) = {0}.

Therefore Ax_0 = b_c = Proj_C(A)(b), i.e., x_0 is a least square solution of Ax = b. □

Note that if the rows of A are linearly independent, then rank A = m and C(A) = R^m (or N(A^T) = {0}). Thus a least square solution of Ax = b is simply a usual solution.

Example 5.13 Find all the least square solutions to Ax = b, and then determine the orthogonal projection b_c of b into the column space C(A) of A, where

    A = [ ⋯ ],  b = [ ⋯ ].

Solution: First compute

    A^T A = [ ⋯ ]  and  A^T b = [ ⋯ ].

From the normal equation, a least square solution of Ax = b is a solution of A^T Ax = A^T b. By solving this system of equations (left for an exercise), we obtain all the least square solutions desired:

    x_0 = [ ⋯ ] + t [ ⋯ ]

for any number t. Now b_c = Ax_0 = Proj_C(A)(b). Note that this vector x_0 need not lie in R(A); one needs to do a little more computation to find the least square solution x_r ∈ R(A). □

Problem 5.24 Find all least square solutions x in R^3 of Ax = b, where

    A = [ ⋯ ],  b = [ ⋯ ].

Practically, finding the solutions of the normal equation depends very much on A^T A.
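Since the numbers in Example 5.13 do not survive in this copy, here is an analogous made-up system (NumPy assumed) solved through the normal equation; the answer agrees with NumPy's own least-squares routine, and the residual b - Ax_0 lands in N(A^T), exactly as the proof of Theorem 5.22 requires.

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([6., 0., 0.])          # not in C(A), so Ax = b is inconsistent

# Normal equation: A^T A x = A^T b (here A^T A is 2x2 and invertible).
x0 = np.linalg.solve(A.T @ A, A.T @ b)

# Cross-check against NumPy's least-squares solver.
x_np = np.linalg.lstsq(A, b, rcond=None)[0]

# The residual is orthogonal to C(A): A^T (b - A x0) = 0.
normal_resid = A.T @ (b - A @ x0)
```

Working the 2x2 system by hand gives x_0 = (5, -3), and `normal_resid` is numerically zero.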
In the most fortunate case, if the square matrix A^T A is the identity matrix, then the normal equation A^T Ax = A^T b of the system Ax = b reduces to x = A^T b, which is simply a least square solution. Even if A^T A is not the identity matrix, we may still have several simple cases.

Remark: Let us now discuss the solvability of this normal equation. Observe that A^T : R^m → R^n, and the row space R(A) of A and the column space C(A^T) of A^T are the same. Thus, for any x_r ∈ C(A^T) = R(A), there exists a vector b ∈ R^m such that A^T b = x_r. If we write b = b_c + b_n for unique b_c ∈ C(A) and b_n ∈ N(A^T), then x_r = A^T b = A^T b_c. Therefore, the restrictions

    A|_R(A) : R(A) ⊆ R^n → C(A) ⊆ R^m  and  A^T|_C(A) : C(A) ⊆ R^m → R(A) ⊆ R^n

are one-to-one and onto transformations; that is, they are invertible. However, even in this case, we do not have AA^T = I_m nor A^T A = I_n in general.

The transpose A^T of a matrix A satisfies the following equation: for x ∈ R^n and y ∈ R^m,

    Ax · y = (Ax)^T y = x^T A^T y = x · A^T y.

The following theorem gives a condition for A^T A to be invertible.

Theorem 5.23 For any m x n matrix A, A^T A is a symmetric n x n square matrix, and rank(A^T A) = rank A.

Proof: Clearly, A^T A is square and symmetric: (A^T A)^T = A^T (A^T)^T = A^T A. Since the numbers of columns of A and of A^T A are both n, we have

    rank A + dim N(A) = n = rank(A^T A) + dim N(A^T A).

Hence, it suffices to show that A and A^T A have exactly the same null space, so that dim N(A) = dim N(A^T A). If x ∈ N(A), then Ax = 0 and also A^T Ax = A^T 0 = 0, so that x ∈ N(A^T A). Conversely, suppose that A^T Ax = 0. Then

    ||Ax||^2 = Ax · Ax = (Ax)^T (Ax) = x^T (A^T Ax) = x^T 0 = 0.

Hence Ax = 0, i.e., x ∈ N(A). □

In the following discussion, we assume that the columns of A are linearly independent, i.e., rank A = n, so that N(A) = {0}, or A is one-to-one, and the system Ax = b_c has a unique solution x_r in R(A) = R^n. Moreover, by Theorem 5.23, the square matrix A^T A is of rank n, and it is invertible.
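Theorem 5.23 is easy to verify numerically on a rank-deficient example (NumPy assumed; the matrix is made up, with its third column the sum of the first two):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])       # rank 2: col3 = col1 + col2

G = A.T @ A                        # the Gram matrix A^T A

rank_A = int(np.linalg.matrix_rank(A))
rank_G = int(np.linalg.matrix_rank(G))
is_symmetric = bool(np.allclose(G, G.T))
```

As the theorem asserts, `G` is symmetric and has the same rank (here 2) as A, even though A itself is not square.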
In this case, from the normal equation, the least square solution is x_0 = (A^T A)^{-1} A^T b.

Corollary 5.24 If the columns of A are linearly independent, then
(1) A^T A is invertible, so that (A^T A)^{-1} A^T is a left inverse of A,
(2) the vector x_0 = (A^T A)^{-1} A^T b is the unique least square solution of the system Ax = b, and
(3) Ax_0 = A(A^T A)^{-1} A^T b is the projection b_c of b into the column space C(A).

By applying Corollary 5.24 to A^T, we can say that, if rank A = m for an m x n matrix A, then AA^T is invertible and A^T(AA^T)^{-1} is a right inverse of A (see the Remark after Theorem 3.24). Moreover, by using Theorem 5.23, one can show that, for a matrix A, A^T A is invertible if and only if the columns of A are linearly independent, and AA^T is invertible if and only if the rows of A are linearly independent.

Example 5.14 Consider the following system of linear equations:

    Ax = [ ⋯ ] [ x_1 ] = [ ⋯ ] = b.
               [ x_2 ]

The columns of A are linearly independent, so N(A) = {0}. Note that

    A^T A = [ ⋯ ],

which is invertible. By a simple computation, one can obtain the least square solution

    x_0 = (A^T A)^{-1} A^T b = [ ⋯ ].

In fact, b_c = Ax_0 = [ ⋯ ].

Problem 5.25 Let W be the subspace of the Euclidean space R^3 spanned by v_1 = (1, 1, 2) and v_2 = ( ⋯ , -1). Find Proj_W(b) for b = (1, 3, 2).

5.9 Application: Polynomial approximations

In this section, one can find a reason for the name of the "least square solutions," and the following example illustrates an application of the least square solution to the determination of the spring constants in physics.
Example 5.15 Hooke's law for springs in physics says that, for a uniform spring, the length stretched or compressed is a linear function of the force applied; that is, the force F applied to the spring is related to the length x stretched or compressed by the equation F = a + kx, where a and k are some constants determined by the spring.

Suppose now that, given a spring of length 6.1 inches, we want to determine the constants a and k under the experimental data: the lengths are found to be 7.6, 8.7 and 10.4 inches when forces of 2, 4 and 6 kilograms, respectively, are applied to the spring. However, by plotting these data

    (x, F) = (6.1, 0), (7.6, 2), (8.7, 4), (10.4, 6)

in the xF-plane, one can easily recognize that they are not on a straight line of the form F = a + kx, which may be caused by experimental errors. This means that the system of linear equations

    a +  6.1k = 0
    a +  7.6k = 2
    a +  8.7k = 4
    a + 10.4k = 6

is inconsistent (it has no solutions, so the second equality in each equation may not be a true equality). Thus, the best thing one can do is to determine the straight line that "fits" the data, that is, the line that minimizes the sum of the squares of the vertical distances from the line to the data: i.e., one needs to minimize

    (0 - F_1)^2 + (2 - F_2)^2 + (4 - F_3)^2 + (6 - F_4)^2.

This quantity is simply the square of the distance between the vector b = (0, 2, 4, 6) in R^4 and the vector (F_1, F_2, F_3, F_4) in the column space C(A) of the 4 x 2 matrix

    A = [ 1  6.1 ]
        [ 1  7.6 ]
        [ 1  8.7 ]
        [ 1 10.4 ],

since the matrix form of the system of linear equations is

    A [ a ] = (F_1, F_2, F_3, F_4) = (0, 2, 4, 6) = b ∈ R^4.
      [ k ]

The minimum of the sum of squares is obtained when (F_1, F_2, F_3, F_4) is the projection of the vector b = (0, 2, 4, 6) into the column space C(A); that is, what we are looking for is the least
square solution of the system, which is now easily computed as

    x_0 = (a, k) = (A^T A)^{-1} A^T b ≈ (-8.6, 1.4).

This gives F ≈ -8.6 + 1.4x.

In general, a common problem in experimental work is to obtain a polynomial y = f(x) in two variables x and y that "fits" the data of various values of y determined experimentally for inputs x, say

    (x_1, y_1), (x_2, y_2), ..., (x_n, y_n),

plotted in the xy-plane. Some possibilities are: (1) by a straight line: y = a + bx, (2) by a quadratic polynomial: y = a + bx + cx^2, or (3) by a polynomial of degree k: y = a_0 + a_1 x + ⋯ + a_k x^k, etc.

As a general case, suppose that we are looking for a polynomial y = f(x) = a_0 + a_1 x + a_2 x^2 + ⋯ + a_k x^k of degree k that passes through the given data. Then we obtain a system of linear equations

    f(x_1) = a_0 + a_1 x_1 + a_2 x_1^2 + ⋯ + a_k x_1^k = y_1
    f(x_2) = a_0 + a_1 x_2 + a_2 x_2^2 + ⋯ + a_k x_2^k = y_2
        ⋮
    f(x_n) = a_0 + a_1 x_n + a_2 x_n^2 + ⋯ + a_k x_n^k = y_n,

or, in matrix form, the system may be written as Ax = b:

    [ 1  x_1  x_1^2  ⋯  x_1^k ] [ a_0 ]   [ y_1 ]
    [ 1  x_2  x_2^2  ⋯  x_2^k ] [ a_1 ] = [ y_2 ]
    [ ⋮                        ] [  ⋮  ]   [  ⋮  ]
    [ 1  x_n  x_n^2  ⋯  x_n^k ] [ a_k ]   [ y_n ]

The left side Ax represents the values of the polynomial at the x_i's, and the right side represents the data obtained from the inputs x_i in the experiment. If n ≤ k + 1, then these cases have already been discussed in Section 3.8. In general, this kind of system may be inconsistent (i.e., it may have no solution) if n > k + 1. This means that there may be no polynomial of degree k ≤ n - 1 whose graph passes through the n data points (x_i, y_i) in the xy-plane. Practically, this is due to the fact that the experimental data usually have some errors.

Thus, the best thing we can do is to find the polynomial f(x) that minimizes the sum of the squares of the vertical distances between the graph of the polynomial and the data. In matrix and vector space language, an inconsistency of the system means that the vector b ∈ R^n representing the data is not in the column space C(A) of the coefficient matrix A.
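The fit in Example 5.15 takes only a few lines (NumPy assumed); the same design-matrix pattern handles the general polynomial case by appending columns x^2, ..., x^k.

```python
import numpy as np

# Data (x_i, F_i) from Example 5.15: spring length vs. applied force.
x = np.array([6.1, 7.6, 8.7, 10.4])
F = np.array([0., 2., 4., 6.])

# Design matrix for F = a + k x: a column of ones and the x column.
A = np.column_stack([np.ones_like(x), x])

# Least-squares coefficients via the normal equation A^T A x = A^T F.
a, k = np.linalg.solve(A.T @ A, A.T @ F)
```

This returns a ≈ -8.6 and k ≈ 1.4, the line F ≈ -8.6 + 1.4x. For a degree-k polynomial fit, `np.vander(x, k + 1, increasing=True)` builds the corresponding coefficient matrix.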
And minimizing the sum of the squares of the vertical distances between the graph of the polynomial and the data means looking for the least square solution of the system, because for any c ∈ C(A) of the form

    c = Ax = [ a_0 + a_1 x_1 + ⋯ + a_k x_1^k ]
             [               ⋮               ]
             [ a_0 + a_1 x_n + ⋯ + a_k x_n^k ],

we have

    ||b - c||^2 = (y_1 - a_0 - a_1 x_1 - ⋯ - a_k x_1^k)^2 + ⋯ + (y_n - a_0 - a_1 x_n - ⋯ - a_k x_n^k)^2.

The previous theory says that the orthogonal projection b_c of b into the column space of A minimizes this quantity, and shows how to find b_c and a least square solution x_0.

Example 5.16 Find a straight line y = a + bx that fits the given experimental data (1, 0), (2, 3), (3, 4) and (4, 4); that is, a line y = a + bx that minimizes the sum of squares of the vertical distances |y_i - a - bx_i| from the line y = a + bx to the data (x_i, y_i). By adopting matrix notation

    A = [ 1 1 ]        x = [ a ],   b = [ 0 ]
        [ 1 2 ]            [ b ]        [ 3 ]
        [ 1 3 ]                         [ 4 ]
        [ 1 4 ],                        [ 4 ],

we have Ax = b, and want to find a least square solution of Ax = b. But the columns of A are linearly independent, and the least square solution is x_0 = (A^T A)^{-1} A^T b. Now,

    A^T A = [  4 10 ],  (A^T A)^{-1} = (1/20) [  30 -10 ],  A^T b = [ 11 ].
            [ 10 30 ]                         [ -10   4 ]           [ 34 ]

Hence, we have

    x_0 = (A^T A)^{-1} A^T b = [ -1/2  ],
                               [ 13/10 ]

so the fitting line is y = -1/2 + (13/10)x.

Problem 5.26 [The statement of this problem is illegible in the source copy.]

5.10 Orthogonal projection matrices

In Section 5.8, we have seen that the orthogonal projection Proj_C(A) of R^m on the column space C(A) of an m x n matrix A plays an important role in finding a least square solution of Ax = b. Note that any subspace W of R^m is the column space of such a matrix A, whose columns are the vectors in a basis for W. Therefore, in this section, we only consider the orthogonal projection Proj_W of an inner product space V onto a subspace W, and aim to find its associated matrix, called an orthogonal projection matrix, for the projection Proj_W. This will give us a practical way of computing a given orthogonal projection.
First of all, if a subspace W of the Euclidean space R^m has an orthonormal basis β = {u_1, u_2, ..., u_n}, then for any x ∈ R^m,

    Proj_W(x) = (u_1 · x)u_1 + (u_2 · x)u_2 + ⋯ + (u_n · x)u_n
              = u_1(u_1^T x) + u_2(u_2^T x) + ⋯ + u_n(u_n^T x)
              = (u_1 u_1^T + u_2 u_2^T + ⋯ + u_n u_n^T)x,

by Lemma 5.11. Note that in this equation, Proj_W is a linear transformation, but the right side is the usual matrix product with a vector. It implies that, if an orthonormal basis for a subspace W is given, the matrix representation (projection matrix) of the orthogonal projection Proj_W with respect to the standard basis α for R^m is given as

    [Proj_W]_α = u_1 u_1^T + u_2 u_2^T + ⋯ + u_n u_n^T.

Note that, if we denote by Proj_{u_i} the orthogonal projection of R^m on the subspace spanned by the basis vector u_i for each i, then its matrix representation is u_i u_i^T (see page 170). Moreover, by using the matrix representations, it can be shown that

    Proj_W = Proj_{u_1} + Proj_{u_2} + ⋯ + Proj_{u_n}, i.e., [Proj_W]_α = Σ_{i=1}^n [Proj_{u_i}]_α.

Problem 5.27 Let W = {(x, x) : x ∈ R} be the line y = x in R^2, with unit vector u = (1/√2)(1, 1). Show that the matrix

    u u^T = (1/2) [ 1 ] [ 1 1 ] = (1/2) [ 1 1 ]
                  [ 1 ]                 [ 1 1 ]

is the projection matrix on W.

Problem 5.28 Show that, if {v_1, v_2, ..., v_n} is an orthonormal basis for R^n, then v_1 v_1^T + v_2 v_2^T + ⋯ + v_n v_n^T = I_n.

Definition 5.11 Let W be a subspace of the Euclidean m-space R^m. An m x m matrix P is called the (orthogonal) projection matrix on a subspace W if Proj_W(x) = Px for every vector x in R^m. Equivalently, P is the matrix representation of the orthogonal projection Proj_W of R^m onto W with respect to the standard basis for R^m.

It has already been shown that u_1 u_1^T + u_2 u_2^T + ⋯ + u_n u_n^T is a projection matrix for any orthonormal set {u_1, u_2, ..., u_n} in R^m. Such an expression of the projection matrix on a subspace W can be obtained only when an orthonormal basis for W is known.

Now let W be an n-dimensional subspace of the Euclidean space R^m, and let {v_1, v_2, ..., v_n} be a (not necessarily orthonormal) basis for W. If we find an orthonormal basis for W by the Gram-Schmidt orthogonalization, then we can get the projection matrix of the previous form. But the Gram-Schmidt orthogonalization process could be cumbersome and tedious.
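The formula [Proj_W]_α = u_1 u_1^T + ⋯ + u_n u_n^T translates directly into code (NumPy assumed; the orthonormal pair below is made up for illustration):

```python
import numpy as np

# An orthonormal basis for a 2-dimensional subspace W of R^3.
u1 = np.array([1., 0., 0.])
u2 = np.array([0., 3., 4.]) / 5.0

# Projection matrix as a sum of rank-one projections u_i u_i^T.
P = np.outer(u1, u1) + np.outer(u2, u2)

x = np.array([2., 5., -5.])
proj = P @ x

# The same vector from the coefficient form (u_1 . x) u_1 + (u_2 . x) u_2.
proj_coeff = (u1 @ x) * u1 + (u2 @ x) * u2
```

Both routes give the same projection; P is also symmetric and idempotent, anticipating the criterion of Theorem 5.27 below, so applying it twice changes nothing.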
Sometimes, one can avoid this cumbersome process. Let A = [v_1 v_2 ⋯ v_n] be the m x n matrix having the basis vectors v_i as its columns. Clearly, we have W = C(A). For any vector b ∈ R^m, the projection vector Proj_W(b) is simply the vector Ax_0 for a least square solution x_0 of Ax = b, that is, a solution of the normal equation A^T Ax = A^T b. On the other hand, since the columns of A are linearly independent, A^T A is invertible, so x_0 = (A^T A)^{-1} A^T b, and, furthermore,

    Proj_W(b) = Ax_0 = A(A^T A)^{-1} A^T b,

by Corollary 5.24. This means that A(A^T A)^{-1} A^T is the projection matrix on the subspace W = C(A). Note that this projection matrix is independent of the choice of basis for W, due to the uniqueness of the matrix representation of a linear transformation with respect to a fixed basis. Some possible simple computations for the matrix A(A^T A)^{-1} A^T will follow later. This argument proves the following theorem.

Theorem 5.25 For any subspace W of R^m, the projection matrix P on W can be written as

    P = [Proj_W]_α = A(A^T A)^{-1} A^T

for a matrix A whose columns form a basis for W.

Example 5.17 Find the projection matrix P on the plane 2x - y - 3z = 0 in R^3, and calculate Pb for b = (1, 0, 1).

Solution: Choose any basis for the plane 2x - y - 3z = 0, say v_1 = (0, 3, -1) and v_2 = (1, 2, 0), and let

    A = [  0  1 ]
        [  3  2 ]
        [ -1  0 ].

Then

    A^T A = [ 10  6 ],  (A^T A)^{-1} = (1/14) [  5 -6 ].
            [  6  5 ]                         [ -6 10 ]

The projection matrix is

    P = A(A^T A)^{-1} A^T = (1/14) [ 10   2   6 ]
                                   [  2  13  -3 ]
                                   [  6  -3   5 ],

and

    Pb = (1/14) [ 10   2   6 ] [ 1 ]           [ 16 ]
                [  2  13  -3 ] [ 0 ] = (1/14)  [ -1 ]
                [  6  -3   5 ] [ 1 ]           [ 11 ].

Remark: In particular, if the columns of A consist of an orthonormal basis {u_1, ..., u_n} for W, then A^T A = I_n, since u_i^T u_j = δ_ij. Hence, the normal equation A^T Ax = A^T b becomes

    x = A^T b = [ u_1 · b ]
                [    ⋮    ]
                [ u_n · b ],

and Proj_W(b) = Ax = AA^T b = (u_1 · b)u_1 + ⋯ + (u_n · b)u_n, which is just the expression of Proj_W(b) with respect to the orthonormal basis for W that forms the columns of A.

Corollary 5.26 Suppose that the column vectors {u_1, ..., u_n} of A form an orthonormal basis for W in R^m. Then

    P = A(A^T A)^{-1} A^T = AA^T
= u_1 u_1^T + u_2 u_2^T + ⋯ + u_n u_n^T.

In particular, if A is an m x m orthogonal matrix, then, for all b ∈ R^m, Ax = b has the unique solution x = A^{-1}b = A^T b.

Proof: For any x ∈ R^m,

    AA^T x = [ u_1 ⋯ u_n ] [ u_1^T x ]
                           [    ⋮    ] = (u_1^T x)u_1 + ⋯ + (u_n^T x)u_n,
                           [ u_n^T x ]

where each u_i^T x is a scalar, the inner product of u_i and x. □

Example 5.18 If A = [e_1 e_2], where e_1 = (1, 0, 0) and e_2 = (0, 1, 0), then the column vectors of A are orthonormal, C(A) is the xy-plane, and the projection of b = (x, y, z) ∈ R^3 onto C(A) is b_c = (x, y, 0). In fact,

    b_c = AA^T b = [ 1 0 ] [ 1 0 0 ] [ x ]   [ x ]
                   [ 0 1 ] [ 0 1 0 ] [ y ] = [ y ]
                   [ 0 0 ]           [ z ]   [ 0 ].

With a general basis for W, we exhibit a criterion for a square matrix to be a projection matrix.

Theorem 5.27 A square matrix P is a projection matrix if and only if it is symmetric and idempotent, i.e., P^T = P and P^2 = P.

Proof: Let P be a projection matrix. Then, by Theorem 5.25, the matrix P can be written as P = A(A^T A)^{-1} A^T for some matrix A whose columns are linearly independent. A simple expansion of P = A(A^T A)^{-1} A^T gives

    P^T = (A(A^T A)^{-1} A^T)^T = A((A^T A)^T)^{-1} A^T = A(A^T A)^{-1} A^T = P,

    P^2 = (A(A^T A)^{-1} A^T)(A(A^T A)^{-1} A^T) = A(A^T A)^{-1}(A^T A)(A^T A)^{-1} A^T = A(A^T A)^{-1} A^T = P.

For the converse, we have the orthogonal decomposition R^m = C(P) ⊕ N(P^T) by Theorem 5.20. But N(P^T) = N(P), since P^T = P. Note that P^2 = P implies Pu = u for u ∈ C(P) (see Theorem 5.7); thus P fixes C(P) and annihilates N(P), i.e., P is the projection matrix on C(P). □

By Theorem 5.27, if P is a projection matrix on C(P), then I - P is also a projection matrix, on the null space N(P) (= C(I - P)), which is orthogonal to C(P) (= N(I - P)).

Example 5.19 Let P_i : R^m → R^m be defined by

    P_i(x_1, ..., x_m) = (0, ..., 0, x_i, 0, ..., 0),

for i = 1, ..., m. Then each P_i is the projection of R^m onto the i-th axis, whose matrix form is the diagonal matrix with 1 in the (i, i)-th entry and 0's elsewhere. When we restrict the image to R, P_i is an element in the dual space (R^m)*, usually denoted by x_i as the i-th coordinate function (see Example 4.25).

Problem 5.29 Show that any square matrix P that satisfies P^T P = P is a projection matrix.

In general, if A = [c_1 ⋯ c_n] is an
m x n matrix with linearly independent column vectors c_1, ..., c_n, then rank A = n, and the Gram-Schmidt orthogonalization applied to c_1, ..., c_n produces an orthonormal basis {u_1, ..., u_n} for C(A) such that each c_j lies in the span of u_1, ..., u_j; that is,

    c_j = b_1j u_1 + b_2j u_2 + ⋯ + b_jj u_j, with b_ij = u_i · c_j and b_ij = 0 for i > j.

Hence,

    A = [c_1 c_2 ⋯ c_n] = [u_1 u_2 ⋯ u_n] [ b_11 b_12 ⋯ b_1n ]
                                          [  0   b_22 ⋯ b_2n ] = QR.
                                          [  ⋮         ⋱  ⋮  ]
                                          [  0    0   ⋯ b_nn ]

The matrix Q = [u_1 ⋯ u_n] is an m x n matrix with orthonormal columns, called the orthogonal part of A, and

    R = [ b_11 b_12 ⋯ b_1n ]
        [  0   b_22 ⋯ b_2n ]
        [  ⋮         ⋱  ⋮  ]
        [  0    0   ⋯ b_nn ]

is an invertible upper triangular matrix, called the upper triangular part of A (note that all the diagonal entries b_jj ≠ 0). Such an A = QR is called the QR factorization of an m x n matrix A, when rank A = n. With this
