Elementary Linear Algebra with Supplemental Applications
INTERNATIONAL STUDENT VERSION

HOWARD ANTON
Professor Emeritus, Drexel University

CHRIS RORRES
University of Pennsylvania

WILEY

Elementary Linear Algebra with Supplemental Applications, ISV, Eleventh Edition. Authorised reprint by Wiley India Pvt. Ltd., 4436/7, Ansari Road, Daryaganj, New Delhi 110002.

Copyright © 2015, 2014 John Wiley & Sons (Singapore) Pte. Ltd.

Cover Image: Linda Bucklin/Getty Images

No part of this book, including interior design, cover design, and icons, may be reproduced or transmitted in any form without the permission of John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, website http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that Internet websites listed in this work may have changed or disappeared between when this work was written and when it is read.

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. Wiley is not associated with any product or vendor mentioned in this book.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

This edition is authorized for sale in the Indian Subcontinent only.

To access WileyPLUS, students need a WileyPLUS registration code. To purchase WileyPLUS, go to www.wileyplus.com or contact your nearest Wiley representative.
Authorised Indian Edition
ISBN: 978-81-265-6206-4
ISBN: 978-1-26595129 (ebk)

To:
My wife, Pat
My children, Brian, David, and Lauren
My parents, Shirley and Benjamin
My benefactor, Stephen Girard (1750-1831), whose philanthropy changed my life

Howard Anton
Chris Rorres

ABOUT THE AUTHOR

Howard Anton obtained his B.A. from Lehigh University, his M.A. from the University of Illinois, and his Ph.D. from the Polytechnic University of Brooklyn, all in mathematics. In the early 1960s he worked for Burroughs Corporation and Avco Corporation at Cape Canaveral, Florida, where he was involved with the manned space program. In 1968 he joined the Mathematics Department at Drexel University, where he taught full time until 1983. Since then he has devoted the majority of his time to textbook writing and activities for mathematical associations. Dr. Anton was president of the EPADEL Section of the Mathematical Association of America (MAA), served on the Board of Governors of that organization, and guided the creation of the Student Chapters of the MAA. In addition to various pedagogical articles, he has published numerous research papers in functional analysis, approximation theory, and topology. He is best known for his textbooks in mathematics, which are among the most widely used in the world. There are currently more than 175 versions of his books, including translations into Spanish, Arabic, Portuguese, Italian, Indonesian, French, Japanese, Chinese, Hebrew, and German. For relaxation, Dr. Anton enjoys travel and photography.

Chris Rorres earned his B.S. degree from Drexel University and his Ph.D.
from the Courant Institute of New York University. He was a faculty member of the Department of Mathematics at Drexel University for more than 30 years where, in addition to teaching, he did applied research in solar engineering, acoustic scattering, population dynamics, computer system reliability, geometry of archaeological sites, optimal animal harvesting policies, and decision theory. He retired from Drexel in 2001 as a Professor Emeritus of Mathematics and is now a mathematical consultant. He also has a research position at the School of Veterinary Medicine at the University of Pennsylvania where he does mathematical modeling of animal epidemics. Dr. Rorres is a recognized expert on the life and work of Archimedes and has appeared in various television documentaries on that subject. His highly acclaimed website on Archimedes (http://www.math.nyu.edu/~crorres/Archimedes/contents.html) is a virtual book that has become an important teaching tool in mathematical history for students around the world.

This textbook is an expanded version of Elementary Linear Algebra, eleventh edition, by Howard Anton. The first nine chapters of this book are identical to the first nine chapters of that text; the tenth chapter consists of twenty applications of linear algebra drawn from business, economics, engineering, physics, computer science, approximation theory, ecology, demography, and genetics. The applications are largely independent of each other, and each includes a list of mathematical prerequisites.
Thus, each instructor has the flexibility to choose those applications that are suitable for his or her students and to incorporate each application anywhere in the course after the mathematical prerequisites have been satisfied. Chapters 1-9 include simpler treatments of some of the applications covered in more depth in Chapter 10.

This edition gives an introductory treatment of linear algebra that is suitable for a first undergraduate course. Its aim is to present the fundamentals of linear algebra in the clearest possible way; sound pedagogy is the main consideration. Although calculus is not a prerequisite, there is some optional material that is clearly marked for students with a calculus background. If desired, that material can be omitted without loss of continuity.

Technology is not required to use this text, but for instructors who would like to use MATLAB, Mathematica, Maple, or calculators with linear algebra capabilities, we have posted some supporting material that can be accessed at the following companion website:

www.wiley.com/college/anton

Summary of Changes in This Edition

Many parts of the text have been revised based on an extensive set of reviews. Here are the primary changes:

+ New Exercises  Hundreds of new exercises of all types have been added throughout the text.

+ Technology Exercises  Exercises requiring technology such as MATLAB, Mathematica, or Maple have been added, and supporting data sets have been posted on the companion websites for this text. The use of technology is not essential, and these exercises can be omitted without affecting the flow of the text.

+ Exercise Sets Reorganized  Many multiple-part exercises have been subdivided to create a better balance between odd and even exercise types. To simplify the instructor's task of creating assignments, exercise sets have been arranged in clearly defined categories.

+ Appendix A Rewritten  The appendix on reading and writing proofs has been expanded and revised to better support courses that focus on proving theorems.

+ Web Materials
Supplementary web materials now include various applications modules, three modules on linear programming, and an alternative presentation of determinants based on permutations.

Hallmark Features

+ Relationships Among Concepts  One of our main pedagogical goals is to convey to the student that linear algebra is a cohesive subject and not simply a collection of isolated definitions and techniques. One way in which we do this is by using a crescendo of Equivalent Statements theorems that continually revisit relationships among systems of equations, matrices, determinants, vectors, linear transformations, and eigenvalues. To get a general sense of how we use this technique see Theorems 1.5.3, 1.6.4, 2.3.8, 4.8.8, and then Theorem 5.1.5, for example.

+ Smooth Transition to Abstraction  Because the transition from R^n to general vector spaces is difficult for many students, considerable effort is devoted to explaining the purpose of abstraction and helping the student to "visualize" abstract ideas by drawing analogies to familiar geometric ideas.

+ Mathematical Precision  When reasonable, we try to be mathematically precise. In keeping with the level of the student audience, proofs are presented in a patient style that is tailored for beginners.

+ Suitability for a Diverse Audience  This text is designed to serve the needs of students in engineering, computer science, biology, physics, business, and economics as well as those majoring in mathematics.

About the Exercises

+ Graded Exercise Sets  Each exercise set in the first nine chapters begins with routine drill problems and progresses to problems with more substance.

+ True/False Exercises  The True/False exercises are designed to check conceptual understanding and logical reasoning. To avoid pure guesswork, the students are required to justify their responses in some way.
+ Technology Exercises  Exercises that require technology have also been grouped. To avoid burdening the student with keyboarding, the relevant data files have been posted on the websites that accompany this text.

+ Data Files  Data files for the technology exercises are posted on the companion websites that accompany this text.

Supplementary Materials for Students

+ MATLAB Manual and Linear Algebra Labs  This supplement contains a set of MATLAB laboratory projects written by Dan Seth of West Texas A&M University. It is designed to help students learn key linear algebra concepts by using MATLAB and is available in PDF form without charge to students at schools adopting the 11th edition of the text.

+ Videos  A complete set of Daniel Solow's How to Read and Do Proofs videos is available to students through WileyPLUS as well as the companion websites that accompany this text. These materials include a guide to help students locate the lecture videos appropriate for specific proofs in the text.

Supplementary Materials for Instructors

+ Instructor's Solutions Manual  This supplement provides worked-out solutions to most exercises in the text.

+ PowerPoint Presentations  PowerPoint slides are provided that display important definitions, examples, graphics, and theorems in the book. These can also be distributed to students as review materials or to simplify note taking.

+ Test Bank  Test questions and sample exams are available in PDF or LaTeX form.

+ WileyPLUS  An online environment for effective teaching and learning. WileyPLUS builds student confidence by taking the guesswork out of studying and by providing a clear roadmap of what to do, how to do it, and whether it was done right.
Its purpose is to motivate and foster initiative so instructors can have a greater impact on classroom achievement and beyond.

A Guide for the Instructor

Although linear algebra courses vary widely in content and philosophy, most courses fall into two categories: those with about 40 lectures and those with about 30 lectures. Accordingly, we have created long and short templates as possible starting points for constructing a course outline. Of course, these are just guides, and you will certainly want to customize them to fit your local interests and requirements. Neither of these sample templates includes applications or the numerical methods in Chapter 9. Those can be added, if desired, and as time permits.

                                                       Long Template | Short Template
  Chapter 1: Systems of Linear Equations and Matrices    7 lectures  |  6 lectures
  Chapter 2: Determinants                                3 lectures  |  2 lectures
  Chapter 3: Euclidean Vector Spaces                     4 lectures  |  3 lectures
  Chapter 4: General Vector Spaces                      10 lectures  |  9 lectures
  Chapter 5: Eigenvalues and Eigenvectors                3 lectures  |  3 lectures
  Chapter 6: Inner Product Spaces                        4 lectures  |  1 lecture
  Chapter 7: Diagonalization and Quadratic Forms         4 lectures  |  3 lectures
  Chapter 8: General Linear Transformations              4 lectures  |  3 lectures
  Total                                                 39 lectures  | 30 lectures

The following people reviewed the plans for this edition, critiqued much of the content, and provided me with insightful pedagogical advice:

John Alongi, Northwestern University
Jiu Ding, University of Southern Mississippi
Eugene Don, City University of New York at Queens
John Gilbert, University of Texas at Austin
Danrun Huang, St. Cloud State University
Craig Jensen, University of New Orleans
Steve Kahan, City University of New York at Queens
Harihar Khanal, Embry-Riddle Aeronautical University
Firooz Khosraviyani, Texas A&M International University
Y. George Lai, Wilfrid Laurier University
Kouok Law, Georgia Perimeter College
Mark MacLean, Seattle University
Vasileios Maroulas, University of Tennessee, Knoxville
Daniel
Reynolds, Southern Methodist University
Qin Sheng, Baylor University
Laura Smithies, Kent State University
Larry Susanka, Bellevue College
Cristina Tone, University of Louisville
Yvonne Yaz, Milwaukee School of Engineering
Ruhan Zhao, State University of New York at Brockport

Exercise Contributions

Special thanks are due to three talented people who worked on various aspects of the exercises:

Przemyslaw Bogacki, Old Dominion University, who solved the exercises and created the solutions manuals.

Roger Lipsett, Brandeis University, who proofread the manuscript and exercise solutions for mathematical accuracy.

Daniel Solow, Case Western Reserve University, author of "How to Read and Do Proofs," for providing videos on techniques of proof and a key to using those videos in coordination with this text.

Sky Pelletier Waterpeace, who critiqued the technology exercises, suggested improvements, and provided the data sets.

Special Contributions

I would also like to express my deep appreciation to the following people with whom I worked on a daily basis:

Anton Kaul, who worked closely with me at every stage of the project and helped to write some new text material and exercises. On the many occasions that I needed mathematical or pedagogical advice, he was the person I turned to. I cannot thank him enough for his guidance and the many contributions he has made to this edition.

David Dietz, my editor, for his patience, sound judgment, and dedication to producing a quality book.

Anne Scanlan-Rohrer, of Two Ravens Editorial, who coordinated the entire project and brought all of the pieces together.

Jacqueline Sinacori, who managed many aspects of the content and was always there to answer my often obscure questions.

Carol Sawyer, of The Perfect Proof, who managed the myriad of details in the production process and helped with proofreading.

Madelyn Lesure, with whom I have worked for many years and whose elegant sense of design is apparent in the pages of this book.
Lilian Brady, my copy editor for almost 25 years. I feel fortunate to have been the beneficiary of her remarkable knowledge of typography, style, grammar, and mathematics.

Pat Anton, of Anton Textbooks, Inc., who helped with the mundane chores of duplicating, shipping, accuracy checking, and tasks too numerous to mention.

John Rogosich, of Techsetters, Inc., who programmed the design, managed the composition, and resolved many difficult technical issues.

Brian Haughwout, of Techsetters, Inc., for his careful and accurate work on the illustrations.

Josh Elkan, for providing valuable assistance in accuracy checking.

Howard Anton
Chris Rorres

CONTENTS

CHAPTER 1  Systems of Linear Equations and Matrices  1
  1.1 Introduction to Systems of Linear Equations  2
  1.2 Gaussian Elimination  9
  1.3 Matrices and Matrix Operations  21
  1.4 Inverses; Algebraic Properties of Matrices  31
  1.5 Elementary Matrices and a Method for Finding A^-1  42
  1.6 More on Linear Systems and Invertible Matrices  49
  1.7 Diagonal, Triangular, and Symmetric Matrices  54
  1.8 Applications of Linear Systems  60
      - Network Analysis (Traffic Flow)  60
      - Electrical Circuits  62
      - Balancing Chemical Equations  64
      - Polynomial Interpolation  67
  1.9 Leontief Input-Output Models  70

CHAPTER 2  Determinants  91
  2.1 Determinants by Cofactor Expansion  91
  2.2 Evaluating Determinants by Row Reduction  96
  2.3 Properties of Determinants; Cramer's Rule  104

CHAPTER 3  Euclidean Vector Spaces  117
  3.1 Vectors in 2-Space, 3-Space, and n-Space  117
  3.2 Norm, Dot Product, and Distance in R^n  127
  3.3 Orthogonality  138
  3.4 The Geometry of Linear Systems  148
  3.5 Cross Product  153

CHAPTER 4  General Vector Spaces  169
  4.1
Real Vector Spaces  169
  4.2 Subspaces  176
  4.3 Linear Independence  185
  4.4 Coordinates and Basis  194
  4.5 Dimension  201
  4.6 Change of Basis  208
  4.7 Row Space, Column Space, and Null Space  214
  4.8 Rank, Nullity, and the Fundamental Matrix Spaces  225
  4.9 Matrix Transformations from R^n to R^m  235
  4.10 Properties of Matrix Transformations  247
  4.11 Geometry of Matrix Operators on R^2  255
  4.12 Dynamical Systems and Markov Chains  269

CHAPTER 5  Eigenvalues and Eigenvectors  283
  5.1 Eigenvalues and Eigenvectors  283
  5.2 Diagonalization  291
  5.3 Complex Vector Spaces  310
  5.4 Differential Equations  321

CHAPTER 6  Inner Product Spaces  333
  6.1 Inner Products  333
  6.2 Angle and Orthogonality in Inner Product Spaces  341
  6.3 Gram-Schmidt Process; QR-Decomposition  347
  6.4 Best Approximation; Least Squares  358
  6.5 Least Squares Fitting to Data  367
  6.6 Function Approximation; Fourier Series  372

CHAPTER 7  Diagonalization and Quadratic Forms  389
  7.1 Orthogonal Matrices  389
  7.2 Orthogonal Diagonalization  396
  7.3 Quadratic Forms  402
  7.4 Optimization Using Quadratic Forms  413
  7.5 Hermitian, Unitary, and Normal Matrices  420

CHAPTER 8  Linear Transformations  435
  8.1 General Linear Transformations  435
  8.2 Isomorphism  444
  8.3 Compositions and Inverse Transformations  451
  8.4 Matrices for General Linear Transformations  456
  8.5 Similarity  463

CHAPTER 9  Numerical Methods  470
  9.1 LU-Decompositions  470
  9.2 The Power Method  480
  9.3 Internet Search Engines  485
  9.4 Comparison of Procedures for Solving Linear Systems  500
  9.5 Singular Value Decomposition  505
  9.6 Data Compression Using Singular Value Decomposition  512

CHAPTER 10  Applications of Linear Algebra  521
  10.1 Constructing Curves and Surfaces Through Specified Points  522
  10.2 Geometric Linear Programming  527
  10.3 The Earliest Applications of Linear Algebra  538
  10.4 Cubic Spline Interpolation  545
  10.5 Markov Chains  555
  10.6 Graph Theory  565
  10.7 Games of Strategy  574
  10.8 Leontief Economic Models  583
  10.9 Forest Management  592
  10.10 Computer Graphics  599
  10.11 Equilibrium Temperature Distributions  607
  10.12 Computed Tomography  617
  10.13 Fractals  626
  10.14 Chaos
  10.15 Cryptography
  10.16 Genetics
  10.17 Age-Specific Population Growth  670
  10.18 Harvesting of Animal Populations  688
  10.19 A Least Squares Model for Human Hearing  695
  10.20 Warps and Morphs  702

APPENDIX A  Working with Proofs
APPENDIX B  Complex Numbers  717

Answers to Exercises  725
Index  757

CHAPTER 1
Systems of Linear Equations and Matrices

CHAPTER CONTENTS
1.1 Introduction to Systems of Linear Equations  2
1.2 Gaussian Elimination  9
1.3 Matrices and Matrix Operations  20
1.4 Inverses; Algebraic Properties of Matrices  34
1.5 Elementary Matrices and a Method for Finding A^-1  42
1.6 More on Linear Systems and Invertible Matrices  48
1.7 Diagonal, Triangular, and Symmetric Matrices  54
1.8 Applications of Linear Systems  60
    - Network Analysis (Traffic Flow)  60
    - Electrical Circuits  62
    - Balancing Chemical Equations  64
    - Polynomial Interpolation  67
1.9 Leontief Input-Output Models  70

INTRODUCTION

Information in science, business, and mathematics is often organized into rows and columns to form rectangular arrays called "matrices" (plural of "matrix"). Matrices often appear as tables of numerical data that arise from physical observations, but they occur in various mathematical contexts as well. For example, we will see in this chapter that all of the information required to solve a system of equations such as

    5x + y = 3
    2x - y = 4

is embodied in the matrix

    [5   1   3]
    [2  -1   4]

and that the solution of the system can be obtained by performing appropriate operations on this matrix. This is particularly important in developing computer programs for solving systems of equations because computers are well suited for manipulating arrays of numerical information. However, matrices are not simply a notational tool for solving systems of equations; they can be viewed as mathematical objects in their own right, and there is a rich and important theory associated with them that has a multitude of practical applications. It is the study of matrices and related topics that forms the mathematical field that we call "linear algebra." In this chapter we will begin our study of matrices.
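The text itself uses no software, but the point made in the introduction, that the system's data live in a matrix and the solution comes from operating on that matrix, can be sketched in a few lines. The following is our own illustration using NumPy (an assumption, not part of the book): the coefficients and right-hand sides are stored as an array, and a library elimination routine recovers the solution.

```python
import numpy as np

# Augmented matrix of the introductory system
#   5x + y = 3
#   2x - y = 4
# (one row per equation, right-hand sides in the last column).
augmented = np.array([[5.0,  1.0, 3.0],
                      [2.0, -1.0, 4.0]])

A = augmented[:, :2]   # coefficient matrix
b = augmented[:, 2]    # right-hand side column

# np.linalg.solve performs elimination on these arrays internally.
x, y = np.linalg.solve(A, b)
print(x, y)   # x = 1.0, y = -2.0
```

Substituting back (A @ [x, y] equals b) confirms the answer, which is exactly the kind of matrix manipulation the chapter goes on to develop by hand.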
It is the study of matrices and related topics that forms the mathematical field that we eall “linear algebra.” In this chapter we will begin our study of matrices. 2 Chapter 1 Systems of Linear Equations and Matrices 11 Linear Equations Introduction to Systems of Linear Equations ‘Systems of lincar equations and their solutions constitute one of the major topics that we ‘ell stay inthis course. In this first section we wil introduce some basic terminology and discuss a mcthod for solving such systems. Recall that in two dimensions a line in a rectangular x)-coordinate system can he repre~ sented by an equation of the form ax thy =e (a,b mot both 0) and in three dimensions. plane in arectangular.xyz-coondinate system ean be represented. by an equation of the form ax +by-bec=d (a,b,c not all) These are examples of “linear equations,” the fist being a linear equation inthe variables cand y and the xecond a linear equation inthe variables.c, y, and z. More generally, we ddofine a fluear equation in the w variables 41,43, ++.9-4 ta be ane that canbe expressed in the forms ayn, bag, ++ + ate = b ay where a), a2, ...) dg and b ure constants, and the a's are not all zero, In the special case where 1 = 2 or n= 3, we will offen use variables without subscripts and write lincar equations ax ax-+azy=b (a), a; nat both 0) @ yx tay bay = b (ay, as, ay mot all 0) a In the special case where b = 0, iquation (1) has the form any Faas +o + ake = oy which is called a hamogeneous linear equation in the variables.xy..%)..-. 54: » EXAMPLE 1 Linear Equations (Observe that a linear equation does not involve any products oF roots of variables. A variables occur only to the first power and do not appear, for exampic, as arguments of trigonometric, logarithmic, orexponential functions. 
‘The following are linear equations: x43y=7 y-2y-3n tn qeny4+8 Babb The following are not linear equations: x+3ytad 3e+2y—ay sinx + y Vi +in tay < A finite set of tinear equations is called a system of linear equations or, moce briefly, a linear system. The variables arc called unknowns. For example. system (5) that follows. fas unknowns x and y, and system (6) has unknowns 1), x2, and x3, 1 Sxty=3 fy tis Qe-y=4 0 Bt +9 = oe ‘The double subscripting on the coefficients ay of the ua ‘knowns gives their Jocation in the system—the fist sub- serip indicates the equation ‘which the cootBelsat occurs, andthe seeorindicatoe which ‘waksown A eltiplis, Ths, 4a ini the frst equation and ‘utilis, Linear Systems in Two and Throe Unknowns Introduction to Systems of Linear Equations 3 A general linear system of m equations in the m unknowns 2,,."3, xx, can be written ues bangs +--+ aaa = angi tenn + --- tanta EXAMPLE 2 A Linear System with One Solution Solve the linear system xoysl tty =6 Solution We can eliminate x from the second equation by adding —2 times the first equation to the second. This yields the simplified system 1 4 ay ‘From the sccond equation we obtain {and on substituting this value in the first equation we obtain x = | + y = 7, Thus, the system has the unique solution xe} ys¢ jcometrically, this means that the lines represented by the equations in the system: inirsect atthe single point (7, ¢}, We leave it for you to check this by graphing the lines, > EXAMPLE 3 A Lingar System with No Solutions Solve the lincar system wey are3y Solution We can climinate x from the second equation by udding 3 times the first equation to the second equation, ‘This yiclds the simplified system hye 4 O= 6 The second equation ix contradictory, so the given rystern hax nasolution, Ceametrically, thix meansthat the linex corresponding tothe equations in the original xystem are paralie! 
tand distinct, We leave it for you to check this by graphing the lines or by shewing that they have the same slope but different y-intereepts, > EXAMPLE 4 ALLinear System with Infinitely Many Solutions ‘Solve the linear system 4x-2y=1 Vax = 8y ‘Solution We can climinate x from the second equation by adding —4 times the first equation to the second. This yields the simplified system Ax —2y= 0 ‘The second equation docs net impose any restrictions om x and y and hence can be ‘omitted. Thus, the solutions of the system are those values of x and y that satisfy the single equation 4x —2y @) ‘icometrically, this means the lines corresponding to the two cquations in the original system coincide. One way to describe the solution set isto solve this equation for x in terms fy toobtains = 4 + Lyandthenassign anarbitrary valuc t(called.a parameter) © Chapter 1 Systems of Linear Equations and Matrices in Example 4 we could have also obtained parametiiccqua- tions fo he solutions by say ing (8) fr y intermsot x, and Ketting «= 1 be the parsincte. ‘The resulting parametric qua ise woul Hook differest bat ‘woulddctine the samcxation set Augmented Matrices and Elomentary Row Operations ‘As noted in the introduction tovthis chapter, the acm “ma tris ia used in mathematica to denote a rectangular array of mumbers. tn. later scetion we will smdy matrices in de= tall, but Far Baw we: will only be concemed with augmented maces for Tikes systema. to y. This allows us to express the sol equations) by the pair of equations (called parametric sa}4ln y ‘We can obtain specific numerical sol 18 from these equations by substituting numeri- cal values forthe parameter. For example, ¢ = 0 yields the solution ({.0) ,¢ = I yields the solution (7,1), and f= —1 yields the solution (4, —1) . You eam confirm that these are solutions by substituting their coordinates into the given equations. 
EXAMPLE 5  A Linear System with Infinitely Many Solutions

Solve the linear system

    x -  y + 2z =  5
    2x - 2y + 4z = 10
    3x - 3y + 6z = 15

Solution  This system can be solved by inspection, since the second and third equations are multiples of the first. Geometrically, this means that the three planes coincide and that those values of x, y, and z that satisfy the equation

    x - y + 2z = 5    (9)

automatically satisfy all three equations. Thus, it suffices to find the solutions of (9). We can do this by first solving (9) for x in terms of y and z, then assigning arbitrary values r and s (parameters) to these two variables, and then expressing the solution by the three parametric equations

    x = 5 + r - 2s,    y = r,    z = s

Specific solutions can be obtained by choosing numerical values for the parameters r and s. For example, taking r = 1 and s = 0 yields the solution (6, 1, 0).

Augmented Matrices and Elementary Row Operations

As the number of equations and unknowns in a linear system increases, so does the complexity of the algebra involved in finding solutions. The required computations can be made more manageable by simplifying notation and standardizing procedures. For example, by mentally keeping track of the locations of the +'s, the x's, and the ='s in the linear system

    a11x1 + a12x2 + ... + a1nxn = b1
    a21x1 + a22x2 + ... + a2nxn = b2
       .       .             .     .
    am1x1 + am2x2 + ... + amnxn = bm

we can abbreviate the system by writing only the rectangular array of numbers

    [a11  a12  ...  a1n  b1]
    [a21  a22  ...  a2n  b2]
    [  .    .         .   .]
    [am1  am2  ...  amn  bm]

This is called the augmented matrix for the system. For example, the augmented matrix for the system of equations

    x1 +  x2 + 2x3 = 9
    2x1 + 4x2 - 3x3 = 1
    3x1 + 6x2 - 5x3 = 0

is

    [1  1   2  9]
    [2  4  -3  1]
    [3  6  -5  0]

The basic method for solving a linear system is to perform algebraic operations on the system that do not alter the solution set and that produce a succession of increasingly simpler systems, until a point is reached where it can be ascertained whether the system is consistent, and if so, what its solutions are. Typically, the algebraic operations are:

1. Multiply an equation through by a nonzero constant.
2.
Interchange two equations.
3. Add a constant times one equation to another.

Since the rows (horizontal lines) of an augmented matrix correspond to the equations in the associated system, these three operations correspond to the following operations on the rows of the augmented matrix:

1. Multiply a row through by a nonzero constant.
2. Interchange two rows.
3. Add a constant times one row to another.

These are called elementary row operations on a matrix.

In the following example we will illustrate how to use elementary row operations and an augmented matrix to solve a linear system in three unknowns. Since a systematic procedure for solving linear systems will be developed in the next section, do not worry about how the steps in the example were chosen. Your objective here should be simply to understand the computations.

EXAMPLE 6  Using Elementary Row Operations

In the left column we solve a system of linear equations by operating on the equations in the system, and in the right column we solve the same system by operating on the rows of the augmented matrix.

    x +  y + 2z = 9           [1  1   2  9]
    2x + 4y - 3z = 1           [2  4  -3  1]
    3x + 6y - 5z = 0           [3  6  -5  0]

Add -2 times the first equation to the second (add -2 times the first row to the second) to obtain

    x +  y + 2z =  9           [1  1   2    9]
        2y - 7z = -17          [0  2  -7  -17]
    3x + 6y - 5z =  0          [3  6  -5    0]

Add -3 times the first equation to the third (add -3 times the first row to the third) to obtain

    x +  y +  2z =  9          [1  1    2    9]
        2y -  7z = -17         [0  2   -7  -17]
        3y - 11z = -27         [0  3  -11  -27]

Multiply the second equation by 1/2 (multiply the second row by 1/2) to obtain

    x + y +     2z =  9        [1  1     2      9]
        y - (7/2)z = -17/2     [0  1  -7/2  -17/2]
        3y -  11z  = -27       [0  3   -11    -27]

Add -3 times the second equation to the third (add -3 times the second row to the third) to obtain

    x + y +     2z =  9        [1  1     2      9]
        y - (7/2)z = -17/2     [0  1  -7/2  -17/2]
           -(1/2)z = -3/2      [0  0  -1/2   -3/2]

Multiply the third equation by -2 (multiply the third row by -2) to obtain

    x + y +     2z =  9        [1  1     2      9]
        y - (7/2)z = -17/2     [0  1  -7/2  -17/2]
                 z =  3        [0  0     1      3]

Add -1 times the second equation to the first (add -1 times the second row to the first) to obtain

    x + (11/2)z =  35/2        [1  0  11/2   35/2]
    y -  (7/2)z = -17/2        [0  1  -7/2  -17/2]
              z =   3          [0  0     1      3]

Add -11/2 times the third equation to the first and 7/2 times the third equation to the second (the corresponding row operations) to obtain

    x = 1                      [1  0  0  1]
    y = 2                      [0  1  0  2]
    z = 3                      [0  0  1  3]

The solution x = 1, y = 2, z = 3 is now evident.

Concept Review
+ Linear equation
+ Homogeneous linear equation
+ System of linear equations
+ Solution of a linear system
+ Ordered n-tuple
+ Parameter
+ Consistent linear system
+ Inconsistent linear system
+ Parametric equations
+ Augmented matrix
+ Elementary row operations

Skills
+ Determine whether a given equation is linear.
+ Determine whether a given n-tuple is a solution of a linear system.
+ Find the augmented matrix of a linear system.
+ Find the linear system corresponding to a given augmented matrix.
+ Perform elementary row operations on a linear system and on its corresponding augmented matrix.
+ Determine whether a linear system is consistent or inconsistent.
+ Find the set of solutions to a consistent linear system.

True-False Exercises

In parts (a)-(h) determine whether the statement is true or false, and justify your answer.

(a) A linear system whose equations are all homogeneous must have a unique solution.

(b) Multiplying a linear equation through by zero is an acceptable elementary row operation.

(c) The linear system
(g) If the number of unknowns in a linear system exceeds the number of equations, then the system must be consistent.

(h) The linear system with corresponding augmented matrix

        [ 2  -1 |  4 ]
        [ 0   0 | -1 ]

    is consistent.

1.2 Gaussian Elimination

In this section we will develop a systematic procedure for solving systems of linear equations. The procedure is based on the idea of performing certain operations on the rows of the augmented matrix that simplify it to a form from which the solution of the system can be ascertained by inspection.

Considerations in Solving Linear Systems

When considering methods for solving systems of linear equations, it is important to distinguish between large systems that must be solved by computer and small systems that can be solved by hand. For example, there are many applications that lead to linear systems in thousands or even millions of unknowns. Large systems require special techniques to deal with issues of memory size, roundoff errors, solution time, and so forth. Such techniques are studied in the field of numerical analysis and will only be touched on in this text. However, almost all of the methods that are used for large systems are based on the ideas that we will develop in this section.

In Example 6 of the last section, we solved a linear system in the unknowns x, y, and z by reducing the augmented matrix to the form

    [ 1  0  0 | 1 ]
    [ 0  1  0 | 2 ]
    [ 0  0  1 | 3 ]

from which the solution x = 1, y = 2, z = 3 became evident. This is an example of a matrix that is in reduced row echelon form. To be of this form, a matrix must have the following properties:

1. If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1. We call this a leading 1.
2. If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix.
3. In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row.
4. Each column that contains a leading 1 has zeros everywhere else in that column.

A matrix that has the first three properties is said to be in row echelon form. (Thus, a matrix in reduced row echelon form is of necessity in row echelon form, but not conversely.)

> EXAMPLE 1  Row Echelon and Reduced Row Echelon Form

The following matrices are in reduced row echelon form.

    [ 1  0  0   4 ]    [ 1  0  0 ]    [ 0  1 -2  0  1 ]
    [ 0  1  0   7 ]    [ 0  1  0 ]    [ 0  0  0  1  3 ]    [ 0  0 ]
    [ 0  0  1  -1 ]    [ 0  0  1 ]    [ 0  0  0  0  0 ]    [ 0  0 ]
                                      [ 0  0  0  0  0 ]

The following matrices are in row echelon form but not reduced row echelon form.

    [ 1  4  -3  7 ]    [ 1  1  0 ]    [ 0  1  2   6  0 ]
    [ 0  1   6  2 ]    [ 0  1  0 ]    [ 0  0  1  -1  0 ]
    [ 0  0   1  5 ]    [ 0  0  0 ]    [ 0  0  0   0  1 ]

> EXAMPLE 2  More on Row Echelon and Reduced Row Echelon Form

As Example 1 illustrates, a matrix in row echelon form has zeros below each leading 1, whereas a matrix in reduced row echelon form has zeros below and above each leading 1. Thus, with any real numbers substituted for the *'s, all matrices of the following types are in row echelon form:

    [ 1  *  *  * ]    [ 1  *  *  * ]    [ 1  *  *  * ]    [ 0  1  *  *  *  *  *  *  *  * ]
    [ 0  1  *  * ]    [ 0  1  *  * ]    [ 0  1  *  * ]    [ 0  0  0  1  *  *  *  *  *  * ]
    [ 0  0  1  * ]    [ 0  0  1  * ]    [ 0  0  0  0 ]    [ 0  0  0  0  1  *  *  *  *  * ]
    [ 0  0  0  1 ]    [ 0  0  0  0 ]    [ 0  0  0  0 ]    [ 0  0  0  0  0  1  *  *  *  * ]
                                                          [ 0  0  0  0  0  0  0  0  1  * ]
All matrices of the following types are in reduced row echelon form:

    [ 1  0  0  0 ]    [ 1  0  0  * ]    [ 1  0  *  * ]    [ 0  1  *  0  0  0  *  *  0  * ]
    [ 0  1  0  0 ]    [ 0  1  0  * ]    [ 0  1  *  * ]    [ 0  0  0  1  0  0  *  *  0  * ]
    [ 0  0  1  0 ]    [ 0  0  1  * ]    [ 0  0  0  0 ]    [ 0  0  0  0  1  0  *  *  0  * ]
    [ 0  0  0  1 ]    [ 0  0  0  0 ]    [ 0  0  0  0 ]    [ 0  0  0  0  0  1  *  *  0  * ]
                                                          [ 0  0  0  0  0  0  0  0  1  * ]

If, by a sequence of elementary row operations, the augmented matrix for a system of linear equations is put in reduced row echelon form, then the solution set can be obtained either by inspection or by converting certain linear equations to parametric form. Here are some examples.

> EXAMPLE 3  Unique Solution

Suppose that the augmented matrix for a linear system in the unknowns x1, x2, x3, and x4 has been reduced by elementary row operations to

    [ 1  0  0  0 |  3 ]
    [ 0  1  0  0 | -1 ]
    [ 0  0  1  0 |  0 ]
    [ 0  0  0  1 |  5 ]

This matrix is in reduced row echelon form and corresponds to the equations

    x1 = 3,  x2 = -1,  x3 = 0,  x4 = 5

Thus, the system has a unique solution, namely, x1 = 3, x2 = -1, x3 = 0, x4 = 5. (In Example 3 we could, if desired, express the solution more succinctly as the 4-tuple (3, -1, 0, 5).)

> EXAMPLE 4  Linear Systems in Three Unknowns

In each part, suppose that the augmented matrix for a linear system in the unknowns x, y, and z has been reduced by elementary row operations to the given reduced row echelon form. Solve the system.

    (a) [ 1  0  0 | 0 ]    (b) [ 1  0   3 | -1 ]    (c) [ 1  -5  1 | 4 ]
        [ 0  1  2 | 0 ]        [ 0  1  -4 |  2 ]        [ 0   0  0 | 0 ]
        [ 0  0  0 | 1 ]        [ 0  0   0 |  0 ]        [ 0   0  0 | 0 ]

Solution (a)  The equation that corresponds to the last row of the augmented matrix is

    0x + 0y + 0z = 1

Since this equation is not satisfied by any values of x, y, and z, the system is inconsistent.

Solution (b)  The equation that corresponds to the last row of the augmented matrix is

    0x + 0y + 0z = 0

This equation can be omitted since it imposes no restrictions on x, y, and z; hence, the linear system corresponding to the augmented matrix is

    x     + 3z = -1
        y - 4z =  2

Since x and y correspond to the leading 1's in the augmented matrix, we call these the leading variables. The remaining variables (in this case z) are called free variables.
Solving for the leading variables in terms of the free variable gives

    x = -1 - 3z
    y =  2 + 4z

From these equations we see that the free variable z can be treated as a parameter and assigned an arbitrary value t, which then determines values for x and y. Thus, the solution set can be represented by the parametric equations

    x = -1 - 3t,  y = 2 + 4t,  z = t

By substituting various values for t in these equations we can obtain various solutions of the system. For example, setting t = 0 yields the solution

    x = -1,  y = 2,  z = 0

and setting t = 1 yields the solution

    x = -4,  y = 6,  z = 1

Solution (c)  As explained in part (b), we can omit the equations corresponding to the zero rows, in which case the linear system associated with the augmented matrix consists of the single equation

    x - 5y + z = 4                                                  (1)

from which we see that the solution set is a plane in three-dimensional space. Although (1) is a valid form of the solution set, there are many applications in which it is preferable to express the solution set in parametric form. We can convert (1) to parametric form by solving for the leading variable x in terms of the free variables y and z to obtain

    x = 4 + 5y - z

From this equation we see that the free variables can be assigned arbitrary values, say y = s and z = t, which then determine the value of x. Thus, the solution set can be expressed parametrically as

    x = 4 + 5s - t,  y = s,  z = t                                  (2)

(We will usually denote parameters in a general solution by the letters r, s, t, ..., but any letters that do not conflict with the names of the unknowns can be used. For systems with more than three unknowns, subscripted letters such as t1, t2, t3, ... are convenient.)

Formulas, such as (2), that express the solution set of a linear system parametrically have some associated terminology.

DEFINITION 1  If a linear system has infinitely many solutions, then a set of parametric equations from which all solutions can be obtained by assigning numerical values to the parameters is called a general solution of the system.
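A general solution can be checked mechanically: every choice of parameter values must satisfy the system. The sketch below is not from the text; it uses NumPy to verify the parametric solution of Example 4(b) for several values of the parameter t.

```python
import numpy as np

# Coefficient matrix and right side of the reduced system in Example 4(b):
# x + 3z = -1 and y - 4z = 2.
R = np.array([[1.0, 0.0,  3.0],
              [0.0, 1.0, -4.0]])
b = np.array([-1.0, 2.0])

def solution(t):
    """Parametric solution x = -1 - 3t, y = 2 + 4t, z = t."""
    return np.array([-1.0 - 3.0 * t, 2.0 + 4.0 * t, t])

# Every value of the parameter t should satisfy R @ [x, y, z] = b.
for t in (0.0, 1.0, -2.5):
    assert np.allclose(R @ solution(t), b)
```

Substituting t = 0 and t = 1 reproduces the two particular solutions listed above.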
Elimination Methods

We have just seen how easy it is to solve a system of linear equations once its augmented matrix is in reduced row echelon form. Now we will give a step-by-step elimination procedure that can be used to reduce any matrix to reduced row echelon form. As we state each step in the procedure, we illustrate the idea by reducing the following matrix to reduced row echelon form.

    [ 0  0   -2  0    7  12 ]
    [ 2  4  -10  6   12  28 ]
    [ 2  4   -5  6   -5  -1 ]

Step 1. Locate the leftmost column that does not consist entirely of zeros. (Here, that is the first column.)

Step 2. Interchange the top row with another row, if necessary, to bring a nonzero entry to the top of the column found in Step 1. (The first and second rows are interchanged.)

    [ 2  4  -10  6   12  28 ]
    [ 0  0   -2  0    7  12 ]
    [ 2  4   -5  6   -5  -1 ]

Step 3. If the entry that is now at the top of the column found in Step 1 is a, multiply the first row by 1/a in order to introduce a leading 1. (The first row is multiplied by 1/2.)

    [ 1  2   -5  3    6  14 ]
    [ 0  0   -2  0    7  12 ]
    [ 2  4   -5  6   -5  -1 ]

Step 4. Add suitable multiples of the top row to the rows below so that all entries below the leading 1 become zeros. (-2 times the first row is added to the third row.)

    [ 1  2  -5  3    6   14 ]
    [ 0  0  -2  0    7   12 ]
    [ 0  0   5  0  -17  -29 ]

Step 5. Now cover the top row in the matrix and begin again with Step 1 applied to the submatrix that remains. Continue in this way until the entire matrix is in row echelon form.

The leftmost nonzero column of the submatrix is its first column, so its top row is multiplied by -1/2 to introduce a leading 1:

    [ 1  2  -5  3     6   14 ]
    [ 0  0   1  0  -7/2   -6 ]
    [ 0  0   5  0   -17  -29 ]

Then -5 times the top row of the submatrix is added to the row below it:

    [ 1  2  -5  3     6   14 ]
    [ 0  0   1  0  -7/2   -6 ]
    [ 0  0   0  0   1/2    1 ]

The top row of the submatrix is now covered, and we return to Step 1. The new top (and only remaining) row is multiplied by 2 to introduce a leading 1:

    [ 1  2  -5  3     6   14 ]
    [ 0  0   1  0  -7/2   -6 ]
    [ 0  0   0  0     1    2 ]

The entire matrix is now in row echelon form. To find the reduced row echelon form we need the following additional step.

Step 6.
Beginning with the last nonzero row and working upward, add suitable multiples of each row to the rows above to introduce zeros above the leading 1's.

7/2 times the third row is added to the second row, and -6 times the third row is added to the first row:

    [ 1  2  -5  3  0  2 ]
    [ 0  0   1  0  0  1 ]
    [ 0  0   0  0  1  2 ]

5 times the second row is added to the first row:

    [ 1  2  0  3  0  7 ]
    [ 0  0  1  0  0  1 ]
    [ 0  0  0  0  1  2 ]

The last matrix is in reduced row echelon form.

The procedure (or algorithm) we have just described for reducing a matrix to reduced row echelon form is called Gauss-Jordan elimination. This algorithm consists of two parts, a forward phase in which zeros are introduced below the leading 1's and a backward phase in which zeros are introduced above the leading 1's. If only the forward phase is used, then the procedure produces a row echelon form only and is called Gaussian elimination. For example, in the preceding computations a row echelon form was obtained at the end of Step 5.

> EXAMPLE 5  Gauss-Jordan Elimination

Solve by Gauss-Jordan elimination.

     x1 + 3x2 - 2x3       + 2x5        =  0
    2x1 + 6x2 - 5x3 - 2x4 + 4x5 - 3x6  = -1
               5x3 + 10x4       + 15x6 =  5
    2x1 + 6x2       + 8x4 + 4x5 + 18x6 =  6

Solution  The augmented matrix for the system is
    [ 1  3  -2   0  2   0 |  0 ]
    [ 2  6  -5  -2  4  -3 | -1 ]
    [ 0  0   5  10  0  15 |  5 ]
    [ 2  6   0   8  4  18 |  6 ]

Adding -2 times the first row to the second and fourth rows gives

    [ 1  3  -2   0  2   0 |  0 ]
    [ 0  0  -1  -2  0  -3 | -1 ]
    [ 0  0   5  10  0  15 |  5 ]
    [ 0  0   4   8  0  18 |  6 ]

Multiplying the second row by -1 and then adding -5 times the new second row to the third row and -4 times the new second row to the fourth row gives

    [ 1  3  -2  0  2  0 | 0 ]
    [ 0  0   1  2  0  3 | 1 ]
    [ 0  0   0  0  0  0 | 0 ]
    [ 0  0   0  0  0  6 | 2 ]

Interchanging the third and fourth rows and then multiplying the third row of the resulting matrix by 1/6 gives the row echelon form

    [ 1  3  -2  0  2  0 |  0  ]
    [ 0  0   1  2  0  3 |  1  ]
    [ 0  0   0  0  0  1 | 1/3 ]
    [ 0  0   0  0  0  0 |  0  ]

(This completes the forward phase, since there are zeros below the leading 1's.)

Adding -3 times the third row to the second row and then adding 2 times the second row of the resulting matrix to the first row yields the reduced row echelon form

    [ 1  3  0  4  2  0 |  0  ]
    [ 0  0  1  2  0  0 |  0  ]
    [ 0  0  0  0  0  1 | 1/3 ]
    [ 0  0  0  0  0  0 |  0  ]

(This completes the backward phase, since there are zeros above the leading 1's.)

The corresponding system of equations is

    x1 + 3x2      + 4x4 + 2x5 =  0
               x3 + 2x4       =  0
                          x6  = 1/3                                 (3)

(Note that in constructing the linear system in (3) we ignored the row of zeros in the corresponding augmented matrix. Why is this justified?)

Solving for the leading variables we obtain

    x1 = -3x2 - 4x4 - 2x5
    x3 = -2x4
    x6 = 1/3

Finally, we express the general solution of the system parametrically by assigning the free variables x2, x4, and x5 arbitrary values r, s, and t, respectively. This yields

    x1 = -3r - 4s - 2t,  x2 = r,  x3 = -2s,  x4 = s,  x5 = t,  x6 = 1/3

Homogeneous Linear Systems

A system of linear equations is said to be homogeneous if the constant terms are all zero; that is, the system has the form

    a11 x1 + a12 x2 + ... + a1n xn = 0
    a21 x1 + a22 x2 + ... + a2n xn = 0
       .        .                .
    am1 x1 + am2 x2 + ... + amn xn = 0

Every homogeneous system of linear equations is consistent because all such systems have x1 = 0, x2 = 0, ..., xn = 0 as a solution. This solution is called the trivial solution; if there are other solutions, they are called nontrivial solutions.
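The six-step procedure described above can be condensed into a short routine. The sketch below is not from the text: the function name rref is our own choice, it uses Python's fractions module for exact arithmetic, and it clears entries above and below each leading 1 in a single pass (combining the forward and backward phases), which yields the same reduced row echelon form. It reproduces the reduction of the matrix used to illustrate Steps 1-6.

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix to reduced row echelon form by Gauss-Jordan
    elimination, using exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    r = 0                                      # row where the next leading 1 goes
    for c in range(n):                         # Step 1: scan columns left to right
        p = next((i for i in range(r, m) if A[i][c] != 0), None)
        if p is None:
            continue                           # column has no usable nonzero entry
        A[r], A[p] = A[p], A[r]                # Step 2: interchange rows
        piv = A[r][c]
        A[r] = [x / piv for x in A[r]]         # Step 3: introduce a leading 1
        for i in range(m):                     # Steps 4 and 6: clear the column
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == m:
            break
    return A

M = [[0, 0,  -2, 0,  7, 12],
     [2, 4, -10, 6, 12, 28],
     [2, 4,  -5, 6, -5, -1]]
R = rref(M)
assert R == [[1, 2, 0, 3, 0, 7],
             [0, 0, 1, 0, 0, 1],
             [0, 0, 0, 0, 1, 2]]
```

Using Fraction instead of floating point sidesteps the roundoff issues discussed later in this section, at the cost of speed on large systems.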
Because a homogeneous linear system always has the trivial solution, there are only two possibilities for its solutions:

- The system has only the trivial solution.
- The system has infinitely many solutions in addition to the trivial solution.

In the special case of a homogeneous linear system of two equations in two unknowns, say

    a1 x + b1 y = 0    (a1, b1 not both zero)
    a2 x + b2 y = 0    (a2, b2 not both zero)

the graphs of the equations are lines through the origin, and the trivial solution corresponds to the point of intersection at the origin (Figure 1.2.1: if the lines coincide there are infinitely many solutions; if they intersect only at the origin there is only the trivial solution).

There is one case in which a homogeneous system is assured of having nontrivial solutions: namely, whenever the system involves more unknowns than equations. To see why, consider the following example of four equations in six unknowns.

Free Variables in Homogeneous Linear Systems

> EXAMPLE 6  A Homogeneous System

Use Gauss-Jordan elimination to solve the homogeneous linear system

     x1 + 3x2 - 2x3       + 2x5        = 0
    2x1 + 6x2 - 5x3 - 2x4 + 4x5 - 3x6  = 0
               5x3 + 10x4       + 15x6 = 0
    2x1 + 6x2       + 8x4 + 4x5 + 18x6 = 0                          (4)

Solution  Observe first that the coefficients of the unknowns in this system are the same as those in Example 5; that is, the two systems differ only in the constants on the right side. The augmented matrix for the given homogeneous system is

    [ 1  3  -2   0  2   0 | 0 ]
    [ 2  6  -5  -2  4  -3 | 0 ]
    [ 0  0   5  10  0  15 | 0 ]
    [ 2  6   0   8  4  18 | 0 ]                                     (5)

which is the same as the augmented matrix for the system in Example 5, except for zeros in the last column. Thus, the reduced row echelon form of this matrix will be the same as that of the augmented matrix in Example 5, except for the last column. However, a moment's reflection will make it evident that a column of zeros is not changed by an
elementary row operation, so the reduced row echelon form of (5) is

    [ 1  3  0  4  2  0 | 0 ]
    [ 0  0  1  2  0  0 | 0 ]
    [ 0  0  0  0  0  1 | 0 ]
    [ 0  0  0  0  0  0 | 0 ]                                        (6)

The corresponding system of equations is

    x1 + 3x2      + 4x4 + 2x5 = 0
               x3 + 2x4       = 0
                          x6  = 0

Solving for the leading variables we obtain

    x1 = -3x2 - 4x4 - 2x5
    x3 = -2x4
    x6 = 0

If we now assign the free variables x2, x4, and x5 arbitrary values r, s, and t, respectively, then we can express the solution set parametrically as

    x1 = -3r - 4s - 2t,  x2 = r,  x3 = -2s,  x4 = s,  x5 = t,  x6 = 0    (7)

Note that the trivial solution results when r = s = t = 0.

Example 6 illustrates two important points about solving homogeneous linear systems:

1. Elementary row operations do not alter columns of zeros in a matrix, so the reduced row echelon form of the augmented matrix for a homogeneous linear system has a final column of zeros. This implies that the linear system corresponding to the reduced row echelon form is homogeneous, just like the original system.

2. When we constructed the homogeneous linear system corresponding to augmented matrix (6), we ignored the row of zeros because the corresponding equation

    0x1 + 0x2 + 0x3 + 0x4 + 0x5 + 0x6 = 0

does not impose any conditions on the unknowns. Thus, depending on whether or not the reduced row echelon form of the augmented matrix for a homogeneous linear system has any rows of zeros, the linear system corresponding to that reduced row echelon form will either have the same number of equations as the original system or it will have fewer.

Now consider a general homogeneous linear system with n unknowns, and suppose that the reduced row echelon form of the augmented matrix has r nonzero rows.
Since each nonzero row has a leading 1, and since each leading 1 corresponds to a leading variable, the homogeneous system corresponding to the reduced row echelon form of the augmented matrix must have r leading variables and n - r free variables. Thus, this system is of the form

    x_k1 + SUM( ) = 0
    x_k2 + SUM( ) = 0
       .        .
    x_kr + SUM( ) = 0                                               (8)

where in each equation the expression SUM( ) denotes a sum that involves the free variables, if any [see (7), for example]. In summary, we have the following result.

THEOREM 1.2.1  Free Variable Theorem for Homogeneous Systems
If a homogeneous linear system has n unknowns, and if the reduced row echelon form of its augmented matrix has r nonzero rows, then the system has n - r free variables.

Theorem 1.2.1 has an important implication for homogeneous linear systems with more unknowns than equations. Specifically, if a homogeneous linear system has m equations in n unknowns, and if m < n, then r (which is at most m) satisfies r < n, so by Theorem 1.2.1 there is at least one free variable and hence infinitely many solutions. This yields the following theorem.

THEOREM 1.2.2  A homogeneous linear system with more unknowns than equations has infinitely many solutions.

(Note that Theorem 1.2.2 applies only to homogeneous systems. A nonhomogeneous system with more unknowns than equations need not be consistent. However, we will prove later that if a nonhomogeneous system with more unknowns than equations is consistent, then it has infinitely many solutions.)

Gaussian Elimination and Back-Substitution

> EXAMPLE  Existence and Uniqueness from Row Echelon Forms

Suppose that the matrices below are augmented matrices for linear systems in the unknowns x1, x2, x3, and x4. These matrices are all in row echelon form but not reduced row echelon form. Discuss the existence and uniqueness of solutions to the corresponding linear systems.

    (a) [ 1  -3  7   2 | 8 ]    (b) [ 1  -3  7   2 | 8 ]    (c) [ 1  -3  7   2 | 8 ]
        [ 0   1  2  -4 | 1 ]        [ 0   1  2  -4 | 1 ]        [ 0   1  2  -4 | 1 ]
        [ 0   0  1   6 | 9 ]        [ 0   0  1   6 | 9 ]        [ 0   0  1   6 | 9 ]
        [ 0   0  0   0 | 1 ]        [ 0   0  0   0 | 0 ]        [ 0   0  0   1 | 0 ]

Solution (a)  The last row corresponds to the equation

    0x1 + 0x2 + 0x3 + 0x4 = 1

from which it is evident that the system is inconsistent.

Solution (b)  The last row corresponds to the equation

    0x1 + 0x2 + 0x3 + 0x4 = 0

which has no effect on the solution set. In the remaining three equations the variables x1, x2, and x3 correspond to leading 1's and hence are leading variables. The variable x4 is a free variable. With a little algebra, the leading variables can be expressed in terms of the free variable, and the free variable can be assigned an arbitrary value. Thus, the system must have infinitely many solutions.
Solution (c)  The last row corresponds to the equation

    x4 = 0

which gives us a numerical value for x4. If we substitute this value into the third equation, namely,

    x3 + 6x4 = 9

we obtain x3 = 9. You should now be able to see that if we continue this process and substitute the known values of x3 and x4 into the equation corresponding to the second row, we will obtain a unique numerical value for x2; and if, finally, we substitute the known values of x2, x3, and x4 into the equation corresponding to the first row, we will produce a unique numerical value for x1. Thus, the system has a unique solution.

Some Facts About Echelon Forms

There are three facts about row echelon forms and reduced row echelon forms that are important to know but we will not prove:

1. Every matrix has a unique reduced row echelon form; that is, regardless of whether you use Gauss-Jordan elimination or some other sequence of elementary row operations, the same reduced row echelon form will result in the end.*

2. Row echelon forms are not unique; that is, different sequences of elementary row operations can result in different row echelon forms.

3. Although row echelon forms are not unique, the reduced row echelon form and all row echelon forms of a matrix A have the same number of zero rows, and the leading 1's always occur in the same positions in the row echelon forms of A. These are called the pivot positions of A. A column that contains a pivot position is called a pivot column of A.

> EXAMPLE  Pivot Positions and Columns

Earlier in this section (immediately after Definition 1) we found a row echelon form of

    [ 0  0   -2  0    7  12 ]
    [ 2  4  -10  6   12  28 ]
    [ 2  4   -5  6   -5  -1 ]

to be

    [ 1  2  -5  3     6   14 ]
    [ 0  0   1  0  -7/2   -6 ]
    [ 0  0   0  0     1    2 ]

The leading 1's occur in positions (row 1, column 1), (row 2, column 3), and (row 3, column 5). These are the pivot positions. The pivot columns are columns 1, 3, and 5.

Roundoff Error and Instability

There is often a gap between mathematical theory and its practical implementation; Gauss-Jordan elimination and Gaussian elimination are good examples.
The problem is that computers generally approximate numbers, thereby introducing roundoff errors, so unless precautions are taken, successive calculations may degrade an answer to a degree that makes it useless. Algorithms (procedures) in which this happens are called unstable. There are various techniques for minimizing roundoff error and instability. For example, it can be shown that for large linear systems Gauss-Jordan elimination involves roughly 50% more operations than Gaussian elimination, so most computer algorithms are based on the latter method. Some of these matters will be considered in Chapter 9.

*A proof of this result can be found in the article "The Reduced Row Echelon Form of a Matrix Is Unique: A Simple Proof" by Thomas Yuster, Mathematics Magazine, Vol. 57, No. 2, 1984, pp. 93-94.

Concept Review
- Reduced row echelon form
- Row echelon form
- Leading 1
- Leading variables
- Free variables
- General solution to a linear system
- Gaussian elimination
- Gauss-Jordan elimination
- Forward phase
- Backward phase
- Homogeneous linear system
- Trivial solution
- Nontrivial solution
- Dimension Theorem for Homogeneous Systems
- Back-substitution

Skills
- Recognize whether a given matrix is in row echelon form, reduced row echelon form, or neither.
- Construct solutions to linear systems whose corresponding augmented matrices are in row echelon form or reduced row echelon form.

1.3 Matrices and Matrix Operations

> EXAMPLE 1  Examples of Matrices

Some examples of matrices are

    [  1  2 ]                       [ e  pi  -√2 ]     [ 1 ]
    [  3  0 ],   [ 2  1  0  -3 ],   [ 0  1/2   1 ],    [ 3 ],   [ 4 ]
    [ -1  4 ]                       [ 0   0    0 ]

The size of a matrix is described in terms of the number of rows (horizontal lines) and columns (vertical lines) it contains. For example, the first matrix in Example 1 has three rows and two columns, so its size is 3 by 2 (written 3 x 2). In a size description, the first number always denotes the number of rows and the second denotes the number of columns.
The remaining matrices in Example 1 have sizes 1 x 4, 3 x 3, 2 x 1, and 1 x 1, respectively.

(Matrix brackets are often omitted from 1 x 1 matrices, making it impossible to tell, for example, whether the symbol 4 denotes the number "four" or the matrix [4]. This rarely causes problems because it is usually possible to tell which is meant from the context.)

We will use capital letters to denote matrices and lowercase letters to denote numerical quantities; thus we might write

    A = [ a  b  c ]    or    C = [ 1  2 ]
        [ d  e  f ]              [ 3  4 ]

When discussing matrices, it is common to refer to numerical quantities as scalars. Unless stated otherwise, scalars will be real numbers; complex scalars will be considered later in the text.

The entry that occurs in row i and column j of a matrix A will be denoted by a_ij. Thus a general 3 x 4 matrix might be written as

    A = [ a11  a12  a13  a14 ]
        [ a21  a22  a23  a24 ]
        [ a31  a32  a33  a34 ]

and a general m x n matrix as

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [  .    .          .  ]
        [ am1  am2  ...  amn ]                                      (1)

When a compact notation is desired, the preceding matrix can be written as

    [a_ij]_(m x n)   or   [a_ij]

the first notation being used when it is important in the discussion to know the size, and the second when the size need not be emphasized. Usually, we will match the letter denoting a matrix with the letter denoting its entries; thus, for a matrix B we would generally use b_ij for the entry in row i and column j, and for a matrix C we would use the notation c_ij.
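The double-subscript notation maps directly onto array indexing in software, with the caveat that most languages count from zero. A small sketch, not from the text, using NumPy:

```python
import numpy as np

# A 3 x 4 matrix [a_ij]; in zero-based NumPy indexing the textbook
# entry a_ij sits at A[i - 1, j - 1].
A = np.array([[10, 11, 12, 13],
              [20, 21, 22, 23],
              [30, 31, 32, 33]])

m, n = A.shape           # size m x n: rows first, then columns
assert (m, n) == (3, 4)

a_23 = A[2 - 1, 3 - 1]   # the entry in row 2, column 3
assert a_23 == 22
```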
The entry in row i and column j of a matrix A is also commonly denoted by the symbol (A)_ij. Thus, for matrix (1) above, we have

    (A)_ij = a_ij

and for the matrix

    A = [ 2  -3 ]
        [ 7   0 ]

we have (A)_11 = 2, (A)_12 = -3, (A)_21 = 7, and (A)_22 = 0.

Row and column vectors are of special importance, and it is common practice to denote them by boldface lowercase letters rather than capital letters. For such matrices, double subscripting of the entries is unnecessary. Thus a general 1 x n row vector a and a general m x 1 column vector b would be written as

    a = [ a1  a2  ...  an ]   and   b = [ b1 ]
                                        [ b2 ]
                                        [ .  ]
                                        [ bm ]

A matrix A with n rows and n columns is called a square matrix of order n, and the entries a11, a22, ..., ann in (2) are said to be on the main diagonal of A.

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [  .    .          .  ]
        [ an1  an2  ...  ann ]                                      (2)

Operations on Matrices

So far, we have used matrices to abbreviate the work in solving systems of linear equations. For other applications, however, it is desirable to develop an "arithmetic of matrices" in which matrices can be added, subtracted, and multiplied in a useful way. The remainder of this section will be devoted to developing this arithmetic.

DEFINITION 2  Two matrices are defined to be equal if they have the same size and their corresponding entries are equal.

(The equality of two matrices A = [a_ij] and B = [b_ij] of the same size can be expressed either by writing (A)_ij = (B)_ij or by writing a_ij = b_ij, where it is understood that the equalities hold for all values of i and j.)

> EXAMPLE 2  Equality of Matrices

Consider the matrices

    A = [ 2  1 ]    B = [ 2  1 ]    C = [ 2  1  0 ]
        [ 3  x ]        [ 3  5 ]        [ 3  4  0 ]

If x = 5, then A = B, but for all other values of x the matrices A and B are not equal, since not all of their corresponding entries are equal. There is no value of x for which A = C since A and C have different sizes.

DEFINITION 3  If A and B are matrices of the same size, then the sum A + B is the matrix obtained by adding the entries of B to the corresponding entries of A, and the difference A - B is the matrix obtained by subtracting the entries of B from the corresponding entries of A.
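Definitions 2 and 3 are entrywise: matrices are compared, added, and subtracted entry by entry. A minimal NumPy sketch, not from the text, with a pair of 3 x 4 matrices:

```python
import numpy as np

A = np.array([[ 2,  1, 0, 3],
              [-1,  0, 2, 4],
              [ 4, -2, 7, 0]])
B = np.array([[-4,  3,  5,  1],
              [ 2,  2,  0, -1],
              [ 3,  2, -4,  5]])

S = A + B   # entrywise sum (Definition 3)
D = A - B   # entrywise difference

# Equality in the sense of Definition 2: same size and equal entries.
assert S.shape == A.shape
assert np.array_equal(S, np.array([[-2, 4, 5, 4],
                                   [ 1, 2, 2, 3],
                                   [ 7, 0, 3, 5]]))
assert np.array_equal(D, np.array([[ 6, -2, -5,  2],
                                   [-3, -2,  2,  5],
                                   [ 1, -4, 11, -5]]))
```

Attempting A + C for a C of a different size raises an error, mirroring the rule that matrices of different sizes cannot be added or subtracted.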
Matrices of different sizes cannot be added or subtracted. In matrix notation, if A = [a_ij] and B = [b_ij] have the same size, then

    (A + B)_ij = (A)_ij + (B)_ij = a_ij + b_ij
    (A - B)_ij = (A)_ij - (B)_ij = a_ij - b_ij

> EXAMPLE 3  Addition and Subtraction

Consider the matrices

    A = [  2   1  0  3 ]    B = [ -4  3   5   1 ]    C = [ 1  1 ]
        [ -1   0  2  4 ]        [  2  2   0  -1 ]        [ 2  2 ]
        [  4  -2  7  0 ]        [  3  2  -4   5 ]

Then

    A + B = [ -2  4  5  4 ]    and    A - B = [  6  -2  -5   2 ]
            [  1  2  2  3 ]                   [ -3  -2   2   5 ]
            [  7  0  3  5 ]                   [  1  -4  11  -5 ]

The expressions A + C, B + C, A - C, and B - C are undefined.

DEFINITION 4  If A is any matrix and c is any scalar, then the product cA is the matrix obtained by multiplying each entry of the matrix A by c. The matrix cA is said to be a scalar multiple of A.

In matrix notation, if A = [a_ij], then

    (cA)_ij = c(A)_ij = c a_ij

> EXAMPLE 4  Scalar Multiples

For the matrices

    A = [ 2  3  4 ]    B = [  0  2   7 ]    C = [ 9  -6   3 ]
        [ 1  3  1 ]        [ -1  3  -5 ]        [ 3   0  12 ]

we have

    2A = [ 4  6  8 ]    (-1)B = [ 0  -2  -7 ]    (1/3)C = [ 3  -2  1 ]
         [ 2  6  2 ]            [ 1  -3   5 ]             [ 1   0  4 ]

It is common practice to denote (-1)B by -B.

Thus far we have defined multiplication of a matrix by a scalar but not the multiplication of two matrices. Since matrices are added by adding corresponding entries and subtracted by subtracting corresponding entries, it would seem natural to define multiplication of matrices by multiplying corresponding entries. However, it turns out that such a definition would not be very useful for most problems. Experience has led mathematicians to the following more useful definition of matrix multiplication.

DEFINITION 5  If A is an m x r matrix and B is an r x n matrix, then the product AB is the m x n matrix whose entries are determined as follows: To find the entry in row i and column j of AB, single out row i from the matrix A and column j from the matrix B. Multiply the corresponding entries from the row and column together, and then add up the resulting products.

> EXAMPLE 5  Multiplying Matrices

Consider the matrices

    A = [ 1  2  4 ]    B = [ 4   1  4  3 ]
        [ 2  6  0 ]        [ 0  -1  3  1 ]
                           [ 2   7  5  2 ]

Since A is a 2 x 3 matrix and B
is a 3 x 4 matrix, the product AB is a 2 x 4 matrix. To determine, for example, the entry in row 2 and column 3 of AB, we single out row 2 from A and column 3 from B. Then, as illustrated below, we multiply corresponding entries together and add up these products.

    [ 1  2  4 ] [ 4   1  4  3 ]   [ .  .   .  . ]
    [ 2  6  0 ] [ 0  -1  3  1 ] = [ .  .  26  . ]
                [ 2   7  5  2 ]

    (2 . 4) + (6 . 3) + (0 . 5) = 26

The entry in row 1 and column 4 of AB is computed as follows:

    (1 . 3) + (2 . 1) + (4 . 2) = 13

The computations for the remaining entries are

    (1 . 4) + (2 .   0)  + (4 . 2) = 12
    (1 . 1) + (2 . (-1)) + (4 . 7) = 27
    (1 . 4) + (2 .   3)  + (4 . 5) = 30
    (2 . 4) + (6 .   0)  + (0 . 2) =  8
    (2 . 1) + (6 . (-1)) + (0 . 7) = -4
    (2 . 3) + (6 .   1)  + (0 . 2) = 12

    AB = [ 12  27  30  13 ]
         [  8  -4  26  12 ]

The definition of matrix multiplication requires that the number of columns of the first factor A be the same as the number of rows of the second factor B in order to form the product AB. If this condition is not satisfied, the product is undefined. A convenient way to determine whether a product of two matrices is defined is to write down the size of the first factor and, to the right of it, write down the size of the second factor. If, as in (3), the inside numbers are the same, then the product is defined. The outside numbers then give the size of the product.

    A        B        AB
    m x r    r x n    m x n                                         (3)
    (inside numbers equal; outside numbers give the size of AB)

> EXAMPLE 6  Determining Whether a Product Is Defined

Suppose that A, B, and C are matrices with the following sizes:

    A        B        C
    3 x 4    4 x 7    7 x 3

Then by (3), AB is defined and is a 3 x 7 matrix; BC is defined and is a 4 x 3 matrix; and CA is defined and is a 7 x 4 matrix. The products AC, CB, and BA are all undefined.

In general, if A = [a_ij] is an m x r matrix and B = [b_ij] is an r x n matrix, then, as illustrated by the shading in (4) (in the text, row i of A and column j of B are shaded),

    AB = [ a11  ...  a1r ]  [ b11  ...  b1j  ...  b1n ]
         [  .         .  ]  [  .         .         .  ]
         [ ai1  ...  air ]  [ br1  ...  brj  ...  brn ]             (4)
         [  .         .  ]
         [ am1  ...  amr ]

the entry (AB)_ij in row i and column j of AB is given by

    (AB)_ij = a_i1 b_1j + a_i2 b_2j + a_i3 b_3j + ... + a_ir b_rj   (5)

Partitioned Matrices

A matrix can be subdivided or partitioned into smaller matrices by inserting horizontal and vertical rules between selected rows and columns. For example, the following
are three possible partitions of a general 3 x 4 matrix A: the first is a partition of A into four submatrices A11, A12, A21, and A22; the second is a partition of A into its row vectors r1, r2, and r3; and the third is a partition of A into its column vectors c1, c2, c3, and c4:

    A = [ a11  a12 | a13  a14 ]
        [ a21  a22 | a23  a24 ]  =  [ A11  A12 ]
        [ ---------+--------- ]     [ A21  A22 ]
        [ a31  a32 | a33  a34 ]

    A = [ a11  a12  a13  a14 ]     [ r1 ]
        [ a21  a22  a23  a24 ]  =  [ r2 ]
        [ a31  a32  a33  a34 ]     [ r3 ]

    A = [ a11 | a12 | a13 | a14 ]
        [ a21 | a22 | a23 | a24 ]  =  [ c1  c2  c3  c4 ]
        [ a31 | a32 | a33 | a34 ]

Multiplication by Columns and by Rows

Partitioning has many uses, one of which is for finding particular rows or columns of a matrix product AB without computing the entire product. Specifically, the following formulas, whose proofs are left as exercises, show how individual column vectors of AB can be obtained by partitioning B into column vectors and how individual row vectors of AB can be obtained by partitioning A into row vectors:

    AB = A[b1  b2  ...  bn] = [Ab1  Ab2  ...  Abn]                  (6)
    (AB computed column by column)

    AB = [ a1 ]     [ a1 B ]
         [ a2 ] B = [ a2 B ]                                        (7)
         [ .  ]     [  .   ]
         [ am ]     [ am B ]
    (AB computed row by row)

In words, these formulas state that

    jth column vector of AB = A[jth column vector of B]             (8)

    ith row vector of AB
        = [ith row vector of A]B                                    (9)

> EXAMPLE 7  Example 5 Revisited

If A and B are the matrices in Example 5, then from (8) the second column vector of AB can be obtained by the computation

    [ 1  2  4 ] [  1 ]   [ 27 ]
    [ 2  6  0 ] [ -1 ] = [ -4 ]
                [  7 ]

and from (9) the first row vector of AB can be obtained by the computation

                [ 4   1  4  3 ]
    [ 1  2  4 ] [ 0  -1  3  1 ] = [ 12  27  30  13 ]
                [ 2   7  5  2 ]

Matrix Products as Linear Combinations

The following definition provides yet another way of thinking about matrix multiplication.

DEFINITION 6  If A1, A2, ..., Ar are matrices of the same size, and if c1, c2, ..., cr are scalars, then an expression of the form

    c1 A1 + c2 A2 + ... + cr Ar

is called a linear combination of A1, A2, ..., Ar with coefficients c1, c2, ..., cr.

To see how matrix products can be viewed as linear combinations, let A be an m x n matrix and x an n x 1 column vector, say

    A = [ a11  a12  ...  a1n ]        x = [ x1 ]
        [ a21  a22  ...  a2n ]            [ x2 ]
        [  .    .          .  ]           [ .  ]
        [ am1  am2  ...  amn ]            [ xn ]

Then

    Ax = [ a11 x1 + a12 x2 + ... + a1n xn ]        [ a11 ]        [ a12 ]              [ a1n ]
         [ a21 x1 + a22 x2 + ... + a2n xn ]  = x1  [ a21 ]  + x2  [ a22 ]  + ... + xn  [ a2n ]    (10)
         [   .        .               .   ]        [  .  ]        [  .  ]              [  .  ]
         [ am1 x1 + am2 x2 + ... + amn xn ]        [ am1 ]        [ am2 ]              [ amn ]

This proves the following theorem.

THEOREM 1.3.1  If A is an m x n matrix, and if x is an n x 1 column vector, then the product Ax can be expressed as a linear combination of the column vectors of A in which the coefficients are the entries of x.

> EXAMPLE 8  Matrix Products as Linear Combinations

The matrix product

    [ -1  3   2 ] [  2 ]   [  1 ]
    [  1  2  -3 ] [ -1 ] = [ -9 ]
    [  2  1  -2 ] [  3 ]   [ -3 ]

can be written as the following linear combination of column vectors:

      [ -1 ]     [ 3 ]     [  2 ]   [  1 ]
    2 [  1 ] - 1 [ 2 ] + 3 [ -3 ] = [ -9 ]
      [  2 ]     [ 1 ]     [ -2 ]   [ -3 ]

> EXAMPLE 9  Columns of a Product AB as Linear Combinations

We showed in Example 5 that

    AB = [ 1  2  4 ] [ 4   1  4  3 ]   [ 12  27  30  13 ]
         [ 2  6  0 ] [ 0  -1  3  1 ] = [  8  -4  26  12 ]
                     [ 2   7  5  2 ]

It follows from Formula (6) and Theorem 1.3.1 that the jth column vector of AB can be expressed as a linear combination of the column vectors of A in which the coefficients in the linear combination are the entries from the jth column of B. For example, the computations for the first two columns are

    [ 12 ]     [ 1 ]     [ 2 ]     [ 4 ]
    [  8 ] = 4 [ 2 ] + 0 [ 6 ] + 2 [ 0 ]

    [ 27 ]     [ 1 ]     [ 2 ]     [ 4 ]
    [ -4 ] = 1 [ 2 ] - 1 [ 6 ] + 7 [ 0 ]

Matrix Form of a Linear System

Matrix multiplication has an important application to systems of linear equations.
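Theorem 1.3.1 is easy to check numerically. The sketch below (not from the text; NumPy) forms Ax for the matrices of Example 8 both directly and as a linear combination of the columns of A, and confirms the two agree.

```python
import numpy as np

# Example 8: A x as a linear combination of the columns of A,
# with coefficients taken from the entries of x (Theorem 1.3.1).
A = np.array([[-1, 3,  2],
              [ 1, 2, -3],
              [ 2, 1, -2]])
x = np.array([2, -1, 3])

direct = A @ x                                   # ordinary matrix-vector product
combo = sum(x[j] * A[:, j] for j in range(3))    # x1*col1 + x2*col2 + x3*col3

assert np.array_equal(direct, combo)
assert np.array_equal(direct, np.array([1, -9, -3]))
```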
Consider a system of m linear equations in n unknowns:

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
       .        .               .    .
    am1 x1 + am2 x2 + ... + amn xn = bm

Since two matrices are equal if and only if their corresponding entries are equal, we can replace the m equations in this system by the single matrix equation

    [ a11 x1 + a12 x2 + ... + a1n xn ]   [ b1 ]
    [ a21 x1 + a22 x2 + ... + a2n xn ] = [ b2 ]
    [   .        .               .   ]   [ .  ]
    [ am1 x1 + am2 x2 + ... + amn xn ]   [ bm ]

The m x 1 matrix on the left side of this equation can be written as a product to give

    [ a11  a12  ...  a1n ] [ x1 ]   [ b1 ]
    [ a21  a22  ...  a2n ] [ x2 ] = [ b2 ]
    [  .    .          .  ] [ .  ]  [ .  ]
    [ am1  am2  ...  amn ] [ xn ]   [ bm ]

If we designate these matrices by A, x, and b, respectively, then we can replace the original system of m equations in n unknowns by the single matrix equation

    Ax = b

The matrix A in this equation is called the coefficient matrix of the system. The augmented matrix for the system is obtained by adjoining b to A as the last column; thus the augmented matrix is

    [A | b] = [ a11  a12  ...  a1n | b1 ]
              [ a21  a22  ...  a2n | b2 ]
              [  .    .          .  |  . ]
              [ am1  am2  ...  amn | bm ]

(The vertical bar in [A | b] is a convenient way to separate A from b visually; it has no mathematical significance.)

Transpose of a Matrix

We conclude this section by defining two matrix operations that have no analogs in the arithmetic of real numbers.

DEFINITION 7  If A is any m x n matrix, then the transpose of A, denoted by A^T, is defined to be the n x m matrix that results by interchanging the rows and columns of A; that is, the first column of A^T is the first row of A, the second column of A^T is the second row of A, and so forth.

> EXAMPLE 10  Some Transposes

The following are some examples of matrices and their transposes.

    A = [ a11  a12  a13  a14 ]    B = [ 2  3 ]    C = [ 1  3  5 ]    D = [ 4 ]
        [ a21  a22  a23  a24 ]        [ 1  4 ]
        [ a31  a32  a33  a34 ]        [ 5  6 ]

    A^T = [ a11  a21  a31 ]    B^T = [ 2  1  5 ]    C^T = [ 1 ]    D^T = [ 4 ]
          [ a12  a22  a32 ]          [ 3  4  6 ]          [ 3 ]
          [ a13  a23  a33 ]                               [ 5 ]
          [ a14  a24  a34 ]

Observe that not only are the columns of A^T the rows of A, but the rows of A^T are the columns of A. Thus the entry in row i and column j of A^T is the entry in row j and column i of A; that is,

    (A^T)_ij = (A)_ji                                               (11)

Note the reversal of the subscripts.
In the special case where A is a square matrix, the transpose of A can be obtained by interchanging entries that are symmetrically positioned about the main diagonal. In (12) we see that A^T can also be obtained by "reflecting" A about its main diagonal:

A = [1 -2 4; 3 7 0; -5 8 6]  →  A^T = [1 3 -5; -2 7 8; 4 0 6]   (12)

DEFINITION 8 If A is a square matrix, then the trace of A, denoted by tr(A), is defined to be the sum of the entries on the main diagonal of A. The trace of A is undefined if A is not a square matrix.

> EXAMPLE 11 Trace of a Matrix

The following are examples of matrices and their traces.

A = [a11 a12 a13; a21 a22 a23; a31 a32 a33],  tr(A) = a11 + a22 + a33

B = [-1 2 7 0; 3 5 -8 4; 1 2 7 -3; 4 -2 1 0],  tr(B) = -1 + 5 + 7 + 0 = 11

In the exercises you will have some practice working with the transpose and trace operations.

Concept Review
• Matrix • Entries • Column vector (or column matrix) • Row vector (or row matrix) • Square matrix • Main diagonal • Equal matrices • Matrix operations: sum, difference, scalar multiplication • Linear combination of matrices • Product of matrices (matrix multiplication) • Row-column method • Column method • Row method • Partitioned matrices • Submatrices • Transpose • Trace

Skills
• Determine the size of a given matrix.
• Identify the row vectors and column vectors of a given matrix.
• Perform the arithmetic operations of matrix addition, subtraction, scalar multiplication, and multiplication.
• Determine whether the product of two given matrices is defined.
• Compute matrix products using the row-column method, the column method, and the row method.
• Express the product of a matrix and a column vector as a linear combination of the columns of the matrix.
• Express a linear system as a matrix equation, and identify the coefficient matrix.
• Compute the transpose of a matrix.
• Compute the trace of a square matrix.

True-False Exercises
In parts (a)-(o) determine whether the statement is true or false, and justify your answer.
(a) The matrix [1 2 3; 4 5 6] has no main diagonal.
(b) An m × n matrix has m column vectors and n row vectors.
(c) If AB = BA, then A must equal B.
(d) If Ax = b, then b must be a linear combination of the columns of A.
(e) For every matrix A, it is true that (A^T)^T = A.
(f) If A and B are square matrices of the same order, then tr(AB) = tr(A)tr(B).
(g) If A and B are square matrices of the same order, then (AB)^T = A^T B^T.
(h) For every square matrix A, it is true that tr(A^T) = tr(A).
(i) If A is a 6 × 4 matrix and B is an m × n matrix such that B^T A^T is a 2 × 6 matrix, then m = 4 and n = 2.
(j) If A is an n × n matrix and c is a scalar, then tr(cA) = c tr(A).
(k) If A, B, and C are matrices of the same size such that A - C = B - C, then A = B.
(l) If A, B, and C are square matrices of the same order such that AC = BC, then A = B.
(m) If AB + BA is defined, then A and B are square matrices of the same size.
(n) If B has a column of zeros, then so does AB if this product is defined.
(o) If B has a column of zeros, then so does BA if this product is defined.

1.4 Inverses; Algebraic Properties of Matrices

In this section we will discuss some of the algebraic properties of matrix operations. We will see that many of the basic rules of arithmetic for real numbers hold for matrices, but we will also see that some do not.

Properties of Matrix Addition and Scalar Multiplication

The following theorem lists the basic algebraic properties of the matrix operations.

THEOREM 1.4.1 Properties of Matrix Arithmetic
Assuming that the sizes of the matrices are such that the indicated operations can be performed, the following rules of matrix arithmetic are valid.
(a) A + B = B + A   [Commutative law for matrix addition]
(b) A + (B + C) = (A + B) + C   [Associative law for matrix addition]
(c) A(BC) = (AB)C   [Associative law for matrix multiplication]
(d) A(B + C) = AB + AC   [Left distributive law]
(e) (B + C)A = BA + CA   [Right distributive law]
(f) A(B - C) = AB - AC
(g) (B - C)A = BA - CA
(h) a(B + C) = aB + aC
(i) a(B - C) = aB - aC
(j) (a + b)C = aC + bC
(k) (a - b)C = aC - bC
(l) a(bC) = (ab)C
(m) a(BC) = (aB)C = B(aC)

To prove any of the equalities in this theorem we must show that the matrix on the left side has the same size as that on the right and that the corresponding entries on the two sides are the same. Most of the proofs follow the same pattern, so we will prove part (d) as a sample. The proof of the associative law for multiplication is more complicated than the rest and is outlined in the exercises.

Proof (d) We must show that A(B + C) and AB + AC have the same size and that corresponding entries are equal. To form A(B + C), the matrices B and C must have the same size, say m × n, and the matrix A must then have m columns, so its size must be of the form r × m. This makes A(B + C) an r × n matrix. It follows that AB + AC is also an r × n matrix and, consequently, A(B + C) and AB + AC have the same size.

Note: There are three basic ways to prove that two matrices of the same size are equal: prove that corresponding entries are the same, prove that corresponding row vectors are the same, or prove that corresponding column vectors are the same.

Suppose that A = [aij], B = [bij], and C = [cij]. We want to show that corresponding entries of A(B + C) and AB + AC are equal; that is,

[A(B + C)]_ij = [AB + AC]_ij

for all values of i and j. But from the definitions of matrix addition and matrix multiplication, we have

[A(B + C)]_ij = ai1(b1j + c1j) + ai2(b2j + c2j) + ··· + aim(bmj + cmj)
             = (ai1 b1j + ai2 b2j + ··· + aim bmj) + (ai1 c1j + ai2 c2j + ··· + aim cmj)
             = [AB]_ij + [AC]_ij = [AB + AC]_ij  ◄

Properties of Matrix Multiplication

Remark: Although the operations of matrix addition and matrix multiplication were defined for pairs of matrices, associative laws (b) and (c) enable us to denote sums and products of three matrices as A + B + C and ABC without inserting any parentheses. This is justified by the fact that no matter how parentheses are inserted, the associative laws guarantee that the same end result will be obtained.
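The distributive laws (d) and (e) just proved can be spot-checked on concrete matrices. This NumPy sketch uses entries chosen only for illustration; the library is our assumption, not the book's.

```python
import numpy as np

# Sample matrices (any conformable sizes work)
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [2, 5]])
C = np.array([[7, 1], [1, 1]])

# Theorem 1.4.1(d): left distributive law
print((A @ (B + C) == A @ B + A @ C).all())   # True

# Theorem 1.4.1(e): right distributive law
print(((B + C) @ A == B @ A + C @ A).all())   # True
```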
In general, given any sum or any product of matrices, pairs of parentheses can be inserted or deleted anywhere within the expression without affecting the end result.

> EXAMPLE 1 Associativity of Matrix Multiplication

As an illustration of the associative law for matrix multiplication, consider

A = [1 2; 3 4; 0 1],  B = [4 3; 2 1],  C = [1 0; 2 3]

Then

AB = [8 5; 20 13; 2 1]  and  (AB)C = [18 15; 46 39; 4 3]

and

BC = [10 9; 4 3]  and  A(BC) = [18 15; 46 39; 4 3]

Thus, (AB)C = A(BC), as guaranteed by Theorem 1.4.1(c). ◄

Do not let Theorem 1.4.1 lull you into believing that all laws of real arithmetic carry over to matrix arithmetic. For example, you know that in real arithmetic it is always true that ab = ba, which is called the commutative law for multiplication. In matrix arithmetic, however, the equality of AB and BA can fail for three possible reasons:

1. AB may be defined and BA may not (for example, if A is 2 × 3 and B is 3 × 4).
2. AB and BA may both be defined, but they may have different sizes (for example, if A is 2 × 3 and B is 3 × 2).
3. AB and BA may both be defined and have the same size, but the two products may be different (as illustrated in the next example).

Note: Do not read too much into Example 2; it does not rule out the possibility that AB and BA may be equal in certain cases, just that they are not equal in all cases. If it so happens that AB = BA, then we say that A and B commute.

> EXAMPLE 2 Order Matters in Matrix Multiplication

Consider the matrices

A = [-1 0; 2 3],  B = [1 2; 3 0]

Multiplying gives

AB = [-1 -2; 11 4]  and  BA = [3 6; -3 0]

Thus, AB ≠ BA. ◄
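Examples 1 and 2 can both be reproduced in a few lines. The NumPy sketch below is our illustrative aside, not part of the text.

```python
import numpy as np

# Example 1's matrices: (AB)C = A(BC)
A = np.array([[1, 2], [3, 4], [0, 1]])
B = np.array([[4, 3], [2, 1]])
C = np.array([[1, 0], [2, 3]])
print(((A @ B) @ C == A @ (B @ C)).all())   # True

# Example 2's matrices: AB and BA have the same size but differ
A2 = np.array([[-1, 0], [2, 3]])
B2 = np.array([[1, 2], [3, 0]])
print(A2 @ B2)   # [[-1 -2] [11  4]]
print(B2 @ A2)   # [[ 3  6] [-3  0]]
```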
Zero Matrices

A matrix whose entries are all zero is called a zero matrix. Some examples are

[0 0; 0 0],  [0 0 0; 0 0 0; 0 0 0],  [0 0 0 0; 0 0 0 0],  [0; 0; 0; 0],  [0]

We will denote a zero matrix by 0 unless it is important to specify its size, in which case we will denote the m × n zero matrix by 0_{m×n}.

It should be evident that if A and 0 are matrices with the same size, then

A + 0 = 0 + A = A

Thus, 0 plays the same role in this matrix equation that the number 0 plays in the numerical equation a + 0 = 0 + a = a.

The following theorem lists the basic properties of zero matrices. Since the results should be self-evident, we will omit the formal proofs.

THEOREM 1.4.2 Properties of Zero Matrices
If c is a scalar, and if the sizes of the matrices are such that the operations can be performed, then:
(a) A + 0 = 0 + A = A
(b) A - 0 = A
(c) A - A = A + (-A) = 0
(d) 0A = 0
(e) If cA = 0, then c = 0 or A = 0.

Since we know that the commutative law of real arithmetic is not valid in matrix arithmetic, it should not be surprising that there are other rules that fail as well. For example, consider the following two laws of real arithmetic:

• If ab = ac and a ≠ 0, then b = c.   [The cancellation law]
• If ab = 0, then at least one of the factors on the left is 0.

The next two examples show that these laws are not universally true in matrix arithmetic.

> EXAMPLE 3 Failure of the Cancellation Law

Consider the matrices

A = [0 1; 0 2],  B = [1 1; 3 4],  C = [2 5; 3 4]

We leave it for you to confirm that

AB = AC = [3 4; 6 8]

Although A ≠ 0, canceling A from both sides of the equation AB = AC would lead to the incorrect conclusion that B = C. Thus, the cancellation law does not hold, in general, for matrix multiplication. ◄

> EXAMPLE 4 A Zero Product with Nonzero Factors

Here are two matrices for which AB = 0, but A ≠ 0 and B ≠ 0:

A = [0 1; 0 2],  B = [3 7; 0 0]  ◄

Identity Matrices

A square matrix with 1's on the main diagonal and zeros elsewhere is called an identity matrix. Some examples are

[1],  [1 0; 0 1],  [1 0 0; 0 1 0; 0 0 1],  [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]

An identity matrix is denoted by the letter I.
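Examples 3 and 4 are quick to verify numerically. The NumPy sketch below is our aside; the matrices are the ones from the two examples.

```python
import numpy as np

# Example 3: AB = AC even though B != C (the cancellation law fails)
A = np.array([[0, 1], [0, 2]])
B = np.array([[1, 1], [3, 4]])
C = np.array([[2, 5], [3, 4]])
print((A @ B == A @ C).all())   # True, yet B != C

# Example 4: AB = 0 with A != 0 and B != 0
D = np.array([[3, 7], [0, 0]])
print(A @ D)                    # the 2x2 zero matrix
```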
If it is important to emphasize the size, we will write I_n for the n × n identity matrix.

To explain the role of identity matrices in matrix arithmetic, let us consider the effect of multiplying a general 2 × 3 matrix A on each side by an identity matrix. Multiplying on the right by the 3 × 3 identity matrix yields

A I_3 = [a11 a12 a13; a21 a22 a23] [1 0 0; 0 1 0; 0 0 1] = [a11 a12 a13; a21 a22 a23] = A

and multiplying on the left by the 2 × 2 identity matrix yields

I_2 A = [1 0; 0 1] [a11 a12 a13; a21 a22 a23] = [a11 a12 a13; a21 a22 a23] = A

The same result holds in general; that is, if A is any m × n matrix, then

A I_n = A  and  I_m A = A

Thus, the identity matrices play the same role in these matrix equations that the number 1 plays in the numerical equation a · 1 = 1 · a = a.

As the next theorem shows, identity matrices arise naturally in studying reduced row echelon forms of square matrices.

THEOREM 1.4.3 If R is the reduced row echelon form of an n × n matrix A, then either R has a row of zeros or R is the identity matrix I_n.

Proof Suppose that the reduced row echelon form of A is

R = [r11 r12 ··· r1n; r21 r22 ··· r2n; ...; rn1 rn2 ··· rnn]

Either the last row in this matrix consists entirely of zeros or it does not. If not, the matrix contains no zero rows, and consequently each of the n rows has a leading entry of 1. Since these leading 1's occur progressively farther to the right as we move down the matrix, each of these 1's must occur on the main diagonal. Since the other entries in the same column as one of these 1's are zero, R must be I_n. Thus, either R has a row of zeros or R = I_n. ◄
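The identities A I_n = A and I_m A = A can be confirmed for any sample matrix. The NumPy sketch below uses an arbitrary 2 × 3 matrix of our choosing.

```python
import numpy as np

# An arbitrary 2x3 matrix (entries chosen for illustration)
A = np.array([[4, -1, 7],
              [2,  0, 5]])

I2 = np.eye(2, dtype=int)
I3 = np.eye(3, dtype=int)

print((I2 @ A == A).all())   # I_m A = A  ->  True
print((A @ I3 == A).all())   # A I_n = A  ->  True
```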
Inverse of a Matrix

In real arithmetic every nonzero number a has a reciprocal a^{-1} (= 1/a) with the property

a · a^{-1} = a^{-1} · a = 1

The number a^{-1} is sometimes called the multiplicative inverse of a. Our next objective is to develop an analog of this result for matrix arithmetic. For this purpose we make the following definition.

DEFINITION 1 If A is a square matrix, and if a matrix B of the same size can be found such that AB = BA = I, then A is said to be invertible (or nonsingular) and B is called an inverse of A. If no such matrix B can be found, then A is said to be singular.

Remark The relationship AB = BA = I is not changed by interchanging A and B, so if A is invertible and B is an inverse of A, then it is also true that B is invertible and A is an inverse of B. Thus, when AB = BA = I we say that A and B are inverses of one another.

> EXAMPLE 5 An Invertible Matrix

Let

A = [2 -5; -1 3]  and  B = [3 5; 1 2]

Then

AB = [2 -5; -1 3] [3 5; 1 2] = [1 0; 0 1] = I
BA = [3 5; 1 2] [2 -5; -1 3] = [1 0; 0 1] = I

Thus, A and B are invertible and each is an inverse of the other. ◄

> EXAMPLE 6 A Class of Singular Matrices

In general, a square matrix with a row or column of zeros is singular. To help understand why this is so, consider the matrix

A = [1 4 0; 2 5 0; 3 6 0]

To prove that A is singular we must show that there is no 3 × 3 matrix B such that AB = BA = I. For this purpose let c1, c2, 0 be the column vectors of A. Thus, for any 3 × 3 matrix B we can express the product BA as

BA = B[c1 c2 0] = [Bc1 Bc2 0]   [Formula (6) of Section 1.3]

The column of zeros shows that BA ≠ I and hence that A is singular. ◄

Properties of Inverses

It is reasonable to ask whether an invertible matrix can have more than one inverse.
The next theorem shows that the answer is no: an invertible matrix has exactly one inverse.

THEOREM 1.4.4 If B and C are both inverses of the matrix A, then B = C.

Proof Since B is an inverse of A, we have BA = I. Multiplying both sides on the right by C gives (BA)C = IC = C. But it is also true that (BA)C = B(AC) = BI = B, so C = B. ◄

As a consequence of this important result, we can now speak of "the" inverse of an invertible matrix. If A is invertible, then its inverse will be denoted by the symbol A^{-1}. Thus,

A A^{-1} = I  and  A^{-1} A = I   (1)

The inverse of A plays much the same role in matrix arithmetic that the reciprocal a^{-1} plays in the numerical relationships a a^{-1} = 1 and a^{-1} a = 1.

In the next section we will develop a method for computing the inverse of an invertible matrix of any size. For now we give the following theorem that specifies conditions under which a 2 × 2 matrix is invertible and provides a simple formula for its inverse.

THEOREM 1.4.5 The matrix

A = [a b; c d]

is invertible if and only if ad - bc ≠ 0, in which case the inverse is given by the formula

A^{-1} = (1/(ad - bc)) [d -b; -c a]   (2)

We will omit the proof, because we will study a more general version of this theorem later. For now, you should at least confirm the validity of Formula (2) by showing that A A^{-1} = A^{-1} A = I.

Remark The quantity ad - bc is called the determinant of the 2 × 2 matrix A, written det(A) = ad - bc. Figure 1.4.1 illustrates that the determinant of a 2 × 2 matrix A is the product of the entries on its main diagonal minus the product of the entries off its main diagonal. In words, Theorem 1.4.5 states that a 2 × 2 matrix A is invertible if and only if its determinant is nonzero, and if invertible, then its inverse can be obtained by interchanging its diagonal entries, reversing the signs of its off-diagonal entries, and multiplying the entries by the reciprocal of the determinant of A.

> EXAMPLE 7 Calculating the Inverse of a 2 × 2 Matrix

In each part, determine whether the matrix is invertible.
If so, find its inverse.

(a) A = [6 1; 5 2]   (b) A = [-1 2; 3 -6]

Solution (a) The determinant of A is det(A) = (6)(2) - (1)(5) = 7, which is nonzero. Thus, A is invertible, and its inverse is

A^{-1} = (1/7) [2 -1; -5 6] = [2/7 -1/7; -5/7 6/7]

We leave it for you to confirm that A A^{-1} = A^{-1} A = I.

Solution (b) The matrix is not invertible since det(A) = (-1)(-6) - (2)(3) = 6 - 6 = 0. ◄

> EXAMPLE 8 Solution of a Linear System by Matrix Inversion

A problem that arises in many applications is to solve a pair of equations of the form

u = ax + by
v = cx + dy

for x and y in terms of u and v. One approach is to treat this as a linear system of two equations in the unknowns x and y and use Gauss-Jordan elimination to solve for x and y. However, because the coefficients of the unknowns are literal rather than numerical, this procedure is a little clumsy. As an alternative approach, let us replace the two equations by the single matrix equation

[u; v] = [ax + by; cx + dy]

which we can rewrite as

[u; v] = [a b; c d] [x; y]

If we assume that the 2 × 2 matrix is invertible (i.e., ad - bc ≠ 0), then we can multiply through on the left by the inverse and rewrite the equation as

[a b; c d]^{-1} [u; v] = [a b; c d]^{-1} [a b; c d] [x; y]

which simplifies to

[a b; c d]^{-1} [u; v] = [x; y]

Using Theorem 1.4.5, we can rewrite this equation as

(1/(ad - bc)) [d -b; -c a] [u; v] = [x; y]

from which we obtain

x = (du - bv)/(ad - bc),   y = (av - cu)/(ad - bc)  ◄

The next theorem is concerned with inverses of matrix products.

THEOREM 1.4.6 If A and B are invertible matrices with the same size, then AB is invertible and

(AB)^{-1} = B^{-1} A^{-1}

Proof We can establish the invertibility and obtain the stated formula at the same time by showing that

(AB)(B^{-1} A^{-1}) = (B^{-1} A^{-1})(AB) = I

But

(AB)(B^{-1} A^{-1}) = A(B B^{-1}) A^{-1} = A I A^{-1} = A A^{-1} = I

and similarly, (B^{-1} A^{-1})(AB) = I. ◄

Note: If a product of matrices is singular, then at least one of the factors must be singular. Why?

Although we will not prove it, this result can be extended to three or more factors: A product of any number of invertible matrices is invertible, and the inverse of the product is the product of the inverses in the reverse order.
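Formula (2) and Theorem 1.4.6 can both be sketched in code. The function below is our minimal illustration of the 2 × 2 formula (the name `inverse_2x2` is ours), written with NumPy; it is not the book's algorithm for general matrices.

```python
import numpy as np

def inverse_2x2(A):
    """Formula (2): interchange the diagonal entries, negate the
    off-diagonal entries, and divide by the determinant ad - bc."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular: ad - bc = 0")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[6, 1], [5, 2]])   # det = 7, invertible (Example 7a)
print(inverse_2x2(A))            # [[ 2/7 -1/7] [-5/7  6/7]]
print(np.allclose(A @ inverse_2x2(A), np.eye(2)))   # True

# Theorem 1.4.6: (AB)^-1 = B^-1 A^-1
B = np.array([[2, 0], [1, 2]])   # another invertible matrix
print(np.allclose(inverse_2x2(A @ B),
                  inverse_2x2(B) @ inverse_2x2(A)))  # True
```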
> EXAMPLE 9 The Inverse of a Product

Consider the matrices

A = [1 2; 1 3],  B = [3 2; 2 2]

We leave it for you to show that

AB = [7 6; 9 8],  (AB)^{-1} = [4 -3; -9/2 7/2]

and also that

A^{-1} = [3 -2; -1 1],  B^{-1} = [1 -1; -1 3/2],  B^{-1} A^{-1} = [4 -3; -9/2 7/2]

Thus, (AB)^{-1} = B^{-1} A^{-1} as guaranteed by Theorem 1.4.6. ◄

Powers of a Matrix

If A is a square matrix, then we define the nonnegative integer powers of A to be

A^0 = I  and  A^n = A A ··· A   [n factors]

and if A is invertible, then we define the negative integer powers of A to be

A^{-n} = (A^{-1})^n = A^{-1} A^{-1} ··· A^{-1}   [n factors]

Because these definitions parallel those for real numbers, the usual laws of nonnegative exponents hold; for example,

A^r A^s = A^{r+s}  and  (A^r)^s = A^{rs}

In addition, we have the following properties of negative exponents.

THEOREM 1.4.7 If A is invertible and n is a nonnegative integer, then:
(a) A^{-1} is invertible and (A^{-1})^{-1} = A.
(b) A^n is invertible and (A^n)^{-1} = A^{-n} = (A^{-1})^n.
(c) kA is invertible for any nonzero scalar k, and (kA)^{-1} = k^{-1} A^{-1}.

We will prove part (c) and leave the proofs of parts (a) and (b) as exercises.

Proof (c) Properties (l) and (m) in Theorem 1.4.1 imply that

(kA)(k^{-1} A^{-1}) = k^{-1} k (A A^{-1}) = (1)I = I

and similarly, (k^{-1} A^{-1})(kA) = I. Thus, kA is invertible and (kA)^{-1} = k^{-1} A^{-1}. ◄

> EXAMPLE 10 Properties of Exponents

Let A and A^{-1} be the matrices in Example 9; that is,

A = [1 2; 1 3]  and  A^{-1} = [3 -2; -1 1]

Then

A^{-3} = (A^{-1})^3 = [3 -2; -1 1][3 -2; -1 1][3 -2; -1 1] = [41 -30; -15 11]

Also,

A^3 = [1 2; 1 3]^3 = [11 30; 15 41]

so, from Theorem 1.4.5,

(A^3)^{-1} = (1/((11)(41) - (30)(15))) [41 -30; -15 11] = [41 -30; -15 11] = (A^{-1})^3

as expected from Theorem 1.4.7(b). ◄

> EXAMPLE 11 The Square of a Matrix Sum

In real arithmetic, where we have a commutative law for multiplication, we can write

(a + b)^2 = a^2 + ab + ba + b^2 = a^2 + ab + ab + b^2 = a^2 + 2ab + b^2

However, in matrix arithmetic, where we have no commutative law for multiplication, the best we can do is to write

(A + B)^2 = A^2 + AB + BA + B^2

It is only in the special case where A and B commute (i.e., AB = BA) that we can go a step further and write

(A + B)^2 = A^2 + 2AB + B^2  ◄

Matrix Polynomials

If A is a square matrix, say n × n, and if

p(x) = a0 + a1 x + a2 x^2 + ··· + am x^m
is any polynomial, then we define the n × n matrix p(A) to be

p(A) = a0 I + a1 A + a2 A^2 + ··· + am A^m   (3)

where I is the n × n identity matrix; that is, p(A) is obtained by substituting A for x and replacing the constant term a0 by the matrix a0 I. An expression of form (3) is called a matrix polynomial in A.

> EXAMPLE 12 A Matrix Polynomial

Find p(A) for

p(x) = x^2 - 2x - 3  and  A = [-1 2; 0 3]

Solution

p(A) = A^2 - 2A - 3I
     = [-1 2; 0 3]^2 - 2[-1 2; 0 3] - 3[1 0; 0 1]
     = [1 4; 0 9] - [-2 4; 0 6] - [3 0; 0 3]
     = [0 0; 0 0]

or more briefly, p(A) = 0. ◄

Remark It follows from the fact that A^r A^s = A^{r+s} = A^{s+r} = A^s A^r that powers of a square matrix commute, and since a matrix polynomial in A is built up from powers of A, any two matrix polynomials in A also commute; that is, for any polynomials p1 and p2 we have

p1(A) p2(A) = p2(A) p1(A)   (4)

Properties of the Transpose

The following theorem lists the main properties of the transpose.

THEOREM 1.4.8 If the sizes of the matrices are such that the stated operations can be performed, then:
(a) (A^T)^T = A
(b) (A + B)^T = A^T + B^T
(c) (A - B)^T = A^T - B^T
(d) (kA)^T = k A^T
(e) (AB)^T = B^T A^T

If you keep in mind that transposing a matrix interchanges its rows and columns, then you should have little trouble visualizing the results in parts (a)-(d). For example, part (a) states the obvious fact that interchanging rows and columns twice leaves a matrix unchanged; and part (b) states that adding two matrices and then interchanging the rows and columns produces the same result as interchanging the rows and columns before adding. We will omit the formal proofs. Part (e) is less obvious, but for brevity we will omit its proof as well. The result in that part can be extended to three or more factors and restated as:

The transpose of a product of any number of matrices is the product of the transposes in the reverse order.

The following theorem establishes a relationship between the inverse of a matrix and the inverse of its transpose.
THEOREM 1.4.9 If A is an invertible matrix, then A^T is also invertible and

(A^T)^{-1} = (A^{-1})^T

Proof We can establish the invertibility and obtain the formula at the same time by showing that

A^T (A^{-1})^T = (A^{-1})^T A^T = I

But from part (e) of Theorem 1.4.8 and the fact that I^T = I, we have

A^T (A^{-1})^T = (A^{-1} A)^T = I^T = I
(A^{-1})^T A^T = (A A^{-1})^T = I^T = I

which completes the proof. ◄

> EXAMPLE 13 Inverse of a Transpose

Consider a general 2 × 2 invertible matrix and its transpose:

A = [a b; c d]  and  A^T = [a c; b d]

Since A is invertible, its determinant ad - bc is nonzero. But the determinant of A^T is also ad - bc (verify), so A^T is also invertible. It follows from Theorem 1.4.5 that

(A^T)^{-1} = (1/(ad - bc)) [d -c; -b a]

which is the same matrix that results if A^{-1} is transposed (verify). Thus,

(A^T)^{-1} = (A^{-1})^T

as guaranteed by Theorem 1.4.9. ◄

Concept Review
• Commutative law for matrix addition • Associative law for matrix addition • Associative law for matrix multiplication • Left and right distributive laws • Zero matrix • Identity matrix • Inverse of a matrix • Invertible matrix • Nonsingular matrix • Singular matrix • Determinant • Power of a matrix • Matrix polynomial

Skills
• Know the arithmetic properties of matrix operations.
• Be able to prove arithmetic properties of matrices.
• Know the properties of zero matrices.
• Know the properties of identity matrices.
• Be able to recognize when two square matrices are inverses of each other.
• Be able to determine whether a 2 × 2 matrix is invertible.
• Be able to solve a linear system of two equations in two unknowns whose coefficient matrix is invertible.
• Be able to prove basic properties involving invertible matrices.
• Know the properties of the matrix transpose and its relationship with invertible matrices.

True-False Exercises
In parts (a)-(k) determine whether the statement is true or false, and justify your answer.

(a) Two n × n matrices, A and B, are inverses of one another if and only if AB = BA = 0.
(b) For all square matrices A and B of the same size, it is true that (A + B)^2 = A^2 + 2AB + B^2.
(c) Give an example of a 2 × 2 matrix A that is idempotent but is not the zero matrix or the identity matrix.
(d) If A and B are invertible matrices of the same size, then AB is invertible and (AB)^{-1} = A^{-1} B^{-1}.
(e) If A and B are matrices such that AB is defined, then it is true that (AB)^T = A^T B^T.
(f) The matrix A = [a b; c d] is invertible if and only if ad - bc ≠ 0.
(g) If A and B are matrices of the same size and k is a constant, then (kA + B)^T = k A^T + B^T.
(h) If A is an invertible matrix, then so is A^T.
(i) If p(x) = a0 + a1 x + a2 x^2 + ··· + am x^m and I is an identity matrix, then p(I) = (a0 + a1 + a2 + ··· + am) I.
(j) A square matrix containing a row or column of zeros cannot be invertible.
(k) The sum of two invertible matrices of the same size must be invertible.

1.5 Elementary Matrices and a Method for Finding A^{-1}

In this section we will develop an algorithm for finding the inverse of a matrix, and we will discuss some of the basic properties of invertible matrices.

In Section 1.1 we defined three elementary row operations on a matrix A:

1. Multiply a row by a nonzero constant c.
2. Interchange two rows.
3. Add a constant c times one row to another.

It should be evident that if we let B be the matrix that results from A by performing one of the operations in this list, then the matrix A can be recovered from B by performing the corresponding operation in the following list:

1. Multiply the same row by 1/c.
2. Interchange the same two rows.
3. If B resulted by adding c times row r1 of A to row r2, then add -c times r1 to row r2.

It follows that if B is obtained from A by performing a sequence of elementary row operations, then there is a second sequence of elementary row operations, which when applied to B recovers A.
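The recovery claim above (each elementary row operation is undone by its corresponding inverse operation) can be checked on a sample matrix. The NumPy sketch below is our aside; the matrix and the constant c = 7 are chosen only for illustration.

```python
import numpy as np

A = np.array([[1, 4], [2, 5], [3, 6]])
B = A.copy()

# Operation 3: add c = 7 times row 0 to row 1
B[1] += 7 * B[0]

# Inverse operation: add -c times row 0 to row 1, recovering A
B[1] += -7 * B[0]
print((B == A).all())   # True
```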
Accordingly, we make the following definition.

DEFINITION 1 Matrices A and B are said to be row equivalent if either (hence each) can be obtained from the other by a sequence of elementary row operations.

Our next goal is to show how matrix multiplication can be used to carry out an elementary row operation.

DEFINITION 2 A matrix E is called an elementary matrix if it can be obtained from an identity matrix by performing a single elementary row operation.

> EXAMPLE 1 Elementary Matrices and Row Operations

Listed below are four elementary matrices and the operations that produce them.

[1 0; 0 -3]   Multiply the second row of I2 by -3.
[1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0]   Interchange the second and fourth rows of I4.
[1 0 3; 0 1 0; 0 0 1]   Add 3 times the third row of I3 to the first row.
[1 0 0; 0 1 0; 0 0 1]   Multiply the first row of I3 by 1.  ◄

The following theorem, whose proof is left as an exercise, shows that when a matrix A is multiplied on the left by an elementary matrix E, the effect is to perform an elementary row operation on A.

THEOREM 1.5.1 Row Operations by Matrix Multiplication
If the elementary matrix E results from performing a certain row operation on I_m and if A is an m × n matrix, then the product EA is the matrix that results when this same row operation is performed on A.

> EXAMPLE 2 Using Elementary Matrices

Consider the matrix

A = [1 0 2 3; 2 -1 3 6; 1 4 4 0]

and consider the elementary matrix

E = [1 0 0; 0 1 0; 3 0 1]

which results from adding 3 times the first row of I3 to the third row. The product EA is

EA = [1 0 2 3; 2 -1 3 6; 4 4 10 9]

which is precisely the matrix that results when we add 3 times the first row of A to the third row. ◄

We know from the discussion at the beginning of this section that if E is an elementary matrix that results from performing an elementary row operation on an identity matrix I, then there is a second elementary row operation, which when applied to E, produces I back again. Table 1 lists these operations. The operations on the right side of the table are called the inverse operations of the corresponding operations on the left.
Table 1

Row Operation on I That Produces E | Row Operation on E That Reproduces I
Multiply row i by c ≠ 0 | Multiply row i by 1/c
Interchange rows i and j | Interchange rows i and j
Add c times row i to row j | Add -c times row i to row j

> EXAMPLE 3 Row Operations and Inverse Row Operations

In each of the following, an elementary row operation is applied to the 2 × 2 identity matrix to obtain an elementary matrix E, then E is restored to the identity matrix by applying the inverse row operation.

[1 0; 0 1] → [1 0; 0 7] → [1 0; 0 1]
(Multiply the second row by 7.)  (Multiply the second row by 1/7.)

[1 0; 0 1] → [0 1; 1 0] → [1 0; 0 1]
(Interchange the first and second rows.)  (Interchange the first and second rows.)

[1 0; 0 1] → [1 5; 0 1] → [1 0; 0 1]
(Add 5 times the second row to the first.)  (Add -5 times the second row to the first.)  ◄

The next theorem is a key result about invertibility of elementary matrices. It will be a building block for many results that follow.

THEOREM 1.5.2 Every elementary matrix is invertible, and the inverse is also an elementary matrix.

Proof If E is an elementary matrix, then E results by performing some row operation on I. Let E0 be the matrix that results when the inverse of this operation is performed on I. Applying Theorem 1.5.1 and using the fact that inverse row operations cancel the effect of each other, it follows that

E0 E = I  and  E E0 = I

Thus, the elementary matrix E0 is the inverse of E. ◄

Equivalence Theorem

Note: It may make the logic of our proof of Theorem 1.5.3 more apparent to write the implications

(a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a)

This makes it evident visually that the validity of any one statement implies the validity of all the others, and hence that the falsity of any one implies the falsity of the others.

One of our objectives as we progress through this text is to show how seemingly diverse ideas in linear algebra are related. The following theorem, which relates results we have obtained about invertibility of matrices, homogeneous linear systems, reduced row echelon forms, and elementary matrices, is our first step in that direction. As we study new topics, more statements will be added to this theorem.
THEOREM 1.5.3 Equivalent Statements
If A is an n × n matrix, then the following statements are equivalent, that is, all true or all false.
(a) A is invertible.
(b) Ax = 0 has only the trivial solution.
(c) The reduced row echelon form of A is I_n.
(d) A is expressible as a product of elementary matrices.

Proof We will prove the equivalence by establishing the chain of implications: (a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a).

(a) ⇒ (b) Assume A is invertible and let x0 be any solution of Ax = 0. Multiplying both sides of this equation by the matrix A^{-1} gives A^{-1}(A x0) = A^{-1} 0, or (A^{-1} A) x0 = 0, or I x0 = 0, or x0 = 0. Thus, Ax = 0 has only the trivial solution.

(b) ⇒ (c) Let Ax = 0 be the matrix form of the system

a11 x1 + a12 x2 + ··· + a1n xn = 0
a21 x1 + a22 x2 + ··· + a2n xn = 0
  ⋮
an1 x1 + an2 x2 + ··· + ann xn = 0   (1)

and assume that the system has only the trivial solution. If we solve by Gauss-Jordan elimination, then the system of equations corresponding to the reduced row echelon form of the augmented matrix will be

x1 = 0
  x2 = 0
    ⋮
      xn = 0   (2)

Thus the augmented matrix

[a11 a12 ··· a1n 0; a21 a22 ··· a2n 0; ...; an1 an2 ··· ann 0]

for (1) can be reduced to the augmented matrix

[1 0 ··· 0 0; 0 1 ··· 0 0; ...; 0 0 ··· 1 0]
for (2) by a sequence of elementary row operations. If we disregard the last column (all zeros) in each of these matrices, we can conclude that the reduced row echelon form of A is I_n.

(c) ⇒ (d) Assume that the reduced row echelon form of A is I_n, so that A can be reduced to I_n by a finite sequence of elementary row operations. By Theorem 1.5.1, each of these operations can be accomplished by multiplying on the left by an appropriate elementary matrix. Thus we can find elementary matrices E1, E2, ..., Ek such that

Ek ··· E2 E1 A = I_n   (3)

By Theorem 1.5.2, the matrices E1, E2, ..., Ek are invertible. Multiplying both sides of Equation (3) on the left successively by Ek^{-1}, ..., E2^{-1}, E1^{-1} we obtain

A = E1^{-1} E2^{-1} ··· Ek^{-1} I_n = E1^{-1} E2^{-1} ··· Ek^{-1}   (4)

By Theorem 1.5.2, this equation expresses A as a product of elementary matrices.

(d) ⇒ (a) If A is a product of elementary matrices, then from Theorems 1.4.7 and 1.5.2, the matrix A is a product of invertible matrices and hence is invertible. ◄

A Method for Inverting Matrices

As a first application of Theorem 1.5.3, we will develop a procedure (or algorithm) that can be used to tell whether a given matrix is invertible, and if so, produce its inverse. To derive this algorithm, assume for the moment that A is an invertible n × n matrix. In Equation (3), the elementary matrices execute a sequence of row operations that reduce A to I_n. If we multiply both sides of this equation on the right by A^{-1} and simplify, we obtain

A^{-1} = Ek ··· E2 E1 I_n

But this equation tells us that the same sequence of row operations that reduces A to I_n will transform I_n to A^{-1}. Thus, we have established the following result.

Inversion Algorithm To find the inverse of an invertible matrix A, find a sequence of elementary row operations that reduces A to the identity and then perform that same sequence of operations on I_n to obtain A^{-1}.

A simple method for carrying out this procedure is given in the following example.

> EXAMPLE 4 Using Row Operations to Find A^{-1}

Find the inverse of

A = [1 2 3; 2 5 3; 1 0 8]

Solution We want to reduce A to the identity matrix by row operations and simultaneously apply these operations to I to produce A^{-1}. To accomplish this we will adjoin the identity matrix to the right side of A, thereby producing a partitioned matrix of the form

[A | I]

Then we will apply row operations to this matrix until the left side is reduced to I; these operations will convert the right side to A^{-1}, so the final matrix will have the form

[I | A^{-1}]

The computations are as follows:

[1 2 3 | 1 0 0; 2 5 3 | 0 1 0; 1 0 8 | 0 0 1]

[1 2 3 | 1 0 0; 0 1 -3 | -2 1 0; 0 -2 5 | -1 0 1]
(We added -2 times the first row to the second and -1 times the first row to the third.)

[1 2 3 | 1 0 0; 0 1 -3 | -2 1 0; 0 0 -1 | -5 2 1]
(We added 2 times the second row to the third.)

[1 2 3 | 1 0 0; 0 1 -3 | -2 1 0; 0 0 1 | 5 -2 -1]
(We multiplied the third row by -1.)

[1 2 0 | -14 6 3; 0 1 0 | 13 -5 -3; 0 0 1 | 5 -2 -1]
(We added 3 times the third row to the second and -3 times the third row to the first.)

[1 0 0 | -40 16 9; 0 1 0 | 13 -5 -3; 0 0 1 | 5 -2 -1]
(We added -2 times the second row to the first.)

Thus,

A^{-1} = [-40 16 9; 13 -5 -3; 5 -2 -1]  ◄

Often it will not be known in advance if a given n × n matrix A is invertible.
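The inversion algorithm can be sketched in code. The function below (the name `invert` is ours) performs Gauss-Jordan reduction of [A | I] with NumPy, as a minimal illustration without the pivoting refinements a production routine would use; when no nonzero pivot can be found it reports that the matrix is not invertible.

```python
import numpy as np

def invert(A):
    """Gauss-Jordan reduction of [A | I]; a sketch, not a production routine."""
    n = len(A)
    M = np.hstack([np.array(A, dtype=float), np.eye(n)])
    for i in range(n):
        # Find a row with a nonzero pivot in column i and swap it up
        p = next((r for r in range(i, n) if M[r, i] != 0), None)
        if p is None:
            raise ValueError("matrix is not invertible")
        M[[i, p]] = M[[p, i]]
        M[i] /= M[i, i]                      # make the pivot 1
        for r in range(n):
            if r != i:
                M[r] -= M[r, i] * M[i]       # clear the rest of the column
    return M[:, n:]                          # right half is A^{-1}

A = [[1, 2, 3], [2, 5, 3], [1, 0, 8]]        # the matrix of Example 4
print(invert(A))   # [[-40. 16. 9.] [13. -5. -3.] [5. -2. -1.]]
```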
However, if it is not, then by parts (a) and (c) of Theorem 1.5.3 it will be impossible to reduce A to I_n by elementary row operations. This will be signaled by a row of zeros appearing on the left side of the partition at some stage of the inversion algorithm. If this occurs, then you can stop the computations and conclude that A is not invertible.

> EXAMPLE 5 Showing That a Matrix Is Not Invertible

Consider the matrix

A = [1 6 4; 2 4 -1; -1 2 5]

Applying the procedure of Example 4 yields

[1 6 4 | 1 0 0; 2 4 -1 | 0 1 0; -1 2 5 | 0 0 1]

[1 6 4 | 1 0 0; 0 -8 -9 | -2 1 0; 0 8 9 | 1 0 1]
(We added -2 times the first row to the second and added the first row to the third.)

[1 6 4 | 1 0 0; 0 -8 -9 | -2 1 0; 0 0 0 | -1 1 1]
(We added the second row to the third.)

Since we have obtained a row of zeros on the left side, A is not invertible. ◄

> EXAMPLE 6 Analyzing Homogeneous Systems

Use Theorem 1.5.3 to determine whether the given homogeneous system has nontrivial solutions.

(a) x1 + 2x2 + 3x3 = 0     (b) x1 + 6x2 + 4x3 = 0
    2x1 + 5x2 + 3x3 = 0        2x1 + 4x2 - x3 = 0
    x1 + 8x3 = 0               -x1 + 2x2 + 5x3 = 0

Solution From parts (a) and (b) of Theorem 1.5.3, a homogeneous linear system has only the trivial solution if and only if its coefficient matrix is invertible. From Examples 4 and 5 the coefficient matrix of system (a) is invertible and that of system (b) is not. Thus, system (a) has only the trivial solution whereas system (b) has nontrivial solutions. ◄

Concept Review
• Row equivalent matrices • Elementary matrix • Inverse operations • Inversion algorithm

Skills
• Determine whether a given square matrix is an elementary matrix.
• Determine whether two square matrices are row equivalent.
• Apply the inverse of a given elementary row operation to a matrix.
• Apply elementary row operations to reduce a given square matrix to the identity matrix.
• Understand the relationships between statements that are equivalent to the invertibility of a square matrix (Theorem 1.5.3).
• Use the inversion algorithm to find the inverse of an invertible matrix.
• Express an invertible matrix as a product of elementary matrices.

True-False Exercises

In parts (a)–(g) determine whether the statement is true or false, and justify your answer.

(a) The product of two elementary matrices of the same size must be an elementary matrix.

(b) Every elementary matrix is invertible.

(c) If A and B are row equivalent, and if B and C are row equivalent, then A and C are row equivalent.

(d) If A is an n × n matrix that is not invertible, then the linear system Ax = 0 has infinitely many solutions.

(e) If A is an n × n matrix that is not invertible, then the matrix obtained by interchanging two rows of A cannot be invertible.

(f) If A is invertible and a multiple of the first row of A is added to the second row, then the resulting matrix is invertible.

(g) An expression of an invertible matrix A as a product of elementary matrices is unique.

1.6  More on Linear Systems and Invertible Matrices

In this section we will show how the inverse of a matrix can be used to solve a linear system, and we will develop some more results about invertible matrices.

Number of Solutions of a Linear System

In Section 1.1 we made the statement (based on Figures 1.1.1 and 1.1.2) that every linear system has either no solutions, exactly one solution, or infinitely many solutions. We are now in a position to prove this fundamental result.

THEOREM 1.6.1  A system of linear equations has zero, one, or infinitely many solutions. There are no other possibilities.

Proof  If Ax = b is a system of linear equations, exactly one of the following is true: (a) the system has no solutions, (b) the system has exactly one solution, or (c) the system has more than one solution. The proof will be complete if we can show that the system has infinitely many solutions in case (c).

Assume that Ax = b has more than one solution, and let x0 = x1 − x2, where x1 and x2 are any two distinct solutions. Because x1 and x2 are distinct, the
matrix x0 is nonzero; moreover,

    Ax0 = A(x1 − x2) = Ax1 − Ax2 = b − b = 0

If we now let k be any scalar, then

    A(x1 + kx0) = Ax1 + A(kx0) = Ax1 + k(Ax0) = b + k0 = b + 0 = b

But this says that x1 + kx0 is a solution of Ax = b. Since x0 is nonzero and there are infinitely many choices for k, the system Ax = b has infinitely many solutions. ◄

Solving Linear Systems by Matrix Inversion

Thus far we have studied two procedures for solving linear systems: Gauss–Jordan elimination and Gaussian elimination. The following theorem provides an actual formula for the solution of a linear system of n equations in n unknowns in the case where the coefficient matrix is invertible.

THEOREM 1.6.2  If A is an invertible n × n matrix, then for each n × 1 matrix b, the system of equations Ax = b has exactly one solution, namely, x = A⁻¹b.

Proof  Since A(A⁻¹b) = b, it follows that x = A⁻¹b is a solution of Ax = b. To show that this is the only solution, we will assume that x0 is an arbitrary solution and then show that x0 must be the solution A⁻¹b. If x0 is any solution of Ax = b, then Ax0 = b. Multiplying both sides of this equation by A⁻¹, we obtain x0 = A⁻¹b. ◄

Keep in mind that the method of Example 1 applies only when the system has as many equations as unknowns and the coefficient matrix is invertible.

► EXAMPLE 1  Solution of a Linear System Using A⁻¹

Consider the system of linear equations

     x1 + 2x2 + 3x3 =  5
    2x1 + 5x2 + 3x3 =  3
     x1       + 8x3 = 17

In matrix form this system can be written as Ax = b, where

    A = [ 1  2  3 ]     x = [ x1 ]     b = [  5 ]
        [ 2  5  3 ]         [ x2 ]         [  3 ]
        [ 1  0  8 ]         [ x3 ]         [ 17 ]

In Example 4 of the preceding section, we showed that A is invertible and

    A⁻¹ = [ -40  16   9 ]
          [  13  -5  -3 ]
          [   5  -2  -1 ]

By Theorem 1.6.2, the solution of the system is

    x = A⁻¹b = [ -40  16   9 ] [  5 ]   [  1 ]
               [  13  -5  -3 ] [  3 ] = [ -1 ]
               [   5  -2  -1 ] [ 17 ]   [  2 ]

or x1 = 1, x2 = −1, x3 = 2. ◄

Linear Systems with a Common Coefficient Matrix

Frequently, one is concerned with solving a sequence of systems

    Ax = b1,  Ax = b2,  Ax = b3, ...,  Ax = bk

each of which has the same square coefficient matrix A. If A is invertible, then the solutions

    x1 = A⁻¹b1,  x2 = A⁻¹b2,  x3 = A⁻¹b3, ...,
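Once A⁻¹ is in hand, Theorem 1.6.2 turns solving Ax = b into a single matrix-vector product. A minimal sketch (not from the text), using the inverse computed in Example 4 of Section 1.5 and the right-hand side of Example 1:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# A^(-1) from Example 4 of Section 1.5, b from Example 1:
A_inv = [[-40, 16, 9], [13, -5, -3], [5, -2, -1]]
b = [5, 3, 17]
print(matvec(A_inv, b))   # [1, -1, 2], i.e. x1 = 1, x2 = -1, x3 = 2
```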
    xk = A⁻¹bk

can be obtained with one matrix inversion and k matrix multiplications. An efficient way to do this is to form the partitioned matrix

    [ A | b1 | b2 | ··· | bk ]                                              (1)

in which the coefficient matrix A is "augmented" by all k of the matrices b1, b2, ..., bk, and then reduce (1) to reduced row echelon form by Gauss–Jordan elimination. In this way we can solve all k systems at once. This method has the added advantage that it applies even when A is not invertible.

► EXAMPLE 2  Solving Two Linear Systems at Once

Solve the systems

    (a)  x1 + 2x2 + 3x3 = 4        (b)  x1 + 2x2 + 3x3 =  1
        2x1 + 5x2 + 3x3 = 5            2x1 + 5x2 + 3x3 =  6
         x1       + 8x3 = 9             x1       + 8x3 = -6

Solution  The two systems have the same coefficient matrix. If we augment this coefficient matrix with the columns of constants on the right sides of these systems, we obtain

    [ 1  2  3 | 4 |  1 ]
    [ 2  5  3 | 5 |  6 ]
    [ 1  0  8 | 9 | -6 ]

Reducing this matrix to reduced row echelon form yields (verify)

    [ 1  0  0 | 1 |  2 ]
    [ 0  1  0 | 0 |  1 ]
    [ 0  0  1 | 1 | -1 ]

It follows from the last two columns that the solution of system (a) is x1 = 1, x2 = 0, x3 = 1 and the solution of system (b) is x1 = 2, x2 = 1, x3 = −1. ◄

Properties of Invertible Matrices

Up to now, to show that an n × n matrix A is invertible, it has been necessary to find an n × n matrix B such that

    AB = I  and  BA = I

The next theorem shows that if we produce an n × n matrix B satisfying either condition, then the other condition holds automatically.

THEOREM 1.6.3  Let A be a square matrix.
(a) If B is a square matrix satisfying BA = I, then B = A⁻¹.
(b) If B is a square matrix satisfying AB = I, then B = A⁻¹.

We will prove part (a) and leave part (b) as an exercise.

Proof (a)  Assume that BA = I. If we can show that A is invertible, the proof can be completed by multiplying BA = I on both sides by A⁻¹ to obtain

    BAA⁻¹ = IA⁻¹  or  BI = IA⁻¹  or  B = A⁻¹

To show that A is invertible, it suffices to show that the system Ax = 0 has only the trivial solution (see Theorem 1.5.3). Let x0 be any solution of this system. If we multiply both sides of Ax0 = 0 on the left by B, we obtain BAx0 = B0 or Ix0 = 0 or x0 = 0.
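The bookkeeping of Example 2 (augment A by all the right-hand sides, then run Gauss–Jordan once) can be sketched as follows. The function below is our own illustration, not from the text, and assumes A is square and invertible:

```python
from fractions import Fraction

def solve_many(A, bs):
    """Solve Ax = b for several right-hand sides at once by reducing
    the partitioned matrix [A | b1 | ... | bk] of Formula (1) to
    reduced row echelon form. Assumes A is square and invertible."""
    n, k = len(A), len(bs)
    M = [[Fraction(x) for x in row] + [Fraction(bs[j][i]) for j in range(k)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)  # invertibility assumed
        M[c], M[piv] = M[piv], M[c]
        p = M[c][c]
        M[c] = [x / p for x in M[c]]          # make the pivot 1
        for r in range(n):
            if r != c:                        # clear the rest of the column
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    # Each augmented column is now the solution of the corresponding system.
    return [[M[i][n + j] for i in range(n)] for j in range(k)]

# The two systems of Example 2 share the coefficient matrix of Example 4:
A = [[1, 2, 3], [2, 5, 3], [1, 0, 8]]
sols = solve_many(A, [[4, 5, 9], [1, 6, -6]])
print(sols == [[1, 0, 1], [2, 1, -1]])   # True: (a) gives (1, 0, 1), (b) gives (2, 1, -1)
```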
Thus, the system of equations Ax = 0 has only the trivial solution. ◄

Equivalence Theorem

We are now in a position to add two more statements to the four given in Theorem 1.5.3.

THEOREM 1.6.4  Equivalent Statements

If A is an n × n matrix, then the following are equivalent.
(a) A is invertible.
(b) Ax = 0 has only the trivial solution.
(c) The reduced row echelon form of A is I_n.
(d) A is expressible as a product of elementary matrices.
(e) Ax = b is consistent for every n × 1 matrix b.
(f) Ax = b has exactly one solution for every n × 1 matrix b.

Proof  Since we proved in Theorem 1.5.3 that (a), (b), (c), and (d) are equivalent, it will be sufficient to prove that (a) ⇒ (f) ⇒ (e) ⇒ (a).

(a) ⇒ (f)  This was already proved in Theorem 1.6.2.

(f) ⇒ (e)  This is almost self-evident, for if Ax = b has exactly one solution for every n × 1 matrix b, then Ax = b is consistent for every n × 1 matrix b.

(e) ⇒ (a)  If the system Ax = b is consistent for every n × 1 matrix b, then, in particular, this is so for the systems

    Ax = e1,  Ax = e2, ...,  Ax = en

where ej denotes the jth column of I_n. Let x1, x2, ..., xn be solutions of the respective systems, and let us form an n × n matrix C having these solutions as columns. Thus C has the form

    C = [ x1 | x2 | ··· | xn ]

As discussed in Section 1.3, the successive columns of the product AC will be

    Ax1, Ax2, ..., Axn

[see Formula (8) of Section 1.3]. Thus,

    AC = [ Ax1 | Ax2 | ··· | Axn ] = [ e1 | e2 | ··· | en ] = I

By part (b) of Theorem 1.6.3, it follows that C = A⁻¹. Thus, A is invertible. ◄

It follows from the equivalency of parts (e) and (f) that if you can show that Ax = b has at least one solution for every n × 1 matrix b, then you can conclude that it has exactly one solution for every n × 1 matrix b.

We know from earlier work that invertible matrix factors produce an invertible product. Conversely, the following theorem shows that if the product of square matrices is invertible, then the factors themselves must be invertible.

THEOREM 1.6.5  Let A and B be square matrices of the same size.
If AB is invertible, then A and B must also be invertible.

In our later work the following fundamental problem will occur frequently in various contexts.

A Fundamental Problem  Let A be a fixed m × n matrix. Find all m × 1 matrices b such that the system of equations Ax = b is consistent.

If A is an invertible matrix, Theorem 1.6.2 completely solves this problem by asserting that for every m × 1 matrix b, the linear system Ax = b has the unique solution x = A⁻¹b. If A is not square, or if A is square but not invertible, then Theorem 1.6.2 does not apply. In these cases the matrix b must usually satisfy certain conditions in order for Ax = b to be consistent. The following example illustrates how the methods of Section 1.2 can be used to determine such conditions.

► EXAMPLE 3  Determining Consistency by Elimination

What conditions must b1, b2, and b3 satisfy in order for the system of equations

     x1 + x2 + 2x3 = b1
     x1      +  x3 = b2
    2x1 + x2 + 3x3 = b3

to be consistent?

Solution  The augmented matrix is

    [ 1  1  2 | b1 ]
    [ 1  0  1 | b2 ]
    [ 2  1  3 | b3 ]

which can be reduced to row echelon form as follows:

    [ 1  1  2 | b1       ]    -1 times the first row was added to the second
    [ 0 -1 -1 | b2 - b1  ]    and -2 times the first row was added to the
    [ 0 -1 -1 | b3 - 2b1 ]    third.

    [ 1  1  2 | b1       ]
    [ 0  1  1 | b1 - b2  ]    The second row was multiplied by -1.
    [ 0 -1 -1 | b3 - 2b1 ]

    [ 1  1  2 | b1           ]
    [ 0  1  1 | b1 - b2      ]    The second row was added to the third.
    [ 0  0  0 | b3 - b2 - b1 ]

It is now evident from the third row in the matrix that the system has a solution if and only if b1, b2, and b3 satisfy the condition

    b3 - b2 - b1 = 0  or  b3 = b1 + b2

To express this condition another way, Ax = b is consistent if and only if b is a matrix of the form

    b = [ b1      ]
        [ b2      ]
        [ b1 + b2 ]

where b1 and b2 are arbitrary. ◄

► EXAMPLE 4  Determining Consistency by Elimination

What conditions must b1, b2, and b3 satisfy in order for the system of equations

     x1 + 2x2 + 3x3 = b1
    2x1 + 5x2 + 3x3 = b2
     x1       + 8x3 = b3

to be consistent?

Solution  The augmented matrix is

    [ 1  2  3 | b1 ]
    [ 2  5  3 | b2 ]
    [ 1  0  8 | b3 ]

Reducing this to reduced row echelon form yields (verify)
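Examples 3 and 4 test consistency by hand; in code, the same idea is to eliminate on the augmented matrix and look for a row whose only nonzero entry is in the augmented column. A sketch of our own (not from the text), checked against the condition b3 = b1 + b2 derived in Example 3:

```python
from fractions import Fraction

def is_consistent(A, b):
    """Decide whether Ax = b is consistent by forward elimination on the
    augmented matrix [A | b]: a row [0 ... 0 | nonzero] means no solution."""
    m = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * v for a, v in zip(M[i], M[r])]
        r += 1
    # Consistent iff no row is all zeros except for its augmented entry.
    return all(not (all(x == 0 for x in row[:-1]) and row[-1] != 0) for row in M)

# The system of Example 3 is consistent exactly when b3 = b1 + b2:
A = [[1, 1, 2], [1, 0, 1], [2, 1, 3]]
print(is_consistent(A, [1, 2, 3]))   # True  (3 = 1 + 2)
print(is_consistent(A, [1, 2, 4]))   # False (4 != 1 + 2)
```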
    [ 1  0  0 | -40b1 + 16b2 + 9b3 ]
    [ 0  1  0 |  13b1 -  5b2 - 3b3 ]                                        (2)
    [ 0  0  1 |   5b1 -  2b2 -  b3 ]

In this case there are no restrictions on b1, b2, and b3, so the system has the unique solution

    x1 = -40b1 + 16b2 + 9b3,  x2 = 13b1 - 5b2 - 3b3,  x3 = 5b1 - 2b2 - b3   (3)

for all values of b1, b2, and b3. ◄

What does the solution in Example 4 tell you about the coefficient matrix of the system?

Skills
• Determine whether a linear system of equations has no solutions, exactly one solution, or infinitely many solutions.
• Solve a linear system by inverting its coefficient matrix.
• Solve multiple linear systems with the same coefficient matrix simultaneously.
• Be familiar with the additional conditions of invertibility stated in the Equivalence Theorem.

True-False Exercises

In parts (a)–(g) determine whether the statement is true or false, and justify your answer.

(a) It is impossible for a system of linear equations to have exactly two solutions.

(b) If the linear system Ax = b has a unique solution, then the linear system Ax = c also must have a unique solution.

(c) If A and B are n × n matrices such that AB = I_n, then BA = I_n.

(d) If A and B are row equivalent matrices, then the linear systems Ax = 0 and Bx = 0 have the same solution set.

(e) If A is an n × n matrix and S is an n × n invertible matrix, then if x is a solution to the linear system (S⁻¹AS)x = b, then Sx is a solution to the linear system Ay = Sb.

(f) Let A be an n × n matrix. The linear system Ax = 4x has a unique solution if and only if A − 4I is an invertible matrix.

(g) Let A and B be n × n matrices. If AB is invertible, then both A and B must be invertible.

1.7  Diagonal, Triangular, and Symmetric Matrices

In this section we will discuss matrices that have various special forms. These matrices arise in a wide variety of applications and will play an important role in our subsequent work.

Diagonal Matrices

A square matrix in which all the entries off the main diagonal are zero is called a diagonal
matrix. Here are some examples:

    [ 2  0 ]    [ 1  0  0 ]    [ 6  0  0  0 ]
    [ 0 -5 ]    [ 0  1  0 ]    [ 0 -4  0  0 ]    [ 0 ]
                [ 0  0  1 ]    [ 0  0  0  0 ]
                               [ 0  0  0  8 ]

A general n × n diagonal matrix D can be written as

    D = [ d1  0  ···  0 ]
        [  0 d2  ···  0 ]                                                   (1)
        [  ⋮  ⋮       ⋮ ]
        [  0  0  ··· dn ]

A diagonal matrix is invertible if and only if all of its diagonal entries are nonzero; in this case the inverse of (1) is

    D⁻¹ = [ 1/d1   0   ···   0  ]
          [   0  1/d2  ···   0  ]                                           (2)
          [   ⋮    ⋮          ⋮ ]
          [   0    0   ··· 1/dn ]

Confirm Formula (2) by showing that DD⁻¹ = D⁻¹D = I.

Powers of diagonal matrices are easy to compute; we leave it for you to verify that if D is the diagonal matrix (1) and k is a positive integer, then

    Dᵏ = [ d1ᵏ  0  ···  0  ]
         [  0  d2ᵏ ···  0  ]                                                (3)
         [  ⋮   ⋮       ⋮  ]
         [  0   0  ··· dnᵏ ]

► EXAMPLE 1  Inverses and Powers of Diagonal Matrices

If

    A = [ 1  0  0 ]
        [ 0 -3  0 ]
        [ 0  0  2 ]

then

    A⁻¹ = [ 1   0    0  ]    A⁵ = [ 1    0   0 ]    A⁻⁵ = [ 1    0      0  ]
          [ 0 -1/3   0  ]         [ 0 -243   0 ]          [ 0 -1/243    0  ]
          [ 0   0   1/2 ]         [ 0    0  32 ]          [ 0    0    1/32 ]  ◄

Matrix products that involve diagonal factors are especially easy to compute. For example,

    [ d1  0  0 ] [ a11 a12 a13 a14 ]   [ d1a11 d1a12 d1a13 d1a14 ]
    [  0 d2  0 ] [ a21 a22 a23 a24 ] = [ d2a21 d2a22 d2a23 d2a24 ]
    [  0  0 d3 ] [ a31 a32 a33 a34 ]   [ d3a31 d3a32 d3a33 d3a34 ]

    [ a11 a12 a13 ]
    [ a21 a22 a23 ] [ d1  0  0 ]   [ d1a11 d2a12 d3a13 ]
    [ a31 a32 a33 ] [  0 d2  0 ] = [ d1a21 d2a22 d3a23 ]
    [ a41 a42 a43 ] [  0  0 d3 ]   [ d1a31 d2a32 d3a33 ]
                                   [ d1a41 d2a42 d3a43 ]

In words, to multiply a matrix A on the left by a diagonal matrix D, one can multiply successive rows of A by the successive diagonal entries of D, and to multiply A on the right by D, one can multiply successive columns of A by the successive diagonal entries of D.

Triangular Matrices

A square matrix in which all the entries above the main diagonal are zero is called lower triangular, and a square matrix in which all the entries below the main diagonal are zero is called upper triangular. A matrix that is either upper triangular or lower triangular is called triangular.

► EXAMPLE 2  Upper and Lower Triangular Matrices

    A general 4 × 4                  A general 4 × 4
    upper triangular matrix          lower triangular matrix

    [ a11 a12 a13 a14 ]              [ a11  0   0   0  ]
    [  0  a22 a23 a24 ]              [ a21 a22  0   0  ]
    [  0   0  a33 a34 ]              [ a31 a32 a33  0  ]
    [  0   0   0  a44 ]              [ a41 a42 a43 a44 ]  ◄
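Formulas (2) and (3) and the row/column-scaling rule translate directly into code. The following short sketch is our own illustration (not from the text), storing a diagonal matrix as just its list of diagonal entries and using the matrix of Example 1:

```python
from fractions import Fraction

def diag_power(d, k):
    """Raise diag(d1, ..., dn) to an integer power entrywise (Formula (3));
    a negative k gives the inverse powers, valid when every di is nonzero."""
    return [Fraction(x) ** k for x in d]

def scale_rows(d, A):
    """Left-multiplying by D = diag(d) multiplies row i of A by di."""
    return [[di * x for x in row] for di, row in zip(d, A)]

def scale_cols(A, d):
    """Right-multiplying by D = diag(d) multiplies column j of A by dj."""
    return [[x * dj for x, dj in zip(row, d)] for row in A]

# Example 1: A = diag(1, -3, 2)
print(diag_power([1, -3, 2], 5) == [1, -243, 32])          # True: diagonal of A^5
print(diag_power([1, -3, 2], -5)[1] == Fraction(-1, 243))  # True: middle entry of A^-5
```

Storing only the diagonal makes the special structure explicit: a "product" with a diagonal factor never needs a full matrix multiplication, only a row or column scaling.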
Remark  Observe that diagonal matrices are both upper triangular and lower triangular since they have zeros below and above the main diagonal. Observe also that a square matrix in row echelon form is upper triangular since it has zeros below the main diagonal.

Properties of Triangular Matrices

Example 2 illustrates the following four facts about triangular matrices that we will state without formal proof:

• A square matrix A = [aij] is upper triangular if and only if all entries to the left of the main diagonal are zero; that is, aij = 0 if i > j (Figure 1.7.1).
• A square matrix A = [aij] is lower triangular if and only if all entries to the right of the main diagonal are zero; that is, aij = 0 if i < j (Figure 1.7.1).
• A square matrix A = [aij] is upper triangular if and only if the ith row starts with at least i − 1 zeros for every i.
• A square matrix A = [aij] is lower triangular if and only if the jth column starts with at least j − 1 zeros for every j.

The following theorem lists some of the basic properties of triangular matrices.

THEOREM 1.7.1
(a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.
(b) The product of lower triangular matrices is lower triangular, and the product of upper triangular matrices is upper triangular.
(c) A triangular matrix is invertible if and only if its diagonal entries are all nonzero.
(d) The inverse of an invertible lower triangular matrix is lower triangular, and the inverse of an invertible upper triangular matrix is upper triangular.

Part (a) is evident from the fact that transposing a square matrix can be accomplished by reflecting the entries about the main diagonal; we omit the formal proof. We will prove (b), but we will defer the proofs of (c) and (d) to the next chapter, where we will have the tools to prove those results more efficiently.

Proof (b)  We will prove the result for lower triangular matrices; the proof for upper triangular matrices is similar.
Let A = [aij] and B = [bij] be lower triangular n × n matrices, and let C = [cij] be the product C = AB. We can prove that C is lower triangular by showing that cij = 0 for i < j. If i < j, then in each term aik bkj of cij = ai1 b1j + ai2 b2j + ··· + ain bnj, either k > i (so that aik = 0, since A is lower triangular) or k ≤ i < j (so that bkj = 0, since B is lower triangular); hence every term is zero and cij = 0. ◄

► EXAMPLE 3  Computations with Triangular Matrices

Consider the upper triangular matrices

    A = [ 1  3 -1 ]     B = [ 3 -2  2 ]
        [ 0  2  4 ]         [ 0  0 -1 ]
        [ 0  0  5 ]         [ 0  0  1 ]

It follows from part (c) of Theorem 1.7.1 that the matrix A is invertible but the matrix B is not. Moreover, the theorem also tells us that A⁻¹, AB, and BA must be upper triangular. We leave it for you to confirm these three statements by showing that

    A⁻¹ = [ 1 -3/2  7/5 ]    AB = [ 3 -2 -2 ]    BA = [ 3  5 -1 ]
          [ 0  1/2 -2/5 ]         [ 0  0  2 ]         [ 0  0 -5 ]
          [ 0   0   1/5 ]         [ 0  0  5 ]         [ 0  0  5 ]  ◄

Symmetric Matrices

DEFINITION 1  A square matrix A is said to be symmetric if A = Aᵀ.

► EXAMPLE 4  Symmetric Matrices

The following matrices are symmetric, since each is equal to its own transpose (verify):

    [ 7 -3 ]    [ 1  4  5 ]    [ d1  0  0  0 ]
    [-3  5 ]    [ 4 -3  0 ]    [  0 d2  0  0 ]
                [ 5  0  7 ]    [  0  0 d3  0 ]
                               [  0  0  0 d4 ]  ◄

Remark  It follows from Formula (11) of Section 1.3 that a square matrix A is symmetric if and only if (A)ij = (A)ji for all values of i and j.

The following theorem lists the main algebraic properties of symmetric matrices. The proofs are direct consequences of Theorem 1.4.8 and are omitted.
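Several claims in this section (the product rule of Theorem 1.7.1(b), the diagonal-entry invertibility test of part (c), and the symmetry condition A = Aᵀ) are easy to spot-check numerically. A small sketch of our own, using the matrices of Example 3:

```python
def matmul(A, B):
    """Ordinary matrix product (rows of A against columns of B)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def is_upper_triangular(M):
    """All entries below the main diagonal are zero (a_ij = 0 for i > j)."""
    return all(M[i][j] == 0 for i in range(len(M)) for j in range(i))

def is_symmetric(M):
    """A = A^T, i.e. (A)_ij = (A)_ji for all i and j."""
    return all(M[i][j] == M[j][i] for i in range(len(M)) for j in range(len(M)))

# The matrices of Example 3:
A = [[1, 3, -1], [0, 2, 4], [0, 0, 5]]
B = [[3, -2, 2], [0, 0, -1], [0, 0, 1]]
AB, BA = matmul(A, B), matmul(B, A)
print(is_upper_triangular(AB), is_upper_triangular(BA))  # True True
print(AB)   # [[3, -2, -2], [0, 0, 2], [0, 0, 5]]
# B has a zero diagonal entry, so by Theorem 1.7.1(c) it is not invertible.
print(is_symmetric([[7, -3], [-3, 5]]))   # True
```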
