SCHAUM'S OUTLINE OF THEORY AND PROBLEMS OF MATRIX OPERATIONS

RICHARD BRONSON, Ph.D.
Professor of Mathematics and Computer Science
Fairleigh Dickinson University

SCHAUM'S OUTLINE SERIES
McGRAW-HILL
New York  San Francisco  Washington, D.C.  Auckland  Bogotá  Caracas  Lisbon  London  Madrid  Mexico City  Milan  Montreal  New Delhi  San Juan  Singapore  Sydney  Tokyo  Toronto

To Evy

RICHARD BRONSON, who is Professor and Chairman of Mathematics and Computer Science at Fairleigh Dickinson University, received his Ph.D. in applied mathematics from Stevens Institute of Technology in 1968. Dr. Bronson is currently an associate editor of the journal Simulation, contributing editor to SIAM News, has served as a consultant to Bell Telephone Laboratories, and has published over 25 technical articles and books, the latter including Schaum's Outline of Modern Introductory Differential Equations and Schaum's Outline of Operations Research.

Schaum's Outline of Theory and Problems of MATRIX OPERATIONS

Copyright © 1989 by The McGraw-Hill Companies, Inc. All rights reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without the prior written permission of the publisher.

ISBN 0-07-007974-1

Sponsoring Editor: David Beckwith
Production Supervisor: Louise Karam
Editing Supervisor: Marthe Grice

Library of Congress Cataloging-in-Publication Data
Bronson, Richard
Schaum's outline of theory and problems of matrix operations.
(Schaum's outline series)
Includes index.
1. Matrices. I. Title.
II. Title: Matrix operations.
ISBN 0-07-007974-1

McGraw-Hill
A Division of The McGraw-Hill Companies

Preface

Perhaps no area of mathematics has changed as dramatically as matrices over the last 25 years. This is due both to the advent of the computer and to the introduction and acceptance of matrix methods into other applied disciplines. Computers provide an efficient mechanism for doing iterative computations. This, in turn, has revolutionized the methods used for locating eigenvalues and eigenvectors and has altered the usefulness of many classical techniques, such as those for obtaining inverses and solving simultaneous equations. Relatively new fields, such as operations research, lean heavily on matrix algebra, while established fields, such as economics, probability, and differential equations, continue to expand their reliance on matrices for clarifying and simplifying complex concepts.

This book is an algorithmic approach to matrix operations. The more complicated procedures are given as a series of steps which may be coded in a straightforward manner for computer implementation. The emphasis throughout is on computationally efficient methods. These should be of value to anyone who needs to apply matrix methods to his or her own work.

The material in this book is self-contained; all concepts and procedures are stated directly in terms of matrix operations. There are no prerequisites for using most of this book other than a working knowledge of high school algebra. Some of the applications, however, do require additional expertise, but these are self-evident and are limited to short portions of the book. For example, elementary calculus is needed for the material on differential equations.

Each chapter of this book is divided into three sections. The first introduces concepts and methodology.
The second section consists of completely worked-out problems which clarify the material presented in the first section and which, on occasion, also expand on that development. Finally, there is a section of problems with answers with which the reader can test his or her mastery of the subject matter.

I wish to thank the many individuals who helped make this book a reality. I warmly acknowledge the contributions of William Anderson, whose comments on coverage and content were particularly valuable. I am also grateful to Howard Karp and Martha Kingsley for their suggestions and assistance. Particular thanks are due Edward Millman for his splendid editing and support, David Beckwith of the Schaum staff for overseeing the entire project, and Marthe Grice for technical editing.

RICHARD BRONSON

Contents

Chapter 1  BASIC OPERATIONS
Matrices. Vectors and dot products. Matrix addition and matrix subtraction. Scalar multiplication and matrix multiplication. Row-echelon form. Elementary row and column operations. Rank.

Chapter 2  SIMULTANEOUS LINEAR EQUATIONS
Consistency. Matrix notation. Theory of solutions. Simplifying operations. Gaussian elimination algorithm. Pivoting strategies.

Chapter 3  SQUARE MATRICES
Diagonals. Elementary matrices. LU decomposition. Simultaneous linear equations. Powers of a matrix.

Chapter 4  MATRIX INVERSION
The inverse. Simple inverses. Calculating inverses. Simultaneous linear equations. Properties of the inverse.

Chapter 5  DETERMINANTS
Expansion by cofactors. Properties of determinants. Determinants of partitioned matrices. Pivotal condensation. Inversion by determinants.

Chapter 6  VECTORS
Dimension. Linear dependence and independence. Linear combinations. Properties of linearly dependent vectors. Row rank and column rank.

Chapter 7  EIGENVALUES AND EIGENVECTORS
Characteristic equation. Properties of eigenvalues and eigenvectors.
Linearly independent eigenvectors. Computational considerations. The Cayley-Hamilton theorem.

Chapter 8  FUNCTIONS OF MATRICES
Sequences and series of matrices. Well-defined functions. Computing functions of matrices. The function e^At. Differentiation and integration of matrices. Differential equations. The matrix equation AX + XB = C.

Chapter 9  CANONICAL BASES
Generalized eigenvectors. Chains. Canonical basis. The minimum polynomial.

Chapter 10  SIMILARITY
Similar matrices. Modal matrix. Jordan canonical form. Similarity and the Jordan canonical form of matrices.

Chapter 11  INNER PRODUCTS
Complex conjugates. The inner product. Properties of inner products. Orthogonality. Gram-Schmidt orthogonalization.

Chapter 12  NORMS
Vector norms. Normalized vectors and distance. Matrix norms. Induced norms. Compatibility. Spectral radius.

Chapter 13  HERMITIAN MATRICES
Normal matrices. Hermitian matrices. Real symmetric matrices. The adjoint. Self-adjoint matrices.

Chapter 14  POSITIVE DEFINITE MATRICES
Definite matrices. Tests for positive definiteness. Square roots of matrices. Cholesky decomposition.

Chapter 15  UNITARY TRANSFORMATIONS
Unitary matrices. Schur decomposition. Elementary reflectors. Summary of similarity transformations.

Chapter 16  QUADRATIC FORMS AND CONGRUENCE
Quadratic form. Diagonal form. Congruence. Inertia. Rayleigh quotient.

Chapter 17  NONNEGATIVE MATRICES
Nonnegative and positive matrices. Irreducible matrices. Primitive matrices. Stochastic matrices. Finite Markov chains.

Chapter 18  PATTERNED MATRICES
Circulant matrices. Band matrices. Tridiagonal matrices. Hessenberg form.

Chapter 19  POWER METHODS FOR LOCATING REAL EIGENVALUES
Numerical methods. The power method. The inverse power method. The shifted inverse power method. Gerschgorin's theorem.

Chapter 20  THE QR ALGORITHM
The modified Gram-Schmidt process.
QR decomposition. The QR algorithm. Accelerating convergence.

Chapter 21  GENERALIZED INVERSES
Properties. A formula for generalized inverses. Singular-value decomposition. A stable formula for the generalized inverse. Least-squares solutions.

ANSWERS TO SUPPLEMENTARY PROBLEMS

INDEX

Chapter 1

Basic Operations

MATRICES

A matrix is a rectangular array of elements arranged in horizontal rows and vertical columns, and usually enclosed in brackets. In this book, the elements of a matrix will almost always be numbers or functions of the variable t. A matrix is real-valued (or, simply, real) if all its elements are real numbers or real-valued functions; it is complex-valued if at least one element is a complex number or a complex-valued function. If all its elements are numbers, then a matrix is called a constant matrix.

Example 1.1

    [3  1]      [0.5 sin t   e^t   2]      [1.7   2 + 4i   6   2]
    [0  2]      [    t        1    0]

are all matrices. The first two on the left are real-valued, whereas the third is complex-valued (with i = √-1); the first and third are constant matrices, but the second is not constant.

Matrices are designated by boldface uppercase letters. A general matrix A having r rows and c columns may be written

    A = [a11  a12  ...  a1c]
        [a21  a22  ...  a2c]
        [ .    .          .]
        [ar1  ar2  ...  arc]

where the elements of the matrix are double subscripted to denote location. By convention, the row index precedes the column index; thus, a25 represents the element of A appearing in the second row and fifth column, while a31 represents the element appearing in the third row and first column. A matrix A may also be denoted as [aij], where aij denotes the general element of A appearing in the ith row and jth column.

A matrix having r rows and c columns has order (or size) "r by c," usually written r x c. The matrices in Example 1.1 have order 2 x 2, 2 x 3, and 1 x 4, respectively, from left to right. Two matrices are equal if they have the same order and their corresponding elements are equal.

The transpose of a matrix A, denoted A^T,
is obtained by converting the rows of A into the columns of A^T one at a time in sequence. If A has order m x n, then A^T has order n x m.

Example 1.2  If

    A = [1  2]        then        A^T = [1  3  5]
        [3  4]                          [2  4  6]
        [5  6]

VECTORS AND DOT PRODUCTS

A vector is a matrix having either one row or one column. A matrix consisting of a single row is called a row vector; a matrix having a single column is called a column vector. The dot product A · B of two vectors of the same order is obtained by multiplying together corresponding elements of A and B and then summing the results. The dot product is a scalar, by which we mean it is of the same general type as the elements themselves. (See Problem 1.1.)

MATRIX ADDITION AND MATRIX SUBTRACTION

The sum A + B of two matrices A = [aij] and B = [bij] having the same order is the matrix obtained by adding corresponding elements of A and B. That is,

    A + B = [aij] + [bij] = [aij + bij]

Matrix addition is both associative and commutative. Thus,

    A + (B + C) = (A + B) + C        and        A + B = B + A

(See Problem 1.2.)

The matrix subtraction A - B is defined similarly: A and B must have the same order, and the subtractions must be performed on corresponding elements to yield the matrix [aij - bij]. (See Problem 1.3.)

SCALAR MULTIPLICATION AND MATRIX MULTIPLICATION

For any scalar k (in this book, usually a number or a function of t), the matrix kA (or, equivalently, Ak) is obtained by multiplying every element of A by the scalar k. That is, kA = k[aij] = [kaij]. (See Problem 1.3.)

Let A = [aij] and B = [bij] have orders r x p and p x c, respectively, so that the number of columns of A equals the number of rows of B. Then the product AB is defined to be the matrix C = [cij] of order r x c whose elements are given by

    cij = Σ (k = 1 to p) aik bkj        (i = 1, 2, ..., r;  j = 1, 2, ..., c)

Each element cij of AB is a dot product; it is obtained by forming the transpose of the ith row of A and then taking its dot product with the jth column of B. (See Problems 1.4 through 1.7.)
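The definitions above translate directly into code. The following is a minimal Python sketch — an illustration added here, not part of the original text — of matrix addition, scalar multiplication, and the product formula cij = Σ aik bkj:

```python
def mat_add(A, B):
    # Entrywise sum; A and B must have the same order.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mult(k, A):
    # Multiply every element of A by the scalar k.
    return [[k * a for a in row] for row in A]

def mat_mult(A, B):
    # cij is the dot product of row i of A with column j of B.
    p = len(B)  # the number of columns of A must equal the number of rows of B
    assert all(len(row) == p for row in A)
    return [[sum(A[i][k] * B[k][j] for k in range(p))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 0], [1, 3]]
B = [[4, 1], [2, 5]]
print(mat_add(A, B))   # [[6, 1], [3, 8]]
print(mat_mult(A, B))  # [[8, 2], [10, 16]]
```

Note that `mat_mult` raises an assertion error when the orders do not conform, mirroring the requirement that AB is defined only when A has as many columns as B has rows.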
Matrix multiplication is associative and distributes over addition and subtraction; in general, it is not commutative. Thus,

    A(BC) = (AB)C        A(B + C) = AB + AC        (B - C)A = BA - CA

but, in general, AB ≠ BA. Also,

    (AB)^T = B^T A^T

ROW-ECHELON FORM

A zero row in a matrix is a row whose elements are all zero, and a nonzero row is one that contains at least one nonzero element. A matrix is a zero matrix, denoted 0, if it contains only zero rows.

A matrix is in row-echelon form if it satisfies four conditions:

(R1): All nonzero rows precede (that is, appear above) zero rows when both types are contained in the matrix.
(R2): The first (leftmost) nonzero element of each nonzero row is unity.
(R3): When the first nonzero element of a row appears in column c, then all elements in column c in succeeding rows are zero.
(R4): The first nonzero element of any nonzero row appears in a later column (further to the right) than the first nonzero element of any preceding row.

Example 1.3  The matrix

    [1  0  3  5]
    [0  1  2  0]
    [0  0  0  0]

satisfies all four conditions and so is in row-echelon form. (See Problems 1.11 to 1.13 and 1.18.)

ELEMENTARY ROW AND COLUMN OPERATIONS

There are three elementary row operations which may be used to transform a matrix into row-echelon form. The origins of these operations are discussed in Chapter 2; the operations themselves are:

(E1): Interchange any two rows.
(E2): Multiply the elements of any row by a nonzero scalar.
(E3): Add to any row, element by element, a scalar times the corresponding elements of another row.

Three elementary column operations are defined analogously.

An algorithm for using elementary row operations to transform a matrix into row-echelon form is as follows:

STEP 1.1: Let R denote the work row, and initialize R = 1 (so the top row is the first work row).
STEP 1.2: Find the first column containing a nonzero element in either row R or any succeeding row. If no such column exists, stop; the transformation is complete.
Otherwise, let C denote this column.
STEP 1.3: Beginning with row R and continuing through successive rows, locate the first row having a nonzero element in column C. If this row is not row R, interchange it with row R (elementary row operation E1). Row R will now have a nonzero element in column C. This element is called the pivot; let P denote its value.
STEP 1.4: If P is not 1, multiply the elements of row R by 1/P (elementary row operation E2); otherwise continue.
STEP 1.5: Search all rows following row R for one having a nonzero element in column C. If no such row exists, go to Step 1.8; otherwise designate that row as row N, and the value of the nonzero element in row N and column C as V.
STEP 1.6: Add to the elements of row N the scalar -V times the corresponding elements of row R (elementary row operation E3).
STEP 1.7: Return to Step 1.5.
STEP 1.8: Increase R by 1. If this new value of R is larger than the number of rows in the matrix, stop; the transformation is complete. Otherwise, return to Step 1.2.

(See Problems 1.12 through 1.15.)

RANK

The rank (or row rank) of a matrix is the number of nonzero rows in the matrix after it has been transformed to row-echelon form via elementary row operations. (See Problems 1.16 and 1.17.)

Solved Problems

1.1 Find the dot product A · B for

    A = [4  6  -2]        B = [7  -8  -10]

    A · B = 4(7) + 6(-8) + (-2)(-10) = 28 - 48 + 20 = 0

1.2 Show that A + B = B + A for

    A = [1  2]        B = [ 6  8]
        [3  4]            [-7  0]

    A + B = [1+6  2+8] = [ 7  10]        B + A = [6+1   8+2] = [ 7  10]
            [3-7  4+0]   [-4   4]                [-7+3  0+4]   [-4   4]

Since the resulting matrices have the same order and all corresponding elements are equal, A + B = B + A.

1.3 Find 3A - 0.5B for the matrices of Problem 1.2.

    3A - 0.5B = 3[1  2] - 0.5[ 6  8] = [3   6] - [ 3    4] = [ 0     2]
                 [3  4]      [-7  0]   [9  12]   [-3.5  0]   [12.5  12]

1.4 Find AB and BA for the matrices of Problem 1.2.

    AB = [1(6)+2(-7)   1(8)+2(0)] = [ -8   8]
         [3(6)+4(-7)   3(8)+4(0)]   [-10  24]

    BA = [ 6(1)+8(3)    6(2)+8(4)] = [30   44]
         [-7(1)+0(3)   -7(2)+0(4)]   [-7  -14]

Note that, for these matrices, AB ≠ BA.
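The eight-step procedure above (Steps 1.1 through 1.8) translates almost line for line into code. Below is a minimal Python sketch — an illustration, not the book's own program — using exact fractions so that no roundoff occurs; the rank helper simply counts the nonzero rows of the result, as defined in the Rank section:

```python
from fractions import Fraction

def row_echelon(M):
    """Transform a copy of M to row-echelon form via Steps 1.1-1.8."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    R = 0                                           # Step 1.1: work row
    for C in range(cols):                           # Step 1.2: scan columns
        if R >= rows:                               # Step 1.8: no rows remain
            break
        pivot_row = next((i for i in range(R, rows) if A[i][C] != 0), None)
        if pivot_row is None:
            continue                                # no pivot in this column
        A[R], A[pivot_row] = A[pivot_row], A[R]     # Step 1.3 (E1): interchange
        P = A[R][C]
        if P != 1:
            A[R] = [x / P for x in A[R]]            # Step 1.4 (E2): scale to 1
        for N in range(R + 1, rows):                # Steps 1.5-1.7 (E3)
            V = A[N][C]
            if V != 0:
                A[N] = [a - V * r for a, r in zip(A[N], A[R])]
        R += 1                                      # Step 1.8
    return A

def rank(M):
    # Rank = number of nonzero rows in the row-echelon form.
    return sum(any(x != 0 for x in row) for row in row_echelon(M))

print(rank([[0, 1, 2], [2, 4, 6], [2, 5, 8]]))  # → 2
```

Exact `Fraction` arithmetic sidesteps the roundoff issues that Chapter 2's pivoting strategies address for floating-point computation.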
1.5 Find AB and BA for

    A = [1  2  3]        B = [7   8]
        [4  5  6]            [0  -9]

Since A has three columns while B has only two rows, the matrix product AB is not defined. But

    BA = [7   8][1  2  3] = [7(1)+8(4)      7(2)+8(5)      7(3)+8(6)    ] = [ 39   54   69]
         [0  -9][4  5  6]   [0(1)+(-9)(4)   0(2)+(-9)(5)   0(3)+(-9)(6) ]   [-36  -45  -54]

1.6 Find A^T B^T for the matrices of Problem 1.5.

    A^T B^T = [1  4][7   0] = [1(7)+4(8)   1(0)+4(-9)] = [39  -36]
              [2  5][8  -9]   [2(7)+5(8)   2(0)+5(-9)]   [54  -45]
              [3  6]          [3(7)+6(8)   3(0)+6(-9)]   [69  -54]

which is the transpose of the product BA found in Problem 1.5.

1.7 Find AB and AC if

    A = [1  1]        B = [1  2]        C = [2  3]
        [2  2]            [3  4]            [2  3]

    AB = [1(1)+1(3)   1(2)+1(4)] = [4   6]
         [2(1)+2(3)   2(2)+2(4)]   [8  12]

    AC = [1(2)+1(2)   1(3)+1(3)] = [4   6]
         [2(2)+2(2)   2(3)+2(3)]   [8  12]

Note that, for these matrices, AB = AC and yet B ≠ C. This shows that the cancellation law is not valid for matrix multiplication.

1.8 A matrix is partitioned if it is divided into smaller matrices by horizontal or vertical lines drawn between entire rows and columns. Determine three partitionings of the matrix

    A = [1  2   3   4]
        [0  0   5   6]
        [7  8  -1  -2]

There are 2^5 - 1 = 31 different ways in which A can be partitioned with at least one partitioning line. By placing a line between each two rows and each two columns, we divide A into twelve 1 x 1 matrices, obtaining

    [1 | 2 |  3 |  4]
    [--+---+----+---]
    [0 | 0 |  5 |  6]
    [--+---+----+---]
    [7 | 8 | -1 | -2]

By placing one line between the first and second rows and another line between the second and third columns, we construct the partitioning

    [1  2 |  3   4]
    [-----+-------]
    [0  0 |  5   6]
    [7  8 | -1  -2]

A third partitioning can be constructed by placing a single line between the first and second rows, giving

    A = [B]        where        B = [1  2  3  4]        and        C = [0  0   5   6]
        [C]                                                            [7  8  -1  -2]

1.9 A partitioned matrix can be viewed as a matrix whose elements are themselves matrices. The arithmetic operations defined above for matrices having scalar elements apply as well to partitioned matrices. Determine AB and A - B if

    A = [C  D]        and        B = [F  G]
        [0  E]                       [0  H]

where all the blocks are conformable. Treating the blocks as elements,

    AB = [CF  CG + DH]        A - B = [C - F  D - G]
         [0      EH   ]                [  0    E - H]

1.10 Partitioning can be used to check matrix multiplication. Suppose the product AB is to be found and checked.
Then A and B are replaced by two larger partitioned matrices, such that their product is

    [A][B  C] = [AB  AC]
    [R]         [RB  RC]

where R is a new row consisting of the column sums of A, and C is a new column consisting of the row sums of B. The resulting matrix has the original product AB in the upper left partition. If no errors have been made, AC consists of the row sums of AB; RB consists of the column sums of AB; and RC is the sum of the elements of AC as well as the sum of the elements of RB. Use this procedure to obtain the product

    [1  2][5  6]
    [3  4][7  8]

We form the partitioned matrices and find their product:

    [1  2][5  6 | 11]   [19  22 |  41]
    [3  4][7  8 | 15] = [43  50 |  93]
    [----]              [-------------]
    [4  6]              [62  72 | 134]

The product AB is the upper left part of the resulting matrix. Since the row sums of this product, the column sums, and their sums are correctly given in the matrix, the multiplication checks.

1.11 Determine which of the following matrices are in row-echelon form:

    A = [1  2]    B = [0  1]    C = [0  2  4]    D = [1  3]    E = [1  2   3]
        [0  1]        [1  2]        [0  0  1]        [0  0]        [4  9  13]

Only A and D are in row-echelon form. B is not, because the first (leftmost) nonzero element in the second row is further left than the first nonzero element in the top row, violating condition R4. Condition R2 is violated in the first row of C. Matrix E violates condition R3, because the first nonzero element in the lower row appears in the same column as the first nonzero element of the upper row.

1.12 Use elementary row operations to transform matrices B, C, and E of Problem 1.11 into row-echelon form.

We follow Steps 1.1 through 1.8, noting for simplicity only those steps that result in changes to the matrix. For matrix B, with R = 1 (Step 1.1) and C = 1 (Step 1.2), we apply Step 1.3 and interchange rows 1 and 2, obtaining

    [1  2]
    [0  1]

which is in row-echelon form. For matrix C, with R = 1 (Step 1.1), C = 2 (Step 1.2),
and P = 2 (Step 1.3), we apply Step 1.4 and multiply all elements in the first row by 1/2, obtaining

    [0  1  2]
    [0  0  1]

which is in row-echelon form. For matrix E, with R = 1 (Step 1.1), C = 1 (Step 1.2), and N = 2 and V = 4 (Step 1.5), we apply Step 1.6 by adding, to each element in row 2, -4 times the corresponding element in row 1; the result is

    [1  2  3]
    [0  1  1]

which is in row-echelon form.

1.13 Transform the following matrix into row-echelon form:

    [1   2  -1   6]
    [3   8   9  10]
    [2  -1   2  -2]
1-6 7 1023 3-4/3 1/8] Step 16: Add —1 tnnes the frst © 5/3 213. $13 ~3/3| row to the third tow. O23 83 20/8 03 123 WB =H 2] Step Lots Multiply the sexo com or -us 1-1) by ais. 8-2/3 83 20/3 20/3 12/3 1/3 43 WO] Step Lo Add 20/3 times the [aie Bs PE] ame io ‘la e @ 0 © Tetermine the rank of the matriy of Problem 1.14 [Because the row-echelon form of this matrix has thtee nonsero rows, the rank of the original m wa 117 Determine the rank of the matrix of Hroblem 1.13. Because the rom-echelon form.of this matrix bas two nonzero rows, the rank of the ortginal ma ie, 1,18 Show that row-echelon form is not unique if a matrix has rank 2 oF greater Such a matrix has at least two monzero rows after it is transformed ints rom-eshelon form, By aiding the second row to the First row, a different row-echelon-form matri is produced. As an example, if we add the second row to the first row of the Fow-echelon-lorm matrix obtakned in Problem 1,14, we obtain oo dy -19 ooo 1 (; va M0 5] ‘which is also in row-echelon form, 10 BASIC OPERATIONS [CHAP 1 Supplementary Problems a3 30 22 1 2-31) nefaa] Ref] rep 2 124 63 3. F ALAS Find a) A+B: (hy 3A: (c) 2A ~ 3B: (=: ant (0) A Ix Peolerns 1.19 threwgh 1.72 let ral ed 1.20 Designate the columns of A as A, and A, and the columns of C as C,.C,, and Cy, from left 10 right Then calculate fa) Ay Ay: (4) C.-C; and fe) CC, A.2L Find fo) AB; (6) BA; (6) (AB)'; (4) BIAT, and (e} AB" 1.22 Find (#) CD and (>) BC, LES Find A(A +B) 1.24 Find (e) CE and (b) EC. 1.25 Find (a) CB and (6) FC. 1.26 Find (a) BF and (0) FE. 1.27 Transform A to rowechelon frm, 1.28 Transform B 40 row-echelon form. 29 Trarmfurm © wo suweecticlon forms 1.30 Transform D to tow-echelon form, 1.31 Transform E to row-echelon form, 1.32 Find the rank of (a) A: (8) Bs (e) C: (4) Dz and (e) B. 1.33 Find two matrices, neither of which is & zero matrix, whose product is a zero matrix, ‘The peice schedule for a New York to Miami light ic given by the vector P = (240. 
10, 89], where the elements denote the costs of first class, business clas, and tourist class tickets, respectively. The number ‘of tickets of each class purchased for a particular fight is given by the weetor N=[8, 28, 113]. What is the nificance of P- NP 1.35 The inventory of computers at each outlet of a three-store chain is given by she matrix 92 N-|is 4 7 0. ‘where the rows pertain to the different stores and the columns denote the number of brand. X and brand YY computers. respectively, in cach store, The wholesale costs ofthese computers are given by the vector D = |700, 1200|". Calculate ND and state its significance Chapter 2 Simultaneous Linear Equations CONSISTENCY A system of simultancous linear equations is a sot of equations of the form But, tats tary te tat, by Bydy + Oyyy + yt, +17" Oayd, OE Ogg, Ogyly 1 Back = Diy aay ‘The coefficients ay (i= 1,2,....m; j=1.2.....n) and the quantities b, (i= 1,2,...,m) are Known constants. The a (/=1,2,...»M) ate the unknowns whose values are sought, ‘A solution for system (2.1) is a set of values, one for each unknown, that, when substituted in the system, refders all its equations valid (See Prahlem 2.1) A system of simultaneaus linear equations may passess no solutians, exactly one solution, of more than one solution. Example 2.1. The system atyel xtged thas no solutions, because there are no values for x, and x, that sum v0 1 and 0 simultaneously, The system a +2, has the single solution x, =O, x, = 1: and hs ay ky Bot every ale Of A set of simultaneous equations is consistent if it possesses at least one solution; otherwise it i inconsisient MATRIX NOTATION System (2.1) is algebraically equivalent to the matnx equation AX-B (22) where As ‘The matrix A is called the coefficient matrix, because it contains the coefficients of the unknowns. Thee jth tow of A (i— 1,2...) 
corresponds to the éth equation in system (2.1), while the th column of A (j= 1,2,....n) contains all the coefficients of x), one eacfficient for each equation. The augmented matrix corresponding to system (2.1) is the partitioned matrix [A|B]. (See Problems 2.2 through 2.4.) 12 SIMULTANEOUS LINEAR EQUATIONS [cHar. 2 THEORY OF SOLUTIONS ‘Theorem 2.1: ‘The system AX = B is consistent if and only if the rank of A equals the rank of |A.| B] ‘Theorem 2.2: Denote the rank of Aas k, and the sumber of unknowns an, If the system AX = Bis consistent, then the solution contains m= & arbitrary scalar, (See Problems 2.5 t0 2.7.) System (2.1) is said wo be homogencous if B=; that is, if by = by =""+=b, <0. HBO [ie., if at leaet one b, (= 1,2,..., mh ie nox zero), the system is nonhomogeneous. Homogencous ‘systems are consistent and admit the solution x, = r,--+= x, =0, which is called the erival solution a nonirivial solution is one that contains at least one nonzero value. ‘Theorem 2.3: Denote the rank of A as &, and the number of unknowns as n. The homogeneous system AX = 0 has a nontrivial solution if and only if # #4, (See Problem 2.7.) SIMPLIFYING OPERATIONS Three operations that solution sct are: Iter the form of a system of simultaneaus linear equations but do nat alter (01) Interchanging the sequence of two equations. (02): Multiplying an equation by a nonzero scalar. {03}: Adding to one equation a scalar times another equation Applying operations 1, 02, and 03 to system (7.1) is equivalent to applying the elementary ‘ow operations El, E2, and E3(see Chapter 1) to the augmented matrix associated with that system. Gaussian elimination is an algorithm for applying these operations systematically, 10 obrain a set of equations that is easy to analyze for consistency and easy to solve if it is consiste GAUSSIAN ELIMINATION ALGORITHM STEP 2.1: Form the augmented matrix [A] B] associated with the given system af equations. 
STEP 2.2 Use elementary row operations to transform [A | B] into row.echelon form (sce Chapter 1), Denote the result as [CB]. STEP 23: Determine the ranks of C and [C|D]. If these ranks arc equal, comtinue; the system is consistent (by Theorem 2.1), If nat, stop: the original system has no solution STEP 24: Consider the system of equations corresponding to [C| D], discarding any identically zero equations. (If the rank of C is k and the number of unknowns is, there will be in k such equations.) Solve each equation for its first (lowest indexed) variable having. ‘a nonzero coefliciont STEP 2,5: Any variable not appearing on the left side of any equation is arbitrary. All other variables can be determined uniquely in terms of the arbitrary variables by back substitution (See Problems 2.5 through 2.8.) Other solution procedures are discussed in Chapters 3. 4. 5. and 21 PIVOTING STRATEGIES Errors due to rounding can become a problem in Gaussian elimination. To minimize the effect of roundoff errors, a variety of pivoting strategies have been proposed, each modifying Step 1.4 of the algorithm given in Chapter 1. Pivoting strategies are merely criteria for choosing the pivot element. CHAP. 2] SIMULTANEOUS LINEAR EQUATIONS. 3 ‘voting involves eearching the work column of the augmented matrix for the largest element in absolute value appearing in the current work row or a succeeding row. That element becomes the new pivot. To use partial pivoting, replace Step 1.3 of the algorithm for transforming a matrix to roweechelon form with the following: STEP 1.3: Beginning with row R and continuing through suovessive rows, locate the largest clement in absolute value appearing in work columin C. Denote the fist caw in which this clement appears as row f. If fis different from , interchange rows f and R (elementary row operation El). Row R will now have, in column C, the largest nonzero element in absolute value appearing in column C of row R or eny 10 succeeding it. 
This clement in row A and column Cs called the pivor; let P donot value. (See Problems 2.9 and 2.10.) ‘Two other pivating strategies are described in Prablems 2.11 and 2.12: they are successively more powerful but require additional computations. Since the goal is to avoid significant roundoff error, it is nat necessary to find the best pivot element at each stage, but rather to avoid bad ones, ‘Thus, partial pivoting is the strategy most often implemented. Solved Problems 2A Determine whether 2,2, 2, — 1 and 2) ~—11 iy solution set for the sysien 2x, + x, 5 Bay +Gaytay—t Sx, + 72, 44,08 Proposed values for the wnknowne into the 2) +10) -s 32) +61) eH) = 1 5) eT) eT) =6 “The last equation does not yiekd B as required: ence the proposed values do nat constitute 2 solvtion Ft cide of wach equation giver 2.2 Wiite the system of equations given in Problem 2.1 asa matrix equation, and then determine its associated augmented matrix, Esl El tl ‘The original opstem con be waitten ac AX = win-| ite augmented matrix 5 14 SIMULTANEOUS LINEAR EQUATIONS (cHar.2 2.3. Write the following system of equations in matrix forin, and then determine its augmented matrix: Qa #2agt aya ey tdey ey 7 y,— 6, #3¥y 8H = 7 “This system is equivalent to the matrix equation 321-4) pa 2 30 -al et) - yne small] Lo. ‘The associated augmented matrix is Bozi-air talpj-|203 0-24 1-63-85 7 ‘Observe that in both A and [A |B], the zero zero coefficient of z, in the second equation of the second row and third column corresponds te the ¢ Otiginal system, 24 Write the ser of simultaneous equations that corresponds to the augmented matrix 123 18-45 1U9 oO 1 -25 4 }- oo 0 0 ‘The corresponding set of equations is rt iay + ia— ay byt aod ‘The third equation reduces to the tautology O-= and és not writen. Mor do we write any variable having a zero coefficient 2S Solve the set af cquations given in Problem 2.1 by Gaussian elimination. 
‘The augmented matrix for this system was determined in Problem 2.2 to- be 2 1 ais (alpj=|3 6 1 S71 Using the reswits of Problem 1.14, we transform this matrix into the row-echelon form Lue reife | oo Of Ff Is follows from Problem 1.16 that the rank of [C|] is 3. Submatsix Cis also in row-echelon form and has rank 2. Since the rank of C does not equal the rank of [C| DJ, the original set of equations is Inconsistent, The problem ty dhe last equation wssuciated with [CD]. winch is Or, +02, #0r, = 1 land which cleaely bas no solution, ‘CHAP 2} SIMULTANEOUS LINEAR EQUATIONS. 18 2.6 Solve the set of equations given in Problem 2.3 by Gaussian elimination. ‘The augmented matrix for this system was determined in Problem 2.3 10 be Bo 2b -4in fp sec] ‘Using the results of Problem !.13, we transform this matrix into the row-echelon form VRS VE ad ee tclop-fo 1-251 at oo 0 o io [alr 11 follows from Problem 1.17 that the rank of [CD] is2. Submatris:C isalso in row-echelon form, and it also fas rank 2. Thus, the original set of equations is consstent Now, using the results of Problem 2.4, we write yt dx, t bay - in, ody te 13 the set of equations associated with (C| DJ, Solving the first equation for +, and the second for x). we Pa dao bye tn fines Since x, and x, do not appear on the lef side of any equation, they are arbitrary. The unknown x, is ‘coinpletely detennioed in termes of the aibitiary wokoowis. SuDsibuing itil Ure Fist wiuationy, we saleulate TE HT Tey oad Bet dae Sim faye dx, ‘The complete solution to the original set af equations is I= try +a vlrinnn swith x, and x, arbitrary, 2.7 Solve the following set of homogeneous equations by Gaussian elimination: Tx, +94, =0 2k t ga 0 54, +64, 42x50 By converting this system 10 stgmenied-matrix form and then transforming the marrix imo row-echelon form (Steps 11 theough 18}, we get a7 9i0 [Almp=]21 -1 $6 2io. Lowe -12s0) tcim=}o 1 o7io oo of 16 28 2 SIMULTANEOUS LINEAR EQUATIONS IcHAar. 
     The rank of the coefficient matrix A is thus 2, and because there are three unknowns in the original set of equations, the system has nontrivial solutions. The set of equations associated with the augmented matrix [C | 0] is

        x1 + (1/2)x2 - (1/2)x3 = 0
                  x2 + (9/7)x3 = 0

     Solving for the first variable in each equation with a nonzero coefficient, we obtain

        x1 = -(1/2)x2 + (1/2)x3        x2 = -(9/7)x3

     Therefore, x3 is arbitrary. Solving for x1 and x2 in terms of x3 by back substitution, we find

        x1 = (8/7)x3        x2 = -(9/7)x3

2.8  Solve the following set of equations:

         x1 + 2x2 -  x3 = 6
        3x1 + 8x2 + 9x3 = 10
        2x1 -  x2 + 2x3 = -2

     The augmented matrix associated with this system is

        [A | B] = [1 2 -1 | 6; 3 8 9 | 10; 2 -1 2 | -2]

     which, in Problem 1.13, was transformed into the row-echelon form

        [C | D] = [1 2 -1 | 6; 0 1 6 | -4; 0 0 1 | -1]

     Both C and [C | D] have rank three, so the system is consistent. Solving each equation of the associated set for the first variable with a nonzero coefficient, we obtain the system

        x1 = 6 - 2x2 + x3        x2 = -4 - 6x3        x3 = -1

     which can be solved easily by back substitution beginning with the last equation. The solution to this system and to the original set of equations is x1 = 1, x2 = 2, and x3 = -1.

2.9  Solve the following set of equations by (a) standard Gaussian elimination and (b) Gaussian elimination with partial pivoting, rounding all computations to four significant figures:

        0.00001x1 + x2 = 1.00001
              x1 + x2 = 2

     (a) We write the system in matrix form, rounding 1.00001 to 1.000. Then we transform the augmented matrix into row-echelon form using the algorithm of Chapter 1, in the following steps:

        [0.00001 1 | 1.000]  ->  [1 100000 | 100000]  ->  [1 100000 | 100000]  ->  [1 100000 | 100000]
        [   1    1 |   2  ]      [1    1   |    2   ]      [0 -100000| -100000]      [0    1   |    1   ]

     (Note that we round to -100000 twice in the next-to-last step.) The resulting augmented matrix shows that the system is consistent. The equations associated with this matrix are

        x1 + 100000x2 = 100000        x2 = 1

     which have the solution x1 = 0 and x2 = 1.
However, substitution into the original equations shows that this is not the solution to the original system.

     (b) Transforming the augmented matrix into row-echelon form using partial pivoting yields

        [0.00001 1 | 1.000]  ->  [   1    1 |   2  ]   Rows 1 and 2 are interchanged
        [   1    1 |   2  ]      [0.00001 1 | 1.000]   because row 2 has the largest element
                                                       in column 1, the current work column.

                             ->  [1 1 |   2  ]         Rounding to four significant
                                 [0 1 | 1.000]         figures.

     The system of equations associated with the last augmented matrix is consistent and is

        x1 + x2 = 2        x2 = 1

     Its solution is x1 = 1, x2 = 1, which is also the solution to the original set of equations.

     All computers round to a number of significant figures k that depends on the machine being used. An equation whose leading coefficient is far smaller in magnitude than the other coefficients in its column will generate results like those of part (a) unless some pivoting strategy is used. (We had k = 4 in part a.) As a rule, dividing by very small numbers can lead to significant roundoff error and should be avoided when possible.

2.10 Solve the following set of equations using partial pivoting:

         x1 + 2x2 +  3x3 = -15
        2x1 +  x2 -  4x3 = 36
        5x1 + 8x2 + 17x3 = -96

     The augmented matrix for this system is

        [1 2 3 | -15; 2 1 -4 | 36; 5 8 17 | -96]

     In transforming this matrix, we need to use Step 1.3' immediately, with R = 1 and C = 1. The largest element in absolute value in column 1 is 5, appearing in row 3. We interchange the first and third rows and then continue the reduction:

        [5 8 17 | -96]      [1 1.6  3.4 | -19.2]      [1  1.6   3.4 | -19.2]
        [2 1 -4 |  36]  ->  [2  1   -4  |  36  ]  ->  [0 -2.2 -10.8 |  74.4]
        [1 2  3 | -15]      [1  2    3  | -15  ]      [0  0.4  -0.4 |   4.2]

     We next apply Step 1.3' with R = 2 and C = 2. Considering only rows 2 and 3, we find that the largest element in absolute value in column 2 is -2.2, so I = 2 and no row interchange is required. Continuing
with the Gaussian elimination, we calculate

        [1 1.6  3.4 | -19.2  ]      [1 1.6 3.4     | -19.2   ]      [1 1.6 3.4     | -19.2   ]
        [0  1  4.90909 | -33.8182]  ->  [0  1  4.90909 | -33.8182]  ->  [0  1  4.90909 | -33.8182]
        [0 0.4 -0.4 |  4.2   ]      [0  0  -2.36364 | 17.7273 ]      [0  0   1      | -7.5    ]

     The system of equations associated with the last augmented matrix is consistent and is

        x1 + 1.6x2 + 3.4x3 = -19.2
             x2 + 4.90909x3 = -33.8182
                         x3 = -7.5

     Using back substitution (beginning with x3), we obtain, as the solution to this set of equations as well as the original system, x1 = 1.5, x2 = 3, and x3 = -7.5.

2.11 To use scaled pivoting, we first define, as the scale factor for each row of the coefficient matrix A, the largest element in absolute value appearing in that row. The scale factors are computed once and only once and, for easy reference, are adjoined to the augmented matrix [A | B] as another partitioned column. Then Step 1.3 of Chapter 1 is replaced with the following: Divide the absolute value of each nonzero element that is in the work column and on or below the work row by the scale factor for its row. The element yielding the largest quotient is the new pivot; denote its row as row I. If row I is different from the current work row (row R), then interchange rows I and R. Row interchanges are the only elementary row operations that are performed on the scale factors; all other steps in the Gaussian elimination are limited to A and B.

     Solve Problem 2.10 using scaled pivoting.

     The scale factors for the system of Problem 2.10 are

        s1 = max(1, 2, 3) = 3        s2 = max(2, 1, |-4|) = 4        s3 = max(5, 8, 17) = 17

     We add a column consisting of these scale factors to the augmented matrix for the system, and then transform it to row-echelon form as follows:

        [1 2  3 | -15 | 3 ]      The quotients for work column 1 are 1/3 = 0.333,
        [2 1 -4 |  36 | 4 ]      2/4 = 0.500, and 5/17 = 0.294. The largest quotient
        [5 8 17 | -96 | 17]      is 0.500, so the pivot is 2, which appears in row 2.
                                 The first and second rows (and their scale factors)
                                 are interchanged.

        [2 1 -4 |  36 | 4 ]      [1 0.5 -2 |  18  | 4 ]
        [1 2  3 | -15 | 3 ]  ->  [0 1.5  5 | -33  | 3 ]
        [5 8 17 | -96 | 17]      [0 5.5 27 | -186 | 17]

     The new work row is 2 and the work column is 2. The quotients are 1.5/3 = 0.500 and 5.5/17 = 0.324. The largest quotient is 0.500, so the pivot is 1.5, which is already in the work row, and no interchange is required. Completing the elimination:

        [1 0.5 -2  |  18 | 4 ]      [1 0.5 -2   |  18 | 4 ]      [1 0.5 -2   |  18  | 4 ]
        [0  1  10/3| -22 | 3 ]  ->  [0  1  10/3 | -22 | 3 ]  ->  [0  1  10/3 | -22  | 3 ]
        [0 5.5 27  |-186 | 17]      [0  0  26/3 | -65 | 17]      [0  0   1   | -7.5 | 17]
     Writing the set of equations associated with this augmented matrix (ignoring the column of scale factors) and solving them by back substitution, we obtain the solution x1 = 1.5, x2 = 3, x3 = -7.5.

2.12 To use complete pivoting, we replace Step 1.3 of Chapter 1 with the following steps, which involve both row and column interchanges: Let the current work row be R, and the current work column C. Scan all the elements of submatrix A of the augmented matrix that are on or below row R and on or to the right of column C, to determine which is largest in absolute value. Denote the row and column in which this element appears as row I and column J. If I is not R, interchange rows I and R; if J is not C, interchange columns J and C. Because column interchanges change the order of the unknowns, a bookkeeping mechanism for associating columns with unknowns must be implemented. To do so, add a new partitioned row, row 0, above the usual augmented matrix. Its elements, which are initially in the order 1, 2, ..., n to denote the subscripts on the unknowns, will designate which unknown is associated with each column.

     Solve the system of Problem 2.10 using complete pivoting.

     We add the bookkeeping row 0 to the augmented matrix of Problem 2.10. Then, beginning with row 1, we transform the remaining rows into row-echelon form. R = 1 and C = 1. The largest element in absolute value in the lower submatrix is 17, in row 3 and column 3. We first interchange rows 1 and 3, and then columns 1 and 3:

        [ 3  2  1 |    ]      [1 0.470588 0.294118 | -5.64706]
        [17  8  5 | -96]  ->  [0 2.88235  3.17647  | 13.4118 ]
        [-4  1  2 |  36]      [0 0.588235 0.117647 | 1.94118 ]
        [ 3  2  1 | -15]

     The work row and work column are now R = 2 and C = 2. The largest element in absolute value of the four under consideration is 3.17647, for which I = 2 and J = 3. Since I = R, no row interchange is needed, but since J is not C, columns 2 and 3 are interchanged, making the bookkeeping row [3 1 2]:

        [1 0.294118 0.470588 | -5.64706]      [1 0.294118 0.470588 | -5.64706]
        [0 3.17647  2.88235  | 13.4118 ]  ->  [0    1     0.907407 | 4.22222 ]
        [0 0.117647 0.588235 | 1.94118 ]      [0 0.117647 0.588235 | 1.94118 ]

                                          ->  [0 0 0.481481 | 1.44444]  ->  [0 0 1 | 3.00000]

     The first column of the resulting row-echelon matrix corresponds to x3,
the second column to x1, and the third column to x2, so the associated set of equations is

        x3 + 0.294118x1 + 0.470588x2 = -5.64706
                  x1 + 0.907407x2 = 4.22222
                               x2 = 3.00000

     Solving each equation for the first variable with a nonzero coefficient, we obtain

        x3 = -5.64706 - 0.294118x1 - 0.470588x2
        x1 = 4.22222 - 0.907407x2
        x2 = 3.00000

     which, when solved by back substitution, yields x2 = 3.00000, x1 = 1.50000, and x3 = -7.50000, the solution found previously for Problem 2.10.

2.13 Gauss-Jordan elimination adds a step between Steps 2.3 and 2.4 of the algorithm for Gaussian elimination. Once the augmented matrix has been reduced to row-echelon form, it is then reduced still further. Beginning with the last pivot element and continuing sequentially backward to the first, each pivot element is used to transform all other elements in its column to zero. Use Gauss-Jordan elimination to solve Problem 2.8.

     The first two steps of the Gaussian elimination algorithm are used to reduce the augmented matrix to row-echelon form as in Problems 1.13 and 2.8:

        [1 2 -1 | 6; 0 1 6 | -4; 0 0 1 | -1]

     Then the matrix is reduced further, as follows:

        [1 2 -1 |  6]   Add -6 times the third row to the
        [0 1  0 |  2]   second row.
        [0 0  1 | -1]

        [1 2  0 |  5]   Add the third row to the first row.
        [0 1  0 |  2]
        [0 0  1 | -1]

        [1 0  0 |  1]   Add -2 times the second row to the
        [0 1  0 |  2]   first row.
        [0 0  1 | -1]

     The set of equations associated with this augmented matrix is x1 = 1, x2 = 2, and x3 = -1, which is the solution set for the original system (no back substitution is required).

2.14 Use Gauss-Jordan elimination to solve the system of Problem 2.7.

     The first two steps of the Gaussian elimination algorithm provide the augmented row-echelon-form matrix

        [C | 0] = [1 1/2 -1/2 | 0; 0 1 9/7 | 0; 0 0 0 | 0]

     as in Problem 2.7. This matrix is reduced further by using the pivot in the (2,2) position to place a zero in the (1,2) position:

        [1 0 -8/7 | 0]   Add -1/2 times the second row to
        [0 1  9/7 | 0]   the first row.
        [0 0   0  | 0]
     The set of equations associated with this augmented matrix is

        x1 - (8/7)x3 = 0
        x2 + (9/7)x3 = 0

     Solving for the first variable in each equation with a nonzero coefficient, we obtain x1 = (8/7)x3 and x2 = -(9/7)x3, which is the solution (no back substitution is required) with x3 arbitrary.

Supplementary Problems

2.15 Determine which of (a) x1 = 1, x2 = 1, x3 = 1; (b) x1 = 8, x2 = -1, x3 = 0; and (c) x1 = 2, x2 = 1, x3 = 0 are solutions to the system

        x1 + 2x2 + x3 = 4
        3x1 - x2 + 2x3 = 5
        2x1 + 3x2 - x3 = 7

2.16 Write the augmented matrix for the system given in Problem 2.15.

2.17 Write the augmented matrix for the system

        3x1 - 4x2 + 7x3 + 6x4 = 8
         x1 - 4x2 - 4x3 - 5x4 = 2
        2x1 +  x2 +  x3 - 2x4 = 10

2.18 Solve the set of equations associated with each of the following augmented matrices:

        (a) [1 -2 | 5; 3 2 | 4]        (b) [1 2 | 3; 0 1 | 2; 0 0 | 1]

2.19 Solve the system given in Problem 2.15.

2.20 Solve the system given in Problem 2.17.

In Problems 2.21 through 2.26, solve for the unknowns in the given system.

2.21  x1 + x2 + x3 = 0;  7x1 + 8x2 + 9x3 = -4;  2x1 + x2 = 4

2.22  2x1 + 3x2 = 7;  4x1 - 2x2 = 6

2.23  x1 + x2 + x3 = 3;  2x1 - x2 + 2x3 = -3;  x1 + 2x2 + 3x3 = 4

2.24  2x1 + 3x2 = 8;  5x1 + 4x2 = 13

2.25  (1/2)x1 + x2 + x3 + x4 = 0;  x1 + (1/2)x2 + x3 + x4 = 1;  x1 + x2 + (1/2)x3 + x4 = 2;  x1 + x2 + x3 + (1/2)x4 = 3

2.26  1.0001x1 + 2.0000x2 + 3.0000x3 + 4.0000x4 = 5
      1.0000x1 + 2.0001x2 + 3.0000x3 + 4.0000x4 = 6
      1.0000x1 + 2.0000x2 + 3.0001x3 + 4.0000x4 = 7
      1.0000x1 + 2.0000x2 + 3.0000x3 + 4.0001x4 = 8

      What would be the result of solving this system by working with only four significant figures?

2.27  0.00001x1 + x2 + 0.00001x3 = 0.00002
            x1 + 2x2 +       x3 = 1
      0.00001x1 + x2 - 0.00001x3 = 0.00001

2.28 Use Gaussian elimination to determine values of k for which solutions exist for the following systems, and then find the solutions:

        (a) x1 + x2 = 4; 2x1 + 2x2 = k        (b) x1 + 2x2 = 3; 2x1 + 4x2 = 6; 3x1 + 6x2 = k

2.29 A manufacturer produces three types of desks: custom, deluxe, and regular.
Each custom desk c requires 12 worker-hours to cut and assemble, and 5 worker-hours to finish. Each deluxe desk d requires 10 hours to cut and assemble, and 3 hours to finish; each regular desk r requires 6 hours to cut and assemble, and 1 hour to finish. On a daily basis, the manufacturer has available 440 worker-hours for cutting and assembling, and 120 worker-hours for finishing. Show that the problem of determining how many desks of each type to produce so that all workpower is used is equivalent to solving two equations in the three unknowns c, d, and r. How many solutions are there?

2.30 The end-of-the-year employee bonus b is 3 percent of taxable income i after city and state taxes are deducted. The city tax c is 1 percent of taxable income, while the state tax s is 4 percent of taxable income with credit allowed for the city tax as a pretax deduction. Show that the problem of determining the bonus is equivalent to solving three equations in the four unknowns b, i, c, and s.

2.31 Prove that if Y and Z are two solutions of the linear system AX = B, then Y - Z is a solution of the homogeneous system AX = 0.

2.32 Prove that if Y and Z are two solutions of the linear system AX = B, then Y = Z + H, where H is a solution of the homogeneous system AX = 0.

Chapter 3

Square Matrices

DIAGONALS

A matrix is square if it has the same number of rows and columns. Its general form is then

    [a11 a12 ... a1n]
    [a21 a22 ... a2n]
    [ .   .        . ]
    [an1 an2 ... ann]

The elements a11, a22, ..., ann lie on and form the diagonal, also called the main diagonal or principal diagonal. The elements a12, a23, ..., a(n-1,n) immediately above the diagonal elements form the superdiagonal, and the elements a21, a32, ..., a(n,n-1) immediately below the diagonal elements constitute the subdiagonal.

A diagonal matrix is a square matrix in which all elements not on the main diagonal are equal to zero; the diagonal elements may have any values. An identity matrix I is a diagonal matrix in which all of the diagonal elements are equal to unity.
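These definitions translate directly into code. The following is a small plain-Python sketch (the helper names and the sample matrix are ours, not the text's) that extracts the three diagonals of a square matrix and builds an identity matrix:

```python
def diagonal(a):
    """Main diagonal a11, a22, ..., ann of a square matrix."""
    return [a[i][i] for i in range(len(a))]

def superdiagonal(a):
    """Elements a12, a23, ..., a(n-1,n) just above the diagonal."""
    return [a[i][i + 1] for i in range(len(a) - 1)]

def subdiagonal(a):
    """Elements a21, a32, ..., a(n,n-1) just below the diagonal."""
    return [a[i + 1][i] for i in range(len(a) - 1)]

def identity(n):
    """n x n diagonal matrix with every diagonal element equal to 1."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
```

For this A, diagonal(A) is [1, 5, 9], superdiagonal(A) is [2, 6], and subdiagonal(A) is [4, 8].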
The 2 x 2 and 4 x 4 identity matrices are

    [1 0]        [1 0 0 0]
    [0 1]  and   [0 1 0 0]
                 [0 0 1 0]
                 [0 0 0 1]

Identity matrices play the same role in matrix arithmetic as the number 1 plays in real-number arithmetic. In particular, for any matrix A, AI = A and IA = A, provided, in each case, that I is of the appropriate order for the indicated multiplication.

ELEMENTARY MATRICES

An elementary matrix E is a square matrix that generates an elementary row operation on a given matrix A under the multiplication EA. The order of E is dictated by the order of A, such that the multiplication is defined. There are three kinds of elementary matrices, corresponding to the three different elementary row operations (see Chapter 1). A specific elementary matrix is obtained by applying the desired elementary row operation to an identity matrix of the appropriate order. (See Problems 3.1 and 3.2.)

LU DECOMPOSITION

A square matrix is upper triangular if all elements below the main diagonal are zero; it is lower triangular if all elements above the main diagonal are zero. The elements on or above the diagonal in an upper triangular matrix (and on or below the diagonal in a lower triangular matrix) may have any values, including zero.

In most cases, a square matrix A can be written as the product of a lower triangular matrix L and an upper triangular matrix U, where L and U have the same order as A. This factorization, when it exists, is unique if the elements on the main diagonal of U are all 1s. That is,

    A = LU    (3.1)

where

    L = [l11  0  ...  0 ]        U = [1 u12 ... u1n]
        [l21 l22 ...  0 ]  and       [0  1  ... u2n]
        [ .   .       . ]            [.  .       . ]
        [ln1 ln2 ... lnn]            [0  0  ...  1 ]

Example 3.1

    [ 2 -1  1]   [ 2 0 0][1 -1/2 1/2]
    [-2  2  0] = [-2 1 0][0   1   1 ]
    [ 4 -1  6]   [ 4 1 3][0   0   1 ]

Crout's reduction is an algorithm for calculating the elements of L and U. In this procedure, the first column of L is determined first, then the first row of U, the second column of L, the second row of U, the third column of L, the third row of U, and so on until all elements have been found.
The order of L and U is the same as that of A, which we here assume is n x n.

STEP 3.1: Initialization: If a11 = 0, stop; factorization is not possible. Otherwise, the first column of L is the first column of A; remaining elements of the first row of L are zero. The first row of U is the first row of A divided by l11 = a11; remaining elements of the first column of U are zero. Set a counter at N = 2.

STEP 3.2: For i = N, N+1, ..., n, set L'i equal to that portion of the ith row of L that has already been determined. That is, L'i consists of the first N - 1 elements of the ith row of L.

STEP 3.3: For j = N, N+1, ..., n, set U'j equal to that portion of the jth column of U that has already been determined. That is, U'j consists of the first N - 1 elements of the jth column of U.

STEP 3.4: Compute the Nth column of L. For each element of that column on or below the main diagonal, compute

    l_iN = a_iN - L'i . U'N    (i = N, N+1, ..., n)

If any l_NN = 0 when N is not n, stop; the factorization is not possible. Otherwise, set the remaining elements of the Nth row of L equal to zero.

STEP 3.5: Set u_NN = 1. If N = n, stop; the factorization is complete. Otherwise, set the remaining elements of the Nth column of U equal to zero and compute the Nth row of U. For each element of that row to the right of the main diagonal, compute

    u_Nj = (a_Nj - L'N . U'j) / l_NN    (j = N+1, ..., n)

STEP 3.6: Increase N by 1, and return to Step 3.2.

(See Problems 3.4 through 3.6.)

Partial pivoting (see Chapter 2) is recommended when exact arithmetic is not used and roundoff error is possible. Prior to Steps 3.1 and 3.2 (for N = 2, 3, ..., n), scan the Nth column of A for the largest element in absolute value appearing in that column and on or below the main diagonal.
If this element is in row p, with p not equal to N, then interchange the pth and Nth rows of A, as well as the pth and Nth rows of L up to the Nth column (which represents the parts of those two rows of L that have already been determined).

SIMULTANEOUS LINEAR EQUATIONS

LU decompositions are useful for solving systems of simultaneous linear equations when the number of unknowns is equal to the number of equations. The matrix form of such a system is AX = B, which, in light of Eq. (3.1), may be rewritten as L(UX) = B. To obtain X, we first decompose A and then solve the system associated with

    LY = B    (3.2)

for Y. Then, once Y is known, we solve the system associated with

    UX = Y    (3.3)

for X. Both (3.2) and (3.3) are easy to solve, the first by forward substitution and the second by backward substitution. (See Problem 3.7.)

When A is a square matrix, LU factorization and Gaussian elimination are equally efficient for solving a single set of equations. LU factorization is superior when the system AX = B must be solved repeatedly with different right sides, because the same LU factorization of A is used for all B. (See Problem 3.8.) A drawback with LU factorization is that the factorization does not exist when a pivot element is zero. However, this rarely occurs in practice, and the problem can usually be eliminated by reordering the equations. Gaussian elimination is applicable to all systems, and for that reason is often the preferred algorithm.

POWERS OF A MATRIX

If n is a positive integer and A is a square matrix, then

    A^n = A A ... A    (n times)

In particular, A^2 = AA and A^3 = AAA. By definition, A^0 = I. (See Problems 3.10 and 3.11.)

Solved Problems

3.1  Find elementary matrices that, when multiplied on the right by any 3 x 3 matrix A, will (a) interchange the first and third rows of A; (b) multiply the second row of A by 1/2; and (c) add 4 times the second row of A to the third row of A.
‘Since an elementary matris is constructed by performing the desired elementary ro ope 1n identity matrix of the appropriate size, m this cuse the 3% 3 identity, we have oor 1 0 0 1 0 0 @ E-/0 10) @ E-fo 12 0] @E-|o 10 100, ooo 4 Od 1, 3.2 Find elementary matrices that when multiplied o the right by any 4% 4 matrix A will (a) interchange the second und fourth rows of A; (6) multiply the third row of A by 6; and (c) add & times the first row of A to the fourth row of A. e 0 , i tao 10 00 1 oon a1 00 ° w efor) ay ele) 8 a] @ BHD 0100 oo on * CHAP. 3) SQUARE MATRICES ” 33 12-4 a-|3 8 9 2-1 2 The matrix A consists of the Hirst thee cotumns of the mati conekéered rm Frobiem 13, 9 te ame sequence of elementary row operations utiized im that problem ‘will conver tht matfix 12 rowecheloa form. The elementary matrices corresponding (0 those operations are, sequentially, 100 Loo Loo E=|-3 10 gE o10 B=]0 12 0 O01, BOL oO 1 io0 10 0 Ce a) ee os 0 0 4/38 1 a4 0 Then Peeeee8,-| 32 12 0 19168 S168 20 1 @ oy 2-17 a and Pa=| -3/2 1/2 0 13 8 g9l=lo1 6 =s9i6s sia eal? -1 2) loo 1 34 Factor the following mata into ao upper tuiangular statin and a Rewer tiiangular matrix 12-2 3 “11 02 Ae 3-3 401 211-2 Using Crowr’s reduction, we have STEP 31: 1eoo pare toe at zoe. : STEP 32 Ly =[-i), L5=[3} and Li = 22). STEP 33 U;=[2}, Uj =1-2), a8 Uy =. STEP 3.4 Fypmetey = (LE) UE t Eat apo t= (y= 8 = (G5) U5 = 3B) STEP 35: ay LS) Us ee a, -L as SQUARE MATRICES [cHar. 3 STEP 3.6: Ta this point we have - 0 STEP 32: STEP 3.3: STEP 34: STEP 35: STRP 3: To this point we have 1 ood 12-203 “1 300 o 4-2/3 sis a9 dof ™ Ul go 1 oe 2a a - ooo - Since N= 2 and a =4, we increase N by 110 N = 4, STEP 32; 3.3) STEP 3.3: STEP 34: We have A= LU, with 12 2 3 OL 23 5/3 oo 1 7 oo o 1 STEP 35: wy =1. 
Since N= 4 =, the factor 1ooo a -1 30 4 od and 2-33 -394 Factor the following matrix into an upper triangular matrix and a lower triangular matrix: 12-2 9 -1-2 0 2 Aelycs ug 1 20 1 1-2, The first three steps of Grout’s reduction here are identical to those in Problem 3.4, Then: STEP 34: 2-[-1 2-(-2)=0 Since [,, = but Wn. the original matrix cannot be factored us an LU product. CHAP. 3} SQUARE MATRICES » 3.6 Factor the following matrix into an upper triangular matric and a lower triangular mat 22010 J 30-1 s“lo1 a3 -11 90 Using Crow's reduction, we have STEP A. 2000 niin e pee as 3 Q Ne “ae - o- - STEP 3.2: Ly =(3),L}=[0], and Ly = [=I] srep3 1.U= [72), and Ut = 10) STEP 3.4: dag = yy — (15) 0-1}: (1) 0 - (y= -3 fag = yy ~ C5)" = 1 (0-E = At STEP 3,5: tn (YU Te Se MUU, t= Bla) BS rch Lua) 8 <3 é 2 00 0 ae) s-s00 ob sf =U wel PE] and ual g “lone e, oo - - we increase N by 1 to N=3. 1 and L; = [=1, 2]. $3) mt wel Sal retrse ~auy eee SLSR] -0-go§ tocaacttatoieo-[ YL siele0-de-g uy SQUARE MATRICES ICHAP. 3 STEP 36: To thie point we have 200 oF riauz oo | 33 0 6 a)0 1 56-10 Er) 9 a -s6 0) "Ula w 1 aes 12 one -. 00 0 = ‘Since N= 3 and n= 4, we increase W by 1 to N= 4, STEP 32° U)=[-1.2,-716). STEP 3% ° uz=] 13 a3 STEP 3a awn 0- [ =) [sak m4 n, the factorization is done, We have A= LA, with 20 0 0 Lili oo 2-3 oO _ (0 2 se 1a elo 1-516 0 ot U0 gd 3205 <1 2-116 ~34i5, ooo ft Solve the system of equations ttt = Be Tay A= nit a tins 5 =x4 5 =u ‘The coefficient matrix for this system is matrix A of Problem 3:6. 
Using L-as determined in that problem, we write the system corresponding to LY B as 2y, = 0 yy == Y2~ 516y, os = 4 my, + 295 = 1/67, 35 Solving this system sequentially fram top to battom, we obtain yy vera ‘With these values and as given in Problem 3.6, we can write the sysiem corresponding to UX = ¥ y= BI, yy = 22/5, and atest by = ayt hy dry Solving this system sequentially trom Bottom (0 lop, We BbKasn tke solution to the original system aye da, 10x, = 72, and x,= = CHAP. 3] SQUARE MATRICES a XB Golve the system of equations given in Problem 3.7 if the right side of the second equation ix ‘changed from —11 to 11. The cowtictent matriv A is unchanged, se hoth IL and U are as they were. From (3.2), 2m = Bn 30 on nmin =s5 my #29, = Ex Hod Solving this system sequentially from top to bottom, we obtain y, = 5, y, 4/3, y)—~22/5, and yom “34/17, With these walues sind Uae given im Paoldees 3.6, (3.3) becomes atari = 8 aytias~ dam ft Ray a= Solving this system sequentially tram bottom te top, we obtain the eolution to the system of interest 22 13/17, xy 225/17, x, 25417, and, = ~ 28/17. 3.9 Verify Crout's algorithm for 3 x 3 matrices. For an arbiuazy 3% 3 matrix A, we seek x factorization of the form ay OO ys] fl atts fe on tee OO Tus P= Pty baat la dats + aes By fe aL 0 TD Phy nite tae batts hatin Fb By equating corresponding coefficients ia the order of frst columa, remaining first row, remaining second column, remaining secand row, and remaining third column, and then solving successively for the single unkeown in each equation, we would obéaia the formulas of the Crout reduction algorithm. 310 weane[T UE Le 7] comes 2 8 3) S11 Show that A’ 9A + 101= 0 when 1-2 2 A=]o 2 0 1-1-3, We have -20 2p -2 2] fa 8 9 Weaaselo 2 ofo 2 ols] 0 4 0 -P-1o a1 -3iln 1-3, 3-8 ayn -20 2 was] 0 4 offo 2 0 -2sroujli -1 3 2 SQUARE MATRICES [cHar. 
4 “1-18 18) [1-2 2) 710 0) foaa wesaciae[ ot o)-sfo 2 ale ‘- os ‘| oe nee] Lia esl” loo elo ao, 32 A.square matrix A is said to be nilpotent if A? = 0 for some positive integer p. If p is the least positive integer for which A’ =, then A is said to be nilpotent of index p. Show that is nilpotent of index 3 1s -2 As|1 2-1 36 =3 ‘That is indeed the case, because Ls 2p s -2) fos - =}1.2 -1ffr 2 -1]-fo 3 -2 36 -3l3 6 -3) lo» -3 oa -1yis -2) foo -Sa-}0 3 -1]]1 2 -1]-lo 0 0 a9 -3ils 6 -3) loa. Thee and Supplementary Problems 313 Find clementary matrices that, when multiplied on the right by any 33 matrix A («) will interchange. the second sed third ros of A: (4) wall multiply the First row aif A hy 7: and (e) will ab —3 times the Bist rom of A to the second row of A. ind elementary matrices tat when multiplied on tne ngnt by any 4c matrex A (a) wall interchange: the second and third rows of A: (0) will add ~3 times the fits tow of A to the fourth row of A and (2) will add 5 times the third row of A to the first row of A. SAS Find (e) a mateic P such that PA isbn romeechelon form and (4) 8 matrix Q such that QA = when pi? [yd] ‘Use elementary matrices to find a matrix P such that PA = 1 when 202 asfo12 325 MIT Prove that the product of two lower triangular matrices of the samme onder is itself omer riangulo. In Problems 3.18 through 3.23, wri and an upper tramgular matrix ee ee ee ee) 456 [i d3] leach of the gives matrices as the prosiuct of a lower sriangular matrix pay 20. Fens pens CHAP. 3] SOUARE MATRICES 33 an a3 gas 226 sar am a30 au an aay 4 as Lor a 4) amypr 2a 4 pa-tid 2-t-l 1-3 4 L-¥ 4 22-3 4 por « In Problems 3.23 through 3.29, use LU factorization to solve for the unknowns. Dn +28, +, BM tlds ea 2 Baayen att Deed tS Bey -3et4x + n= 16 n+ tnt at cde & (Hina: See Problems 3.7 and 3.8.1 (Hine: See Problem 3.4.) Repeat Problem 3.24, but with B=[-3,~1,0, 4)" ky ay Baye 454, +68, = 16 Be, tr, = 26 (Hint: See Problem 3.18.) 
‘Repeat Prablem 3.26, but with B= [6, -7, —12]", Br, xptdey= 6 AM nd tanta & ayt3nj tinge -7 Dey - dep xt 3 nxt =-12 A,—Jiz #4, 8 (Hint: Set Problem 3.19.) de, 428) -34,+4n,=-2 (Hit: See Problem 3.21.) Find A? and A” for the matrix given in Probers 3.18. 20 oO a-[i 1 ‘| oo What does AY look Tike when Ais a diagonal matrix” Find A° tor ‘A square matey is sald 10 be ifempouens if A’ = A. Show that the following marx fs idemporear: 2-2-4 as[-1030 4 1-2 43. =a, Prove that if A. adempotent, then 0 too Peawe that (a)! = (aT)? Chapter 4 Matrix Inversion ‘THE INVERSE Matrix B is the inverse of a square matrix A if ABS BAST an For both products to be defined simultaneously, A and B must be square matrices of the same order. Example 4.1 “21 ret [sit i) sme imeneot [5 4] 12-2 1 )y-2 pape becave bills tal-L2 alld JLo 2] ‘A square matrix is said to be singular if itdoes not have an inverse; a matrix that has an inverse is called nonsingufar or invernibte. The inverse of A, when ic exists, Is denoted ax A” SIMPLE INVERSES Elementary matrices corresponding to elementary row operations (see Chapter 3) are invertible, An elementary matrix of the first kind, one that corresponds to an interchange of two rows, is its own inverse. The inverse of un elementary mattis af the second kind, one that corresponds to:multiplying ‘one row of a matrix by a nonzero scalar &, is obtained simply by replacing the value of & in the elementary matrix with its reciprocal 1/&, The inverse of an elementary matrix of the dhird kind. ‘which eorresponds to addieg to ane row a constant k tines another row, is obtained by replacing the value & in the elementary matrix with its additive inverse ~k. (See Problem 4.2.) ‘The laverse of an upper triangular macrix is itself upper triangular, while that of a low triangular matrix is Jower triangular (see Problem 4.13), provided none of the diagonal elements zero, If at least one diagonal element is zero. 
then no inverse exists, The inverses of triangular ‘matrices are constructed iteratively, one column at a time, using Eq. (4.1). (See Problems 4.3 and 44) CALCULATING INVERSES Inverses may he found through the use of elementary rom operations (see Chapter 1). This provedure not only yields the inverse when it exists, but also indicates when the inverse does not exist. An algorithm for finding the inverse of a matrix A is as follows STEP 4.1: Form the partitioned matrix [A 1], where 1 is the identity matrix having the same order as STEP 4.2: Using elementary row operations, transform A into row-echelon form (see Chapter 1), applying each row operation (6 the entire partitioned snatrix formed in Step 1. Denote the revult as [€| B], where € is in rowechelon form. STEP 4.3: If Chas a zero row, stop; the original matrix A is singular and does not have an inverse, Otherwise continue; the ariginal matrix #8 inveruble, M CHAP. 4) MATRIX INVERSION 35 STEP 4.4; Begioning with the last columa of © and progressing backward iteratively through the second column, use elementary row operation E3 to transform all elements above the diagonal of C to zero, Apply cach operation, however, ta the entire matrix [C |D]. ‘Denote the result as [I] B]. The matrix B is the inverse of the original matrix A, {See Problems 4.5 through 4.7.) If exact arithmetic is not used in Step 4.2, then a pivating stratcgy (sec Chapter 2) should be employed. No pivoting strategy is used in Step 4.4; the pivot is always one tf the unity elements-on the diagonal of C, Interchanging any rows after Step 4.2 has been completed ‘will undo the work of that step ond, therefore, ix act allowed SIMULTANEOUS LINEAR EQUATIONS: ‘A set of linear equations in the matrix form AX=B (4.2) verse is known Multiplying each side of this matrix , which simplifies to Xea'R (43) (Sec Problems 4.8 and 4.9.) Equation (4.3) is most useful as a theoretical representation of the salutinn t0 (4.2). 
The methnds given in Chapter 2 fnr solving simultaneous linear equations generally Fequire fewer computations than the method indicated in (4.3) when A is nat known. ean be solve easily if A is invertible and equation by Av’ yields A"'AX = A” PROPERTIES OF THE INVERSE Property 4.t: “The Inverse of a nonsingular matrix is unique. Property 4.2: If A is nonsingular, then (AT')"! =A. ‘Property If A and B are nonsingular, then (AB)! =B-'A™' Property 4.4: If A is nonsingular, then so ten is AY Further, (AT)! = (A)? (Sce Problems 4.10 to 4.12 and 4,30.) Solved Problems 41 Determine whether 008 @-[ das ais] is the inverse of any of the following matrices: [4 -8 123 2-4 (Po) eld a) ef o] ‘We consider each of the given mattces in turn. Since ao-[f “olL-tas ais]-[6 2] ‘is B04 the identity matrix, G is not the inverse of A. 2-40 2 00 9 ot 42 aa MATRIX INVERSION CHAP. 4] not square, #0 it has ne: inverse, Im particular, he presduct BG is not defined, For C, matrix multipiestion gives 2 ollcas eisl-[o 1] = ce=[ cas caslls olcla ‘il 50 G fs the anverse of C, 1G and D- do not have the same order, 30 they cannot be inverses ef one another, Determine the inverses of the following elementary matrices: oon 100 soo -Jo10) peljoor 100 O10. not, ° ° i 1 00 100 lo p-}o 2 0| e-fo ro] F-| 34 o 01 O21 oo oth A and are elementary matrices of the first kind; thus, A“! = A und Bo! = B. Matrices and Dare elementary matrices of the cecond kind. Their inverses ate 14 0 0 1 0 6 sa 1 of and Dt=|0 -12 0 oo 1 oo Matrices E and F are elementary matrices of the third kind, Their inverses are ao too To] and ee[iro 24 oor 213 Ar}O 12 0038 Since A is upper triangular with no zero elements an its diagonal, it has an inverse and the inverse is upper triangular. Purthermore, since A'A =, we may write lea AAS Sale 3 sth tke Best matin othe lef representing, AC! Wie peeve the ivicated matriculation aoe equate corresponding clements on. and above the diagonal. 
‘eginsing with the lefmost colume and sequentially moving through successive colurens, we determine that 0 Determine the inverse of 212) + 80) + 0 J) BO) FA OL) + at) + eC) = $0) + (- RQ) +) =0 3) + 12) + 3, (3) + O29 + (3) <1 eeeege MATRIX INVERSION cy CHAP. 4) wa an 16 Tr, wee['h 1" S| ooo VW. 44° Determine the inverses of 30 oo -1 9 00 1-2 00 2-2 00 Awl 4 40] a BH} 3) 2 0 13-10, r-1 33 Hotn matnces are lower utangular. Since A has a zero element on its main diagonal, it does not hhave an inverse, In contrast, all the elements on the main diagooa! of B are nonzero 30 it has an inverse which itself must be lower triangular, Since BI we may write -1 00 0 Oye 008) Ji ooo 2-2 coffe nal force 3 1-2 old « Fal"loa10 ret Salen ei] loood ‘vith the second matrix on the leit representing B~', We perform the indicated mateix multiplication and fequale corresponding elements on and below the diagonal. Beginning with the leftmow eolumn and sequenially moving direagh successive culumins, we devermne bat -le+ 06 +0d +0g= 1 2-1) + (2b +d +0g=0 HOD + 1-1) + (2d + Og=0 OD CDC B+ a2) 43e-0 20) + (-2)¢ +06 + O41 340) + 10-142) + (230 + 08 (0) + (-1Y=1/2) + 4-1/4) + 3h =0 310) +1(0) + (-2)f + OF 10) + (10) + 4-142) + 37=0 MO) + (=1KO) +340) #37= 150 doe bf) 3, BSeesseeg <1 12 0 “2-1-1702 2 1W2 WP ‘Therefore Eece 45 Determine the inverse of _[s3 Lil We follow Steps 4.1 through 44, beginning with (a | [ifs 1] = [06 03 9] Matching te fon ty 2 ot [5 28/22 0] ding 22 sine te fra row to elo -0.2/-04 1) the second row 29] sping te seeond ro by -s} -102 ale 38 MATRIX INVERSION CHAP. 4 ‘Th left cide of thie partitions’ matrix i in rourechalon form. Since it contains no zero rome, the eriginal cvairix has an inverse. Applying Step 4.4 to the second columa, we obtain TPL SEL 2] Aan, 96 ces the ssc OLE 2S) tow te the Brat rom Thesefote, w-[-} 3] 44 — Determine the inverse of ‘Adding ~7 tines the first row to. the third row ‘| Mulkiplying the second row by. 
    -> |1 2 3 |  1    0   0|    Adding 6 times the second row to the third row
       |0 1 2 | 4/3 -1/3  0|
       |0 0 0 |  1   -2   1|

The left side of this partitioned matrix is in row-echelon form. Since its third row is zero, the original matrix does not have an inverse.

4.7 Determine the inverse of

    A = |0 1  1|
        |5 1 -1|
        |2 3  3|

    [A | I] = |0 1  1 | 1 0 0|
              |5 1 -1 | 0 1 0|
              |2 3  3 | 0 0 1|

    -> |5 1 -1 | 0 1 0|    Interchanging the first and second rows
       |0 1  1 | 1 0 0|
       |2 3  3 | 0 0 1|

    -> |1 1/5 -1/5 | 0 1/5 0|    Multiplying the first row by 1/5
       |0  1    1  | 1  0  0|
       |2  3    3  | 0  0  1|

    -> |1  1/5  -1/5 | 0  1/5 0|    Adding -2 times the first row to the third row
       |0   1     1  | 1   0  0|
       |0 13/5  17/5 | 0 -2/5 1|

    -> |1 1/5 -1/5 |   0    1/5 0|    Adding -13/5 times the second row to the third row
       |0  1    1  |   1     0  0|
       |0  0   4/5 | -13/5 -2/5 1|

    -> |1 1/5 -1/5 |   0    1/5  0 |    Multiplying the third row by 5/4
       |0  1    1  |   1     0   0 |
       |0  0    1  | -13/4 -1/2 5/4|

The left side of this partitioned matrix is in row-echelon form and contains no zero rows; thus, the original matrix has an inverse. Applying Step 4.4, we obtain

    -> |1 1/5 -1/5 |   0    1/5   0 |    Adding -1 times the third row to the second row
       |0  1    0  | 17/4   1/2 -5/4|
       |0  0    1  |-13/4  -1/2  5/4|

    -> |1 1/5 0 |-13/20  1/10  1/4|    Adding 1/5 times the third row to the first row
       |0  1  0 | 17/4   1/2  -5/4|
       |0  0  1 |-13/4  -1/2   5/4|

    -> |1 0 0 | -3/2    0   1/2|    Adding -1/5 times the second row to the first row
       |0 1 0 | 17/4   1/2 -5/4|
       |0 0 1 |-13/4  -1/2  5/4|

Therefore,

    A^-1 = | -3/2    0   1/2|
           | 17/4   1/2 -5/4|
           |-13/4  -1/2  5/4|

4.8 Solve the system

    5x1 + 3x2 = -8
    2x1 +  x2 =  1

This system can be written in the matrix form AX = B, with

    A = |5 3|    X = |x1|    B = |-8|
        |2 1|        |x2|        | 1|

Using the result of Problem 4.5 with Eq. (4.3), we have

    X = A^-1 B = |-1  3| |-8|   | 11|
                 | 2 -5| | 1| = |-21|

The solution is x1 = 11, x2 = -21.

4.9 Solve the system

           x2 +  x3 = 2
    5x1 +  x2 -  x3 = 3
    2x1 + 3x2 + 3x3 = 6

This system can be written in the matrix form AX = B, where A is the matrix inverted in Problem 4.7. Using that result in Eq. (4.3), we have

    X = A^-1 B = | -3/2    0   1/2| |2|   |  0 |
                 | 17/4   1/2 -5/4| |3| = | 5/2|
                 |-13/4  -1/2  5/4| |6|   |-1/2|

The solution is x1 = 0, x2 = 5/2, x3 = -1/2.

4.10 Prove that the inverse is unique when it exists.

Assume that A has two inverses, B and C. Then AB = I and CA = I. It follows that

    C = CI = C(AB) = (CA)B = IB = B

4.11 Prove that (A^-1)^-1 = A when A is nonsingular.

(A^-1)^-1 is, by definition, the inverse of A^-1. A also is the inverse of A^-1. These inverses must be equal as a consequence of Problem 4.10.

4.12 Prove that (AB)^-1 = B^-1 A^-1 if both A and B are invertible.

(AB)^-1 is, by definition, the inverse of AB.
Furthermore,

    (AB)(B^-1 A^-1) = A(BB^-1)A^-1 = AIA^-1 = AA^-1 = I
and
    (B^-1 A^-1)(AB) = B^-1(A^-1 A)B = B^-1 IB = B^-1 B = I

so B^-1 A^-1 is also an inverse of AB. These inverses must be equal as a consequence of Problem 4.10.

4.13 Prove that the inverse of a lower triangular matrix A with nonzero diagonal elements is itself lower triangular.

The proof is inductive on the rows of A^-1. Denote the inverse of A = [a_ij] as A^-1 = [α_ij]. Since the product AA^-1 is the identity matrix, the element in the ith row and jth column of this product must be zero when i != j. In particular, the element in the first row and jth column of AA^-1, with j > 1, is zero. Because A is lower triangular, we may write that element as

    a_11 α_1j + 0(α_2j) + ... + 0(α_nj) = a_11 α_1j

We are given a_11 != 0, which implies that α_1j = 0 for j > 1.

Now assume that α_ij = 0 for j > i and all i <= p - 1, and compute the pth row of AA^-1. Since AA^-1 is the identity matrix, the element in its pth row and jth column, for j > p, must satisfy

    0 = a_p1 α_1j + a_p2 α_2j + ... + a_pn α_nj = a_pp α_pj

because a_pk = 0 for k > p (A is lower triangular), and α_kj = 0 for k < p since then j > p > k (the induction hypothesis). Since a_pp != 0, it follows that α_pj = 0 when j > p.

4.14 Prove that any square matrix that can be reduced to row-echelon form without rearranging any rows can be factored into a lower triangular matrix L times an upper triangular matrix U.

The reduction of a matrix A to row-echelon form can be expressed as the product of a sequence of elementary matrices, one for each elementary row operation in the reduction process, multiplied by A. If U is the resulting row-echelon form of A, then U is upper triangular and

    U = (E_k E_{k-1} ... E_2 E_1)A     (1)

Each E_i is an elementary matrix of either the second or the third kind, so each is lower triangular and invertible. It follows from Problem 4.12 that if P = E_k E_{k-1} ... E_2 E_1, then

    P^-1 = (E_k E_{k-1} ... E_2 E_1)^-1 = E_1^-1 E_2^-1 ... E_k^-1     (2)

P^-1 is thus the product of lower triangular matrices and is itself lower triangular (Problem 3.17). From (1), PA = U, whereupon

    A = IA = (P^-1 P)A = P^-1(PA) = P^-1 U = LU

with L = P^-1 lower triangular and U upper triangular.

Supplementary Problems

In Problems 4.15 through 4.26, find the inverse of the given matrix if it exists.
In Problems 4.27 through 4.29, use matrix inversion to solve the given systems of simultaneous linear equations. (Hint: See Problems 4.23, 4.25, and 4.26, respectively.)

4.30 Prove that (A^-1)^T = (A^T)^-1.

4.31 Prove that if the commutative property of multiplication holds for nonsingular matrices A and B, then it also holds for the following pairs of matrices: (a) A^-1 and B^-1; (b) A^-1 and B; (c) A and B^-1.

Chapter 5

Determinants

EXPANSION BY COFACTORS

The determinant of a square matrix A, denoted det A or |A|, is a scalar. If the matrix is written out as an array of elements, then its determinant is indicated by replacing the brackets with vertical lines. For 1 x 1 matrices, det A = a11. For 2 x 2 matrices,

    det A = |a11 a12| = a11 a22 - a12 a21
            |a21 a22|

Determinants for n x n matrices with n > 2 are calculated through a process of reduction and expansion utilizing minors and cofactors, as follows.

A minor M_ij of an n x n matrix A is the determinant of the (n-1) x (n-1) submatrix that remains after the entire ith row and jth column have been deleted from A.

Example 5.1  For

    A = |0 1 2|
        |3 4 5|
        |6 7 8|

    M11 = |4 5| = 4(8) - 5(7) = -3      M23 = |0 1| = 0(7) - 1(6) = -6      M33 = |0 1| = 0(4) - 1(3) = -3
          |7 8|                               |6 7|                               |3 4|

A cofactor A_ij of an n x n matrix A is defined in terms of its associated minor as

    A_ij = (-1)^(i+j) M_ij

Then, for any i or j (i, j = 1, 2, ..., n),

    det A = a_i1 A_i1 + a_i2 A_i2 + ... + a_in A_in = a_1j A_1j + a_2j A_2j + ... + a_nj A_nj     (5.1)

For each i, the first sum in (5.1) represents an expansion along the ith row of A; for each j, the second sum represents an expansion along the jth column of A. Choosing to expand along a row or column having many zeros, if one exists, greatly reduces the number of calculations required to compute det A. (See Problems 5.2 through 5.4.)
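Expansion by cofactors translates directly into a short recursive routine. The sketch below is illustrative only (plain Python lists as the matrix representation; the function names are our own), and it mirrors Eq. (5.1) with an expansion along the first row:

```python
# Determinant by cofactor expansion along the first row, Eq. (5.1).
# Illustrative sketch only; matrices are plain lists of lists.

def minor(a, i, j):
    """Submatrix remaining after deleting row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    """det A = sum over j of a[0][j] * (-1)**j * det(minor(a, 0, j))."""
    if len(a) == 1:
        return a[0][0]
    return sum(a[0][j] * (-1) ** j * det(minor(a, 0, j)) for j in range(len(a)))

print(det([[2, 3, 4], [-5, 5, 6], [7, 8, 9]]))  # -45, as in Problem 5.2
```

Since every call spawns n subcalls of size n - 1, the running time grows like n!, which is exactly why the pivotal condensation algorithm of this chapter is preferred for larger matrices.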
PROPERTIES OF DETERMINANTS

Property 5.1: If A and B are square matrices of the same order, then det AB = det A det B.
Property 5.2: The determinant of an upper or lower triangular square matrix is the product of its diagonal elements.
Property 5.3: If B is formed from a square matrix A by interchanging two rows or two columns of A, then det A = -det B.
Property 5.4: If B is formed from a square matrix A by multiplying every element of a row or column of A by a scalar k, then det A = (1/k) det B.
Property 5.5: If B is formed from a square matrix A by adding a constant times one row (or column) of A to another row (or column) of A, then det A = det B.
Property 5.6: If one row or one column of a square matrix is zero, its determinant is zero.
Property 5.7: det A^T = det A, provided A is a square matrix.
Property 5.8: If two rows of a square matrix are equal, its determinant is zero.
Property 5.9: A matrix A (not necessarily square) has rank k if and only if it possesses at least one k x k submatrix with a nonzero determinant while all square submatrices of larger order have zero determinants.
Property 5.10: If A has an inverse, then det A^-1 = 1/det A.

DETERMINANTS OF PARTITIONED MATRICES

A block matrix is one whose elements are themselves matrices. Property 5.2 can be extended to partitioned matrices in block upper (or lower) triangular form. If

    A = |A11 A12 ... A1n|
        | 0  A22 ... A2n|
        | .   .       . |
        | 0   0  ... Ann|

where each of the submatrices A11, A22, ..., Ann is square, then

    det A = det A11 det A22 ... det Ann     (5.2)

(See Problem 5.8.)

PIVOTAL CONDENSATION

Properties 5.3 through 5.5 describe the effects of elementary row and column operations on a determinant. Combined with Property 5.2, they form the basis for the pivotal condensation algorithm for calculating the determinant of a matrix A, as follows:

STEP 5.1: Initialize D = 1. D is a scalar that will record changes in det A as a result of elementary row operations.
STEP 5.2: Use elementary row operations to reduce A to row-echelon form.
Each time two rows are interchanged, multiply D by -1; each time a row is multiplied by k, multiply D by 1/k. Do not change D when an elementary row operation of the third kind is used.

STEP 5.3: Calculate det A as the product of D and all the diagonal elements of the row-echelon matrix obtained in Step 5.2.

(See Problems 5.6 and 5.7.) This algorithm is easy to program for computer implementation, and it becomes increasingly more efficient than expansion by cofactors as the order of A becomes larger. If rounding is to be used, then the pivoting strategies given in Chapter 2 are recommended.

INVERSION BY DETERMINANTS

The cofactor matrix A^c associated with a square matrix A is obtained by replacing each element of A with its cofactor. If det A != 0, then

    A^-1 = (1/det A)(A^c)^T     (5.3)

If det A is zero, then A does not have an inverse. (See Problems 5.9 through 5.11 and Problems 5.18 through 5.20.) The method given in Chapter 4 for inversion is almost always quicker than using (5.3), with 2 x 2 and 3 x 3 matrices being exceptions.

Solved Problems

5.1 Calculate the determinants of

    A = |1 2|        B = |2 -3|
        |3 4|            |4  3|

    det A = 1(4) - 2(3) = -2        det B = 2(3) - (-3)(4) = 18

5.2 Calculate the determinant of

    A = | 2 3 4|
        |-5 5 6|
        | 7 8 9|

expanding along (a) the first row, (b) the first column, and (c) the second column.

(a) Expanding along the first row, we have

    det A = a11 A11 + a12 A12 + a13 A13
          = 2(-1)^2 M11 + 3(-1)^3 M12 + 4(-1)^4 M13
          = 2(1)(-3) + 3(-1)(-87) + 4(1)(-75) = -45

(b) Along the first column,

    det A = a11 A11 + a21 A21 + a31 A31
          = 2(1)(-3) + (-5)(-1)(-5) + 7(1)(-2) = -45

(c) Expanding along the second column gives

    det A = a12 A12 + a22 A22 + a32 A32
          = 3(-1)(-87) + 5(1)(-10) + 8(-1)(32) = -45

5.3 Calculate the determinant of

    B = | 3 4 0|
        |-2 7 6|
        | 5 8 0|
by expanding along (a) the second row and (b) the third column.

(a) Expanding along the second row gives

    det B = b21 B21 + b22 B22 + b23 B23
          = -2(-1)^3 |4 0| + 7(-1)^4 |3 0| + 6(-1)^5 |3 4|
                     |8 0|           |5 0|           |5 8|
          = -2(-1)(0) + 7(1)(0) + 6(-1)(4) = -24

(b) Along the third column,

    det B = 0(B13) + 6B23 + 0(B33) = 6B23 = 6(-1)^5 |3 4| = 6(-1)[3(8) - 4(5)] = -24
                                                    |5 8|

Part b involves less computation, because we expanded along a column that has mostly zeros.

5.4 Calculate the determinant of

    A = | 1 -4  2  2|
        | 4  7 -3  5|
        | 3  0  8  0|
        |-5  1  8  9|

We expand along the third row, because it is the row or column containing the most zeros:

    det A = 3A31 + 0(A32) + 8A33 + 0(A34) = 3(-1)^4 M31 + 8(-1)^6 M33

Now we may write

    M31 = |-4  2 2|
          | 7 -3 5| = -4(1)[(-3)(9) - 5(8)] + 2(-1)[7(9) - 5(1)] + 2(1)[7(8) - (-3)(1)]
          | 1  8 9|
        = -4(-67) - 2(58) + 2(59) = 270

and

    M33 = | 1 -4 2|
          | 4  7 5| = 1(1)[7(9) - 5(1)] + (-4)(-1)[4(9) - 5(-5)] + 2(1)[4(1) - 7(-5)]
          |-5  1 9|
        = 1(58) + 4(61) + 2(39) = 380

Thus, det A = 3(270) + 8(380) = 3850.

5.5 Verify that det AB = det A det B (Property 5.1) for the matrices given in Problems 5.2 and 5.3.

From the results of those problems, we know that det A det B = (-45)(-24) = 1080. Now

    AB = | 2 3 4| | 3 4 0|   |20  61 18|
         |-5 5 6| |-2 7 6| = | 5  63 30|
         | 7 8 9| | 5 8 0|   |50 156 48|

To calculate det AB, we expand along the first row, finding that

    det AB = 20[63(48) - 30(156)] - 61[5(48) - 30(50)] + 18[5(156) - 63(50)]
           = 20(-1656) - 61(-1260) + 18(-2370) = 1080

5.6 Use pivotal condensation to evaluate the determinant of

    A = |0 2 2|
        |1 0 3|
        |2 1 1|

We initialize D = 1 and use elementary row operations to reduce A to row-echelon form:

    -> |1 0 3|    Interchanging the first and second rows:
       |0 2 2|    D <- D(-1) = 1(-1) = -1
       |2 1 1|

    -> |1 0  3|   Adding -2 times the first row to the third row:
       |0 2  2|   D remains -1
       |0 1 -5|

    -> |1 0  3|   Multiplying the second row by 1/2:
       |0 1  1|   D <- D(2) = -1(2) = -2
       |0 1 -5|

    -> |1 0  3|   Adding -1 times the second row to the third row:
       |0 1  1|   D remains -2
       |0 0 -6|

    -> |1 0 3|    Multiplying the third row by -1/6:
       |0 1 1|    D <- D(-6) = (-2)(-6) = 12
       |0 0 1|

The diagonal elements of this last matrix are all ones, so det A = D(1)(1)(1) = 12.

5.7 Use pivotal condensation to evaluate the determinant of

    A = |1  2  -3   4|
        |2 -2   5  -6|
        |1  7 -10  14|
        |6  5  -3   6|
We initialize D = 1 and reduce A to row-echelon form:

    -> |1  2  -3   4|    Adding -2 times the first row to the second row:
       |0 -6  11 -14|    D remains 1
       |1  7 -10  14|
       |6  5  -3   6|

    -> |1  2  -3   4|    Adding -1 times the first row to the third row:
       |0 -6  11 -14|    D remains 1
       |0  5  -7  10|
       |6  5  -3   6|

    -> |1  2  -3   4|    Adding -6 times the first row to the fourth row:
       |0 -6  11 -14|    D remains 1
       |0  5  -7  10|
       |0 -7  15 -18|

    -> |1  2    -3    4 |    Multiplying the second row by -1/6:
       |0  1 -11/6  14/6|    D <- D(-6) = 1(-6) = -6
       |0  5    -7    10|
       |0 -7    15   -18|

    -> |1  2    -3    4 |    Adding -5 times the second row to the third row:
       |0  1 -11/6  14/6|    D remains -6
       |0  0  13/6  -5/3|
       |0 -7    15   -18|

    -> |1  2    -3    4 |    Adding 7 times the second row to the fourth row:
       |0  1 -11/6  14/6|    D remains -6
       |0  0  13/6  -5/3|
       |0  0  13/6  -5/3|

    -> |1  2    -3     4  |    Multiplying the third row by 6/13:
       |0  1 -11/6   14/6 |    D <- D(13/6) = -6(13/6) = -13
       |0  0    1  -10/13 |
       |0  0  13/6   -5/3 |

    -> |1  2    -3     4  |    Adding -13/6 times the third row to the fourth row:
       |0  1 -11/6   14/6 |    D remains -13
       |0  0    1  -10/13 |
       |0  0    0     0   |

The matrix is now in row-echelon form with diagonal elements 1, 1, 1, and 0. Thus, det A = -13(1)(1)(1)(0) = 0.

5.8 Calculate the determinant of the given 4 x 4 block matrix.

This matrix can be partitioned into block upper triangular form with square submatrices on its main diagonal. It then follows from Eq. (5.2) that det A is the product of the determinants of the diagonal blocks; evaluating those small determinants gives det A = 195.

5.9 Calculate the inverse of

    A = | 3 5|
        |-1 4|

We shall use Eq. (5.3). Since the determinant of a 1 x 1 matrix is the element itself, we have

    A11 = (-1)^2 det [4] = 4        A12 = (-1)^3 det [-1] = 1
    A21 = (-1)^3 det [5] = -5       A22 = (-1)^4 det [3] = 3

The determinant of A is 3(4) - 5(-1) = 17, so

    A^-1 = (1/17) |4 -5|
                  |1  3|

5.10 Calculate the inverse of

    A = | 2 3 4|
        |-5 5 6|
        | 7 8 9|

In Problem 5.2, we calculated a number of cofactors for this matrix. In particular,

    A11 = -3    A12 = 87    A13 = -75    A21 = 5    A22 = -10    A31 = -2    A32 = -32

and det A = -45. In addition,

    A23 = (-1)^5 |2 3| = -[2(8) - 3(7)] = 5        A33 = (-1)^6 | 2 3| = 2(5) - 3(-5) = 25
                 |7 8|                                          |-5 5|

Thus

    A^c = |-3  87 -75|
          | 5 -10   5|
          |-2 -32  25|

and

    A^-1 = (1/det A)(A^c)^T = -(1/45) | -3   5  -2|
                                      | 87 -10 -32|
                                      |-75   5  25|

5.11 Find the inverse of the matrix given in Problem 5.7.

Since the determinant of that matrix was found to be zero, the matrix does not have an inverse.
5.12 Verify Property 5.9 for the matrix A of Problem 1.17.

The rank of A was determined in Problem 1.17 to be 2, so there should be at least one 2 x 2 submatrix of A having a nonzero determinant. There are many, including the one in the upper left:

    |3 2| = 3(3) - 2(2) = 5
    |2 3|

All 3 x 3 submatrices, obtained by deleting any two columns of A, have zero determinant.

5.13 Prove that the determinant of an elementary matrix of the first kind is -1.

An elementary matrix E of the first kind is an identity matrix with two rows interchanged. The proof is inductive on the order of E. If E is 2 x 2, then

    E = |0 1|
        |1 0|

and det E = -1. Now assume the proposition is true for all elementary matrices of the first kind with order (k-1) x (k-1), and consider an elementary matrix E of order k x k. Find the first row of E that was not interchanged, and denote it as row m. Expanding by cofactors along row m yields

    det E = e_m1 E_m1 + e_m2 E_m2 + ... + e_mk E_mk = E_mm = M_mm

because e_mj = 0 for all j != m, and e_mm = 1. Now M_mm is the determinant of an elementary matrix of the first kind having order (k-1) x (k-1), so by induction it is equal to -1. Thus, det E = -1.

5.14 Prove Property 5.3.

If B is obtained from A by interchanging two rows of A, then B = EA, where E is an elementary matrix of the first kind. Using Property 5.1 and the result of Problem 5.13, we obtain

    det B = det EA = det E det A = (-1) det A

from which Property 5.3 immediately follows.

5.15 Prove Property 5.4.

Assume that B is obtained from an n x n matrix A by multiplying the ith row of A by the scalar k. Evaluating the determinant of B by expansion of cofactors along the ith row, we obtain

    det B = ka_i1 A_i1 + ka_i2 A_i2 + ... + ka_in A_in = k(a_i1 A_i1 + a_i2 A_i2 + ... + a_in A_in) = k det A

from which Property 5.4 follows.

5.16 Prove that the determinant of an elementary matrix of the third kind is 1.

An elementary matrix E of the third kind is an identity matrix that has been altered by adding a constant times one row of I to another row of I. The proof is inductive on the order of E. If E is 2 x 2, then

    E = |1 0|    or    E = |1 c|
        |c 1|              |0 1|

In either case, det E = 1.
Now assume the proposition is true for all elementary matrices of the third kind with order (k-1) x (k-1), and consider an elementary matrix E of order k x k. Find the first row of E that was not altered from the k x k identity matrix, and denote this row as row m. The proof now follows that in Problem 5.13, except that here M_mm = 1 by induction; thus det E = 1.

5.17 Prove Property 5.5.

If B is obtained from a square matrix A by adding to one row of A a constant times another row of A, then B = EA, where E is an elementary matrix of the third kind. Using Property 5.1 and the result of Problem 5.16, we obtain

    det B = det EA = det E det A = (1) det A = det A

5.18 Prove that if the determinant of a matrix A is zero, then the matrix does not have an inverse.

Assume that A does have an inverse. Then

    1 = det I = det (AA^-1) = det A det A^-1 = (0) det A^-1 = 0

which is absurd. Thus, A cannot have an inverse.

5.19 Prove that if each element of the ith row of an n x n matrix is multiplied by the cofactor of the corresponding element of the kth row (i, k = 1, 2, ..., n; i != k), then the sum of these products is zero.

For any n x n matrix A, construct a new matrix B by replacing the kth row of A with its ith row (i != k). The ith and kth rows of B are identical, for both are the ith row of A; it follows from Property 5.8 that det B = 0. Thus, evaluating det B via expansion by cofactors along its ith row, we may write

    0 = det B = b_i1 B_i1 + b_i2 B_i2 + ... + b_in B_in = a_i1 B_i1 + a_i2 B_i2 + ... + a_in B_in     (1)

where B_ij is the cofactor of b_ij. For each element b_ij (j = 1, 2, ..., n) in the ith row of B, compare the submatrix obtained from B by deleting its ith row and jth column to the submatrix obtained from A by deleting its kth row and jth column.
They are the same except for the ordering of their rows: each submatrix contains all the rows of A except the kth, and all the columns of A except the jth. Exactly |i - k| - 1 row interchanges are required to make the submatrix of B equal to the submatrix of A, so it follows from Property 5.3 that

    det B'_ij = (-1)^(|i-k|-1) det A'_kj     (2)

These determinants are minors of B and of A, respectively, so (2) may be written in cofactor notation as

    B_ij = -A_kj     (3)

Combining (1) and (3), we have

    0 = a_i1(-A_k1) + a_i2(-A_k2) + ... + a_in(-A_kn) = -(a_i1 A_k1 + a_i2 A_k2 + ... + a_in A_kn)

which, when multiplied by -1, gives the desired result.

5.20 Prove that A(A^c)^T = |A| I.

Consider the (i, k) element of the product A(A^c)^T:

    a_i1 [(1, k) element of (A^c)^T] + ... + a_in [(n, k) element of (A^c)^T] = a_i1 A_k1 + a_i2 A_k2 + ... + a_in A_kn

It follows from Problem 5.19 that this sum is zero when i != k. When i = k, the sum is det A, because it is an expansion by cofactors along the ith row of A. Therefore, we may write

    A(A^c)^T = ||A|  0  ...  0 |
               | 0  |A| ...  0 |
               | .   .       . |
               | 0   0  ... |A|| = |A| I

Note that if |A| != 0, then A(A^c)^T / |A| = I, from which (5.3) follows.

Supplementary Problems

In Problems 5.21 through 5.26, let A, B, C, D, E, and F be the six matrices given there.

5.21 Find (a) det A and (b) det B, and (c) show that det AB = det A det B.
5.22 Find (a) det C and (b) det D, and (c) show that det CD = det C det D.
5.23 Find (a) det E and (b) det F.
5.24 Use determinants to find (a) A^-1 and (b) B^-1.
5.25 Use determinants to find (a) C^-1 and (b) D^-1.
5.26 Use determinants to find (a) E^-1 and (b) F^-1.

In Problems 5.27 and 5.28, find the determinant of the given matrix.

5.29 Use determinants to find the inverse of the matrix given in Problem 5.28.
5.30 Prove Property 5.8.
5.31 Prove that if A has order n x n, then det kA = k^n det A.
5.32 Prove that if A has order n x n, then |A^p| = |A|^p for any positive integer p.
5.33 Prove that if A and B are square matrices of the same order, then det AB = det BA.
5.34 Prove that if A is invertible, then det A^-1 = 1/det A.
5.35 Let A = LU be an LU decomposition of A (see Chapter 3). Show that det A is equal to the product of the diagonal elements of U.
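Equation (5.3) is straightforward to program. The following sketch (assuming NumPy is available; the function name is our own) builds the cofactor matrix A^c entry by entry and returns (A^c)^T / det A; it is checked against the result of Problem 5.9:

```python
import numpy as np

def cofactor_inverse(a):
    """A^-1 = (A^c)^T / det A, Eq. (5.3); fails when det A = 0 (Problem 5.18)."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    d = np.linalg.det(a)
    if np.isclose(d, 0.0):
        raise ValueError("det A = 0, so A has no inverse")
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            m = np.delete(np.delete(a, i, axis=0), j, axis=1)  # delete row i, column j
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(m)     # cofactor A_ij
    return cof.T / d

A = np.array([[3., 5.], [-1., 4.]])                         # Problem 5.9: det A = 17
assert np.allclose(cofactor_inverse(A), np.array([[4., -5.], [1., 3.]]) / 17)
assert np.allclose(A @ cofactor_inverse(A), np.eye(2))      # A A^-1 = I
```

As noted above, for matrices larger than 3 x 3 the row-reduction method of Chapter 4 is cheaper than this cofactor construction.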
Chapter 6

Vectors

DIMENSION

A vector is a matrix having either one row or one column. The number of elements in a row vector or a column vector is its dimension, and the elements are called components. The transpose of a row vector is a column vector, and vice versa.

LINEAR DEPENDENCE AND INDEPENDENCE

A set of m-dimensional vectors {V1, V2, ..., Vn} of the same type (row or column) is linearly dependent if there exist constants c1, c2, ..., cn, not all zero, such that

    c1 V1 + c2 V2 + ... + cn Vn = 0     (6.1)

Example 6.1  The set of five-dimensional vectors {[1, 0, -2, 0, 0]^T, [2, 0, 3, 0, 0]^T, [0, 2, 0, 0, 1]^T, [3, 0, 4, 0, 0]^T} is linearly dependent, because

    (-1)[1, 0, -2, 0, 0]^T + (-10)[2, 0, 3, 0, 0]^T + (0)[0, 2, 0, 0, 1]^T + (7)[3, 0, 4, 0, 0]^T = [0, 0, 0, 0, 0]^T

A set of m-dimensional vectors {V1, V2, ..., Vn} of the same type is linearly independent if the only constants for which Eq. (6.1) holds are c1 = c2 = ... = cn = 0.

The following algorithm may be used to determine whether a set of row vectors is linearly independent or dependent. The algorithm is applicable to column vectors too, if their transposes are considered instead. (See Problems 6.2 and 6.3.)

STEP 6.1: Construct a matrix V whose rows are the row vectors under consideration. That is, the first row of V is V1, the second row of V is V2, and so on.
STEP 6.2: Determine the rank of V.
STEP 6.3: If the rank of V is smaller than the number of vectors in the set under consideration (i.e., the number of rows of V), then the vectors are linearly dependent; otherwise, they are linearly independent.

LINEAR COMBINATIONS

A vector B is a linear combination of vectors V1, V2, ..., Vn if there exist constants d1, d2, ..., dn such that

    B = d1 V1 + d2 V2 + ... + dn Vn     (6.2)

For the matrix addition and equality of (6.2) to be defined, the vectors must all be of the same type (row or column) and have the same dimension.

Example 6.2
The vector [-3, 4, 1, 0, 2]^T is a linear combination of the vectors of Example 6.1, because

    [-3, 4, 1, 0, 2]^T = (-2)[1, 0, -2, 0, 0]^T + (-5)[2, 0, 3, 0, 0]^T + (2)[0, 2, 0, 0, 1]^T + (3)[3, 0, 4, 0, 0]^T

Equation (6.2) represents a set of simultaneous linear equations in the unknowns d1, d2, ..., dn. The algorithms given in Chapter 2 may be used to determine whether or not the di (i = 1, 2, ..., n) exist and what they are. (See Problems 6.4 and 6.5.)

PROPERTIES OF LINEARLY DEPENDENT VECTORS

Property 6.1: Every set of m + 1 or more m-dimensional vectors of the same type (either row or column) is linearly dependent.
Property 6.2: An ordered set of nonzero vectors is linearly dependent if and only if one vector can be written as a linear combination of the vectors that precede it.
Property 6.3: If a set of vectors is linearly independent, then any subset of those vectors is also linearly independent.
Property 6.4: If a set of vectors is linearly dependent, then any larger set containing this set is also linearly dependent.
Property 6.5: Any set of vectors of the same dimension that contains the zero vector is linearly dependent.
Property 6.6: The set consisting of a single vector is linearly dependent if and only if that vector is the zero vector.

ROW RANK AND COLUMN RANK

Consider each row of a matrix A to be a row vector. The row rank of A is the maximum number of linearly independent vectors that can be formed from these row vectors; it is the rank of A (see Problem 6.11). Similarly, the column rank of A is the maximum number of linearly independent vectors that can be formed from the columns of A. It may be obtained by calculating the rank of A^T, because the rows of A^T are the columns of A. The row rank of a matrix equals its column rank (see Problem 6.10), so the column rank of A is also the rank of A.

Solved Problems

6.1 Determine whether the set {[1, 1, 3], [2, -1, 3], [0, 1, 1], [4, 4, 3]} is linearly independent.
Since the set contains more vectors (four) than the dimension of its member vectors (three), the vectors are linearly dependent by Property 6.1. They are thus not linearly independent.

6.2 Determine whether the set {[1, 2, 1, 6], [3, 8, 9, 10], [2, -2, ...]} is linearly independent.

Using Steps 6.1 through 6.3, we first construct the matrix V whose rows are the given vectors. Matrix V was transformed in Problem 1.13 into a row-echelon form with three nonzero rows. By inspection, the rank of V is 3, which equals the number of vectors in the given set; hence, the given set of vectors is linearly independent.

6.3 Determine whether the set {[3, 2, 1]^T, [2, 3, 0]^T, ...} is linearly independent.

Using the algorithm of this chapter, we construct the matrix V whose rows are the transposes of the given column vectors; it was transformed into row-echelon form in Problem 1.18. Since the rank of V is 2, which is less than the number of vectors in the given set, that set is linearly dependent.

6.4 Determine whether [6, 10, -2]^T is a linear combination of [1, 3, 2]^T, [2, 8, -1]^T, and [-1, 9, 2]^T.

It is a linear combination if and only if there exist constants d1, d2, and d3 such that

    | 6|      |1|      | 2|      |-1|
    |10| = d1 |3| + d2 | 8| + d3 | 9|
    |-2|      |2|      |-1|      | 2|

Solving this system is equivalent to solving the system of Problem 2.8 with each x replaced by a d. In that problem we found that the system is consistent; hence [6, 10, -2]^T is a linear combination of the three vectors; in particular, d1 = 1, d2 = 2, and d3 = -1.
6.5 Determine whether [5, 1, 8] is a linear combination of [2, 3, 5], [1, 6, 7], and [0, 1, 1].

It is a linear combination if and only if there exist constants d1, d2, and d3 such that

    [5, 1, 8] = d1[2, 3, 5] + d2[1, 6, 7] + d3[0, 1, 1]
              = [2d1 + d2, 3d1 + 6d2 + d3, 5d1 + 7d2 + d3]

which is equivalent to the system

    2d1 +  d2       = 5
    3d1 + 6d2 + d3  = 1
    5d1 + 7d2 + d3  = 8

This system was shown in Problem 2.5 to be inconsistent, so [5, 1, 8] is not a linear combination of the other three vectors.

6.6 Prove that every set of m + 1 or more m-dimensional vectors of the same type (either row or column) is linearly dependent.

Consider a set of n such vectors, with n > m. Equation (6.1) generates m homogeneous equations (one for each component of the vectors under consideration) in the n unknowns c1, c2, ..., cn. If we were to solve this system by Gaussian elimination (see Chapter 2), we would find that the solution set has at least n - m arbitrary unknowns. Since these arbitrary unknowns may be chosen to be nonzero, there exists a solution set for (6.1) which is not all zero; thus the n vectors are linearly dependent.

6.7 Prove that an elementary row operation of the first kind does not alter the row rank of a matrix.

Let B be obtained from matrix A by interchanging two rows. Clearly the rows of A form the same set of row vectors as the rows of B, so A and B must have the same row rank.

6.8 Prove that if AX = 0 and BX = 0 have the same solution set, then the n x n matrices A and B have the same column rank.

The system AX = 0 can be written as

    x1 A1 + x2 A2 + ... + xn An = 0     (1)

where A1 is the first column of A, A2 is the second column of A, and so on, and X = [x1, x2, ..., xn]^T. Similarly, the system BX = 0 can be written as

    x1 B1 + x2 B2 + ... + xn Bn = 0     (2)

Denote the column rank of A as a, and the column rank of B as b. Assume that the column rank of A is greater than the column rank of B, so that a > b. Then there must exist a columns of A which are linearly independent. Without loss of generality, we can assume that these are the first a columns of A.
(If not, rearrange A so that they are; this column rearrangement does not change the column rank of A, by reasoning analogous to that used in Problem 6.7.) However, the first a columns of B are linearly dependent, because b is assumed to be smaller than a. Thus, there exist constants d1, d2, ..., da, not all zero, such that

    d1 B1 + d2 B2 + ... + da Ba = 0

From this, it follows that

    d1 B1 + d2 B2 + ... + da Ba + 0B_{a+1} + ... + 0Bn = 0

and that x1 = d1, ..., xa = da, x_{a+1} = ... = xn = 0 is a solution of system (2). Since these same values are given to be a solution of system (1), it follows that

    d1 A1 + d2 A2 + ... + da Aa = 0

where, as noted, the constants d1, d2, ..., da are not all zero. But this implies that A1, A2, ..., Aa are linearly dependent, which is a contradiction. Thus the column rank of A cannot be greater than the column rank of B.

A similar argument, with the roles of A and B reversed, shows that the column rank of B cannot be greater than the column rank of A, so the two column ranks must be equal.

6.9 Prove that an elementary row operation of any kind does not alter the column rank of a matrix.

Denote the original matrix as A, and the matrix obtained by applying an elementary row operation to A as B. The two homogeneous systems of equations AX = 0 and BX = 0 have the same set of solutions (see Chapter 2). Thus, as a result of Problem 6.8, A and B have the same column rank.

6.10 Prove that the row rank and column rank of any matrix are identical.

Assume that the row rank of an m x n matrix A is r, and its column rank is c. We wish to show that r = c. Rearrange the rows of A so that the first r rows are linearly independent and the remaining m - r rows are linear combinations of the first r rows. It follows from Problems 6.7 and 6.9 that the column rank and row rank of A remain unaltered. Denote the rows of A as A1, A2, ..., Am in order, and define

    B = |A1|        C = |A_{r+1}|
        |A2|            |A_{r+2}|
        | .|            |   .   |
        |Ar|            |  Am   |

Then A is the partitioned matrix [B; C]. Furthermore, since every row of C is a linear combination of the rows of B, there exists a matrix T such that C = TB. In particular, if A_{r+1} = d1 A1 + d2 A2 + ... + dr Ar, then [d1, d2, ..., dr] is the first row of T. Now, for any n-dimensional vector X,
    AX = |BX| = | BX |
         |CX|   |TBX|

Hence, AX = 0 if and only if BX = 0, and it follows from Problem 6.8 that A and B have the same column rank c. But the columns of B are r-dimensional vectors, so the column rank of B cannot be greater than r. That is,

    c <= r     (1)

By repeating this reasoning on A^T, we conclude that the column rank of A^T cannot be greater than the row rank of A^T. But since the columns of A^T are the rows of A and vice versa, this means that the row rank of A cannot be greater than the column rank of A; that is,

    r <= c     (2)

We conclude from (1) and (2) that r = c.

6.11 Prove that both the row rank and the column rank of a matrix equal its rank.

Let U be a matrix in row-echelon form obtained from A by elementary row operations. It follows from Problem 6.9 that A and U have the same column rank. Now denote the rank of A as r. From the definition of rank, r is the number of nonzero rows in U. Since the first nonzero element of each nonzero row of U lies in a column in which every subsequent row has a zero, no nonzero row of U can be a linear combination of the rows that follow it; hence the nonzero rows of U are linearly independent, and the row rank of U is r. Elementary row operations do not alter the row rank (the rows of the new matrix are combinations of the old rows and vice versa), so the row rank of A is also r. By Problem 6.10, the column rank of A equals its row rank, so both equal the rank of A.

Supplementary Problems

In Problems 6.15 through 6.20, determine whether the given set of vectors is linearly independent.

6.21 Is [1, 3]^T a linear combination of the vectors given in Problem 6.15?

6.22 (a) Determine whether [0, 1, 1] can be written as a linear combination of the vectors given in Problem 6.16. (b) Repeat part a for the vector [1, 2, 0].

6.23 (a) Determine whether [2, 1, 2, 1] can be written as a linear combination of the vectors given in Problem 6.19. (b) Repeat part a for the vector [0, 0, 0, 1].

6.24 Show that any 3-dimensional row vector can be expressed as a linear combination of the vectors given in Problem 6.17.

6.25 Choose a maximal subset of linearly independent vectors from those given in Problem 6.15.
6.26 Choose a maximal subset of linearly independent vectors from those given in Problem 6.16.

6.27 Choose a maximal subset of linearly independent vectors from the following:

    [2, 2, 0, 1]    [3, 2, 1, 3]    [0, 1, 1, 0]    [3, 3, 0, 3]

6.28 An m-dimensional vector V is a convex combination of the m-dimensional vectors V1, V2, ..., Vn of the same type (row or column) if there exist nonnegative constants d1, d2, ..., dn whose sum is 1, such that V = d1 V1 + d2 V2 + ... + dn Vn. Show that [5/3, 5/6] is a convex combination of the vectors [1, 1], [3, 2], and [1, 0].

6.29 Determine whether [0, 7]^T can be written as a convex combination of the vectors of Problem 6.28, taken as column vectors.

6.30 Prove that if {V1, V2, ..., Vn} is linearly independent and V cannot be written as a linear combination of this set, then {V1, V2, ..., Vn, V} is also linearly independent.

6.31 Prove Property 6.5.

6.32 The null space of a matrix A is the set of all vectors X which are solutions of AX = 0. Determine the null space of the given matrix.

6.33 Determine the null space of the given matrix.

Chapter 7

Eigenvalues and Eigenvectors

CHARACTERISTIC EQUATION

A nonzero column vector X is an eigenvector (or right eigenvector or right characteristic vector) of a square matrix A if there exists a scalar λ such that

    AX = λX     (7.1)

Then λ is an eigenvalue (or characteristic value) of A. Eigenvalues may be zero; an eigenvector may not be the zero vector.

Example 7.1  [1, -1]^T is an eigenvector corresponding to the eigenvalue λ = -2 for the matrix

    A = |2 4|
        |4 2|

because

    |2 4| | 1|   |-2|      | 1|
    |4 2| |-1| = | 2| = -2 |-1|

The characteristic equation of an n x n matrix A is the nth-degree polynomial equation

    det (A - λI) = 0     (7.2)

Solving the characteristic equation for λ gives the eigenvalues of A, which may be real or complex and need not be distinct. Once an eigenvalue is determined, it may be substituted into (7.1), and then that equation may be solved for the corresponding eigenvectors. (See Problems 7.1 through 7.3.) The polynomial det (A - λI) is called the characteristic polynomial of A.
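The characteristic equation is easy to explore numerically. The sketch below assumes NumPy is available: np.poly returns the coefficients of det(λI - A), and np.linalg.eig solves (7.1) directly. The 2 x 2 matrix used is the one treated in Problem 7.1.

```python
import numpy as np

A = np.array([[3., 2.], [-5., -4.]])    # the matrix of Problem 7.1

# Coefficients of det(lambda*I - A): here lambda^2 + lambda - 2
print(np.poly(A))

# np.linalg.eig returns the eigenvalues and unit eigenvectors (as columns)
lam, X = np.linalg.eig(A)
for k in range(len(lam)):
    assert np.allclose(A @ X[:, k], lam[k] * X[:, k])   # AX = lambda*X, Eq. (7.1)
```

Note that the eigenvectors returned this way are normalized to unit length; by Property 7.5 below, any nonzero multiple of them is an equally valid eigenvector.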
PROPERTIES OF EIGENVALUES AND EIGENVECTORS

Property 7.1: The sum of the eigenvalues of a matrix is equal to its trace, which is the sum of the elements on its main diagonal.
Property 7.2: Eigenvectors corresponding to different eigenvalues are linearly independent.
Property 7.3: A matrix is singular if and only if it has a zero eigenvalue.
Property 7.4: If X is an eigenvector of A corresponding to the eigenvalue λ and A is invertible, then X is an eigenvector of A^-1 corresponding to its eigenvalue 1/λ.
Property 7.5: If X is an eigenvector of a matrix, then so too is kX for any nonzero constant k, and both X and kX correspond to the same eigenvalue.
Property 7.6: A matrix and its transpose have the same eigenvalues.
Property 7.7: The eigenvalues of an upper or lower triangular matrix are the elements on its main diagonal.
Property 7.8: The product of the eigenvalues (counting multiplicities) of a matrix equals the determinant of the matrix.
Property 7.9: If X is an eigenvector of A corresponding to eigenvalue λ, then X is an eigenvector of A - cI corresponding to the eigenvalue λ - c for any scalar c.
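Several of these properties can be confirmed numerically for the 3 x 3 matrix treated in Problem 7.2; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[5., 2., 2.], [3., 6., 3.], [6., 6., 9.]])   # the matrix of Problem 7.2
lam = np.linalg.eigvals(A)                                 # eigenvalues 3, 3, and 14

assert np.isclose(lam.sum(), np.trace(A))                  # Property 7.1: sum = trace = 20
assert np.isclose(lam.prod(), np.linalg.det(A))            # Property 7.8: product = det
assert np.allclose(np.sort(lam), np.sort(np.linalg.eigvals(A.T)))              # Property 7.6
assert np.allclose(np.sort(np.linalg.eigvals(A - 2 * np.eye(3))), np.sort(lam - 2))  # Property 7.9
```

Checks like these are a useful guard when eigenvalues are computed by the iterative methods of Chapters 19 and 20, since the trace and determinant are cheap to evaluate independently.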
First, evaluating the determinant in (7.2) for an n x n matrix requires approximately n! multiplications, which for large n is a prohibitive number. Second, obtaining the roots of a general characteristic polynomial poses an intractable algebraic problem. Consequently, numerical algorithms are employed for determining the eigenvalues of large matrices (see Chapters 19 and 20).

THE CAYLEY-HAMILTON THEOREM

Theorem 7.1:  Every square matrix satisfies its own characteristic equation. That is, if

    det(A - λI) = b_n λ^n + b_{n-1} λ^{n-1} + ... + b_2 λ^2 + b_1 λ + b_0

then

    b_n A^n + b_{n-1} A^{n-1} + ... + b_2 A^2 + b_1 A + b_0 I = 0

(See Problems 7.15 through 7.17.)

Solved Problems

7.1  Determine the eigenvalues and eigenvectors of

         A = [ 3   5]
             [-2  -4]

     For this matrix,

         det(A - λI) = det [3-λ    5  ] = (3-λ)(-4-λ) + 10 = λ^2 + λ - 2
                           [-2   -4-λ ]

     The characteristic equation of A is λ^2 + λ - 2 = 0; when solved for λ, it gives the two eigenvalues λ = 1 and λ = -2. As a check, we utilize Property 7.1: the trace of A is 3 + (-4) = -1, which is also the sum of the eigenvalues.

     The eigenvectors corresponding to λ = 1 are obtained by solving Eq. (7.1) for X = [x1, x2]^T with this value of λ. After substituting and rearranging, we have

         [ 2   5] [x1]   [0]
         [-2  -5] [x2] = [0]

     which is equivalent to the set of linear equations

         2x1 + 5x2 = 0
        -2x1 - 5x2 = 0

     The solution to this system is x1 = -(5/2)x2 with x2 arbitrary, so the eigenvectors corresponding to λ = 1 are

         X = [x1]      [-5/2]
             [x2] = x2 [  1 ]        with x2 arbitrary

     When λ = -2, (7.1) may be written

         [ 5   5] [x1]   [0]
         [-2  -2] [x2] = [0]

     which is equivalent to the set of linear equations

         5x1 + 5x2 = 0
        -2x1 - 2x2 = 0

     The solution to this system is x1 = -x2 with x2 arbitrary, so the eigenvectors corresponding to
λ = -2 are

         X = [x1]      [-1]
             [x2] = x2 [ 1]        with x2 arbitrary

7.2  Determine the eigenvalues and eigenvectors of

         A = [5  2  2]
             [3  6  3]
             [6  6  9]

     Here

         A - λI = [5-λ   2    2 ]
                  [3    6-λ   3 ]
                  [6    6    9-λ]

     The determinant of this last matrix may be obtained by expansion by cofactors (see Chapter 5); it is

         det(A - λI) = -λ^3 + 20λ^2 - 93λ + 126 = -(λ - 3)^2 (λ - 14)

     The characteristic equation of A is (λ - 3)^2 (λ - 14) = 0, which has as its solution the eigenvalue λ = 3 of multiplicity two and the eigenvalue λ = 14 of multiplicity one. As a check, we utilize Property 7.1: the trace of A is 5 + 6 + 9 = 20, which equals the sum of the three eigenvalues.

     The eigenvectors corresponding to λ = 3 are obtained by solving (7.1) for X = [x1, x2, x3]^T with this value of λ. Thus, we may write

         [2  2  2] [x1]   [0]
         [3  3  3] [x2] = [0]
         [6  6  6] [x3]   [0]

     which is equivalent to the set of linear equations

         2x1 + 2x2 + 2x3 = 0
         3x1 + 3x2 + 3x3 = 0
         6x1 + 6x2 + 6x3 = 0

     The solution to this system is x1 = -x2 - x3 with x2 and x3 arbitrary; the eigenvectors corresponding to λ = 3 are

         X = [x1]      [-1]      [-1]
             [x2] = x2 [ 1] + x3 [ 0]        with x2 and x3 arbitrary
             [x3]      [ 0]      [ 1]

     With λ = 14, (7.1) becomes

         [-9   2   2] [x1]   [0]
         [ 3  -8   3] [x2] = [0]
         [ 6   6  -5] [x3]   [0]

     which is equivalent to the set of linear equations

         -9x1 + 2x2 + 2x3 = 0
          3x1 - 8x2 + 3x3 = 0
          6x1 + 6x2 - 5x3 = 0

     The solution to this system is x1 = (1/3)x3 and x2 = (1/2)x3 with x3 arbitrary; the eigenvectors corresponding to λ = 14 are

         X = x3 [1/3]
                [1/2]        with x3 arbitrary
                [ 1 ]

7.3  Determine the eigenvalues and eigenvectors of

         A = [ 0   1]
             [-5  -2]

     For this matrix,

         det(A - λI) = (-λ)(-2-λ) + 5 = λ^2 + 2λ + 5

     The characteristic equation of A is λ^2 + 2λ + 5 = 0; when solved for λ, it gives the two complex eigenvalues λ = -1 + i2 and λ = -1 - i2. As a check, we note that the trace of A is -2, which is the sum of these eigenvalues.

     The eigenvectors corresponding to λ = -1 + i2 are obtained by solving Eq. (7.1) for X = [x1, x2]^T with this value of λ. After substituting and rearranging, we have

         [1 - i2      1   ] [x1]   [0]
         [  -5     -1 - i2] [x2] = [0]

     which is equivalent to the set of linear equations
         (1 - i2)x1 + x2 = 0
         -5x1 + (-1 - i2)x2 = 0

     The solution to this system is x1 = (-1/5 - i2/5)x2 with x2 arbitrary; the eigenvectors corresponding to λ = -1 + i2 are thus

         X = [x1]      [-1/5 - i2/5]
             [x2] = x2 [     1     ]        with x2 arbitrary

     With λ = -1 - i2, the corresponding eigenvectors are found in a similar manner to be

         X = x2 [-1/5 + i2/5]
                [     1     ]        with x2 arbitrary

7.4  Choose a maximal set of linearly independent eigenvectors for the matrix given in Problem 7.2.

     The eigenvectors associated with λ = 3 were found in Problem 7.2 to be

         x2 [-1]      [-1]
            [ 1] + x3 [ 0]        with x2 and x3 arbitrary
            [ 0]      [ 1]

     There are two linearly independent eigenvectors associated with λ = 3, one for each arbitrary scalar. One of them may be obtained by setting x2 = 1, x3 = 0; the other, by setting x2 = 0, x3 = 1. The eigenvectors associated with λ = 14 are

         x3 [1/3]
            [1/2]        with x3 arbitrary
            [ 1 ]

     Since there is only one arbitrary constant here, there is only one linearly independent eigenvector associated with λ = 14. It may be obtained by choosing x3 to be any nonzero scalar. A convenient choice, to avoid fractions, is x3 = 6. Combining the linearly independent eigenvectors corresponding to the two eigenvalues, we find that

         [-1]   [-1]   [2]
         [ 1]   [ 0]   [3]
         [ 0]   [ 1]   [6]

     is a maximal set of linearly independent eigenvectors for the matrix.

7.5  Choose a maximal set of linearly independent eigenvectors for the matrix given in Problem 7.1.

     The eigenvectors corresponding to λ = 1 were found in Problem 7.1 to be x2[-5/2, 1]^T with x2 arbitrary. Since there is only one arbitrary scalar, there is only one linearly independent eigenvector associated with this eigenvalue. It may be obtained by choosing x2 to be any nonzero scalar; a convenient choice, to avoid fractions, is x2 = 2, which yields [-5, 2]^T. The eigenvectors corresponding to λ = -2 are x2[-1, 1]^T with x2 arbitrary. There is one linearly independent eigenvector associated with this eigenvalue, and it may be obtained by choosing x2 to be any nonzero scalar. A convenient choice here is x2 = 1, which yields [-1, 1]^T. Collecting the linearly independent eigenvectors for the two eigenvalues, we find that

         {[-5, 2]^T, [-1, 1]^T}

     is a maximal set of linearly independent eigenvectors for the matrix.
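The counts in Problems 7.4 and 7.5 can be confirmed numerically. The following sketch (numpy is assumed here; it is not part of the original text) checks that the matrix of Problem 7.2 possesses a full set of three linearly independent eigenvectors, and that the fraction-free choice [2, 3, 6]^T is an eigenvector for λ = 14:

```python
import numpy as np

# Problem 7.2's matrix: eigenvalue 3 of multiplicity two, eigenvalue 14.
A = np.array([[5.0, 2.0, 2.0],
              [3.0, 6.0, 3.0],
              [6.0, 6.0, 9.0]])

eigvals, eigvecs = np.linalg.eig(A)
assert np.allclose(np.sort(eigvals.real), [3.0, 3.0, 14.0])

# A maximal set of linearly independent eigenvectors has three members here,
# so the matrix whose columns are the eigenvectors must have full rank.
assert np.linalg.matrix_rank(eigvecs) == 3

# [2, 3, 6]^T, the fraction-free choice of Problem 7.4, is an eigenvector
# for lambda = 14.
x = np.array([2.0, 3.0, 6.0])
assert np.allclose(A @ x, 14.0 * x)
```

When an eigenvalue's geometric multiplicity is smaller than its algebraic multiplicity (as in Problem 7.6), the rank check above would return fewer than n independent columns.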
7.6  Choose a maximal set of linearly independent eigenvectors for the given 5 x 5 upper triangular matrix, whose main-diagonal elements all equal 2.

     Since this matrix is upper triangular, its eigenvalues are the elements on its main diagonal. Thus, λ = 2 is an eigenvalue of multiplicity five. Solving (A - 2I)X = 0 shows that the eigenvectors associated with this eigenvalue contain three arbitrary scalars. Because there are three arbitrary scalars, there are three linearly independent eigenvectors associated with λ = 2. One may be obtained by setting the first arbitrary scalar equal to 1 and the other two equal to 0; another, by setting the second equal to 1 and the others equal to 0; and the third, by setting the third equal to 1 and the others equal to 0. Note that this matrix has only three linearly independent eigenvectors, even though it has order 5.

7.7  Show that if λ is an eigenvalue of a matrix A, then it is a solution of (7.2).

     If λ is an eigenvalue of A, there must exist a nonzero vector X such that AX = λX. Thus, AX - λX = 0, and (A - λI)X = 0. This implies that A - λI is singular, for otherwise X = (A - λI)^{-1} 0 = 0, which is not the case. But if A - λI is singular, then det(A - λI) = 0 (see Chapter 5).

7.8  Show that eigenvectors corresponding to different eigenvalues are linearly independent.

     Let λ1, λ2, ..., λm be different eigenvalues of a matrix A, and let X1, X2, ..., Xm be associated eigenvectors. We must show that the only solution to

         c1 X1 + c2 X2 + ... + cm Xm = 0                                   (1)

     is c1 = c2 = ... = cm = 0. Multiplying (1) on the left by A, we obtain c1 AX1 + c2 AX2 + ... + cm AXm = A0 = 0. Since each vector here is an eigenvector, we use (7.1) to write

         c1 λ1 X1 + c2 λ2 X2 + ... + cm λm Xm = 0                          (2)

     Multiplying (2) on the left by A and again using (7.1), we obtain

         c1 λ1^2 X1 + c2 λ2^2 X2 + ... + cm λm^2 Xm = 0                    (3)

     Equations (1) through (3) are the first three equations of the set

         c1 X1          + c2 X2          + ... + cm Xm          = 0
         c1 λ1 X1       + c2 λ2 X2       + ... + cm λm Xm       = 0
         c1 λ1^2 X1     + c2 λ2^2 X2     + ... + cm λm^2 Xm     = 0
         ...
         c1 λ1^{m-1} X1 + c2 λ2^{m-1} X2 + ... + cm λm^{m-1} Xm = 0

     generated by sequentially multiplying each equation on the left by A.
     This system can be written in the matrix form

         [   1          1       ...     1     ] [c1 X1]   [0]
         [   λ1         λ2      ...     λm    ] [c2 X2]   [0]
         [   ...                              ] [ ...  ] = [...]
         [ λ1^{m-1}   λ2^{m-1}  ...  λm^{m-1} ] [cm Xm]   [0]

     The first matrix on the left is an m x m matrix which we shall denote as Q; its determinant is called the Vandermonde determinant and equals the product of all differences (λj - λi) with j > i, which is not zero in this situation because all the eigenvalues are different. As a result, Q is nonsingular, and the system can be written as

         [c1 X1]          [0]   [0]
         [ ...  ] = Q^{-1} [...] = [...]
         [cm Xm]          [0]   [0]

     It follows that ci Xi = 0 (i = 1, 2, ..., m). But since each Xi is an eigenvector, it is not zero; so ci = 0 for each i.

7.9  Prove that a matrix is singular if and only if it has a zero eigenvalue.

     A matrix A has a zero eigenvalue if and only if det(A - 0I) = 0, which is true if and only if det A = 0, which in turn is true if and only if A is singular (see Chapter 5).

7.10 Prove that if X is an eigenvector corresponding to the eigenvalue λ of an invertible matrix A, then X is an eigenvector of A^{-1} corresponding to its eigenvalue 1/λ.

     It follows from Problem 7.9 that λ ≠ 0. We are given AX = λX, so A^{-1}(AX) = A^{-1}(λX) and X = λ(A^{-1}X). Dividing by λ, we obtain A^{-1}X = (1/λ)X, which implies the desired result.

7.11 Prove that a matrix and its transpose have the same eigenvalues.

     If λ is an eigenvalue of A, then

         0 = det(A - λI) = det(A - λI)^T = det(A^T - λI^T) = det(A^T - λI)

     by Property 5.7. Thus, λ is also an eigenvalue of A^T.

7.12 Prove that if X1, X2, ..., Xn are all eigenvectors of a matrix A corresponding to the same eigenvalue λ, then any nonzero linear combination of these vectors is also an eigenvector of A corresponding to λ.

     Set X = d1 X1 + d2 X2 + ... + dn Xn ≠ 0. Then

         AX = A(d1 X1 + d2 X2 + ... + dn Xn)
            = d1 AX1 + d2 AX2 + ... + dn AXn
            = d1 λX1 + d2 λX2 + ... + dn λXn
            = λ(d1 X1 + d2 X2 + ... + dn Xn) = λX

     Thus, X is an eigenvector of A. Note that a nonzero constant times an eigenvector is also an eigenvector corresponding to the same eigenvalue.

7.13 A left eigenvector of a matrix A is a nonzero row vector X having the property that XA = λX or, equivalently, that

         X(A - λI) = 0                                                     (1)

     for some scalar λ.
     Again λ is an eigenvalue for A, and it is found as before. Once λ is determined, it is substituted into (1), and then that equation is solved for X. Find the eigenvalues and left eigenvectors for

         A = [ 3   5]
             [-2  -4]

     The eigenvalues were found in Problem 7.1 to be λ = 1 and λ = -2. Set X = [x1, x2]. With λ = 1, (1) becomes

         [x1, x2] [ 2   5]
                  [-2  -5] = [0, 0]      or      [2x1 - 2x2, 5x1 - 5x2] = [0, 0]

     which is equivalent to the set of equations

         2x1 - 2x2 = 0
         5x1 - 5x2 = 0

     The solution to this system is x1 = x2 with x2 arbitrary. The left eigenvectors corresponding to λ = 1 are thus

         [x1, x2] = [x2, x2] = x2 [1, 1]        with x2 arbitrary

     For λ = -2, (1) reduces to

         [x1, x2] [ 5   5]
                  [-2  -2] = [0, 0]      or      [5x1 - 2x2, 5x1 - 2x2] = [0, 0]

     which is equivalent to the set of equations

         5x1 - 2x2 = 0
         5x1 - 2x2 = 0

     The solution to this system is x1 = (2/5)x2 with x2 arbitrary. The left eigenvectors corresponding to λ = -2 are

         [x1, x2] = [(2/5)x2, x2] = x2 [2/5, 1]        with x2 arbitrary

7.14 Prove that the transpose of a right eigenvector of A is a left eigenvector of A^T corresponding to the same eigenvalue.

     If X is a right eigenvector of A corresponding to the eigenvalue λ, then AX = λX. Taking the transpose of both sides of this equation, we obtain X^T A^T = λX^T, so X^T is a left eigenvector of A^T corresponding to λ.

7.15 Verify the Cayley-Hamilton theorem for

         A = [ 3   5]
             [-2  -4]

     The characteristic equation for A was determined in Problem 7.1 to be λ^2 + λ - 2 = 0. Substituting A for λ gives

         A^2 + A - 2I = [-1  -5] + [ 3   5] - [2  0] = [0  0]
                        [ 2   6]   [-2  -4]   [0  2]   [0  0]

7.16 Verify the Cayley-Hamilton theorem for

         A = [5  2  2]
             [3  6  3]
             [6  6  9]

     The characteristic equation for A was found in Problem 7.2 to be -λ^3 + 20λ^2 - 93λ + 126 = 0. Therefore,
     we evaluate

         -A^3 + 20A^2 - 93A + 126I

             = - [ 521   494   494]   + 20 [ 43   34   34]   - 93 [5  2  2]   + 126 [1  0  0]
                 [ 741   768   741]        [ 51   60   51]        [3  6  3]         [0  1  0]
                 [1482  1482  1509]        [102  102  111]        [6  6  9]         [0  0  1]

             = [0  0  0]
               [0  0  0]
               [0  0  0]

7.17 Prove the Cayley-Hamilton theorem.

     We denote the characteristic polynomial of an n x n matrix A as

         det(A - λI) = b_n λ^n + b_{n-1} λ^{n-1} + ... + b_2 λ^2 + b_1 λ + b_0        (1)

     and set

         C = A - λI                                                                   (2)

     Then

         det C = det(A - λI) = b_n λ^n + b_{n-1} λ^{n-1} + ... + b_1 λ + b_0          (3)

     Since C is an n x n matrix having first-degree polynomials in λ for its diagonal elements and scalars elsewhere, the cofactor matrix C^c associated with C (see Chapter 5) will have elements that are polynomials of degree n-1 or n-2 in λ: elements on the main diagonal of C^c will be polynomials of degree n-1, and all other elements will be polynomials of degree n-2. The same will be true of the transpose of this cofactor matrix; hence (C^c)^T may be written as the sum of products of distinct powers of λ and scalar matrices,

         (C^c)^T = λ^{n-1} M_{n-1} + λ^{n-2} M_{n-2} + ... + λ M_1 + M_0              (4)

     where M_0, M_1, ..., M_{n-1} are all n x n scalar matrices. It follows from Problem 5.20 that

         C (C^c)^T = (det C) I                                                        (5)

     Using (2), we obtain

         C (C^c)^T = (A - λI)(C^c)^T = A(C^c)^T - λ(C^c)^T                            (6)

     Substituting (1), (3), and (4) into (5) and (6), we obtain

         A(λ^{n-1} M_{n-1} + ... + λ M_1 + M_0) - λ(λ^{n-1} M_{n-1} + ... + M_0)
             = (b_n λ^n + b_{n-1} λ^{n-1} + ... + b_1 λ + b_0) I

     Both sides of this matrix equation are polynomials in λ. Since two matrix polynomials are equal if and only if their corresponding coefficients are equal, it follows that

         -M_{n-1}            = b_n I
         A M_{n-1} - M_{n-2} = b_{n-1} I
         ...
         A M_1 - M_0         = b_1 I
         A M_0               = b_0 I

     Multiplying the first of these equations by A^n, the second by A^{n-1}, the third by A^{n-2}, and so on (the last equation will be multiplied by A^0 = I), and then adding, we find that the terms on the left side cancel in pairs, leaving

         0 = b_n A^n + b_{n-1} A^{n-1} + ... + b_1 A + b_0 I

     which is the Cayley-Hamilton theorem for A with characteristic polynomial given by (1).

Supplementary Problems

In Problems 7.18 through 7.26, find the eigenvalues and corresponding eigenvectors for the given matrix.

In Problems 7.27 through 7.34,
find the eigenvalues and a maximal set of linearly independent eigenvectors for the given matrix.

In Problems 7.35 through 7.40, find the eigenvalues and a maximal set of linearly independent left eigenvectors for the given matrix:

7.35 The matrix in Problem 7.18.

7.36 The matrix in Problem 7.19.

7.37 The matrix in Problem 7.2.

7.41 Verify the Cayley-Hamilton theorem for the matrix in (a) Problem 7.18; (b) Problem 7.24; and (c) Problem 7.30.

7.42 Show that if λ is an eigenvalue of A with corresponding eigenvector X, then X is also an eigenvector of A^2 corresponding to the eigenvalue λ^2.

7.43 Show that if λ is an eigenvalue of A with corresponding eigenvector X, then for any scalar c, X is an eigenvector of A - cI corresponding to the eigenvalue λ - c.

7.44 Prove that if A has order n, then

         det(A - λI) = (-1)^n [λ^n - (trace A) λ^{n-1} + p(λ)]

     where p(λ) denotes a polynomial in λ of degree n-2 or less.

7.45 Prove that the trace of a square matrix is equal to the sum of the eigenvalues of that matrix.

7.46 Prove that trace(A + B) = trace A + trace B if A and B are square matrices of the same order.

7.47 Prove that trace AB = trace BA if A and B are square matrices of the same order.

7.48 Show that if S is an invertible matrix of the same order as A, then trace(S^{-1}AS) = trace A.

7.49 Prove that the determinant of a square matrix equals the product of all the eigenvalues of that matrix.

7.50 Show that the n x n matrix

         C = [  0     1     0    ...    0     ]
             [  0     0     1    ...    0     ]
             [  ...                           ]
             [  0     0     0    ...    1     ]
             [-a_0  -a_1  -a_2   ... -a_{n-1} ]

     has as its characteristic equation

         λ^n + a_{n-1} λ^{n-1} + ... + a_1 λ + a_0 = 0

     The matrix C is called the companion matrix for this characteristic equation.

Chapter 8

Functions of Matrices

SEQUENCES AND SERIES OF MATRICES

A sequence {B_k} of matrices B_k = [b_ij^(k)], all of the same order, converges to a matrix B = [b_ij] if the elements b_ij^(k) converge to b_ij for every i and j. The infinite series Σ_{k=1}^∞
B_k converges to B if the sequence of partial sums {S_n = Σ_{k=1}^n B_k} converges to B. (See Problem 8.1.)

WELL-DEFINED FUNCTIONS

If a function f(z) of a complex variable z has a Maclaurin series expansion

    f(z) = Σ_{n=0}^∞ a_n z^n

which converges for |z| < R, then the matrix series Σ_{n=0}^∞ a_n A^n converges, provided A is square and each of its eigenvalues has absolute value less than R. In such a case, f(A) is defined as

    f(A) = Σ_{n=0}^∞ a_n A^n

and is called a well-defined function. By convention, A^0 = I. (See Problems 8.2 and 8.3.)

Example 8.1.

    e^z = Σ_{n=0}^∞ z^n / n!

converges for all values of z (that is, R = ∞). Since every eigenvalue λ of any square matrix satisfies the condition that |λ| < ∞, e^A is well defined for all square matrices A.

COMPUTING FUNCTIONS OF MATRICES

An infinite series expansion for f(A) is not generally useful for computing the elements of f(A). It follows (with some effort) from the Cayley-Hamilton theorem that every well-defined function of an n x n matrix A can be expressed as a polynomial of degree n-1 in A. Thus,

    f(A) = α_{n-1} A^{n-1} + α_{n-2} A^{n-2} + ... + α_2 A^2 + α_1 A + α_0 I          (8.1)

where the scalars α_{n-1}, α_{n-2}, ..., α_0 are determined as follows:

STEP 8.1: Let

    r(λ) = α_{n-1} λ^{n-1} + α_{n-2} λ^{n-2} + ... + α_1 λ + α_0

which is the right side of (8.1) with A^j replaced by λ^j (j = 0, 1, ..., n-1).

STEP 8.2: For each distinct eigenvalue λ_i of A, formulate the equation

    f(λ_i) = r(λ_i)                                                                    (8.2)

STEP 8.3: If λ_i is an eigenvalue of multiplicity k, for k > 1, then formulate also the following equations involving derivatives of f(λ) and r(λ) with respect to λ, all evaluated at λ = λ_i:

    f'(λ)|_{λ=λ_i}       = r'(λ)|_{λ=λ_i}
    f''(λ)|_{λ=λ_i}      = r''(λ)|_{λ=λ_i}                                             (8.3)
    ...
    f^{(k-1)}(λ)|_{λ=λ_i} = r^{(k-1)}(λ)|_{λ=λ_i}

STEP 8.4: Solve the set of all equations obtained in Steps 8.2 and 8.3 for the unknown scalars α_0, α_1, ..., α_{n-1}.

Once the scalars determined in Step 8.4 are substituted into (8.1), f(A) may be calculated. (See Problems 8.4 through 8.6.)

THE FUNCTION e^{At}

For any constant square matrix A and real variable t, the matrix function e^{At} is computed by setting B = At and then calculating e^B as described in the preceding section. (See Problems 8.7 through 8.10.) The eigenvalues of B = At are the eigenvalues of A multiplied by t (see Property 7.5).
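Steps 8.1 through 8.4 can be sketched in code for the simplest situation, in which all eigenvalues are distinct and Step 8.3 therefore contributes no equations. The helper name matrix_function and the use of numpy are assumptions made for illustration; they are not part of the text:

```python
import numpy as np

def matrix_function(A, f):
    # Sketch of Steps 8.1-8.4 for the special case of distinct eigenvalues,
    # so that Step 8.3 contributes no derivative equations.
    n = A.shape[0]
    lam = np.linalg.eigvals(A)
    # Step 8.2: f(lam_i) = r(lam_i). Row i of V is [1, lam_i, ..., lam_i^(n-1)],
    # so V @ alpha stacks the values r(lam_i).
    V = np.vander(lam, n, increasing=True)
    alpha = np.linalg.solve(V, f(lam))
    # Substitute the alphas into (8.1): f(A) = sum_j alpha_j A^j.
    result = np.zeros_like(A, dtype=complex)
    P = np.eye(n, dtype=complex)      # A^0 = I
    for a in alpha:
        result = result + a * P
        P = P @ A
    return result

# cos(A) for the matrix of Problem 8.4, whose eigenvalues 0 and 4 are distinct.
A = np.array([[2.0, 4.0], [1.0, 2.0]])
C = matrix_function(A, np.cos).real
assert np.allclose(C[0, 0], (1.0 + np.cos(4.0)) / 2.0)
```

For repeated eigenvalues the Vandermonde system above becomes singular, which is exactly why Step 8.3 replaces the duplicated rows with derivative conditions.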
Note that (8.3) involves derivatives with respect to λ and not t; the correct sequence of steps is to first take the necessary derivatives of f(λ) and r(λ) with respect to λ and then substitute λ = λ_i. The reverse procedure, first substituting λ = λ_i (a function of t) into (8.2) and then taking derivatives with respect to t, can give erroneous results.

DIFFERENTIATION AND INTEGRATION OF MATRICES

The derivative of A(t) = [a_ij(t)] is the matrix obtained by differentiating each element of A; that is,

    dA/dt = [da_ij/dt]

Similarly, the integral of A, either definite or indefinite, is obtained by integrating each element of A. Thus,

    ∫_a^b A dt = [∫_a^b a_ij(t) dt]        and        ∫ A dt = [∫ a_ij(t) dt]

(See Problems 8.11 and 8.12.)

DIFFERENTIAL EQUATIONS

The initial-value matrix differential equation

    dX(t)/dt = AX(t) + F(t);        X(t_0) = C

has the solution

    X(t) = e^{A(t - t_0)} C + e^{At} ∫_{t_0}^t e^{-As} F(s) ds                         (8.4)

or, equivalently,

    X(t) = e^{A(t - t_0)} C + ∫_{t_0}^t e^{A(t - s)} F(s) ds                           (8.5)

If the differential equation is homogeneous [i.e., F(t) = 0], then (8.4) and (8.5) reduce to

    X(t) = e^{A(t - t_0)} C

In (8.4) and (8.5), the matrices e^{A(t - t_0)}, e^{-As}, and e^{A(t - s)} are easily computed from e^{At} by replacing the variable t with t - t_0, -s, and t - s, respectively. Usually, X(t) is obtained more easily from (8.5) than from (8.4), because the former involves one fewer matrix multiplication. However, the integrals arising in (8.5) are generally more difficult to evaluate than those in (8.4). (See Problems 8.13 and 8.14.)

THE MATRIX EQUATION AX + XB = C

The equation AX + XB = C, where A, B, and C denote constant square matrices of the same order, has a unique solution if and only if A and B have no eigenvalues in common. This unique solution is

    X = -∫_0^∞ e^{At} C e^{Bt} dt                                                      (8.6)

provided the integral exists. (See Problem 8.15.)

Example 8.2.  A matrix equation AX + XB = C can possess a unique solution even when the integral in (8.6) diverges.
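The equation AX + XB = C is known in the numerical literature as the Sylvester equation, and SciPy solves it directly. The sketch below (scipy is assumed; it is not part of the text) checks such a solution against a crude quadrature of the integral (8.6), in a case where every eigenvalue of A and B has negative real part so that the integral converges:

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester

# A and B share no eigenvalues, so AX + XB = C has a unique solution.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[-3.0, 1.0], [0.0, -4.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])

X = solve_sylvester(A, B, C)          # solves A X + X B = C directly
assert np.allclose(A @ X + X @ B, C)

# All eigenvalues of A and B have negative real parts, so the integrand of
# (8.6) decays exponentially and a trapezoidal quadrature reproduces X.
ts = np.linspace(0.0, 40.0, 4001)
F = np.array([expm(A * t) @ C @ expm(B * t) for t in ts])
dt = ts[1] - ts[0]
X_int = -(0.5 * (F[0] + F[-1]) + F[1:-1].sum(axis=0)) * dt
assert np.allclose(X, X_int, atol=1e-3)
```

The quadrature is only a sanity check; in practice the direct solver is both faster and more accurate than evaluating (8.6) numerically.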
Solved Problems

8.1  Determine lim_{k→∞} B_k for the given sequence of matrices {B_k}.

     Since the limit of a matrix sequence is taken elementwise, lim B_k is the matrix whose entries are the limits of the corresponding entries of B_k.

8.2  Show that cos A is well defined for every square matrix A.

     The Maclaurin series

         cos z = Σ_{n=0}^∞ (-1)^n z^{2n} / (2n)!

     converges for all values of z (that is, R = ∞). Every eigenvalue of any square matrix satisfies the condition |λ| < ∞, so cos A is well defined for every square matrix A.

8.3  Determine whether arctan A is well defined for

         A = [2  4]
             [1  2]

     The Maclaurin series for arctan z is

         arctan z = z - z^3/3 + z^5/5 - z^7/7 + ...

     which converges for all values of z having absolute value less than 1. Therefore, arctan A is well defined for any square matrix whose eigenvalues are all less than 1 in absolute value. The given matrix A has eigenvalues λ1 = 0 and λ2 = 4. Since the second of these eigenvalues has absolute value greater than 1, arctan A is not defined for this matrix.

8.4  Find cos A for the matrix given in Problem 8.3.

     From Problem 8.2 we know that cos A is well defined for all matrices. For this particular 2 x 2 matrix A, (8.1) becomes

         cos A = α_1 A + α_0 I                                                         (1)

     Here f(λ) = cos λ and r(λ) = α_1 λ + α_0, and the distinct eigenvalues of A are λ1 = 0 and λ2 = 4. Substituting these quantities into (8.2) once for each eigenvalue, we obtain the two equations

         cos 0 = α_1 (0) + α_0
         cos 4 = α_1 (4) + α_0

     Solving these equations for α_0 and α_1, we obtain α_0 = cos 0 = 1 and α_1 = (cos 4 - 1)/4 = -0.413411. Substituting these values into (1) and simplifying give us

         cos A = [ 0.173178  -1.653644]
                 [-0.413411   0.173178]

8.5  Find e^A for the matrix given in Problem 8.3.

     It follows from Example 8.1 that e^A is defined for all matrices. For this particular 2 x 2 matrix A, (8.1) becomes

         e^A = α_1 A + α_0 I                                                           (1)

     Now f(λ) = e^λ and r(λ) = α_1 λ + α_0, and the distinct eigenvalues of A are λ1 = 0 and λ2 = 4. Substituting these quantities into (8.2) once for each eigenvalue, we formulate the two equations

         e^0 = α_1 (0) + α_0
         e^4 = α_1 (4) + α_0

     Thus α_0 = e^0 = 1 and α_1 = (e^4 - 1)/4 = 13.3995. Substituting these values into (1) and simplifying give us

         e^A = [27.7991  53.5982]
               [13.3995  27.7991]

8.6  Find sin A for

         A = [-2   2   0]
             [ 0  -2   1]
             [ 0   0  -2]

     The Maclaurin series for sin z converges for all finite values of z, so sin A is well defined for all matrices.
     For the given 3 x 3 matrix, (8.1) becomes

         sin A = α_2 A^2 + α_1 A + α_0 I                                               (1)

     Matrix A has the eigenvalue λ = -2 with multiplicity three, so we will have to use Step 8.3. With f(λ) = sin λ and r(λ) = α_2 λ^2 + α_1 λ + α_0, we determine

         f'(λ)  = cos λ        r'(λ)  = 2α_2 λ + α_1
         f''(λ) = -sin λ       r''(λ) = 2α_2

     and write (8.2) and (8.3), respectively, as

         sin(-2)  = α_2 (-2)^2 + α_1 (-2) + α_0
         cos(-2)  = 2α_2 (-2) + α_1
         -sin(-2) = 2α_2

     We thus obtain α_2 = -sin(-2)/2 = 0.454649, α_1 = cos(-2) - 2 sin(-2) = 1.402448, and α_0 = sin(-2) - 4α_2 + 2α_1 = 0.077004. Substituting these values into (1) and simplifying give us

         sin A = [-0.909297  -0.832294   0.909297]
                 [ 0         -0.909297  -0.416147]
                 [ 0          0         -0.909297]

8.7  Find e^{At} for

         A = [ 0  1]
             [-1  0]

     We set B = At and compute e^B. Since B is of order 2 x 2, (8.1) becomes

         e^B = α_1 B + α_0 I                                                           (1)

     Here f(λ) = e^λ and r(λ) = α_1 λ + α_0, and the distinct eigenvalues of B are λ1 = it and λ2 = -it. Substituting these quantities into (8.2) separately for each eigenvalue, we obtain the two equations

         e^{it}  = α_1 (it) + α_0
         e^{-it} = α_1 (-it) + α_0

     Solving these equations for α_0 and α_1, we obtain

         α_0 = (e^{it} + e^{-it})/2 = cos t        α_1 = (e^{it} - e^{-it})/(2it) = (sin t)/t

     Substituting these values into (1), we determine

         e^{At} = [ cos t   sin t]
                  [-sin t   cos t]

8.8  Find e^{At} for

         A = [ 0   1]
             [-8  -2]

     We set B = At and compute e^B. Since B is of order 2 x 2, (8.1) becomes

         e^B = α_1 B + α_0 I                                                           (1)

     Here f(λ) = e^λ and r(λ) = α_1 λ + α_0, and the distinct eigenvalues of B are λ1 = 2t and λ2 = -4t. Substituting these quantities into (8.2) once for each eigenvalue, we obtain the two equations

         e^{2t}  = α_1 (2t) + α_0
         e^{-4t} = α_1 (-4t) + α_0

     Solving these equations for α_0 and α_1, we obtain α_1 = (e^{2t} - e^{-4t})/(6t) and α_0 = (2e^{2t} + e^{-4t})/3. Substituting these values into (1), we get

         e^{At} = (1/6) [2(2e^{2t} + e^{-4t})    e^{2t} - e^{-4t}  ]
                        [-8(e^{2t} - e^{-4t})   2(e^{2t} + 2e^{-4t})]

8.9  Find e^{At} for

         A = [0  0  0]
             [1  0  0]
             [0  1  0]

     We set B = At and compute e^B. Since B is a 3 x 3 matrix, (8.1) becomes

         e^B = α_2 B^2 + α_1 B + α_0 I                                                 (1)

     Now f(λ) = e^λ, and the only eigenvalue of B is λ = 0, with multiplicity three. Substituting these quantities, along with f'(λ) = e^λ, r'(λ) = 2α_2 λ + α_1, f''(λ) = e^λ, and r''(λ) = 2α_2, into (8.2) and (8.3), we formulate the three equations

         e^0 = α_2 (0)^2 + α_1 (0) + α_0
         e^0 = 2α_2 (0) + α_1
         e^0 = 2α_2

     so α_0 = 1, α_1 = 1, and α_2 = 1/2. Substituting these values into (1) and simplifying, we obtain

         e^{At} = [  1      0   0]
                  [  t      1   0]
                  [t^2/2    t   1]

8.10 Establish the equations that are needed to find e^{At}
     if A is the given 6 x 6 matrix.

     We set B = At and compute e^B. Since B is a 6 x 6 matrix, (8.1) becomes

         e^B = α_5 B^5 + α_4 B^4 + α_3 B^3 + α_2 B^2 + α_1 B + α_0 I

     The distinct eigenvalues of B are λ1 = t with multiplicity three, λ2 = 2t with multiplicity two, and λ3 = 0 with multiplicity one. We determine

         r(λ)   = α_5 λ^5 + α_4 λ^4 + α_3 λ^3 + α_2 λ^2 + α_1 λ + α_0
         r'(λ)  = 5α_5 λ^4 + 4α_4 λ^3 + 3α_3 λ^2 + 2α_2 λ + α_1
         r''(λ) = 20α_5 λ^3 + 12α_4 λ^2 + 6α_3 λ + 2α_2

     and (8.2) and (8.3) become

         e^t    = α_5 t^5 + α_4 t^4 + α_3 t^3 + α_2 t^2 + α_1 t + α_0
         e^t    = 5α_5 t^4 + 4α_4 t^3 + 3α_3 t^2 + 2α_2 t + α_1
         e^t    = 20α_5 t^3 + 12α_4 t^2 + 6α_3 t + 2α_2
         e^{2t} = α_5 (2t)^5 + α_4 (2t)^4 + α_3 (2t)^3 + α_2 (2t)^2 + α_1 (2t) + α_0
         e^{2t} = 5α_5 (2t)^4 + 4α_4 (2t)^3 + 3α_3 (2t)^2 + 2α_2 (2t) + α_1
         e^0    = α_0

     These equations should be simplified before they are solved.

8.11 Find dA/dt for the given matrix A(t).

     The derivative is obtained by differentiating each element of A(t) separately.

8.12 Find ∫ A dt for A as given in Problem 8.11.

     The integral is obtained by integrating each element of A(t) separately.

8.13 Solve dX/dt = AX(t) + F(t) with initial value X(0) = C when

         A = [ 0   1]
             [-8  -2]

     and F(t) and C are as given.

     The solution is given by either (8.4) or (8.5). We shall use (8.4) here, and (8.5) in Problem 8.14. For A as given, e^{At} has already been calculated in Problem 8.8; e^{-As} is obtained from it by replacing t with -s. The product e^{At} ∫_0^t e^{-As} F(s) ds is then evaluated and added to e^{At} C.

8.14 Use (8.5) to solve Problem 8.13.

     The term e^{A(t-0)} C is as in Problem 8.13. In addition, the integral

         ∫_0^t e^{A(t-s)} F(s) ds

     is evaluated with e^{A(t-s)} obtained from e^{At} by replacing t with t - s. The result is the same as before.

8.15 Solve the matrix equation AX + XB = C for X, with A, B, and C as given.

     In preparation for the use of (8.6), we calculate e^{At} and e^{Bt}. Then

         X = -∫_0^∞ e^{At} C e^{Bt} dt

     is evaluated element by element.

8.16 Prove that e^{At} e^{Bt} = e^{(A+B)t} if and only if the matrices A and B commute (that is, if and only if the commutative property for multiplication holds for A and B).

     If AB = BA, and only then, we have

         (A + B)^2 = A^2 + AB + BA + B^2 = A^2 + 2AB + B^2
     and, in general,

         (A + B)^n = Σ_{k=0}^n C(n, k) A^k B^{n-k}                                     (1)

     where

         C(n, k) = n! / (k!(n-k)!)

     is the binomial coefficient ("n things taken k at a time"). Now, according to the defining equation, we have for any A and B

         e^{At} e^{Bt} = (Σ_{n=0}^∞ A^n t^n / n!)(Σ_{m=0}^∞ B^m t^m / m!)
                       = Σ_{n=0}^∞ (t^n / n!) Σ_{k=0}^n C(n, k) A^k B^{n-k}            (2)

     and

         e^{(A+B)t} = Σ_{n=0}^∞ (A + B)^n t^n / n!                                     (3)

     The last series in (3) is equal to the last series in (2) if and only if (1) holds; that is, if and only if A and B commute.

8.17 Prove that e^A e^{-A} = I.

     Setting t = 1 in Problem 8.16, we conclude that e^A e^{-A} = e^{A+(-A)} if A and -A commute. But the matrices A and -A do commute, since

         (A)(-A) = -(AA) = (-A)(A)

     Consequently, e^A e^{-A} = e^{A-A} = e^0 = I.

8.18 Prove that e^0 = I.

     From the definition of matrix multiplication, 0^n = 0 for n ≥ 1. Hence,

         e^0 = Σ_{n=0}^∞ 0^n / n! = I + 0 + 0 + ... = I

Supplementary Problems

8.19 Determine the limit of each of the given sequences of matrices as k → ∞.

8.20 The Bessel function of the first kind of order zero is defined as

         J_0(z) = Σ_{k=0}^∞ (-1)^k z^{2k} / (2^{2k} (k!)^2)

     For which matrices A is J_0(A) well defined?

8.21 Determine the conditions on a matrix A that will make the given function well defined.

8.22 Find (a) sin A and (b) e^A for the given matrix.

8.23 Find (a) cos A and (b) e^A for the given matrix.

8.24 Find (a) sin A and (b) cos A for the given 3 x 3 matrix.

In Problems 8.25 through 8.31, find e^{At} for the given matrix.

8.32 Find sin At for the given matrix.

8.33 Solve dX/dt = AX(t) + F(t); X(0) = C for the given A, F(t), and C.

8.34 Solve dX/dt = AX(t) + F(t) for the given A and F(t).

8.35 Solve dX/dt = AX(t) + F(t); X(0) = C for the given A, F(t), and C.

8.36 Solve Problem 8.35 with F(t) replaced by the vector given.

8.37 Solve AX + XB = C for the given A, B, and C.

Chapter 9

Canonical Bases

GENERALIZED EIGENVECTORS

A vector X_m is a generalized (right) eigenvector of rank m for the square matrix A and associated eigenvalue λ if

    (A - λI)^m X_m = 0        but        (A - λI)^{m-1} X_m ≠ 0

(See Problems 9.1 through 9.4.)
Right eigenvectors, as defined in Chapter 7, are generalized eigenvectors of rank 1.

CHAINS

A chain generated by a generalized eigenvector X_m of rank m associated with the eigenvalue λ is a set of vectors {X_m, X_{m-1}, ..., X_1} defined recursively as

    X_j = (A - λI) X_{j+1}        (j = m-1, m-2, ..., 1)                               (9.1)

(See Problems 9.5 and 9.6.) A chain is a linearly independent set of generalized eigenvectors of descending rank. The number of vectors in the set is called the length of the chain.

CANONICAL BASIS

A canonical basis for an n x n matrix A is a set of n linearly independent generalized eigenvectors composed entirely of chains. That is, if a generalized eigenvector of rank m appears in the basis, so too does the complete chain generated by that vector.

The simplest canonical bases, when they exist, are those consisting solely of chains of length one (i.e., of linearly independent eigenvectors). Such bases always exist when the eigenvalues of a matrix are distinct. (See Problem 9.9.) The chains associated with an eigenvalue of multiplicity greater than one are determined with the following algorithm, which first establishes the number of generalized eigenvectors of each rank that will appear in a canonical basis and then provides a means for obtaining them:

STEP 9.1: Denote the multiplicity of λ as m, and determine the smallest positive integer p for which the rank of (A - λI)^p equals n - m, where n denotes the number of rows (and columns) in A.

STEP 9.2: For each integer k between 1 and p, inclusive, compute the eigenvalue rank number N_k as

    N_k = rank(A - λI)^{k-1} - rank(A - λI)^k                                          (9.2)

Each N_k is the number of generalized eigenvectors of rank k that will appear in the canonical basis.

STEP 9.3: Determine a generalized eigenvector of rank p, and construct the chain generated by this vector. Each of these vectors is part of the canonical basis.

STEP 9.4: Reduce each positive N_k (k = 1, 2, ..., p) by 1. If all N_k are zero, stop; the procedure is completed.
If not, continue to Step 9.5.

STEP 9.5: Find the highest value of k for which N_k is not zero, and determine a generalized eigenvector of that rank which is linearly independent of all previously determined generalized eigenvectors associated with λ. Form the chain generated by it, and include that chain in the basis. Return to Step 9.4.

(See Problems 9.10 through 9.13.)

THE MINIMUM POLYNOMIAL

The minimum polynomial m(λ) for an n x n matrix A is the monic polynomial of least degree for which m(A) = 0. Designate the distinct eigenvalues of A as λ1, λ2, ..., λs (1 ≤ s ≤ n), and for each λi determine a p_i as in Step 9.1 above. The minimum polynomial for A is then

    m(λ) = (λ - λ1)^{p_1} (λ - λ2)^{p_2} ... (λ - λs)^{p_s}                            (9.3)

(See Problems 9.14 and 9.15.)

Solved Problems

9.1  Show that X = [1, 0, 0]^T is a generalized eigenvector of rank 2 corresponding to the eigenvalue λ = 3 for the given matrix A.

     For X = [1, 0, 0]^T, we compute (A - 3I)X ≠ 0 while (A - 3I)^2 X = 0, which together imply that X is a generalized eigenvector of rank 2.

9.2  Find a generalized eigenvector of rank 3 corresponding to the eigenvalue λ = 7 for the matrix

         A = [7  1  2]
             [0  7  1]
             [0  0  7]

     We seek a three-dimensional vector X_3 = [x1, x2, x3]^T such that (A - 7I)^3 X_3 = 0 and (A - 7I)^2 X_3 ≠ 0. Here

         A - 7I = [0  1  2]        (A - 7I)^2 = [0  0  1]        (A - 7I)^3 = 0
                  [0  0  1]                     [0  0  0]
                  [0  0  0]                     [0  0  0]

     The condition (A - 7I)^3 X_3 = 0 is automatically satisfied; the condition (A - 7I)^2 X_3 = [x3, 0, 0]^T ≠ 0 is satisfied only if x3 ≠ 0. Thus, x1 and x2 are arbitrary, whereas x3 is constrained to be nonzero. A simple choice is x1 = x2 = 0 and x3 = 1, yielding X_3 = [0, 0, 1]^T.

9.3  Find a generalized eigenvector of rank 2 corresponding to the eigenvalue λ = 4 for the given 4 x 4 matrix A.

     We seek a four-dimensional vector X_2 = [x1, x2, x3, x4]^T such that (A - 4I)^2 X_2 = 0 and (A - 4I) X_2 ≠ 0. Writing out both conditions shows that x4 must be zero for (A - 4I)^2 X_2 to vanish, while the remaining components need only keep (A - 4I) X_2 nonzero. A simple choice is x1 = 1 and x2 = x3 = x4 = 0. This gives us X_2 = [1, 0, 0, 0]^T.

9.4  Show that there is no generalized eigenvector of rank 3 corresponding to the eigenvalue λ = 4 for the matrix given in Problem 9.3.

     For such a vector X_3 = [x1, x2, x3,
     x4]^T to exist, the conditions (A - 4I)^3 X_3 = 0 and (A - 4I)^2 X_3 ≠ 0 must both be satisfied. For the given matrix A,

         (A - 4I)^3 X_3 = [0, 0, 0, -x4]^T        while        (A - 4I)^2 X_3 = [0, 0, 0, x4]^T

     To satisfy both conditions, x4 must be zero and nonzero simultaneously, which is impossible. Therefore, A has no generalized eigenvector of rank 3 corresponding to λ = 4.

9.5  Determine the chain that is generated by the generalized eigenvector of rank 3 found in Problem 9.2.

     From Problem 9.2, we have X_3 = [0, 0, 1]^T corresponding to the eigenvalue λ = 7. Furthermore,

         A - 7I = [0  1  2]
                  [0  0  1]
                  [0  0  0]

     It follows from (9.1) that

         X_2 = (A - 7I) X_3 = [0  1  2] [0]   [2]
                              [0  0  1] [0] = [1]
                              [0  0  0] [1]   [0]

     and

         X_1 = (A - 7I) X_2 = [0  1  2] [2]   [1]
                              [0  0  1] [1] = [0]
                              [0  0  0] [0]   [0]

     The chain is thus {X_3, X_2, X_1} = {[0, 0, 1]^T, [2, 1, 0]^T, [1, 0, 0]^T}.

9.6  Determine the chain that is generated by the generalized eigenvector of rank 2 found in Problem 9.3.

     From Problem 9.3, we have X_2 = [1, 0, 0, 0]^T, corresponding to λ = 4. Using (9.1), we write X_1 = (A - 4I) X_2 = [0, 1, -1, 0]^T. The chain is {X_2, X_1} = {[1, 0, 0, 0]^T, [0, 1, -1, 0]^T}.

9.7  Show that if X_m is a generalized eigenvector of rank m for matrix A and eigenvalue λ, then X_j as defined by (9.1) is a generalized eigenvector of rank j corresponding to the same matrix and eigenvalue.

     Since X_m is a generalized eigenvector of rank m, (A - λI)^m X_m = 0 and (A - λI)^{m-1} X_m ≠ 0. It follows from repeated application of Eq. (9.1) that

         X_j = (A - λI) X_{j+1} = (A - λI)^{m-j} X_m

     Therefore,

         (A - λI)^j X_j = (A - λI)^j (A - λI)^{m-j} X_m = (A - λI)^m X_m = 0

     and

         (A - λI)^{j-1} X_j = (A - λI)^{j-1} (A - λI)^{m-j} X_m = (A - λI)^{m-1} X_m ≠ 0

     which together imply that X_j is a generalized eigenvector of rank j for A and λ.

9.8  Show that a chain is a linearly independent set of vectors.

     The proof is inductive on the length of the chain. For chains of length one, the generating generalized eigenvector X_1 must be an eigenvector, so X_1 ≠ 0. Therefore, the only solution to the vector equation c_1 X_1 = 0 is c_1 = 0, and the chain is independent.

     Assume that all chains containing exactly k-1 vectors are linearly independent, and consider a chain consisting of the k-vector set {X_k, X_{k-1}, ..., X_1} for matrix A and eigenvalue λ.
     We must show that the only solution to the vector equation

         c_k X_k + c_{k-1} X_{k-1} + ... + c_1 X_1 = 0                                 (1)

     is c_1 = c_2 = ... = c_k = 0. Multiply (1) by (A - λI)^{k-1}, and observe that for each X_j (j = k-1, ..., 1) in that equation,

         (A - λI)^{k-1} c_j X_j = c_j (A - λI)^{k-1-j} (A - λI)^j X_j = c_j (A - λI)^{k-1-j} 0 = 0

     because each X_j is a generalized eigenvector of rank j (see Problem 9.7). What remains, then, is

         c_k (A - λI)^{k-1} X_k = 0                                                    (2)

     Since X_k is a generalized eigenvector of rank k, (A - λI)^{k-1} X_k ≠ 0, and it follows from (2) that c_k = 0. Equation (1) thus reduces to

         c_{k-1} X_{k-1} + ... + c_1 X_1 = 0                                           (3)

     But {X_{k-1}, ..., X_1} is a chain of length k-1, which we assumed to be linearly independent, so the constants c_{k-1}, ..., c_1 in (3) must all be zero. Therefore, the only solution to (1) is c_1 = c_2 = ... = c_k = 0, from which it follows that the chain of length k is linearly independent.

9.9  Determine a canonical basis for
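The chain computed in Problem 9.5 can be verified numerically. The sketch below assumes numpy (not part of the original text) and checks the rank conditions, the recursion (9.1), and the linear independence asserted in Problem 9.8:

```python
import numpy as np

# The matrix of Problem 9.2; its only eigenvalue is 7, with multiplicity three.
A = np.array([[7.0, 1.0, 2.0],
              [0.0, 7.0, 1.0],
              [0.0, 0.0, 7.0]])
N = A - 7.0 * np.eye(3)                      # N = A - lambda*I

# X3 = [0, 0, 1]^T is a generalized eigenvector of rank 3:
X3 = np.array([0.0, 0.0, 1.0])
assert np.allclose(np.linalg.matrix_power(N, 3) @ X3, 0.0)
assert not np.allclose(np.linalg.matrix_power(N, 2) @ X3, 0.0)

# The chain of (9.1): X2 = N X3 and X1 = N X2.
X2 = N @ X3
X1 = N @ X2
assert np.allclose(X2, [2.0, 1.0, 0.0])
assert np.allclose(X1, [1.0, 0.0, 0.0])

# A chain is linearly independent (Problem 9.8): stacking its vectors as
# columns gives a matrix of full rank.
assert np.linalg.matrix_rank(np.column_stack([X1, X2, X3])) == 3
```

The same rank computations drive Steps 9.1 and 9.2: here rank(N) = 2, rank(N^2) = 1, and rank(N^3) = 0, so N_1 = N_2 = N_3 = 1, one generalized eigenvector of each rank.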
