CHAPTER 17
Canonical Forms

Let T be a linear operator on a vector space of finite dimension. As seen in the preceding chapter, T may not have a diagonal matrix representation. However, it is still possible to "simplify" the matrix representation of T in a number of ways. This is the main topic of this chapter. In particular, we obtain the primary decomposition theorem and the triangular, Jordan, and rational canonical forms.

17.1 INVARIANT SUBSPACES

17.1 Define an invariant subspace of a linear operator.

Let T: V → V be linear. A subspace W of V is said to be invariant under T, or T-invariant, if T maps W into itself, i.e., if v ∈ W implies T(v) ∈ W. In this case T restricted to W defines a linear operator on W; that is, T induces a linear operator T̂: W → W defined by T̂(w) = T(w) for every w ∈ W.

Problems 17.2–17.5 refer to the linear operator T: R³ → R³ which rotates each vector about the z axis by an angle θ, as pictured in Fig. 17-1; i.e.,

T(x, y, z) = (x cos θ − y sin θ, x sin θ + y cos θ, z)

[Fig. 17-1: rotation of R³ about the z axis by the angle θ.]

17.2 Let W be the xy plane in R³. Is W invariant under T?

Each vector w = (a, b, 0) in the xy plane W remains in W under the mapping T, as indicated in Fig. 17-1. Thus W is invariant under T. The restriction of T to W rotates each vector in W about the origin O.

17.3 Let W′ be the yz plane in R³. Is W′ invariant under T?

A vector w′ = (0, b, c) in W′ with b ≠ 0 does not remain in W′ under T [unless θ is a multiple of π]. Thus W′ is not T-invariant.

17.4 Let U be the z axis in R³. Is U invariant under T?

For any vector u = (0, 0, c) in U, we have T(u) = u. Thus U is invariant under T. In fact, the restriction of T to U is the identity mapping on U.

17.5 Let U′ be the x axis in R³. Is U′ invariant under T?

A nonzero vector u′ = (a, 0, 0) in U′ does not remain in U′ under T [unless θ is a multiple of π]. Thus U′ is not invariant under T.

17.6 What, if any, is the relationship between eigenvectors of a linear operator T and invariant subspaces of T?

If v is any nonzero eigenvector of T, then span(v) is a one-dimensional invariant subspace of T. Conversely, if W is a one-dimensional invariant subspace of T, then any nonzero vector v ∈ W is an eigenvector of T.

Problems 17.7–17.10 refer to any linear operator T: V → V.

17.7 Show that {0} is invariant under T.

We have T(0) = 0 ∈ {0}; hence {0} is invariant under T.

17.8 Show that V is invariant under T.

For every v ∈ V, T(v) ∈ V; hence V is invariant under T.

17.9 Show that the kernel of T is invariant under T.

Let u ∈ Ker T. Then T(u) = 0 ∈ Ker T, since the kernel of T is a subspace of V. Thus Ker T is invariant under T.

17.10 Show that the image of T is invariant under T.

Since T(v) ∈ Im T for every v ∈ V, this is certainly true when v ∈ Im T. Hence the image of T is invariant under T.

17.11 Find all invariant subspaces of

$$A = \begin{pmatrix} 1 & -1 \\ 2 & -1 \end{pmatrix}$$

viewed as an operator on R².

First of all, R² and {0} are invariant under A. Any other invariant subspace would have to be one-dimensional. However, the characteristic polynomial of A is

$$\Delta(t) = |tI - A| = \begin{vmatrix} t-1 & 1 \\ -2 & t+1 \end{vmatrix} = t^2 + 1$$

Hence A has no eigenvalues (in R), and so A has no eigenvectors. But the one-dimensional invariant subspaces correspond to the eigenvectors; thus R² and {0} are the only subspaces invariant under A.

17.12 Suppose {Wᵢ} is a collection of T-invariant subspaces of a vector space V. Show that the intersection W = ∩ᵢ Wᵢ is also T-invariant.

Suppose v ∈ W; then v ∈ Wᵢ for every i. Since each Wᵢ is T-invariant, T(v) ∈ Wᵢ for every i. Thus T(v) ∈ W = ∩ᵢ Wᵢ, and so W is T-invariant.
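Remark: The rotation operator of Problems 17.2–17.5 is easy to probe numerically. The following sketch (assuming NumPy is available; the angle θ = 0.7 is an arbitrary choice) applies the matrix of T in the standard basis to one vector from each subspace considered above.

```python
import numpy as np

theta = 0.7  # any angle that is not a multiple of pi
T = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

w = np.array([3.0, -2.0, 0.0])   # in the xy plane W
print(T @ w)                     # third coordinate is still 0: T(w) stays in W

u = np.array([0.0, 0.0, 5.0])    # on the z axis U
print(T @ u)                     # equals u, as in Problem 17.4

wp = np.array([0.0, 1.0, 1.0])   # in the yz plane W'
print(T @ wp)                    # first coordinate is -sin(theta) != 0: W' is not invariant
```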
Theorem 17.1: Let T: V → V be linear, and let f(t) be any polynomial. Then the kernel of f(T) is invariant under T.

17.13 Prove Theorem 17.1.

Suppose v ∈ Ker f(T), that is, f(T)(v) = 0. We need to show that T(v) also belongs to the kernel of f(T), that is, f(T)(T(v)) = 0. Since f(t)t = t f(t), we have f(T)T = T f(T). Thus f(T)(T(v)) = T(f(T)(v)) = T(0) = 0, as required.

Theorem 17.2: Suppose W is an invariant subspace of T: V → V. Then T has a block matrix representation

$$\begin{pmatrix} A & B \\ 0 & C \end{pmatrix}$$

where A is a matrix representation of the restriction T̂ of T to W.

17.14 Prove Theorem 17.2.

We choose a basis {w₁, ..., w_r} of W and extend it to a basis {w₁, ..., w_r, v₁, ..., v_s} of V. Then

T̂(w₁) = T(w₁) = a₁₁w₁ + ··· + a_{1r}w_r
 ...
T̂(w_r) = T(w_r) = a_{r1}w₁ + ··· + a_{rr}w_r
T(v₁) = b₁₁w₁ + ··· + b_{1r}w_r + c₁₁v₁ + ··· + c_{1s}v_s
 ...
T(v_s) = b_{s1}w₁ + ··· + b_{sr}w_r + c_{s1}v₁ + ··· + c_{ss}v_s

But the matrix of T in this basis is the transpose of the matrix of coefficients in the above system of equations. Therefore it has the form (A B; 0 C), where A is the transpose of the matrix of coefficients of the obvious subsystem. By the same argument, A is the matrix of T̂ relative to the basis {wᵢ} of W.

Problems 17.15–17.16 refer to the restriction T̂ of a linear operator T to an invariant subspace W; that is, T̂(w) = T(w) for every w ∈ W.

17.15 Show that, for any polynomial f(t), f(T̂)(w) = f(T)(w).

If f = 0 or f is a constant, i.e., of degree 0, then the result clearly holds. Assume deg f = n > 0 and that the result holds for polynomials of degree less than n. Suppose f(t) = aₙtⁿ + a_{n−1}tⁿ⁻¹ + ··· + a₁t + a₀. Then

f(T̂)(w) = (aₙT̂ⁿ + a_{n−1}T̂ⁿ⁻¹ + ··· + a₀I)(w)
  = (aₙT̂ⁿ⁻¹)(T̂(w)) + (a_{n−1}T̂ⁿ⁻¹ + ··· + a₀I)(w)
  = (aₙTⁿ⁻¹)(T(w)) + (a_{n−1}Tⁿ⁻¹ + ··· + a₀I)(w) = f(T)(w)

where the middle step uses the inductive hypothesis together with T̂(w) = T(w) ∈ W.

17.16 Prove: The minimum polynomial of T̂ divides the minimum polynomial of T.

Let m(t) denote the minimum polynomial of T. Then by Problem 17.15, m(T̂)(w) = m(T)(w) = 0(w) = 0 for every w ∈ W; that is, T̂ is a zero of the polynomial m(t). Hence the minimum polynomial of T̂ divides m(t).

17.17 Show that every subspace of V is invariant under I and 0, the identity and zero operators.

Suppose W is a subspace of V and w ∈ W. Then I(w) = w ∈ W and 0(w) = 0 ∈ W. Thus W is invariant under I and 0.

17.18 Determine the invariant subspaces of

$$A = \begin{pmatrix} 2 & -4 \\ 5 & -2 \end{pmatrix}$$

viewed as a linear operator on R².

Here Δ(t) = t² + 16 is the characteristic polynomial of A. There are no eigenvalues (in R), and hence there are no eigenvectors. Thus there are no one-dimensional invariant subspaces. Accordingly, {0} and R² are the only A-invariant subspaces.

17.19 Determine the invariant subspaces of the above matrix A viewed as a linear operator on C².

Since Δ(t) = t² + 16 = (t − 4i)(t + 4i), there are two eigenvalues, λ₁ = 4i and λ₂ = −4i. Solving (4iI − A)v = 0 yields a nonzero solution v₁ = (2, 1 − 2i), and solving (−4iI − A)v = 0 yields a nonzero solution v₂ = (2, 1 + 2i). Thus the only invariant subspaces are {0}, C², W₁ = span(2, 1 − 2i), and W₂ = span(2, 1 + 2i).

Problems 17.20–17.23 refer to a subspace W which is invariant under both S: V → V and T: V → V.

17.20 Show that W is invariant under S + T.

Let w ∈ W. Then S(w) ∈ W and T(w) ∈ W. Since W is a subspace, S(w) + T(w) ∈ W. Therefore (S + T)(w) = S(w) + T(w) belongs to W. Thus W is invariant under S + T.

17.21 Show that W is invariant under the composition S∘T.

Let w ∈ W. Then T(w) ∈ W and hence (S∘T)(w) = S(T(w)) ∈ W. Thus W is invariant under S∘T.

17.22 Show that W is invariant under kT for any scalar k ∈ K.

Let w ∈ W. Then T(w) ∈ W. Since W is a subspace, kT(w) ∈ W. Thus (kT)(w) = kT(w) belongs to W. Hence W is invariant under kT.

17.23 Show that W is invariant under f(T) for any polynomial f(t).

By Problem 17.21, W is invariant under T², and, by induction, W is invariant under Tᵏ for any k ≥ 1. By Problem 17.22, W is invariant under aₖTᵏ for any scalar aₖ. Also, W is invariant under I by Problem 17.17 [where I is the identity map]. Last, by Problem 17.20, W is invariant under aₙTⁿ + ··· + a₁T + a₀I. In other words, W is invariant under f(T) for any polynomial f(t).
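Remark: Problems 17.18–17.19 can be checked with a computer algebra system; the sketch below assumes SymPy. It confirms that A has no real eigenvalues but acquires the invariant subspaces W₁ and W₂ once the field is enlarged to C.

```python
from sympy import Matrix, I, symbols

t = symbols('t')
A = Matrix([[2, -4], [5, -2]])

print(A.charpoly(t))          # t**2 + 16: no real roots, so no real eigenvalues
print(A.eigenvects())         # over C: eigenvalues 4*I and -4*I with eigenvectors

v = Matrix([2, 1 - 2*I])      # the vector spanning W1 in Problem 17.19
print((A * v - 4*I * v).expand())   # the zero vector, confirming A*v = 4i*v
```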
17.2 DIRECT SUMS, PROJECTIONS

17.24 Define a direct sum of subspaces and the corresponding projections.

A vector space V is termed the direct sum of its subspaces W₁, ..., W_r, written V = W₁ ⊕ W₂ ⊕ ··· ⊕ W_r, if every vector v ∈ V can be written uniquely in the form v = w₁ + w₂ + ··· + w_r with wᵢ ∈ Wᵢ. In such a case, the projection of V into its subspace Wₖ is the mapping E: V → V defined by E(v) = wₖ. [The projection E is well-defined since the sum for v is unique, and there is one projection mapping for each subspace Wₖ.]

Problems 17.25–17.28 refer to the following subspaces of R³: U = the xy plane, W = the yz plane, Z = the z axis, L = {(k, k, k) : k ∈ R}.

17.25 Is R³ = U ⊕ W?

R³ = U + W, since every vector in R³ is the sum of a vector in U and a vector in W. However, R³ is not the direct sum of U and W, since such sums are not unique; e.g., (1, 2, 3) = (1, 1, 0) + (0, 1, 3) = (1, 3, 0) + (0, −1, 3).

17.26 Is R³ = U ⊕ Z?

Any vector (a, b, c) ∈ R³ can be written as the sum of a vector in U and a vector in Z in one and only one way: (a, b, c) = (a, b, 0) + (0, 0, c). Thus R³ = U ⊕ Z.

17.27 Given R³ = U ⊕ L, find the projections E_U and E_L of R³ into U and L, respectively.

For any vector (a, b, c) ∈ R³, the unique representation is (a, b, c) = (a − c, b − c, 0) + (c, c, c). Thus E_U and E_L are defined by E_U(a, b, c) = (a − c, b − c, 0) and E_L(a, b, c) = (c, c, c).

17.28 Given R³ = W ⊕ L, find the projections E_W and E_L onto W and L, respectively.

We have (a, b, c) = (0, b − a, c − a) + (a, a, a) as the unique representation; hence E_W(a, b, c) = (0, b − a, c − a) and E_L(a, b, c) = (a, a, a).

Theorem 17.3: Suppose W₁, ..., W_r are subspaces of V and suppose Bᵢ = {w_{i1}, ..., w_{i n_i}} is a basis for Wᵢ, for i = 1, ..., r. Let B be the union of all the basis vectors, i.e., B = B₁ ∪ ··· ∪ B_r. Then:
(i) If B is a basis of V, then V = W₁ ⊕ ··· ⊕ W_r.
(ii) If V = W₁ ⊕ ··· ⊕ W_r, then B is a basis of V.

17.29 Prove (i) of Theorem 17.3.

Let v ∈ V. Since B is a basis of V,

v = a₁₁w₁₁ + ··· + a_{1n₁}w_{1n₁} + ··· + a_{r1}w_{r1} + ··· + a_{rn_r}w_{rn_r} = w₁ + w₂ + ··· + w_r

where wᵢ = a_{i1}w_{i1} + ··· + a_{i n_i}w_{i n_i} ∈ Wᵢ. We next show that such a sum is unique. Suppose v = w₁′ + w₂′ + ··· + w_r′ where wᵢ′ ∈ Wᵢ. Since {w_{i1}, ..., w_{i n_i}} is a basis of Wᵢ, wᵢ′ = b_{i1}w_{i1} + ··· + b_{i n_i}w_{i n_i}, and so v = b₁₁w₁₁ + ··· + b_{rn_r}w_{rn_r}. Since B is a basis of V, a_{ij} = b_{ij} for each i and each j. Hence wᵢ = wᵢ′, and so the sum for v is unique. Accordingly, V is the direct sum of the Wᵢ.

17.30 Prove (ii) of Theorem 17.3.

Let v ∈ V. Since V is the direct sum of the Wᵢ, we have v = w₁ + ··· + w_r where wᵢ ∈ Wᵢ. Since {w_{ij}} is a basis of Wᵢ, each wᵢ is a linear combination of the w_{ij}, and so v is a linear combination of the elements of B. Thus B spans V. We now show that B is linearly independent. Suppose

a₁₁w₁₁ + ··· + a_{rn_r}w_{rn_r} = 0

Note that a_{i1}w_{i1} + ··· + a_{i n_i}w_{i n_i} ∈ Wᵢ. We also have that 0 = 0 + 0 + ··· + 0, where 0 ∈ Wᵢ. Since such a sum for 0 is unique, a_{i1}w_{i1} + ··· + a_{i n_i}w_{i n_i} = 0 for i = 1, ..., r. The independence of the bases {w_{ij}} implies that all the a's are 0. Thus B is linearly independent and hence is a basis of V.

17.31 Let V = W₁ ⊕ ··· ⊕ W_r, and let E: V → V be the projection mapping E(v) = wₖ, where v = w₁ + ··· + w_r with wᵢ ∈ Wᵢ. Show that E is linear.

Suppose, for u, v ∈ V, that v = w₁ + ··· + w_r and u = w₁′ + ··· + w_r′, with wᵢ, wᵢ′ ∈ Wᵢ, are the unique sums corresponding to v and u. Then

v + u = (w₁ + w₁′) + ··· + (w_r + w_r′)  and  kv = kw₁ + ··· + kw_r

are the unique sums corresponding to v + u and kv. Hence E(v + u) = wₖ + wₖ′ = E(v) + E(u) and E(kv) = kwₖ = kE(v), and therefore E is linear.

17.32 Show that E² = E for the above projection map E.

First we have that wₖ = 0 + ··· + 0 + wₖ + 0 + ··· + 0 is the unique sum corresponding to wₖ ∈ Wₖ; hence E(wₖ) = wₖ. Then, for any v ∈ V, E²(v) = E(E(v)) = E(wₖ) = wₖ = E(v). Thus E² = E, as required.
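Remark: The projections of Problem 17.27 can be written as matrices, and the identity E² = E of Problem 17.32 verified directly; a minimal sketch, assuming NumPy.

```python
import numpy as np

# Matrices of E_U and E_L, read off from (a, b, c) = (a - c, b - c, 0) + (c, c, c):
E_U = np.array([[1, 0, -1],
                [0, 1, -1],
                [0, 0,  0]])
E_L = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 0, 1]])

assert np.array_equal(E_U @ E_U, E_U)               # E^2 = E, as in Problem 17.32
assert np.array_equal(E_L @ E_L, E_L)
assert np.array_equal(E_U + E_L, np.eye(3))         # the projections sum to the identity
assert np.array_equal(E_U @ E_L, np.zeros((3, 3)))  # and they annihilate each other
print("all checks passed")
```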
Theorem 17.4: Suppose E: V → V is linear and E² = E. Then:
(i) E(u) = u for any u ∈ Im E;
(ii) V = Im E ⊕ Ker E;
(iii) E is the projection of V into Im E.

Remark: By this theorem and Problems 17.31 and 17.32, a linear mapping T: V → V is a projection if and only if T² = T. This characterization of a projection is frequently used as its definition.

17.33 Prove (i) of Theorem 17.4.

If u ∈ Im E, then there exists v ∈ V for which E(v) = u; hence E(u) = E(E(v)) = E²(v) = E(v) = u, as required.

17.34 Prove (ii) of Theorem 17.4.

Let v ∈ V. We can write v in the form v = E(v) + (v − E(v)). Now E(v) ∈ Im E and, since E(v − E(v)) = E(v) − E²(v) = E(v) − E(v) = 0, v − E(v) ∈ Ker E. Accordingly, V = Im E + Ker E. Now suppose w ∈ Im E ∩ Ker E. By (i) of Theorem 17.4, E(w) = w because w ∈ Im E. On the other hand, E(w) = 0 because w ∈ Ker E. Thus w = 0, and so Im E ∩ Ker E = {0}. These two conditions imply that V is the direct sum of the image and kernel of E.

17.35 Prove (iii) of Theorem 17.4.

Let v ∈ V and suppose v = u + w, where u ∈ Im E and w ∈ Ker E. Note E(u) = u by (i) of Theorem 17.4, and E(w) = 0 because w ∈ Ker E. Hence E(v) = E(u + w) = E(u) + E(w) = u + 0 = u. That is, E is the projection of V into its image.

17.36 Suppose E: V → V is a projection, that is, E² = E. Show that I − E is a projection.

(I − E)² = I − 2E + E² = I − 2E + E = I − E. Thus I − E is a projection.

17.3 INVARIANT DIRECT-SUM DECOMPOSITIONS

17.37 Define an invariant direct-sum decomposition of a vector space with respect to a linear operator.

Let T: V → V be linear. Suppose V is the direct sum of [nonzero] T-invariant subspaces W₁, ..., W_r; i.e., V = W₁ ⊕ ··· ⊕ W_r and T(Wᵢ) ⊆ Wᵢ, i = 1, ..., r. Then the subspaces Wᵢ are said to reduce T or to form a T-invariant direct-sum decomposition of V. Furthermore, if Tᵢ is the restriction of T to Wᵢ, then T is said to be decomposable into the operators Tᵢ, or T is said to be the direct sum of the Tᵢ, written T = T₁ ⊕ ··· ⊕ T_r.

17.38 Let T: R³ → R³ be the linear operator which rotates each vector about the z axis by an angle θ [as pictured in Fig. 17-1]; i.e., T(x, y, z) = (x cos θ − y sin θ, x sin θ + y cos θ, z). Show that the xy plane W and the z axis U form a T-invariant direct-sum decomposition of R³.

Note first that R³ = W ⊕ U, since the only way that v = (a, b, c) in R³ can be written as the sum of a vector in W and a vector in U is (a, b, c) = (a, b, 0) + (0, 0, c). Furthermore, W and U are invariant under T [Problems 17.2 and 17.4]. Thus W and U form a T-invariant direct-sum decomposition of R³.

The following theorems [proved in Problems 17.39, 17.44, and 17.45] indicate the main content of this section.

Theorem 17.5: Suppose T: V → V is linear and V is the direct sum of T-invariant subspaces W₁, ..., W_r. If Aᵢ is a matrix representation of the restriction of T to Wᵢ, then T can be represented by the block diagonal matrix

$$M = \begin{pmatrix} A_1 & & \\ & \ddots & \\ & & A_r \end{pmatrix}$$

Theorem 17.6 (Primary Decomposition Theorem): Let T: V → V be a linear operator with minimal polynomial m(t) = f₁(t)^{n₁} f₂(t)^{n₂} ··· f_r(t)^{n_r}, where the fᵢ(t) are distinct monic irreducible polynomials. Then V is the direct sum of T-invariant subspaces W₁, ..., W_r, where Wᵢ is the kernel of fᵢ(T)^{n_i}. Moreover, fᵢ(t)^{n_i} is the minimal polynomial of the restriction of T to Wᵢ.

Theorem 17.7: A linear operator T: V → V has a diagonal matrix representation if and only if its minimal polynomial m(t) is a product of distinct linear polynomials.

Theorem 17.8 [Alternate Form of Theorem 17.7]: A matrix A is similar to a diagonal matrix if and only if its minimal polynomial is a product of distinct linear polynomials.

Remark: Theorem 17.8 is a useful characterization of diagonalizable operators; e.g., see Problem 17.46.
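Remark: Theorem 17.4 can be illustrated on the projection from Problem 17.27; the sketch below assumes SymPy and exhibits the decomposition V = Im E ⊕ Ker E by counting dimensions.

```python
from sympy import Matrix

E = Matrix([[1, 0, -1],
            [0, 1, -1],
            [0, 0,  0]])          # the projection E_U of Problem 17.27
assert E * E == E                 # E is idempotent

im_basis = E.columnspace()        # basis of Im E
ker_basis = E.nullspace()         # basis of Ker E
print(len(im_basis), len(ker_basis))   # 2 and 1: the dimensions add to dim R^3

for u in im_basis:                # part (i) of Theorem 17.4: E fixes its image
    assert E * u == u
```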
17.39 Suppose T: V → V is linear and V = U ⊕ W is a T-invariant direct-sum decomposition of V. Prove Theorem 17.5 in the case dim U = 2 and dim W = 3.

Suppose {u₁, u₂} and {w₁, w₂, w₃} are bases of U and W, respectively. If T₁ and T₂ denote the restrictions of T to U and W, respectively, then

T₁(u₁) = a₁₁u₁ + a₁₂u₂
T₁(u₂) = a₂₁u₁ + a₂₂u₂
T₂(w₁) = b₁₁w₁ + b₁₂w₂ + b₁₃w₃
T₂(w₂) = b₂₁w₁ + b₂₂w₂ + b₂₃w₃
T₂(w₃) = b₃₁w₁ + b₃₂w₂ + b₃₃w₃

Hence

$$A = \begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{pmatrix} \qquad B = \begin{pmatrix} b_{11} & b_{21} & b_{31} \\ b_{12} & b_{22} & b_{32} \\ b_{13} & b_{23} & b_{33} \end{pmatrix}$$

are matrix representations of T₁ and T₂, respectively. By Theorem 17.3, {u₁, u₂, w₁, w₂, w₃} is a basis of V. Since T(uᵢ) = T₁(uᵢ) and T(wⱼ) = T₂(wⱼ), the matrix of T in this basis is the block diagonal matrix

$$\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$$

Remark: The proof of Theorem 17.5 in general is exactly the same as the above proof and is omitted.

17.40 Suppose T: V → V is linear and T = T₁ ⊕ T₂ with respect to a T-invariant direct-sum decomposition V = U ⊕ W. Let m(t), m₁(t), and m₂(t) denote, respectively, the minimum polynomials of T, T₁, and T₂. Show that m(t) is the least common multiple of m₁(t) and m₂(t).

By Problem 17.16, each of m₁(t) and m₂(t) divides m(t). Now suppose f(t) is a multiple of both m₁(t) and m₂(t); then f(T₁)(U) = 0 and f(T₂)(W) = 0. Let v ∈ V; then v = u + w with u ∈ U and w ∈ W. Now f(T)v = f(T)u + f(T)w = f(T₁)u + f(T₂)w = 0 + 0 = 0. That is, T is a zero of f(t). Hence m(t) divides f(t), and so m(t) is the least common multiple of m₁(t) and m₂(t).

17.41 In the above problem, let Δ(t), Δ₁(t), and Δ₂(t) denote, respectively, the characteristic polynomials of T, T₁, and T₂. Show that Δ(t) = Δ₁(t)Δ₂(t).

By Theorem 17.5, T has a matrix representation M = (A 0; 0 B), where A and B are matrix representations of T₁ and T₂, respectively. Then

$$\Delta(t) = |tI - M| = \begin{vmatrix} tI - A & 0 \\ 0 & tI - B \end{vmatrix} = |tI - A| \cdot |tI - B| = \Delta_1(t)\Delta_2(t)$$

as required.

Theorem 17.9: Suppose T: V → V is linear, and suppose f(t) = g(t)h(t) are polynomials such that f(T) = 0 and g(t) and h(t) are relatively prime. Then V is the direct sum of the T-invariant subspaces U and W, where U = Ker g(T) and W = Ker h(T).

17.42 Prove Theorem 17.9.

Note first that U and W are T-invariant by Theorem 17.1. Since g(t) and h(t) are relatively prime, there exist polynomials r(t) and s(t) such that r(t)g(t) + s(t)h(t) = 1. Hence, for the operator T,

r(T)g(T) + s(T)h(T) = I    (1)

Let v ∈ V; then by (1), v = r(T)g(T)v + s(T)h(T)v. But the first term in this sum belongs to W = Ker h(T), since

h(T)r(T)g(T)v = r(T)g(T)h(T)v = r(T)f(T)v = r(T)0v = 0

Similarly, the second term belongs to U. Hence V is the sum of U and W.

To prove that V = U ⊕ W, we must show that a sum v = u + w, with u ∈ U, w ∈ W, is uniquely determined by v. Applying the operator r(T)g(T) to v = u + w and using g(T)u = 0, we obtain

r(T)g(T)v = r(T)g(T)u + r(T)g(T)w = r(T)g(T)w

Also, applying (1) to w alone and using h(T)w = 0, we obtain

w = r(T)g(T)w + s(T)h(T)w = r(T)g(T)w

Both of the above formulas give us w = r(T)g(T)v, and so w is uniquely determined by v. Similarly u is uniquely determined by v. Hence V = U ⊕ W, as required.

Theorem 17.10: Suppose in Theorem 17.9 that f(t) is the minimum polynomial of T [and g(t) and h(t) are monic]. Then g(t) and h(t) are the minimum polynomials of T₁ and T₂, respectively [here T₁ is the restriction of T to U and T₂ is the restriction of T to W].

17.43 Prove Theorem 17.10.

Let m₁(t) and m₂(t) be the minimal polynomials of T₁ and T₂, respectively. Note that g(T₁) = 0 and h(T₂) = 0 because U = Ker g(T) and W = Ker h(T). Thus

m₁(t) divides g(t)  and  m₂(t) divides h(t)    (1)

By Problem 17.40, f(t) is the least common multiple of m₁(t) and m₂(t). But m₁(t) and m₂(t) are relatively prime since g(t) and h(t) are relatively prime. Accordingly, f(t) = m₁(t)m₂(t). We also have f(t) = g(t)h(t). These two equations together with (1) and the fact that all the polynomials are monic imply that g(t) = m₁(t) and h(t) = m₂(t), as required.
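Remark: Problems 17.40–17.41 are easy to see on a concrete block diagonal matrix; a sketch assuming SymPy, with A and B chosen arbitrarily so that their minimum polynomials share a factor.

```python
from sympy import Matrix, diag, symbols, factor, eye, zeros

t = symbols('t')
A = Matrix([[2, 1], [0, 2]])   # minimum polynomial (t - 2)^2
B = Matrix([[2, 0], [0, 3]])   # minimum polynomial (t - 2)(t - 3)
M = diag(A, B)                 # block diagonal, as in Theorem 17.5

print(factor(M.charpoly(t).as_expr()))    # (t - 2)**3 * (t - 3): the product of the two

# lcm((t-2)^2, (t-2)(t-3)) = (t-2)^2 (t-3); check that it annihilates M:
print((M - 2*eye(4))**2 * (M - 3*eye(4)) == zeros(4, 4))   # True
```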
17.44 Prove the Primary Decomposition Theorem 17.6.

The proof is by induction on r. The case r = 1 is trivial. Suppose that the theorem has been proved for r − 1. By Theorem 17.9 we can write V as the direct sum of T-invariant subspaces W₁ and V₁, where W₁ is the kernel of f₁(T)^{n₁} and where V₁ is the kernel of f₂(T)^{n₂} ··· f_r(T)^{n_r}. By Theorem 17.10, the minimal polynomials of the restrictions of T to W₁ and V₁ are, respectively, f₁(t)^{n₁} and f₂(t)^{n₂} ··· f_r(t)^{n_r}.

Denote the restriction of T to V₁ by T₁. By the inductive hypothesis, V₁ is the direct sum of subspaces W₂, ..., W_r such that Wᵢ is the kernel of fᵢ(T₁)^{n_i} and such that fᵢ(t)^{n_i} is the minimal polynomial of the restriction of T₁ to Wᵢ. But the kernel of fᵢ(T)^{n_i}, for i = 2, ..., r, is necessarily contained in V₁, since fᵢ(t)^{n_i} divides f₂(t)^{n₂} ··· f_r(t)^{n_r}. Thus the kernel of fᵢ(T)^{n_i} is the same as the kernel of fᵢ(T₁)^{n_i}, which is Wᵢ. Also, the restriction of T to Wᵢ is the same as the restriction of T₁ to Wᵢ (for i = 2, ..., r); hence fᵢ(t)^{n_i} is also the minimal polynomial of the restriction of T to Wᵢ. Thus V = W₁ ⊕ W₂ ⊕ ··· ⊕ W_r is the desired decomposition of T.

17.45 Prove Theorem 17.7.

Suppose m(t) is a product of distinct linear polynomials; say, m(t) = (t − λ₁)(t − λ₂) ··· (t − λ_r), where the λᵢ are distinct scalars. By the Primary Decomposition Theorem, V is the direct sum of subspaces W₁, ..., W_r, where Wᵢ = Ker(T − λᵢI). Thus if v ∈ Wᵢ, then (T − λᵢI)(v) = 0, or T(v) = λᵢv. In other words, every vector in Wᵢ is an eigenvector belonging to the eigenvalue λᵢ. By Theorem 17.3, the union of bases for W₁, ..., W_r is a basis of V. This basis consists of eigenvectors, and so T is diagonalizable.

Conversely, suppose T is diagonalizable, i.e., V has a basis consisting of eigenvectors of T. Let λ₁, ..., λ_s be the distinct eigenvalues of T. Then the operator f(T) = (T − λ₁I)(T − λ₂I) ··· (T − λ_sI) maps each basis vector into 0. Thus f(T) = 0, and hence the minimum polynomial m(t) of T divides the polynomial f(t) = (t − λ₁)(t − λ₂) ··· (t − λ_s). Accordingly, m(t) is a product of distinct linear polynomials.

17.46 Suppose A ≠ I is a square matrix for which A³ = I. Determine whether or not A is similar to a diagonal matrix if A is a matrix over (i) the real field R, (ii) the complex field C.

Since A³ = I, A is a zero of the polynomial f(t) = t³ − 1 = (t − 1)(t² + t + 1). The minimal polynomial m(t) of A cannot be t − 1, since A ≠ I. Hence m(t) = t² + t + 1 or m(t) = t³ − 1. Since neither polynomial is a product of linear polynomials over R, A is not diagonalizable over R. On the other hand, each of the polynomials is a product of distinct linear polynomials over C. Hence A is diagonalizable over C.

17.4 NILPOTENT OPERATORS AND MATRICES

17.47 Define a nilpotent operator and a nilpotent matrix.

A linear operator T: V → V is termed nilpotent if Tⁿ = 0 for some positive integer n; we call k the index of nilpotency of T if Tᵏ = 0 but Tᵏ⁻¹ ≠ 0. Analogously, a square matrix A is termed nilpotent if Aⁿ = 0 for some positive integer n, and of index k if Aᵏ = 0 but Aᵏ⁻¹ ≠ 0.

Problems 17.48–17.51 refer to an n-square nilpotent matrix A of index k.

17.48 What is the minimum polynomial m(t) of A?

Since Aᵏ = 0 but Aᵏ⁻¹ ≠ 0, we have m(t) = tᵏ.

17.49 Find the eigenvalues of A.

Since m(t) = tᵏ is the minimum polynomial of A, only 0 is an eigenvalue of A.

17.50 Show that k ≤ n, i.e., that the index of A does not exceed its order.

The degree of the characteristic polynomial Δ(t) of A is n, and the minimum polynomial m(t) = tᵏ divides Δ(t); hence k = deg m(t) ≤ n.
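Remark: Problems 17.48–17.50 can be tried out numerically; a sketch assuming NumPy, where nilpotency_index is a helper name of our own.

```python
import numpy as np

def nilpotency_index(A):
    """Least k with A^k = 0; by Problem 17.50 it is at most n for an n-square A."""
    n = A.shape[0]
    P = np.eye(n)
    for k in range(1, n + 1):
        P = P @ A
        if not P.any():
            return k
    raise ValueError("matrix is not nilpotent")

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])          # strictly triangular, hence nilpotent

print(nilpotency_index(A))         # 3
print(np.linalg.eigvals(A))        # all eigenvalues are 0, as in Problem 17.49
```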
Lemma 17.13: Let T: V → V be linear. Suppose, for v ∈ V, Tᵏ(v) = 0 but Tᵏ⁻¹(v) ≠ 0. Then:
(i) The set S = {v, T(v), ..., Tᵏ⁻¹(v)} is linearly independent.
(ii) The subspace W generated by S is T-invariant.
(iii) The restriction T̂ of T to W is nilpotent of index k.
(iv) Relative to the basis {Tᵏ⁻¹(v), ..., T(v), v} of W, the matrix of T̂ is the k-square canonical matrix

$$N = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix}$$

which has 1s on the superdiagonal and 0s elsewhere. Thus the k-square matrix N is nilpotent of index k.

17.64 Prove (i) of Lemma 17.13.

Suppose

av + a₁T(v) + a₂T²(v) + ··· + a_{k−1}Tᵏ⁻¹(v) = 0    (1)

Applying Tᵏ⁻¹ to (1) and using Tᵏ(v) = 0, we obtain aTᵏ⁻¹(v) = 0; since Tᵏ⁻¹(v) ≠ 0, a = 0. Now applying Tᵏ⁻² to (1) and using Tᵏ(v) = 0 and a = 0, we find a₁Tᵏ⁻¹(v) = 0; hence a₁ = 0. Next applying Tᵏ⁻³ to (1) and using Tᵏ(v) = 0 and a = a₁ = 0, we obtain a₂Tᵏ⁻¹(v) = 0; hence a₂ = 0. Continuing this process, we find that all the a's are 0; hence S is independent.

17.65 Prove (ii) of Lemma 17.13.

Let w ∈ W. Then w = bv + b₁T(v) + b₂T²(v) + ··· + b_{k−1}Tᵏ⁻¹(v). Using Tᵏ(v) = 0, we have that T(w) = bT(v) + b₁T²(v) + ··· + b_{k−2}Tᵏ⁻¹(v) ∈ W. Thus W is T-invariant.

17.66 Prove (iii) of Lemma 17.13.

By hypothesis, Tᵏ(v) = 0. Hence, for i = 0, ..., k − 1, T̂ᵏ(Tⁱ(v)) = Tᵏ⁺ⁱ(v) = 0. That is, applying T̂ᵏ to each generator of W, we obtain 0; hence T̂ᵏ = 0, and so T̂ is nilpotent of index at most k. On the other hand, T̂ᵏ⁻¹(v) = Tᵏ⁻¹(v) ≠ 0; hence T̂ is nilpotent of index exactly k.

17.67 Prove (iv) of Lemma 17.13.

For the basis {Tᵏ⁻¹(v), Tᵏ⁻²(v), ..., T(v), v} of W,

T̂(Tᵏ⁻¹(v)) = Tᵏ(v) = 0
T̂(Tᵏ⁻²(v)) = Tᵏ⁻¹(v)
T̂(Tᵏ⁻³(v)) = Tᵏ⁻²(v)
 ...
T̂(v) = T(v)

Hence the matrix of T̂ in this basis is N.

17.68 Let T: V → V be linear. Let U = Ker Tⁱ and W = Ker Tⁱ⁺¹. Show that (i) U ⊆ W, (ii) T(W) ⊆ U.

(i) Suppose u ∈ U = Ker Tⁱ. Then Tⁱ(u) = 0, and so Tⁱ⁺¹(u) = T(Tⁱ(u)) = T(0) = 0. Thus u ∈ Ker Tⁱ⁺¹ = W. But this is true for every u ∈ U; hence U ⊆ W.
(ii) Similarly, if w ∈ W = Ker Tⁱ⁺¹, then Tⁱ⁺¹(w) = 0. Thus Tⁱ(T(w)) = Tⁱ⁺¹(w) = 0, and so T(W) ⊆ U.

17.69 Let T: V → V be linear. Let X = Ker Tⁱ⁻², Y = Ker Tⁱ⁻¹, and Z = Ker Tⁱ. By the preceding problem, X ⊆ Y ⊆ Z. Suppose

{u₁, ..., u_r},  {u₁, ..., u_r, v₁, ..., v_s},  {u₁, ..., u_r, v₁, ..., v_s, w₁, ..., w_t}

are bases of X, Y, and Z, respectively. Show that S = {u₁, ..., u_r, T(w₁), ..., T(w_t)} is contained in Y and is linearly independent.

By the preceding problem, T(Z) ⊆ Y, and hence S ⊆ Y. Now suppose S is linearly dependent. Then there exists a relation

a₁u₁ + ··· + a_ru_r + b₁T(w₁) + ··· + b_tT(w_t) = 0

where at least one coefficient is not zero. Furthermore, since {uᵢ} is independent, at least one of the bⱼ must be nonzero. Transposing, we find

b₁T(w₁) + ··· + b_tT(w_t) = −a₁u₁ − ··· − a_ru_r ∈ X = Ker Tⁱ⁻²

Hence Tⁱ⁻²(b₁T(w₁) + ··· + b_tT(w_t)) = 0. Thus Tⁱ⁻¹(b₁w₁ + ··· + b_tw_t) = 0, and so b₁w₁ + ··· + b_tw_t ∈ Y = Ker Tⁱ⁻¹. Since {uᵢ, vⱼ} generates Y, we obtain a relation among the uᵢ, vⱼ, and wₗ where one of the coefficients, i.e., one of the bⱼ, is not zero. This contradicts the fact that {uᵢ, vⱼ, wₗ} is independent. Hence S must also be independent.

17.70 Prove Theorem 17.11: Let T: V → V be a nilpotent operator of index k. Then T has a block diagonal matrix representation whose diagonal entries are canonical nilpotent blocks N [as in Lemma 17.13]. There is at least one N of order k, and all other N are of orders ≤ k. The number of N of each possible order is uniquely determined by T. Moreover, the total number of N of all orders is the nullity of T.

Suppose dim V = n. Let W₁ = Ker T, W₂ = Ker T², ..., W_k = Ker Tᵏ, and set mᵢ = dim Wᵢ for i = 1, ..., k. Since T is of index k, W_k = V and W_{k−1} ≠ V, and so m_{k−1} < m_k = n. By Problem 17.68, W₁ ⊆ W₂ ⊆ ··· ⊆ W_k = V. Starting with a basis of W_{k−1} extended to a basis of V, and repeatedly applying T and extending as in Problem 17.69, we obtain a basis of V arranged in chains v, T(v), T²(v), ..., T^{j−1}(v) with T^j(v) = 0. By Lemma 17.13, each chain spans a T-invariant subspace on which T is represented by a canonical block N whose order is the length j of the chain; hence, by Theorem 17.5, T is represented by a block diagonal matrix whose diagonal entries are such blocks N. Since T has index k, at least one chain has length k and none is longer; thus there is at least one N of order k, and all other N are of orders ≤ k.

This construction produces exactly 2mᵢ − m_{i+1} − m_{i−1} blocks of order i (taking m₀ = 0 and mᵢ = n for i ≥ k). Since the numbers mᵢ = dim Ker Tⁱ depend only on T, the number of N of each possible order is uniquely determined by T. Finally, the last vector of each chain lies in Ker T, and these vectors form a basis of W₁ = Ker T; hence the total number of blocks N equals m₁, the nullity of T.

17.74 Suppose A is nilpotent of index k. Show that the transpose Aᵀ is also nilpotent of index k.

Since Aᵏ = 0, we have (Aᵀ)ᵏ = (Aᵏ)ᵀ = 0ᵀ = 0; and since Aᵏ⁻¹ ≠ 0, (Aᵀ)ᵏ⁻¹ = (Aᵏ⁻¹)ᵀ ≠ 0. Thus Aᵀ is nilpotent of index k.

17.75 Suppose A and B are similar. Show that A is nilpotent of index k if and only if B is nilpotent of index k.

Suppose B = P⁻¹AP. If Aʳ = 0, then Bʳ = (P⁻¹AP)ʳ = P⁻¹AʳP = P⁻¹0P = 0. Similarly, if Bʳ = 0, then Aʳ = 0. Thus A is nilpotent if and only if B is nilpotent, and, in such a case, they have the same index.
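Remark: The uniqueness part of Theorem 17.11 can be made concrete: the kernel dimensions mᵢ = dim Ker Tⁱ determine the number of blocks of each order via the count 2mᵢ − m_{i+1} − m_{i−1} used in the proof above, and their sum is the nullity m₁. A sketch assuming NumPy; block_counts is a helper name of our own.

```python
import numpy as np

def block_counts(A):
    """Number of canonical blocks N of each order in the form of Theorem 17.11,
    computed from m_i = dim Ker A^i = n - rank(A^i)."""
    n = A.shape[0]
    m = [0]                        # m_0 = 0
    P = np.eye(n)
    for _ in range(n + 1):
        P = P @ A
        m.append(n - np.linalg.matrix_rank(P))
    return {i: 2*m[i] - m[i + 1] - m[i - 1] for i in range(1, n + 1)}

A = np.zeros((5, 5))               # diag(N_3, N_2): one block of order 3, one of order 2
A[0, 1] = A[1, 2] = A[3, 4] = 1
print(block_counts(A))             # {1: 0, 2: 1, 3: 1, 4: 0, 5: 0}
```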
17.5 JORDAN CANONICAL FORM

17.76 Define a Jordan block J of order k belonging to the eigenvalue λ.

J is the k-square matrix with λ's on the diagonal, 1s on the superdiagonal, and 0s elsewhere:

$$J = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda & 1 & \cdots & 0 & 0 \\ \vdots & & & & & \vdots \\ 0 & 0 & 0 & \cdots & \lambda & 1 \\ 0 & 0 & 0 & \cdots & 0 & \lambda \end{pmatrix}$$

17.77 Write down the Jordan blocks of orders 1, 2, 3, and 4 belonging to the eigenvalue λ = 7.

The matrices follow:

$$(7) \qquad \begin{pmatrix} 7 & 1 \\ 0 & 7 \end{pmatrix} \qquad \begin{pmatrix} 7 & 1 & 0 \\ 0 & 7 & 1 \\ 0 & 0 & 7 \end{pmatrix} \qquad \begin{pmatrix} 7 & 1 & 0 & 0 \\ 0 & 7 & 1 & 0 \\ 0 & 0 & 7 & 1 \\ 0 & 0 & 0 & 7 \end{pmatrix}$$

17.78 Show how a Jordan block J may be written as the sum of a scalar matrix and a canonical nilpotent block N.

J = λI + N, as follows:

$$\begin{pmatrix} \lambda & 1 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & & \ddots & 1 \\ 0 & 0 & \cdots & \lambda \end{pmatrix} = \begin{pmatrix} \lambda & 0 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & & \ddots & 0 \\ 0 & 0 & \cdots & \lambda \end{pmatrix} + \begin{pmatrix} 0 & 1 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & \ddots & 1 \\ 0 & 0 & \cdots & 0 \end{pmatrix}$$

Problems 17.79–17.81 refer to the following Jordan block A of order 4:

$$A = \begin{pmatrix} 7 & 1 & 0 & 0 \\ 0 & 7 & 1 & 0 \\ 0 & 0 & 7 & 1 \\ 0 & 0 & 0 & 7 \end{pmatrix}$$

17.79 What are the characteristic polynomial Δ(t) and minimum polynomial m(t) of A? What are the eigenvalues of A?

Both Δ(t) and m(t) are equal to (t − 7)⁴; that is, Δ(t) = m(t) = (t − 7)⁴. Thus λ = 7 is the only eigenvalue.

17.80 Find a basis for the eigenspace of the eigenvalue λ = 7.

Substituting t = 7 in the matrix equation (tI − A)x = 0 yields the homogeneous system

$$\begin{pmatrix} 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ s \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \qquad \text{or} \qquad y = 0,\ z = 0,\ s = 0$$

There is only one free variable, x; hence v = (1, 0, 0, 0) forms a basis for the eigenspace of λ = 7.

17.81 What are the algebraic multiplicity and geometric multiplicity of the eigenvalue λ = 7?

Since Δ(t) = (t − 7)⁴, the algebraic multiplicity is 4. Since the eigenspace of λ = 7 has dimension one, the geometric multiplicity of λ = 7 is 1.

17.82 Define a Jordan matrix M.

A matrix M is a Jordan matrix if M is a block diagonal matrix whose diagonal blocks, say J₁, J₂, ..., J_s, are Jordan blocks. [We emphasize that more than one diagonal block may belong to the same eigenvalue.]

17.83 Define equivalent Jordan matrices.

A Jordan matrix M₁ is equivalent to a Jordan matrix M₂ if M₂ can be obtained from M₁ by rearranging the diagonal blocks.

Remark: We usually do not distinguish between equivalent Jordan matrices. In particular, the term "unique Jordan form" means unique up to equivalence.

Problems 17.84–17.87 refer to the following Jordan matrix:

$$M = \begin{pmatrix} -3 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -3 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 5 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 5 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 5 \end{pmatrix}$$

17.84 Find all Jordan matrices equivalent to M.

Writing Jₖ(λ) for the Jordan block of order k belonging to λ, we have M = diag(J₃(−3), J₂(5), J₂(5)). There are exactly two other ways of arranging the diagonal blocks, namely diag(J₂(5), J₃(−3), J₂(5)) and diag(J₂(5), J₂(5), J₃(−3)).

17.85 Find the characteristic polynomial Δ(t) and eigenvalues of M.

Here Δ(t) = (t + 3)³(t − 5)⁴. The exponent 3 comes from the fact that there are three −3s on the diagonal, and the exponent 4 from the fact that there are four 5s on the diagonal. In particular, λ₁ = −3 and λ₂ = 5 are the eigenvalues.

17.86 Find the minimum polynomial m(t) of M.

Here m(t) = (t + 3)³(t − 5)². The exponent 3 comes from the fact that 3 is the order of the largest block belonging to λ₁ = −3, and the exponent 2 from the fact that 2 is the order of the largest block belonging to λ₂ = 5. [Alternatively, m(t) is the least common multiple of the minimal polynomials of the blocks.]

17.87 Find a maximum set S of linearly independent eigenvectors of M.

Each block contributes one eigenvector to S. Three such eigenvectors are

v₁ = (1, 0, 0, 0, 0, 0, 0), v₂ = (0, 0, 0, 1, 0, 0, 0), v₃ = (0, 0, 0, 0, 0, 1, 0)

which correspond to the first, second, and third blocks, respectively. The entry 1 in each vector is at the position of the first row of the corresponding block.
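Remark: The decomposition J = λI + N of Problem 17.78 and the multiplicity counts of Problems 17.80–17.81 can be checked mechanically; a sketch assuming NumPy.

```python
import numpy as np

N = np.eye(4, k=1)                 # canonical nilpotent block: 1s on the superdiagonal
J = 7*np.eye(4) + N                # the Jordan block A of Problems 17.79-17.81

assert not np.linalg.matrix_power(N, 4).any()   # N is nilpotent of index 4 ...
assert np.linalg.matrix_power(N, 3).any()       # ... and not of any smaller index
print(np.linalg.matrix_rank(J - 7*np.eye(4)))   # 3, so the eigenspace of 7 has dim 4 - 3 = 1
```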
Problems 17.88–17.96 refer to the following Jordan matrices [again writing Jₖ(λ) for the Jordan block of order k belonging to λ]:

A = diag(J₃(4), J₂(4), J₂(2), J₁(2))  and  B = diag(J₂(4), J₂(4), J₁(4), J₃(2))

17.88 Find the characteristic polynomial Δ(t) and the eigenvalues of A.

Here Δ(t) = (t − 4)⁵(t − 2)³, since there are five 4s on the diagonal and three 2s on the diagonal. Thus λ₁ = 4 and λ₂ = 2 are the eigenvalues of A.

17.89 Find the characteristic polynomial Δ(t) and eigenvalues of B.

Here Δ(t) = (t − 4)⁵(t − 2)³, since there are five 4s and three 2s on the diagonal. Thus λ₁ = 4 and λ₂ = 2 are the eigenvalues of B.

17.90 Are A and B equivalent Jordan matrices?

Although A and B have the same characteristic polynomial and the same eigenvalues, A and B are not equivalent, since their diagonal blocks are different.

17.91 Find the minimum polynomial m(t) of A.

Here m(t) = (t − 4)³(t − 2)², since 3 is the order of the largest block in A belonging to λ₁ = 4 and 2 is the order of the largest block in A belonging to λ₂ = 2.

17.92 Find the dimension d₁ of the eigenspace E₁ of λ₁ = 4 in A. [In other words, find the geometric multiplicity of λ₁ = 4 in A.] Also find a basis of the eigenspace E₁.

Here d₁ = 2, since there are two blocks in A belonging to λ₁ = 4. Also, v₁ = (1, 0, 0, 0, 0, 0, 0, 0) and v₂ = (0, 0, 0, 1, 0, 0, 0, 0) form a basis for E₁.

17.93 Find the dimension d₂ of the eigenspace E₂ of λ₂ = 2 in A. Also find a basis of the eigenspace E₂.

There are two blocks in A belonging to λ₂ = 2; hence d₂ = 2. Also, w₁ = (0, 0, 0, 0, 0, 1, 0, 0) and w₂ = (0, 0, 0, 0, 0, 0, 0, 1) form a basis of E₂.

Remark: The entry 1 in each of the above eigenvectors of A is at the position of the first row of the corresponding block.

17.94 Find the minimum polynomial m(t) of B.

Note 2 is the order of the largest block in B belonging to λ₁ = 4, and 3 is the order of the largest block in B belonging to λ₂ = 2; hence m(t) = (t − 4)²(t − 2)³.

17.95 Find the dimension d₁ of the eigenspace E₁ of λ₁ = 4 in B. [Note d₁ is the geometric multiplicity of λ₁ = 4.] Also find a basis of the eigenspace E₁.

There are three blocks in B belonging to λ₁ = 4; hence d₁ = 3. Also, v₁ = (1, 0, 0, 0, 0, 0, 0, 0), v₂ = (0, 0, 1, 0, 0, 0, 0, 0), and v₃ = (0, 0, 0, 0, 1, 0, 0, 0) form a basis of E₁.

17.96 Find the dimension d₂ of the eigenspace E₂ of λ₂ = 2 in B, and find a basis of E₂.

There is only one block in B belonging to λ₂ = 2; hence d₂ = 1. Also, w = (0, 0, 0, 0, 0, 1, 0, 0) forms a basis of E₂.

17.97 Find all (nonequivalent) Jordan matrices with characteristic polynomial Δ(t) = (t − 7)⁴.

There are exactly five such matrices:

A₁ = J₄(7), A₂ = diag(J₃(7), J₁(7)), A₃ = diag(J₂(7), J₂(7)), A₄ = diag(J₂(7), J₁(7), J₁(7)), A₅ = diag(J₁(7), J₁(7), J₁(7), J₁(7))

Since deg Δ(t) = 4, all the matrices are of order 4. Also, only 7 appears on the diagonal, since λ = 7 is the only eigenvalue.

17.98 Find the minimum polynomial of each of the matrices in Problem 17.97.

Let mᵢ(t) denote the minimum polynomial of Aᵢ. Then mᵢ(t) = (t − 7)ᵏ, where k is the order of the largest block. Thus m₁(t) = (t − 7)⁴, m₂(t) = (t − 7)³, m₃(t) = m₄(t) = (t − 7)², m₅(t) = t − 7.

17.99 Find the geometric multiplicity of the eigenvalue λ = 7 in each of the matrices in Problem 17.97.

Let dᵢ denote the geometric multiplicity of λ = 7 in Aᵢ. Then dᵢ equals the number of blocks in Aᵢ (belonging to λ = 7). Thus d₁ = 1, d₂ = 2, d₃ = 2, d₄ = 3, d₅ = 4.
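Remark: Jordan forms like those read off in Problems 17.88–17.99 can also be computed mechanically; a sketch assuming SymPy, whose jordan_form method returns a change-of-basis matrix P together with J.

```python
from sympy import Matrix

A = Matrix([[5, 1],
            [-1, 3]])        # characteristic polynomial (t - 4)^2, only one eigenvector
P, J = A.jordan_form()       # P and J satisfy A = P*J*P**(-1)
print(J)                     # Matrix([[4, 1], [0, 4]]): a single block J_2(4)
assert A == P * J * P.inv()
```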
Theorem 17.14: Let T: V → V be a linear operator whose characteristic and minimum polynomials are, respectively,

Δ(t) = (t − λ₁)^{n₁} ··· (t − λ_r)^{n_r}  and  m(t) = (t − λ₁)^{m₁} ··· (t − λ_r)^{m_r}

where the λᵢ are distinct scalars. Then T has a unique Jordan matrix representation M [called the Jordan canonical form of T]. Furthermore, the blocks Jᵢⱼ of M belonging to the eigenvalue λᵢ have the following properties:
(i) There is at least one Jᵢⱼ of order mᵢ; all other Jᵢⱼ are of order ≤ mᵢ.
(ii) The sum of the orders of the Jᵢⱼ is nᵢ.
(iii) The number of Jᵢⱼ equals the geometric multiplicity of λᵢ.
(iv) The number of Jᵢⱼ of each possible order is uniquely determined by T.

Theorem 17.15 [Alternate Form of Theorem 17.14]: Let A be a matrix whose characteristic polynomial Δ(t) is a product of linear factors. Then A is similar to a unique Jordan matrix M with the above properties. [The matrix M is called the Jordan canonical form of A.]

17.100 Prove Theorem 17.14, which represents the main content of this section.

By the Primary Decomposition Theorem, T is decomposable into operators T₁, ..., T_r, that is, T = T₁ ⊕ ··· ⊕ T_r, where (t − λᵢ)^{mᵢ} is the minimal polynomial of Tᵢ. Thus, in particular, (T₁ − λ₁I)^{m₁} = 0, ..., (T_r − λ_rI)^{m_r} = 0. Set Nᵢ = Tᵢ − λᵢI. Then, for i = 1, ..., r, Tᵢ = Nᵢ + λᵢI, where Nᵢ^{mᵢ} = 0. That is, Tᵢ is the sum of the scalar operator λᵢI and a nilpotent operator Nᵢ, which is of index mᵢ since (t − λᵢ)^{mᵢ} is the minimal polynomial of Tᵢ.

Now, by Theorem 17.11 on nilpotent operators, we can choose a basis so that Nᵢ is in canonical form. In this basis, Tᵢ = Nᵢ + λᵢI is represented by a block diagonal matrix Mᵢ whose diagonal entries are Jordan blocks Jᵢⱼ. The direct sum M of the matrices Mᵢ is in Jordan canonical form and, by Theorem 17.5, is a matrix representation of T.

Lastly, we must show that the blocks Jᵢⱼ satisfy the required properties. Property (i) follows from the fact that Nᵢ is of index mᵢ. Property (ii) is true since T and M have the same characteristic polynomial. Property (iii) is true since the nullity of Nᵢ = Tᵢ − λᵢI is equal to the geometric multiplicity of the eigenvalue λᵢ. Property (iv) follows from the fact that the Tᵢ, and hence the Nᵢ, are uniquely determined by T.

17.101 Suppose the characteristic and minimum polynomials of an operator T are, respectively, Δ(t) = (t − 2)⁴(t − 3)³ and m(t) = (t − 2)²(t − 3)². Find all possible Jordan canonical forms of T.

Since Δ(t) = (t − 2)⁴(t − 3)³, there must be four 2s on the diagonal and three 3s. Also, since m(t) = (t − 2)²(t − 3)², there must be a block of order 2, and none larger, belonging to the eigenvalue 2, and a block of order 2, and none larger, belonging to the eigenvalue 3. There are two possibilities:

diag(J₂(2), J₂(2), J₂(3), J₁(3))  and  diag(J₂(2), J₁(2), J₁(2), J₂(3), J₁(3))

The first matrix occurs if T has two independent eigenvectors belonging to its eigenvalue 2; the second occurs if T has three independent eigenvectors belonging to 2.

17.102 Find all possible Jordan canonical forms for a linear map T: V → V whose characteristic polynomial is Δ(t) = (t − 7)⁵ and whose minimum polynomial is m(t) = (t − 7)².

Since Δ(t) = (t − 7)⁵ has degree 5, the matrix must have order 5 and have five 7s on the diagonal. Also, since m(t) = (t − 7)², there must be a block of order 2, and none larger. There are two possibilities:

diag(J₂(7), J₂(7), J₁(7))  and  diag(J₂(7), J₁(7), J₁(7), J₁(7))

The first occurs if λ = 7 has geometric multiplicity 3, and the second if λ = 7 has geometric multiplicity 4.

17.103 Suppose T: V → V has characteristic polynomial Δ(t) = (t + 8)⁴(t − 1)³ and minimum polynomial m(t) = (t + 8)³(t − 1)². Find the Jordan canonical form M of T.

Since deg Δ(t) = 7, the order of M is 7. Since Δ(t) = (t + 8)⁴(t − 1)³, M has four −8s and three 1s on the diagonal. Also, since m(t) = (t + 8)³(t − 1)², there must be a block of order 3, and none larger, belonging to −8, and a block of order 2, and none larger, belonging to 1. There is only one possibility:

M = diag(J₃(−8), J₁(−8), J₂(1), J₁(1))
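Remark: The enumerations in Problems 17.101–17.102 are really partition counts: for each eigenvalue, the block orders form a partition of the algebraic multiplicity whose largest part is the exponent in the minimum polynomial. A sketch assuming SymPy's partition iterator; jordan_block_orders is a helper name of our own.

```python
from sympy.utilities.iterables import partitions

def jordan_block_orders(alg_mult, min_exp):
    """Partitions of the algebraic multiplicity whose largest part is the
    exponent in the minimum polynomial: one per possible Jordan form."""
    result = []
    for p in partitions(alg_mult):
        if max(p) == min_exp:
            result.append(sorted((k for k, c in p.items() for _ in range(c)),
                                 reverse=True))
    return result

# Problem 17.102: char. poly (t - 7)^5, min. poly (t - 7)^2 -> two possibilities
print(jordan_block_orders(5, 2))    # [[2, 2, 1], [2, 1, 1, 1]]
```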
17.104 Determine all possible Jordan canonical forms for a linear operator T: V → V whose characteristic polynomial is Δ(t) = (t − 2)³(t − 5)².

Since t − 2 has exponent 3 in Δ(t), 2 must appear three times on the main diagonal. Similarly, 5 must appear twice. Thus the possible Jordan canonical forms are:

B₁ = diag(J₃(2), J₂(5))
B₂ = diag(J₂(2), J₁(2), J₂(5))
B₃ = diag(J₁(2), J₁(2), J₁(2), J₂(5))
B₄ = diag(J₃(2), J₁(5), J₁(5))
B₅ = diag(J₂(2), J₁(2), J₁(5), J₁(5))
B₆ = diag(J₁(2), J₁(2), J₁(2), J₁(5), J₁(5))

Problems 17.105–17.110 refer to the above matrices B₁, B₂, ..., B₆. Also, mᵢ(t) denotes the minimum polynomial of Bᵢ, and Eᵢ and Fᵢ denote, respectively, the eigenspaces of the eigenvalues 2 and 5 in Bᵢ.

17.105 Find m₁(t) and a basis for E₁ and F₁ in the matrix B₁.

Here m₁(t) = (t − 2)³(t − 5)². Also, u = (1, 0, 0, 0, 0) forms a basis for E₁, and v = (0, 0, 0, 1, 0) forms a basis for F₁.

17.106 Find m₂(t) and the dimension of E₂ and of F₂ in the matrix B₂.

Here m₂(t) = (t − 2)²(t − 5)². Also, dim(E₂) = 2 and dim(F₂) = 1.

17.107 Find m₃(t) and a basis for E₃ and F₃ in the matrix B₃.

We have m₃(t) = (t − 2)(t − 5)². Also, u₁ = (1, 0, 0, 0, 0), u₂ = (0, 1, 0, 0, 0), u₃ = (0, 0, 1, 0, 0) form a basis of E₃, and v = (0, 0, 0, 1, 0) forms a basis for F₃.

17.108 Find m₄(t) and a basis for E₄ and F₄ in the matrix B₄.

Here m₄(t) = (t − 2)³(t − 5). Also, u = (1, 0, 0, 0, 0) forms a basis of E₄, and v₁ = (0, 0, 0, 1, 0) and v₂ = (0, 0, 0, 0, 1) form a basis of F₄.

17.109 Find m₅(t) and the dimension of E₅ and of F₅ in the matrix B₅.

Here m₅(t) = (t − 2)²(t − 5), and dim(E₅) = 2 and dim(F₅) = 2.

17.110 Find m₆(t) and the dimension of E₆ and of F₆ in the matrix B₆.

Here m₆(t) = (t − 2)(t − 5), and dim(E₆) = 3 and dim(F₆) = 2.

17.111 Suppose A is a 5-square matrix with minimum polynomial m(t) = (t − 2)². Determine all possible Jordan canonical forms M for A.

M must have one Jordan block of order 2, and the others must be of order 2 or 1. Thus there are only two possibilities:

M = diag(J₂(2), J₂(2), J₁(2))  or  M = diag(J₂(2), J₁(2), J₁(2), J₁(2))

Note that all the diagonal entries must be 2, since 2 is the only eigenvalue. The first matrix occurs when A has three independent eigenvectors, and the second when A has four independent eigenvectors.

17.112 Let A be a real [square] matrix. Is A similar to a Jordan matrix? If not, give a counterexample.

A is similar to a Jordan matrix only if the characteristic polynomial Δ(t) of A is a product of linear factors. This is not always true over R. For example, the characteristic polynomial of

$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

is Δ(t) = t² + 1. Hence this matrix A is not similar [over R] to a Jordan matrix.

17.113 Let B be a complex [square] matrix. Is B similar to a Jordan matrix? If not, give a counterexample.

Let Δ(t) be the characteristic polynomial of B. By the Fundamental Theorem of Algebra, Δ(t) factors into linear polynomials over the complex field C. Thus every complex matrix B is similar to a Jordan matrix.

17.6 QUOTIENT SPACES AND TRIANGULAR FORM

17.114 Let W be a subspace of a vector space V. Define the cosets of W.

For any vector v ∈ V, we write v + W for the set of sums v + w with w ∈ W; that is, v + W = {v + w : w ∈ W}. These sets are called the cosets of W in V. The dimension of v + W is defined to be the dimension of W.

17.115 Let W be the subspace of R² defined by W = {(a, b) : a = b}. Describe the cosets of W.

W is the line in the plane R² given by the equation x − y = 0. We can view v + W as a translation of the line, obtained by adding the vector v to each point in W, as pictured in Fig. 17-2. Note that v + W is also a line and is parallel to W. Thus the cosets of W in R² are precisely all lines parallel to W.

[Fig. 17-2: the coset v + W as a translate of the line W.]

17.116 Let W be the solution space of the homogeneous equation 2x + 3y + 4z = 0. Describe the cosets of W in R³.

W is a plane through the origin O = (0, 0, 0), and the cosets of W are the planes parallel to W. [See Fig. 17-3.] Equivalently, the cosets of W are the solution sets of the family of equations 2x + 3y + 4z = k, k ∈ R. In particular, the coset v + W, where v = (a, b, c), is the solution set of the linear equation

2x + 3y + 4z = 2a + 3b + 4c  or  2(x − a) + 3(y − b) + 4(z − c) = 0

[Fig. 17-3: the cosets of W as planes parallel to W.]
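Remark: The cosets in Problem 17.116 lend themselves to a quick computational test; a sketch assuming NumPy, using the membership criterion u + W = v + W ⟺ u − v ∈ W made precise in Problems 17.119–17.120 below. The helper same_coset is a name of our own.

```python
import numpy as np

normal = np.array([2.0, 3.0, 4.0])     # W is the plane 2x + 3y + 4z = 0

def same_coset(u, v):
    # u + W = v + W exactly when u - v lies in W (Problems 17.119-17.120)
    return np.isclose(normal @ (u - v), 0.0)

v = np.array([1.0, 1.0, 1.0])          # its coset is the plane 2x + 3y + 4z = 9
u = v + np.array([2.0, 0.0, -1.0])     # (2, 0, -1) lies in W, so u is in v + W
print(same_coset(u, v))                # True
print(same_coset(v, np.zeros(3)))      # False: v + W is not W itself
```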
17.117 Let V = C[0, 2], the vector space of continuous functions on the interval 0 ≤ t ≤ 2. Let W be the subset of V consisting of all functions f(t) such that f(1) = 0. Show that W is a subspace of V.

We have 0(1) = 0; hence the zero function 0 belongs to W. Suppose f, g ∈ W; then f(1) = 0 and g(1) = 0. Thus (f + g)(1) = f(1) + g(1) = 0 + 0 = 0 and (kf)(1) = kf(1) = k · 0 = 0 for any scalar k. Hence f + g ∈ W and kf ∈ W. Accordingly, W is a subspace of V.

17.118 Describe geometrically the cosets of W in V.

W consists of all continuous functions passing through the point A(1, 0) in the plane R², as pictured in Fig. 17-4(a). A coset of W consists of all continuous functions passing through a point B(1, k) for some fixed scalar k, as pictured in Fig. 17-4(b).

[Fig. 17-4: (a) elements of W; (b) elements of a coset of W.]

Theorem 17.16: Let W be a subspace of a vector space V over a field K. Then the cosets of W in V form a vector space over K with the following operations of addition and scalar multiplication:
(i) (u + W) + (v + W) = (u + v) + W
(ii) k(u + W) = ku + W, where k ∈ K

Remark: The above vector space consisting of the cosets of W in V is called the quotient space of V by W and is denoted by V/W.

Theorem 17.17: Suppose W is a subspace invariant under a linear operator T: V → V. Then T induces a linear operator T̄ on V/W defined by T̄(v + W) = T(v) + W. Moreover, if T is a zero of any polynomial, then so is T̄. Thus the minimum polynomial of T̄ divides the minimum polynomial of T.

17.119 Let W be a subspace of a vector space V. Show that the following are equivalent: (i) u ∈ v + W, (ii) u − v ∈ W, (iii) v ∈ u + W.

Suppose u ∈ v + W. Then there exists w₀ ∈ W such that u = v + w₀. Hence u − v = w₀ ∈ W. Conversely, suppose u − v ∈ W. Then u − v = w₀ where w₀ ∈ W. Hence u = v + w₀ ∈ v + W. Thus (i) and (ii) are equivalent. We also have u − v ∈ W iff −(u − v) = v − u ∈ W iff v ∈ u + W. Thus (ii) and (iii) are also equivalent.

17.120 Prove: The cosets of W in V partition V into mutually disjoint sets. That is: (i) any two cosets u + W and v + W are either identical or disjoint; and (ii) each v ∈ V belongs to a coset; in fact, v ∈ v + W. Furthermore, u + W = v + W if and only if u − v ∈ W, and so (v + w) + W = v + W for any w ∈ W.

Let v ∈ V. Since 0 ∈ W, we have v = v + 0 ∈ v + W, which proves (ii). Now suppose the cosets u + W and v + W are not disjoint; say, the vector x belongs to both u + W and v + W. Then u − x ∈ W and x − v ∈ W. The proof of (i) is complete if we show that u + W = v + W. Let u + w₀ be any element in the coset u + W. Since u − x, x − v, and w₀ belong to W,

(u + w₀) − v = (u − x) + (x − v) + w₀ ∈ W

Thus u + w₀ ∈ v + W, and hence the coset u + W is contained in the coset v + W. Similarly, v + W is contained in u + W, and so u + W = v + W. The last statement follows from the fact that u + W = v + W if and only if u ∈ v + W, and, by the preceding problem, this is equivalent to u − v ∈ W.

17.121 Show that the operations in Theorem 17.16 are well-defined; namely, show that if u + W = u′ + W and v + W = v′ + W, then (i) (u + v) + W = (u′ + v′) + W and (ii) ku + W = ku′ + W for any k ∈ K.

(i) Since u + W = u′ + W and v + W = v′ + W, both u − u′ and v − v′ belong to W. But then (u + v) − (u′ + v′) = (u − u′) + (v − v′) ∈ W. Hence (u + v) + W = (u′ + v′) + W.
(ii) Also, since u − u′ ∈ W implies k(u − u′) ∈ W, we have ku − ku′ = k(u − u′) ∈ W; hence ku + W = ku′ + W.

17.122 What is the zero element in the quotient space V/W?

For every v ∈ V, we have (0 + W) + (v + W) = v + W. Hence W = 0 + W itself is the zero element in V/W.

17.123 Let V be a vector space and W a subspace of V. Show that the natural map η: V → V/W, defined by η(v) = v + W, is linear.

For any u, v ∈ V and any k ∈ K, we have η(u + v) = (u + v) + W = (u + W) + (v + W) = η(u) + η(v) and η(kv) = kv + W = k(v + W) = kη(v). Accordingly, η is linear.
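Remark: In coordinates, the induced operator of Theorem 17.17 is visible in the block form of Theorem 17.2: if a basis of W is extended to a basis of V, the lower-right block C describes how T acts on the cosets of the added basis vectors. A small sketch, assuming SymPy; the matrix T below is an arbitrary example.

```python
from sympy import Matrix

T = Matrix([[2, 1, 3],
            [0, 2, 5],
            [0, 0, 7]])     # W = span(e1, e2) is T-invariant (zeros below the 2x2 block)

A = T[:2, :2]               # matrix of the restriction of T to W (Theorem 17.2)
C = T[2:, 2:]               # matrix of the induced operator T-bar on V/W, here (7)
print(A)
print(C)                    # minimum polynomial of T-bar is t - 7, which divides
                            # the minimum polynomial (t - 2)^2 (t - 7) of T
```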
17.124 Let W be a subspace of a vector space V. Suppose {w₁, ..., w_r} is a basis of W and the set of cosets {v̄₁, ..., v̄_s}, where v̄ⱼ = vⱼ + W, is a basis of the quotient space. Show that B = {v₁, ..., v_s, w₁, ..., w_r} is a basis of V. Thus dim V = dim W + dim(V/W).

Suppose u ∈ V. Since {v̄ⱼ} is a basis of V/W, ū = u + W = a₁v̄₁ + a₂v̄₂ + ··· + a_sv̄_s. Hence u = a₁v₁ + ··· + a_sv_s + w, where w ∈ W. Since {wᵢ} is a basis of W, u = a₁v₁ + ··· + a_sv_s + b₁w₁ + ··· + b_rw_r. Accordingly, B generates V.

We now show that B is linearly independent. Suppose

c₁v₁ + ··· + c_sv_s + d₁w₁ + ··· + d_rw_r = 0    (1)

Then c₁v̄₁ + ··· + c_sv̄_s = 0̄ = W. Since {v̄ⱼ} is independent, the c's are all 0. Substituting into (1), we find d₁w₁ + ··· + d_rw_r = 0. Since {wᵢ} is independent, the d's are all 0. Thus B is linearly independent and therefore a basis of V.

17.125 Prove Theorem 17.17.

We first show that T̄ is well-defined, i.e., if u + W = v + W, then T̄(u + W) = T̄(v + W). If u + W = v + W, then u − v ∈ W and, since W is T-invariant, T(u − v) = T(u) − T(v) ∈ W. Accordingly, T̄(u + W) = T(u) + W = T(v) + W = T̄(v + W), as required.

We next show that T̄ is linear. We have T̄((u + W) + (v + W)) = T̄(u + v + W) = T(u + v) + W = T(u) + T(v) + W = (T(u) + W) + (T(v) + W) = T̄(u + W) + T̄(v + W), and T̄(k(u + W)) = T̄(ku + W) = T(ku) + W = kT(u) + W = k(T(u) + W) = kT̄(u + W). Thus T̄ is linear.

Now, for any coset u + W in V/W, T̄²(u + W) = T̄(T̄(u + W)) = T̄(T(u) + W) = T²(u) + W; hence (T²)‾ = T̄². Similarly (Tⁿ)‾ = T̄ⁿ for any n. Thus, for any polynomial f(t) = aₙtⁿ + ··· + a₀ = Σ aᵢtⁱ,

f(T̄)(u + W) = Σ aᵢT̄ⁱ(u + W) = Σ aᵢ(Tⁱ(u) + W) = (Σ aᵢTⁱ(u)) + W = f(T)(u) + W = (f(T))‾(u + W)

and so (f(T))‾ = f(T̄). Accordingly, if T is a root of f(t), then f(T̄) = (f(T))‾ = 0̄ = W; that is, T̄ is also a root of f(t). Thus the theorem is proved.

Theorem 17.18: Let T: V → V be a linear operator whose characteristic polynomial factors into linear polynomials. Then V has a basis in which T is represented by a triangular matrix [called a triangular form of T].

Theorem 17.19 [Alternate Form of Theorem 17.18]: Let A be a [square] matrix whose characteristic polynomial factors into linear polynomials. Then A is similar to a triangular matrix.

17.126 Prove Theorem 17.18, which represents the main content of this section.

The proof is by induction on the dimension of V. If dim V = 1, then every matrix representation of T is a 1 × 1 matrix, which is triangular.

Now suppose dim V = n > 1 and that the theorem holds for spaces of dimension less than n. Since the characteristic polynomial of T factors into linear polynomials, T has at least one eigenvalue and so at least one nonzero eigenvector v, say T(v) = a₁₁v. Let W be the one-dimensional subspace spanned by v. Set V̄ = V/W. Then [Problem 17.124] dim V̄ = dim V − dim W = n − 1. Note also that W is invariant under T. By Theorem 17.17, T induces a linear operator T̄ on V̄ whose minimum polynomial divides the minimum polynomial of T. Since the characteristic polynomial of T is a product of linear polynomials, so is its minimum polynomial; hence so are the minimum and characteristic polynomials of T̄. Thus V̄ and T̄ satisfy the hypothesis of the theorem. Hence, by induction, there exists a basis {v̄₂, ..., v̄ₙ} of V̄ such that

T̄(v̄₂) = a₂₂v̄₂
T̄(v̄₃) = a₃₂v̄₂ + a₃₃v̄₃
 ...
T̄(v̄ₙ) = a_{n2}v̄₂ + ··· + a_{nn}v̄ₙ

Now let v₂, ..., vₙ be elements of V which belong to the cosets v̄₂, ..., v̄ₙ, respectively. Then {v, v₂, ..., vₙ} is a basis of V [Problem 17.124]. Since T̄(v̄₂) = a₂₂v̄₂, we have T̄(v̄₂) − a₂₂v̄₂ = 0̄, and so T(v₂) − a₂₂v₂ ∈ W. But W is spanned by v; hence T(v₂) − a₂₂v₂ is a multiple of v, say T(v₂) − a₂₂v₂ = a₂₁v, and so T(v₂) = a₂₁v + a₂₂v₂. Similarly, for i = 3, ..., n, T(vᵢ) − a_{i2}v₂ − ··· − a_{ii}vᵢ ∈ W, and so T(vᵢ) = a_{i1}v + a_{i2}v₂ + ··· + a_{ii}vᵢ. Thus

T(v) = a₁₁v
T(v₂) = a₂₁v + a₂₂v₂
 ...
T(vₙ) = a_{n1}v + a_{n2}v₂ + ··· + a_{nn}vₙ

and hence the matrix of T in this basis is triangular.
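Remark: Theorem 17.19 is realized numerically by the Schur decomposition, which produces a triangular matrix unitarily similar to A; over C the characteristic polynomial always splits, so this always succeeds. A sketch assuming SciPy. Note that the same matrix A which had no real Jordan form in Problem 17.112 triangularizes over C.

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])         # the matrix of Problem 17.112
T, Z = schur(A, output='complex')   # A = Z T Z* with T upper triangular
print(np.round(T, 6))               # the diagonal entries are the eigenvalues +i and -i
assert np.allclose(Z @ T @ Z.conj().T, A)
```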
17.127 Let W be a subspace of V. Suppose the set of cosets {v₁ + W, v₂ + W, ..., vₙ + W} in V/W is linearly independent. Show that the set of vectors {v₁, v₂, ..., vₙ} in V is also linearly independent.

Suppose a₁v₁ + a₂v₂ + ··· + aₙvₙ = 0. Then (a₁v₁ + ··· + aₙvₙ) + W = 0 + W = W. Hence a₁(v₁ + W) + a₂(v₂ + W) + ··· + aₙ(vₙ + W) = W. Since the vᵢ + W are linearly independent, all the aᵢ = 0. Thus v₁, ..., vₙ are linearly independent.

17.128 Let W be a subspace of V. Suppose the set of vectors {u₁, u₂, ..., uₙ} in V is linearly independent and that span(u₁, ..., uₙ) ∩ W = {0}. Show that the set of cosets {u₁ + W, ..., uₙ + W} in V/W is also linearly independent.

Suppose a₁(u₁ + W) + a₂(u₂ + W) + ··· + aₙ(uₙ + W) = W. Then (a₁u₁ + a₂u₂ + ··· + aₙuₙ) + W = W, and so a₁u₁ + ··· + aₙuₙ ∈ W. Since span(u₁, ..., uₙ) ∩ W = {0}, we have a₁u₁ + ··· + aₙuₙ = 0. By hypothesis, u₁, ..., uₙ are linearly independent; hence all the aᵢ = 0. Thus the cosets uᵢ + W are linearly independent.

17.129 Let V be the vector space of polynomials over R, and let W be the subspace of polynomials h(t) which are divisible by t⁴, that is, of the form h(t) = b₀t⁴ + b₁t⁵ + ··· + bₙtⁿ⁺⁴. Show that the quotient space V/W is of dimension 4.

Every polynomial f(t) ∈ V can be written uniquely as f(t) = r(t) + h(t), where r(t) has degree ≤ 3 and h(t) ∈ W; hence f + W = r + W. It follows that the four cosets 1 + W, t + W, t² + W, t³ + W span V/W. They are also linearly independent, since a₀ + a₁t + a₂t² + a₃t³ ∈ W forces a₀ = a₁ = a₂ = a₃ = 0. Thus V/W is of dimension 4.
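Remark: A concrete model of Problem 17.129, assuming SymPy: the coset f(t) + W is determined by the remainder of f(t) on division by t⁴.

```python
from sympy import symbols, rem

t = symbols('t')
f = 3 - t + 2*t**3 + 5*t**4 - t**7
print(rem(f, t**4, t))    # 2*t**3 - t + 3: the coordinates of f + W relative to
                          # the basis {1 + W, t + W, t^2 + W, t^3 + W}
```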
