

Systems & Control Letters, Volume 3, Number 5, November 1983

**A matrix Euclidean algorithm and matrix continued fraction expansions**

Paul A. FUHRMANN

Department of Mathematics, Ben Gurion University of the Negev, Beer Sheva 84120, Israel

Received 13 March 1983; revised 24 August 1983

1. Introduction

"... is directly related to the Euclidean algorithm", Kalman [14]. The aim of this paper is to give a rigorous foundation to the insight carried in this remark of Kalman. In the quoted paper of Kalman, as well as in the strongly related paper of Gragg and Lindquist [12], the Euclidean algorithm is taken as the vehicle for producing a nested sequence of partial realizations, as well as for obtaining a continued fraction representation of a strictly proper transfer function. As a by-product Kalman obtains a characterization of the maximal (A, B)-invariant subspace in Ker C, for a minimal realization (A, B, C), that is associated with the continued fraction expansion. While both Kalman and Gragg and Lindquist had a strong feeling that these results should generalize to the matrix case, this seems to have eluded them, mainly, I guess, due to the fact that there does not seem to exist a suitable generalization of the Euclidean algorithm to the matrix case. Results such as those in Gantmacher [11] or MacDuffee [16] are not what is needed for handling this problem.

In this paper the line of reasoning is reversed. Rather than start with the Euclidean algorithm, we start with a very simple idea derived from the Morse-Wonham geometric control theory, namely the knowledge that V*_{Ker C} is related to maximal McMillan degree reduction by state feedback. Indeed, a minimal system is feedback irreducible iff V*_{Ker C} = {0}. Thus feedback irreducible systems provide the atoms needed for the construction of a continued fraction representation, or alternatively
for a version of the Euclidean algorithm. Putting things together we obtain a recursive algorithm for the computation of V*_{Ker C}. The dual concept of reduction by output injection is introduced, and actually most results are obtained in this setting, which is technically simpler. A recursive characterization of the minimal (C, A)-invariant subspace containing Im B is also derived. These characterizations are related through dual direct sum decompositions.

2. A matrix Euclidean algorithm

In papers by Kalman [14] and Gragg and Lindquist [12] the Euclidean algorithm, and a generalization of it introduced by Magnus [17,18], have been taken as the starting point of the analysis of continued fraction representations for rational functions or, more generally, formal power series. As a corollary Kalman obtained a characterization of V*_{Ker C}. Let us analyze what is involved. Assume g is a scalar strictly proper transfer function and let g = p/q with p, q coprime. Write, following Gragg and Lindquist, the Euclidean algorithm in the following form. Suppose s_{i-1}, s_i are given polynomials with deg s_i < deg s_{i-1}; then by the division rule of polynomials there exist unique polynomials a'_{i+1} and s'_{i+1} such that deg s'_{i+1} < deg s_i and

s_{i-1} = a'_{i+1} s_i - s'_{i+1}.

Let b_i be the inverse of the highest nonzero coefficient of a'_{i+1}. Multiplying through by b_i and defining

a_{i+1} = b_i a'_{i+1},    s_{i+1} = b_i s'_{i+1},

we can write the Euclidean algorithm as

s_{i+1} = a_{i+1} s_i - b_i s_{i-1}    (2.1)

0167-6911/83/$3.00 © 1983, Elsevier Science Publishers B.V. (North-Holland)
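Computationally, recursion (2.1) is just the polynomial Euclidean algorithm with a normalization at every division step. The following sketch is not from the paper: it assumes exact rational arithmetic, hypothetical helper names `polydiv` and `atoms`, and polynomials encoded as coefficient lists with the lowest degree first.

```python
from fractions import Fraction

def polydiv(num, den):
    """Polynomial division: returns (quotient, remainder).
    Polynomials are coefficient lists, lowest degree first."""
    q = [Fraction(0)] * max(len(num) - len(den) + 1, 1)
    r = [Fraction(c) for c in num]
    while len(r) >= len(den) and any(r):
        shift = len(r) - len(den)
        coef = r[-1] / den[-1]
        q[shift] = coef
        for i, d in enumerate(den):
            r[shift + i] -= coef * d
        while r and r[-1] == 0:        # drop cancelled leading terms
            r.pop()
    return q, (r if r else [Fraction(0)])

def atoms(p, q):
    """Normalized Euclidean algorithm (2.1) for a strictly proper
    g = p/q: s_{-1} = q, s_0 = p, s_{i+1} = a_{i+1} s_i - b_i s_{i-1},
    with a_{i+1} monic.  Returns the list of atoms (a_{i+1}, b_i)."""
    s_prev = [Fraction(c) for c in q]
    s_cur = [Fraction(c) for c in p]
    result = []
    while any(s_cur):
        a_raw, rem = polydiv(s_prev, s_cur)   # s_prev = a_raw*s_cur + rem
        b = 1 / a_raw[-1]                     # inverse of leading coefficient
        a = [b * c for c in a_raw]            # monic quotient a_{i+1}
        s_next = [-b * c for c in rem]        # s_{i+1} = -b_i * remainder
        result.append((a, b))
        s_prev, s_cur = s_cur, s_next
        while len(s_cur) > 1 and s_cur[-1] == 0:
            s_cur.pop()
    return result
```

For g = z/(z^2 - 1), for instance, this yields the atoms (z, 1) and (z, 1), corresponding to the continued fraction g = 1/(z - 1/z).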

with s_{-1} = q and s_0 = p. Here the a_{i+1} are monic polynomials and the b_i are nonzero normalizing constants. In (2.1) the recursion and the initial conditions were used to compute the a_i and b_i. Clearly, by the definition of the algorithm, {s_i} is a sequence of polynomials of decreasing degrees. Thus for some n, s_{n+1} = 0, and s_n, being the greatest common divisor of p and q, is a nonzero constant. Now we use the a_i and b_i to solve the recursion relation

x_{i+1} = a_{i+1} x_i - b_i x_{i-1}    (2.2)

with two different sets of initial conditions. Specifically, let p_i be the solution of (2.2) with the initial conditions x_{-1} = -1 and x_0 = 0, and let q_i be the solution of (2.2) with the initial conditions x_{-1} = 0 and x_0 = 1. It has been shown in both previously mentioned papers that p = p_n s_n and q = q_n s_n. Kalman calls the a_i the atoms of the pair p, q, or alternatively of the transfer function g.

Let us consider the extreme situation, namely that where the Euclidean algorithm terminates in the first step. This means that s_1 = 0, or equivalently that a_1 p - b_0 q = 0. This in turn implies that g = p/q = b_0/a_1, i.e. the transfer function g has no finite zeros. Being zeroless, its McMillan degree δ(g) = deg a_1 is feedback invariant, as well as output injection invariant.

In the more general case, if g = b_0/(a_1 - g_1), then a_1 g - g_1 g = b_0, or g_1 = (a_1 g - b_0)/g. Write g = p/q and let p = b_0 s with s monic. By division there exist a unique monic polynomial a_1 with deg(a_1) + deg(s) = deg(q) and a unique polynomial r_1 with deg(r_1) < deg(s) such that

q = a_1 s - r_1.    (2.3)

Thus we have g = b_0 s/(a_1 s - r_1) = b_0/(a_1 - (r_1/s)), and writing g_1 = r_1/s we have g = b_0/(a_1 - g_1). By a result of Hautus and Heymann [13] the transfer function b_0 s/(a_1 s) is obtainable from p/q by state feedback, i.e. we obtain b_0/a_1, and this is not further reducible. The moral of this is that, given s (which in this case is just p normalized), there is a unique way of adding a polynomial r of degree less than deg s to q such that the resulting transfer function has smallest possible McMillan degree. This indicates how we can obtain the first atom of g.

It is somewhat more convenient to begin not with feedback reduction but rather with reduction by output injection. A similar simplification has been observed in Fuhrmann [8], where it turned out that the analysis of the output injection group in terms of polynomial models is significantly simpler than that of the feedback group. Feedback reduction, as well as reduction by output injection, is well defined in the multivariable setting, and we can use this to obtain a multivariable version of the Euclidean algorithm.

Thus let G be a p × m strictly proper transfer function and assume G = T^{-1}U is a left coprime factorization, with T row proper. Set T_0 = T, U_0 = U. Suppose in the i-th step we obtain T_i and U_i, and let G_i = T_i^{-1}U_i be a left matrix fraction representation with T_i row proper, T_i nonsingular and T_i^{-1}U_i strictly proper. We describe the next step, and state now the main technical lemma needed for our version of the Euclidean algorithm.

Lemma 2.1. Let G_i be a strictly proper p × m transfer function and let G_i = T_i^{-1}U_i be a left matrix fraction representation with T_i row proper and T_i, U_i left coprime. Then there exist a nonsingular row proper polynomial matrix T_{i+1}, a nonsingular polynomial matrix A_{i+1} with proper inverse, and
polynomial matrices B_i and U_{i+1} such that

T_i = T_{i+1} A_{i+1} - U_{i+1},    (2.4)

U_i = T_{i+1} B_i,    (2.5)

and the following conditions are satisfied:
(i) T_{i+1}^{-1} U_{i+1} is strictly proper;
(ii) A_{i+1}^{-1} B_i is output injection irreducible;
(iii) A_{i+1} is row proper.

As in the scalar case exemplified by equation (2.3), the idea is to obtain maximal McMillan degree reduction by adding lower order terms to the denominator in a left matrix fraction representation; here lower order terms are interpreted in the sense that T_{i+1}^{-1} U_{i+1} is strictly proper. Naturally such a decomposition is not unique. While it can be achieved in many ways, we obtain uniqueness if we add the additional requirement that T_{i+1}^{-1} U_{i+1} is strictly proper: fixing T_{i+1}, we obtain U_{i+1} by reducing T_i modulo T_{i+1}. The simple calculation is omitted. Since T_{i+1} is only determined up to a right unimodular factor, we can use this freedom to ensure that A_{i+1} is row proper. Note also that A_{i+1}^{-1} B_i is output injection irreducible, by Theorem 3.21 in [8].

For later use we define π_+ = I - π_-, where π_- is the projection map that associates with a rational function its strictly proper part.    (2.6)

Define recursively, using the previous lemma and starting from T_0 = T, U_0 = U, a sequence of polynomial matrices (A_{i+1}, B_i, T_{i+1}, U_{i+1}), the A_{i+1} being nonsingular and properly invertible. Notice that even if we start with a rectangular transfer function G, then after the first step all the matrices occurring are square, though not necessarily nonsingular. We will call the {A_{i+1}, B_i} the left atoms of the transfer function G. We are ready to state the following matrix version of the Euclidean algorithm.

Theorem 2.2. Let G be a p × m strictly proper transfer function and let G = T^{-1}U be a left matrix fraction representation, which we do not assume to be left coprime, with T row proper. Define T_i, U_i, A_{i+1} and B_i recursively by (2.4) and (2.5), and let n be the first integer for which U_n = 0. Then T_n is the greatest common left divisor of T and U.

Proof. We prove this by induction. Since U_n = 0, equations (2.4) and (2.5) give T_{n-1} = T_n A_n and U_{n-1} = T_n B_{n-1}, so T_n is a common left divisor of T_{n-1} and U_{n-1}. If T_n is a common left divisor of T_{i+1} and U_{i+1} then, again by (2.4) and (2.5), it is a common left divisor of T_i and U_i. Thus T_n is a common left divisor of T and U. In fact it is a g.c.l.d.

Of course the transfer function G can be reconstructed from the atom sequence {A_{i+1}, B_i}. We use the (A_{i+1}, B_i) to define recursively two sequences of polynomial matrices {R_i} and {W_i}, i.e. we solve the recursions

R_{i+1} = A_{i+1} R_i - B_i R_{i-1},    (2.17)

W_{i+1} = A_{i+1} W_i - B_i W_{i-1},    (2.18)

with the initial conditions R_{-1} = 0, R_0 = I and W_{-1} = -I, W_0 = 0. Obviously

(R_k  W_k) = (I  0) (A_k  -B_{k-1}; I  0) ··· (A_1  -B_0; I  0) (I  0; 0  -I),

where (X  Y; Z  V) denotes the 2 × 2 block matrix with rows (X  Y) and (Z  V).

Define a sequence of transfer functions F_i by F_0 = G and F_i = T_i^{-1} U_i.

Lemma 2.3. The sequence of transfer functions (F_i) so constructed satisfies δ(F_{i+1}) < δ(F_i).

Proof. If T_i and U_i are left coprime then δ(F_i) = deg det T_i. Since deg det T_i = deg det(T_{i+1} A_{i+1}) = deg det T_{i+1} + deg det A_{i+1} > deg det T_{i+1}, by the output injection irreducibility of A_{i+1}^{-1} B_i, the decrease of the McMillan degree is proved; this guarantees the termination of the process in a finite number of steps.

Lemma 2.4. Assume the (A_i) are properly invertible and the A_{i+1}^{-1} B_i strictly proper. Then R_k^{-1} W_k is strictly proper.

Proof. For k = 1 this follows from our assumptions, since R_1^{-1} W_1 = A_1^{-1} B_0. Assume the claim holds for any shorter atom sequence. We have R_k = S_k A_1 - V_k and W_k = S_k B_0, where (S_k, V_k) denote the corresponding solutions of (2.17) and (2.18) built on the shifted atom sequence (A_2, B_1), (A_3, B_2), .... Hence

R_k^{-1} W_k = (S_k A_1 - V_k)^{-1} S_k B_0 = (A_1 - S_k^{-1} V_k)^{-1} B_0,

and S_k^{-1} V_k is strictly proper and S_k^{-1} proper by the induction hypothesis. Since A_1^{-1} is proper and A_1^{-1} B_0 is strictly proper, the claim follows.
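In the scalar case the two-term recursion with the two sets of initial conditions (the scalar (2.2), or (2.17)-(2.18) when all quantities are scalar) can be carried out directly. The sketch below is not from the paper; `polymul`, `polysub` and `convergents` are hypothetical helper names, and polynomials are coefficient lists, lowest degree first.

```python
from fractions import Fraction

def polymul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polysub(a, b):
    """Difference a - b of two coefficient lists."""
    out = [Fraction(0)] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] -= c
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

def convergents(atom_list):
    """Solve x_{i+1} = a_{i+1} x_i - b_i x_{i-1} for the two sets of
    initial conditions of the text, returning the final numerator p_n
    and denominator q_n of the continued fraction expansion."""
    p_prev, p_cur = [Fraction(-1)], [Fraction(0)]   # x_{-1} = -1, x_0 = 0
    q_prev, q_cur = [Fraction(0)], [Fraction(1)]    # x_{-1} = 0,  x_0 = 1
    for a, b in atom_list:
        p_prev, p_cur = p_cur, polysub(polymul(a, p_cur), [b * c for c in p_prev])
        q_prev, q_cur = q_cur, polysub(polymul(a, q_cur), [b * c for c in q_prev])
    return p_cur, q_cur
```

With the atoms of g = z/(z^2 - 1) this recovers p = z and q = z^2 - 1 exactly, illustrating p = p_n s_n, q = q_n s_n with s_n = 1.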

Of course the transfer function G can be reconstructed from the atom sequence {A_{i+1}, B_i}.

Theorem 2.5. Let G be a p × m strictly proper transfer function and let R_k, W_k be the solutions of the recursion equations (2.17) and (2.18). Let n be the first integer for which U_n = 0. Then

G = R_n^{-1} W_n.    (2.20)

Next we define a sequence of rational matrices {E_i} by the recursion

E_{i+1} = A_{i+1} E_i - B_i E_{i-1}    (2.19)

with E_{-1} = I and E_0 = G.

Theorem 2.6. We have E_k = R_k G - W_k, where R_i and W_i are defined through the recursions (2.17) and (2.18).

Proof. Proceed by induction. For k = 0 this holds by definition, since R_0 G - W_0 = G = E_0 and R_{-1} G - W_{-1} = I = E_{-1}. Next,

E_{k+1} = A_{k+1} E_k - B_k E_{k-1} = A_{k+1}(R_k G - W_k) - B_k(R_{k-1} G - W_{k-1}) = R_{k+1} G - W_{k+1}.

Corollary 2.7. The rational matrices E_i are all strictly proper.

Proof. Follows from the strict properness of the F_i.

Corollary 2.8. We have E_n = 0 iff F_n = 0.

We can give now a precise answer to the question of how good an approximation R_k^{-1} W_k is to G.

Theorem 2.9. We have

G - R_k^{-1} W_k = R_k^{-1} E_k.    (2.22)

Proof. We compute, using Theorem 2.6, R_k^{-1} E_k = R_k^{-1}(R_k G - W_k) = G - R_k^{-1} W_k.

Note that, since all the F_i are strictly proper, there is a matching of at least the first k + 1 Markov parameters; but this is only a rough estimate compared with the more precise estimate (2.24).
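The degree of approximation in Theorem 2.9 can be checked, in the scalar case, by comparing Markov parameters (the coefficients of the expansion at infinity). This sketch is not from the paper: `markov` is a hypothetical helper, polynomials are coefficient lists with lowest degree first, and q is assumed to have nonzero leading coefficient with deg p < deg q.

```python
from fractions import Fraction

def markov(p, q, k):
    """First k Markov parameters h_1, ..., h_k of the strictly proper
    rational function p/q, i.e. p/q = h_1 z^{-1} + h_2 z^{-2} + ...,
    found by matching coefficients in p = q * (h_1 z^{-1} + ...)."""
    n = len(q) - 1
    h = []
    for j in range(1, k + 1):
        # coefficient of z^{n-j} in p ...
        acc = Fraction(p[n - j]) if 0 <= n - j < len(p) else Fraction(0)
        # ... minus contributions of the already known h_1, ..., h_{j-1}
        for i in range(1, j):
            m = n - j + i
            if 0 <= m < len(q):
                acc -= q[m] * h[i - 1]
        h.append(acc / Fraction(q[-1]))
    return h
```

For g = z/(z^2 - 1) = z^{-1} + z^{-3} + ..., the first convergent 1/z already matches the first two Markov parameters, while the second convergent equals g itself.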

3. Connections with geometric control theory

We pass now to the connection between the previously obtained matrix continued fraction representations and some problems of geometric control theory, as developed in Wonham [19]. The link between the two theories is given by the theory of polynomial models developed in a series of papers by Antoulas [1], Emre and Hautus [3], Fuhrmann [4-8], Khargonekar and Emre [15] and Fuhrmann and Willems [9,10]. The last two papers are especially relevant to the following analysis. The power of the method of polynomial models is the fact that with any matrix fraction representation we have a closely associated realization; thus all statements on the level of polynomial or rational matrices have an immediate interpretation in terms of state space models. That the setting up of such a complete correspondence is not a trivial matter becomes clear by a perusal of the above mentioned papers.

Recall [4] that with the left matrix fraction representation G = T^{-1}U of a p × m strictly proper rational function G there is associated a realization in the state space X_T given by the triple of maps (A, B, C) defined by

A = S_T,    Bu = Uu for u ∈ F^m,    Cf = (T^{-1}f)_{-1} for f ∈ X_T.

For the definitions of the spaces X_T, X^T and the maps S_T we refer to [8]. This realization is always observable, and it is reachable if and only if T and U are left coprime.

The continued fraction representation obtained previously allows us to give a finer description of this realization. To this end let {A_{i+1}, B_i} be the atom sequence obtained from G. Define the sequence of polynomial matrices (S_i, V_i) by S_0 = I, V_0 = 0 and

S_i = S_{i-1} A_{n-i+1} - V_{i-1},    V_i = S_{i-1} B_{n-i},    (3.1)

so that for n = 1 we have T^{-1}U = A_1^{-1}B_0 and S_1 = A_1. By induction, G = S_n^{-1} V_n, with S_n equal to T up to a left unimodular factor.    (3.2)

Since S_{i-1}^{-1} V_{i-1} is strictly proper, it follows that X_{S_i} and X_{S_{i-1} A_{n-i+1}} are equal as sets, though they carry different module structures. But the factorization S_{i-1} A_{n-i+1} implies the direct sum decomposition

X_{S_i} = X_{S_{i-1}} ⊕ S_{i-1} X_{A_{n-i+1}}.    (3.3)

By induction we obtain the following.

Theorem 3.1. X_T = X_{S_n} = X_{A_n} ⊕ S_1 X_{A_{n-1}} ⊕ ··· ⊕ S_{n-1} X_{A_1}.

These formulas lead to interesting direct sum representations for X_T and, in the scalar case, directly to some canonical forms associated with the continued fraction expansion. See in this connection the papers of Kalman [14] and Gragg and Lindquist [12]. The multivariable analogs have not been clarified so far. The direct sum decomposition above is related to geometric concepts.

Theorem 3.2. Let (A, B, C) be the realization in X_{S_n} associated with G = S_n^{-1} V_n. Then the minimal (C, A)-invariant subspace containing Im B is S_{n-1} X_{A_1}.

Proof. Since Bu = V_n u = S_{n-1} B_0 u, and B_0 u ∈ X_{A_1} because A_1^{-1} B_0 is strictly proper, we have Im B ⊆ S_{n-1} X_{A_1}. That S_{n-1} X_{A_1} is a (C, A)-invariant subspace follows from the characterization of these subspaces given in [8]; that it is the minimal one containing Im B follows as in [10].

Recall that the annihilator of the minimal (C, A)-invariant subspace containing Im B is the maximal (A, B)-invariant subspace contained in Ker C. Now every (A, B)-invariant subspace of X_D is of the form π_+ D π_- D^{-1} L for some submodule L of z^{-1} F^m[[z^{-1}]]; see [10].

We pass now to the analysis of the dual results, namely those related to feedback reduction. In analogy with Lemma 2.1 we can state, without proof, the following.

Lemma 3.3. Let G = N D^{-1} be a strictly proper p × m rational matrix and let G_i = N_i D_i^{-1} be a right matrix fraction representation with D_i column proper. Then there exist a nonsingular column proper polynomial matrix D_{i+1}, a nonsingular properly invertible polynomial matrix A_{i+1} and polynomial matrices N_{i+1} and B_i such that

D_i = A_{i+1} D_{i+1} - N_{i+1},    N_i = B_i D_{i+1},

and the following conditions hold:
(i) N_{i+1} D_{i+1}^{-1} is strictly proper;
(ii) B_i A_{i+1}^{-1} is feedback irreducible;
(iii) A_{i+1} is column proper.

Starting with G = N D^{-1} we can write

D = A_1 D_1 - N_1,    N = B_0 D_1.    (3.12)

In particular, for the realization associated with N D^{-1},

V*_{Ker C} = π_+ D X^{D_1} = π_+ (A_1 D_1 - N_1) X^{D_1},

and since dim V*_{Ker C} = δ(G) - deg(det A_1), the dimension of V*_{Ker C} has to be deg(det D_1).

Lemma 3.4. Let G = N D^{-1} be a strictly proper p × m rational matrix. Then the following direct sum decomposition holds:

X_D = X_{A_1} ⊕ π_+ (A_1 D_1 - N_1) X^{D_1}.

Moreover this direct sum decomposition is the dual, under the pairing of X_D and X_D̄ defined in [8], of the corresponding decomposition of X_D̄.

Proof. By transposition we obtain D̄ = D̄_1 Ā_1 - N̄_1 and N̄ = D̄_1 B̄_0. Assume f = A_1 h and g = D̄_1 k with h ∈ X^{A_1} and k ∈ X^{D̄_1}. A direct computation of the pairing [f, g] shows that it vanishes, by the causality of A_1^{-1} N_1 D_1^{-1}; the direct sum decomposition then follows by a dimension count.
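In the scalar case the shift realization recalled at the beginning of this section reduces to a familiar companion-form model. The sketch below is not from the paper: it assumes the controllable companion convention and a hypothetical helper name, builds (A, B, C) from a strictly proper p/q with q monic, and reads off the Markov parameters C A^{k-1} B.

```python
from fractions import Fraction

def markov_from_realization(p, q, k):
    """Companion-form realization (A, B, C) of the strictly proper
    scalar g = p/q (coefficient lists, lowest degree first, q monic),
    returning the first k Markov parameters C A^i B, i = 0..k-1."""
    n = len(q) - 1
    A = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = Fraction(1)          # superdiagonal ones
    for j in range(n):
        A[n - 1][j] = -Fraction(q[j])      # last row: -q coefficients
    B = [Fraction(0)] * (n - 1) + [Fraction(1)]
    C = [Fraction(p[j]) if j < len(p) else Fraction(0) for j in range(n)]
    h, x = [], B[:]                        # x holds A^i B
    for _ in range(k):
        h.append(sum(ci * xi for ci, xi in zip(C, x)))
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return h
```

For g = z/(z^2 - 1) this reproduces the Markov sequence 1, 0, 1, 0, ..., matching the expansion at infinity.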

The removal of the projection π_+ is permissible by the causality of the factors involved. This ends the proof.

The preceding result can be easily generalized to yield the following.

Theorem 3.5. Given G = N D^{-1} with the right atom sequence {A_{i+1}, B_i} and the relations D_i = A_{i+1} D_{i+1} - N_{i+1}, N_i = B_i D_{i+1}, the direct sum decomposition

X_D = π_+ D D_1^{-1} X_{A_1} ⊕ π_+ D D_2^{-1} X_{A_2} ⊕ ··· ⊕ π_+ D D_n^{-1} X_{A_n}

and the corresponding decomposition of X_D̄ are dual direct sum decompositions.

Proof. For k = 1 we proved the result in the previous lemma. Assume we have proved the result for k. Since D_k = A_{k+1} D_{k+1} - N_{k+1}, with N_{k+1} D_{k+1}^{-1} strictly proper, the argument of Lemma 3.4 applies again, and the decomposition follows by induction. To show the duality of the two direct sum decompositions it suffices to prove that the appropriate orthogonality relations between the summands hold; these follow, by induction, from the causality arguments used above.

We note that in X_D̄, with the realization associated with D̄^{-1} N̄, we obtain the dual characterization of the minimal (C, A)-invariant subspace containing Im B, whereas in X_D, with the realization associated with N D^{-1}, we have V*_{Ker C} = π_+ D X^{D_1}.

To this end we note that, for i > j, since D_i = A_{i+1} D_{i+1} - N_{i+1},

D_{i+1} D_i^{-1} = D_{i+1} (A_{i+1} D_{i+1} - N_{i+1})^{-1} = (A_{i+1} - N_{i+1} D_{i+1}^{-1})^{-1} = (I - A_{i+1}^{-1} N_{i+1} D_{i+1}^{-1})^{-1} A_{i+1}^{-1}

is proper, since A_{i+1}^{-1} is proper and A_{i+1}^{-1} N_{i+1} D_{i+1}^{-1} is strictly proper. As D_i D_j^{-1} = (D_i D_{i-1}^{-1}) ··· (D_{j+1} D_j^{-1}) and the product of proper matrices is proper, all the matrices D_i D_j^{-1} with i > j are proper. Using these properties it follows, proceeding inductively, that the remaining pairings [D_{k+1} h, Ā_{k+1} k] vanish. This completes the proof of the theorem.

References

[1] A.C. Antoulas, A polynomial matrix approach to F mod G invariant subspaces, Doctoral Dissertation, ETH Zurich (1979).
[2] E. Emre, Nonsingular factors of polynomial matrices and (A, B)-invariant subspaces, SIAM J. Control Optim. 18 (1980) 288-296.
[3] E. Emre and M.L.J. Hautus, A polynomial characterization of (A, B)-invariant and reachability subspaces, SIAM J. Control Optim. 18 (1980) 420-436.
[4] P.A. Fuhrmann, Algebraic system theory: An analyst's point of view, J. Franklin Inst. 301 (1976) 521-540.
[5] P.A. Fuhrmann, On strict system equivalence and similarity, Internat. J. Control 25 (1977) 5-10.
[6] P.A. Fuhrmann, Simulation of linear systems and factorization of matrix polynomials, Internat. J. Control 28 (1978) 689-705.
[7] P.A. Fuhrmann, Linear feedback via polynomial models, Internat. J. Control 30 (1979) 363-377.
[8] P.A. Fuhrmann, Duality in polynomial models with some applications to geometric control theory, IEEE Trans. Automat. Control 26 (1981) 284-295.
[9] P.A. Fuhrmann and J.C. Willems, Factorization indices at infinity for rational matrix functions, Integral Equations Operator Theory 2 (1979) 287-301.
[10] P.A. Fuhrmann and J.C. Willems, A study of (A, B)-invariant subspaces via polynomial models, Internat. J. Control 31 (1980) 467-494.
[11] F.R. Gantmacher, The Theory of Matrices (Chelsea, New York, 1959).
[12] W.B. Gragg and A. Lindquist, On the partial realization problem, Linear Algebra Appl. 50 (1983) 277-319.
[13] M.L.J. Hautus and M. Heymann, Linear feedback - an algebraic approach, SIAM J. Control Optim. 16 (1978) 83-105.
[14] R.E. Kalman, On partial realizations, transfer functions and canonical forms, Acta Polytech. Scand. 31 (1979) 9-32.
[15] P.P. Khargonekar and E. Emre, Further results on polynomial characterizations of (F, G)-invariant and reachability subspaces, IEEE Trans. Automat. Control.
[16] C.C. MacDuffee, The Theory of Matrices (Chelsea, New York, 1956).
[17] A. Magnus, Certain continued fractions associated with the Pade table, Math. Z. 78 (1962) 361-374.
[18] A. Magnus, Expansion of power series into P-fractions, Math. Z. 80 (1962) 209-216.
[19] W.M. Wonham, Linear Multivariable Control, 2nd edn. (Springer, New York, 1979).
