
On the zero-modules of spectral factors using state space methods
– the role of a coupled Sylvester- and homogeneous linear equation
György Michaletzky∗

1 Abstract
The poles and zeros of a transfer function can be studied by various means. The main motivation of the
present paper is to give a state-space description of the module theoretic definition of zeros introduced and
analyzed by Wyman et al. in [12] and [13] and to find the connections between the zero modules of a
matrix-valued rational spectral density and its (left, rectangular) spectral factor, without assuming that the
spectral density is of full rank. This analysis is carried out for proper (in general acausal) spectral factors.
The transformation of a spectral factor into a left-invertible transfer function using a tall inner function
plays an important role in this analysis, eliminating in this way the “generic” zeros corresponding to the kernel
of the spectral factor. As is well known, the zeros are connected to various invariant subspaces arising in
geometric control, see e.g. Aling and Schumacher [1] for a complete description. The connections to these
subspaces are also mentioned in the paper.

2 Introduction
In this paper the connections between the zero structure of a rational, matrix-valued spectral density and
its spectral factor are analysed. The study of zeros of transfer functions already has a long history, during
which various zeros were defined and various approaches were used to describe them. We cannot give a detailed
description of this history, but the book written by H. Rosenbrock (’70) [9] should be cited here, as well as
that of T. Kailath (’80) [5]. One of the approaches used in these books to define the zeros of a transfer
function is based on the Smith-McMillan form of these functions. These are the so-called transmission zeros.
C. B. Schrader and M. K. Sain (’89) in [10] give a survey on the notions and results of zeros of linear time
invariant systems, including invariant zeros, system zeros, input-decoupling zeros, output-decoupling zeros
and input-output-decoupling zeros, as well. The connection of these zeros to invariant subspaces appearing
in geometric control theory was considered e.g. in A. S. Morse (’73) [8] for strictly proper transfer functions,
and – for proper transfer functions, without assuming the minimality of the realization – in H. Aling and J. M.
Schumacher (’84) [1], showing that the combined decomposition of the state space, considering Kalman’s
canonical decomposition and Morse’s canonical decomposition in the same lattice diagram, corresponds to
∗ Eötvös Loránd University

H-1111 Pázmány Péter sétány 1/C,
Budapest, Hungary
e-mail: michgy@ludens.elte.hu



“zeros_spectral” – 2008/5/15

the various notions of multivariate zeros. The book written by J. Ball, I. Gohberg and L. Rodman [2] uses the concept of left (and right) zero pairs. It is well known that to define the multiplicity of a finite zero (or even an infinite zero) the Rosenbrock matrix provides an appropriate tool. This offers the possibility of analyzing – together with the position of the zeros – the corresponding zero directions. To this aim the notion of minimal polynomial bases should be used, as in [3] by G. D. Forney. But it is an easy task to construct a (non-square) matrix-valued transfer function with no finite (or infinite) zeros. In such cases it might happen that there are rational functions mapped to the identically zero function by the transfer function. Then the functions in the kernel of the transfer function form an infinite dimensional vector space over the space of scalars, but it is finite dimensional over the field of rational functions. But defining the multiplicity of this zero-function as the corresponding dimension of the kernel subspace does not give a satisfactory result.

The starting point of the present paper is the module-theoretic approach to the zeros of multivariate transfer functions defined by B. F. Wyman and M. K. Sain (’83) [11] and further analyzed by Wyman et al. in [12], [13]. In this extension the so-called Wedderburn–Forney spaces play an important role. The main result in [13] is that the number of zeros and poles of a rational transfer function coincide (even in the matrix case), assuming that the zeros are counted in a right way. (Interestingly, this concept can be formulated in the framework of dilation theory, as was pointed out by the author in [7].)

The zeros play an important role in the theory of spectral factors, as well. The connection between the zeros of spectral factors, splitting subspaces and the algebraic Riccati-inequality was studied in A. Lindquist et al. (’95) [6]. An important aspect of this was further analyzed by P. Fuhrmann and A. Gombani (’98), where the concept of externalized zeros was introduced.

3 Preliminaries and notation

The main motivation of the present paper is to give a matrix-theoretic description of the corresponding zero-concepts, i.e. to show how to compute these zero-modules starting from a state-space realization of the transfer function. Introducing the notation C^p[s] (and C^m[s]) for the space of p-vectors (m-vectors) of polynomials, the finite zero module Z_fin(F) is defined as follows:

$$Z_{fin}(F) = \frac{F^{-1}(\mathbf{C}^p[s]) + \mathbf{C}^m[s]}{\ker F + \mathbf{C}^m[s]}\,.$$

The following theorem can be considered as the starting block of the analysis. (See Theorem 3.1 in [4].)

Theorem 1. Let F(s) = D + C(sI − A)^{-1}B be a matrix-valued rational function of size p × m and consider a function g(s) = H(sI − Λ)^{-1}.

(i) Assume that there exist a matrix G and a (matrix-valued) polynomial ψ such that the pair (Λ, G) is controllable and F(gG + ψ) is analytic, especially at the eigenvalues of Λ. Then there exists a matrix Π such that

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} \Pi \\ H \end{pmatrix} = \begin{pmatrix} \Pi\Lambda \\ 0 \end{pmatrix}. \qquad (3.1)$$

(ii) Assume that there exists a matrix Π such that Π, H, Λ satisfy the equation (3.1). Then – if the pair (H, Λ) is an observable pair and (A, B) is a controllable pair – there exists a matrix polynomial ψ such that the function F(g + ψ) is a polynomial.

The first equation in (3.1) is a Sylvester equation, while the second one is a homogeneous linear equation. They are coupled, leading to the “coupled Sylvester- and homogeneous linear equation” mentioned in the title. This condition should be compared with the zero module definition of Wyman and Sain (see [11]): in case of (ii) the columns of g are in F^{-1}(C^p[s]) + C^m[s].
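Equation (3.1) lends itself to a direct numerical check. The sketch below is a toy illustration, not taken from the paper: all matrices and the scalar factor F(s) = (s − 1)/(s + 2) are hypothetical data. It recovers the finite zero as a generalized eigenvalue of the Rosenbrock pencil and verifies that a kernel vector of the Rosenbrock matrix at the zero yields a solution (Π, H, Λ) of the coupled Sylvester- and homogeneous linear equation.

```python
import numpy as np
from scipy.linalg import eig, null_space

# Toy factor F(s) = 1 - 3/(s+2) = (s-1)/(s+2): one finite zero at s = 1.
A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[-3.0]]); D = np.array([[1.0]])

# Invariant zeros = finite generalized eigenvalues of the Rosenbrock pencil
# ([A B; C D], [I 0; 0 0]); the singular right-hand block produces
# "infinite" eigenvalues, which are filtered out.
M = np.block([[A, B], [C, D]])
E = np.block([[np.eye(1), np.zeros((1, 1))], [np.zeros((1, 2))]])
w = eig(M, E, right=False)
finite = w[np.isfinite(w)].real

# At the zero lam, a kernel vector of [A - lam I, B; C, D] gives (Pi, H)
# solving (3.1) with Lambda = [lam].
lam = finite[0]
v = null_space(np.block([[A - lam * np.eye(1), B], [C, D]]))[:, 0]
Pi, H, Lam = v[:1].reshape(1, 1), v[1:].reshape(1, 1), np.array([[lam]])
assert np.allclose(A @ Pi + B @ H, Pi @ Lam)   # Sylvester part of (3.1)
assert np.allclose(C @ Pi + D @ H, 0)          # homogeneous part of (3.1)
```

The same computation applies verbatim to matrix-valued F; only the block sizes change.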

In words, the numerator contains those rational “input” functions for which there exists a polynomial such that applying the transfer function F to the sum a polynomial is obtained.

Note that equation (3.1) implies that

$$A\,\mathrm{Im}\,\Pi \subset \mathrm{Im}\,\Pi + \mathrm{Im}\,B\,, \qquad C\,\mathrm{Im}\,\Pi \subset \mathrm{Im}\,D\,,$$

implying that there exists a state feedback K such that

$$(A + BK)\,\mathrm{Im}\,\Pi \subset \mathrm{Im}\,\Pi \subset \ker(C + DK)\,.$$

In other words, ImΠ is an output-nulling controlled invariant subspace. It is well known that for a given system Σ = (A, B, C, D) there exists a maximal output-nulling controlled invariant subspace, denoted by V∗(Σ). This can be obtained as the image of a maximal solution (Πmax, Hmax, Λmax) of equation (3.1), determining in this way the matrix Πmax. (The maximality is meant in the subspace inclusion sense for ImΠ.) Obviously, V∗(Σ) is finite dimensional, so without loss of generality we might fix a basis in it. We might assume that the columns of Πmax provide a basis in V∗(Σ).

Let us observe that even the maximal solution triplet (Π, H, Λ) of equation (3.1) is not unique. Although the subspace Im(Πmax) = V∗(Σ) = V∗(A, B, C, D) is determined by the given realization of the transfer function F, even for a fixed Π the matrices Λ and H are not necessarily uniquely defined. The differences R0 = H1 − H2, α0 = Λ1 − Λ2 of two solutions are solutions of the following equation:

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} 0 \\ R_0 \end{pmatrix} = \begin{pmatrix} \Pi_{max}\,\alpha_0 \\ 0 \end{pmatrix}. \qquad (3.2)$$

Consider now a maximal solution – in terms of α0 and R0 – of the equation

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} 0 \\ R_0 \end{pmatrix} = \begin{pmatrix} \Pi_{max}\,\alpha_0 \\ 0 \end{pmatrix} \qquad (3.3)$$

(the maximality is meant in the subspace inclusion sense for Im R0). Let us introduce the notation:

$$K_0(s) = R_0 + H_{max}\,(sI - \Lambda_{max})^{-1}\alpha_0\,.$$

Then F K0 = 0. Moreover, the columns of K0 generate the kernel of F over the field of rational functions. The following two statements give the next building blocks of the analysis of zero-modules in terms of state-space equations.

Theorem 2. Let (C, A) be an observable pair. If for the proper rational q-tuple g the identity F(s)g(s) = 0 holds, then there exists a rational function h(s) such that g(s) = K0(s)h(s).

The kernel zero-module W(ker F) is defined as

$$W(\ker F) = \frac{\pi_-(\ker F)}{\ker F \cap s^{-1}\mathbf{C}^m[[s^{-1}]]}\,,$$

where the mapping π− renders to a rational function its strictly proper part, while C^m[[s^{-1}]] denotes the m-tuples of power series in s^{-1}. Let Πfzk be the corresponding solution of (3.1), and assume that the columns of the function Hfzk(sI − Λfzk)^{-1} provide a basis in Z(F) ⊕ W(ker F).
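The kernel generator K0 of Theorem 2 can be written down explicitly for a concrete wide factor. The sketch below is a hypothetical illustration (the realization of F(s) = [1, 1/(s+1)] and the data R0, Hmax, Λmax, α0 are toy numbers, not from the paper); it checks F K0 = 0 at several sample points.

```python
import numpy as np

# Toy wide factor F(s) = [1, 1/(s+1)]: no finite zeros, nontrivial kernel.
A = np.array([[-1.0]]); B = np.array([[0.0, 1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0, 0.0]])
F = lambda s: D + C @ np.linalg.inv(s * np.eye(1) - A) @ B

# Kernel generator in the form K0(s) = R0 + Hmax (sI - Lmax)^{-1} a0,
# here K0(s) = (1/(s+1), -1)^T.
R0 = np.array([[0.0], [-1.0]])
Hmax = np.array([[1.0], [0.0]])
Lmax = np.array([[-1.0]]); a0 = np.array([[1.0]])
K0 = lambda s: R0 + Hmax @ np.linalg.inv(s * np.eye(1) - Lmax) @ a0

# F K0 vanishes identically (checked on a few points off the poles).
for s in [1j, 2.0 + 1j, -0.5 + 3j]:
    assert np.allclose(F(s) @ K0(s), 0)
```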

Introduce the notation

$$\langle \Lambda_{max} \mid \alpha_0 \rangle = \mathrm{Im}\,[\alpha_0,\ \Lambda_{max}\alpha_0,\ \Lambda_{max}^2\alpha_0,\ \ldots\,]\,.$$

Proposition 3. It can be proved that the subspace Πmax⟨Λmax | α0⟩ coincides with the maximal output-nulling reachability subspace R∗(Σ) of the given realization of F.

Theorem 4. Under the assumptions of the previous theorem the equivalence classes in W(ker F) are determined by the functions Hmax(sI − Λmax)^{-1}β, where β is any vector in ⟨Λmax | α0⟩.

The zero module at infinity is defined as

$$Z_\infty(F) = \frac{F^{-1}(s^{-1}\mathbf{C}^p[[s^{-1}]]) + s^{-1}\mathbf{C}^m[[s^{-1}]]}{\ker F + s^{-1}\mathbf{C}^m[[s^{-1}]]}\,. \qquad (3.4)$$

I.e. the q-tuples of rational functions g should be considered for which there exists a strictly proper rational q-tuple h such that F(g + h) is strictly proper, and g1, g2 with this property are considered to be equivalent if for some strictly proper q-tuple h the identity F(g1 − g2 + h) = 0 holds.

A subspace C is called an input-containing subspace if there exists an output-injection L such that (A + LC)C ⊂ C and Im(B + LD) ⊂ C. The minimal input-containing subspace is denoted by C∗(Σ). Note that R∗(Σ) = V∗(Σ) ∩ C∗(Σ). (See e.g. Aling and Schumacher [1].)

The state-space description of the zero-module at infinity is given by the following statement:

Proposition 5. Assume that the pair (C, A) is observable. Then the equivalence classes in Z∞(F) are determined by the vectors in C∗(Σ), in the sense that for any β ∈ C∗(Σ) there exists a finite input sequence producing no output but giving β as the next immediate state-vector. The input sequence gives the coefficients of a polynomial in F^{-1}(s^{-1}C^p[[s^{-1}]]) + s^{-1}C^m[[s^{-1}]]. Two polynomials are taken to be equivalent if the difference of the corresponding β vectors is in R∗(Σ) = V∗(Σ) ∩ C∗(Σ).

The last notion from the sequence of zero-modules we have to recall is the zero-module corresponding to the range of F. (More precisely, to the defect of the range of F.) This is denoted by W(Im F):

$$W(\mathrm{Im}\,F) = \frac{\pi_-(\mathrm{Im}\,F)}{\mathrm{Im}\,F \cap s^{-1}\mathbf{C}^p[[s^{-1}]]}\,.$$

Theorem 5. Assume that the pair (C, A) is observable. Then the equivalence classes of W(Im F) are determined by the functions C(sI − A)^{-1}β, where β ∈ ⟨A | B⟩, and two functions – given by the vectors β1, β2 – are considered to be equivalent if β1 − β2 ∈ V∗(Σ) ∨ C∗(Σ). Here B∗ = ⟨A | B⟩ denotes the reachability subspace of the given realization, and N∗ the unobservability subspace.

In the statements above we have assumed that the pair (C, A) is observable. In the general case a full geometric description of the zeros is given by Aling and Schumacher in [1]. It is shown in [12] that the sum of the dimensions of these four zero modules is exactly the number of poles. Under the assumption of observability their diagram simplifies to the following simpler one.

[Lattice diagram, after Aling and Schumacher [1]: the subspaces X, V∗ ∨ B∗, V∗ ∨ C∗, B∗, V∗, C∗, (V∗ ∨ C∗) ∩ B∗, V∗ ∩ B∗, V∗ ∩ C∗ and {0}, ordered by inclusion; the edges are labelled by the corresponding zero types – input-decoupling (i.d.), transmission (trm.), infinite (∞), kernel (kern.) and co-range zeros.]

4 Zero-modules of a spectral density

For a matrix A its adjoint will be denoted by A∗, while for a matrix-valued function G(s) the notation G∗(s) refers to its para-hermitian conjugate function (in the continuous-time sense), G∗(s) = (G(−s̄))∗.

Until now the zero-modules were considered with respect to the right action g → Fg. Applying this to F∗, g → F∗g, this can be equivalently described by the left action of F: g∗ → g∗F. The subscript “left” will refer to the corresponding notions, e.g. to the left zero-modules.

From now on we are going to assume that for the matrix A the following property holds:

$$\sigma(A) \cap \sigma(-A^*) = \emptyset\,. \qquad (4.5)$$

Denote by Φ the spectral density corresponding to the function F, considering it as a left spectral factor, i.e.

$$\Phi(s) = F(s)F^*(s) = R + C\,(sI - A)^{-1}\bar C^* + \bar C\,(-sI - A^*)^{-1}C^*\,,$$

where R = DD∗ and the matrix C̄ is given below in (4.8). Since the function Φ is the product of two functions, we might expect the zeros of both factors to influence the zeros of Φ. The corresponding state-space equations can be obtained from the previous discussion in a straightforward manner. The following statement provides a connection between the various subspaces corresponding to the right- and left zero-modules.

Then

$$\Phi(s) \sim \Sigma_\Phi = \begin{pmatrix} A & 0 & \bar C^* \\ 0 & -A^* & C^* \\ C & -\bar C & R \end{pmatrix}, \qquad (4.6)$$

where P is the unique solution of the Lyapunov-equation

$$AP + PA^* + BB^* = 0 \qquad (4.7)$$

and

$$\bar C = CP + DB^*\,. \qquad (4.8)$$

Also, for the para-hermitian adjoint function F∗(s) = D∗ + B∗(−sI − A∗)^{-1}C∗:

$$F^* \sim \Sigma^0 = \begin{pmatrix} -A^* & C^* \\ -B^* & D^* \end{pmatrix}. \qquad (4.9)$$

Proposition 6. Assume that F has the realization F(s) = D + C(sI − A)^{-1}B. Then

$$V^*_{left}(\Sigma) = \left(C^*(\Sigma)\right)^\perp \quad \text{and} \quad C^*_{left}(\Sigma) = \left(V^*(\Sigma)\right)^\perp\,.$$

To utilize this observation, first consider the various subspaces for the realization above of the spectral density Φ. To this aim let us introduce the notation

$$S = \begin{pmatrix} A & 0 & \bar C^* \\ 0 & -A^* & C^* \\ C & -\bar C & R \end{pmatrix}. \qquad (4.10)$$

To describe the finite zero-module of Φ, the corresponding form of equation (3.1),

$$S \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} XW \\ YW \\ 0 \end{pmatrix}, \qquad (4.11)$$

should be considered. Now

$$\begin{pmatrix} 0 & I & 0 \\ -I & 0 & 0 \\ 0 & 0 & I \end{pmatrix} S = S^* \begin{pmatrix} 0 & -I & 0 \\ I & 0 & 0 \\ 0 & 0 & I \end{pmatrix}. \qquad (4.12)$$

Consequently, if X, Y, Z, W give a solution of (4.11), then

$$S^* \begin{pmatrix} -Y \\ X \\ Z \end{pmatrix} = \begin{pmatrix} YW \\ -XW \\ 0 \end{pmatrix}. \qquad (4.13)$$

A special consequence of this is that if X, Y, Z, W and X1, Y1, Z1, W1 are solutions of (4.11), then

$$(X_1^* Y - Y_1^* X)\,W + W_1^*\,(X_1^* Y - Y_1^* X) = 0 \qquad (4.14)$$

holds, giving especially that ker(X1∗Y − Y1∗X) is W-invariant.

Now assume that the realization (4.6) is minimal and consider a maximal solution (Xmax, Ymax, Zmax, Wmax) of (4.11). Then the maximal output-nulling controlled invariant subspace for this realization of Φ will be:

$$V^*(\Sigma_\Phi) = \mathrm{Im}\begin{pmatrix} X_{max} \\ Y_{max} \end{pmatrix} \qquad (4.15)$$

and

$$V^*_{left}(\Sigma_\Phi) = \mathrm{Im}\begin{pmatrix} -Y_{max} \\ X_{max} \end{pmatrix} = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}\mathrm{Im}\begin{pmatrix} X_{max} \\ Y_{max} \end{pmatrix}. \qquad (4.16)$$

Furthermore,

$$C^*(\Sigma_\Phi) = \ker\,[-Y_{max}^*,\; X_{max}^*]\,, \qquad (4.17)$$

and, using the orthogonality property stated in Proposition 6, we obtain that

$$C^*_{left}(\Sigma_\Phi) = \ker\,[X_{max}^*,\; Y_{max}^*] \qquad (4.18)$$

$$\phantom{C^*_{left}(\Sigma_\Phi)} = \ker\left([-Y_{max}^*,\; X_{max}^*]\begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}\right), \qquad (4.19)$$

implying that dim V∗(ΣΦ) = dim V∗left(ΣΦ), dim C∗(ΣΦ) = dim C∗left(ΣΦ), and dim V∗(ΣΦ) + dim C∗(ΣΦ) = 2n.
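The realization (4.6) of the spectral density can be validated numerically. The sketch below (toy data, assumed for illustration only) solves the Lyapunov equation (4.7), forms C̄ and R, and checks the resulting additive decomposition of Φ against F(s)F∗(s) on the imaginary axis, where F∗(s) = (F(−s̄))∗.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable realization of a scalar factor F.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)   # A P + P A* + B B* = 0, eq. (4.7)
Cb = C @ P + D @ B.T                         # C-bar = C P + D B*, eq. (4.8)
R = D @ D.T

F = lambda s: D + C @ np.linalg.inv(s * np.eye(2) - A) @ B
Phi = lambda s: (R + C @ np.linalg.inv(s * np.eye(2) - A) @ Cb.conj().T
                   + Cb @ np.linalg.inv(-s * np.eye(2) - A.conj().T) @ C.conj().T)

# On s = i w the para-hermitian conjugate F*(s) = (F(-conj(s)))* holds.
for w in [0.3, 1.0, 5.0]:
    s = 1j * w
    assert np.allclose(Phi(s), F(s) @ F(-np.conj(s)).conj().T)
```

The check uses only the Lyapunov-completion trick BB∗ = (sI − A)P + P(−sI − A∗) behind (4.6); the agreement is exact up to rounding.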

Finally,

$$R^*(\Sigma_\Phi) = V^*(\Sigma_\Phi) \cap C^*(\Sigma_\Phi) = \begin{pmatrix} X_{max} \\ Y_{max} \end{pmatrix}\ker\,(X_{max}^* Y_{max} - Y_{max}^* X_{max})\,,$$

$$R^*_{left}(\Sigma_\Phi) = V^*_{left}(\Sigma_\Phi) \cap C^*_{left}(\Sigma_\Phi) = \begin{pmatrix} -Y_{max} \\ X_{max} \end{pmatrix}\ker\,(X_{max}^* Y_{max} - Y_{max}^* X_{max})\,.$$

5 Elementary connections between the zeros of the spectral density and its spectral factors

5.1 From spectral factors to spectral density

Lemma 8. Assume that (Π⁰, H⁰, Λ⁰) is a solution of the following equation:

$$\left[\Pi^0,\; H^0\right]\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \left[\Lambda^0 \Pi^0,\; 0\right]. \qquad (5.20)$$

Then X = −P(Π⁰)∗, Y = −(Π⁰)∗, Z = (H⁰)∗ and W = −(Λ⁰)∗ are solutions of the equation (4.11); thus if ξ ∈ V∗(Σ⁰), then (Pξ, ξ)ᵀ ∈ V∗(ΣΦ). This shows that the finite zeros and the zeros corresponding to the kernel module of F∗ appear directly in the zero structure of Φ. If moreover the realizations of F and Φ are minimal, then

$$R^*(\Sigma_\Phi) = \begin{pmatrix} P \\ I \end{pmatrix} R^*(\Sigma^0)\,, \qquad (5.22)$$

where P is the solution of the Lyapunov-equation (4.7).

REMARK 1. Lemma 8 and identity (4.14) imply that

$$(X_{max} - P\,Y_{max})\,\langle W_{max} \mid \ker\,(X_{max} - P\,Y_{max})\rangle \;\perp\; V^*(\Sigma^0)\,.$$

There is an obvious connection between the kernel zero-module of Φ and that of F∗. This is not surprising due to the fact that right zero-modules are considered.

Proposition 7. Assume that Φ = FF∗ holds. Then ker Φ = ker F∗; especially, W(ker Φ) = W(ker F∗).
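The correspondence of Lemma 8 can be verified on a small example. The sketch below uses the toy factor F(s) = (s − 1)/(s + 2) (hypothetical data, not from the paper): it builds a left zero pair (Π⁰, H⁰, Λ⁰) at the zero s = 1, assembles P, C̄, R for the realization (4.6), and checks that X = −P(Π⁰)∗, Y = −(Π⁰)∗, Z = (H⁰)∗, W = −(Λ⁰)∗ indeed solve equation (4.11).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[-3.0]]); D = np.array([[1.0]])

# Left zero pair at s = 1:  Pi0 A + H0 C = L0 Pi0,  Pi0 B + H0 D = 0.
Pi0 = np.array([[1.0]]); H0 = np.array([[-1.0]]); L0 = np.array([[1.0]])
assert np.allclose(Pi0 @ A + H0 @ C, L0 @ Pi0)
assert np.allclose(Pi0 @ B + H0 @ D, 0)

# Data of the realization (4.6) of Phi.
P = solve_continuous_lyapunov(A, -B @ B.T)
Cb = C @ P + D @ B.T
R = D @ D.T

# Lemma 8: the mapped quadruple solves the three block rows of (4.11).
X, Y, Z, W = -P @ Pi0.T, -Pi0.T, H0.T, -L0.T
assert np.allclose(A @ X + Cb.T @ Z, X @ W)
assert np.allclose(-A.T @ Y + C.T @ Z, Y @ W)
assert np.allclose(C @ X - Cb @ Y + R @ Z, 0)
```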

Concerning the finite- and kernel-zeros of F with respect to the action g → Fg: for these to appear in the zero structure of Φ, the factor F∗ should be applied to a function h in such a way that the output F∗h is the “zero-function” of F. This observation is reflected in the next statement, formulated in a more general way needed later.

Lemma 9. Assume that Π, H, Λ are solutions of the following equation:

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} \Pi \\ H \end{pmatrix} = \begin{pmatrix} \Pi\Lambda \\ 0 \end{pmatrix}, \qquad (5.23)$$

and furthermore L, K and U satisfy

$$\begin{pmatrix} -A^* & C^* \\ -B^* & D^* \end{pmatrix}\begin{pmatrix} L \\ K \end{pmatrix} = \begin{pmatrix} L\Lambda + U \\ H \end{pmatrix}. \qquad (5.24)$$

Then for the matrices X = PL + Π, Y = L, Z = K the following equation holds:

$$\begin{pmatrix} A & 0 & \bar C^* \\ 0 & -A^* & C^* \\ C & -\bar C & R \end{pmatrix}\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} X\Lambda + PU \\ Y\Lambda + U \\ 0 \end{pmatrix}. \qquad (5.25)$$

Especially, if U = 0 then we might see that the corresponding zero of F appears among the zeros of Φ. Using Lemma 8 the following immediate corollary is obtained.

Theorem 10. Assume that the given realizations of F and Φ are minimal. Consider an arbitrary solution (Π, H, Λ) of the equation (5.23) for which there exists a triplet (L, K, U) solving the equation (5.24) such that the columns of L are in C∗left(Σ). Then the matrix U can be written in the form U = Πmax V for some matrix V, where (Πmax, Hmax, Λmax) is a maximal solution of (5.23), and

$$[I,\; -P]\,V^*(\Sigma_\Phi) \subset V^*(\Sigma)\,.$$

Corollary 11. Under the conditions of the previous theorem

$$[I,\; -P]\,V^*(\Sigma_\Phi) \supset V^*(\Sigma)\,. \qquad (5.26)$$

The problem of finding those triplets (Π, H, Λ) solving (5.23) for which there exists a triplet (L, K, U = 0) such that (5.24) holds would lead to solving a Riccati-equation. To assure the solvability of the corresponding Riccati-equation additional conditions are needed. We are going to show that this can be avoided via reducing the problem to left-invertible spectral factors of the same spectral density. To obtain this transformation a special Riccati-equation should be solved, but it can be proven that this Riccati-equation always has a solution. The following statement shows that in the case of left-invertibility the solvability of the equations (5.24) is always guaranteed: if the system Σ is left-invertible, then there exists a solution (L, K, U) of the equation (5.24).

5.2 From spectral density to spectral factors

Continuing with the elementary connections, the next statement shows that some converse of the previous constructions also holds.

Lemma 12. Assume that

$$\begin{pmatrix} A & 0 & \bar C^* \\ 0 & -A^* & C^* \\ C & -\bar C & R \end{pmatrix}\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} XW \\ YW \\ 0 \end{pmatrix} \qquad (5.27)$$

holds. Define the matrices Π and H as follows:

$$\Pi = X - PY\,, \qquad H = -B^*Y + D^*Z\,.$$

Then

$$\begin{pmatrix} -A^* & C^* \\ -B^* & D^* \end{pmatrix}\begin{pmatrix} Y \\ Z \end{pmatrix} = \begin{pmatrix} YW \\ H \end{pmatrix}, \qquad \begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} \Pi \\ H \end{pmatrix} = \begin{pmatrix} \Pi W \\ 0 \end{pmatrix}.$$
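The two directions fit together: a zero pair of the factor is pushed into the density by Lemma 9, and Lemma 12 recovers it. The sketch below runs this round trip on the toy factor F(s) = (s − 1)/(s + 2); every number (including the triplet L, K, U tuned so that U = 0) is hypothetical illustration data.

```python
import numpy as np

A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[-3.0]]); D = np.array([[1.0]])
P = np.array([[0.25]])              # solves A P + P A* + B B* = 0
Cb = C @ P + D @ B.T                # C-bar of (4.8)
R = D @ D.T

# Right zero pair of F at s = 1: equation (5.23).
Pi = np.array([[1.0]]); H = np.array([[3.0]]); Lam = np.array([[1.0]])
assert np.allclose(A @ Pi + B @ H, Pi @ Lam) and np.allclose(C @ Pi + D @ H, 0)

# Companion triplet (L, K, U) solving (5.24), chosen here with U = 0.
L = np.array([[-4.5]]); K = np.array([[-1.5]]); U = np.array([[0.0]])
assert np.allclose(-A.T @ L + C.T @ K, L @ Lam + U)
assert np.allclose(-B.T @ L + D.T @ K, H)

# Lemma 9: X = P L + Pi, Y = L, Z = K solve (5.25).
X, Y, Z = P @ L + Pi, L, K
assert np.allclose(A @ X + Cb.T @ Z, X @ Lam + P @ U)
assert np.allclose(-A.T @ Y + C.T @ Z, Y @ Lam + U)
assert np.allclose(C @ X - Cb @ Y + R @ Z, 0)

# Lemma 12 recovers the zero pair of the factor from (X, Y, Z).
assert np.allclose(X - P @ Y, Pi)
assert np.allclose(-B.T @ Y + D.T @ Z, H)
```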

6 Left-invertible spectral factors

Summarizing and extending the results listed so far, for left-invertible spectral factors the following connections hold.

Theorem 13. Assume that F is left-invertible, the given realizations of F and Φ = FF∗ are minimal, and property (4.5) holds. Assume that (Xmax, Ymax, Zmax, Wmax) is the maximal solution of (4.11). Then

(i) V∗(Σ) = Im (Xmax − P Ymax);

(ii) R∗(Σ) = (Xmax − P Ymax) ⟨Wmax | ker (Xmax − P Ymax)⟩;

(iii) V∗(Σ⁰) = Ymax ker (Xmax − P Ymax), or equivalently C∗(Σ⁰) = ker (X∗max − Y∗max P).

Furthermore, the finite zeros of F and F∗ appear in the spectrum of Wmax, R∗(Σ⁰) = R∗(Φ), and

dim C∗(ΣΦ) = dim C∗(Σ) + dim C∗(Σ⁰).

7 General case

If F is not necessarily left-invertible, then there exists a tall inner function L such that FL is already left-invertible and moreover (FL)(FL)∗ = Φ, i.e. FL is a spectral factor of the same spectral density. Let us point out that the role of the inner function L is to eliminate the kernel zero-module of the given realization of F: the corresponding zeros are transformed into the finite zero-module. This tall inner function can be constructed as follows.

Lemma 14. Let (C, A) be an observable pair. Consider maximal solutions of (3.1) and (3.3), assuming – w.l.o.g. – that the column-vectors of the matrix R0 are orthonormal. Then there exists a matrix β such that the function

$$K(s) = R_0 + (H_{max} + R_0\beta)\left(sI - (\Lambda_{max} + \alpha_0\beta)\right)^{-1}\alpha_0$$

is a tall inner (in the continuous-time sense) function.

Consider a square inner extension of K, denoted by [K, L]. Then the function FL is left-invertible; moreover, it provides a left spectral-factor of Φ = FF∗. The following theorem gives a full description of the connections between the zero modules of Φ and its spectral factors.

Theorem 15. Consider a transfer function F with minimal realization. Form Φ = FF∗ and assume that (4.6) provides a minimal realization for Φ and, moreover, that property (4.5) holds. Then (Xmax, Ymax, Zmax, Wmax) determines a maximal solution of the equation

$$\begin{pmatrix} A & 0 & \bar C^* \\ 0 & -A^* & C^* \\ C & -\bar C & R \end{pmatrix}\begin{pmatrix} X_{max} \\ Y_{max} \\ Z_{max} \end{pmatrix} = \begin{pmatrix} X_{max} W_{max} \\ Y_{max} W_{max} \\ 0 \end{pmatrix} \qquad (7.1)$$

for which the kernel of (Xmax; Ymax) is {0}. Furthermore,

(i) C∗(Σ) = [I, −P] ker [−Y∗max, X∗max], or equivalently C∗(Σ) = R∗(Σ) ∨ [I, −P] C∗(ΣΦ);

(ii) V∗(Σ) = Im (Xmax − P Ymax);

(iii) V∗(Σ⁰) = Ymax ker (Xmax − P Ymax), or equivalently C∗(Σ⁰) = ker (X∗max − Y∗max P).
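The defining property of a tall inner function, K∗(s)K(s) = I, reduces on the imaginary axis to a pointwise orthonormality check, since there −s̄ = s. The sketch below uses an assumed toy example of the same shape as K above (not the function constructed in Lemma 14).

```python
import numpy as np

# Hypothetical tall inner function: a 2x1 column, orthonormal on s = i w.
K = lambda s: np.array([[(s - 1) / (s + 1)], [1.0]]) / np.sqrt(2)

# On the imaginary axis the para-hermitian conjugate coincides with the
# ordinary conjugate transpose, so K*(s) K(s) = I can be checked pointwise.
for w in [0.0, 0.7, 3.0]:
    s = 1j * w
    Ks = K(s)
    assert np.allclose(Ks.conj().T @ Ks, np.eye(1))
```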

Bibliography

[1] H. Aling and J. M. Schumacher. A nine-fold decomposition for linear systems. Int. J. Control, 39(4):779–805, 1984.

[2] J. Ball, I. Gohberg, and L. Rodman. Interpolation of Rational Matrix Functions. Birkhauser, 1990.

[3] G. D. Forney. Minimal bases of rational vector spaces, with application to multivariate linear systems. SIAM J. Control, 13(3):493–520, 1975.

[4] A. Gombani and Gy. Michaletzky. On the Nevanlinna-Pick interpolation problem: Analysis of the McMillan degree of the solutions. Linear Algebra and Applic., 425:486–517, 2007.

[5] T. Kailath. Linear Systems. Prentice-Hall, Englewood Cliffs, 1980.

[6] A. Lindquist, Gy. Michaletzky, and G. Picci. Zeros of spectral factors, the geometry of splitting subspaces, and the algebraic Riccati inequality. SIAM J. Control Optim., 33:365–401, 1995.

[7] Gy. Michaletzky. Quasi-similarity of compressed shift operators. Acta Sci. Math. (Szeged), 69:223–239, 2003.

[8] A. S. Morse. Structural invariants of linear multivariable systems. SIAM J. Control, 11(3):446–465, 1973.

[9] H. Rosenbrock. State-space and Multivariable Theory. Thomas Nelson and Sons, 1970.

[10] C. B. Schrader and M. K. Sain. Research on system zeros: a survey. Int. J. Control, 50(4):1407–1433, 1989.

[11] B. F. Wyman and M. K. Sain. On the zeros and poles of a transfer function. Linear Algebra and Applic., 50:621–637, 1983.

[12] B. F. Wyman, M. K. Sain, G. Conte, and A. M. Perdon. On the zeros of minimal realizations. Linear Algebra and Applic., 122–124:123–144, 1989.

[13] B. F. Wyman, M. K. Sain, G. Conte, and A. M. Perdon. Poles and zeros of matrices of rational functions. Linear Algebra and Applic., 157:113–139, 1991.