
On The Analytical Expansion of Transfer Matrices

Dionisio Bernal¹ and Omer Tigli²

¹ Associate Professor, Northeastern University, Boston MA, E-mail: bernal@neu.edu
² Graduate Student, Northeastern University, Boston MA, E-mail: tigli.o@neu.edu

ABSTRACT

This paper presents a non-modal approach to expand a Laplace domain map between a set of input and output
channels into a square transfer matrix relating all the measured coordinates. The approach uses reciprocity
(enforced at a number of values of the Laplace variable s) to form a set of linear equations from which one solves
for additional columns of the input-to-state matrix B (corresponding to unloaded coordinates) and for additional
rows of the state-to-output matrix C (for coordinates that are loaded but where output is not measured). The
number of s values used to form the equations cannot be less than the identified order but can be as large as
desired. The approach presented offers conceptual simplicity and, in the case of multiple collocations, direct
consideration of the redundant information.

Introduction

Identification results (for linear systems) are often presented to users in terms of a discrete time realization
quadruple {A,B,C,D} which provides an input-to-state-to-output map that matches the measured data to within
some tolerance. Elimination of the state to obtain a direct algebraic relation between inputs and outputs leads to
the well known expression for the transfer matrix G(s) = C_c (sI - A_c)^{-1} B_c + D_c, where the subscript c indicates that
the matrices are the continuous time version of the discrete time realization results. Expressions connecting the
discrete and the continuous time matrices depend on the behavior (or assumed behavior) of the input between
sampling times and can be found in various references, e.g., [1,2]. As will be reviewed in the following section,
C_c ∈ R^{m×2n} and B_c ∈ R^{2n×r}, where m is the number of measured outputs, r the number of independent
inputs and 2n the order of the system, so one can readily see that the experimental G(s) ∈ R^{m×r}. It is well known,
however, that if the intersection of the sets of input and output coordinates is not empty, reciprocity can be taken
advantage of and, ultimately, G(s) can be defined over the coordinates p, where p = m ∪ r. Traditionally the
expansion from the m×r partition to the full p×p matrix is performed in the modal domain, where the approach is
not always transparent to users who are less familiar with the details of the underlying theories [2,3,4]. This paper
presents a non-modal approach to expand the B_c and C_c matrices to the full set of p coordinates, so the standard
expression for the transfer function given previously can be used directly to obtain the full size transfer matrix.

While all consistent procedures to formulate the transfer matrix lead to exact solutions when the modes in the
identified bandwidth are exactly identified (in the truncated sense), in practice the identification results are
imprecise and the embedded error is processed differently depending on the details of the approach used to
enforce reciprocity. In this regard, it is opportune to note that the non-modal approach introduced here is not
presented as a way to improve accuracy over the traditional modal-based approach, but rather as an alternative
whose fundamental merit is its conceptual simplicity.

State-Space Formulation

Since all the derivations that follow are embedded in the context of the state-space formulation, a brief review of
the associated theory is presented. A linear finite dimensional structural system subjected to a time varying
excitation u(t) can be described by the following ordinary linear differential equation:

M\ddot{w} + \zeta\dot{w} + Kw = b_2 u(t)    (1)

where w is the displacement vector at the degrees of freedom, M, ζ and K are the mass, damping and stiffness
matrices respectively, and b_2 is a vector describing the spatial distribution of the excitation u(t). Taking

x_1 = w
x_2 = \dot{w}    (2a,b)

and substituting into eq.1 one gets

\begin{Bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{Bmatrix} = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}\zeta \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} + \begin{bmatrix} 0 \\ M^{-1}b_2 \end{bmatrix} u(t)    (3)

which can conveniently be written as

\dot{x} = A_c x + B_c u    (4)

where x = [x_1 \;\; x_2]^T is the state vector. Assuming that the available measurements are linear combinations of the
state one can write:

y = C_c x + D_c u    (5)

where the matrix D_c is zero unless acceleration is measured, and C_c is a matrix of appropriate dimensions which
relates the state to the output measurements. The solution to eq.4 is given by the well known expression

x(t) = e^{A_c t} x_o + \int_0^t e^{A_c (t-\tau)} B_c u(\tau) \, d\tau    (6)

When eq.6 is substituted into eq.5, we get

y(t) = C_c e^{A_c t} x_o + C_c \int_0^t e^{A_c (t-\tau)} B_c u(\tau) \, d\tau + D_c u(t)    (7)

where x_o is the initial state. Since experimental data from structural vibration tests are usually obtained in digital
form, what is obtained in practice is a realization in discrete time, which can be expressed as

x_{k+1} = A x_k + B u_k
y_k = C x_k + D u_k    (8a,b)

where the relation between the discrete time matrices and those in the continuous time realization depends on the
behavior of the input between time steps. For the common zero order hold assumption (input is assumed constant
between time steps), one has B = [A - I] A_c^{-1} B_c and D = D_c. Moreover, independently of the inter-sample behavior of
the input, A = e^{A_c \Delta t} and C = C_c. By explicitly carrying out the recursion in eq.8 one finds that the input-output
relationship in discrete time can be written as:

y_k = \sum_{i=1}^{k} C A^{k-i} B u_i + C A^k x_o    (9)

where the terms C A^{k-i} B are known as the Markov parameters. The collection of Markov parameters for increasing
k is the pulse response of the system.
From a State-Space Representation to the Transfer Matrix

Taking a Laplace transform of eq.4 (assuming zero initial conditions), solving for the state vector and substituting
the result into eq.5 gives,

y(s) = \left( C_c (sI - A_c)^{-1} B_c + D_c \right) u(s)    (10)

from which, by definition,

G(s) = C_c (sI - A_c)^{-1} B_c + D_c    (11)
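As a point of reference, eq.11 translates directly into a short numerical evaluation; the sketch below (Python/NumPy, with Ac, Bc, Cc, Dc assumed available as arrays from the realization) is an illustration rather than part of the formulation.

    import numpy as np

    def transfer_matrix(s, Ac, Bc, Cc, Dc):
        """Evaluate G(s) = Cc (sI - Ac)^(-1) Bc + Dc (eq. 11) at a complex value s."""
        n = Ac.shape[0]
        # solve (sI - Ac) X = Bc rather than forming the inverse explicitly
        return Cc @ np.linalg.solve(s * np.eye(n) - Ac, Bc) + Dc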

Expansion of Matrices Bc and Cc

To avoid unnecessary clutter in the notation we use m, r and p as scalars when appropriate but also as the
corresponding sets; in particular we take

m := {output coordinates}    (12)
r := {input coordinates}    (13)
p = m ∪ r    (14)
v_1 = m ∩ r    (15)
v_2 = m − v_1    (16)
v_3 = r − v_1    (17)

Eq.11 can be written as

\begin{bmatrix} C_{v_1} \\ C_{v_2} \\ C_{v_3} \end{bmatrix} Z(s) \begin{bmatrix} B_{v_1} & B_{v_2} & B_{v_3} \end{bmatrix} = \begin{bmatrix} C_{v_1} Z(s) B_{v_1} & C_{v_1} Z(s) B_{v_2} & C_{v_1} Z(s) B_{v_3} \\ C_{v_2} Z(s) B_{v_1} & C_{v_2} Z(s) B_{v_2} & C_{v_2} Z(s) B_{v_3} \\ C_{v_3} Z(s) B_{v_1} & C_{v_3} Z(s) B_{v_2} & C_{v_3} Z(s) B_{v_3} \end{bmatrix}    (18)

where Z ∈ C^{2n×2n} is

Z(s) = [sI - A_c]^{-1}    (19)

and we have assumed that we are interested in displacement or velocity transfer matrices, so D_c ≡ 0. In eq.18, C_{v_3} and
B_{v_2} are unknown at the outset (not obtained in the realization). We note, for clarity, that the subscript c has been
dropped from matrices B_c and C_c to simplify the notation. Reciprocity allows us to write

C_{v_1} Z(s) B_{v_2} = \left( C_{v_2} Z(s) B_{v_1} \right)^T    (20)

In eq.20 the right hand side is known so we define

P(s) = \left( C_{v_2} Z(s) B_{v_1} \right)^T    (21)

and write

C_{v_1} Z(s) B_{v_2} = P(s)    (22)


Evaluating eq.22 at t different s values and combining the equations one gets

\begin{bmatrix} C_{v_1} Z(s_1) \\ C_{v_1} Z(s_2) \\ \vdots \\ C_{v_1} Z(s_t) \end{bmatrix} B_{v_2} = \begin{bmatrix} P(s_1) \\ P(s_2) \\ \vdots \\ P(s_t) \end{bmatrix}    (23)

from which the missing columns of the B_c matrix can be obtained as

B_{v_2} = Q^{\dagger} P    (24)

where Q ∈ C^{(v_1 t) \times 2n} and P ∈ C^{(v_1 t) \times v_2} are

Q = \begin{bmatrix} C_{v_1} Z(s_1) \\ C_{v_1} Z(s_2) \\ \vdots \\ C_{v_1} Z(s_t) \end{bmatrix} \quad \text{and} \quad P = \begin{bmatrix} P(s_1) \\ P(s_2) \\ \vdots \\ P(s_t) \end{bmatrix}    (25a,b)
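A minimal numerical sketch of eqs.21-25 is given below (Python/NumPy). The inputs Ac, C_v1, C_v2, B_v1 and the list s_values are assumed to be available from the identified realization and from a choice of s values; the least-squares solver plays the role of the pseudo-inverse in eq.24, so redundant s values are handled automatically.

    import numpy as np

    def Z(s, Ac):
        """Z(s) = (sI - Ac)^(-1), eq. 19."""
        return np.linalg.inv(s * np.eye(Ac.shape[0]) - Ac)

    def expand_B(Ac, C_v1, C_v2, B_v1, s_values):
        """Solve eq. 23 for the missing input columns B_v2 = Q† P (eq. 24)."""
        Q = np.vstack([C_v1 @ Z(s, Ac) for s in s_values])             # eq. 25a
        P = np.vstack([(C_v2 @ Z(s, Ac) @ B_v1).T for s in s_values])  # eq. 25b, P(s) from eq. 21
        B_v2, *_ = np.linalg.lstsq(Q, P, rcond=None)
        return B_v2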

The expansion of matrix Cc follows an entirely analogous sequence, namely from reciprocity

C_{v_3} Z(s) B_{v_1} = \left( C_{v_1} Z(s) B_{v_3} \right)^T    (26)

so we define

R(s) = \left( C_{v_1} Z(s) B_{v_3} \right)^T    (27)

and write

C_{v_3} Z(s) B_{v_1} = R(s)    (28)

Evaluating eq.28 for t different s values gives

C_{v_3} \begin{bmatrix} Z(s_1) B_{v_1} & Z(s_2) B_{v_1} & \cdots & Z(s_t) B_{v_1} \end{bmatrix} = \begin{bmatrix} R(s_1) & R(s_2) & \cdots & R(s_t) \end{bmatrix}    (29)

so the missing rows of matrix C_c can be found from

C_{v_3} = R T^{\dagger}    (30)

where T ∈ C^{2n \times (t \, v_1)} and R ∈ C^{v_3 \times (t \, v_1)} are

R = \begin{bmatrix} R(s_1) & R(s_2) & \cdots & R(s_t) \end{bmatrix} \quad \text{and} \quad T = \begin{bmatrix} Z(s_1) B_{v_1} & Z(s_2) B_{v_1} & \cdots & Z(s_t) B_{v_1} \end{bmatrix}    (31a,b)

Selection of s Values

The solutions for the missing columns of matrix B_c and the missing rows of matrix C_c involve the pseudo-inverses
of matrices Q and T, whose conditioning and dimension depend on the number and the location of the s values
selected from the complex plane. The question that requires answering is what guidelines should be followed to
select a set of s values to perform the evaluation. One can easily confirm that the minimum number of s values
needed to make the matrices Q and T square (so the solution has a chance of being unique) is 2n/v_1. The fact
that the matrices are square, however, does not ensure full rank or good conditioning, and it turns out that to
satisfy these requirements it is necessary to select s values that are near the poles. A clarification of why this is so
is best carried out when the state-space matrices are represented in the modal domain. Consider a spectral
decomposition of the system matrix A_c

A_c = \psi \Lambda \psi^{-1}    (32)

Introducing the transformation

x = ψz (33)

eq.4 becomes,

\psi \dot{z} = \psi \Lambda \psi^{-1} \psi z + B_c u    (34)

from which

\dot{z} = \Lambda z + \psi^{-1} B_c u    (35)

The output expression in eq.5 takes the form

y = C_c \psi z + D_c u    (36)

so the transfer matrix in modal coordinates is

G(s) = \tilde{C} (sI - \Lambda)^{-1} \tilde{B} + D_c    (37)

where

\Lambda = \text{diag}(\lambda_1, \ldots, \lambda_n, \lambda_1^*, \ldots, \lambda_n^*), \quad \tilde{B} = \psi^{-1} B_c \quad \text{and} \quad \tilde{C} = C_c \psi    (38a,b,c)
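The modal quantities in eqs.37-38 follow from a single eigendecomposition of the identified Ac; a brief sketch (Python/NumPy, names assumed) is shown below. Note that numerical eigensolvers do not necessarily return the eigenvalues in the conjugate-pair ordering of eq.38a.

    import numpy as np

    def modal_form(Ac, Bc, Cc):
        """Return Lambda, B~ and C~ of eqs. 32 and 38 from the identified matrices."""
        lam, psi = np.linalg.eig(Ac)          # Ac = psi diag(lam) inv(psi), eq. 32
        Lam = np.diag(lam)                    # eq. 38a (pair ordering depends on the solver)
        B_tilde = np.linalg.solve(psi, Bc)    # B~ = inv(psi) Bc, eq. 38b
        C_tilde = Cc @ psi                    # C~ = Cc psi,      eq. 38c
        return Lam, B_tilde, C_tilde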

In the modal domain the matrices Q and T become

Q = S E_1 \quad \text{and} \quad T = E_2 S^T    (39a,b)

where E_1 ∈ C^{(v_1 \cdot 2n) \times 2n} and E_2 ∈ C^{2n \times (v_1 \cdot 2n)} are

E_1 = \text{vec}(\tilde{C}_{v_1}) \quad \text{and} \quad E_2 = \text{vec}(\tilde{B}_{v_1})    (40)

where \tilde{C}_{v_1} and \tilde{B}_{v_1} are the rows and columns of matrices \tilde{C} and \tilde{B} connected with the collocated coordinates,
respectively, and vec is the operator that takes the columns (rows) of a matrix and stacks them along the diagonal as
if they were square, creating tall (wide) matrices, as can be deduced from the dimensions of E_1 and E_2.
E2. The matrix S ∈ C(v1 *t)x(v1 *2n) is given by
⎡ I I ⎤
⎢ s - λ .. s - λ* ⎥
⎢ 1 1 1 n ⎥

S = ⎢ ... .. .. ⎥ (41)
⎢ ⎥
⎢ I ..
I ⎥
⎢⎣ st - λ1 st - λn* ⎥⎦

where the identity blocks are of dimension v_1. Examination of S shows that if the s values are selected such that
each one is at a distance from a pole that is constant and small compared to the distance to the other poles, then
the block matrices on the diagonal dominate and S approaches a scaled identity matrix, so the condition number
is near one.
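In practice this suggests placing one s value next to each identified pole and checking the conditioning of Q (and T) numerically. The sketch below, which reuses the Z helper from the earlier listings, offsets every eigenvalue of Ac by a small positive real distance d (one of the two cases examined in the next section); the offset d and the partition C_v1 are assumptions of the example.

    import numpy as np

    def s_near_poles(Ac, d):
        """One s value per identified pole, offset by d along the positive real axis."""
        return np.linalg.eigvals(Ac) + d

    def cond_Q(Ac, C_v1, s_values):
        """Condition number of Q as assembled in eq. 25a."""
        Q = np.vstack([C_v1 @ Z(s, Ac) for s in s_values])
        return np.linalg.cond(Q)

    # example usage with a hypothetical offset:
    # print(cond_Q(Ac, C_v1, s_near_poles(Ac, d=0.1)))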

Numerical Examination

To illustrate the performance of different sets of s values, consider the 12-story shear frame depicted in fig.1.
Note that there are 5 inputs, 5 outputs and 3 collocated coordinates. For the selected configuration one has

m = {2,4,6,8,11}, r = {2,5,6,9,11}, p = {2,4,5,6,8,9,11}, v_1 = {2,6,11}, v_2 = {4,8}, v_3 = {5,9}

We begin by showing that poor conditioning can result if one is not careful, and that selection along the imaginary
line tends to work well even if one does not make an effort to pick each s value at a constant distance from a pole.
For this purpose we consider two sets of 2n s values, one set from the real line segment [-100, 100] and the other
from the imaginary segment [-100i, 100i]. In each case the points are evenly spaced and include the edges. The
condition numbers of the Q and T matrices in Table 1 illustrate the previous point.

Table 1. Condition numbers of matrices Q and T for the sets of s values selected along the real and imaginary lines

                        Cond(Q)     Cond(T)
    Real line           45587.6     54315.0
    Imaginary line         18.6        23.0
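The comparison of Table 1 can be reproduced in outline with the helpers defined earlier; the sketch below is schematic (the state-space matrices of the frame and the collocated partitions are assumed to be available as Ac, C_v1 and B_v1) and simply forms 2n evenly spaced s values on each segment and reports the condition numbers.

    import numpy as np

    def conditioning_on_segments(Ac, C_v1, B_v1):
        """Condition numbers of Q and T for 2n evenly spaced s values on the
        real segment [-100, 100] and on the imaginary segment [-100i, 100i]."""
        two_n = Ac.shape[0]
        grid = np.linspace(-100.0, 100.0, two_n)
        out = {}
        for label, s_set in (("real line", grid), ("imaginary line", 1j * grid)):
            Q = np.vstack([C_v1 @ Z(s, Ac) for s in s_set])   # eq. 25a
            T = np.hstack([Z(s, Ac) @ B_v1 for s in s_set])   # eq. 31b
            out[label] = (np.linalg.cond(Q), np.linalg.cond(T))
        return out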

As a second exercise we select each s value to be at a constant distance from a pole and compute the condition
numbers of the matrices Q and T as the distance changes. As one gathers from a theoretical examination, the
results depend not only on the distance from s to the pole but also on the actual orientation of each of the (s - λ)
terms. We have computed results for two cases: one where the s values have the same imaginary ordinate as the
pole and are shifted by a positive distance d in the real direction, and one where the shift is d in the positive
imaginary direction. The results obtained are plotted in fig.2.

[Figure 1: 12-story shear frame, with input and output locations, story masses and story stiffnesses indicated]

Conclusions

A non-modal approach to expand a Laplace domain map between a set of input and output channels to a square
transfer matrix relating all the measured coordinates has been presented. The approach is based on enforcing
reciprocity at a number of s values. While one can select as many s values as desired and check the condition
number of the associated matrices to ensure an adequate numerical solution, a minimum number of s values can
be used if the locations of the system poles are known. Specifically, in this case it suffices to select one s value for
each identified pole and to take it at a distance from the pole that is small compared to the distance to the
remaining ones. While an explicit examination of the case of closely spaced poles was not undertaken here, the
theory suggests that in this case one has to be careful not only with the distance but also with the orientation of s
with respect to the poles. The advantage of the approach outlined is conceptual simplicity and perhaps the fact
that the redundant information in the case of multiple collocations is naturally processed in the least-squares
framework.

[Figure 2: two panels plotting the condition number of matrices Q and T against the ratio of the radius of the circle to the average gap distance]

Figure 2. Dependence of the conditioning of matrices Q and T on the distance of the selected s value to the pole:
(a) s values are shifted to the right of the pole parallel to the positive real axis; (b) s values are shifted upward
parallel to the positive imaginary axis

References

[1] Bernal, D., "From state-space realizations to flexibility matrices," Journal of Engineering Mechanics (in press).

[2] Heylen, W., Lammens, S., Sas, P., Modal Analysis Theory and Testing, Katholieke Universiteit Leuven, 1998.

[3] Silva, J.M.M., "An Overview of the Fundamentals of Modal Analysis," Proceedings of the NATO Advanced Study
Institute on Modal Analysis and Testing, Vol. 363, pp. 1-34, 3-15 May 1998.

[4] Ewins, D.J., "Basics and state-of-the-art of modal testing," Sādhanā, Vol. 25, Part 3, pp. 207-220, June 2000.
