
State-Space Solutions and Realization

Dr. Wenjie Dong

Department of Electrical and Computer Engineering,


The University of Texas Rio Grande Valley

W. Dong (UTRGV, Dept. of ECE) 1 / 75


Solution of LTI State Equations
Consider the linear time-invariant (LTI) state-space equation
ẋ(t) = Ax(t) + Bu(t) (1)
y (t) = Cx(t) + Du(t) (2)
where A, B, C , and D are, respectively, n × n, n × p, q × n, and q × p
constant matrices. The problem is to find the solution excited by the
initial state x(0) and the input u(t).
Rewriting (1) as ẋ(t) − Ax(t) = Bu(t) and multiplying both sides by e^{−At} gives

d/dt [e^{−At} x(t)] = e^{−At} B u(t)

∫_0^t d[e^{−Aτ} x(τ)] = ∫_0^t e^{−Aτ} B u(τ) dτ

x(t) = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ

y(t) = C e^{At} x(0) + C ∫_0^t e^{A(t−τ)} B u(τ) dτ + D u(t)
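As a numerical sanity check of the solution formula above: for a constant input u = 1, the convolution integral has the closed form A^{-1}(e^{At} − I)B, so a quadrature of the integral must agree with it. The sketch below uses only numpy; the system matrices and the Taylor-series matrix exponential are illustrative choices, not from the slides.

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Taylor-series matrix exponential; adequate for small, well-scaled matrices
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative stable 2x2 system
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
t = 1.5

# x(t) = e^{At} x(0) + int_0^t e^{A(t-tau)} B u(tau) dtau, with u = 1
taus = np.linspace(0.0, t, 2001)
vals = np.stack([(expm_taylor(A * (t - tau)) @ B).ravel() for tau in taus])
integral = ((vals[1:] + vals[:-1]) / 2.0).sum(axis=0) * (taus[1] - taus[0])
x_t = expm_taylor(A * t) @ x0 + integral.reshape(-1, 1)

# for constant u the convolution integral equals A^{-1}(e^{At} - I) B
x_closed = expm_taylor(A * t) @ x0 \
    + np.linalg.inv(A) @ (expm_taylor(A * t) - np.eye(2)) @ B
```

The two results differ only by the trapezoidal-rule quadrature error.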



Solution of LTI State Equations

The solution of (1)-(2) can be computed with the aid of Laplace transform.

x̂(s) = (sI − A)^{-1}[x(0) + B û(s)]

ŷ(s) = C(sI − A)^{-1}[x(0) + B û(s)] + D û(s)

x(t) = L^{-1}[x̂(s)] = L^{-1}[(sI − A)^{-1}[x(0) + B û(s)]]
     = L^{-1}[(sI − A)^{-1}]x(0) + L^{-1}[(sI − A)^{-1}B û(s)]

y(t) = L^{-1}[ŷ(s)]

One has

e^{At} = L^{-1}[(sI − A)^{-1}]



Solution of LTI State Equations

Methods for finding e^{At}:

1. First, compute the eigenvalues of A; next, find a polynomial h(λ) of degree n − 1 that equals e^{λt} on the spectrum of A; then e^{At} = h(A).
2. Use the Jordan form of A: let A = Q Â Q^{-1}; then e^{At} = Q e^{Ât} Q^{-1}, where Â is in Jordan form and e^{Ât} can readily be obtained.
3. e^{At} = L^{-1}[(sI − A)^{-1}].



Example

Find e^{At}, where

A = [ 0  −1 ]
    [ 1  −2 ]



Example

Solution: Method 1: The eigenvalues of A are −1, −1. Let h(λ) = β0 + β1 λ. If h(λ) equals f(λ) = e^{λt} on the spectrum of A, then

f(−1) = h(−1):  e^{−t} = β0 − β1
f′(−1) = h′(−1):  te^{−t} = β1

Thus we have

h(λ) = e^{−t} + te^{−t} + te^{−t} λ

So

e^{At} = h(A) = (e^{−t} + te^{−t})I + te^{−t} A

       = [ e^{−t} + te^{−t}    −te^{−t}          ]
         [ te^{−t}             e^{−t} − te^{−t}  ]
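A quick numerical confirmation of Method 1: h(A) evaluated at a sample time must match both the entrywise result above and a truncated Taylor series for e^{At} (the series helper is an illustrative stand-in for a matrix-exponential routine).

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, -2.0]])
t = 0.7

def expm_taylor(M, terms=40):
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# h(A) with h(lambda) matched to e^{lambda t} on the spectrum {-1, -1}
eAt_poly = (np.exp(-t) + t * np.exp(-t)) * np.eye(2) + t * np.exp(-t) * A

# the entrywise matrix from the slide
eAt_entry = np.array([[np.exp(-t) + t * np.exp(-t), -t * np.exp(-t)],
                      [t * np.exp(-t), np.exp(-t) - t * np.exp(-t)]])

eAt_series = expm_taylor(A * t)
```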



Example

Method 2:

(sI − A)^{-1} = [ s   1     ]^{-1}  =  1/(s² + 2s + 1) [ s + 2  −1 ]
                [ −1  s + 2 ]                          [ 1      s  ]

             = [ (s+2)/(s+1)²   −1/(s+1)² ]
               [ 1/(s+1)²       s/(s+1)²  ]

e^{At} = L^{-1}[(sI − A)^{-1}] = [ (1 + t)e^{−t}   −te^{−t}      ]
                                 [ te^{−t}         (1 − t)e^{−t} ]



Example
Method 3: The eigenvalues of A are −1, −1. Let h(λ) = β0 + β1 λ. If h(λ) equals f(λ) = (s − λ)^{-1} on the spectrum of A, then

f(−1) = h(−1):  (s + 1)^{-1} = β0 − β1
f′(−1) = h′(−1):  (s + 1)^{-2} = β1

Thus we have

h(λ) = (s + 1)^{-1} + (s + 1)^{-2} + (s + 1)^{-2} λ

and

(sI − A)^{-1} = h(A) = [(s + 1)^{-1} + (s + 1)^{-2}]I + (s + 1)^{-2} A

             = [ (s+2)/(s+1)²   −1/(s+1)² ]
               [ 1/(s+1)²       s/(s+1)²  ]

L^{-1}[(sI − A)^{-1}] = [ (1 + t)e^{−t}   −te^{−t}      ]
                        [ te^{−t}         (1 − t)e^{−t} ]
Example

Find the solution of

ẋ = [ 0  −1 ] x + [ 0 ] u
    [ 1  −2 ]     [ 1 ]



Example

Solution:

x(t) = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ

     = [ (1 + t)e^{−t}   −te^{−t}      ] x(0)  +  [ −∫_0^t (t − τ)e^{−(t−τ)} u(τ) dτ      ]
       [ te^{−t}         (1 − t)e^{−t} ]          [ ∫_0^t [1 − (t − τ)]e^{−(t−τ)} u(τ) dτ ]
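For a unit-step input with x(0) = 0, the two integrals above evaluate in closed form (x1(t) = −(1 − (1 + t)e^{−t}), x2(t) = te^{−t}), so a crude forward-Euler simulation should reproduce them. Step size and horizon below are arbitrary choices for illustration.

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, -2.0]])
B = np.array([[0.0], [1.0]])
t_end, n = 2.0, 40000
dt = t_end / n

x = np.zeros((2, 1))
for _ in range(n):                      # forward-Euler with u(t) = 1
    x = x + dt * (A @ x + B)

# evaluating the convolution integrals for u = 1 and x(0) = 0:
#   x1(t) = -(1 - (1 + t) e^{-t}),  x2(t) = t e^{-t}
t = t_end
x_exact = np.array([[-(1.0 - (1.0 + t) * np.exp(-t))],
                    [t * np.exp(-t)]])
```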



Solution of LTI State Equations

Based on the solution of a system, the following conclusions can be obtained:

1. If every eigenvalue of A, simple or repeated, has a negative real part, then every zero-input response approaches zero as t → ∞. If A has an eigenvalue, simple or repeated, with a positive real part, then most zero-input responses grow unbounded as t → ∞.
2. If A has some eigenvalues with zero real part, all of index 1, and the remaining eigenvalues all have negative real parts, then no zero-input response will grow unbounded. However, if the index is 2 or higher, then some zero-input response may become unbounded.



Discretization
Consider the continuous-time state equation
ẋ(t) = Ax(t) + Bu(t) (3)
y (t) = Cx(t) + Du(t) (4)
If the set of equations is to be computed on a digital computer, it must be
discretized.
Method 1: From

ẋ(t) = lim_{T→0} [x(t + T) − x(t)] / T

we obtain the first-order approximation

x(t + T) ≈ x(t) + Ax(t)T + Bu(t)T    (5)

If t = kT for k = 0, 1, . . ., then

x((k + 1)T) = (I + TA)x(kT) + TBu(kT)
y(kT) = Cx(kT) + Du(kT)

This discretization is the easiest to carry out but yields the least accurate results for the same T.
Discretization

Method 2: Hold the input constant over each sampling interval,

u(t) = u(kT) =: u[k]  for kT ≤ t < (k + 1)T

and define x[k] := x(kT). One has

x[k] = x(kT) = e^{AkT} x(0) + ∫_0^{kT} e^{A(kT−τ)} B u(τ) dτ

x[k + 1] = e^{AT} x[k] + ( ∫_{kT}^{(k+1)T} e^{A(kT+T−τ)} dτ ) B u[k]

         = e^{AT} x[k] + ( ∫_0^T e^{Aα} dα ) B u[k],   α = kT + T − τ

x[k + 1] = A_d x[k] + B_d u[k]    (6)
y[k] := y(kT) = Cx(kT) + Du(kT) = C_d x[k] + D_d u[k]    (7)



Discretization

where

A_d = e^{AT}    (8)

B_d = ( ∫_0^T e^{Aτ} dτ ) B = A^{-1}(e^{AT} − I)B = A^{-1}(A_d − I)B,  if A is nonsingular    (9)

C_d = C    (10)
D_d = D    (11)

MATLAB command: [ad,bd]=c2d(a,b,T)
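The exact discretization (8)-(9) is easy to check numerically: B_d computed from the closed form A^{-1}(A_d − I)B must match the defining integral. A pure-numpy sketch (the Taylor-series exponential stands in for expm/c2d; the matrices are the example system):

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, -2.0]])   # nonsingular: det A = 1
B = np.array([[0.0], [1.0]])
T = 0.1

def expm_taylor(M, terms=30):
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

Ad = expm_taylor(A * T)
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B        # closed form (9)

# cross-check against Bd = (int_0^T e^{A tau} dtau) B via the trapezoidal rule
taus = np.linspace(0.0, T, 1001)
vals = np.stack([(expm_taylor(A * tau) @ B).ravel() for tau in taus])
Bd_num = ((vals[1:] + vals[:-1]) / 2.0).sum(axis=0).reshape(-1, 1) * (taus[1] - taus[0])
```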



Solution of Discrete-Time Equations
Consider the discrete-time state-space equation

x[k + 1] = Ax[k] + Bu[k]    (12)
y[k] = Cx[k] + Du[k]    (13)

Given x[0] and u[k] for k = 0, 1, . . ., we want to find the solution. Noting

x[1] = Ax[0] + Bu[0]
x[2] = Ax[1] + Bu[1] = A²x[0] + ABu[0] + Bu[1]

and proceeding forward, one can obtain, for k > 0,

x[k] = A^k x[0] + Σ_{m=0}^{k−1} A^{k−1−m} B u[m]    (14)

y[k] = CA^k x[0] + Σ_{m=0}^{k−1} CA^{k−1−m} B u[m] + Du[k]    (15)
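Formula (14) can be checked against the direct recursion (the matrices and input sequence below are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
x0 = np.array([[1.0], [-1.0]])
u = [1.0, -2.0, 0.5, 0.0, 3.0]          # arbitrary input sequence
K = len(u)

# direct recursion x[k+1] = A x[k] + B u[k]
x = x0.copy()
for k in range(K):
    x = A @ x + B * u[k]

# closed form (14): x[K] = A^K x[0] + sum_{m=0}^{K-1} A^{K-1-m} B u[m]
x_formula = np.linalg.matrix_power(A, K) @ x0
for m in range(K):
    x_formula = x_formula + np.linalg.matrix_power(A, K - 1 - m) @ B * u[m]
```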



Equivalent State Equations
Consider the n-dimensional state equation
ẋ(t) = Ax(t) + Bu(t) (16)
y (t) = Cx(t) + Du(t) (17)
where A ∈ R^{n×n}, B ∈ R^{n×p}, and C ∈ R^{q×n}.

Definition 1
Let P be an n × n real nonsingular matrix and let x̄ = Px. Then the state equation

x̄˙(t) = Āx̄(t) + B̄u(t)    (18)
y(t) = C̄x̄(t) + D̄u(t)    (19)

where

Ā = PAP^{-1},  B̄ = PB,  C̄ = CP^{-1},  D̄ = D    (20)

is said to be (algebraically) equivalent to (16)-(17), and x̄ = Px is called an equivalence transformation.
Equivalent State Equations

The characteristic polynomial of (18) is

∆̄(λ) = det(λI − Ā) = det(λPP^{-1} − PAP^{-1}) = det[P(λI − A)P^{-1}]
      = det(P) det(λI − A) det(P^{-1}) = det(λI − A) = ∆(λ)

The transfer matrix of (18)-(19) is

Ḡ(s) = C̄(sI − Ā)^{-1}B̄ + D̄ = C(sI − A)^{-1}B + D = Ĝ(s)    (21)

• Thus equivalent state equations have the same characteristic polynomial and, consequently, the same set of eigenvalues and the same transfer matrix.
• MATLAB command: [ab,bb,cb,db]=ss2ss(a,b,c,d,p)
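These invariants are easy to confirm numerically for a random equivalence transformation (dimensions and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((1, 3))
D = rng.standard_normal((1, 2))
P = rng.standard_normal((3, 3))       # generically nonsingular
Pinv = np.linalg.inv(P)

Ab, Bb, Cb, Db = P @ A @ Pinv, P @ B, C @ Pinv, D

# same characteristic polynomial ...
same_charpoly = np.allclose(np.poly(A), np.poly(Ab))

# ... and same transfer matrix, evaluated at an arbitrary test frequency
s = 2.0 + 1.0j
G  = C  @ np.linalg.inv(s * np.eye(3) - A)  @ B  + D
Gb = Cb @ np.linalg.inv(s * np.eye(3) - Ab) @ Bb + Db
```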



Equivalent State Equations

Two state equations are said to be zero-state equivalent if they have the same transfer matrix, that is,

D + C(sI − A)^{-1}B = D̄ + C̄(sI − Ā)^{-1}B̄

Theorem 2
Two linear time-invariant state equations {A, B, C , D} and {Ā, B̄, C̄ , D̄}
are zero-state equivalent or have the same transfer matrix if and only if
D = D̄ and
CAm B = C̄ Ām B̄, m = 0, 1, 2, . . .



Equivalent State Equations

Proof: Noting

(1 − s^{-1}λ)^{-1} = 1 + s^{-1}λ + s^{-2}λ² + · · ·
s^{-1}(1 − s^{-1}λ)^{-1} = s^{-1} + s^{-2}λ + s^{-3}λ² + · · ·
(sI − A)^{-1} = s^{-1}(I − s^{-1}A)^{-1} = s^{-1}I + s^{-2}A + s^{-3}A² + · · ·

the equality of the two transfer matrices becomes

D + CBs^{-1} + CABs^{-2} + CA²Bs^{-3} + · · · = D̄ + C̄B̄s^{-1} + C̄ĀB̄s^{-2} + C̄Ā²B̄s^{-3} + · · ·

for any s. Comparing the coefficients of s^{-i} proves the theorem. 



Equivalent State Equations

It is clear that (algebraic) equivalence implies zero-state equivalence. In


order for two state equations to be equivalent, they must have the same
dimension.



Example
Consider the circuit in Fig. 1.

Figure 1: A circuit

Let x1 be the current through the inductor and x2 the voltage across the capacitor. For the two loops, by KVL,

−u + ẋ1 + x2 = 0,   1 · (ẋ2 − x1) + x2 = 0

So

[ ẋ1 ]   [ 0  −1 ]     [ 1 ]
[ ẋ2 ] = [ 1  −1 ] x + [ 0 ] u = Ax + Bu

y = [0, 1]x = Cx
Example
If we choose the two mesh currents as x̄1 and x̄2, then

−u + x̄˙1 + 1 · (x̄1 − x̄2) = 0

1 · (x̄2 − x̄1) + ∫ x̄2 dt = 0  ⇒  x̄˙2 − x̄˙1 = −x̄2  ⇒  x̄˙2 = −x̄1 + u

So

[ x̄˙1 ]   [ −1  1 ]     [ 1 ]
[ x̄˙2 ] = [ −1  0 ] x̄ + [ 1 ] u = Āx̄ + B̄u

y = [1, −1]x̄ = C̄x̄

Noting that x̄1 = x1 and x2 = 1 · (x̄1 − x̄2) = x1 − x̄2, we have

x̄ = [ 1   0 ] x = Px
    [ 1  −1 ]

It can be verified that Ā = PAP^{-1}, B̄ = PB, C̄ = CP^{-1}.


Canonical Forms

For simplicity, we consider a fourth-order SISO LTI causal system with the equation

ẋ = Ax + bu
y = Cx + du

Two canonical forms will be considered: the companion form and the modal form.



Companion form
If
|λI − A| = λ4 + β4 λ3 + β3 λ2 + β2 λ + β1
and [b, Ab, A2 b, A3 b] is nonsingular, let

P −1 = Q = [b, Ab, A2 b, A3 b].

The transformation x̄ = Px transforms the system to

x̄˙ = Āx̄ + b̄u, y = C̄ x̄ + du

where  
0 0 0 β1
1 0 0 β2 
Ā = PAP −1 = Q −1 AQ = 


 0 1 0 β3 
0 0 1 β4
is the companion form.
The MATLAB command is: [ab,bb,cb,db,P]=canon(a,b,c,d,’companion’)
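A numerical sketch of the companion-form construction for a random A and b (note the minus signs on the last column, which carry the characteristic-polynomial coefficients; np.poly returns them in descending powers of λ):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 1))

Q = np.hstack([b, A @ b, A @ A @ b, A @ A @ A @ b])   # generically nonsingular
Ab = np.linalg.inv(Q) @ A @ Q                         # companion form

# np.poly(A) = [1, beta4, beta3, beta2, beta1] in the slide's notation,
# so the last column of Ab should be [-beta1, -beta2, -beta3, -beta4]
coeffs = np.poly(A)
last_col = -coeffs[1:][::-1]
shift_cols = np.eye(4)[:, 1:]                         # columns e2, e3, e4
```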
Modal form

Assume the eigenvalues of A are λ1, λ2, α + jβ, α − jβ, where λ1, λ2, α, and β are real.
Let q1, q2, q3, and q4 be the eigenvectors associated with the four eigenvalues. It is clear that q1 and q2 are real and that q3 = q4* is complex.
Let P^{-1} = Q = [q1, q2, q3, q4]; then

Ā = PAP^{-1} = Q^{-1}AQ = [ λ1  0   0      0    ]
                          [ 0   λ2  0      0    ]
                          [ 0   0   α+jβ   0    ]
                          [ 0   0   0      α−jβ ]

Q and Ā can be obtained with the MATLAB command [Q,Ab]=eig(A)


Modal form

If we let P^{-1} = Q = [q1, q2, Re(q3), Im(q3)], then

Ā = PAP^{-1} = Q^{-1}AQ = [ λ1  0   0   0 ]
                          [ 0   λ2  0   0 ]
                          [ 0   0   α   β ]
                          [ 0   0  −β   α ]

and Ā is in modal form.

MATLAB command: [ab,bb,cb,db,P]=canon(a,b,c,d,'modal') or
[ab,bb,cb,db,P]=canon(a,b,c,d)
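The real 2×2 block in the modal form encodes the complex pair α ± jβ; a quick check (the numerical values are arbitrary):

```python
import numpy as np

lam1, lam2, alpha, beta = -1.0, -3.0, -0.5, 2.0
Amodal = np.array([[lam1, 0.0, 0.0, 0.0],
                   [0.0, lam2, 0.0, 0.0],
                   [0.0, 0.0, alpha, beta],
                   [0.0, 0.0, -beta, alpha]])

# the eigenvalues of the modal form are lam1, lam2, alpha +/- j beta
eigs = np.sort_complex(np.linalg.eigvals(Amodal))
expected = np.sort_complex(np.array([lam1, lam2,
                                     alpha + 1j * beta, alpha - 1j * beta]))
```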



Realizations

Every linear time-invariant (LTI) system can be described by the


input-output description
ŷ (s) = Ĝ (s)û(s)
and, if the system is lumped as well, by the state-space equation
description

ẋ(t) = Ax(t) + Bu(t) (22)


y (t) = Cx(t) + Du(t) (23)

If the state equation is known, the transfer matrix can be computed as Ĝ(s) = C(sI − A)^{-1}B + D, which is unique. Conversely, finding a state-space equation for a given transfer matrix is called the realization problem.



Realization

A transfer matrix Ĝ (s) is said to be realizable if there exists a


finite-dimensional state equation (22)-(23) or, simply, {A, B, C , D} such
that
Ĝ (s) = C (sI − A)−1 B + D
and {A, B, C , D} is called a realization of Ĝ (s).
Remarks:
• An LTI distributed system can be described by a transfer matrix, but not
by a finite-dimensional state equation. Thus not every Ĝ (s) is realizable.
• If Ĝ (s) is realizable, then it has infinitely many realizations, not
necessarily of the same dimension. Thus the realization problem is fairly
complex.
We study here only the realizability condition. The other issues will be
studied in later chapters.



Realization

Theorem 3
A transfer matrix Ĝ(s) is realizable if and only if Ĝ(s) is a proper rational matrix.



Realization

Proof: "⇒": If Ĝ(s) is realizable, there exists {A, B, C, D} such that

Ĝ(s) = C(sI − A)^{-1}B + D = 1/det(sI − A) · C[Adj(sI − A)]B + D    (24)

Since each entry of Adj(sI − A) is the determinant of an (n − 1) × (n − 1) submatrix of (sI − A), it has degree at most n − 1. So

Ĝ(∞) = D    (25)

Since D is a constant matrix, Ĝ(s) is a proper rational matrix.



Realization

"⇐": If Ĝ(s) is a q × p proper rational matrix, then Ĝ(s) can be written as

Ĝ(s) = Ĝ(∞) + Ĝsp(s)

where Ĝsp(s) is the strictly proper part of Ĝ(s). Let

d(s) = s^r + α1 s^{r−1} + · · · + α_{r−1} s + α_r

be the least common denominator of all entries of Ĝsp(s). Here we require d(s) to be monic; that is, its leading coefficient is 1. Then Ĝsp(s) can be expressed as

Ĝsp(s) = (1/d(s)) N(s) = (1/d(s)) [N1 s^{r−1} + N2 s^{r−2} + · · · + N_{r−1} s + N_r]    (26)

where the Ni are q × p constant matrices.



Realization

Now we claim that the set of equations

ẋ = [ −α1 Ip   −α2 Ip   · · ·   −α_{r−1} Ip   −αr Ip ]       [ Ip ]
    [  Ip       0       · · ·    0             0     ]       [ 0  ]
    [  0        Ip      · · ·    0             0     ] x  +  [ 0  ] u    (27)
    [  ...      ...     ...      ...           ...   ]       [ .. ]
    [  0        0       · · ·    Ip            0     ]       [ 0  ]

y = [N1, N2, . . . , N_{r−1}, N_r]x + Ĝ(∞)u    (28)

is a realization of Ĝ(s). The matrix Ip is the p × p unit matrix and every 0 is a p × p zero matrix. The A matrix is said to be in block companion form; it consists of r rows and r columns of p × p matrices, so the A matrix has order rp × rp. The B matrix has order rp × p. Because the C matrix consists of the r matrices Ni, each of order q × p, the C matrix has order q × rp.



Realization
Next, we show that (27)-(28) is a realization of Ĝ(s). Let us define

Z = [Z1′, Z2′, . . . , Zr′]′ := (sI − A)^{-1}B    (29)

where each Zi is p × p and Z is rp × p. Then the transfer matrix of (27)-(28) equals

C(sI − A)^{-1}B + Ĝ(∞) = N1 Z1 + N2 Z2 + · · · + Nr Zr + Ĝ(∞)    (30)

By (29),

sZ = AZ + B    (31)

which means that

sZ1 = −α1 Z1 − α2 Z2 − · · · − αr Zr + Ip
sZ2 = Z1,  sZ3 = Z2,  . . . ,  sZr = Z_{r−1}



Realization

So, substituting Z2 = Z1/s, . . . , Zr = Z1/s^{r−1} into the first equation,

sZ1 = −α1 Z1 − (α2/s) Z1 − · · · − (αr/s^{r−1}) Z1 + Ip

( s + α1 + α2/s + · · · + αr/s^{r−1} ) Z1 = ( d(s)/s^{r−1} ) Z1 = Ip

Z1 = ( s^{r−1}/d(s) ) Ip,  Z2 = ( s^{r−2}/d(s) ) Ip,  . . . ,  Zr = ( 1/d(s) ) Ip

So,

C(sI − A)^{-1}B + Ĝ(∞) = (1/d(s))[N1 s^{r−1} + N2 s^{r−2} + · · · + Nr] + Ĝ(∞) = Ĝsp(s) + Ĝ(∞) = Ĝ(s)



Realization
Procedure for realization of a q × p proper rational matrix Ĝ(s):

1. Decompose Ĝ(s) as Ĝ(s) = Ĝ(∞) + Ĝsp(s), where Ĝsp is the strictly proper part of Ĝ(s) and Ĝ(∞) = lim_{s→∞} Ĝ(s).
2. Let d(s) = s^r + α1 s^{r−1} + · · · + α_{r−1} s + αr be the least common denominator of all entries of Ĝsp(s).
3. Express Ĝsp(s) as

   Ĝsp(s) = (1/d(s)) N(s) = (1/d(s)) [N1 s^{r−1} + N2 s^{r−2} + · · · + N_{r−1} s + Nr]    (32)

   where the Ni are q × p constant matrices.


Realization

4. The state-space equations are

ẋ = [ −α1 Ip   −α2 Ip   · · ·   −α_{r−1} Ip   −αr Ip ]       [ Ip ]
    [  Ip       0       · · ·    0             0     ]       [ 0  ]
    [  0        Ip      · · ·    0             0     ] x  +  [ 0  ] u    (33)
    [  ...      ...     ...      ...           ...   ]       [ .. ]
    [  0        0       · · ·    Ip            0     ]       [ 0  ]

y = [N1, N2, . . . , N_{r−1}, Nr]x + Ĝ(∞)u    (34)

The realization in (33)-(34) has dimension rp and is said to be in controllable canonical form.



Example

Consider the proper rational matrix

Ĝ(s) = [ (4s − 10)/(2s + 1)      3/(s + 2)         ]
       [ 1/((2s + 1)(s + 2))     (s + 1)/(s + 2)²  ]

Find its realization.



Example

Solution: Here Ĝ(∞) = lim_{s→∞} Ĝ(s) and d(s) = (s + 0.5)(s + 2)² = s³ + 4.5s² + 6s + 2, so

Ĝ(s) = [ 2  0 ]  +  1/(s³ + 4.5s² + 6s + 2) [ −6(s + 2)²    3(s + 2)(s + 0.5) ]
       [ 0  0 ]                             [ 0.5(s + 2)    (s + 1)(s + 0.5)  ]

     = [ 2  0 ]  +  1/(s³ + 4.5s² + 6s + 2) ( [ −6   3 ] s²  +  [ −24   7.5 ] s  +  [ −24   3   ] )
       [ 0  0 ]                               [  0   1 ]        [ 0.5   1.5 ]       [  1    0.5 ]


Example
A realization is

ẋ = [ −4.5    0    −6    0   −2    0 ]       [ 1  0 ]
    [   0   −4.5    0   −6    0   −2 ]       [ 0  1 ]
    [   1     0     0    0    0    0 ] x  +  [ 0  0 ] u
    [   0     1     0    0    0    0 ]       [ 0  0 ]
    [   0     0     1    0    0    0 ]       [ 0  0 ]
    [   0     0     0    1    0    0 ]       [ 0  0 ]

y = [ −6   3   −24   7.5   −24   3   ] x  +  [ 2  0 ] u
    [  0   1   0.5   1.5    1    0.5 ]       [ 0  0 ]

This is a six-dimensional realization.
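The claimed realization can be verified by evaluating C(sI − A)^{-1}B + D at an arbitrary test point and comparing with Ĝ(s):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
a1, a2, a3 = 4.5, 6.0, 2.0                    # d(s) = s^3 + 4.5 s^2 + 6 s + 2
A = np.block([[-a1 * I2, -a2 * I2, -a3 * I2],
              [I2, Z2, Z2],
              [Z2, I2, Z2]])
B = np.vstack([I2, Z2, Z2])
C = np.array([[-6.0, 3.0, -24.0, 7.5, -24.0, 3.0],
              [0.0, 1.0, 0.5, 1.5, 1.0, 0.5]])
D = np.array([[2.0, 0.0], [0.0, 0.0]])

s = 1.0 + 0.5j                                # arbitrary test point
G_real = C @ np.linalg.inv(s * np.eye(6) - A) @ B + D
G_orig = np.array([[(4*s - 10) / (2*s + 1), 3 / (s + 2)],
                   [1 / ((2*s + 1) * (s + 2)), (s + 1) / (s + 2)**2]])
```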
Realization
Other methods to realize a proper transfer matrix Ĝ(s) ∈ R^{q×p}:

Method 1: Let Ĝ(s) = [Ĝc1(s), . . . , Ĝcp(s)] and u = [u1, . . . , up]′. Then

ŷ(s) = Ĝc1(s)û1(s) + · · · + Ĝcp(s)ûp(s) =: ŷc1(s) + ŷc2(s) + · · · + ŷcp(s)

as shown in Figure 2(a).

Method 2: Let

Ĝ(s) = [ Ĝr1(s) ]
       [  ...   ]
       [ Ĝrq(s) ]

and let yi be the ith component of the output vector y. Then

ŷ(s) = [ ŷ1(s) ]   [ Ĝr1(s)û(s) ]
       [  ...  ] = [    ...     ]
       [ ŷq(s) ]   [ Ĝrq(s)û(s) ]

as shown in Figure 2(b).
Realization

Figure 2: Realization of Ĝ (s) by columns and by rows

The MATLAB function [a,b,c,d]=tf2ss(num,den) generates the controllable-canonical-form realization for any single-input multiple-output transfer matrix Ĝ(s).



Example

Consider the proper rational matrix

Ĝ(s) = [ (4s − 10)/(2s + 1)      3/(s + 2)         ]
       [ 1/((2s + 1)(s + 2))     (s + 1)/(s + 2)²  ]

Find its realization.



Example
Solution:

Ĝ(s) = [ (4s² − 2s − 20)/(2s² + 5s + 2)    (3s + 6)/(s² + 4s + 4) ]  = [Ĝc1, Ĝc2]
       [ 1/(2s² + 5s + 2)                  (s + 1)/(s² + 4s + 4)  ]

The realization of the first column of Ĝ(s) is

ẋ1 = A1 x1 + B1 u1 = [ −2.5  −1 ] x1 + [ 1 ] u1    (35)
                     [   1    0 ]      [ 0 ]

yc1 = C1 x1 + d1 u1 = [ −6  −12 ] x1 + [ 2 ] u1    (36)
                      [  0  0.5 ]      [ 0 ]

The realization of the second column of Ĝ(s) is

ẋ2 = A2 x2 + B2 u2 = [ −4  −4 ] x2 + [ 1 ] u2    (37)
                     [  1   0 ]      [ 0 ]

yc2 = C2 x2 + d2 u2 = [ 3  6 ] x2 + [ 0 ] u2    (38)
                      [ 1  1 ]      [ 0 ]
Example

MATLAB commands:
num1=[4,-2,-20;0,0,1]; den1=[2,5,2]; [a1,b1,c1,d1]=tf2ss(num1,den1)
num2=[3,6;1,1]; den2=[1,4,4]; [a2,b2,c2,d2]=tf2ss(num2,den2)
sys1=ss(a1,b1,c1,d1); sys2=ss(a2,b2,c2,d2);

Other commands: sys=parallel(sys1,sys2), sys=series(sys1,sys2),


sys=append(sys1,sys2,...sysN)



Example

The two realizations can be combined as

[ ẋ1 ]   [ A1  0  ] [ x1 ]   [ b1  0  ] [ u1 ]
[ ẋ2 ] = [ 0   A2 ] [ x2 ] + [ 0   b2 ] [ u2 ]    (39)

y = yc1 + yc2 = [C1, C2]x + [d1, d2]u    (40)

or

ẋ = [ −2.5  −1    0    0 ]       [ 1  0 ]
    [   1    0    0    0 ] x  +  [ 0  0 ] u    (41)
    [   0    0   −4   −4 ]       [ 0  1 ]
    [   0    0    1    0 ]       [ 0  0 ]

y = [ −6  −12   3   6 ] x  +  [ 2  0 ] u    (42)
    [  0  0.5   1   1 ]       [ 0  0 ]
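Again, the combined realization (41)-(42) can be checked against Ĝ(s) at an arbitrary test point; note that this realization has dimension 4, smaller than the six-dimensional one obtained earlier from the common denominator.

```python
import numpy as np

A = np.array([[-2.5, -1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, -4.0, -4.0],
              [0.0, 0.0, 1.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
C = np.array([[-6.0, -12.0, 3.0, 6.0], [0.0, 0.5, 1.0, 1.0]])
D = np.array([[2.0, 0.0], [0.0, 0.0]])

s = 0.3 + 1.0j                               # arbitrary test point
G_real = C @ np.linalg.inv(s * np.eye(4) - A) @ B + D
G_orig = np.array([[(4*s - 10) / (2*s + 1), 3 / (s + 2)],
                   [1 / ((2*s + 1) * (s + 2)), (s + 1) / (s + 2)**2]])
```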



Solution of Linear Time-Varying (LTV) Equations

Consider
ẋ = A(t)x (43)
where A(t) is n × n with continuous functions of t as its entries.
Lemma 4
For each initial state xi(t0), there exists a unique solution xi(t), i = 1, 2, . . . , n. If x1(t0), x2(t0), . . . , xn(t0) are linearly independent, then the columns of X(t) = [x1(t), x2(t), . . . , xn(t)] are linearly independent for any t ≥ t0.



Solution of Linear Time-Varying (LTV) Equations

Proof: We prove it by contradiction. Suppose that at some t1 the vectors x1(t1), x2(t1), . . . , xn(t1) are linearly dependent. Then there exists a nonzero constant vector v such that

x(t1) := X(t1)v = 0

The function x(t) := X(t)v is a solution of (43); by uniqueness of solutions it must be identically zero, so x(t) ≡ 0 for all t and, in particular, at t = t0. This contradicts the linear independence of the initial states. Thus X(t) is nonsingular for all t. 



Solution of Linear Time-Varying (LTV) Equations

• If X(t0) is nonsingular, i.e., the n initial states are linearly independent, then X(t) is called a fundamental matrix of (43).
Remark: The fundamental matrix is not unique.
• Let X(t) be any fundamental matrix of ẋ = A(t)x. Then

Φ(t, t0) := X(t)X^{-1}(t0)

is called the state transition matrix of ẋ = A(t)x. The state transition matrix is also the unique solution of

∂Φ(t, t0)/∂t = A(t)Φ(t, t0)    (44)

with the initial condition Φ(t0, t0) = I.
Remark: The state transition matrix is unique.



Solution of Linear Time-Varying (LTV) Equations

Properties of Φ(t, t0 ):
1 Φ(t, t) = I
2 Φ−1 (t, t0 ) = Φ(t0 , t)
3 Φ(t, t0 ) = Φ(t, t1 )Φ(t1 , t0 )
4 x(t) = Φ(t, t0 )x(t0 ) for any t and t0 .



Solution of Linear Time-Varying (LTV) Equations

Theorem 5
Consider the system

ẋ(t) = A(t)x(t) + B(t)u(t)    (45)
y(t) = C(t)x(t) + D(t)u(t)    (46)

and let Φ(t, t0) be its state transition matrix. The solution of the system is

x(t) = Φ(t, t0)x(t0) + ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ    (47)

y(t) = C(t)Φ(t, t0)x(t0) + C(t) ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ + D(t)u(t)    (48)



Solution of Linear Time-Varying (LTV) Equations

Proof:

ẋ(t) = ∂/∂t [Φ(t, t0)x(t0)] + ∂/∂t ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ
     = A(t)Φ(t, t0)x(t0) + ∫_{t0}^t ∂/∂t [Φ(t, τ)B(τ)u(τ)] dτ + Φ(t, t)B(t)u(t)
     = A(t)Φ(t, t0)x(t0) + ∫_{t0}^t A(t)Φ(t, τ)B(τ)u(τ) dτ + B(t)u(t)
     = A(t)x(t) + B(t)u(t)

Substituting (47) into y(t) = C(t)x(t) + D(t)u(t) gives (48).



Solution of Linear Time-Varying (LTV) Equations

If the initial state x(t0) = 0, then

y(t) = C(t) ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ + D(t)u(t)
     = ∫_{t0}^t [C(t)Φ(t, τ)B(τ) + D(t)δ(t − τ)] u(τ) dτ
     = ∫_{t0}^t G(t, τ)u(τ) dτ

where

G(t, τ) = C(t)Φ(t, τ)B(τ) + D(t)δ(t − τ)    (49)

G(t, τ) is the impulse response matrix: the output at time t excited by an impulse input applied at time τ.



Solution of Linear Time-Varying (LTV) Equations

• If A(t) has the commutative property

A(t) ( ∫_{t0}^t A(τ) dτ ) = ( ∫_{t0}^t A(τ) dτ ) A(t)

for all t and t0, then

Φ(t, t0) = e^{∫_{t0}^t A(τ) dτ} = Σ_{k=0}^∞ (1/k!) ( ∫_{t0}^t A(τ) dτ )^k    (50)

• If A(t) = A is constant, then

Φ(t, τ) = e^{A(t−τ)} = Φ(t − τ)

and X(t) = e^{At} is a fundamental matrix.



Example

Consider the system

ẋ(t) = [ 0  0 ] x(t)
       [ t  0 ]

Find its fundamental matrix and state transition matrix.



Example
Solution: Solving the equation, we have x1(t) = x1(0) and x2(t) = 0.5t² x1(0) + x2(0).
Choose the initial state

x(0) = [ 1 ]  ⇒  x(t) = [ 1     ]
       [ 0 ]            [ 0.5t² ]

and another initial state

x(0) = [ 1 ]  ⇒  x(t) = [ 1          ]
       [ 2 ]            [ 0.5t² + 2  ]

The two initial states are linearly independent; thus

X(t) = [ 1       1         ]
       [ 0.5t²   0.5t² + 2 ]

is a fundamental matrix, and

X^{-1}(t) = [ 0.25t² + 1   −0.5 ]
            [ −0.25t²       0.5 ]
Example

Thus the state transition matrix is

Φ(t, t0) = X(t)X^{-1}(t0) = [ 1       1         ] [ 0.25t0² + 1   −0.5 ]
                            [ 0.5t²   0.5t² + 2 ] [ −0.25t0²       0.5 ]

         = [ 1                0 ]
           [ 0.5(t² − t0²)    1 ]
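The transition matrix of this example can be cross-checked by integrating ∂Φ/∂t = A(t)Φ from Φ(t0, t0) = I with a fine Euler step (step count is an arbitrary accuracy choice):

```python
import numpy as np

def A(t):
    return np.array([[0.0, 0.0], [t, 0.0]])

t0, t1, n = 0.5, 2.0, 30000
dt = (t1 - t0) / n
Phi = np.eye(2)
t = t0
for _ in range(n):                  # forward-Euler on dPhi/dt = A(t) Phi
    Phi = Phi + dt * (A(t) @ Phi)
    t += dt

Phi_exact = np.array([[1.0, 0.0], [0.5 * (t1**2 - t0**2), 1.0]])
```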



Discrete-Time Case
Consider the discrete-time system

x[k + 1] = A[k]x[k] + B[k]u[k]    (51)
y[k] = C[k]x[k] + D[k]u[k]    (52)

with initial state x[k0] and input u[k] for k ≥ k0.

The state transition matrix Φ[k, k0] is defined by

Φ[k + 1, k0] = A[k]Φ[k, k0],  Φ[k0, k0] = I,  for k = k0, k0 + 1, . . .

It can be shown that

Φ[k, k0] = A[k − 1]A[k − 2] · · · A[k0]    (53)

and

Φ[k, k0] = Φ[k, k1]Φ[k1, k0]
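Formula (53) says Φ[k, k0] is a time-ordered product with the newest factor on the left; a short check with an arbitrary A[k]:

```python
import numpy as np

def A(k):                            # an arbitrary time-varying A[k]
    return np.array([[1.0, 0.1 * k], [0.0, 0.9]])

k0, k = 2, 7

# Phi[k, k0] = A[k-1] A[k-2] ... A[k0]
Phi = np.eye(2)
for m in range(k0, k):
    Phi = A(m) @ Phi

# propagating a state step by step must agree with Phi[k, k0] x[k0]
x = np.array([[1.0], [2.0]])
xk = x.copy()
for m in range(k0, k):
    xk = A(m) @ xk
```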
Discrete-Time Case

The solution of the system is

x[k] = Φ[k, k0]x[k0] + Σ_{m=k0}^{k−1} Φ[k, m + 1]B[m]u[m]

y[k] = C[k]Φ[k, k0]x[k0] + C[k] Σ_{m=k0}^{k−1} Φ[k, m + 1]B[m]u[m] + D[k]u[k]



Discrete-Time Case

If the initial state x[k0] = 0, then

y[k] = C[k] Σ_{m=k0}^{k−1} Φ[k, m + 1]B[m]u[m] + D[k]u[k]
     = Σ_{m=k0}^{k} G[k, m]u[m]

where

G[k, m] = C[k]Φ[k, m + 1]B[m] + D[m]δ[k − m]

(with the first term present only for m < k) is the impulse response sequence: the output at time k excited by an impulse input applied at time m.



Equivalent Time-Varying Equations

Consider the n-dimensional linear time-varying state equation

ẋ(t) = A(t)x(t) + B(t)u(t)    (54)
y(t) = C(t)x(t) + D(t)u(t)    (55)

Let P(t) be an n × n matrix. It is assumed that P(t) is nonsingular and that both P(t) and Ṗ(t) are continuous for all t. Let x̄ = P(t)x; then

x̄˙(t) = d/dt [P(t)x(t)] = Ṗ(t)x(t) + P(t)ẋ(t)
      = Ṗ(t)x(t) + P(t)A(t)x(t) + P(t)B(t)u(t)
      = [Ṗ(t) + P(t)A(t)]x(t) + P(t)B(t)u(t)
      = [Ṗ(t) + P(t)A(t)]P^{-1}(t)x̄(t) + P(t)B(t)u(t)

y(t) = C(t)P^{-1}(t)x̄(t) + D(t)u(t)



Equivalent Time-Varying Equations

So,

x̄˙(t) = Ā(t)x̄(t) + B̄(t)u(t)    (56)
y(t) = C̄(t)x̄(t) + D̄(t)u(t)    (57)

where

Ā(t) = [Ṗ(t) + P(t)A(t)]P^{-1}(t)
B̄(t) = P(t)B(t)
C̄(t) = C(t)P^{-1}(t)
D̄(t) = D(t)



Equivalent Time-Varying Equations

Let X(t) be a fundamental matrix of (54). Then

X̄(t) := P(t)X(t)    (58)

is a fundamental matrix of (56). This can be verified from

d/dt [P(t)X(t)] = Ṗ(t)X(t) + P(t)Ẋ(t) = Ṗ(t)X(t) + P(t)A(t)X(t)
                = [Ṗ(t) + P(t)A(t)]X(t)
                = [Ṗ(t) + P(t)A(t)]P^{-1}(t)X̄(t) = Ā(t)X̄(t)

Theorem 6
Let A0 be an arbitrary constant matrix. Then there exists an equivalence transformation that transforms (54) into (56) with Ā(t) = A0.



Equivalent Time-Varying Equations

Proof: Let X(t) be a fundamental matrix of ẋ = A(t)x. Since X^{-1}(t)X(t) = I,

(d/dt X^{-1})X + X^{-1}Ẋ = 0  ⇒  d/dt X^{-1} = −X^{-1}ẊX^{-1} = −X^{-1}A

Let

P(t) := e^{A0 t} X^{-1}(t)

Then

Ā = [Ṗ + PA]P^{-1}
  = [A0 e^{A0 t}X^{-1} − e^{A0 t}X^{-1}A + e^{A0 t}X^{-1}A] X e^{−A0 t}
  = A0 e^{A0 t}X^{-1}X e^{−A0 t} = A0

where the last two terms in the bracket cancel. 


Equivalent Time-Varying Equations

If A0 = 0, then

Ā = 0,  B̄ = X^{-1}B,  C̄ = CX,  D̄ = D    (59)

The block diagrams of (54) with A(t) ≠ 0 and with A(t) = 0 are shown in Figure 3. The block diagram with A(t) = 0 has no feedback and is considerably simpler. Every time-varying state equation can be transformed into such a block diagram. However, in order to do so, its fundamental matrix must be known.



Equivalent Time-Varying Equations

Figure 3: Block diagrams with feedback and without feedback.



Equivalent Time-Varying Equations

The impulse response matrix of (56) is, using (58) and (59),

Ḡ(t, τ) = C̄(t)X̄(t)X̄^{-1}(τ)B̄(τ) + D̄(t)δ(t − τ)
        = C(t)P^{-1}(t)P(t)X(t)X^{-1}(τ)P^{-1}(τ)P(τ)B(τ) + D(t)δ(t − τ)
        = C(t)X(t)X^{-1}(τ)B(τ) + D(t)δ(t − τ) = G(t, τ)

• Thus the impulse response matrix is invariant under any equivalence transformation.
• The properties of the A-matrix, however, may not be preserved under equivalence transformations.
• In the time-invariant case, equivalence transformations preserve all properties of the original state equation. Thus the equivalence transformation in the time-invariant case is not a special case of the time-varying case.



Equivalent Time-Varying Equations

Definition 7
A matrix P(t) is called a Lyapunov transformation if P(t) is nonsingular, P(t) and Ṗ(t) are continuous, and P(t) and P^{-1}(t) are bounded for all t. Equations (54) and (56) are said to be Lyapunov equivalent if P(t) is a Lyapunov transformation.

• If P(t) = P is a constant matrix, then it is a Lyapunov transformation.


Thus the (algebraic) transformation in the time-invariant case is a special
case of the Lyapunov transformation.
• Not every time-varying state equation can be Lyapunov equivalent to a
state equation with a constant A-matrix. However, this is true if A(t) is
periodic.



Equivalent Time-Varying Equations

Theorem 8
Consider (54) with A(t) = A(t + T) for all t and some T > 0. Let X(t) be a fundamental matrix of ẋ = A(t)x, and let Ā be the constant matrix computed from e^{ĀT} = X^{-1}(0)X(T). Then (54) is Lyapunov equivalent to

x̄˙(t) = Āx̄(t) + P(t)B(t)u(t)    (60)
y(t) = C(t)P^{-1}(t)x̄(t) + D(t)u(t)    (61)

where P(t) = e^{Āt}X^{-1}(t).



Equivalent Time-Varying Equations

Proof: Since

Ẋ(t + T) = A(t + T)X(t + T) = A(t)X(t + T)

X(t + T) is also a fundamental matrix. Furthermore,

X(t + T) = X(t)X^{-1}(0)X(T) = X(t)e^{ĀT}

P(t) is periodic because

P(t + T) = e^{Ā(t+T)}X^{-1}(t + T) = e^{Āt}e^{ĀT}[e^{−ĀT}X^{-1}(t)] = e^{Āt}X^{-1}(t) = P(t)

The matrix P(t) satisfies all conditions of a Lyapunov transformation. The rest of the theorem follows directly from Theorem 6. 
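The key identity X(t + T) = X(t)X^{-1}(0)X(T) for periodic A(t) can be checked by brute-force integration. The particular A(t) below is an arbitrary period-1 example, and plain Euler stepping keeps the sketch dependency-free (tolerances are loose to absorb the integration error):

```python
import numpy as np

T = 1.0

def A(t):                                  # periodic: A(t + T) = A(t)
    w = 2.0 * np.pi * t / T
    return np.array([[0.0, 1.0],
                     [-1.0 + 0.3 * np.cos(w), -0.5 + 0.2 * np.sin(w)]])

def propagate(tb, n=20000):                # X(tb) with X(0) = I, Euler on Xdot = A(t) X
    X = np.eye(2)
    dt = tb / n
    t = 0.0
    for _ in range(n):
        X = X + dt * (A(t) @ X)
        t += dt
    return X

t1 = 0.37
XT = propagate(T)          # X(T)
Xt1 = propagate(t1)        # X(t1)
Xt1T = propagate(t1 + T)   # X(t1 + T), should equal X(t1) X(T) since X(0) = I
```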



Time-varying Realization
Every linear time-varying system can be described by the input-output
description Z t
y (t) = G (t, τ )u(τ )dτ
t0
and, if the system is lumped as well, by the state equation

x(t) = A(t)x(t) + B(t)u(t) (62)


y (t) = C (t)x(t) + D(t)u(t) (63)
If the state equation is available, the impulse response matrix can be
computed from
G (t, τ ) = C (t)X (t)X −1 (τ )B(τ ) + D(t)δ(t − τ ) for t ≥ τ (64)
where X (t) is a fundamental matrix of ẋ = A(t)x. The converse problem
is to find a state equation from a given impulse response matrix. An
impulse response matrix G (t, τ ) is said to be realizable if there exists
{A(t), B(t), C (t), D(t)} to meet (64).
Time-varying Realization

Theorem 9
A q × p impulse response matrix G(t, τ) is realizable if and only if G(t, τ) can be decomposed as

G(t, τ) = M(t)N(τ) + D(t)δ(t − τ)    (65)

for all t ≥ τ, where M, N, and D are, respectively, q × n, n × p, and q × p matrices for some integer n.



Time-varying Realization

Proof: If G(t, τ) is realizable, there exists a realization that meets (64). Identifying M(t) = C(t)X(t) and N(τ) = X^{-1}(τ)B(τ) establishes the necessity part of the theorem.
Conversely, if G(t, τ) can be decomposed as in (65), then the n-dimensional state equation

ẋ(t) = N(t)u(t)    (66)
y(t) = M(t)x(t) + D(t)u(t)    (67)

is a realization. Indeed, a fundamental matrix of ẋ = 0 · x is X(t) = I, so the impulse response matrix of (66)-(67) is

M(t) · I · I^{-1} · N(τ) + D(t)δ(t − τ) = M(t)N(τ) + D(t)δ(t − τ) = G(t, τ)




Example

Consider g(t) = te^{λt}, or

g(t, τ) = g(t − τ) = (t − τ)e^{λ(t−τ)}

Find its realization.



Example

Solution: It is straightforward to verify that

g(t − τ) = [e^{λt}, te^{λt}] [ −τe^{−λτ} ]
                             [ e^{−λτ}   ]

One time-varying realization is

ẋ(t) = [ 0  0 ] x(t) + [ −te^{−λt} ] u(t)
       [ 0  0 ]        [ e^{−λt}   ]

y(t) = [e^{λt}, te^{λt}]x(t)
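The factorization g(t − τ) = M(t)N(τ) used above is easy to verify numerically (λ and the sample points are arbitrary choices):

```python
import numpy as np

lam = -0.5

def M(t):
    return np.array([[np.exp(lam * t), t * np.exp(lam * t)]])

def N(tau):
    return np.array([[-tau * np.exp(-lam * tau)],
                     [np.exp(-lam * tau)]])

pts = [(2.0, 0.5), (3.0, 1.0), (1.0, 1.0)]   # (t, tau) pairs with t >= tau
g_fact = np.array([(M(t) @ N(tau))[0, 0] for t, tau in pts])
g_true = np.array([(t - tau) * np.exp(lam * (t - tau)) for t, tau in pts])
```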



Example

Another realization can be obtained as follows. The Laplace transform of g(t) is

ĝ(s) = 1/(s − λ)² = 1/(s² − 2λs + λ²)

Its controllable-canonical-form realization is

ẋ = [ 2λ  −λ² ] x(t) + [ 1 ] u(t)
    [ 1    0  ]        [ 0 ]

y(t) = [0, 1]x(t)

This LTI state equation is clearly more desirable because it is easier to implement.
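A check that this LTI realization indeed has transfer function 1/(s − λ)²; the output row [0, 1] reads off the 1/d(s) component of (sI − A)^{-1}b (λ and the test point are arbitrary):

```python
import numpy as np

lam = -0.8
A = np.array([[2 * lam, -lam**2], [1.0, 0.0]])
b = np.array([[1.0], [0.0]])
c = np.array([[0.0, 1.0]])

s = 1.3 + 0.4j
g = (c @ np.linalg.inv(s * np.eye(2) - A) @ b)[0, 0]
g_expected = 1.0 / (s - lam) ** 2
```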



Summary

For time-invariant systems:

1. Solutions of continuous-time LTI systems
2. Discretization: two methods
3. Solutions of discrete-time systems
4. Equivalent systems: same transfer matrix
5. Canonical forms: companion form, modal form
6. Realizations

For time-varying systems:

1. Solutions of LTV systems: fundamental matrix, state transition matrix
2. Solutions of discrete-time systems
3. Equivalent systems: algebraic transformation, Lyapunov transformation
4. Time-varying realizations

