
Topic 03: Response of linear systems

Advanced System Theory


Winter semester 2021/22

Dr.-Ing. Oliver Wallscheid


Automatic Control
Faculty of Electrical Engineering, Computer Science, and Mathematics



Table of contents

1 Linear homogeneous and nonhomogeneous differential equations

2 Response of linear, autonomous and time-invariant state-space systems

3 Response of linear state-space systems

4 Discrete-time state-space systems

Learning objectives
Given the state-space description of a dynamic system, its initial state, and the input sequence (if
applicable), we will learn different solution methods to obtain the system's state / output
response in continuous and discrete time.



Course outline

No. Topic
1 Introduction (admin, system classes, motivating example)
2 Linear algebra basics
3 Response of linear systems incl. discrete-time representations
4 Laplace and z-transforms (complex frequency domain)
5 Frequency response
6 Stability
7 Controllability and observability
8 State transformation and realizations
9 State feedback and state observers



Homogeneous and nonhomogeneous equations

Definition 3.1: Linear differential equations


We consider the system of n linear homogeneous ordinary differential equations (ODEs)

ẋ(t) = A(t)x(t) (3.1)

and the system of n linear nonhomogeneous ODEs

ẋ(t) = A(t)x(t) + r(t) (3.2)

where A(t) ∈ R^{n×n} is the dynamic matrix, r(t) ∈ R^n is the system input and x(t) ∈ R^n is
the system state.

▶ For specified initial conditions x(t0) = x0, these systems possess unique solutions for
  every (t0, x0).
▶ The solutions depend continuously on the initial conditions.
Solution space

Theorem 3.1: Solution space


The set of solutions of (3.1) forms an n-dimensional vector space.

Definition 3.2: Fundamental matrix


A set of n linearly independent solutions of (3.1), say (ψ1 , ..., ψn ), is called a fundamental
set of solutions. The n × n matrix Ψ = [ψ1 . . . ψn ] is called a fundamental matrix of (3.1).

▶ Ψ is a fundamental matrix of (3.1) if and only if it satisfies

    Ψ̇ = A(t)Ψ  and  det Ψ(t) ≠ 0 ∀t.   (3.3)

▶ Ψ′ is also a fundamental matrix of (3.1) if P is an n × n constant nonsingular matrix:

    Ψ′ = ΨP.   (3.4)
Solution space example
Consider the homogeneous ODE

ẋ1 = 5x1 − 2x2 ,


ẋ2 = 4x1 − x2 .

It can be verified that

    Ψ(t) = [e^{3t}  e^t; e^{3t}  2e^t]

is a fundamental matrix since it satisfies (3.3):

    Ψ̇ = [3e^{3t}  e^t; 3e^{3t}  2e^t] = [5  −2; 4  −1][e^{3t}  e^t; e^{3t}  2e^t] = AΨ,

    det(Ψ) = e^{3t} · 2e^t − e^t · e^{3t} = 2e^{4t} − e^{4t} = e^{4t} ≠ 0.
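A quick numerical spot check of (3.3) in Matlab / GNU Octave (the chosen time instant is an arbitrary assumption for illustration):

A = [5 -2; 4 -1];  t = 0.3;                          % system matrix of the example, arbitrary time instant
Psi     = [exp(3*t) exp(t); exp(3*t) 2*exp(t)];      % fundamental matrix candidate
Psi_dot = [3*exp(3*t) exp(t); 3*exp(3*t) 2*exp(t)];  % its analytic time derivative
norm(Psi_dot - A*Psi)                                % should vanish up to round-off
det(Psi)                                             % equals e^(4t) > 0, i.e., nonsingular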



State transition matrix (1)

Definition 3.3: State transition matrix


For any fundamental matrix Ψ(t) of (3.1), we define

Φ(t, t0) = Ψ(t)Ψ^{-1}(t0) ∀t, t0   (3.5)

as the state transition matrix.

▶ The state transition matrix is uniquely determined by A(t) and independent of the
  particular choice of Ψ(t).
▶ Proof: Applying (3.4) with an arbitrary, constant nonsingular matrix P it directly follows that

    Φ′(t, t0) = Ψ′(t)Ψ′^{-1}(t0) = Ψ(t)P P^{-1}Ψ^{-1}(t0) = Ψ(t)Ψ^{-1}(t0) = Φ(t, t0).



State transition matrix (2)
Further properties of the state transition matrix:
▶ Φ(t, t0) is the unique (nonsingular) solution of the matrix equation

    Φ̇(t, t0) = A(t)Φ(t, t0)  with  Φ(t0, t0) = I.   (3.6)

▶ For any t, τ, σ ∈ R we have Φ(t, τ) = Φ(t, σ)Φ(σ, τ).
▶ From (3.3) and (3.5) follows:

    det(Φ(t, t0)) = det(Ψ(t)Ψ^{-1}(t0)) = det(Ψ(t)) det(Ψ^{-1}(t0)) ≠ 0,   (3.7)
    (Φ(t, t0))^{-1} = (Ψ(t)Ψ^{-1}(t0))^{-1} = Ψ(t0)Ψ^{-1}(t) = Φ(t0, t).   (3.8)

▶ Closed-form expressions of Φ(t, t0) exist only for special classes of matrices A(t).
▶ The unique solution x(t, t0, x0) of (3.1) with x(t0, t0, x0) = x0 specified is given by

    x(t, t0, x0) = Φ(t, t0)x0 ∀t.   (3.9)


State transition matrix example
Recall the previous example:

    ẋ1 = 5x1 − 2x2,
    ẋ2 = 4x1 − x2,        Ψ(t) = [e^{3t}  e^t; e^{3t}  2e^t].

The state transition matrix is given by:

    Φ(t, t0) = Ψ(t)Ψ^{-1}(t0) = [e^{3t}  e^t; e^{3t}  2e^t][2e^{-3t0}  −e^{-3t0}; −e^{-t0}  e^{-t0}]
             = [2e^{3(t−t0)} − e^{(t−t0)}   −e^{3(t−t0)} + e^{(t−t0)};  2e^{3(t−t0)} − 2e^{(t−t0)}   −e^{3(t−t0)} + 2e^{(t−t0)}].

With the exemplary initial state x0 = [1  1]^T and t0 = 0 we receive the solution:

    x(t) = [x1(t); x2(t)] = Φ(t, t0)x0 = [2e^{3t} − e^t   −e^{3t} + e^t;  2e^{3t} − 2e^t   −e^{3t} + 2e^t][1; 1] = [e^{3t}; e^{3t}].



State transition matrix for time-variant systems (1)

The remaining question is: how do we find the state transition matrix?

1 For LTI systems we can utilize a multitude of solution approaches (next section).
2 For linear time-variant (LTV) systems there is no general, closed-form solution.
  However, we can try to find (or approximate) Φ(t, t0) also for the LTV case:
  ▶ We can superpose n solutions of ψ̇ = A(t)ψ for the initial conditions

       ψ1(t0) = [1  0  ···  0]^T,  ψ2(t0) = [0  1  ···  0]^T,  ···,  ψn(t0) = [0  0  ···  1]^T,   (3.10)

    obtaining the fundamental matrix Ψ = [ψ1 ··· ψn]. By utilizing (3.5) we can then find
    Φ(t, t0) = Ψ(t)Ψ^{-1}(t0) for any t, t0 ∈ J.
State transition matrix for time-variant systems (2)

Theorem 3.2: Peano-Baker series


Let A(t) be the system matrix of an LTV ODE problem. Then, the state transition matrix can
be computed by the following Peano-Baker series:

    Φ(t, τ) = I + ∫_τ^t A(σ1) dσ1 + ∫_τ^t A(σ1) ∫_τ^{σ1} A(σ2) dσ2 dσ1
              + ∫_τ^t A(σ1) ∫_τ^{σ1} A(σ2) ∫_τ^{σ2} A(σ3) dσ3 dσ2 dσ1 + ...   (3.11)

▶ By direct insertion it becomes clear that the above series satisfies (3.6).
▶ A closed-form solution of (3.11) is only available for some special system classes.

The limited availability of closed-form LTV solutions motivates approximating Φ(t, t0) by piecewise
numerical integration (i.e., simulation methods like Runge-Kutta), as sketched below.
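The following Matlab / GNU Octave sketch illustrates this numerical route for a hypothetical LTV system matrix A_fun (the matrix and time span are assumptions for illustration); it integrates the matrix ODE (3.6) with ode45:

A_fun = @(t) [0 1; -2-sin(t) -1];     % exemplary time-varying system matrix (assumption)
n = 2; t0 = 0; t1 = 2;
odefun = @(t, phi) reshape(A_fun(t) * reshape(phi, n, n), [], 1);  % vectorized matrix ODE (3.6)
[~, phi] = ode45(odefun, [t0 t1], reshape(eye(n), [], 1));         % start from Phi(t0, t0) = I
Phi_t1_t0 = reshape(phi(end, :), n, n)                             % approximation of Phi(t1, t0)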
Nonhomogeneous equation
For specified initial conditions x0, the solution to the nonhomogeneous system (3.2) is

    x(t, t0, x0) = Φ(t, t0)x0 + ∫_{t0}^t Φ(t, τ)r(τ) dτ,   (3.12)

where the first term is the homogeneous solution xh and the integral term is the particular solution xp.

▶ The homogeneous solution xh is the solution to (3.1) with specified initial conditions.
  For nonzero initial conditions but zero forcing term r(t) = 0, the solution of (3.2)
  reduces to x = xh (zero-input response).
▶ The particular solution xp is due to the forcing term r(t). For zero initial conditions
  x0 = 0, the solution of (3.2) reduces to x = xp (zero-state response).
▶ For given initial conditions x0 and a given forcing term, the behavior of (3.2) is summarized
  by x and is known for all t. This is why we call x the state of (3.2) at time t.
▶ A formal proof that (3.12) solves the nonhomogeneous ODE will be given later.
Table of contents

1 Linear homogeneous and nonhomogeneous differential equations

2 Response of linear, autonomous and time-invariant state-space systems

3 Response of linear state-space systems

4 Discrete-time state-space systems



State transition matrix for autonomous LTI systems

Theorem 3.3: State transition matrix for autonomous LTI systems


For time-invariant A, a fundamental matrix of (3.1) is the matrix exponential

    Ψ(t) = exp(At) = Σ_{k=0}^∞ (t^k / k!) A^k   (3.13)

and the state transition matrix is

    Φ(t, t0) = exp[A(t − t0)].   (3.14)

Proof that the above proposal is a fundamental matrix (i.e., it solves the autonomous LTI ODE):
▶ d/dt exp(At) = Σ_{k=1}^∞ (k t^{k−1} / k!) A^k = A Σ_{k=0}^∞ (t^k / k!) A^k = A exp(At)  (i.e., it satisfies Ψ̇ = AΨ).
▶ det(exp(At)|_{t=0}) = det(I) = 1 ≠ 0.
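As a quick plausibility check (not part of the original slides), the built-in matrix exponential can be compared against a truncated version of the series (3.13); matrix and time value are arbitrary assumptions:

A = [2 1; 1 2];  t = 0.7;            % exemplary LTI system matrix and time instant
Phi_series = zeros(size(A));
for k = 0:20                         % truncate the series (3.13) after 21 terms
    Phi_series = Phi_series + (t^k / factorial(k)) * A^k;
end
norm(Phi_series - expm(A*t))         % should be close to machine precision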
Determining exp(At): similarity transform method

We first make the following two observations:

1 If A′ = P^{-1}AP (i.e., A′ and A are similar), then exp(A′t) = P^{-1} exp(At)P.
2 If A is diagonalizable, i.e., there exists an invertible matrix P such that
  P^{-1}AP = Λ = diag(λ1, ..., λn), then

    exp(At) = P exp(Λt)P^{-1} = P diag(e^{λ1 t}, ..., e^{λn t})P^{-1}.   (3.15)

▶ Recall: if A has n linearly independent eigenvectors, it is diagonalizable.
▶ In this case, the eigenvectors ui of A form the basis P = [u1 . . . un].
▶ To calculate exp(At), it is therefore convenient to find a coordinate system in which the
  computation of the matrix exponential is particularly simple.
▶ Keep in mind that not every matrix is diagonalizable: matrices with repeated eigenvalues
  may not be diagonalizable. Real symmetric matrices, however, are always diagonalizable.
  A corresponding numerical sketch is given below.
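A minimal numerical sketch of (3.15), assuming a diagonalizable example matrix:

A = [2 1; 1 2];  t = 1.0;                 % exemplary (symmetric, hence diagonalizable) matrix
[P, Lambda] = eig(A);                     % eigenvectors in P, eigenvalues on diag(Lambda)
Phi = P * diag(exp(diag(Lambda)*t)) / P;  % exp(At) via (3.15); .../P solves against P instead of forming inv(P)
norm(Phi - expm(A*t))                     % compare against the built-in matrix exponential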



Examples (1)
Consider the following autonomous LTI state-space system

    ẋ(t) = [2  1; 1  2] x(t).

The above system matrix has the eigenvalues λ1,2 = {1, 3} as well as the corresponding
eigenvectors u1 = [1  −1]^T and u2 = [1  1]^T. Using (3.14) and (3.15) we can find the
transition matrix

    Φ(t, t0) = [1  1; −1  1][e^{t−t0}  0; 0  e^{3(t−t0)}] (1/2)[1  −1; 1  1]
             = (1/2)[e^{t−t0} + e^{3(t−t0)}   −e^{t−t0} + e^{3(t−t0)};  −e^{t−t0} + e^{3(t−t0)}   e^{t−t0} + e^{3(t−t0)}].

Assuming x0(t0 = 0) = [1  1]^T, we obtain the ODE solution

    x(t) = Φ(t, t0 = 0)x0 = [e^{3t}; e^{3t}].



Examples (2)
Let's investigate a new system denoted by

    ẋ(t) = [1  −1/2; 2  −1] x(t).

Here, the system matrix is nilpotent (A^k = 0 for all k ≥ 2), which can be directly verified by

    [1  −1/2; 2  −1]^2 = [0  0; 0  0].

Thanks to this property, calculating the transition matrix using (3.14) is particularly simple since
we only need to evaluate the exponential series up to a certain order:

    Φ(t, t0) = exp[A(t − t0)] = I + (t − t0)A = [1 + (t − t0)   −1/2 (t − t0);  2(t − t0)   1 − (t − t0)].

Likewise, we may approximate (3.14) for non-nilpotent matrices by truncating the matrix exponential
series at a certain order, provided the neglected higher-order terms are sufficiently small
(see the sketch below).
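A minimal sketch of this shortcut, using the nilpotent example matrix from above (the time values are arbitrary assumptions):

A = [1 -1/2; 2 -1];                 % nilpotent: A^2 = 0
t = 3; t0 = 1;
Phi = eye(2) + (t - t0)*A           % exact here, since all higher series terms vanish
norm(Phi - expm(A*(t - t0)))        % agrees with the built-in matrix exponential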
Modes
Let’s assume that A is orthogonally diagonalizable (e.g., if it is real symmetric). We have
already seen that, since A = P ΛP −1 ,
exp(At) = P exp(Λt)P −1 .
This can also be written as a sum of n modes
n
X
exp(At) = pi p̃i eλi t (3.16)
i=1

where pi is the i-th column of P (the i-th eigenvector) and p̃i is the i-th row of P −1 .
I Each term in the sum (3.16) is called a mode of the system ẋ = Ax.
I In general, the solution x(t) contains contributions of all modes.
I However, if we choose the initial value x(0) = αpi , then the corresponding mode eλi t is
the only mode in the solution
x(t) = exp(At)x(0) = αpi eλi t . (3.17)
This follows from p̃i pi = 1 and p̃j pi = 0 for i 6= j.
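The mode decomposition (3.16) can be checked numerically, e.g., with the symmetric example matrix used earlier (an illustrative assumption):

A = [2 1; 1 2];  t = 0.5;
[P, Lambda] = eig(A);  Pinv = inv(P);  lam = diag(Lambda);
Phi = zeros(size(A));
for i = 1:length(lam)                          % sum of modes according to (3.16)
    Phi = Phi + P(:, i) * Pinv(i, :) * exp(lam(i)*t);
end
norm(Phi - expm(A*t))                          % compare against expm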
Example
The eigenvalues of

    A = [−1  1; 0  1]

are λ1,2 = {−1, 1}. Hence, the matrix is diagonalizable using its eigenvectors

    P = [u1  u2] = [p1  p2] = [1  1; 0  2],    P^{-1} = [p̃1; p̃2] = [1  −1/2; 0  1/2].

Applying (3.16), the transition matrix is

    Φ(t, t0 = 0) = e^{At} = p1 p̃1 e^{λ1 t} + p2 p̃2 e^{λ2 t} = [1  −1/2; 0  0] e^{−t} + [0  1/2; 0  1] e^{t}.

If the initial value is of the form x0 = [α  0]^T, we receive the system response

    x(t) = [α  0]^T e^{−t},

i.e., only the mode corresponding to λ1 is present while the mode of λ2 is suppressed.
Determining exp(At): similarity transform method for Jordan matrices (1)
If A is non-diagonalizable, we can still perform a similarity transform into the Jordan
canonical form

    J = P^{-1}AP = diag(J0, J1, . . . , Js)  (block diagonal)   (3.18)

where J0 is a diagonal matrix with elements λ1, . . . , λk and each Ji, i ≥ 1, is a matrix of the form

    Ji = [λ_{k+i}  1  0  ···  0;
          0  λ_{k+i}  1  ···  0;
          ⋮   ⋮   ⋱   ⋱   ⋮;
          0  0  ···  λ_{k+i}  1;
          0  0  0  ···  λ_{k+i}].   (3.19)



Determining exp(At): similarity transform method for Jordan matrices (2)
Since J0 is a diagonal matrix, e^{tJ0} = diag(e^{λ1 t}, . . . , e^{λk t}) follows. The other Jordan blocks can
be rewritten as

    Ji = λ_{k+i} I + Ni,   with   Ni = [0  1  ···  0;  ⋮  ⋱  ⋱  ⋮;  ⋮   ⋱  1;  0  ···  ···  0]   (3.20)

with Ni being an ni × ni nilpotent matrix satisfying Ni^k = 0 for all k ≥ ni. Hence, the
series defining e^{Ni t} terminates, resulting in

    e^{tJi} = e^{λ_{k+i} t} e^{Ni t} = e^{λ_{k+i} t} [1  t  ···  t^{ni−1}/(ni − 1)!;
                                                      0  1  ···  t^{ni−2}/(ni − 2)!;
                                                      ⋮  ⋮  ⋱  ⋮;
                                                      0  0  ···  1].   (3.21)
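A small sketch (the block parameters are assumptions for illustration) that builds e^{tJi} for a single Jordan block via the terminating nilpotent series (3.21) and compares it with the built-in matrix exponential:

lambda = -2; ni = 3; t = 0.4;                 % exemplary Jordan block parameters
Ji = lambda*eye(ni) + diag(ones(ni-1, 1), 1); % Ji = lambda*I + Ni (ones on the superdiagonal)
E  = zeros(ni);
for k = 0:ni-1                                % the nilpotent series terminates after ni terms
    E = E + (t^k / factorial(k)) * diag(ones(ni-k, 1), k);
end
expJi = exp(lambda*t) * E;                    % e^{t*Ji} according to (3.21)
norm(expJi - expm(Ji*t))                      % check against expm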



Determining exp(At): Cayley-Hamilton theorem method (1)
From Theo. 2.3 we know that

    p(A) = α0 I + α1 A + α2 A^2 + · · · + αn A^n = 0

and, therefore, one can rewrite any matrix polynomial as

    f(A) = α0 I + α1 A + · · · + α_{n−1} A^{n−1}.

If f(A) is an infinite but convergent power series (e.g., the matrix exponential) we have

    f(A) = Σ_{k=0}^∞ a_k A^k = exp(At) = Σ_{k=0}^∞ (t^k / k!) A^k = α0(t)I + α1(t)A + · · · + α_{n−1}(t)A^{n−1}.   (3.22)

Multiplying this identity from the right with any (non-zero) eigenvector ui of A and using
A^k ui = λi^k ui, we obtain

    f(λi) = e^{λi t} = Σ_{k=0}^{n−1} αk(t) λi^k.   (3.23)


Determining exp(At): Cayley-Hamilton theorem method (2)
Repeating (3.23) for all n eigenvalues results in a linear equation system

    [e^{λ1 t}; e^{λ2 t}; ⋮; e^{λn t}] = [1  λ1  ···  λ1^{n−1};  1  λ2  ···  λ2^{n−1};  ⋮;  1  λn  ···  λn^{n−1}] [α0(t); α1(t); ⋮; α_{n−1}(t)],   (3.24)

where the coefficient matrix is the Vandermonde matrix V.
▶ The Vandermonde matrix V is invertible if and only if all eigenvalues λi are distinct.
▶ If that applies, we can solve (3.24) for the unknown coefficients αi(t) by matrix inversion:

    [α0(t); α1(t); ⋮; α_{n−1}(t)] = [1  λ1  ···  λ1^{n−1};  1  λ2  ···  λ2^{n−1};  ⋮;  1  λn  ···  λn^{n−1}]^{-1} [e^{λ1 t}; e^{λ2 t}; ⋮; e^{λn t}].   (3.25)
Determining exp(At): Cayley-Hamilton theorem method (3)
Then we can insert the known coefficients αi (t) from (3.25) into (3.22) and obtain
    exp(At) = α0(t)I + α1(t)A + · · · + α_{n−1}(t)A^{n−1}.   (3.26)
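A minimal numerical sketch of the procedure (3.24)-(3.26), assuming distinct eigenvalues and an arbitrary example matrix:

A = [2 1; 1 2];  t = 0.5;  n = size(A, 1);
lam = eig(A);                         % eigenvalues (assumed distinct here)
V   = lam .^ (0:n-1);                 % Vandermonde matrix, row i: [1 lam_i ... lam_i^(n-1)]
alpha = V \ exp(lam*t);               % solve (3.24) for the coefficients alpha_k(t)
Phi = zeros(n);
for k = 0:n-1                         % assemble exp(At) according to (3.26)
    Phi = Phi + alpha(k+1) * A^k;
end
norm(Phi - expm(A*t))                 % check against expm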
Example: The system

    ẋ(t) = Ax(t),   A = [0  1; −1  0]

has a characteristic equation of the form λ^2 + 1 = 0, leading to the complex eigenvalue pair
λ1,2 = ±j. Applying (3.23) delivers the equation system

    e^{jt} = cos(t) + j sin(t) = α0 + α1 j,
    e^{−jt} = cos(t) − j sin(t) = α0 − α1 j,

revealing α0(t) = cos(t) and α1(t) = sin(t). The transition matrix can be found by (3.26):

    Φ(t, t0 = 0) = e^{At} = cos(t)I + sin(t)A = [cos(t)  sin(t); −sin(t)  cos(t)].
Determining exp(At): Cayley-Hamilton theorem method (4)
▶ Problem: if any eigenvalue λi has an algebraic multiplicity µi ≥ 2 (i.e., non-distinct
  eigenvalues), the Vandermonde matrix V becomes singular and we cannot use (3.25).
▶ Solution: produce µi − 1 additional independent equations by taking derivatives of
  f(λi) = e^{λi t} w.r.t. λi:

    f(λi) = e^{λi t} = α0 + α1 λi + · · · + α_{n−1} λi^{n−1},
    df(λi)/dλi = de^{λi t}/dλi = t e^{λi t} = α1 + 2α2 λi + · · · + (n − 1)α_{n−1} λi^{n−2},
    ⋮
    d^{µi−1}f(λi)/dλi^{µi−1} = d^{µi−1}e^{λi t}/dλi^{µi−1} = t^{µi−1} e^{λi t}
                             = (µi − 1)! α_{µi−1} + · · · + (n − 1)(n − 2) · · · (n − µi + 1) α_{n−1} λi^{n−µi}.   (3.27)

▶ The produced equations replace the rows in (3.24) associated with the repeated eigenvalue.
Determining exp(At): Cayley-Hamilton theorem method (5)
Example (with a repeated eigenvalue): Consider the system

    ẋ(t) = Ax(t),   A = [0  0  −2;  0  1  0;  1  0  3]

with the eigenvalues λ1 = 2 and λ2,3 = 1. Our general ansatz is then

    e^{λi t} = α0(t) + α1(t)λi + α2(t)λi^2.

Inserting the two distinct eigenvalue values we obtain two out of three necessary equations:

    e^t = α0(t) + α1(t) + α2(t),
    e^{2t} = α0(t) + 2α1(t) + 4α2(t).

For the repeated eigenvalue λ2,3 = 1 we differentiate our ansatz equation w.r.t. λi, receiving

    t e^{λi t} = α1(t) + 2α2(t)λi.
Determining exp(At): Cayley-Hamilton theorem method (6)
Inserting λ3 = 1 we get our third independent equation

    t e^t = α1(t) + 2α2(t).

The overall equation system yields

    [e^t; t e^t; e^{2t}] = [1  1  1;  0  1  2;  1  2  4] [α0(t); α1(t); α2(t)].

By matrix inversion we receive

    α0(t) = −2t e^t + e^{2t},   α1(t) = (2 + 3t)e^t − 2e^{2t},   α2(t) = −(t + 1)e^t + e^{2t}.

The resulting transition matrix is

    Φ(t, t0 = 0) = α0(t)I + α1(t)A + α2(t)A^2 = [2e^t − e^{2t}   0   2e^t − 2e^{2t};  0   e^t   0;  e^{2t} − e^t   0   2e^{2t} − e^t].



Table of contents

1 Linear homogeneous and nonhomogeneous differential equations

2 Response of linear, autonomous and time-invariant state-space systems

3 Response of linear state-space systems

4 Discrete-time state-space systems



Response of linear continuous-time systems (1)
We have claimed in (3.12) that the solution of ẋ(t) = Ax(t) + r(t) is

    x(t, t0, x0) = xh(t) + xp(t) = Φ(t, t0)x0 + ∫_{t0}^t Φ(t, τ)r(τ) dτ.

The proof for the homogeneous solution (r(t) = 0) with xh(t) = Φ(t, t0)x0 = exp[A(t − t0)]x0
is straightforward:

    d/dt xh(t) = d/dt exp(At)x0 = Σ_{k=1}^∞ (k t^{k−1}/k!) A^k x0 = A Σ_{k=0}^∞ (t^k/k!) A^k x0 = A exp(At)x0 = Axh(t).

For the particular solution we make use of the variation-of-parameters approach with the ansatz

    xp(t) = exp(At)p(t).   (3.28)

Here, p(t) is an unknown function which needs to be determined next.


Response of linear continuous-time systems (2)
Inserting (3.28) in ẋ(t) = Ax(t) + r(t) yields

    exp(At)ṗ(t) + A exp(At)p(t) = A exp(At)p(t) + r(t)   (3.29)

and, therefore,

    ṗ(t) = exp(−At)r(t)   ⇔   p(t) = ∫_{t0}^t exp(−Aτ)r(τ) dτ + p(t0)   (3.30)

with (exp(At))^{-1} = exp(−At). Moreover, we set p(t0) = 0 since the initial condition is
already covered by xh(t). Hence, the particular solution is

    xp(t) = exp(At)p(t) = ∫_{t0}^t exp(A(t − τ))r(τ) dτ = ∫_{t0}^t Φ(t, τ)r(τ) dτ.   (3.31)

Hence, we have shown that x(t) = xh(t) + xp(t) solves ẋ(t) = Ax(t) + r(t), which can also be checked numerically (see below).
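A rough numerical cross-check of (3.12) in Matlab syntax (system, input and time span are arbitrary assumptions); the analytical solution is compared against a generic ODE solver:

A = [0 1; -2 -3];  x0 = [1; -1];  t0 = 0;  t1 = 2;     % exemplary stable LTI system
r = @(t) [0; sin(t)];                                  % exemplary forcing term r(t)
xh = expm(A*(t1 - t0)) * x0;                           % homogeneous part of (3.12)
xp = integral(@(tau) expm(A*(t1 - tau)) * r(tau), t0, t1, 'ArrayValued', true);  % particular part
[~, xsim] = ode45(@(t, x) A*x + r(t), [t0 t1], x0);    % reference: direct numerical integration
norm((xh + xp) - xsim(end, :).')                       % should be small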
Response of linear continuous-time systems (3)

To solve the state equation ẋ(t) = Ax(t) + Bu(t), we note that r(t) = Bu(t), thus

    x(t, t0, x0) = Φ(t, t0)x0 + ∫_{t0}^t Φ(t, τ)Bu(τ) dτ.   (3.32)

Here, the input matrix B represents the actuator structure, i.e., there might be certain
limitations on how u(t) can act upon the system states. To solve the output equation
y(t) = Cx(t) + Du(t), we substitute the solution for x and obtain:

Total response

    y(t) = CΦ(t, t0)x0 + C ∫_{t0}^t Φ(t, τ)Bu(τ) dτ + Du(t),   (3.33)

where the first term is the zero-input (natural) response and the remaining terms form the
zero-state (forced) response. A simulation sketch is given below.
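With the Control System Toolbox (or the GNU Octave control package), the total response (3.33) can be simulated directly; the system below is an arbitrary illustrative choice:

A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;   % exemplary system (assumption)
sys = ss(A, B, C, D);                  % continuous-time state-space model
t = 0:0.01:10;
u = ones(size(t));                     % unit step input
x0 = [1; -1];                          % non-zero initial state
y = lsim(sys, u, t, x0);               % total response = natural + forced response
plot(t, y);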



Standard input test signals
In order to evaluate the system behavior in the time domain it is common to apply certain
standard input test signals:

Fig. 3.1: Exemplary standard input signals

▶ Unit step: σ(t) = 1 for t ≥ 0, σ(t) = 0 otherwise,
▶ Rectangular function: Π_ε(t) = 1/ε for 0 ≤ t ≤ ε, Π_ε(t) = 0 otherwise,
▶ (Dirac) impulse: δ(t) = lim_{ε→0} Π_ε(t).
Impulse response
Now consider only the zero-state response:

    y(t) = C ∫_{t0}^t Φ(t, τ)Bu(τ) dτ + Du(t)
         = ∫_{t0}^t [CΦ(t, τ)Bu(τ) + Du(τ)δ(t − τ)] dτ   (3.34)
         = ∫_{t0}^t [CΦ(t, τ)B + Dδ(t − τ)] u(τ) dτ = ∫_{t0}^t H(t, τ)u(τ) dτ.

Definition 3.4: Impulse response matrix

For a linear system the impulse response matrix is

    H(t, τ) = CΦ(t, τ)B + Dδ(t − τ)  for t ≥ τ,   H(t, τ) = 0  for t < τ.   (3.35)



Impulse response for LTI systems
For LTI systems the total response is

    y(t) = CΦ(t, t0)x0 + ∫_{t0}^t CΦ(t, τ)Bu(τ) dτ + Du(t)
         = Ce^{A(t−t0)}x0 + ∫_{t0}^t [Ce^{A(t−τ)}B + Dδ(t − τ)]u(τ) dτ.   (3.36)

The impulse response matrix

    H(t, τ) = Ce^{A(t−τ)}B + Dδ(t − τ)  for t ≥ τ,   H(t, τ) = 0  for t < τ   (3.37)

only depends on the time difference t − τ. Thus, we may simply write

    y(t) = Ce^{A(t−t0)}x0 + H(t) ∗ u(t),   with   H(t) = Ce^{At}B + Dδ(t)  for t ≥ 0,   H(t) = 0  for t < 0.   (3.38)
Example (1)
Consider the LTI system defined by

    A = [0  1; 0  0],   B = [0; 1],   C = [0  1],   D = 0   with x0(t0 = 0) = [1  −1]^T.

The system matrix is nilpotent and the transition matrix is therefore

    Φ(t) = exp(At) = I + tA = [1  t; 0  1].

According to (3.36) the natural response is

    yh(t) = CΦ(t, t0)x0 = CΦ(t)x0 = [0  1][1  t; 0  1][1; −1] = [0  1][1 − t; −1] = −1.

For a given step input u(t) = σ(t) the forced response is

    yp(t) = C ∫_{t0}^t Φ(t, τ)Bu(τ) dτ + Du(t) = C ∫_0^t Φ(t, τ)Bσ(τ) dτ = [0  1][t^2/2; t] = t.
Example (2)
The total response results in

    y(t) = yh(t) + yp(t) = t − 1.

For the given system the impulse response matrix is

    H(t) = CΦ(t)B + Dδ(t) = CΦ(t)B = [0  1][1  t; 0  1][0; 1] = 1.

Interpretation of H(t)
The elements hij(t) of H(t) can be considered as a dynamic input-to-output sensitivity:
how dependent output i is on what input j was t seconds ago. In the above example, this
sensitivity is constant over time (special case).
Table of contents

1 Linear homogeneous and nonhomogeneous differential equations

2 Response of linear, autonomous and time-invariant state-space systems

3 Response of linear state-space systems

4 Discrete-time state-space systems


System response in discrete time
Discretization of continuous-time state-space models



Motivation for discrete-time system theory

Fig. 3.2: A continuous-time system within a discrete-time control framework (discrete-time control algorithm acting on a continuous-time plant)



Discrete-time systems
Assuming that ideal zero-order-hold digital-to-analog (D/A) and analog-to-digital (A/D)
conversion

    u(t) = u[k],  tk ≤ t < t_{k+1},  k ∈ {Z | k ≥ k0},
    y[k] = y(tk),  k ∈ {Z | k ≥ k0},   (3.39)

is available, a given (linear) system can be represented in discrete time by the difference
equations

    x[k + 1] = A[k]x[k] + B[k]u[k],
    y[k] = C[k]x[k] + D[k]u[k].   (3.40)

Note: The discrete-time system matrices {A[k], B[k]} are not just their continuous-time
counterparts with a different counting variable inserted. The transformation between the time
domains is explained on the upcoming slides.
Solution of the homogeneous difference equation

Assume that we have given a discrete-time state-space description

    x[k + 1] = A[k]x[k] + B[k]u[k],
    y[k] = C[k]x[k] + D[k]u[k].   (3.41)

The linear homogeneous system of difference equations with initial conditions

    x[k + 1] = A[k]x[k],   x0 = x[k0]   (3.42)

has the straightforward solution

    x[k] = A[k − 1]A[k − 2] · · · A[k0]x0.   (3.43)



State transition matrix
Thus, we can write

    x[k] = Φ[k, k0]x0,   k > k0   (3.44)

with the discrete-time state transition matrix

    Φ[k, k0] = Π_{j=k0}^{k−1} A[j].   (3.45)

Theorem 3.4: Time-invariant discrete-time transition matrix

For time-invariant systems

    x[k + 1] = Ax[k]   (3.46)

the state transition matrix is

    Φ[k, k0] = A^{k−k0}.   (3.47)



Example
Let

    A = [−1  2; 0  1]

be the discrete-time LTI system matrix of an autonomous system. In this case the state
transition matrix is given as

    Φ[k, k0] = [(−1)^{k−k0}   1 − (−1)^{k−k0};  0   1],

i.e., Φ[k, k0] = A if (k − k0) is odd and Φ[k, k0] = I if (k − k0) is even.
Consider the initial state x[k0 = 0] = [2  1]^T; then the system response is

    x[k] = [0  1]^T for k = 1, 3, 5, . . . ,    x[k] = [2  1]^T for k = 2, 4, 6, . . .



Solution of the non-homogeneous difference equation
For simplicity, we will only consider time-invariant discrete-time systems with k0 = 0 for the
remainder of this section. The solution to the nonhomogeneous difference equation

    x[k + 1] = Ax[k] + Bu[k],   x[0] = x0   (3.48)

can be obtained recursively as

    x[1] = Ax0 + Bu[0],
    x[2] = Ax[1] + Bu[1] = A^2 x0 + ABu[0] + Bu[1],
    ⋮
    x[k] = A^k x0 + A^{k−1}Bu[0] + · · · + ABu[k − 2] + Bu[k − 1]
         = A^k x[0] + Σ_{j=0}^{k−1} A^{(k−1)−j}Bu[j],   k > 0,   (3.49)

where the first term is the homogeneous solution and the sum is the particular solution.
A simulation sketch is given below.
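A minimal simulation sketch of (3.48)/(3.49) with arbitrarily chosen matrices (assumptions for illustration):

A = [0 1; 0 -1]; B = [0; 1]; x0 = [1; -1];   % exemplary discrete-time system
N = 10; u = ones(1, N);                      % unit step input sequence
x = zeros(2, N+1); x(:, 1) = x0;
for k = 1:N                                  % recursion x[k+1] = A x[k] + B u[k]
    x(:, k+1) = A * x(:, k) + B * u(k);
end
xN = A^N * x0;                               % cross-check: closed-form solution (3.49) at k = N
for j = 0:N-1
    xN = xN + A^(N-1-j) * B * u(j+1);
end
norm(x(:, end) - xN)                         % should be (numerically) zero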
Response of LTI discrete-time systems
Plugging this solution into the output equation

    y[k] = Cx[k] + Du[k],   (3.50)

we obtain the total system response

    y[k] = CA^k x[0] + Σ_{j=0}^{k−1} CA^{(k−1)−j}Bu[j] + Du[k]
         = CA^k x[0] + H[k] ∗ u[k],   k > 0,   (3.51)

where the first term is the zero-input response, the remaining terms form the zero-state response,
and H[k] is the impulse response matrix (in discrete time):

    H[k] = CA^{k−1}B  for k > 0,   H[k] = D  for k = 0,   H[k] = 0  for k < 0.   (3.52)



Example
Consider the discrete-time LTI system defined by

    A = [0  1; 0  −1],   B = [0; 1],   C = [1  0],   D = 0.

First, we can observe that Φ[k] = A^k follows an alternating pattern

    Φ[k] = A^k = [0   (−1)^{k−1};  0   (−1)^k],   k ≥ 1,   or   Φ[k] = [δ[k]   (−1)^{k−1}σ[k − 1];  0   (−1)^k σ[k]],   k ≥ 0,

with σ[k] being the discrete-time unit step. With a given initial state x[0] we can calculate the
natural response. According to (3.52) the impulse response matrix is

    H[k] = 0 for k ≤ 0,
    H[k] = [1  0][δ[k − 1]   (−1)^{k−2}σ[k − 2];  0   (−1)^{k−1}σ[k − 1]][0; 1] = (−1)^{k−2}σ[k − 2]  for k ≥ 1.



Obtaining an LTI discrete-time state-space representation (1)
When comparing the discrete-time state-space notation

    x[k + 1] = Ax[k] + Bu[k],
    y[k] = Cx[k] + Du[k],   (3.53)

to its continuous-time counterpart

    ẋ(t) = Ax(t) + Bu(t),
    y(t) = Cx(t) + Du(t),   (3.54)

of a given system, it is obvious that

1 C and D are independent of the time setup and are identical in (3.53) and (3.54),
2 A and B depend on the framework, i.e., they differ from each other in (3.53) and (3.54).

The notation Ad, Bd and Φd is temporarily used to highlight the discrete-time framework.


Obtaining an LTI discrete-time state-space representation (2)

A typical modeling problem (e.g., in discrete-time control design)
How to obtain Ad and Bd when we only have a continuous-time model (3.54)?

We can just compare the system responses in both time domains: for given time points
tk = Ts k (Ts: sampling interval) the state transition matrices in continuous and discrete
time must be equal for an exact system representation:

    Φ(tk) = e^{A tk} = e^{A Ts k} = Ad^k = Φd[k].   (3.55)

Hence, the discrete-time system matrix is

    Ad = e^{A Ts}.   (3.56)



Obtaining an LTI discrete-time state-space representation (3)
Likewise, the forced response terms in continuous time (3.32) and discrete time (3.49) must be
equal:

    ∫_0^{tk} e^{A(tk − τ)}Bu(τ) dτ = Σ_{j=0}^{k−1} Ad^{(k−1)−j} Bd u[j].   (3.57)

Assuming ideal zero-order hold, i.e., u(t) is constant between two sampling steps {k, k + 1}, we
can simplify this to

    ∫_0^{tk = Ts k} e^{A(tk − τ)}B dτ = Σ_{j=0}^{k−1} Ad^{(k−1)−j} Bd.   (3.58)

The above equation must hold true for any k; hence, we can set k = 1 and find an exact
mapping for the discrete-time input matrix:

    ∫_0^{Ts} e^{Aτ}B dτ = (∫_0^{Ts} Φ(τ) dτ) B = I Bd = Bd.   (3.59)

A numerical sketch is given below.
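A sketch of the exact zero-order-hold discretization (3.56) and (3.59), here applied to the continuous-time example system used later in this section; the commented cross-check assumes the Control System Toolbox (or the Octave control package):

A = [0 -1; 1/2 -1];  B = [0; 1];  Ts = 0.5;
Ad = expm(A*Ts);                                                    % system matrix, see (3.56)
Bd = integral(@(tau) expm(A*tau), 0, Ts, 'ArrayValued', true) * B;  % input matrix, see (3.59)
% [Ad_ref, Bd_ref] = ssdata(c2d(ss(A, B, eye(2), 0), Ts));          % optional cross-check via c2d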



Obtaining an LTI discrete-time state-space representation (4)
Note: if A is nonsingular,

    ∫_0^{Ts} e^{Aτ} dτ = A^{-1}(e^{A Ts} − I)   (3.60)

holds true. In this case we have

    Bd = A^{-1}(Ad − I)B.   (3.61)

Euler forward approximation
Only for special cases is calculating Ad = e^{A Ts} and Bd = ∫_0^{Ts} e^{Aτ}B dτ straightforward (e.g.,
if A is nilpotent). To reduce the computational burden, approximating the discrete-time system
and input matrices by a first-order Taylor approach might be sufficient in some cases:

    Ad ≈ I + Ts A,   Bd ≈ Ts B.   (3.62)

A comparison sketch is given below.
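A short sketch comparing the exact ZOH matrices with the Euler forward approximation (3.62), again using the example system of the following slides:

A = [0 -1; 1/2 -1];  B = [0; 1];  Ts = 0.5;
Ad = expm(A*Ts);  Bd = A \ (Ad - eye(2)) * B;    % exact, using (3.56) and (3.61)
Ad_euler = eye(2) + Ts*A;  Bd_euler = Ts*B;      % Euler forward approximation (3.62)
norm(Ad - Ad_euler), norm(Bd - Bd_euler)         % discretization error for this sampling time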
Example (1)
Consider the continuous-time system

    ẋ(t) = [0  −1; 1/2  −1] x(t) + [0; 1] u(t)   with C = I, D = 0.

The characteristic polynomial is

    det(A − Iλ) = λ^2 + λ + 1/2,

resulting in the complex eigenvalue pair λ1,2 = −0.5 ± j0.5. To investigate the zero-input
response we apply Cayley-Hamilton (3.24) to find the continuous-time transition matrix:

    e^{λ1 t} = e^{−t/2}(cos(t/2) + j sin(t/2)) = α0(t) + α1(t)(−1/2 + j/2) = α0(t)λ1^0 + α1(t)λ1,
    e^{λ2 t} = e^{−t/2}(cos(t/2) − j sin(t/2)) = α0(t) + α1(t)(−1/2 − j/2) = α0(t)λ2^0 + α1(t)λ2.
Example (2)
From the previous equation system we can find

    α0(t) = e^{−t/2}(cos(t/2) + sin(t/2)),   α1(t) = 2e^{−t/2} sin(t/2),

leading to the continuous-time state transition matrix

    Φ(t) = α0(t)I + α1(t)A = [e^{−t/2}(cos(t/2) + sin(t/2))   −2e^{−t/2} sin(t/2);  e^{−t/2} sin(t/2)   e^{−t/2}(cos(t/2) − sin(t/2))].

From (3.56) follows the discrete-time system matrix and the corresponding transition matrix

    Ad = [e^{−Ts/2}(cos(Ts/2) + sin(Ts/2))   −2e^{−Ts/2} sin(Ts/2);  e^{−Ts/2} sin(Ts/2)   e^{−Ts/2}(cos(Ts/2) − sin(Ts/2))],   Φd[k] = Ad^k.



Example (3)

For comparison reasons the simplified Euler approximation using (3.62) is also considered:

    Ad ≈ Ãd = I + Ts A = [1   −Ts;  Ts/2   1 − Ts],   Φ̃d[k] = Ãd^k.

For an exemplary comparison we set Ts = 0.5, leading to

    Ad = [0.947   −0.385;  0.193   0.562],   Ãd = [1   −0.5;  0.25   0.5].

The different response calculation results for x0 = [1  0]^T, i.e.,

    x(t) = Φ(t)x0,   x[k] = Ad^k x0,   x̃[k] = Ãd^k x0,

are visualized on the next slide.



Example (4)
Fig. 3.3: Zero-input system response for different solution approaches (x0 = [1  0]^T)
Example (5)
Additionally, the system response to a unit step u(t) = σ(t) should be investigated. From
(3.33) we can calculate the forced response:

    xp(t) = ∫_0^t Φ(τ)Bu(τ) dτ
          = ∫_0^t [e^{−τ/2}(cos(τ/2) + sin(τ/2))   −2e^{−τ/2} sin(τ/2);  e^{−τ/2} sin(τ/2)   e^{−τ/2}(cos(τ/2) − sin(τ/2))][0; 1] σ(τ) dτ
          = ∫_0^t [−2e^{−τ/2} sin(τ/2);  e^{−τ/2}(cos(τ/2) − sin(τ/2))] dτ = ∫_0^t [2e^{−τ/2} sin(−τ/2);  e^{−τ/2}(cos(−τ/2) + sin(−τ/2))] dτ.

To solve the above integral we can make use of the following equations:

    ∫_0^t e^x sin(x) dx = [e^x (sin(x) − cos(x))/2]_0^t,   ∫_0^t e^x cos(x) dx = [e^x (cos(x) + sin(x))/2]_0^t.



Example (6)
Considering the change of variables x = −τ/2 (dτ = −2 dx), the forced response becomes

    xp(t) = 2 [e^{−t/2}(cos(t/2) + sin(t/2)) − 1;  e^{−t/2} sin(t/2)].

The total response for the same x0 = [1  0]^T is then

    x(t) = [e^{−t/2}(cos(t/2) + sin(t/2));  e^{−t/2} sin(t/2)] + [2e^{−t/2}(cos(t/2) + sin(t/2)) − 2;  2e^{−t/2} sin(t/2)]
         = [3e^{−t/2}(cos(t/2) + sin(t/2)) − 2;  3e^{−t/2} sin(t/2)].



Example (7)
Regarding the discrete-time total response, we can utilize that A is nonsingular and its inverse
is given as

    A^{-1} = [−2  2; −1  0].

The exact discrete-time input matrix is then

    Bd = A^{-1}(Ad − I)B = [−2  2; −1  0][−2e^{−Ts/2} sin(Ts/2);  e^{−Ts/2}(cos(Ts/2) − sin(Ts/2)) − 1]
       = [2e^{−Ts/2}(cos(Ts/2) + sin(Ts/2)) − 2;  2e^{−Ts/2} sin(Ts/2)].

The approximated discrete-time input matrix using Euler forward is

    B̃d = Ts B = [0  Ts]^T.
Example (8)
The discrete-time total response can then be calculated using (3.51):

    x[k] = Ad^k x[0] + Σ_{j=0}^{k−1} Ad^{(k−1)−j} Bd u[j]
         = Ad^k [1  0]^T + Σ_{j=0}^{k−1} Ad^{(k−1)−j} Bd σ[j].

The exemplary discrete-time input matrices for Ts = 0.5 are

    Bd = [−0.106;  0.385],   B̃d = [0;  0.5].

Note: Instead of recalculating the above forced-response series for each new k again and
again, we can calculate the discrete-time response directly using the recursion x[k + 1] = Ad x[k] + Bd u[k].
Example (9)
Fig. 3.4: Total system response for different solution approaches (x0 = [1  0]^T, u(t) = σ(t))
Important Matlab / GNU Octave commands
sys  = ss(A,B,C,D);     % Creates a continuous-time state-space model
sysd = ss(A,B,C,D,Ts);  % Creates a discrete-time state-space model

sysd = c2d(sysc,Ts);    % Converts a model from continuous to discrete time
sysc = d2c(sysd);       % Converts a model from discrete to continuous time
sys1 = d2d(sys,Ts);     % Resamples a discrete-time model

Y = expm(X);            % Matrix exponential (both numeric & symbolic)

step(sys);              % Step response plot of a dynamic system
y = step(sys,t);        % Returns the step response data at times t
impulse(sys);           % Impulse response plot of a dynamic system
y = impulse(sys,t);     % Returns the impulse response data at times t
lsim(sys,u,t);          % Plots the simulated time response to an arbitrary input u
y = lsim(sys,u,t);      % Returns the system response to an arbitrary input u

Note that the above commands are given in Matlab syntax; GNU Octave commands might differ slightly,
and some commands require additional toolboxes (e.g., the Control System Toolbox).
Summary: what you’ve learned in this section

▶ Due to the superposition principle of linear systems we can find the forced and natural
  responses separately and then add them to obtain the total response.
▶ For linear time-variant (LTV) systems a general closed-form solution is not available;
  only in special cases can we find a direct response solution.
▶ In contrast, this is always possible for linear time-invariant (LTI) systems.
▶ The state transition matrix is of prime importance for calculating any natural or forced LTI
  system response.
▶ It is defined via the matrix exponential, which can be found using multiple approaches
  (e.g., Cayley-Hamilton or similarity transforms).
▶ The same holds for LTI systems in discrete time; however, calculating their response is
  much easier.
▶ By comparing the discrete-time and continuous-time responses we can find an exact
  transformation between the two state-space representations.
End of this section

Thanks for your attention!

