An example of a static system is a simple light switch, in which the switch position determines whether the light is on. On the other hand, a light controlled by a toggle switch is NOT a static system with the switch position as input, since the current light state depends on the history of past toggles.
Another example is a flow control valve (Fig. 1.1), whose output flow rate Q is given as

Q = u(t) Amax √(2Ps(t)/ρ)     (1.2)

where u ∈ [0, 1] is the input, Amax is the maximum orifice area, Ps is the flow pressure and ρ is the fluid density. Here Ps is a time-varying parameter that affects the static output Q.
© Perry Y. Li
[Figures: flow control valve schematic with orifice area uA, valve force Fs, displacement x and pressure P; an orbiting pendulum about the Earth.]
The flow control valve shown in Fig. 1.2 with force F (t) acting on the valve as input is not a
static system. The fluid pressure Ps is constant. However, the flow rate history is a function of the
force F (t) acting on the valve. It is necessary to know the time history of the forcing function F (t)
in order to determine the flow rate at any time. The position x(t) of the valve is governed by the
differential equation
ẍ = F (t) − bẋ − kx (1.3)
where k is the spring constant and b is the damping factor. If the fractional opening for a given valve position x is f (x), then the flow rate is given by:
Q = f (x) Amax √(2Ps(t)/ρ)     (1.4)
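The dynamic valve (1.3)-(1.4) can be simulated numerically. In the sketch below, the parameters k, b, Amax, Ps, ρ, the constant applied force F(t) and the linear opening law f(x) are all assumptions for illustration; the text fixes only the structure of the equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters; the text gives only the structure of (1.3)-(1.4).
k, b = 50.0, 5.0              # spring constant, damping factor
Amax, Ps, rho = 1e-4, 1e5, 1000.0
x_max = 0.01                  # valve travel at which the orifice is fully open (assumed)

def f_open(x):
    """Fractional opening f(x): assumed linear in x, clipped to [0, 1]."""
    return np.clip(x / x_max, 0.0, 1.0)

def F(t):
    """Forcing function: a constant (step) force applied from t = 0 (an assumption)."""
    return 1.0

def rhs(t, s):
    x, v = s                  # valve position and velocity; eq. (1.3)
    return [v, F(t) - b * v - k * x]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0])
x_final = sol.y[0, -1]        # settles near F/k = 0.02
Q_final = f_open(x_final) * Amax * np.sqrt(2.0 * Ps / rho)   # eq. (1.4)
```

The flow history depends on the whole force history F(t), not just its current value, which is exactly why this valve is a dynamic rather than static system.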
A static time invariant system is one with y(t) = f (u(t)) for all t. To determine the output of a static system at any time t, only the input value at t is needed.
In contrast, to determine the balance of a bank account today, one needs to know all the past deposits and withdrawals. Alternatively, one can know the so-called state at one time, together with the inputs applied since then.
On the other hand, the value of a stock is determined by the revenue that it will bring in the
future (actually discounted by the interest rate). Thus, the system that determines the value of the
stock is not a causal system as it depends on future revenue (and future interest rates). The price
of the stock, however, is the market’s estimate of the value of the stock based on past information.
Thus, the process that determines the stock price is a causal system (albeit a complicated one).
u + (v + w) = (u + v) + w    (Associativity of addition)
u + v = v + u    (Commutativity of addition)
∃ 0 ∈ X s.t. ∀v ∈ X, v + 0 = v    (Identity element)
∀v ∈ X, ∃ −v ∈ X s.t. v + (−v) = 0    (Additive inverse)
a(bv) = (ab)v    (Compatibility of scalar multiplication)
1v = v    (Multiplicative identity)
a(u + v) = au + av    (Distributivity)
(a + b)v = av + bv    (Distributivity)
An Example: A car with input u(t) being its acceleration and output y(t) being position.
1. If the behavior of interest is just the speed of the car, then x(t) = ẏ(t) can be the state. It qualifies as a state because given u(τ), τ ∈ [t0, t], the speed at t is obtained by:

v(t) = ẏ(t) = x(t0) + ∫_{t0}^{t} u(τ) dτ.
U. of Minnesota, ME 8281 - Advance Control System Design (updated Spring 2016) 7
2. If the behavior of interest is the car position, then xa(t) = [y(t), ẏ(t)]ᵀ ∈ ℜ² can be the state.

3. An alternate state might be xb(t) = [y(t) + 2ẏ(t), ẏ(t)]ᵀ. Obviously, since we can determine the old state vector xa(t) from this alternate one xb(t), and vice versa, both are valid state vectors. This illustrates the point that state vectors are not unique.
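The two candidate states are related by a constant invertible matrix, which is why both qualify; a small check with arbitrary illustrative numbers:

```python
import numpy as np

# xb = [y + 2*ydot, ydot] relates to xa = [y, ydot] through a constant matrix P:
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])

xa = np.array([3.0, -1.0])   # an arbitrary state [y, ydot] (illustrative values)
xb = P @ xa

# P is invertible (det P = 1), so xa is recoverable from xb: both are valid states.
xa_back = np.linalg.solve(P, xb)
```

Any invertible change of coordinates on a valid state vector produces another valid state vector.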
Remark 1.2.1 1. If y(t) is defined to be the behavior of interest, then by taking t = t0 , the
definition of a state determined system implies that one can determine y(t) from the state
x(t) and input u(t), at time t. i.e. there is a static output function h(·, ·, ·) so that the output
y(t) is given by:
y(t) = h(x(t), u(t), t)
h : (x, u, t) 7→ y(t) is called the output readout map.
2. The usual representation of continuous time dynamical system is given by the form:
ẋ = f (x, u, t)
y = h(x, u, t)
• State transition property For any t0 ≤ t1, if two input signals u1(·) and u2(·) are such that u1(t) = u2(t) ∀t ∈ [t0, t1], then

s(t1, t0, x0, u1(·)) = s(t1, t0, x0, u2(·)),

i.e. if x(t0) = x0, then the final state x(t1) depends only on the inputs that occur after t0 (up to t1), once the initial state is specified. Systems like this are called causal because the state does not depend on future inputs.
• Semi-group property For all t2 ≥ t1 ≥ t0 ∈ T , for all x0 , and for all u(·),
s(t2 , t1 , x(t1 ), u) = s(t2 , t1 , s(t1 , t0 , x0 , u), u) = s(t2 , t0 , x0 , u)
Thus, when calculating the state at time t2 , we can first calculate the state at some interme-
diate time t1 , and then utilize this result to calculate the state at time t2 in terms of x(t1 )
and u(t) for t ∈ [t1 , t2 ].
Similarly,
∆y(t) ≈ (∂h/∂x)(x̄(t), ū(t), t) ∆x(t) + (∂h/∂u)(x̄(t), ū(t), t) ∆u(t)
     = C(t)∆x(t) + D(t)∆u(t)
Notice that if x ∈ ℜⁿ, u ∈ ℜᵐ, y ∈ ℜᵖ, then A(t) ∈ ℜ^{n×n}, B(t) ∈ ℜ^{n×m}, C(t) ∈ ℜ^{p×n} and D(t) ∈ ℜ^{p×m}.
Be careful that to obtain the actual input, state and output, one needs to recombine with the
nominals:
u(t) ≈ ū(t) + ∆u(t)
x(t) ≈ x̄(t) + ∆x(t)
y(t) ≈ ȳ(t) + ∆y(t)
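As a sketch of how the matrices A(t), B(t) arise in practice, the code below linearizes an assumed pendulum model ẋ₁ = x₂, ẋ₂ = −sin x₁ + u (an illustrative system, not an example from the text) about a nominal point by central differences:

```python
import numpy as np

def f(x, u):
    """Nonlinear pendulum dynamics ẋ = f(x, u) (illustrative, not from the text)."""
    return np.array([x[1], -np.sin(x[0]) + u])

def jacobians(f, x_bar, u_bar, eps=1e-6):
    """Numerical A, B about a nominal (x_bar, u_bar) via central differences."""
    n = len(x_bar)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x_bar + dx, u_bar) - f(x_bar - dx, u_bar)) / (2 * eps)
    B = (f(x_bar, u_bar + eps) - f(x_bar, u_bar - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

A, B = jacobians(f, np.array([0.0, 0.0]), 0.0)
# About the hanging equilibrium: A ≈ [[0, 1], [-1, 0]], B ≈ [[0], [1]]
```

The recombination step in the text then applies: the linear model propagates ∆x, ∆u, and the actual trajectory is recovered by adding back the nominal.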
Consider the linear time-varying system

ẋ(t) = A(t)x(t) + B(t)u(t)     (2.1)
y(t) = C(t)x(t) + D(t)u(t)     (2.2)

where x(t) ∈ ℜⁿ, u : [0, ∞) → ℜᵐ and y : [0, ∞) → ℜᵖ. A(t), B(t), C(t), D(t) are matrices of real-valued functions with compatible dimensions.
This system may have been obtained from the Jacobian linearization of a nonlinear system.
Assumption 2.1.1 We assume that A(t), B(t) and the input u(t) are piecewise continuous functions of t. A function f (t) is piecewise continuous in t if it is continuous everywhere except possibly at points of discontinuity, of which there are at most a finite number per unit interval; at each point of discontinuity, both the left and right limits exist.
Assumption 2.1.1 allows us to claim that given an initial state x(t0) and an input function u(τ), τ ∈ [t0, t1], the solution

x(t) = s(t, t0, x(t0), u(·)) = x(t0) + ∫_{t0}^{t} [A(τ)x(τ) + B(τ)u(τ)] dτ
exists and is unique via the Fundamental Theorem of Ordinary Differential Equations (see below)
often proved in a course on nonlinear systems.
Uniqueness and existence of solutions of nonlinear differential equations are not generally guar-
anteed. Consider the following examples.
An example in which existence is a problem:

ẋ = 1 + x²

with x(t = 0) = 0. This system has the solution x(t) = tan t, which cannot be extended beyond t = π/2. This phenomenon is called finite escape time.
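This can be observed numerically; a sketch that integrates ẋ = 1 + x² and compares against the known solution tan t:

```python
import numpy as np
from scipy.integrate import solve_ivp

# ẋ = 1 + x², x(0) = 0 has solution x(t) = tan(t), which blows up as t → π/2.
sol = solve_ivp(lambda t, x: 1 + x[0] ** 2, (0.0, 1.5), [0.0],
                rtol=1e-10, atol=1e-10, dense_output=True)

x_at_1 = sol.sol(1.0)[0]      # tracks tan(1.0) ≈ 1.5574
x_near_escape = sol.y[0, -1]  # ≈ tan(1.5) ≈ 14.1, already growing rapidly
```

Pushing the final time closer to π/2 ≈ 1.5708 makes the solver's state grow without bound: no solution exists past the escape time.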
The other issue is whether a differential equation can have more than one solution for the same initial condition (the non-uniqueness issue), e.g. for the system:

ẋ = 3x^{2/3}, x(0) = 0,

both x(t) = 0 and x(t) = t³ (for t ≥ 0) are solutions.
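A quick check that both candidate solutions, x(t) = 0 and x(t) = t³ (for t ≥ 0), satisfy ẋ = 3x^(2/3) at a few sample times:

```python
# Verify two distinct solutions of ẋ = 3 x^(2/3) with x(0) = 0.
def f(x):
    return 3.0 * x ** (2.0 / 3.0)

for t in [0.0, 0.5, 1.0, 2.0]:
    x1, dx1 = 0.0, 0.0             # trivial solution x(t) = 0
    x2, dx2 = t ** 3, 3 * t ** 2   # x(t) = t³ gives ẋ = 3t² = 3(t³)^(2/3)
    assert abs(dx1 - f(x1)) < 1e-12
    assert abs(dx2 - f(x2)) < 1e-9
```

The right-hand side fails to be Lipschitz at x = 0 (its slope blows up there), which is exactly what lets two solutions emanate from the same initial condition.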
The Fundamental Theorem of ODEs mentioned above concerns the differential equation

ẋ = f (x, t)     (2.3)

under two conditions:
1. For each fixed x, f (x, t) is piecewise continuous in t.
2. There exists a piecewise continuous function, k(t), such that for all x1, x2 ∈ ℜⁿ,

‖f (x1, t) − f (x2, t)‖ ≤ k(t) ‖x1 − x2‖.
Then,
1. there exists a solution to the differential equation ẋ = f (x, t) for all t, meaning that: a) For each initial state x0 ∈ ℜⁿ, there is a continuous function φ : ℜ+ → ℜⁿ such that

φ(t0) = x0

and

φ̇(t) = f (φ(t), t) for all t ≥ t0.
2. Moreover, the solution φ(·) is unique. [If φ1 (·) and φ2 (·) have the same properties above,
then they must be the same.]
Remark 2.1.1 This version of the fundamental theorem of ODEs is taken from [Callier and Desoer, 91]. A weaker (local) form of the second condition requires only that on each interval [t0, t1] there exists a constant k_{[t0,t1]} such that the function x ↦ f (x, t) satisfies, for all t ∈ [t0, t1] and all x1, x2 ∈ ℜⁿ,

‖f (x1, t) − f (x2, t)‖ ≤ k_{[t0,t1]} ‖x1 − x2‖.
The proof of this result can be found in many books on ODEs or dynamic systems¹ and is usually proved in detail in a Nonlinear Systems Analysis class.

¹ e.g. Vidyasagar, Nonlinear Systems Analysis, 2nd ed., Prentice Hall, 93, or Khalil, Nonlinear Systems, Macmillan, 92.
Theorem 2.1.2 For any pair of initial and final times t0, t1 ∈ ℜ, the state transition map of the linear differential system (2.1)-(2.2) is a linear map of the pair of the initial state x(t0) and the input u(τ), τ ∈ [t0, t1].
In other words, for any 2 input functions u(·), u′(·), 2 initial states x, x′ and 2 scalars α, β ∈ ℜ,

s(t1, t0, (αx + βx′), (αu(·) + βu′(·))) = α · s(t1, t0, x, u(·)) + β · s(t1, t0, x′, u′(·)).     (2.4)

Similarly, the response map ρ, from the pair of initial state x(t0) and input u(τ), τ ∈ [t0, t1], to the output y(t1), is also a linear map, i.e.

ρ(t1, t0, (αx + βx′), (αu(·) + βu′(·))) = α · ρ(t1, t0, x, u(·)) + β · ρ(t1, t0, x′, u′(·)).     (2.5)
Before we prove this theorem, let us point out a very useful principle for proving that two time functions x(t) and x′(t), t ∈ [t0, t1], are the same.

Lemma 2.1.3 Given two time signals x(t) and x′(t), suppose that
• both x(·) and x′(·) satisfy the differential equation

ṗ = f (t, p)     (2.6)

• the differential equation (2.6) has a unique solution on the time interval [t0, t1] for each initial condition, and
• x(t0) = x′(t0).
Then x(t) = x′(t) for all t ∈ [t0, t1].
Proof: (of Theorem 2.1.2) We shall apply the above lemma (principle) to (2.4).
Let t0 be an initial time, and let (x0, u(·)) and (x′0, u′(·)) be two pairs of initial state and input, producing state and output trajectories x(t), y(t) and x′(t), y′(t) respectively.
We need to show that if the initial state is x′′(t0) = αx0 + βx′0 and the input is u′′(t) = αu(t) + βu′(t) for all t ∈ [t0, t1], then at any time t, the response y′′(t) is given by αy(t) + βy′(t).
We will first show that for all t ≥ t0, the state trajectory x′′(t) is given by:

x′′(t) = g(t) := αx(t) + βx′(t).     (2.8)

To prove (2.8), we use the fact that (2.1) has unique solutions. Clearly, (2.8) is true at t = t0. Moreover,

ġ(t) = αẋ(t) + βẋ′(t) = A(t)[αx(t) + βx′(t)] + B(t)[αu(t) + βu′(t)] = A(t)g(t) + B(t)u′′(t).

Hence, g(t) and x′′(t) satisfy the same differential equation (2.1). Thus, by the existence and uniqueness property of the linear differential system, the solution is unique for each initial time t0 and initial state. Hence x′′(t) = g(t).
where 0u denotes the identically zero input function (u(τ) = 0 for all τ) and 0x denotes the zero state vector x = 0. It is so because we can decompose (x0, u) into

(x0, u) = (x0, 0u) + (0x, u)

and then apply the defining property of a linear dynamical system to this decomposition.
Because of this property, the zero-state response (i.e. the response of the system when the initial state is x(t0) = 0x) satisfies the familiar superposition property:
required answer.
Proposition 2.3.1 Let X(t) ∈ ℜ^{n×n} be a fundamental matrix. Then, X(t) is non-singular at all t ∈ ℜ.

Proof: Since X(t) is a fundamental matrix, X(t1) is nonsingular for some t1. Suppose that X(τ) is singular for some τ. Then ∃ a non-zero vector k ∈ ℜⁿ s.t. X(τ)k = 0, the zero vector in ℜⁿ. Consider now the differential equation:

ẋ = A(t)x, x(τ) = X(τ)k = 0 ∈ ℜⁿ.

Then, x(t) = 0 for all t is the unique solution to this system with x(τ) = 0. However, the function

x′(t) = X(t)k

satisfies

ẋ′(t) = A(t)X(t)k = A(t)x′(t),

and x′(τ) = x(τ) = 0. Thus, by uniqueness of solutions, x(t) = x′(t) for all t. Hence x′(t1) = 0, a contradiction, because X(t1) is nonsingular and k ≠ 0, so x′(t1) = X(t1)k is non-zero.
This formula is derived by applying Picard iteration (see class notes). It can be verified formally (do it) by checking that the RHS at t = t0 is indeed I and that the RHS satisfies the differential equation

∂/∂t RHS(t, t0) = A(t) RHS(t, t0).
Let us define the exponential of a matrix A ∈ ℜ^{n×n} using the power series

exp(A) = I + A/1! + A²/2! + · · · + Aᵏ/k! + · · ·

by mimicking the power series expansion of the exponential function of a real number.
If A(t) = A is a constant, then the Peano-Baker formula reduces to (prove it!)
Notice that (2.11) does not in general apply unless the matrices commute, i.e. unless

A(t) (∫_{t0}^{t} A(τ)dτ)ᵏ = (∫_{t0}^{t} A(τ)dτ)ᵏ A(t).

Each of the following situations will guarantee that the condition that A(t) and ∫_{t0}^{t} A(τ)dτ commute for all t is satisfied:
1. A(t) is constant.
2. A(t) = α(t)M where α(t) is a scalar function, and M ∈ ℜ^{n×n} is a constant matrix;
3. A(t) = Σᵢ αᵢ(t)Mᵢ where {Mᵢ} are constant matrices that commute: MᵢMⱼ = MⱼMᵢ, and αᵢ(t) are scalar functions;
∂/∂t Φ(t, t0) = A(t)Φ(t, t0);     Φ(t0, t0) = I

Note that this is a system of n² coupled differential equations if one stacks all the n² elements of Φ(t, t0) into a vector.
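A sketch of this stacking, integrating ∂Φ/∂t = A(t)Φ with SciPy; the particular time-varying A(t) below is an arbitrary illustrative choice, not from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2

def A(t):
    """A time-varying A(t) (illustrative choice, not from the text)."""
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.5 * np.sin(t), -0.2]])

def rhs(t, phi_vec):
    """Stack the n² entries of Φ(t, t0) into a vector and integrate ∂Φ/∂t = A(t)Φ."""
    Phi = phi_vec.reshape(n, n)
    return (A(t) @ Phi).ravel()

t0, t1 = 0.0, 3.0
sol = solve_ivp(rhs, (t0, t1), np.eye(n).ravel(), rtol=1e-9, atol=1e-12)
Phi_t1_t0 = sol.y[:, -1].reshape(n, n)
```

The initial condition Φ(t0, t0) = I becomes the identity matrix flattened into the initial state vector; the semi-group property can be checked by integrating in two stages and multiplying.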
Infinite series
When A(t) = A is constant, there are algebraic approaches available:

Φ(t1, t0) = exp(A(t1 − t0)) := I + A(t1 − t0)/1! + A²(t1 − t0)²/2! + . . .
Matlab provides expm(A ∗ (t1 − t0 )).
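In Python, scipy.linalg.expm plays the role of MATLAB's expm; a sketch comparing it against a truncated power series, using the 2×2 matrix that appears in the example below:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])   # the example matrix used in these notes
dt = 0.7                      # t1 - t0 (arbitrary illustrative value)

def expm_series(M, terms=30):
    """Truncated power series I + M/1! + M²/2! + ... (for demonstration only)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k    # term is now M^k / k!
        out = out + term
    return out

Phi_series = expm_series(A * dt)
Phi_scipy = expm(A * dt)       # counterpart of MATLAB's expm(A*(t1-t0))
```

The naive truncated series is fine for small ‖A·dt‖, but library routines like expm use scaling-and-squaring and are the practical choice.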
where Φ̂(s, 0) is the Laplace transform of Φ(t, 0) when treated as a function of t. This gives:

(sI − A)Φ̂(s, 0) = I

so that Φ̂(s, 0) = (sI − A)⁻¹. To get Φ(t, 0), take the inverse Laplace transform of (sI − A)⁻¹.
Example

A = [−1, 0; 1, −2]
(sI − A)⁻¹ = [s + 1, 0; −1, s + 2]⁻¹ = 1/((s + 1)(s + 2)) · [s + 2, 0; 1, s + 1]
           = [1/(s + 1), 0; 1/((s + 1)(s + 2)), 1/(s + 2)]

Taking the inverse Laplace transform (using e.g. a table) term by term,

Φ(t, 0) = [e^{−t}, 0; e^{−t} − e^{−2t}, e^{−2t}]
Suppose A is semi-simple, i.e. it has n linearly independent eigenvectors vi with associated eigenvalues λi:

A · vi = λi vi.

Let T = [v1, . . . , vn] be the matrix whose columns are the eigenvectors, and Λ = diag(λ1, . . . , λn) the associated eigenvalues. Then, the so-called similarity transform is:

A = T ΛT⁻¹     (2.12)
Remark
• When A is not semi-simple, a similar decomposition as (2.12) is available except that Λ will be in Jordan form (with 1's in the super-diagonal) and T consists of eigenvectors and generalized eigenvectors. This topic is covered in most Linear Systems or linear algebra textbooks.
Now

exp(At) := I + At/1! + A²t²/2! + . . .

Notice that Aᵏ = T Λᵏ T⁻¹, thus,

exp(tA) = T (I + Λt/1! + Λ²t²/2! + . . .) T⁻¹ = T exp(Λt) T⁻¹
The above formula is valid even if A is not semi-simple. In the semi-simple case,
Λᵏ = diag[λ₁ᵏ, λ₂ᵏ, . . . , λₙᵏ]
so that exp(Λt) = diag[e^{λ₁t}, e^{λ₂t}, . . . , e^{λₙt}]. If we write T⁻¹ = [w₁ᵀ; w₂ᵀ; . . . ; wₙᵀ], where wᵢᵀ is the i-th row of T⁻¹, then

exp(tA) = T exp(Λt) T⁻¹ = Σ_{i=1}^{n} e^{λᵢt} vᵢ wᵢᵀ.
This is the dyadic expansion of exp(tA). It is easy to show that wᵢᵀA = λᵢwᵢᵀ (do it). This means that the wᵢ are the left eigenvectors of A.
Let Dᵢ = vᵢwᵢᵀ ∈ ℜ^{n×n} be a dyad. From wᵢᵀvⱼ = 0 if i ≠ j and wᵢᵀvᵢ = 1, we can show that

DᵢDⱼ = 0_{n×n} if i ≠ j, and DᵢDⱼ = Dᵢ if i = j.
The expansion shows that the system has been decomposed into a set of simple, decoupled 1st
order systems.
Example

A = [−1, 0; 1, −2]

The eigenvalues and eigenvectors are:

λ₁ = −1, v₁ = [1; 1];     λ₂ = −2, v₂ = [0; 1].

Thus,

exp(At) = [1, 0; 1, 1] [e^{−t}, 0; 0, e^{−2t}] [1, 0; −1, 1] = [e^{−t}, 0; e^{−t} − e^{−2t}, e^{−2t}].
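The dyadic expansion for this example can be verified numerically; T⁻¹ supplies the left eigenvectors wᵢᵀ:

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])
lam = np.array([-1.0, -2.0])
T = np.array([[1.0, 0.0],          # columns are the eigenvectors v1, v2
              [1.0, 1.0]])
W = np.linalg.inv(T)               # rows are the left eigenvectors w1ᵀ, w2ᵀ

def expAt_dyadic(t):
    """exp(At) = Σ_i e^{λ_i t} v_i w_iᵀ, the dyadic expansion."""
    out = np.zeros_like(A)
    for i in range(2):
        out = out + np.exp(lam[i] * t) * np.outer(T[:, i], W[i, :])
    return out
```

Evaluating expAt_dyadic reproduces the closed-form exp(At) computed above, and the dyads Dᵢ = vᵢwᵢᵀ satisfy DᵢDⱼ = 0 for i ≠ j and Dᵢ² = Dᵢ.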
Digression: Modal decomposition The (generalized) eigenvectors form a good basis for a coordinate system.
Suppose that A = T ΛT⁻¹ is the similarity transform. Let x ∈ ℜⁿ and z = T⁻¹x, so that x = Tz. Thus,

x = z₁v₁ + z₂v₂ + · · · + zₙvₙ.
ẋ = Ax + Bu
T ż = T Λz + Bu
ż = Λz + T⁻¹Bu
żᵢ = λᵢzᵢ + B̄ᵢu

where B̄ᵢ is the i-th row of T⁻¹B.
By linearity, the state transition map can be decomposed into the zero-input response and the zero-state response:

s(t, t0, x0, u) = Φ(t, t0)x0 + s(t, t0, 0x, u)
Having figured out the zero-input state component, we now derive the zero-state response.
• Step 1: t0 ≤ t < t0 + h · i.
Since u(τ) = 0 for τ ∈ [t0, t0 + h · i) and x(t0) = 0, the state remains x(t) = 0.
• Step 2: t0 + h · i ≤ t < t0 + h · (i + 1).
Over this short interval the input is approximately constant, so x(t) ≈ B(t0 + h · i)u(t0 + h · i) · ∆T, where ∆T = t − (t0 + h · i).
• Step 3: t ≥ t0 + h · (i + 1).
The input is no longer active, uᵢ(t) = 0, so the state is again given by the zero-input transition map:

x(t) ≈ Φ(t, t0 + h · (i + 1)) · B(t0 + i · h)u(t0 + h · i) · h

where the factor B(t0 + i · h)u(t0 + h · i) · h ≈ x(t0 + (i + 1) · h).
The total zero-state state transition due to the input u(·) is therefore given by:
s(t, t0, 0, u) ≈ Σ_{i=0}^{(t−t0)/h − 1} Φ(t, t0 + h · i) B(t0 + i · h) u(t0 + h · i) · h
In this heuristic derivation, we can see that Φ(t, τ )B(τ )u(τ ) is the contribution to the state x(t)
due to the input u(τ ) for τ < t.
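For a constant A this sum can be checked against the exact zero-state response x(t) = A⁻¹(e^{A(t−t0)} − I)B for a unit-step input; the matrix B and the step size h below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])
B = np.array([0.0, 1.0])          # hypothetical input matrix
u = lambda t: 1.0                 # unit-step input
t0, t = 0.0, 2.0
h = 1e-3                          # step size of the heuristic sum

# Zero-state response as the Riemann sum  Σ_i Φ(t, t0 + h·i) B u(t0 + h·i) h,
# with Φ(t, τ) = exp(A (t - τ)) since A is constant:
N = int((t - t0) / h)
x_sum = np.zeros(2)
for i in range(N):
    ti = t0 + h * i
    x_sum = x_sum + expm(A * (t - ti)) @ B * u(ti) * h

# Exact zero-state response for constant A: x(t) = A^{-1} (e^{A(t - t0)} - I) B
x_exact = np.linalg.solve(A, (expm(A * (t - t0)) - np.eye(2)) @ B)
```

Shrinking h drives the Riemann sum toward the convolution integral derived next.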
Clearly (2.14) is correct for t = t0 . We will now show that the LHS of (2.14), i.e. x(t) and the RHS
of (2.14), which we will denote by z(t) satisfy the same differential equation.
[Figure 2.1 shows the triangular region of integration t0 ≤ τ ≤ σ ≤ t in the (σ, τ) plane, bounded by the line σ = τ.]
Figure 2.1: Changing the order of integral in the proof of zero-state transition function
let f (σ, τ) := A(σ)Φ(σ, τ)B(τ)u(τ) and then change the order of the integral (see Fig. 2.1):

= ∫_{t0}^{t} B(τ)u(τ) dτ + ∫_{t0}^{t} ∫_{t0}^{σ} f (σ, τ) dτ dσ
= ∫_{t0}^{t} B(τ)u(τ) dτ + ∫_{t0}^{t} A(σ) [∫_{t0}^{σ} Φ(σ, τ)B(τ)u(τ) dτ] dσ
= ∫_{t0}^{t} B(τ)u(τ) dτ + ∫_{t0}^{t} A(σ)z(σ) dσ
uⱼ(t) = eⱼ δ(t − τ)

where eⱼ = [0, . . . , 0, 1, 0, . . . , 0]ᵀ is the j-th unit vector (1 in the j-th row). The output response is:

yⱼ(t) = C(t)Φ(t, τ)B(τ)eⱼ + D(t)δ(t − τ)eⱼ, for t ≥ τ.
The matrix

H(t, τ) = [C(t)Φ(t, τ)B(τ) + D(t)δ(t − τ)] for t ≥ τ, and H(t, τ) = 0 for t < τ     (2.16)
is called the impulse response matrix. The j-th column of H(t, τ) signifies the output response of the system when an impulse is applied at input channel j at time τ. The reason why it must be zero for t < τ is that the impulse can have no effect on the system before it is applied.
Because u(t) can be written as a superposition of impulses, u(t) = ∫ δ(t − τ)u(τ) dτ, we can also see intuitively that

y(t) = ∫_{t0}^{t} H(t, τ)u(τ) dτ
Linearity: For any α, β ∈ R, and for any two initial states xa , xb and two input signals ua (·)
and ub (·),
s(k1 , k0 , αxa + βxb , αua (·) + βub (·)) = αs(k1 , k0 , xa , ua (·)) + βs(k1 , k0 , xb , ub (·))
As corollaries, we have:

Φ(k, k0) = Π_{k′=k0}^{k−1} A(k′) = A(k − 1)A(k − 2) · · · A(k0)
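The product formula is straightforward to implement; a sketch with an arbitrary illustrative time-varying A(k), including a semi-group check:

```python
import numpy as np

def Phi_discrete(A_of_k, k1, k0):
    """Φ(k1, k0) = A(k1-1) A(k1-2) ··· A(k0), with Φ(k0, k0) = I."""
    n = A_of_k(k0).shape[0]
    Phi = np.eye(n)
    for k in range(k0, k1):
        Phi = A_of_k(k) @ Phi      # left-multiply: newest factor on the left
    return Phi

# Illustrative time-varying A(k) (not from the text):
A_of_k = lambda k: np.array([[0.9, 0.1 * k],
                             [0.0, 0.8]])

Phi_30 = Phi_discrete(A_of_k, 3, 0)
```

Note the ordering: the most recent A(k) multiplies on the left, matching the recursion Φ(k + 1, k0) = A(k)Φ(k, k0).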
The discrete time transition matrix has many properties similar to the continuous time one, particularly when A(k) is invertible for all k. The main difference results from the possibility that A(k) may not be invertible.
1. Existence and uniqueness: Φ(k1, k0) exists and is unique for all k1 ≥ k0. If k1 < k0, then Φ(k1, k0) exists and is unique if and only if A(k) is invertible for all k with k0 > k ≥ k1.
3. Semi-group property:
Φ(k2 , k0 ) = Φ(k2 , k1 )Φ(k1 , k0 )
Φ(k + 1, k0 ) = A(k)Φ(k, k0 )
Φ(k1 , k − 1) = Φ(k1 , k)A(k − 1)
Can you formulate a property for discrete time transition matrices similar to the splitting property for the continuous time case?