John Lygeros
Automatic Control Laboratory, ETH Zürich
www.control.ethz.ch
Examples classes:
Mondays, 13:00-15:00, ETF C1 and ETF E1
Assessment
Examples papers (7/9 required)
Written examination
Exercises in lecture notes are not assessed
Please try to do them nonetheless and discuss with instructor and assistants if you have questions
Reading material
Lecture notes
Slides handout
Blackboard notes
Recommended books
K.J. Åström and R.M. Murray, Feedback Systems: An Introduction for Scientists and Engineers, Princeton U.P., 2008. http://www.cds.caltech.edu/~murray/amwiki/index.php/Main_Page
G.F. Franklin, J.D. Powell, and A. Emami-Naeini, Feedback Control of Dynamic Systems, Prentice-Hall, 2006 (also used in Regelsysteme I/II)
J. Hespanha, Linear Systems Theory, Princeton U.P., 2009
T. Kailath, Linear Systems, Prentice-Hall, 1980
E.A. Lee and P. Varaiya, Structure and Interpretation of Signals and Systems, Addison-Wesley, 2002
Dynamical systems
Describe evolution of variables over time
Input variables
Output variables
State variables
Control:
Steer systems using inputs
Feedback (RS1)
SS1: A system maps input signals to output signals
SS2: Where does the input-output map come from?
RS1: What happens when we connect system inputs and outputs?
(Block diagram: input u(t), state x(t), output y(t); series connection of System 1 and System 2)
Dynamical systems
Describe evolution of variables over time
Input variables
Output variables
State variables
What is a state?
What values can it take?
What is time?
What values can it take?
What is evolution?
How can evolution be described?
Discrete vs continuous
Discrete: finite (or countable) values, e.g.
{0, 1, 2, 3, …}
{a, b, c, d}
{ON, OFF}, {hot, warm, cool, cold}, …
Discrete state systems: finite state machines, Turing machines; discrete time: z-transform
Hybrid systems combine discrete and continuous parts, e.g. switching diffusions
In this course
We will concentrate mostly on
Continuous state, continuous time linear systems
and to a lesser extent on
Continuous state, discrete time linear systems
and nonlinear systems (Notes 7)
Notation
Z denotes the integers, a discrete (countable) set:
Z = {…, −2, −1, 0, 1, 2, …}
N denotes the natural numbers: N = {0, 1, 2, …}
C denotes the complex numbers: s = s1 + j·s2 ∈ C (the symbol ∈ reads "belongs to")
Re[s] = s1, Im[s] = s2
Euler's formula: e^(jθ) = cos(θ) + j·sin(θ)
Notation
R^n denotes Euclidean space of n dimensions. It is a finite dimensional vector space (sometimes called a linear space). Special cases:
n = 1: the real line, x ∈ R (drop the superscript)
n = 2: the real plane, x ∈ R^2
R^(n×m) denotes the matrices with n rows and m columns whose elements are real:
A = [aij] ∈ R^(n×m), with rows i = 1, …, n and columns j = 1, …, m
Also a vector space, on which one can define length, etc. Special cases: R^n = R^(n×1), R = R^(1×1)
We assume familiarity with basic matrix operations (addition, multiplication, eigenvalues)
Notation
Definition of sets, e.g. the unit disc
{x ∈ R^2 : x1^2 + x2^2 ≤ 1}
Special case: intervals, for a, b ∈ R with a < b
Exercise: What do sets of this form look like?
Notation
Continuous time: t ∈ R+; discrete time: k ∈ N
Continuous state: x ∈ R^n; continuous input: u ∈ R^m; continuous output: y ∈ R^p
Discrete state: q ∈ Q, e.g. for the thermostat q ∈ Q = {ON, OFF}
Notation
f(·): X → Y denotes a function that returns for each x ∈ X a value f(x) ∈ Y; also written x ↦ f(x)
Exercise: Show that if f(·): R^n → R^m and g(·): R^m → R^p are linear functions, then their composition g(f(·)): R^n → R^p is also linear. If f and g are defined in terms of matrices A ∈ R^(m×n), B ∈ R^(p×m), what does this composition correspond to?
Laplace transform (cf. SSI):
U(s) = ∫_(−∞)^(∞) u(t) e^(−st) dt = ∫_0^(∞) u(t) e^(−st) dt (since u(t) = 0 for t < 0)
Convolution:
(u * h)(t) = ∫_(−∞)^(∞) u(τ) h(t − τ) dτ
Exercise: Show that the Laplace transform and the convolution are linear functions of u(·).
Series of examples
1. Pendulum: Continuous time, continuous state, nonlinear autonomous system
2. RLC circuit: Continuous time, continuous state, linear system with inputs
3. Amplifier circuit: Continuous time, continuous state, linear system with inputs and outputs
4. Population dynamics: Discrete time, continuous state nonlinear system
5. Manufacturing machine: Discrete time, discrete state system
6. Thermostat: Continuous time, hybrid state system
Example 1: Pendulum
A continuous time, continuous state, autonomous, nonlinear system
Mass m hanging from a weightless string of length l
String makes angle θ with the downward vertical
Friction with dissipation constant d
Determine how the pendulum is going to move,
i.e. assuming we know where the pendulum is at time t = 0 (θ(0) = θ0) and how fast it is moving (θ̇(0) = θ̇0), determine where it will be at time t (θ(t))
Need to solve Newton's differential equation,
i.e. find a function θ(·): R+ → R
such that
θ(0) = θ0, θ̇(0) = θ̇0
and θ(t) gives the angle at each time t
In Matlab:
>> x=[0.75 0];
>> [T,X]=ode45('pendulum', [0 10], x);
>> plot(T,X);
>> grid;
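The Matlab simulation above can be mirrored in plain Python. The sketch below is illustrative only: the parameter values m, l, d, g and the exact form of the friction term are assumptions, not taken from the notes, and a standard RK4 step stands in for ode45.

```python
import math

# assumed illustrative parameters: mass, length, dissipation, gravity
m, l, d, g = 1.0, 1.0, 0.5, 9.81

def f(x):
    # x = (theta, theta_dot): pendulum with viscous friction (assumed model)
    theta, omega = x
    return (omega, -(g / l) * math.sin(theta) - (d / (m * l**2)) * omega)

def rk4_step(x, h):
    # one classical Runge-Kutta 4 step of size h
    k1 = f(x)
    k2 = f(tuple(xi + h / 2 * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + h / 2 * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + h / 6 * (a + 2 * b + 2 * c + e)
                 for xi, a, b, c, e in zip(x, k1, k2, k3, k4))

x = (0.75, 0.0)          # same initial condition as the Matlab call
h, T = 0.01, 10.0
for _ in range(int(T / h)):
    x = rk4_step(x, h)
```

With friction the oscillation decays, so after 10 time units the state is much closer to the downward equilibrium than where it started.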
In first order (state space) form, with state x = (θ, θ̇): find x(·): R+ → R^2 such that
∀t ∈ R+, ẋ(t) = f(x(t))
(Plots: angle and angular velocity against time; the component x1(t) starting from t = 0)
(Figure: series RLC circuit driven by voltage source v1(t), with resistor voltage vR(t), inductor L with voltage vL(t) and current iL(t), and capacitor C with voltage vC(t))
C dvC(t)/dt = iL(t)
L diL(t)/dt = vL(t)
vR(t) = R·iL(t)
vL(t) = v1(t) − vR(t) − vC(t)
Solution of the ODE gives vC(t)
All other voltages and currents can be computed from vC(t)
In matrix form, with x(t) = (vC(t), iL(t)) and u(t) = v1(t):
ẋ(t) = [ 0 1/C ; −1/L −R/L ] x(t) + [ 0 ; 1/L ] u(t)
2nd order ODE → two states
States related to the energy stored in the system
External source of energy → input u(t) → system no longer autonomous
Function f(x,u) linear in x and u → dynamics described by a linear ODE
(Figure: amplifier circuit built around an operational amplifier, with resistors R0, R1, capacitors C0, C1 carrying voltages vC0(t), vC1(t), input voltage vin with current iin, output current iout, and output voltage v0(t))
(Figure: op-amp model with input resistance Rin, output resistance Rout, input vin, and output vout)
Idealized op-amp:
Gain (large), gain → ∞
Rin → ∞, hence iin → 0
Rout → 0
Virtual earth assumption: makes circuit analysis much easier
Note that the necessary energy comes from an external voltage source (not shown!):
input power iin·vin = 0, while the output power iout·vout is arbitrary
Element equations (capacitor currents and the resistor R0):
C1 dvC1(t)/dt = iC1(t), C0 dvC0(t)/dt = iC0(t), iR0(t) = vC0(t)/R0
Exercise: Why does the output settle to zero even though the input is nonzero?
P(t) = dE(t)/dt = (1/2)ẋ(t)ᵀQx(t) + (1/2)x(t)ᵀQẋ(t)
= (1/2)(x(t)ᵀAᵀ + u(t)ᵀBᵀ)Qx(t) + (1/2)x(t)ᵀQ(Ax(t) + Bu(t))
= (1/2)x(t)ᵀ(AᵀQ + QA)x(t) + (1/2)(u(t)ᵀBᵀQx(t) + x(t)ᵀQBu(t))
Population dynamics
A discrete time, continuous state system
Experiment:
Closed jar containing a number (N) of fruit flies
Constant food supply
Question: How does the fly population evolve?
Number of flies limited by available food, epidemics
Few flies → abundance of space/food → more flies
Many flies → competition for space/food → fewer flies
Normalize: x = N / N_MAX ∈ [0, 1]
Exercise: Is the function f(x) linear or nonlinear?
Exercise: Show that if r ∈ [0, 4] and x0 ∈ [0, 1] then xk ∈ [0, 1] for all k = 0, 1, 2, …
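The notes do not print f(x) on this slide; assuming the standard logistic model f(x) = r·x·(1 − x), the invariance claimed in the exercise can be checked numerically:

```python
def logistic(x, r):
    # logistic growth: few flies -> growth, many flies -> decline
    return r * x * (1.0 - x)

r, x = 3.7, 0.2   # r in [0, 4], x0 in [0, 1]
traj = [x]
for _ in range(200):
    x = logistic(x, r)
    traj.append(x)

# every iterate stays in [0, 1], as the exercise asserts
in_range = all(0.0 <= v <= 1.0 for v in traj)
```

The reason is that the maximum of r·x·(1 − x) on [0, 1] is r/4 ≤ 1 whenever r ≤ 4.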
Single equilibrium
Manufacturing system
A discrete time, discrete state system
Model of a machine in a manufacturing shop
Machine can be in three states
Idle (I)
Working (W)
Broken (D)
Events:
A part arrives and starts getting processed (p)
The processing is completed and the part moves on to the next machine (c)
The machine fails (f)
The machine is repaired (r)
Exercise: Is this system linear or nonlinear? Does the question even make sense?
(Transition diagram: from I, event p leads to W; from W, event c returns to I and event f leads to D; from D, event r returns to I)
The possible event sequences form the language
(p·c + p·f·r)* · (1 + p + p·f)
where + reads "or", · reads "followed by", and * denotes repetition
Thermostat
A continuous time, hybrid system
The thermostat tries to keep the temperature of a room at around 20 degrees:
Turn the heater on if the temperature falls to 19 degrees
Turn the heater off if the temperature rises to 21 degrees
Due to uncertainty in the radiator dynamics, the temperature may fall further, to 18 degrees, or rise further, to 22 degrees
Both a continuous and a discrete state:
Continuous state: temperature x(t)
Discrete state: heater q(t) ∈ Q = {ON, OFF}
Heater ON: temperature increases exponentially towards 30
Heater OFF: temperature decreases exponentially, e.g. ẋ(t) = −αx(t)
Exercise: Solve the differential equations to verify the exponential increase/decrease.
The heater changes from ON to OFF and back depending on x(t)
Natural to describe by a mixture of differential equations and graph notation
(Automaton: start in ON with x(0) = 20; switch to OFF when the temperature is high enough, back to ON when x(t) ≤ 19)
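A minimal Euler simulation of the hybrid dynamics can illustrate the switching behaviour. The decay rate α = 0.1 and the ON target of 30 are illustrative assumptions; the switching thresholds 19 and 21 are from the slide.

```python
alpha, h = 0.1, 0.01     # assumed decay rate; Euler step size
q, x = "ON", 20.0        # discrete state (heater) and continuous state (temperature)
xs = []
for _ in range(20000):   # simulate 200 time units
    # ON: relax exponentially towards 30; OFF: relax towards 0
    dx = -alpha * (x - 30.0) if q == "ON" else -alpha * x
    x += h * dx
    # guard conditions switch the discrete state
    if q == "ON" and x >= 21.0:
        q = "OFF"
    elif q == "OFF" and x <= 19.0:
        q = "ON"
    xs.append(x)
```

The temperature then oscillates between the two thresholds, never straying far outside [19, 21].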
Thermostat: Solutions
(Plots: discrete state q(t) switching between ON and OFF, and temperature x(t) oscillating between the switching thresholds)
Position derivatives are easy: q̇(t) = v(t)
Velocity derivatives (= accelerations) follow from Newton's law
Current/voltage relations for the circuit elements (e.g. a capacitor C)
Signal und Systemtheorie II, D-ITET, Semester 4. Notes 2: Revision of ODE and linear algebra
To exploit structure need tools from linear algebra
Discrete time continuous state models very similar
State equations are difference equations, otherwise the same
Assumed to be known
Matrix product (compatible dimensions):
Associative: (AB)C = A(BC)
Distributive with respect to addition: A(B + C) = AB + AC
Noncommutative: AB ≠ BA in general
Transpose: (AB)ᵀ = BᵀAᵀ
The 2-norm
Definition: The 2-norm is a function ‖·‖: R^n → R that to each x ∈ R^n assigns the real number
‖x‖ = ( Σ_(i=1..n) xi^2 )^(1/2) = (xᵀx)^(1/2)
Fact 2.1: For all x, y ∈ R^n, a ∈ R:
1. ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0
2. ‖ax‖ = |a|·‖x‖
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖
Exercise: Prove 1 and 2. Is the 2-norm a linear function?
The 2-norm is a measure of size or length. The distance between x, y ∈ R^n is ‖x − y‖. The set of points closer than r > 0 to x ∈ R^n is {y ∈ R^n : ‖x − y‖ < r}
Inner product
Definition: The inner product is a function ⟨·,·⟩: R^n × R^n → R that takes two vectors x, y ∈ R^n and returns the real number
⟨x, y⟩ = Σ_(i=1..n) xi·yi = xᵀy
Fact 2.2: For all x, y, z ∈ R^n, a, b ∈ R:
1. ⟨x, y⟩ = ⟨y, x⟩
2. a⟨x, y⟩ + b⟨z, y⟩ = ⟨ax + bz, y⟩
3. ⟨x, x⟩ = ‖x‖^2
Definition: Two vectors x, y ∈ R^n are called orthogonal if ⟨x, y⟩ = 0. A set of vectors x1, x2, …, xm ∈ R^n is called orthonormal if
⟨xi, xj⟩ = 0 if i ≠ j, and 1 if i = j
Linear independence
Definition: A set of vectors {x1, x2, …, xm} ⊆ R^n is called linearly independent if for a1, a2, …, am ∈ R
a1·x1 + a2·x2 + … + am·xm = 0 ⇒ a1 = a2 = … = am = 0
Otherwise they are called linearly dependent. They are linearly dependent if and only if at least one is a linear combination of the rest, e.g. if a1 ≠ 0,
a1·x1 + a2·x2 + … + am·xm = 0 ⇒ x1 = −(a2/a1)x2 − … − (am/a1)xm
Fact 2.3: There exists a set with n linearly independent vectors in R^n, but any set with more than n vectors is linearly dependent.
Exercise: What is a set of n linearly independent vectors of R^n?
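Linear (in)dependence of a set of column vectors can be checked numerically via the rank. A NumPy sketch with illustrative vectors (Fact 2.3 says at most n vectors in R^n can be independent):

```python
import numpy as np

# columns of V: the third is the sum of the first two, so they are dependent
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
dependent_rank = int(np.linalg.matrix_rank(V))   # rank 2 < 3 columns -> dependent

# the standard basis e1, ..., en is a set of n independent vectors in R^n
basis_rank = int(np.linalg.matrix_rank(np.eye(3)))
```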
Subspaces
Definition: A set of vectors S ⊆ R^n is called a subspace of R^n if for all x, y ∈ S and a, b ∈ R we have ax + by ∈ S.
Note that the set S is generally an infinite set.
Examples of subspaces of R^n: S = R^n, S = {0}, {x ∈ R^n : x1 = 2·x2}, {x ∈ R^n : x1 = 0}
Not subspaces: {x ∈ R^n : x1 = 1}, {x ∈ R^n : x1 = 0 or x2 = 0}
Exercise: Draw these sets for n = 2. Prove that they are/are not subspaces.
Basis of a subspace
Definition: A set of vectors {x1, x2, …, xm} ⊆ R^n is called a basis for a subspace S ⊆ R^n if
1. {x1, x2, …, xm} are linearly independent
2. S = span{x1, x2, …, xm}
With A = [a1 a2 … am] ∈ R^(n×m) (columns ai):
Fact 2.4: range(A) is a subspace of R^n.
Definition: The rank of a matrix A ∈ R^(n×m) is the dimension of range(A).
Fact 2.5: range(A) = span{a1, a2, …, am}, so rank(A) = number of linearly independent columns a1, …, am of A.
Exercise: Prove Fact 2.5.
An inverse of A ∈ R^(n×n) is a matrix A⁻¹ such that
A⁻¹A = AA⁻¹ = I
Fact 2.9: If an inverse of A exists then it is unique.
Definition: A matrix is called singular if it does not have an inverse. Otherwise it is called nonsingular or invertible.
Exercise: Prove Fact 2.9.
A⁻¹ = adj(A) / det(A)
Fact 2.11: A is invertible if and only if the system of linear equations Ax = y has a unique solution x ∈ R^n for all y ∈ R^n.
Fact 2.12: A is invertible if and only if null(A) = {0}.
m = n: unique solution if and only if A is invertible (Fact 2.11). If A is singular the system has either no solution or an infinite number of solutions.
n > m: more equations than unknowns, generally no solution.
Fact 2.14: If A has rank m then x = (AᵀA)⁻¹Aᵀy is the unique minimizer of ‖Ax − y‖.
n < m: more unknowns than equations, generally infinitely many solutions.
Fact 2.15: If A has rank n then the system has infinitely many solutions, and x = Aᵀ(AAᵀ)⁻¹y is the one of minimum norm.
Orthogonal matrices
Definition: A matrix A ∈ R^(n×n) is called orthogonal if AᵀA = AAᵀ = I.
Fact 2.16: A is orthogonal if and only if its columns are orthonormal. The product of orthogonal matrices is orthogonal. If A is orthogonal then ‖Ax‖ = ‖x‖.
Eigenvalues and eigenvectors are in general complex even if A is a real matrix. An n×n matrix has n eigenvalues (some may be repeated). They are the solutions of the characteristic polynomial
det(λI − A) = λ^n + a1·λ^(n−1) + a2·λ^(n−2) + … + an = 0
The n eigenvalues of A are called the spectrum of A.
Exercise: Show that if w is an eigenvector then so is a·w for any a ∈ C, a ≠ 0.
Diagonalizable matrices
A·wi = λi·wi  ⇒  A[w1 w2 … wn] = [w1 w2 … wn]·diag(λ1, λ2, …, λn)
i.e. AW = WΛ, with W ∈ C^(n×n) the matrix of eigenvectors and Λ = diag(λ1, …, λn) ∈ C^(n×n)
If W is invertible,
A = WΛW⁻¹
Fact 2.18: If the eigenvalues of A are distinct (λi ≠ λj if i ≠ j) then its eigenvectors are linearly independent.
For symmetric A the eigenvectors can be chosen orthonormal: A = UΛUᵀ
(Compare the singular value decomposition A = UΣVᵀ)
State vector x = [x1; …; xn], input vector u, output vector y
Exercise: Which of the examples in Notes 1 can be described by real vectors? What are these vectors?
Differential equations giving the state evolution as a function of the states, inputs and possibly time, i.e. functions
fi(·): R^n × R^m × R+ → R, i = 1, …, n
Algebraic equations giving the outputs as a function of the states, inputs and possibly time, i.e. functions
hi(·): R^n × R^m × R+ → R, i = 1, …, p
In vector form
Usually write more compactly by defining
f(·): R^n × R^m × R+ → R^n, h(·): R^n × R^m × R+ → R^p
f(x,u,t) = [f1(x,u,t); …; fn(x,u,t)], h(x,u,t) = [h1(x,u,t); …; hp(x,u,t)]
Then state, input and output vectors are linked by
dx(t)/dt = f(x(t), u(t), t)
y(t) = h(x(t), u(t), t)
Exercise: What are the functions f for the pendulum, RLC and op-amp examples of Notes 1? What are the dimensions of these systems?
Definition: A system in state space form is called linear if the functions f and h are linear functions of x and u.
Exercise: What are the general equations for a linear time invariant system?
These can be converted to state space form by defining the lower order derivatives (all except the highest) as states.
Exercise: Convert the following differential equation of order r into state space form:
d^r y(t)/dt^r + g( y(t), dy(t)/dt, …, d^(r−1)y(t)/dt^(r−1) ) = 0
What is the dimension of the resulting system? Is it autonomous? Under what conditions is it linear?
Introduce time as an extra state with ṫ = 1. The result is a time invariant system of dimension n + 1.
Exercise: Convert the following time varying system
ẋ(t) = f(x(t), u(t), t), x ∈ R^n
y(t) = h(x(t), u(t), t)
of dimension n into a time invariant system of dimension n + 1.
Exercise: Repeat for the linear time varying system
ẋ(t) = A(t)x(t) + B(t)u(t), x ∈ R^n
y(t) = C(t)x(t) + D(t)u(t)
Is the resulting time invariant system linear?
Coordinate transformation
What happens if we perform a change of coordinates for the state vector?
Restrict attention to time invariant linear systems and linear changes of coordinates x̃(t) = Tx(t), T ∈ R^(n×n), det(T) ≠ 0.
In the new coordinates we get another linear system.
Nonautonomous systems are essentially the same; the formal mathematics is more complicated.
What is the solution of the system? We would like to find functions x(·): [t0, t1] → R^n.
Where do we start? Say x(t0) = x0 ∈ R^n at time t0 ∈ R.
How long do we go? Say until some t1 ≥ t0.
Examples
Illustrate potential problems on 1-dimensional systems.
No solutions: Consider the system ẋ(t) = −sign(x(t)) starting at x0 = 0. The system has no solution for any T.
Exercise: Compute the solutions for x0 = 1 and x0 = −1. Are they defined for all T?
Many solutions: Consider the system ẋ(t) = 3·x(t)^(2/3), x0 = 0. For any a ≥ 0 the following is a solution for the system:
x(t) = 0 for t ≤ a, x(t) = (t − a)^3 for t > a
Examples
No solutions for T large (finite escape time): Consider the system
ẋ(t) = 1 + x(t)^2, x0 = 0.
The solution is x(t) = tan(t).
Exercise: Prove this. What happens at t = π/2?
So many things can go wrong! Fortunately many important systems are well-behaved.
Definition: A function f: R^n → R^n is called Lipschitz if there exists λ > 0 such that for all x, x̂ ∈ R^n
‖f(x) − f(x̂)‖ ≤ λ‖x − x̂‖
Lipschitz functions
λ is called the Lipschitz constant of f.
Lipschitz functions are continuous but not necessarily differentiable.
All differentiable functions with bounded derivatives are Lipschitz.
Linear functions are Lipschitz.
Exercise: Show that the f(x) in the three pathological examples above are not Lipschitz.
Exercise: Show that any λ' ≥ λ is also a Lipschitz constant.
Exercise: Show that f(x) = |x| (x a real number) is Lipschitz. What is the Lipschitz constant? Is the function differentiable?
Exercise: Show that f(x) = Ax is Lipschitz. What is a Lipschitz constant? Is the function differentiable?
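For f(x) = Ax, the induced 2-norm of A (its largest singular value) is one valid Lipschitz constant. This can be checked numerically on sample points; the matrix below is an illustrative choice, not from the notes.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
lam = float(np.linalg.norm(A, 2))   # largest singular value: a Lipschitz constant for x -> Ax

# check ||A x - A xh|| <= lam * ||x - xh|| on a few sample pairs
pairs = [((1.0, 0.0), (0.0, 0.0)),
         ((1.0, 2.0), (-1.0, 3.0)),
         ((0.5, -0.2), (0.1, 0.1))]
ok = True
for x, xh in pairs:
    x, xh = np.array(x), np.array(xh)
    ok = ok and np.linalg.norm(A @ x - A @ xh) <= lam * np.linalg.norm(x - xh) + 1e-12
```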
Continuity
Even if a unique solution exists, this does not mean we can find it.
Sometimes we can: see the pathological examples above and linear systems (Notes 3).
Usually we have to resort to simulation on a computer: construct an approximate numerical solution.
It helps if solutions that start close remain close.
Theorem 2.3: If f is Lipschitz with constant λ, then the solutions starting at x0, x̂0 ∈ R^n satisfy, for all t ≥ 0,
‖x(t) − x̂(t)‖ ≤ e^(λt)‖x0 − x̂0‖
Nonautonomous systems
Formally, we would expect that given
Signal und Systemtheorie II, D-ITET, Semester 4. Notes 3: Continuous LTI systems in time domain
State space
State vector x = [x1; x2; …; xn] ∈ R^n, input vector u ∈ R^m, output vector y ∈ R^p
System matrices: A ∈ R^(n×n), B ∈ R^(n×m), C ∈ R^(p×n), D ∈ R^(p×m)
(Block diagram: u(t) enters through B and is summed with Ax(t) to form ẋ(t), which is integrated to x(t); the output is y(t) = Cx(t) + Du(t))
ẋ1(t) = a11·x1(t) + … + a1n·xn(t) + b11·u1(t) + … + b1m·um(t)
ẋ2(t) = a21·x1(t) + … + a2n·xn(t) + b21·u1(t) + … + b2m·um(t)
⋮
ẋn(t) = an1·x1(t) + … + ann·xn(t) + bn1·u1(t) + … + bnm·um(t)
y1(t) = c11·x1(t) + … + c1n·xn(t) + d11·u1(t) + … + d1m·um(t)
y2(t) = c21·x1(t) + … + c2n·xn(t) + d21·u1(t) + … + d2m·um(t)
⋮
yp(t) = cp1·x1(t) + … + cpn·xn(t) + dp1·u1(t) + … + dpm·um(t)
Examples
RLC circuit:
C dvC(t)/dt = iC(t) = iL(t)
L diL(t)/dt = vL(t) = v1(t) − vR(t) − vC(t)
In matrix form:
ẋ(t) = [ 0 1/C ; −1/L −R/L ] x(t) + [ 0 ; 1/L ] u(t)
Amplifier circuit:
dvC1(t)/dt = −vC1(t)/(R1C1) + v1(t)/(R1C1)
dvC0(t)/dt = −vC0(t)/(R0C0) + vC1(t)/(R1C0) − v1(t)/(R1C0)
System solution
Since the system is time invariant, assume we are given
the initial condition x0 ∈ R^n and the input values u(·): [0, T] → R^m,
and compute the system solution x(·): [0, T] → R^n.
State solution
The system solution is
x(t) = Φ(t)x0 + ∫_0^t Φ(t − τ)Bu(τ) dτ
where Φ(t) = e^(At) = I + At + A^2·t^2/2! + … is the state transition matrix and the integral is computed element by element
(cf. the scalar Taylor series expansion e^(at) = 1 + at + a^2·t^2/2! + … for a ∈ R)
Output solution
Simply combine the state solution with the output equation:
y(t) = CΦ(t)x0 + C ∫_0^t Φ(t − τ)Bu(τ) dτ + Du(t)
Properties of the state transition matrix:
1. Φ(0) = I
2. Φ(t + s) = Φ(t)Φ(s)
3. Φ(t)Φ(−t) = Φ(−t)Φ(t) = I
4. (d/dt)Φ(t) = AΦ(t)
Exercise: By invoking the existence discussion from Notes 2, show property 4 (harder).
To show that the state solution satisfies ẋ(t) = Ax(t) + Bu(t), we use the Leibniz formula for differentiating integrals:
d/dt ∫_(g(t))^(f(t)) l(t,τ) dτ = l(t, f(t))·(df/dt)(t) − l(t, g(t))·(dg/dt)(t) + ∫_(g(t))^(f(t)) (∂l/∂t)(t,τ) dτ
Example: RC circuit
Input: u(t) = vs(t) (source voltage); state: x(t) = vC(t); initial condition: x0 = vC(0)
(Figure: RC circuit with source vS(t), resistor R, and capacitor C with current iC(t) and voltage vC(t))
State solution:
x(t) = e^(−t/RC) x0 + ∫_0^t e^(−(t−τ)/RC) (1/RC) u(τ) dτ
Exercise: Derive the state space equations. What are the matrices A and B?
Exercise: Derive the step response.
Total transition: a linear function of the initial state plus a linear function of the input (a convolution integral).
Total response
Many ways of computing e^(At):
Summing the infinite series (in some rare cases!)
Using eigenvalues and eigenvectors (this set of notes)
Using the Laplace transform (later)
Numerically (later)
Using eigenvalues, at least two methods:
Using the Cayley-Hamilton Theorem (Theorem 2.1)
Using the eigenvalue decomposition (used here)
Φ(t) = e^(At) = I + At + … + A^k·t^k/k! + …
If x(0) = wi (an eigenvector), then ẋ(0) = A·wi = λi·wi,
i.e. if we start on an eigenvector we stay on the eigenvector; x(t) increases/decreases depending on the sign of λi.
(Figure: phase plane for n = 2, λ1 < 0, λ2 > 0, with eigenvector directions w1 and w2)
AW = WΛ ⇒ A = WΛW⁻¹ (Fact 2.18; W is the matrix of eigenvectors)
Therefore
Φ(t) = e^(At) = W e^(Λt) W⁻¹
where e^(Λt) = diag(e^(λ1·t), …, e^(λn·t))
This uses A^k = WΛ^kW⁻¹. Exercise: Prove this.
d/dt [ vC(t) ; iL(t) ] = [ 0 1/C ; −1/L −R/L ] [ vC(t) ; iL(t) ] + [ 0 ; 1/L ] vs(t)
(Figure: series RLC circuit with source v1(t), resistor R with voltage vR(t), inductor L with voltage vL(t) and current iL(t), capacitor C with voltage vC(t))
Set R = 3, C = 0.5, L = 1:
A = [ 0 2 ; −1 −3 ]
Eigenvalues: λ1 = −1, λ2 = −2
Eigenvectors: w1 = [ 2 ; −1 ], w2 = [ 1 ; −1 ]
Φ(t) = e^(At) = W e^(Λt) W⁻¹ = [ 2 1 ; −1 −1 ] · [ e^(−t) 0 ; 0 e^(−2t) ] · [ 1 1 ; −1 −2 ]
= [ 2e^(−t) − e^(−2t)   2e^(−t) − 2e^(−2t) ; −e^(−t) + e^(−2t)   −e^(−t) + 2e^(−2t) ]
Using Matlab: >> A=[0 2;-1 -3];
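The closed-form Φ(t) for this example can be cross-checked in Python by rebuilding e^(At) from the eigen-decomposition (a NumPy sketch):

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [-1.0, -3.0]])
lam, W = np.linalg.eig(A)            # eigenvalues -1, -2; eigenvectors in the columns of W
t = 0.7
# Phi(t) = W diag(exp(lam_i t)) W^{-1}
Phi = W @ np.diag(np.exp(lam * t)) @ np.linalg.inv(W)

# closed form from the slide
e1, e2 = np.exp(-t), np.exp(-2.0 * t)
Phi_closed = np.array([[2 * e1 - e2,  2 * e1 - 2 * e2],
                       [-e1 + e2,    -e1 + 2 * e2]])
err = float(np.max(np.abs(Phi - Phi_closed)))
```

The two agree to numerical precision, confirming the hand computation.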
Because w1 and w2 are linearly independent, they form a basis of R^2 (see the basis definition in Notes 2). Therefore every initial condition can be written as
x(0) = a1·w1 + a2·w2
Therefore the zero input transition (ZIT) tends to 0 for all initial conditions.
For a step input of size V the state is
[ −2e^(−t) + e^(−2t) + 1 ; e^(−t) − e^(−2t) ] V → [ V ; 0 ] as t → ∞
Stability
Definition: The system is called stable if for every ε > 0 there exists δ > 0 such that ‖x(0)‖ < δ implies ‖x(t)‖ < ε for all t ≥ 0,
i.e. if the state starts small it stays small (p. 2.4),
or: you can keep the state as close as you want to 0 by starting close enough.
Asymptotic stability
Definition: The system is called asymptotically stable if it is stable and in addition
x(t) → 0 as t → ∞,
i.e. not only do we stay close to 0 but we also converge to it.
Note that the definitions do not require diagonalizable matrices. In fact, we will see that they also work for nonlinear systems (Notes 7).
Diagonalizable matrices
Theorem 3.1: A system with a diagonalizable A matrix is:
Stable if and only if Re[λi] ≤ 0 for all i
Asymptotically stable if and only if Re[λi] < 0 for all i
Unstable if and only if there exists i with Re[λi] > 0
(Phase portraits: stable node, unstable node, center)
A2 = [ −1 1 ; 0 −1 ]  ⇒  e^(A2·t) = [ e^(−t) t·e^(−t) ; 0 e^(−t) ]
Exercise: Prove the formulas for the transition matrices.
Exercise: Repeat the computations for the matrices A1 and A2.
Exercise: What are the eigenvalues of A1 and A2? What are the eigenvectors? What goes wrong with their diagonalization?
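Since A2 is not diagonalizable, e^(A2·t) cannot be built from an eigenvector basis, but the closed form above can still be verified against a truncated Taylor series (NumPy sketch):

```python
import numpy as np

A2 = np.array([[-1.0, 1.0],
               [0.0, -1.0]])
t = 0.5

# truncated Taylor series for e^{A2 t}; 30 terms is plenty for this small ||A2 t||
E = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (A2 * t) / k
    E = E + term

# closed form with the te^{-t} off-diagonal term from the repeated eigenvalue
closed = np.array([[np.exp(-t), t * np.exp(-t)],
                   [0.0,        np.exp(-t)]])
err = float(np.max(np.abs(E - closed)))
```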
For a repeated eigenvalue λ = σ + jω of multiplicity r, the matrix exponential contains terms of the form:
if σ = 0, ω = 0: 1, t, …, t^(r−1)
if σ ≠ 0, ω = 0: e^(σt), t·e^(σt), …, t^(r−1)·e^(σt)
if σ = 0, ω ≠ 0: sin(ωt), cos(ωt), t·sin(ωt), …, t^(r−1)·cos(ωt)
in general: e^(σt)·sin(ωt), e^(σt)·cos(ωt), …, t^(r−1)·e^(σt)·cos(ωt)
This can be shown using generalized eigenvectors and the Jordan canonical form.
If σ < 0: e^(σt), t^k·e^(σt)·cos(ωt), t^k·e^(σt)·sin(ωt) → 0 as t → ∞. Hence the ZIT tends to zero.
If σ > 0: these terms tend to infinity as t → ∞. Hence the ZIT tends to infinity (for some initial states).
If σ = 0: 1, cos(ωt), sin(ωt) remain bounded, but t^k, t^k·cos(ωt), t^k·sin(ωt) → ∞ for k ≥ 1.
The ZIT may remain bounded or tend to infinity: we cannot tell just by looking at the eigenvalues.
The reason is that if Re[λi] ≤ 0 for all i but Re[λi] = 0 for some i, then stability is not determined by the eigenvalues alone:
the ZIT may remain bounded for all initial conditions, or
the ZIT may go to infinity for some initial conditions.
If the matrix is nondiagonalizable but no eigenvalue with Re[λi] = 0 is repeated, then Theorem 3.1 still applies.
Approximate the impulse by a pulse:
δΔ(t) = 0 if t < 0, 1/Δ if 0 ≤ t < Δ, 0 if t ≥ Δ
x(t) = e^(at) ∫_0^t e^(−aτ) b δ(τ) dτ = e^(at) b
Example: RC circuit
For the RC circuit: A = −1/RC, B = 1/RC
Φ(t) = e^(−t/RC)
Impulse transition:
h(t) = Φ(t)b = (1/RC) e^(−t/RC)
More generally, the impulse transition matrix is
H(t) = Φ(t)B ∈ R^(n×m),
with hij(t) equal to xi(t) when x(0) = 0, uj(t) = δ(t), uk(t) = 0 for k ≠ j.
Again, the ZST is the convolution of the input with the impulse transition:
x(t) = (H * u)(t)
Combining the impulse transition formula and the output map, the output impulse response is given by
K(t) = CΦ(t)B + Dδ(t) ∈ R^(p×m), with y(t) = (K * u)(t)
Theorem 3.3: Assume that Re[λi] < 0 for all i. Then there exists γ ≥ 0 such that the ZST, x(t), satisfies:
if ‖u(t)‖ ≤ M for all t ≥ 0, then ‖x(t)‖ ≤ γM for all t ≥ 0.
If in addition u(t) → 0 as t → ∞, then x(t) → 0 as t → ∞.
So small inputs lead to small states. If in addition the input goes to zero, so does the state.
Asymptotic stability is needed; Re[λi] ≤ 0 is not enough.
Complete response = ZIR + ZSR
If Re[λi] < 0 for all i:
ZIT/ZIR and ZST/ZSR are small if the input and x(0) are small
Bounded input, bounded state (BIBS) property
Bounded input, bounded output (BIBO) property
ZIT/ZIR and ZST/ZSR tend to 0 if the input tends to 0
Hence the output tends to 0 if the input tends to 0
Example: Wien bridge oscillator
(Figure: amplifier with gain k set by resistors R and (k−1)R, an RC network with R1, C1, R2, C2, and output v0(t))
iin = 0 ⇒ i1(t) = i0(t), from which v0(t) = k·vC2(t)
State vector x(t) = [ vC1(t) ; vC2(t) ] ∈ R^2; input variable: none (autonomous)
KCL and KVL give two coupled first order differential equations, i.e.
d/dt [ vC1(t) ; vC2(t) ] = A [ vC1(t) ; vC2(t) ]
For simplicity set R1 = R2 = R, C1 = C2 = C.
Autonomous system (zero input). Stability is determined by the sign of the real parts of the eigenvalues.
The eigenvalues are the roots of the characteristic polynomial
det(λI − A) = 0  ⇔  λ^2 + ((3 − k)/(RC))·λ + 1/(RC)^2 = 0
For a second order polynomial the roots can be written down explicitly. This is NOT true for higher order polynomials; there we need the Hurwitz test.
k < 3 ⇒ Re[λi] < 0 for all i: the response goes to 0
k = 3 ⇒ λi = ±j/(RC): the response oscillates with frequency 1/RC
k > 3 ⇒ Re[λi] > 0: the response goes to infinity (generally)
Eigenvalues as a function of k:
0 < k ≤ 1: real and negative
1 < k < 3: complex, negative real part
k = 3: imaginary
3 < k < 5: complex, positive real part
k ≥ 5: real and positive
Exercise: Simulate the Wien oscillator for k = 0.5, 2, 3, 4, 6 and plot x1 vs. x2.
Exercise: Plot the locus of the eigenvalues as k goes from 0 to infinity (in Matlab).
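The eigenvalue table can be checked directly from the characteristic polynomial λ^2 + ((3 − k)/RC)·λ + 1/(RC)^2. A NumPy sketch with RC = 1 (an illustrative normalization):

```python
import numpy as np

def wien_eigs(k, rc=1.0):
    # roots of lambda^2 + ((3 - k)/rc) lambda + 1/rc^2 = 0
    return np.roots([1.0, (3.0 - k) / rc, 1.0 / rc**2])

real_neg = wien_eigs(0.5)    # 0 < k <= 1: real and negative
osc_dec = wien_eigs(2.0)     # 1 < k < 3: complex, negative real part
marginal = wien_eigs(3.0)    # k = 3: purely imaginary
osc_grow = wien_eigs(4.0)    # 3 < k < 5: complex, positive real part
```

The discriminant ((3 − k)^2 − 4)/(RC)^2 changes sign exactly at k = 1 and k = 5, which is where the table switches between real and complex eigenvalues.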
System energy
For the Wien oscillator
E(t) = (1/2)C1·vC1(t)^2 + (1/2)C2·vC2(t)^2 = (1/2) x(t)ᵀ [ C1 0 ; 0 C2 ] x(t)
A quadratic function of the state:
E(t) = (1/2) x(t)ᵀQx(t)
The matrix Q is:
Symmetric (Q = Qᵀ), in this case diagonal
Positive definite (Q > 0), i.e. xᵀQx > 0 for all x ≠ 0
Any quadratic with a Q that satisfies these properties serves as an energy-like function.
Exercise (coordinate change): Find Q̃ such that E(t) = (1/2) x̃(t)ᵀQ̃x̃(t) for x̃(t) = Tx(t) with T invertible.
System power
Instantaneous change in energy:
P(t) = dE(t)/dt = (1/2)ẋ(t)ᵀQx(t) + (1/2)x(t)ᵀQẋ(t)
= (1/2)(x(t)ᵀAᵀ + u(t)ᵀBᵀ)Qx(t) + (1/2)x(t)ᵀQ(Ax(t) + Bu(t))
= (1/2)x(t)ᵀ(AᵀQ + QA)x(t) + (1/2)(u(t)ᵀBᵀQx(t) + x(t)ᵀQBu(t))
For an autonomous system (u = 0):
P(t) = (1/2)x(t)ᵀ(AᵀQ + QA)x(t) = −(1/2)x(t)ᵀRx(t), with R = −(AᵀQ + QA)
Power is also a quadratic function of the state. The matrix R is symmetric.
Exercise: Show R is symmetric.
If R is positive definite (R > 0, i.e. xᵀRx > 0 for all x ≠ 0), then the energy decreases all the time.
Natural to expect that in this case the system is stable.
Exercise: Compute the power for the Wien oscillator. For which values of k is R positive definite?
Theorem 4.1: The eigenvalues of A have negative real part if and only if for all R = Rᵀ > 0 there exists a unique Q = Qᵀ > 0 such that AᵀQ + QA = −R.
Lyapunov equation
The Lyapunov equation AᵀQ + QA = −R is linear in the elements of Q: it can be rewritten as Ãq = r, where q collects the elements of Q and r the elements of R.
Because Q and R are symmetric, these are n(n+1)/2 equations in n(n+1)/2 unknowns. By Fact 2.11 the equation has:
A unique solution if Ã is nonsingular
Multiple solutions or no solutions if Ã is singular
Lyapunov functions
Linear version of the Lyapunov Theorem (Notes 7):
For any R = Rᵀ > 0, solving the Lyapunov equation for the unknown Q allows us to determine the stability of ẋ(t) = Ax(t):
Unique positive definite solution ⇒ asymptotically stable
No solution, multiple solutions, or a non-positive-definite solution ⇒ not asymptotically stable
The resulting energy-like function V: R^n → R,
V(x) = (1/2)xᵀQx,
is known as a Lyapunov function.
Exercise: Why is the Lyapunov equation linear? Is the Lyapunov function linear?
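Because the Lyapunov equation is linear in Q, it can be solved with plain linear algebra by vectorizing: (I ⊗ Aᵀ + Aᵀ ⊗ I) vec(Q) = −vec(R). A NumPy sketch, reusing the stable RLC matrix from Notes 3 as the example system:

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [-1.0, -3.0]])   # eigenvalues -1, -2: asymptotically stable
R = np.eye(2)
n = A.shape[0]

# A^T Q + Q A = -R, vectorized column-wise:
# vec(A^T Q) = (I kron A^T) vec(Q),  vec(Q A) = (A^T kron I) vec(Q)
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
q = np.linalg.solve(M, -R.flatten(order="F"))
Q = q.reshape((n, n), order="F")

residual = float(np.max(np.abs(A.T @ Q + Q @ A + R)))
# positive definiteness of the (symmetrized) solution confirms Theorem 4.1
min_eig_Q = float(np.min(np.linalg.eigvalsh((Q + Q.T) / 2)))
```

Since A is stable, the solve returns a unique symmetric Q with strictly positive eigenvalues, as Theorem 4.1 predicts.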
Input-State-Output relations
Investigate the effect of
the input on the state (controllability)
the state on the output (observability)
Controllability
Consider a linear system ẋ(t) = Ax(t) + Bu(t).
Definition: The system is called controllable over [0, t] if for any x0, x1 ∈ R^n we can find an input u(·): [0, t] → R^m such that x(t) = x1 starting at x(0) = x0.
Observations:
The input can drive the state from x0 to x1 in time t, but not necessarily keep it there afterwards
Input-to-state relation; outputs play no role and can be ignored for the time being
The definition generalizes to other systems (nonlinear, time varying, etc.) but more care is needed
Observations
Fact 4.1: The system is controllable over [0, t] if and only if for all x1 ∈ R^n there exists an input u(·): [0, t] → R^m such that x(t) = x1 starting at x(0) = 0.
Proof:
Exercise: Prove the "only if" part.
If: To drive the system from x(0) = x0 to x(t) = x1, use the input that drives it from x̃(0) = 0 to x̃(t) = x1 − e^(At)x0.
Fact 4.2: The system is controllable over [0, t] if and only if for all x0 ∈ R^n there exists an input u(·): [0, t] → R^m such that x(t) = 0 starting at x(0) = x0.
Proof:
Exercise: Prove the "only if" part.
If: To drive the system from x(0) = x0 to x(t) = x1, use the input that drives it from x̃(0) = x0 − e^(−At)x1 to x̃(t) = 0.
Controllability gramian
Given time t, define the controllability gramian
WC(t) = ∫_0^t e^(Aτ) B Bᵀ e^(Aᵀτ) dτ ∈ R^(n×n)
Fact 4.3: The system is controllable over [0, t] if and only if WC(t) is invertible.
Proof: If. Drive the system from x0 ∈ R^n to x1 ∈ R^n in time t. By Fact 4.1, assume x0 = 0 and select
u(τ) = Bᵀ e^(Aᵀ(t−τ)) WC(t)⁻¹ x1, τ ∈ [0, t]
Exercise: Complete the "if" part.
Controllability gramian
Only if: If WC(t) is not invertible then (Facts 2.11, 2.17) there exists z ∈ R^n with z ≠ 0 such that
WC(t)z = 0 ⇒ zᵀWC(t)z = 0 ⇒ ∫_0^t zᵀ e^(Aτ) B Bᵀ e^(Aᵀτ) z dτ = 0
⇒ ∫_0^t ‖zᵀ e^(Aτ) B‖^2 dτ = 0 ⇒ zᵀ e^(Aτ) B = 0 for all τ ∈ [0, t]
Therefore
zᵀ x(t) = ∫_0^t zᵀ e^(A(t−τ)) B u(τ) dτ = 0
and only x(t) orthogonal to z can be reached from x(0) = 0. By Fact 4.1, the system is not controllable.
Controllability test
Define the controllability matrix
P = [ B AB A^2B … A^(n−1)B ] ∈ R^(n×nm)
Theorem 4.2: The system is controllable over [0, t] if and only if the rank of P is n.
The rank of P is at most n since it has n rows (Fact 2.5).
Proof: We know that the system is controllable over [0, t] if and only if WC(t) is invertible. WC(t) is invertible if and only if for z ∈ R^n,
WC(t)z = 0 ⇒ z = 0
(else WC(t) has 0 as an eigenvalue, Fact 2.17). If we can show that
WC(t)z = 0 ⇔ Pᵀz = 0,
this would imply that Pᵀ has rank n if and only if WC(t) is invertible.
By the Taylor series, the last part holds if and only if Bᵀe^(Aᵀτ)z and all its derivatives at τ = 0 are equal to zero, in other words
Bᵀe^(Aᵀτ)z at τ = 0: Bᵀz = 0; (d/dτ)Bᵀe^(Aᵀτ)z at τ = 0: BᵀAᵀz = 0,
and so on, until
(d^(n−1)/dτ^(n−1))Bᵀe^(Aᵀτ)z at τ = 0: Bᵀ(A^(n−1))ᵀz = 0.
Higher derivatives (involving A^n, A^(n+1), etc.) are then automatically zero by the Cayley-Hamilton Theorem 2.1. Summarizing,
WC(t)z = 0 ⇔ Pᵀz = 0,
and the system is controllable if and only if P has rank n.
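Theorem 4.2 turns controllability into a rank computation. A NumPy sketch for the RLC matrices used earlier in these notes:

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [-1.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# controllability matrix P = [B, AB, ..., A^{n-1}B]
blocks = [B]
for _ in range(n - 1):
    blocks.append(A @ blocks[-1])
P = np.hstack(blocks)

rank_P = int(np.linalg.matrix_rank(P))
controllable = (rank_P == n)
```

Here P = [[0, 2], [1, -3]] has full rank, so the RLC circuit is controllable from its voltage source.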
Example:
vin(t)/R1 = vC(t)/R0 + C dvC(t)/dt + iL(t)  ⇒  dvC(t)/dt = −(1/C)iL(t) − (1/(R0C))vC(t) + (1/(R1C))vin(t)
L diL(t)/dt = vC(t)  ⇒  diL(t)/dt = vC(t)/L
(Figure: circuit with input voltage vin(t), resistors R0 and R1, capacitor C with current iC(t) and voltage vC(t), inductor L with current iL(t), and output v0(t))
Observations
Easy test for controllability: requires matrix multiplications and a rank test, instead of integration of the matrix exponential.
The proof of Theorem 4.2 implies the following:
Corollary 4.1: The set of x1 ∈ R^n for which there exists u(·): [0, t] → R^m such that x(t) = x1 starting at x(0) = 0 is equal to Range(P).
Fact 4.4: WC(t) is invertible for some t > 0 if and only if it is invertible for all t > 0.
Roughly speaking, if the system is controllable we can get from any state to any other state as fast as we like. The faster we go, the more energy and the bigger the inputs we will need.
The energy of an input u(·): [0, t] → R^m is
∫_0^t ‖u(τ)‖^2 dτ = ∫_0^t u(τ)ᵀu(τ) dτ
Theorem 4.3: Assume that the system is controllable. Given x1 ∈ R^n and t > 0, the input that drives the system from x(0) = 0 to x(t) = x1 with minimum energy is given by
um(τ) = Bᵀ e^(Aᵀ(t−τ)) WC(t)⁻¹ x1
Proof sketch: Let u(·) be any other input with x(t) = x1. Since both inputs reach x1,
∫_0^t e^(A(t−τ)) B (u(τ) − um(τ)) dτ = 0,
and therefore
∫_0^t um(τ)ᵀ(u(τ) − um(τ)) dτ = x1ᵀ WC(t)⁻¹ ∫_0^t e^(A(t−τ)) B (u(τ) − um(τ)) dτ = 0.
Therefore
∫_0^t u(τ)ᵀu(τ) dτ = ∫_0^t (u(τ) − um(τ))ᵀ(u(τ) − um(τ)) dτ + ∫_0^t um(τ)ᵀum(τ) dτ ≥ ∫_0^t um(τ)ᵀum(τ) dτ = x1ᵀ WC(t)⁻¹ x1
Observability
dx(t)/dt = Ax(t) + Bu(t), x ∈ R^n, u ∈ R^m, y ∈ R^p
y(t) = Cx(t) + Du(t)
Definition: The system is called observable over [0, t] if, given u(·): [0, t] → R^m and y(·): [0, t] → R^p, we can uniquely determine x(·): [0, t] → R^n.
Again, the time of observation, t, will turn out to play no role. Inputs play little role, they are just carried along. Generalizations (e.g. to nonlinear systems) are possible, but care is needed.
Therefore, to infer x(·): [0, t] → R^n given u(·): [0, t] → R^m, it suffices to infer the initial condition x(0) = x0.
Assume that two initial conditions, x0 and x̂0, under the same input u(·): [0, t] → R^m lead to the same output:
Ce^(Aτ)x0 + C∫_0^τ e^(A(τ−s))Bu(s) ds + Du(τ) = Ce^(Aτ)x̂0 + C∫_0^τ e^(A(τ−s))Bu(s) ds + Du(τ)
Then
Ce^(Aτ)(x0 − x̂0) = 0 for all τ ∈ [0, t].
A state x ∈ R^n such that Ce^(Aτ)x = 0 for all τ ∈ [0, t] is called unobservable.
Unobservable states
x ∈ R^n is unobservable if and only if Ce^(Aτ)x = 0 for all τ ∈ [0, t].
Note that x = 0 satisfies Ce^(Aτ)x = 0, so x = 0 is an unobservable state.
The system is observable if and only if x = 0 is the only unobservable state.
Exercise: Show that the unobservable states form a subspace.
In this case the initial state x0 is uniquely determined by the zero input response.
Observability
Note that two initial conditions that under the same input lead to the same output differ by an unobservable state
By a Taylor series argument,
C e^{Aτ} x = 0 for all τ ∈ [0, t]
if and only if all its derivatives at τ = 0 are equal to 0:
C e^{Aτ} x |_{τ=0} = C x = 0, (d/dτ) C e^{Aτ} x |_{τ=0} = C A x = 0, …, (d^{n−1}/dτ^{n−1}) C e^{Aτ} x |_{τ=0} = C A^{n−1} x = 0
Exercise: Show this
By the Cayley-Hamilton theorem, if C A^k x = 0 for k = 0, 1, …, n−1 then C A^k x = 0 for all k > n−1
Exercise: Show this 4.29
Observability
Therefore a state x is unobservable if and only if Q x = 0, where
Q = [ C ; CA ; … ; CA^{n−1} ] ∈ R^{np×n}
If Q is full rank then the only unobservable state is x = 0. In this case the system is observable
4.31
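The rank test is easy to carry out numerically; a minimal sketch, using the RLC matrices that appear later in these notes (p.5.8), with y = v_C assumed as the measured output:

```python
import numpy as np

# RLC example from these notes (R=3, C=0.5, L=1; see p.5.8), measuring y = v_C
A = np.array([[0.0, 2.0], [-1.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Observability matrix Q = [C; CA; ...; CA^{n-1}]
n = A.shape[0]
Q = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(Q)
print(np.linalg.matrix_rank(Q))  # rank n = 2, so the system is observable
```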
Observability Gramian
One can also construct an observability Gramian
W_O(t) = ∫_0^t e^{A^T τ} C^T C e^{Aτ} dτ ∈ R^{n×n}
Exercise: Show that W_O(t) = W_O(t)^T ≥ 0
Fact 4.5: The system is observable over [0, t] if and only if W_O(t) is invertible. If the system is observable over some [0, t] then it is also observable over all [0, t]
Corollary 4.2: The set of unobservable states is equal to Null(Q)
Notes:
Checking the rank of the matrix Q is easier
Rank of Q is at most n (n columns)
Time of observation is immaterial
4.32
" ! 0 % $ u(0) ' ! ! 0 ' $ u(0) ! 0 '$ ! $ ' ! D & $ u (n!1) (0) #
np"nm
Y = Qx(0) + KU
Y !R , K !R
,U !R
nm
4.33
x(0) = Q !1 (Y ! KU )
If p>1, more equations than unknowns, least squares solution. If Q has rank n, pseudoinverse (Fact 2.14)
x(0) = Q Q
T
!1
Q T (Y ! KU )
4.34
But differentiating measurements is a bad idea: noise gets amplified
Example: Sinusoidal signal corrupted by small amplitude, high frequency noise
y(t) = a sin(ωt) + b sin(ω_n t), with b ≪ a, ω_n ≫ ω
Signal-to-noise ratio: SNR = a/b ≫ 1
ẏ(t) = aω cos(ωt) + bω_n cos(ω_n t), SNR = aω / (bω_n)
ÿ(t) = −aω² sin(ωt) − bω_n² sin(ω_n t), SNR = aω² / (bω_n²)
The derivative of the signal soon becomes useless
4.35
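The SNR formulas above can be checked directly; the amplitudes and frequencies below are assumed illustrative values, not from the notes:

```python
# Assumed illustrative values: slow signal at w = 1 rad/s, fast noise at w_n = 100 rad/s
a, b = 1.0, 0.01          # amplitudes, b << a
w, w_n = 1.0, 100.0       # frequencies, w_n >> w

snr0 = a / b                       # y(t):      a sin(wt)    vs  b sin(w_n t)
snr1 = (a * w) / (b * w_n)         # dy/dt:     a w cos(wt)  vs  b w_n cos(w_n t)
snr2 = (a * w**2) / (b * w_n**2)   # d2y/dt2:   amplitudes scale with the frequency squared

print(snr0, snr1, snr2)  # 100.0 1.0 0.01
```

Each derivative multiplies the noise amplitude by ω_n but the signal amplitude only by ω, so two differentiations here destroy a factor of 10^4 in SNR.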
Observers
Instead of differentiating, build a filter:
Progressively construct an estimate x̂(t) ∈ R^n of the state
Start with some (arbitrary) initial guess x̂(0) ∈ R^n
Measure y(t) and u(t)
Update the estimate according to
dx̂/dt (t) = A x̂(t) + B u(t) + L (y(t) − C x̂(t) − D u(t))
Observers
Theorem 4.4: If the system is observable, then L can be chosen such that the eigenvalues of (A − LC) have negative real parts. In this case the error system is asymptotically stable
Error goes to zero: e(t) → 0 as t → ∞
State estimate converges to the true state: x̂(t) → x(t) as t → ∞
Convergence can be made arbitrarily quick by the choice of L
In the presence of noise the transients may be bad
Kalman filter: optimal tradeoff for L if
State and measurement equations are corrupted by noise
System linear and noises Gaussian
4.37
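A minimal numerical sketch of the error dynamics de/dt = (A − LC)e, using the RLC matrices from p.5.8; the gain L below is a hand-picked illustrative choice, not a value from the notes:

```python
import numpy as np
from scipy.linalg import expm

# RLC example state matrices (p.5.8), measuring y = v_C; L is an assumed gain
A = np.array([[0.0, 2.0], [-1.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[7.0], [9.0]])

# Error dynamics de/dt = (A - LC) e: check the eigenvalues of A - LC
F = A - L @ C
print(np.linalg.eigvals(F))  # both real parts negative => error system stable

# Any initial estimation error decays: e(t) = e^{(A-LC)t} e(0)
e0 = np.array([1.0, -2.0])
print(np.linalg.norm(expm(F * 5.0) @ e0))  # essentially zero after 5 time units
```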
Kalman decomposition
There exists an invertible change of coordinates T ∈ R^{n×n} such that
x̃(t) = T x(t) = ( x̃1(t), x̃2(t), x̃3(t), x̃4(t) )
with x̃1 controllable & observable, x̃2 controllable & unobservable, x̃3 uncontrollable & observable, x̃4 uncontrollable & unobservable, and
Ã = T A T^{-1} = [ A11 0 A13 0 ; A21 A22 A23 A24 ; 0 0 A33 0 ; 0 0 A43 A44 ]
B̃ = T B = [ B1 ; B2 ; 0 ; 0 ], C̃ = C T^{-1} = [ C1 0 C3 0 ]
4.38
Kalman decomposition
[Block diagram: the input u(t) enters the controllable subsystems x̃1 (controllable & observable, through B1) and x̃2 (controllable & unobservable, through B2); the subsystems are coupled through A21, A23, A24, A13, A43; only x̃1 and x̃3 reach the output y(t), through C1 and C3]
4.39
Signal und Systemtheorie II, D-ITET, Semester 4. Notes 5: Continuous LTI systems, frequency domain
John Lygeros
Automatic Control Laboratory, ETH Zürich
www.control.ethz.ch
Laplace Transform
Convert a time function f(t), f : R → R, to a complex variable function F(s), F : C → C:
F(s) = L{f(t)} = ∫_0^∞ f(t) e^{−st} dt, f(t) = L^{-1}{F(s)}
Recall that we assume that f(t) = 0 for all t < 0 (unlike SSI, p. 0.22)
Can also be defined for matrix valued functions, f : R → R^{n×m}, F : C → C^{n×m}, element by element
Some common transforms:
L{e^{−at}} = 1/(s + a)
L{sin(ωt)} = ω/(s² + ω²)
L{cos(ωt)} = s/(s² + ω²)
Exercise: Prove C using the s shift property. Prove D using C. Prove B using A and the time derivative property.
Exercise: Compute the Laplace transforms of f(t) = t, f(t) = t^n, f(t) = t e^{−at}, f(t) = e^{−at} cos(ωt), g(t) = d²f(t)/dt², f(t) = sin(ωt + φ)
" % % 1 1 1 % !1 " !1 " 1 L # 2 ! &= L # &= L # & $ s + 3s + 2 ' $ (s + 1)(s + 2) ' $s +1 s + 2' !1 " 1 % !1 " 1 % !t !2t =L # &! L # &=e !e $ s + 1' $s + 2'
5.5
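The partial fraction step can be automated; a sketch using scipy.signal.residue on the same example:

```python
import numpy as np
from scipy.signal import residue

# Partial fraction expansion of 1/(s^2 + 3s + 2) = 1/((s+1)(s+2))
r, p, k = residue([1.0], [1.0, 3.0, 2.0])
print(np.sort(p))  # poles -2 and -1
print(r)           # residues pair with p: +1 at s = -1, -1 at s = -2

# Inverse transform f(t) = sum_i r_i e^{p_i t}; compare with e^{-t} - e^{-2t} at t = 1
t = 1.0
f = sum(ri * np.exp(pi * t) for ri, pi in zip(r, p))
print(abs(f - (np.exp(-t) - np.exp(-2 * t))) < 1e-9)  # True
```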
Laplace domain: L{ dx/dt (t) } = L{ A x(t) + B u(t) } ⇒ s X(s) − x(0) = A X(s) + B U(s), i.e.
X(s) = (sI − A)^{-1} x0 + (sI − A)^{-1} B U(s)
Time domain: x(t) = e^{At} x0 + ∫_0^t e^{A(t−τ)} B u(τ) dτ (convolution)
Comparing the two, L{x(t)} = L{e^{At}} x0 + L{e^{At}} B U(s), so
L{e^{At}} = (sI − A)^{-1}
5.7
RLC circuit example:
d/dt [ v_C(t) ; i_L(t) ] = [ 0  1/C ; −1/L  −R/L ] [ v_C(t) ; i_L(t) ] + [ 0 ; 1/L ] v_s(t)
For R = 3, C = 0.5, L = 1:
A = [ 0  2 ; −1  −3 ] ⇒ (sI − A) = [ s  −2 ; 1  s+3 ]
Laplace transform of the state transition matrix:
(sI − A)^{-1} = 1/(s² + 3s + 2) [ s+3  2 ; −1  s ]
5.8
"1
"1
& ( ( '
5.10
5.11
5.12
In general, for stable systems with sinusoidal input steady state solution is also sinusoidal with
Same frequency as input
Amplitude and phase determined by system matrices
5.13
Transfer function
Consider the ZSR:
X(s) = (sI − A)^{-1} B U(s), Y(s) = C X(s) + D U(s)
⇒ Y(s) = [ C (sI − A)^{-1} B + D ] U(s) = G(s) U(s)
The transfer function G(s) = C (sI − A)^{-1} B + D, G : C → C^{p×m}, summarizes the system input-output behavior
In the RLC example, measuring y = i_L, i.e. C = [ 0 1 ], D = 0:
G(s) = s / ((s + 1)(s + 2))
If we measure y = v_C, C = [ 1 0 ] and G(s) = 2 / ((s + 1)(s + 2))
5.14
In general, for a SISO system G(s) is a rational function of s:
G(s) = (b_1 s^{n−1} + b_2 s^{n−2} + … + b_n) / (s^n + a_1 s^{n−1} + … + a_n)
Some b_i may be zero. If there are no pole-zero cancellations the denominator is the characteristic polynomial of A, i.e. the poles are the eigenvalues of A. For simplicity we will mostly consider SISO, strictly proper transfer functions in the rest of these notes 5.17
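G(s) = C(sI − A)^{-1}B + D can also be computed numerically; a sketch for the RLC example using scipy.signal.ss2tf:

```python
import numpy as np
from scipy.signal import ss2tf

# RLC example: state x = (v_C, i_L) with R=3, C=0.5, L=1; output y = v_C
A = np.array([[0.0, 2.0], [-1.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(np.round(num, 10))  # numerator: constant 2  ->  G(s) = 2 / (s^2 + 3s + 2)
print(np.round(den, 10))  # denominator coefficients [1, 3, 2]
```

The denominator [1, 3, 2] is the characteristic polynomial of A, so the poles −1, −2 are its eigenvalues, as stated above.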
Asymptotically stable if and only if Re[p_i] < 0 for all i
Unstable if there exists i with Re[p_i] > 0
If Re[p_i] ≤ 0 for all i and Re[p_i] = 0 for some i, the system may be stable or unstable, depending on the partial fraction expansion (cf. dependence on the eigenvectors of the matrix A, Notes 3)
Block diagrams
Series connection: G1(s) followed by G2(s) is equivalent to G2(s)G1(s)
Parallel connection: G2(s) + G1(s)
Negative feedback, G1(s) in the forward path and G2(s) in the feedback path: [1 + G1(s)G2(s)]^{-1} G1(s)
Block diagrams
Example: u → K1(s) → K2(s) → G(s) → y, with K3(s) in the feedback path:
Y(s) = [1 + G(s)K2(s)K3(s)]^{-1} G(s)K2(s)K1(s) U(s)
In the SISO case: composition of rational transfer functions is also a rational transfer function
Properties of the closed loop system can be studied using the same tools
Exercise: In the SISO case, show that if G(s) is strictly proper and K1(s), K2(s), K3(s) are proper, then the closed loop transfer function is strictly proper
5.21
Frequency response
In RLC example, steady state response to sinusoidal input is sinusoidal
More generally consider proper, stable SISO system with transfer function G(s)
Apply u(t) = sin(ωt)
Output settles to a sinusoid y(t) = K sin(ωt + φ) with
The same frequency ω
Amplitude K = |G(jω)| = √( Re[G(jω)]² + Im[G(jω)]² )
Phase φ = ∠G(jω)
Frequency response
Response of system to sinusoids at dierent frequencies called the frequency response
Frequency response important because
Sinusoids are common inputs
Directly related to any other input by Fourier transform
Frequency response tells us a lot about system behavior
E.g. Will it be stable under various interconnections?
Frequency response usually summarized graphically
Bode plots: log-log plot of |G(jω)| vs. ω, lin-log plot of ∠G(jω) vs. ω
Nyquist plot: G(jω) in polar coordinates, parameterized by ω
Nichols chart: log-lin plot of |G(jω)| vs. ∠G(jω), parameterized by ω
5.23
[Frequency response plots (5.24, 5.25) for G(s) = 2/((s + 1)(s + 2)) and G(s) = s/((s + 1)(s + 2))]
Resonance
Appears in second order systems (two poles)
Bode magnitude plot has a maximum at some frequency
Sinusoidal inputs around this frequency get amplified
Important consequences for performance
Second order systems very common in practice
Example: Simplified suspension model; input: force f(t), output: position x(t)
5.26
Resonance
Second order transfer functions of interest look like
G(s) = K ω_n² / (s² + 2ζω_n s + ω_n²), ω_n > 0
ω_n: natural frequency, ζ: damping ratio
For the suspension example: ω_n = √(k/m), ζ = d / (2√(km)), K = 1/k
Frequency response:
G(jω) = K ω_n² / ( (ω_n² − ω²) + j(2ζω_n ω) )
Gain:
|G(jω)| = K ω_n² / √( (ω_n² − ω²)² + (2ζω_n ω)² )
5.27
Resonance
Exercise: Verify 1-5
1. For stability need ζ > 0
2. For poles real (overdamped system): ζ ≥ 1
3. For poles real and equal (critical damping): ζ = 1
4. For poles complex (underdamped system): 0 < ζ < 1
5. For poles imaginary (undamped system): ζ = 0
6. For ζ ≥ 1/√2 the magnitude Bode plot is decreasing in ω
7. For 0 ≤ ζ < 1/√2 the magnitude Bode plot has a maximum at ω = ω_n √(1 − 2ζ²), where |G(jω)| = K / (2ζ√(1 − ζ²))
Exercise: Take the derivative of |G(jω)| to verify 6-7
5.28
Resonance
Bode plots typically drawn with normalized axes (ω/ω_n, gain normalized so K = 1)
As damping ζ → 0:
Frequency of the maximum → ω_n
Maximum amplitude → ∞
5.29
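Points 6-7 on p.5.28 can be verified numerically; K, ω_n, ζ below are assumed illustrative values:

```python
import numpy as np

# Second order G(s) = K w_n^2 / (s^2 + 2 z w_n s + w_n^2); values assumed for illustration
K, wn, z = 1.0, 2.0, 0.2

def gain(w):
    # |G(jw)| as on p.5.27
    return K * wn**2 / np.sqrt((wn**2 - w**2)**2 + (2 * z * wn * w)**2)

w_peak = wn * np.sqrt(1 - 2 * z**2)       # predicted resonance frequency (point 7)
g_peak = K / (2 * z * np.sqrt(1 - z**2))  # predicted peak gain

w = np.linspace(0.01, 10.0, 100001)
i = int(np.argmax(gain(w)))
print(round(float(w[i]), 3), round(float(gain(w[i])), 3))  # ~1.918 and ~2.552
```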
Is the transfer function a unique time domain description?
G(s) = (s − z1)(s − z2)⋯(s − zk) / ((s − p1)(s − p2)⋯(s − pn)) ←?→ dx/dt (t) = A x(t) + B u(t), y(t) = C x(t) + D u(t)
Given G(s), a choice of A, B, C, D such that C (sI − A)^{-1} B + D = G(s) is known as a realization of G(s)
Clearly not unique, e.g. under a coordinate change x̃ = T x, det(T) ≠ 0 5.30
Exercise: Show that both the controllable and the observable canonical forms are realizations of G(s)
5.31
Example: G(s) = 1/(s + 1), the same as ẋ(t) = −x(t) + u(t), y(t) = x(t), x ∈ R
Exercise: Show points 1-5
The original state space system is unstable, yet the transfer function poles have negative real parts!
Pole-zero cancellation of the term corresponding to the uncontrollable/unobservable part
6. Can be shown in general using the Kalman decomposition
5.32
In summary
Transfer function: alternative system description to state space. Closely related, but not equivalent
Advantages:
+ Coordinate independent
+ Easier to manipulate for system composition
+ Easier to compute response to complicated inputs
+ Immediate connection to steady state sinusoidal response
+ May also work for systems that do not have a state space description (e.g. delay elements)
Disadvantages:
− Less natural in terms of physical laws
− Used mostly for ZSR
− May contain less information than the state space description: unobservable and uncontrollable parts are lost
5.33
Signal und Systemtheorie II, D-ITET, Semester 4. Notes 6: Discrete time LTI systems
John Lygeros
Automatic Control Laboratory, ETH Zürich
www.control.ethz.ch
6.2
Requires transformation of real valued signals of real time to discrete valued signals of discrete time and vice versa
Analog to digital conversion (A/D or ADC)
Digital to analog conversion (D/A or DAC)
6.3
6.4
COMPUTER
Z.O.H SYSTEM
6.5
For t ∈ [kT, (k+1)T ), x(t) = e^{A(t−kT)} x(kT) + ∫_{kT}^{t} e^{A(t−τ)} B u(τ) dτ
6.6
With a zero order hold, u(τ) = u(kT) over each sampling interval, so
x((k+1)T) = e^{AT} x(kT) + ∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} dτ B u(kT)
i.e. Ā = e^{AT}, B̄ = ∫_0^T e^{Aσ} dσ B, C̄ = C, D̄ = D
6.7
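Ā and B̄ can be computed exactly in one matrix exponential (a standard augmented-matrix trick, not spelled out in the notes); a sketch using the RLC A matrix:

```python
import numpy as np
from scipy.linalg import expm

# Exact ZOH discretization: Abar = e^{AT}, Bbar = (int_0^T e^{A s} ds) B,
# both read off one augmented matrix exponential
A = np.array([[0.0, 2.0], [-1.0, -3.0]])   # RLC example matrix again
B = np.array([[0.0], [1.0]])
T = 0.1

n, m = A.shape[0], B.shape[1]
M = np.block([[A, B], [np.zeros((m, n + m))]])
E = expm(M * T)
Abar, Bbar = E[:n, :n], E[:n, n:]
print(np.round(Abar, 4))
print(np.round(Bbar, 4))
```

Since this A is invertible, B̄ can be cross-checked against A^{-1}(e^{AT} − I)B.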
Solution: Given x0 ∈ R^n and u_k ∈ R^m, k = 0, 1, …, N−1, the solution consists of two sequences x_k ∈ R^n, k = 0, 1, …, N and y_k ∈ R^p, k = 0, 1, …, N such that
x_k = A^k x0 + Σ_{i=0}^{k−1} A^{k−i−1} B u_i, y_k = C x_k + D u_k
Exercise: Prove this by induction
The first term is the ZIT (zero input transition), the second the ZST (zero state transition)
6.9
Computation of solution
Hard part is the computation of A^k (cf. e^{At})
If the matrix A is diagonalizable, A^k = W Λ^k W^{-1} with
Λ^k = diag( λ1^k, …, λn^k )
Definition: The system is called stable if for all ε > 0 there exists δ > 0 such that ‖x0‖ ≤ δ ⇒ ‖x_k‖ ≤ ε for all k = 0, 1, … It is called asymptotically stable if in addition lim_{k→∞} x_k = 0. A system that is not stable is called unstable. 6.10
For λ_i = μ_i + jν_i, |λ_i| = √(μ_i² + ν_i²). (For a sampled system, Ā = e^{AT}.)
Theorem 6.1: A system with diagonalizable A matrix is:
Stable if and only if |λ_i| ≤ 1 for all i
Asymptotically stable if and only if |λ_i| < 1 for all i
Unstable if |λ_i| > 1 for some i
6.11
6.12
Deadbeat response
Assume all eigenvalues of A are zero. Then A^N = 0 for some N ≤ n (nilpotent matrix); in general this is proved using the Jordan form
Then x_k = A^k x0 = 0 for all k ≥ N: the ZIT gets to 0 in finite time and stays there
This never happens with continuous time systems
6.13
Coordinate change
Assume x̃_k = T x_k for some invertible T ∈ R^{n×n}
In the new coordinates the system dynamics are again linear time invariant:
x̃_{k+1} = Ã x̃_k + B̃ u_k, y_k = C̃ x̃_k + D̃ u_k
with Ã = T A T^{-1}, B̃ = T B, C̃ = C T^{-1}, D̃ = D
6.14
If u_k = 0 for all k (autonomous system), x_k = A^k x0
6.15
6.16
Controllability
System is controllable if we can steer it from any initial condition x0 ∈ R^n to any final condition x_N ∈ R^n using an appropriate input sequence
Assume N ≥ n. Define again the controllability matrix
P = [ B  AB  A²B  …  A^{n−1}B ] ∈ R^{n×nm}
6.17
Observability
System is observable if we can infer the state evolution by observing the input and output sequences
Assume N ≥ n − 1. Define again the observability matrix
Q = [ C ; CA ; … ; CA^{n−1} ] ∈ R^{np×n}
Theorem 6.5: The system is observable if and only if Q has rank n.
Exercise: Prove this 6.18
z Transform
A time function f_k, f : N → R, converts to a complex variable function F(z), F : C → C:
F(z) = Z{f_k} = Σ_{k=0}^{∞} f_k z^{−k}, f_k = Z^{-1}{F(z)}
As before we assume that f_k = 0 for all k < 0 (unlike SSI)
Can also be defined for matrix valued functions by taking the sum element by element
z^{−1} ∈ C can be thought of as a unit time delay (z maps f_k to f_{k+1})
6.19
z Transform: Properties
Assumption: The function f_k is such that the sum converges
Convolution: Z{ (f ∗ g)_k } = Z{ Σ_{i=0}^{k} f_i g_{k−i} } = F(z) G(z)
Some common functions:
Impulse function: Z{ δ_k } = 1, where δ_0 = 1, δ_k = 0 if k ≠ 0
Step function: Z{ 1_k } = z / (z − 1)
Geometric progression: Z{ a^k } = z / (z − a) (converges for |a/z| < 1)
6.20
Transfer function
Assume x0 = 0 (ZSR) and take the z transform of all signals:
Y(z) = [ C (zI − A)^{-1} B + D ] U(z)
Transfer function: G(z) = C (zI − A)^{-1} B + D
Exercise: Show that the transfer function is the z-transform of the impulse response (appropriately defined!)
6.21
Transfer function
G(z) = C (zI − A)^{-1} B + D
Rational function of z
System asymptotically stable: poles of G(z) have magnitude less than 1
If the system is uncontrollable/unobservable there are pole-zero cancellations
6.22
Simulation
Simulation: Numerical solution in computer
Simulation of discrete time systems (linear or nonlinear) is very easy conceptually
Discrete time systems can also help understand the simulation of continuous time systems
Consider continuous time, LTI system
6.23
Consider again the RLC circuit:
d/dt [ v_C(t) ; i_L(t) ] = [ 0  1/C ; −1/L  −R/L ] [ v_C(t) ; i_L(t) ] + [ 0 ; 1/L ] v_s(t)
Solution depends on eigenvalues and eigenvectors Determined by R, L, C Consider autonomous case vs(t)=0 for all t
6.24
[Phase plane plots for the RLC example: real eigenvalues λ2 < λ1 < 0 with eigenvectors w1, w2; complex eigenvalues λ_i = −1.5 ± 14.06j (6.25); imaginary eigenvalues λ_i = ±14.14j (6.26)]
Simulation
Exact solution
Approximation
6.27
Numerical approximation
Approximate the solution with a sequence x_k ≈ x(kΔ)
Divide the interval into N equal subintervals
Let Δ be the simulation step
We approximate dx/dt (kΔ) ≈ (x_{k+1} − x_k)/Δ, which gives
x_{k+1} = (I + AΔ) x_k + Δ B u_k
6.28
Numerical approximation
Integration using Euler method
First order approximation of the equation
6.29
6.31
The discrete time system is asymptotically stable, x_k → 0 for all x0 ∈ R^n, if and only if |1 + Δλ_i| < 1 for all i = 1, …, n
For example, if the λ_i are real and negative this requires
Δ < 2 / max_{i=1,…,n} |λ_i|
[Euler simulations with step sizes Δ = 0.01, Δ = 0.05, Δ = 0.25, Δ = 1.25]
6.33
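The step size bound can be seen directly in simulation; a sketch using the RLC A matrix (eigenvalues −1 and −2, so stability requires Δ < 1):

```python
import numpy as np

# Forward Euler on dx/dt = Ax: x_{k+1} = (I + Delta*A) x_k
A = np.array([[0.0, 2.0], [-1.0, -3.0]])   # eigenvalues -1 and -2

def euler_norms(delta, steps=200):
    x = np.array([1.0, 1.0])
    M = np.eye(2) + delta * A
    for _ in range(steps):
        x = M @ x
    return float(np.linalg.norm(x))

print(euler_norms(0.25))  # decays toward 0: |1 - 0.25| and |1 - 0.5| both < 1
print(euler_norms(1.25))  # blows up: |1 - 2.5| = 1.5 > 1
```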
Simulation
The simple first order approximation is known as the forward Euler method
Another approach is backward Euler
More sophisticated methods: e.g. Runge-Kutta, variable step, high order
Specialized methods for stiff systems, hybrid systems, differential-algebraic systems, etc.
Coded in robust numerical tools such as Matlab 6.35
Nonlinear systems
Most of this course: Dynamical systems modeled by linear differential equations in state space form
7.2
Nonlinear systems
More general than linear systems, hence more difficult
Concentrate on autonomous, time invariant systems
ẋ(t) = f(x(t))
with f : R^n → R^n Lipschitz continuous: there exists λ > 0 such that for all x, x̂ ∈ R^n, ‖f(x) − f(x̂)‖ ≤ λ ‖x − x̂‖
This implies existence and uniqueness of solutions
In general the solution cannot be computed analytically
Simulation methods are applicable however
Look into the following issues
Invariant sets
Stability of invariant sets
7.3
Invariant sets
Generalization of notion of equilibrium
Definition: A set of states S ⊆ R^n is called invariant if x0 ∈ S implies x(t) ∈ S for all t ≥ 0, where x(t) denotes the solution to ẋ(t) = f(x(t)) starting at x0
Equilibrium points are an important class of invariant sets
Definition: A state x̄ ∈ R^n is called an equilibrium if f(x̄) = 0. Then {x̄} is an invariant set
7.4
Equilibria
Linear systems have a linear subspace of equilibria
Exercise: Show that the equilibria of ẋ(t) = A x(t) coincide with the null space of A
Nonlinear systems can have many isolated equilibria
Example: The pendulum from Notes 1,
ẋ(t) = ( x2(t), −(d/m) x2(t) − (g/l) sin x1(t) ),
has 2 equilibria, x̄ = (0, 0) and x̄ = (π, 0). Exercise: Show this
(More precisely, the number of pendulum equilibria is infinite, but they all coincide physically with these two) 7.5
Exercise: Let x_{k+1} = f(x_k) be a nonlinear system in discrete time (cf. p.1.27). The equilibria for this system are given by x̄ = f(x̄)
7.6
7.7
Limit cycles
Observed only in systems of dimension 2 or more
Definition: A solution x(t) is called a periodic orbit if x(t + T) = x(t) for some T > 0 and all t ≥ 0
Nonlinear systems can also have nontrivial, isolated periodic orbits: limit cycles 7.8
7.9
Strange attractors
In 2D continuous time, equilibria & limit cycles are as bad as it gets (Poincaré-Bendixson Theorem)
In higher dimensions stranger things may happen:
Invariant tori
Chaotic attractors
Example (Lorenz, E.N. Lorenz 1917-2008):
ẋ1(t) = a (x2(t) − x1(t))
ẋ2(t) = (1 + b) x1(t) − x2(t) − x1(t) x3(t)
ẋ3(t) = x1(t) x2(t) − c x3(t)
7.11
Chaotic attractor
For some parameter values, there is a bounded subset of the state space such that if we start inside we stay there forever, and
Trajectories starting (almost) anywhere go around forever,
Without ever meeting themselves (not limit cycles)
Given any two points in this set we can find a trajectory that starts arbitrarily close to one and ends up arbitrarily close to the other
This set is called a chaotic or strange attractor
Exercise: Compute the equilibria of the Lorenz equations
Exercise: Simulate the Lorenz equations for a=10, b=24, c=2, and x0=(5, 6, 20)
7.12
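A forward-Euler sketch of the simulation exercise above; the step size and horizon are assumed choices:

```python
import numpy as np

# System from p.7.11 with the exercise's values a=10, b=24, c=2 and x0 = (5, 6, 20)
def f(x, a=10.0, b=24.0, c=2.0):
    x1, x2, x3 = x
    return np.array([a * (x2 - x1),
                     (1 + b) * x1 - x2 - x1 * x3,
                     x1 * x2 - c * x3])

dt, steps = 1e-3, 20000   # assumed step size and horizon (20 time units)
x = np.array([5.0, 6.0, 20.0])
traj = np.empty((steps, 3))
for k in range(steps):
    x = x + dt * f(x)     # forward Euler step
    traj[k] = x

# The trajectory neither settles nor diverges: it wanders on a bounded set
print(float(np.round(np.abs(traj).max(), 1)))
```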
7.13
Stability
Most commonly studied property of invariant sets
Trajectories stay close, or converge, to the invariant set
Restrict attention to equilibria
Simple characterization for LTI systems and the equilibrium x̄ = 0:
System stable if the eigenvalues of A have negative real part
Equivalently, poles of the transfer function are in the left half of the complex plane
Definition: An equilibrium x̄ is called stable if for all ε > 0 there exists δ > 0 such that ‖x0 − x̄‖ ≤ δ implies ‖x(t) − x̄‖ ≤ ε for all t ≥ 0
Exercise: Which of the equilibria of the pendulum (simulation p. 7.6) would you say are stable and which not?
7.14
Asymptotic stability
Stability says that if we start close we stay close
Do we get closer and closer?
Definition: An equilibrium x̄ is called locally asymptotically stable if it is stable and there exists M > 0 such that ‖x0 − x̄‖ ≤ M implies lim_{t→∞} x(t) = x̄
It is called globally asymptotically stable if this holds for any M > 0. The set of x0 such that lim_{t→∞} x(t) = x̄ is called the domain of attraction of x̄
Exercise: What is the domain of attraction of a globally asymptotically stable equilibrium?
Exercise: Is there a difference between local and global asymptotic stability for linear systems?
7.15
Exercise: Which of the equilibria would you say are locally asymptotically stable? Which globally?
7.16
Linearization
A simple way to study the stability of an equilibrium of a nonlinear system is to approximate it by a linear system
ẋ(t) = f(x(t)), with f(x̄) = 0 at the equilibrium x̄
Take a Taylor expansion about x̄: f(x) ≈ f(x̄) + (∂f/∂x)(x̄) (x − x̄), and set A = (∂f/∂x)(x̄)
Linearization
Consider the distance of x to the equilibrium, δx(t) = x(t) − x̄ ∈ R^n
When x is close to the equilibrium, δx is small and
d δx(t)/dt ≈ A δx(t)
So close to the equilibrium the nonlinear system is expected to behave like a linear system
In particular, stability of the linearization should tell us something about stability of the nonlinear system
Stability of the linearization can be determined just by looking at the eigenvalues of A
7.18
Stability by linearization
Theorem 7.1: The equilibrium x is 1. Locally asymptotically stable if the eigenvalues of the linearization have negative real part 2. Unstable if the linearization has at least one eigenvalue with positive real part
Called Lyapunov's first or indirect method
Advantage: Very easy to use
Disadvantages:
No information about the domain of attraction
Inconclusive if linearization has imaginary/zero eigenvalues
7.19
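Theorem 7.1 can be applied numerically; a sketch for the undamped pendulum of p.7.5 (d = 0 assumed, and m, l, g illustrative), matching the discussion that follows:

```python
import numpy as np

# Undamped pendulum: f(x) = (x2, -(g/l) sin x1); g, l assumed illustrative values
g, l = 9.81, 1.0

def jacobian(x1):
    # Linearization A = df/dx evaluated at the equilibrium (x1, 0)
    return np.array([[0.0, 1.0],
                     [-(g / l) * np.cos(x1), 0.0]])

up = np.linalg.eigvals(jacobian(np.pi))   # inverted equilibrium (pi, 0)
down = np.linalg.eigvals(jacobian(0.0))   # hanging equilibrium (0, 0)
print(up.real.max() > 0)            # True: positive eigenvalue => unstable
print(np.allclose(down.real, 0.0))  # True: imaginary eigenvalues => Theorem 7.1 inconclusive
```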
Linearization about x̄ = (π, 0) has a positive eigenvalue, hence x̄ = (π, 0) is unstable for the nonlinear system
Linearization about x̄ = (0, 0) has imaginary eigenvalues, so stability of x̄ = (0, 0) is not determined by Theorem 7.1
It turns out that this equilibrium is stable (see fig. on p.7.6). This is not always the case: for example, the linearizations of both ẋ(t) = −x(t)³ and ẋ(t) = x(t)³ about x̄ = 0 have one eigenvalue at zero, but 0 is stable for one system and unstable for the other
7.21
Lyapunov functions
For linear systems stability was characterized in two ways:
Eigenvalues of the matrix A (Theorems 3.1, 3.2), or poles of the transfer function (p.5.19)
Existence of a decreasing energy-like function (Theorem 4.1)
The first applies to nonlinear systems; how about the second?
Properties of the energy-like function for linear systems:
V(x) = (1/2) x^T Q x
For nonlinear systems keep 2 and 4, but allow more general (nonquadratic) V(x) 7.22
7.23
Proof: By picture!
S_c = { x ∈ S : V(x) ≤ c }
7.24
7.26
Examples
Consider first
ẋ(t) = f(x(t)) = −x(t)³, with equilibrium x̄ = 0
Let S = R, V(x) = x². Clearly V(0) = 0, V(x) > 0 for all x ≠ 0, and (∂V/∂x)(x) f(x) = −2x⁴ < 0 for all x ≠ 0
Therefore 0 is globally asymptotically stable
How about the pendulum with d > 0?
As before consider S = (−π, π) × R and V(x) the energy:
d V(x(t))/dt = m l² x2(t) ẋ2(t) + m g l sin(x1(t)) ẋ1(t) = −d l² x2(t)² ≤ 0
But dV/dt = 0 whenever x2(t) = 0 (not only at x̄ = (0, 0)), therefore we cannot conclude local asymptotic stability
7.27
La Salle's Theorem
Theorem 7.4: Assume there exists a compact invariant set S ⊆ R^n and a differentiable function V(·) : R^n → R such that (∂V/∂x)(x) f(x) ≤ 0 for all x ∈ S. Let M be the largest invariant set contained in the set
Ŝ = { x ∈ S : (∂V/∂x)(x) f(x) = 0 } ⊆ R^n
Then all trajectories starting in S tend to M as t → ∞
Compact means bounded and closed
If x̄ is the only invariant set in { x ∈ S : (∂V/∂x)(x) f(x) = 0 } then all trajectories starting in S tend to it
7.28
Take V(x) the energy and S = { x ∈ R² : V(x) ≤ 2mgl − ε } for any ε > 0. Exercise: Show that S is invariant
Recall that (∂V/∂x)(x) f(x) = −d l² x2² ≤ 0 for all x ∈ S, with equality when x2 = 0
x̄ = (0, 0) is the only invariant set contained in Ŝ = { x ∈ S : x2 = 0 } (since ẋ2 ≠ 0 if x2 = 0 but x1 ≠ 0)
Therefore all trajectories that start in S tend to x̄ = (0, 0)
By Theorem 7.2, x̄ = (0, 0) is stable. Hence, by Theorem 7.4, x̄ = (0, 0) is locally asymptotically stable
Moreover, since ε is arbitrary, the domain of attraction of (0, 0) contains everything except the other equilibrium (π, 0)
7.29
General comments
Theorem 7.4 applies to more general invariant sets
(e.g. limit cycles)
Theorems 7.2 and 7.3 also generalize easily
Theorem 7.1 slightly harder to generalize (linearization about trajectories, Poincaré maps)
The conditions of Theorems 7.2-7.4 are sufficient but not necessary
Finding Lyapunov functions for nonlinear systems is an art, not a science. Common choices:
Energy for mechanical and electrical systems
Quadratics (always work for linear systems)
Intuition!
7.30