KHS NONLINEAR CONTROL Compendium - 2012 PDF
Lecture notes
Karl Henrik Johansson, Bo Wahlberg and Elling W. Jacobsen
This revision December 2011
Automatic Control
KTH, Stockholm, Sweden
Preface
Many people have contributed to these lecture notes in nonlinear control.
Originally it was developed by Bo Bernhardsson and Karl Henrik Johansson,
and later revised by Bo Wahlberg and myself. Contributions and comments
by Mikael Johansson, Ola Markusson, Ragnar Wallin, Henning Schmidt,
Krister Jacobsson, Björn Johansson and Torbjörn Nordling are gratefully
acknowledged.
Elling W. Jacobsen
Stockholm, December 2011
EL2620 2011
Course Information
• All info and handouts are available at the course homepage at KTH Social
• Compulsory course items
– 3 homeworks, have to be handed in on time (we are strict on this!)
– 5 h written exam on December 11, 2012

Material
• Textbook: Khalil, Nonlinear Systems, Prentice Hall, 3rd ed., 2002. Optional but highly recommended.
• Lecture notes: Copies of transparencies (from previous year)
• Exercises: Class room and home exercises
• Homeworks: 3 computer exercises to hand in (and review)
• Software: Matlab

Alternative textbooks (decreasing mathematical rigour): Sastry, Nonlinear Systems: Analysis, Stability and Control; Vidyasagar, Nonlinear Systems Analysis; Slotine & Li, Applied Nonlinear Control; Glad & Ljung, Reglerteori, flervariabla och olinjära metoder. Only references to Khalil will be given.

Two course compendia sold by STEX.
Course Outline
• Introduction: nonlinear models and phenomena, computer simulation (L1-L2)
• Feedback analysis: linearization, stability theory, describing functions (L3-L6)
• Control design: compensation, high-gain design, Lyapunov methods (L7-L10)
• Alternatives: gain scheduling, optimal control, neural networks, fuzzy control (L11-L13)

Linear Systems
Definition: Let M be a signal space. The system S : M → M is linear if for all u, v ∈ M and α ∈ R

  S(αu) = αS(u)            (scaling)
  S(u + v) = S(u) + S(v)   (superposition)

Example: Linear time-invariant systems

  ẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t),  x(0) = 0

  y(t) = (g ∗ u)(t) = ∫₀ᵗ g(τ)u(t − τ) dτ
Superposition: it is enough to know a step (or impulse) response.

Example (ball on beam, angle φ, position x): Can the ball move 0.1 meter in 0.1 seconds from steady state? The linear model (step response with φ = φ₀) gives

  x(t) ≈ 10 φ₀ t²/2, so x(0.1) ≈ 0.05 φ₀
Stability Can Depend on Reference Signal
Example: Control system with valve characteristic f(u) = u²:

  r → Σ → 1/s → f(u) = u² → 1/(s+1)² → y, with unit negative feedback

Step responses (figures): the output y converges for small reference steps but diverges for large ones; the critical amplitude is r = 1.68.
Multiple Equilibria
Example: chemical reactor with constant input, conversion x₁, temperature x₂, and coolant temperature x_c:

  ẋ₁ = −x₁ exp(−1/x₂) + f(1 − x₁),  f = 0.7, ε = 0.4

(figures: a pulse in the coolant temperature x_c makes the temperature x₂ settle at a different level). Existence of multiple stable equilibria for the same input gives a hysteresis effect.

Stable Periodic Solutions
Example: Position control of a motor with back-lash.

  Motor: G(s) = 1/(s(1 + 5s)),  Controller: K = 5 (P-controller)

The loop Sum → P-controller → Backlash → Motor exhibits a stable periodic solution.
Relay Feedback
A relay (output ±1) in feedback with the process, Σ → Relay → Process, gives a stable oscillation whose amplitude A and period T can be read off from the output (figures: relay characteristic and output responses y over time; PID in the loop).
Harmonic Distortion
Example: Sinusoidal response of a saturation. The input a sin t gives the output

  y(t) = Σ_{k=1}^∞ A_k sin(kt)

(figures: amplitude spectra and time responses for two input amplitudes).

Example: Electrical power distribution — nonlinearities such as rectifiers, switched electronics, and transformers give rise to harmonic distortion.

Example: Electrical amplifiers — introduce spectrum leakage, which is a problem in cellular systems. Trade-off between efficiency and linearity.
Subharmonics
Example: Duffing's equation ÿ + ẏ + y − y³ = a sin(ωt). The response (figure) contains frequency components below the driving frequency ω.

Nonlinear Differential Equations
Consider a solution to ẋ = f(x), x(0) = x₀.
• When does a solution exist?
• When is the solution unique?

Example: ẋ = Ax, x(0) = x₀, gives x(t) = exp(At)x₀
Example: ẋ = x², x(0) = x₀ (finite escape time). Recall the trick:

  dx/x² = dt

Integrate ⇒ −1/x(t) + 1/x(0) = t ⇒ x(t) = x₀/(1 − x₀t)

(figure: x(t) escapes to infinity at t = 1/x₀)
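A quick numerical sketch of the escape-time formula (not part of the original slides; the initial condition and stopping time are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x' = x^2, x(0) = x0 has the closed-form solution x(t) = x0/(1 - x0*t),
# which escapes to infinity at t = 1/x0.
x0 = 1.0
t_end = 0.9  # stop short of the escape time 1/x0 = 1
sol = solve_ivp(lambda t, x: x**2, (0.0, t_end), [x0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = 0.9
numeric = sol.sol(t)[0]
exact = x0 / (1 - x0 * t)   # = 10 at t = 0.9
print(numeric, exact)
```

The numerical solution tracks the closed form until just before the blow-up time.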
Uniqueness Problems
Example: ẋ = √x, x(0) = 0, has many solutions:

  x(t) = 0  and  x(t) = (t − C)²/4 for t > C, 0 for t ≤ C

for any C ≥ 0 (figure: several such solutions).

Physical Interpretation
Consider the reverse example, i.e., the water tank lab process with

  ẋ = −√x,

where x is the water level. It is then impossible to know at what time the tank became empty: x(t) = x(T) = 0 for all t ≥ T.

Hint: Reverse time s = T − t ⇒ ds = −dt and thus dx/ds = −dx/dt.
Theorem: If f is Lipschitz continuous in B_r(x₀), i.e.,

  ‖f(x) − f(y)‖ ≤ L‖x − y‖,  x, y ∈ B_r(x₀),

then ẋ = f(x), x(0) = x₀, has a unique solution in B_r(x₀) over [0, δ]. Proof: See Khalil. (Figure: the Lipschitz condition bounds f between cones of slope L.)

Remarks
• The Euclidean norm is given by ‖x‖² = x₁² + ··· + x_n²
• δ = δ(r, L)
• f being C⁰ is not sufficient (cf., tank example)
• f being C¹ implies Lipschitz continuity (L = max_{x∈B_r(x₀)} ‖f′(x)‖)
Equilibria
Linear: 0 = Ax* + Bu*, y* = Cx*. Often the equilibrium is defined only through the state x*.

Example: pendulum with friction (mass M, length R, friction coefficient k):

  ẋ₁ = x₂
  ẋ₂ = −(k/(MR²)) x₂ − (g/R) sin x₁

ẋ₁ = ẋ₂ = 0 gives x₂* = 0 and sin(x₁*) = 0.

Common nonlinearities (figures): Relay, Backlash, Coulomb & Viscous Friction.
Simulink example: Step → Transfer Fcn 1/(s+1) → Scope, with the input u logged through a To Workspace block.

Check "Save format" of output blocks ("Array" instead of "Structure"), then plot in Matlab:

>> plot(t,y)

(figures: simulated step responses)
Nonlinear Control System
Example: Control system with valve characteristic f(u) = u² (Motor–Valve–Process):

  r → Σ → 1/s → f(u) = u² → 1/(s+1)² → y

Simulink block diagram: Σ → 1/s → u² → 1/((s+1)(s+1)) → y, gain −1 in the feedback path.

Use Scripts to Document Simulations
If the block-diagram is saved to stepmodel.mdl, the following Script-file simstepmodel.m simulates the system:

open_system('stepmodel')
set_param('stepmodel','RelTol','1e-3')
set_param('stepmodel','AbsTol','1e-6')
set_param('stepmodel','Refine','1')
tic
sim('stepmodel',6)
toc
subplot(2,1,1),plot(t,y),title('y')
subplot(2,1,2),plot(t,u),title('u')
Example: linearization of the two-tank Simulink model twotank (subsystems for the upper and lower tank, with level h and flow q; gain 1/A, integrator, and outflow function f(u)). At the stationary point the levels are 0.1000 and the flow is 9.7995e-006:

>> [aa,bb,cc,dd]=linmod('twotank',x0,u0);
>> sys=ss(aa,bb,cc,dd);
>> bode(sys)
Homework 1
• Use your favorite phase-plane analysis tool
EL2620 Nonlinear Control
Lecture 3
• Stability definitions
• Linearization
• Phase-plane analysis
• Periodic solutions

Today's Goal
You should be able to
• Explain local and global stability
• Linearize around equilibria and trajectories
• Sketch phase portraits for two-dimensional systems
• Classify equilibria into nodes, focuses, saddle points, and center points
• Analyze stability of periodic solutions through Poincaré maps
Local Stability
Consider ẋ = f(x) with f(0) = 0.
Definition: The equilibrium x* = 0 is stable if for all ε > 0 there exists δ = δ(ε) > 0 such that

  ‖x(0)‖ < δ ⇒ ‖x(t)‖ < ε, for all t ≥ 0

Asymptotic Stability
The equilibrium is globally asymptotically stable if it is stable and lim_{t→∞} x(t) = 0 for all x(0).

Linearization Around a Trajectory
For a nominal solution (x₀(t), u₀(t)) and a perturbed solution (x₀(t) + x̃(t), u₀(t) + ũ(t)), note that A and B are time dependent. However, if (x₀(t), u₀(t)) ≡ (x₀, u₀) then A and B are constant.
Example: Let x₀(t) = (h₀(t), v₀(t), m₀(t))ᵀ, u₀(t) ≡ u₀ > 0, be a solution (height, velocity and mass of a rocket). Linearization along this trajectory gives a time-varying system; one entry of A(t) is u₀/(m₀ − u₀t).

Example: The frozen-time eigenvalues

  λ(t) = λ = (α − 2 ± √(α² − 4))/2

are stable for 0 < α < 2. However,

  x(t) = [ e^{(α−1)t} cos t   e^{−t} sin t ; −e^{(α−1)t} sin t   e^{−t} cos t ] x(0)

grows unboundedly for α > 1: stability of time-varying systems cannot be concluded from frozen-time eigenvalues.
The theorem is also called Lyapunov's Indirect Method. A proof is given next lecture.

Example: the linearization has eigenvalues −3 and −5; x₀ is thus an asymptotically stable equilibrium for the nonlinear system.
Analytic solution: x(t) = e^{At}x(0), t ≥ 0. If A is diagonalizable, then

  e^{At} = V e^{Λt} V⁻¹ = (v₁ v₂) diag(e^{λ₁t}, e^{λ₂t}) (v₁ v₂)⁻¹

where v₁, v₂ are the eigenvectors of A (Av₁ = λ₁v₁ etc).

Example: Given the eigenvalues λ₁ < λ₂ < 0, with corresponding eigenvectors v₁ and v₂, respectively, the solution is

  x(t) = c₁e^{λ₁t}v₁ + c₂e^{λ₂t}v₂

It moves along the slow eigenvector towards x = 0 for large t.
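The eigendecomposition formula for e^{At} can be checked numerically (a sketch, not from the slides; the matrix reuses the stable-node example that appears later):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])   # eigenvalues -1 and -2
lam, V = np.linalg.eig(A)
t = 0.7

# e^{At} = V e^{Lambda t} V^{-1} for diagonalizable A
eAt_eig = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)
eAt_ref = expm(A * t)   # reference matrix exponential
err = np.max(np.abs(eAt_eig - eAt_ref))
print(err)
```

Both constructions agree to machine precision.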
Im λᵢ ≠ 0: Re λᵢ < 0 gives a stable focus, Re λᵢ > 0 an unstable focus, and Re λᵢ = 0 a center point.
Example—Unstable Focus

  ẋ = [ σ −ω ; ω σ ] x,  σ, ω > 0,  λ₁,₂ = σ ± iω

  x(t) = e^{At}x(0) = [1 1; −i i] diag(e^{σt}e^{iωt}, e^{σt}e^{−iωt}) [1 1; −i i]⁻¹ x(0)

In polar coordinates: ṙ = σr, θ̇ = ω. (Phase portraits shown for λ₁,₂ = 1 ± i and λ₁,₂ = 0.3 ± i.)
Example—Stable Node

  ẋ = [ −1 1 ; 0 −2 ] x

(λ₁, λ₂) = (−1, −2) and (v₁ v₂) = [ 1 −1 ; 0 1 ]
Theorem: Consider

  ẋ = f(x) = Ax + g(x),

with lim_{‖x‖→0} ‖g(x)‖/‖x‖ = 0. If ż = Az has a focus, node, or saddle point, then ẋ = f(x) has the same type of equilibrium at the origin.

Hint: Two cases; only one linearly independent eigenvector, or all vectors are eigenvectors.
Let (x₁, x₂) = (θout, θ̇out), K, T > 0, and θin(t) ≡ θin.

  ẋ₁(t) = x₂(t)
  ẋ₂(t) = −T⁻¹x₂(t) + KT⁻¹ sin(θin − x₁(t))

Equilibria are (θin + nπ, 0) since
  ẋ₁ = 0 ⇒ x₂ = 0
  ẋ₂ = 0 ⇒ sin(θin − x₁) = 0 ⇒ x₁ = θin + nπ

Linearization gives the following characteristic equations:
n even: λ² + T⁻¹λ + KT⁻¹ = 0
  K > (4T)⁻¹ gives stable focus; 0 < K < (4T)⁻¹ gives stable node
n odd: λ² + T⁻¹λ − KT⁻¹ = 0
  Saddle points for all K, T > 0
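The equilibrium classification above can be confirmed by computing the characteristic roots for sample parameter values (T = 1 and the K values below are arbitrary choices, not from the slides):

```python
import numpy as np

def eigs(K, T, n_even=True):
    # roots of lambda^2 + (1/T) lambda +/- K/T = 0
    sign = 1.0 if n_even else -1.0
    return np.roots([1.0, 1.0 / T, sign * K / T])

T = 1.0
lf = eigs(1.0, T, n_even=True)    # K > 1/(4T): complex, Re < 0 -> stable focus
ln = eigs(0.1, T, n_even=True)    # 0 < K < 1/(4T): real, negative -> stable node
ls = eigs(1.0, T, n_even=False)   # n odd: one positive, one negative -> saddle
print(lf, ln, ls)
```

The three cases reproduce the focus/node/saddle classification.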
Example: The system

  ẋ₁ = r(1 − r²) cos θ − r sin θ
  ẋ₂ = r(1 − r²) sin θ + r cos θ

in polar coordinates (x₁ = r cos θ, x₂ = r sin θ) gives ṙ = r(1 − r²), θ̇ = 1.

• When does there exist a periodic solution?
• When is it stable?
Poincaré Map
Assume the flow φ_t(x₀) is a periodic solution with period T. Let Σ ⊂ Rⁿ be an (n − 1)-dimensional hyperplane transverse to f at x₀.
Definition: The Poincaré map P : Σ → Σ maps a point x ∈ Σ to the first return φ_{τ(x)}(x) of the flow to Σ.
• If |λᵢ(W)| < 1 for all i, then the periodic orbit is asymptotically stable.
• If |λᵢ(W)| > 1 for some i, then the periodic orbit is unstable.

Example: For the system above, the first return time from any point (r₀, θ₀) ∈ Σ is τ(r₀, θ₀) = 2π, and

  P(r₀, θ₀) = ( [1 + (r₀⁻² − 1)e^{−2·2π}]^{−1/2}, θ₀ + 2π )

  W = dP/d(r₀, θ₀) evaluated at (1, 2πk) = [ e^{−4π} 0 ; 0 1 ]
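The radial part of this Poincaré map can be iterated numerically (a sketch, not from the slides; the starting radius is an arbitrary choice):

```python
import numpy as np

def P(r0):
    # radial Poincare map for r' = r(1 - r^2), theta' = 1
    return (1.0 + (r0**-2 - 1.0) * np.exp(-4.0 * np.pi))**-0.5

r = 0.2
for _ in range(5):
    r = P(r)
print(r)   # converges rapidly to the periodic orbit r = 1

# linearization of P at r = 1: contraction factor e^{-4 pi}, matching W
h = 1e-6
slope = (P(1.0 + h) - P(1.0 - h)) / (2 * h)
print(slope, np.exp(-4 * np.pi))
```

The fixed point r = 1 with multiplier e^{−4π} ≪ 1 confirms the asymptotic stability of the limit cycle.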
History
For what G(s) and f(·) is the closed-loop system (a static nonlinearity f(·) in feedback with G(s)) stable?
• Luré and Postnikov's problem (1944)
• Aizerman's conjecture (1949) (False!)
• Kalman's conjecture (1957) (False!)
• Solution by Popov (1960) (Led to the Circle Criterion)

Gain
Idea: Generalize the concept of gain to nonlinear dynamical systems. The gain γ of S is the largest amplification from u to y. Here S can be a constant, a matrix, a linear time-invariant system, etc.
Question: How should we measure the size of u and y?
Signal Norms
A signal x is a function x : R₊ → R. A signal norm ‖·‖_k is a norm on the space of signals x.
Example: 2-norm (energy norm): ‖x‖₂ = √(∫₀^∞ |x(t)|² dt)

Eigenvalues Are Not Gains
The spectral radius of a matrix M,

  ρ(M) = max_i |λᵢ(M)|,

is not a gain (nor a norm). Why? What amplification is described by the eigenvalues?
Parseval's Theorem: if x and y have finite energy, then

  ∫₀^∞ y(t)x(t) dt = (1/2π) ∫_{−∞}^∞ Y*(iω)X(iω) dω

System Gain: The gain of S is defined as

  γ(S) = sup_{u∈L₂} ‖S(u)‖₂ / ‖u‖₂
Lemma: For a stable transfer function G,

  γ(G) = sup_{u∈L₂} ‖Gu‖₂/‖u‖₂ = sup_{ω∈(0,∞)} |G(iω)|

Proof idea: by Parseval, ‖y‖₂² = (1/2π) ∫_{−∞}^∞ |Y(iω)|² dω. (Bode magnitude plot of |G(iω)| shown.)

BIBO Stability
Definition: S is bounded-input bounded-output (BIBO) stable if γ(S) < ∞.
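The lemma can be illustrated numerically; the resonant transfer function below is an illustrative choice (not one of the lecture's examples) whose peak gain has a closed form:

```python
import numpy as np

# G(s) = 1/(s^2 + 0.2 s + 1): second-order system with damping z = 0.1
w = np.linspace(0.01, 10.0, 200001)
G = 1.0 / ((1j * w)**2 + 0.2 * (1j * w) + 1.0)
gain = np.max(np.abs(G))

# resonance peak formula: sup |G(iw)| = 1/(2 z sqrt(1 - z^2))
z = 0.1
print(gain, 1.0 / (2 * z * np.sqrt(1 - z**2)))
```

The grid maximum of |G(iω)| matches the analytic peak, i.e. the L₂ gain of G.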
Small Gain Theorem (proof fragment):

  ‖e₁‖₂ ≤ (‖r₁‖₂ + γ(S₂)‖r₂‖₂) / (1 − γ(S₂)γ(S₁))

so γ(S₂)γ(S₁) < 1, ‖r₁‖₂ < ∞, ‖r₂‖₂ < ∞ give ‖e₁‖₂ < ∞. Similarly we get a bound on ‖e₂‖₂. (Nyquist plot of G(s) shown.)
Sector Nonlinearity
f(0) = 0 and

  0 ≤ k₁ ≤ f(y)/y ≤ k₂, ∀y ≠ 0

(figures: the graph of f lies in the sector between the lines k₁y and k₂y; Nyquist curve of G(iω)).
The Circle Criterion—Loop Transformation
Let k = (k₁ + k₂)/2, f̃(y) = f(y) − ky, and r̃₁ = r₁ − kr₂. Then f̃(0) = 0 and

  |f̃(y)/y| ≤ (k₂ − k₁)/2 =: R, ∀y ≠ 0

so the loop (G(s), −f(·)) is transformed into the loop (G̃, −f̃(·)).

Example: For a sector-bounded nonlinearity 0 ≤ f(y)/y ≤ K with min_ω Re G(iω) = −1/4, the Circle Criterion gives that the system is BIBO stable if K ∈ (0, 4).
In the transformed loop the nonlinearity has gain bound R and the linear part is G̃ = G/(1 + kG). The Small Gain Theorem gives stability if |G̃(iω)|R < 1, which is equivalent to

  |1/G(iω) + k| > R

Note that G̃ = G/(1 + kG) must be stable (this has to be checked): it is, since −1/k is inside the circle. Note also that G(s) may have poles on the imaginary axis, e.g., integrators are allowed. (Figure: the critical circle crossing the real axis at −1/k₁ and −1/k₂.)
Scalar Product
Scalar product for signals y and u:

  ⟨y, u⟩_T = ∫₀^T yᵀ(t)u(t) dt

If u and y are interpreted as vectors then ⟨y, u⟩_T = |y|_T |u|_T cos φ.

Passivity and BIBO Stability

Passive System
The system S with input u and output y is passive if ⟨y, u⟩_T ≥ 0 for all u and all T > 0.
Example: A capacitor, i = C du/dt, is passive:

  ⟨u, i⟩_T = ∫₀^T u(t) C (du/dt) dt = C u²(T)/2 ≥ 0

An inductor, u = L di/dt, is passive:

  ⟨u, i⟩_T = ∫₀^T L (di/dt) i(t) dt = L i²(T)/2 ≥ 0

Lemma: If S₁ and S₂ are passive then the closed-loop system from (r₁, r₂) to (y₁, y₂) is also passive.
Proof:

  ⟨y, e⟩_T = ⟨y₁, e₁⟩_T + ⟨y₂, e₂⟩_T = ⟨y₁, r₁ − y₂⟩_T + ⟨y₂, r₂ + y₁⟩_T
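The capacitor identity ⟨u, i⟩_T = C u²(T)/2 can be checked numerically for an arbitrary voltage signal (a sketch, not from the slides; the chosen waveform is arbitrary):

```python
import numpy as np

# capacitor i = C du/dt with u(0) = 0: check <u, i>_T = C u(T)^2 / 2 >= 0
C = 2.0
t = np.linspace(0.0, 5.0, 200001)
u = np.sin(1.3 * t) * np.exp(-0.1 * t)   # arbitrary voltage, u(0) = 0
i = C * np.gradient(u, t)                # i = C du/dt

f = u * i
inner = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoidal <u,i>_T
print(inner, C * u[-1]**2 / 2)
```

The inner product equals the stored energy C u(T)²/2 and is nonnegative, confirming passivity.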
Frequency-domain condition: a transfer function G is passive if and only if Re G(iω) ≥ 0, ∀ω > 0, and strictly passive if Re G(iω) ≥ ε > 0 for some ε.
Example: G(s) = 1/(s + 1) is strictly passive (Nyquist curve shown).

Strict passivity implies finite gain:

  ε(|y|²_∞ + |u|²_∞) ≤ ⟨y, u⟩_∞ ≤ |y|_∞ · |u|_∞ = ‖y‖₂ · ‖u‖₂

Hence ε‖y‖₂² ≤ ‖y‖₂ · ‖u‖₂, so the gain is at most 1/ε.
The Passivity Theorem
Theorem: If S₁ is strictly passive and S₂ is passive, then the closed-loop system is BIBO stable from r to y.

Proof: S₁ strictly passive and S₂ passive give

  ε(|y₁|²_T + |e₁|²_T) ≤ ⟨y₁, e₁⟩_T + ⟨y₂, e₂⟩_T = ⟨y, r⟩_T

Therefore, with e₁ = r₁ − y₂,

  ε(|y₁|²_T + ⟨r₁ − y₂, r₁ − y₂⟩_T) ≤ ⟨y, r⟩_T

or

  ε(|y₁|²_T + |y₂|²_T − 2⟨y₂, r₁⟩_T + |r₁|²_T) ≤ ⟨y, r⟩_T

Hence

  |y|²_T ≤ 2⟨y₂, r₁⟩_T + (1/ε)⟨y, r⟩_T ≤ (2 + 1/ε)|y|_T |r|_T

so |y|_T ≤ (2 + 1/ε)|r|_T, which shows BIBO stability from r to y.
Example—Gain Adaptation
Applications in telecommunication channel estimation, noise cancellation, etc.

  Process: y = θ* G(s) u
  Model: y_m = θ(t) G(s) u

Adaptation law:

  dθ/dt = −c u(t)[y_m(t) − y(t)],  c > 0

Gain Adaptation—Closed-Loop System: the parameter error dynamics form a feedback loop of G(s) and the adaptation integrator c/s.
(Simulation: y and y_m, and the estimate θ(t) converging.)
S is passive (see exercises). If G(s) is strictly passive, the closed-loop system is BIBO stable.
Storage Function
Consider the nonlinear control system

  ẋ = f(x, u),  y = h(x)

A storage function is a C¹ function V : Rⁿ → R such that
• V(0) = 0 and V(x) ≥ 0, ∀x ≠ 0
• V̇(x) ≤ uᵀy, ∀x, u

Remark: V(x(T)) represents the stored energy in the system at t = T.

Storage Function and Passivity
Lemma: If there exists a storage function V for a system ẋ = f(x, u), y = h(x) with x(0) = 0, then the system is passive.
Proof: For all T > 0,

  ⟨y, u⟩_T = ∫₀^T y(t)u(t) dt ≥ V(x(T)) − V(x(0)) = V(x(T)) ≥ 0

(absorbed energy ≥ stored energy at t = T minus stored energy at t = 0).
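The lemma can be checked by simulation for the first-order example G(s) = 1/(s+1) with storage function V(x) = x²/2 (a sketch, not from the slides; the input signal and step size are arbitrary choices):

```python
import numpy as np

# x' = -x + u, y = x, V(x) = x^2/2: Vdot = x(-x+u) = -x^2 + u*y <= u*y,
# so <y,u>_T >= V(x(T)) - V(x(0)) should hold along any trajectory.
dt = 1e-4
t = np.arange(0.0, 10.0, dt)
u = np.sin(2.0 * t) + 0.5 * np.sign(np.sin(0.7 * t))  # arbitrary input

x = 0.0
inner = 0.0
for uk in u:
    y = x
    inner += y * uk * dt     # accumulate <y, u>_T
    x += dt * (-x + uk)      # forward Euler step
print(inner, x**2 / 2)       # absorbed energy and stored energy V(x(T))
```

The absorbed energy dominates the stored energy, as the lemma requires.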
Lyapunov vs. Passivity
Example—KYP Lemma: Consider an asymptotically stable linear system ẋ = Ax + Bu, y = Cx. By the Kalman–Yakubovich–Popov lemma, strict positive realness of G(s) = C(sI − A)⁻¹B corresponds to the existence of P = Pᵀ > 0 with AᵀP + PA < 0 and PB = Cᵀ; V(x) = xᵀPx/2 is then a storage function.
Today’s Goal
You should be able to
Motivating Example

  G(s) = 4/(s(s + 1)²) and u = sat e give a stable oscillation.

A Frequency Response Approach
Nyquist / Bode: A (linear) feedback system will have sustained oscillations (center) if the loop-gain is 1 at the frequency where the phase lag is −180°.
But, can we talk about the frequency response, in terms of gain and phase lag, of a static nonlinearity?
Fourier Series
A periodic function u(t) = u(t + T) has a Fourier series expansion

  u(t) = a₀/2 + Σ_{n=1}^∞ (a_n cos nωt + b_n sin nωt)
       = a₀/2 + Σ_{n=1}^∞ √(a_n² + b_n²) sin(nωt + arctan(a_n/b_n))

The Fourier Coefficients are Optimal
The finite expansion

  û_k(t) = a₀/2 + Σ_{n=1}^k (a_n cos nωt + b_n sin nωt)

solves

  min_{û_k} (1/T) ∫₀^T (u(t) − û_k(t))² dt

The Describing Function Idea: we assume all higher harmonics are filtered out by G; the nonlinearity then acts as an amplitude dependent gain and phase shift!
Existence of Periodic Solutions
Consider the loop e = −y, u = f(e), y = G(s)u. Replacing the nonlinearity by its describing function N(A), the condition for a sustained oscillation (loop-gain 1 and phase-lag −180°) becomes

  G(iω) = −1/N(A)

i.e., intersections of the Nyquist curve G(iω) and the curve −1/N(A) are candidate oscillations.

Example: Periodic solution in relay system

  G(s) = 3/(s + 1)³ with feedback u = −sgn y

(figure: Nyquist plot of G(iω) intersecting −1/N(A) on the negative real axis).
Describing Function of a Saturation
For e(t) = A sin φ with A ≥ D, the output of a unit-slope saturation (level H = D) is u(φ) = A sin φ outside and u(φ) = D inside φ ∈ (φ₀, π − φ₀), where φ₀ = arcsin D/A. Note that G filters out almost all higher-order harmonics.

Hence, if H = D, then

  b₁ = (1/π)∫₀^{2π} u(φ) sin φ dφ = (4/π)∫₀^{π/2} u(φ) sin φ dφ
     = (4A/π)∫₀^{φ₀} sin²φ dφ + (4D/π)∫_{φ₀}^{π/2} sin φ dφ
     = (A/π)(2φ₀ + sin 2φ₀)

so N(A) = (1/π)(2φ₀ + sin 2φ₀). (Plot: N(A) for H = D = 1.)
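The closed-form N(A) can be verified by direct numerical quadrature of the first Fourier coefficient (a sketch, not from the slides; A = 2, D = 1 are arbitrary test values):

```python
import numpy as np

def N_saturation(A, D=1.0):
    # describing function of a unit-slope saturation with level D
    phi0 = np.arcsin(D / A)
    return (2 * phi0 + np.sin(2 * phi0)) / np.pi

# direct computation of b1 = (1/pi) integral of u(phi) sin(phi)
A, D = 2.0, 1.0
phi = np.linspace(0.0, 2 * np.pi, 400001)
u = np.clip(A * np.sin(phi), -D, D)   # saturated sinusoid
f = u * np.sin(phi)
b1 = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(phi)) / np.pi
print(b1 / A, N_saturation(A, D))
```

The quadrature result b₁/A matches the analytic describing function.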
Graphical analysis: plot G(iω) and −1/N(A) in the complex plane; an intersection gives the oscillation frequency Ω (from G) and amplitude A (from N).
Example:

  G(s) = (s + 10)²/(s + 1)³ with feedback u = −sgn y

gives one stable and one unstable limit cycle. The left-most intersection of G(iω) with −1/N(A) corresponds to the stable one. (Figures: Nyquist plot and simulated output.)
Describing Function of a Relay with Dead-Zone
Let e(t) = A sin φ; the relay output is u(φ) = 1 for φ ∈ (φ₀, π − φ₀) and 0 in the dead-zone, where sin φ₀ = D/A. Then

  a₁ = (1/π)∫₀^{2π} u(φ) cos φ dφ = 0

  b₁ = (1/π)∫₀^{2π} u(φ) sin φ dφ = (4/π)∫_{φ₀}^{π/2} sin φ dφ = (4/π) cos φ₀ = (4/π)√(1 − D²/A²)

Hence

  N(A) = (4/(πA))√(1 − D²/A²),  A ≥ D
Example: Relay feedback of

  G/(1 + GC) = s(s − z)/(s³ + 2s² + 2s + 1)

with u = −sgn y. DF analysis gives period times and amplitudes (T, A) = (11.4, 1.00) and (17.3, 0.23), respectively. The oscillation depends on the zero at s = z. Accurate results only if y is close to sinusoidal!
Harmonic Balance
For e(t) = A sin ωt into f(·), the describing function keeps only k = 1 and a₀ = 0.
Example: f(x) = x² with A = 1 gives u(t) = (1 − cos 2ωt)/2. Hence by considering a₀ = 1 and a₂ = 1/2 we get the exact result.

Summary:
• Approximate results
• Powerful graphical methods
The Problem with Saturating Actuators
Example—Windup in PID Controller: r → PID → sat → G → y (plots: the output y overshoots while the control u is stuck at the saturation limit).

• The feedback path is broken when u saturates ⇒ open-loop behavior!
• Leads to problems when the system and/or the controller are unstable
  – Example: I-part in PID
Anti-Windup—Observer Approach
The controller ẋ_c = F x_c + G y, u = C x_c + D y can be rewritten as

  ẋ_c = (F − KC)x_c + (G − KD)y + Ku

Choose K such that F₀ = F − KC has desired (stable) eigenvalues. Then use the controller

  ẋ_c = F₀ x_c + G₀ y + Ku
  u = sat(C x_c + D y)

where G₀ = G − KD. (Block diagrams: the original controller and the anti-windup form with saturation and feedback gain K.)
When u saturates, y = −C/D x_c. Thus, choose "observer" gain

  K = G/D ⇒ F − KC = F − GC/D

and the eigenvalues of the "observer"-based controller become equal to the zeros of F(s) when u saturates. It is easy to show that the transfer function from y to u with no saturation equals F(s)! Note that this implies G − KD = 0 in the figure on the previous slide, and we thus obtain P-feedback with gain D under saturation. If the transfer function (1/F(s) − 1/D) in the feedback loop is stable (stable zeros) ⇒ no stability problems in case of saturation.
Anti-Windup—Internal Model Control (IMC)
Assume G stable. Note: feedback from the model error y − ŷ. With

  Q = (T₁s + 1)/(τs + 1),  τ < T₁

and model Ĝ = 1/(T₁s + 1), this gives the controller

  F = Q/(1 − QĜ) = (T₁s + 1)/(τs) = (T₁/τ)(1 + 1/(T₁s))

i.e., a PI controller.
IMC with Static Nonlinearity
Include the nonlinearity in the model.
Example (cont'd): Assume r = 0 and (with abuse of Laplace transform notation)

  u = −Q(y − Ĝv) = −((T₁s + 1)/(τs + 1)) y + (1/(τs + 1)) v
Bumpless Transfer
Another application of the tracking idea is in the switching between automatic and manual control modes.

Other Anti-Windup Solutions
The solutions above are all based on tracking. Other solutions include:
• Tune the controller to avoid saturation
• Don't update controller states at saturation
• Conditionally reset the integration state to zero
• Apply optimal control theory (Lecture 12)

PID with anti-windup and bumpless transfer (block diagram with tracking gains 1/T_r, manual-mode integrator 1/(T_m s), PD part, and M/A mode switch). Note the incremental form of the manual control mode (u̇ ≈ u_c/T_m).
Friction
Friction is present almost everywhere.
• Often bad: friction in valves and other actuators
• Sometimes good: friction in brakes
• Sometimes too small: earthquakes

Problems:
• How to model friction?
• How to compensate for friction?
• How to detect friction in control loops?

Stick-Slip Motion (figures): position x lags the pulled load y; the velocity ẏ alternates between sticking and slipping; friction force F_f versus pulling force F_p over time.
5 minute exercise: Which are the signals in the previous plots? (Three time plots shown.)
Friction Modeling
Stribeck effect (Stribeck, 1902): friction increases with decreasing velocity (for low velocities). (Figure: measured steady-state friction [Nm] versus velocity for Joint 1 of a robot.)
PID compensation with friction estimate ê(t): x_r → PID → 1/(ms) → 1/s → x.
• Works if the friction force changes slowly (v(t) ≈ const)
• Advantage: avoids that a small static error introduces an oscillation
• Disadvantage: the error won't go to zero
Dither Signal
Avoid sticking at v = 0 (where there is high friction) by adding high-frequency mechanical vibration (dither).

Model-Based Friction Compensation
For a process with friction F:

  mẍ = u − F

Friction estimator:

  ż = k u_PID sgn v
  â = z − k m |v|
  F̂ = â sgn v

The estimation error obeys dâ/dt = −k sgn v (F − F̂) = −k(a − â) = −ke. Remark: careful with d|v|/dt at v = 0.
Compensation results (figures): velocity control with a P-controller and a PI-controller (v_ref and v), and with Coulomb friction compensation and square-wave dither — "the Knocker"; typical control signals u shown below.

Hägglund: Patent and Innovation Cup winner
Friction detection (figures): the process output y oscillates around the setpoint (≈ 100) while the control signal u drifts in a saw-tooth pattern — a typical stiction signature in a control loop.

Horch: PhD thesis (2000) and patent
Next Lecture
• Backlash
• Quantization
• Backlash
• Quantization
Backlash (figures): gear backlash θ_in → θ_out with play D; the difference x_in − x_out stays within the band of width D.
Effects of Backlash
P-control of motor angle with a gearbox having backlash with D = 0.2:

  θref → Σ → K → 1/(1 + sT) → θ̇in → 1/s → θin → backlash → θout

(figures: step responses without backlash for K = 0.25, 1, 4, and with backlash for K = 0.25, 1, 4 — the backlash causes oscillations for large K).
Describing Function of Backlash (amplitude A, play D):

  Re N(A) = (1/π)[ π/2 + arcsin(1 − 2D/A) + 2(1 − 2D/A)√( (D/A)(1 − D/A) ) ]

  Im N(A) = −(4D/(πA))(1 − D/A)

(figure: −1/N(A) in the complex plane).
Describing Function Analysis
K = 4, D = 0.2: DF analysis gives an intersection at A = 0.33, ω = 1.24. Simulation: A = 0.33, ω = 2π/5.0 = 1.26. The describing function predicts the oscillation well. (Figures: Nyquist diagrams for K = 0.25, 1, 4 together with −1/N(A); input and output of the backlash.)

Stability Proof for Backlash System
The describing function method is only approximate. Do there exist conditions that guarantee stability (of the steady-state)?

  θref → Σ → K → 1/(1 + sT) → θ̇in → 1/s → θin → backlash → θout

Note that θin and θout will not converge to zero.
Q: What about θ̇in and θ̇out?
Homework 2
Analyze this backlash system with input–output stability results. Rewrite the system in terms of the velocities: θ̇in → BL → θ̇out in feedback with G(s).

The block "BL" satisfies

  θ̇out = θ̇in in contact, 0 otherwise

Passivity Theorem: BL is passive.
Small Gain Theorem: BL has gain γ(BL) = 1.
Lecture 8 13 Lecture 8 14
Lecture 8 15 Lecture 8 16
EL2620 2011 EL2620 2011
Backlash Compensation with a Lead Filter

  F(s) = K(1 + sT₂)/(1 + sT₁) with T₁ = 0.5, T₂ = 2.0

(figures: Nyquist diagrams and responses y and u with and without the filter — the oscillation is removed!)

Backlash Inverse
Idea: Let x_in jump ±2D when ẋ_out should change sign (figure: the inverse characteristic x_in as a function of the desired output).
Example—Perfect Compensation
Motor with backlash on its input in feedback with a PD-controller. Backlash inverse with estimated play D̂:

  x_in(t) = u(t) − D̂ if u(t) > u(t−)
            u(t) + D̂ if u(t) < u(t−)
            x_in(t−) otherwise

• If D̂ = D then perfect compensation (x_out = u)
• If D̂ < D then under-compensation (decreased backlash)
• If D̂ > D then over-compensation (may give oscillation)
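The perfect-compensation case D̂ = D can be demonstrated in simulation. This is a sketch, not from the slides: it uses one standard play-model sign convention (input jumps towards the contact side when the desired output moves), which may be mirrored relative to the slide's figure.

```python
import numpy as np

D = 0.3      # true backlash play
Dhat = 0.3   # estimated play used by the inverse (Dhat = D)

def backlash_step(x_out, x_in):
    # play model: the output only moves while the input is in contact
    if x_in - x_out > D:
        return x_in - D
    if x_in - x_out < -D:
        return x_in + D
    return x_out

t = np.linspace(0.0, 10.0, 5001)
u = np.sin(t)                      # desired backlash output
x_in_prev, x_out = 0.0, 0.0
err = []
for k in range(1, len(t)):
    # backlash inverse: offset by +/- Dhat depending on the direction of u
    if u[k] > u[k - 1]:
        x_in = u[k] + Dhat
    elif u[k] < u[k - 1]:
        x_in = u[k] - Dhat
    else:
        x_in = x_in_prev
    x_out = backlash_step(x_out, x_in)
    x_in_prev = x_in
    if t[k] > 1.0:                 # skip the initial engagement
        err.append(abs(x_out - u[k]))
print(max(err))
```

With D̂ = D the output tracks u essentially exactly after the first contact; setting Dhat below or above D reproduces the under-/over-compensation cases.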
Example—Under-Compensation and Over-Compensation (figures): with D̂ < D the backlash effect is reduced but not removed; with D̂ > D the over-compensation gives an oscillation.
Quantization
In a control loop u → D/A → G → A/D → y, the quantizer Q with step Δ can be modeled as an additive error e bounded by Δ/2: y = u + e.

Describing Function of a Quantizer (step Δ):

  N(A) = 0 for A < Δ/2

  N(A) = (4Δ/(πA)) Σ_{i=1}^n √( 1 − ((2i − 1)Δ/(2A))² ) for (2n − 1)Δ/2 < A < (2n + 1)Δ/2

(cf. Lecture 6: the quantizer is a sum of relays with dead-zone, each with N(A) = (4D/(πA))√(1 − D²/A²), A > D).
• The maximum value of N(A) is 4/π ≈ 1.27, attained at A ≈ 0.71Δ.
• Oscillation is predicted if the Nyquist curve intersects the negative real axis to the left of −π/4 ≈ −0.79.

Example: r → K(1 − e^{−s})/s → G(s) → Q with feedback. The Nyquist curve of the linear part (K & ZOH & G(s)) intersects the negative real axis at −0.5K: stability for K < 2 without Q; a stable oscillation is predicted for K > 2/1.27 = 1.57 with Q.
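The peak value 4/π at A ≈ 0.71Δ can be reproduced by computing the quantizer's first-harmonic gain numerically (a sketch, not from the slides; a mid-tread rounding quantizer is assumed):

```python
import numpy as np

Delta = 1.0
phi = np.linspace(0.0, 2 * np.pi, 400001)

def N_quantizer(A):
    # first-harmonic gain of a mid-tread quantizer with step Delta
    e = A * np.sin(phi)
    u = Delta * np.round(e / Delta)
    f = u * np.sin(phi)
    b1 = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(phi)) / np.pi
    return b1 / A

A_star = Delta / np.sqrt(2)     # ~ 0.71 Delta
Nmax = N_quantizer(A_star)
print(Nmax, 4 / np.pi)
```

At A = Δ/√2 the quantizer output is a square pulse switching at φ = π/4, which gives exactly N = 4/π.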
Simulation with quantized control (figures): outputs for K = 0.8, 1.2 and 1.6; Nyquist plot of the linear part; loop structure A/D → F → D/A → G.
Quantized measurements vs. quantized inputs (figures):
• Measurement quantization — describing function: A_y = 0.01 and T = 39; simulation: A_y = 0.01 and T = 28.
• Input quantization — describing function: A_u = 0.005 and T = 39; simulation: A_u = 0.005 and T = 39.
Quantization Compensation
• Improve accuracy (larger word length)
• Avoid unstable controllers and gain margins < 1.3
• Use the tracking idea from anti-windup to improve the D/A converter
• Use analog dither, oversampling, and digital lowpass filtering to improve the accuracy of the A/D converter

Today's Goal
You should now be able to analyze and design for
• Backlash
• Quantization
Inverting Nonlinearities
Compensation of a static nonlinearity through inversion:

  Controller: F(s) → f̂⁻¹(·), followed by the plant f(·) → G(s), with unit feedback

A Word of Caution
Nyquist: high loop-gain may induce oscillations (due to dynamics)!
Example—Distortion Reduction
The feedback amplifier was patented by Black 1937. Let G = 1000 with distortion dG/G = 0.1, and feedback gain K. Choose K = 0.1 ⇒ S = (1 + GK)⁻¹ ≈ 0.01. Then

  dG_cl/G_cl = S · dG/G ≈ 0.001

100 feedback amplifiers in series give total amplification

  G_tot = (G_cl)^100 ≈ 10^100

and total distortion

  dG_tot/G_tot = (1 + 10⁻³)^100 − 1 ≈ 0.1

Transcontinental Communication Revolution

  Year   Channels   Loss (dB)   No. of amplifiers
  1914   1          60          3–6
  1923   1–4        150–400     6–20
  1938   16         1000        40
  1941   480        30000       600
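The numbers in this example follow directly from the sensitivity function (a small arithmetic sketch, not from the slides):

```python
G = 1000.0
K = 0.1
dG_rel = 0.1                      # 10 % open-loop distortion

S = 1.0 / (1.0 + G * K)           # sensitivity, ~ 0.01
Gcl = G / (1.0 + G * K)           # closed-loop gain, ~ 10
dGcl_rel = S * dG_rel             # ~ 0.001: distortion reduced 100x

# 100 amplifiers in series
Gtot = Gcl**100                   # ~ 10^100
dGtot_rel = (1.0 + dGcl_rel)**100 - 1.0
print(S, dGcl_rel, dGtot_rel)
```

Feedback trades gain for linearity: each stage loses a factor ≈ 100 in gain but gains a factor ≈ 100 in distortion suppression, and the cascade recovers the gain while keeping the distortion near 10 %.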
Consider a circle C := {z ∈ C : |z + 1| = r}, r ∈ (0, 1). GF(iω) stays outside C if

  |1 + GF(iω)| > r ⇔ |S(iω)| ≤ r⁻¹

Then, with k₁ = 1/(1 + r) and k₂ = 1/(1 − r), the Circle Criterion gives stability if

  1/(1 + r) ≤ f(y)/y ≤ 1/(1 − r)
Relay Feedback
Common in temperature control, level control etc. The relay corresponds to infinitely high gain at the switching point.

Choose u such that V = xᵀPx decays as fast as possible:

  V̇ = xᵀ(AᵀP + PA)x + 2BᵀPx u

is minimized by u = −sgn(BᵀPx). (Notice that V̇ = a + bu, i.e. just a segment of line in u, −1 < u < 1. Hence the lowest value is at an endpoint, depending on the sign of the slope b.) The closed loop switches between

  ẋ = Ax − B  and  ẋ = Ax + B

on the surface BᵀPx = 0.
Sliding Modes

  ẋ = f⁺(x), σ(x) > 0
  ẋ = f⁻(x), σ(x) < 0

On the switching surface σ(x) = 0 the sliding dynamics are a convex combination αf⁺ + (1 − α)f⁻.

Example: ẋ = Ax + Bu with the relay feedback u = −sgn(x₂):

  ẋ = Ax − B, x₂ > 0
  ẋ = Ax + B, x₂ < 0
Near the switching line x₂ = 0 the trajectories satisfy ẋ₂ ≈ x₁ + 1 from one side and ≈ x₁ − 1 from the other (figure), so between x₁ = −1 and x₁ = 1 both vector fields point towards x₂ = 0 and a sliding mode occurs.
Equivalent Control
On the sliding surface S = {x : Cx = 0} the equivalent control u_eq follows from

  0 = σ̇(x) = (dσ/dx)(f(x) + g(x)u) = C(Ax + Bu_eq)

Assume CB > 0. Then u_eq = −(CB)⁻¹CAx, and the sliding dynamics are

  ẋ = Ax + Bu_eq = (I − B(CB)⁻¹C)Ax

under the constraint Cx = 0.
The same construction applies to a system in chain-of-integrators form, d/dt (x₁, ..., x_n)ᵀ = f(x) + g(x)u with ẋ₁ = x₂, ..., ẋ_{n−1} = x_n.
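A concrete equivalent-control computation (a sketch, not from the slides; the matrices A, B and surface C are illustrative choices):

```python
import numpy as np

# sliding on S = {x : Cx = 0} for xdot = Ax + Bu
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 1.0]])          # sigma(x) = x1 + x2

CB = float(C @ B)                   # must be nonzero (here CB = 1 > 0)
ueq = -np.linalg.inv(C @ B) @ C @ A    # u_eq = -(CB)^{-1} C A x
Acl = (np.eye(2) - B @ C / CB) @ A     # sliding dynamics (I - B(CB)^{-1}C)A

# on Cx = 0 we have x2 = -x1, so x1' = x2 = -x1: eigenvalue -1 on the surface
# (the second eigenvalue 0 corresponds to the direction transverse to S)
ev = np.sort(np.linalg.eigvals(Acl).real)
print(ueq, ev)
```

The nonzero eigenvalue of the projected dynamics is the sliding-mode pole, here −1.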
Closed-Loop Stability
Consider V(x) = σ²(x)/2 with σ(x) = pᵀx. Then,

  V̇ = σ(x)σ̇(x) = σ(x)(pᵀf(x) + pᵀg(x)u)

The sliding mode controller

  u = −(pᵀAx)/(pᵀB) − (μ/(pᵀB)) sgn σ(x)

gives V̇ = −μ|σ(x)| < 0; in the example with y = (0 1)x this is u = 2x₁ − μ sgn(x₁ + x₂) (phase portrait for x₁ ∈ [−2, 2]).

Time to Switch
Consider an initial point x₀ such that σ₀ = σ(x₀) > 0. Since σ̇ = −μ sgn σ, the surface σ = 0 is reached at t_s = σ₀/μ.
Time Plots
Initial condition x(0) = (1.5, 0)ᵀ; the simulation agrees well with the time to switch t_s = σ₀/μ = 3.

The Sliding Mode Controller is Robust
Assume that only a model ẋ = f̂(x) + ĝ(x)u of the true system ẋ = f(x) + g(x)u is known. Still,

  V̇ = σ(x)( pᵀ(f ĝᵀ − f̂ gᵀ)p / pᵀĝ ) − μ (pᵀg/pᵀĝ) |σ(x)| < 0

if sgn(pᵀg) = sgn(pᵀĝ) and μ is sufficiently large.
• Smooth version through low pass filter or boundary layer
• Applications in robotics and vehicle control
• Compare pulse-width modulated control signals

• High-gain control systems
• Sliding mode controllers
Next Lecture
• Lyapunov design methods
• Exact feedback linearization
Lecture 10
Nonlinear Observers
What if x is not measurable?
Some state feedback control approaches

Idea: use a state-feedback controller u(x) to make the system ẋ = f(x) + g(x)u linear. For the example

ẋ = cos x − x³ + u

the controller

u(x) = −cos x + x³ − kx + v

yields the linear system

ẋ = −kx + v
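A quick simulation confirms the cancellation: with the feedback applied, the closed loop behaves exactly like ẋ = −kx. A minimal sketch (forward Euler, k = 2 chosen for illustration):

```python
import math

k = 2.0

def f(x, u):                      # plant: x_dot = cos(x) - x**3 + u
    return math.cos(x) - x**3 + u

def u_lin(x, v=0.0):              # linearizing feedback u = -cos x + x^3 - k x + v
    return -math.cos(x) + x**3 - k * x + v

# Forward-Euler simulation of the closed loop for 3 seconds
x, dt = 1.0, 1e-3
for _ in range(3000):
    x += dt * f(x, u_lin(x))

# The closed loop is x_dot = -k x, so x(3) should be close to x(0)*exp(-k*3)
assert abs(x - 1.0 * math.exp(-k * 3.0)) < 1e-2
```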
Lie Derivatives

Consider the nonlinear SISO system

ẋ = f(x) + g(x)u, x ∈ Rⁿ, u ∈ R
y = h(x), y ∈ R

The derivative of the output:

ẏ = (dh/dx) ẋ = (dh/dx)(f(x) + g(x)u) ≜ Lf h(x) + Lg h(x) u

where Lf h(x) and Lg h(x) are Lie derivatives (Lf h is the derivative of h along the vector field of ẋ = f(x)).

Repeated derivatives:

Lf^k h(x) = (d(Lf^{k−1} h)/dx) f(x), Lg Lf h(x) = (d(Lf h)/dx) g(x)

Example: controlled van der Pol equation

ẋ1 = x2
ẋ2 = −x1 + ε(1 − x1²)x2 + u
y = x1

Differentiate the output:

ẏ = ẋ1 = x2
ÿ = ẋ2 = −x1 + ε(1 − x1²)x2 + u

The state feedback controller

u = x1 − ε(1 − x1²)x2 + v ⇒ ÿ = v
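The Lie derivative definitions can be checked numerically for the van der Pol example: with h = x1 one should get Lf h(x) = x2 and Lg Lf h(x) = 1. A sketch using central finite differences (ε = 1 and the test point are arbitrary choices):

```python
import numpy as np

eps = 1.0  # van der Pol parameter (epsilon in the slide), chosen arbitrarily

def f(x):  # drift vector field
    return np.array([x[1], -x[0] + eps * (1 - x[0]**2) * x[1]])

def g(x):  # input vector field
    return np.array([0.0, 1.0])

def h(x):  # output
    return x[0]

def grad(F, x, d=1e-6):
    """Central-difference gradient of a scalar function F at x."""
    n = len(x)
    return np.array([(F(x + d*np.eye(n)[i]) - F(x - d*np.eye(n)[i])) / (2*d)
                     for i in range(n)])

def lie(F, V, x):
    """Lie derivative L_V F(x) = (dF/dx) . V(x)."""
    return grad(F, x) @ V(x)

x0 = np.array([0.5, -0.3])
Lf_h = lie(h, f, x0)                         # analytically: x2
LgLf_h = lie(lambda x: lie(h, f, x), g, x0)  # analytically: 1
assert abs(Lf_h - x0[1]) < 1e-6
assert abs(LgLf_h - 1.0) < 1e-3
```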
⋮
y^(p) = Lf^p h(x) + Lg Lf^{p−1} h(x) u
But the term x³ in the control law may require large control moves!
u = −cos x − kx

Choose the same Lyapunov candidate V(x) = x²/2:

V(x) > 0, V̇ = −x⁴ − kx² < 0

Thus x = 0 is also globally asymptotically stable (and V̇ is more negative).

(Plots: state x and input u versus time [s].)
Assume that ẋ1 = f(x1) + g(x1)x2 can be stabilized by a virtual control x2 = φ(x1), with a Lyapunov function V1 satisfying V̇1 ≤ −W(x1) for some positive definite function W.

This is a critical assumption in backstepping control!
EL2620 2011 EL2620 2011
(Block diagrams: with ζ = x2 − φ(x1), the term −φ(x1) is moved through the integrator, so the system becomes a cascade of ẋ1 = f(x1) + g(x1)φ(x1) + g(x1)ζ and ζ̇ = u − φ̇(x1).)
Back-Stepping Lemma

Lemma: Let z = (x1, …, x_{k−1})ᵀ and

ż = f(z) + g(z)x_k
ẋ_k = u

Assume φ(0) = 0, f(0) = 0, that ż = f(z) + g(z)φ(z) is stable, and that V(z) is a Lyapunov fcn (with V̇ ≤ −W). Then,

u = (dφ/dz)(f(z) + g(z)x_k) − (dV/dz) g(z) − (x_k − φ(z))

stabilizes x = 0 with V(z) + (x_k − φ(z))²/2 being a Lyapunov fcn.

Proof idea: Consider V2(x1, x2) = V1(x1) + ζ²/2. Then,

V̇2(x1, x2) = (dV1/dx1)(f(x1) + g(x1)φ(x1)) + (dV1/dx1) g(x1)ζ + ζv
≤ −W(x1) + (dV1/dx1) g(x1)ζ + ζv

Choosing

v = −(dV1/dx1) g(x1) − kζ, k > 0

gives

V̇2(x1, x2) ≤ −W(x1) − kζ²

Hence, x = 0 is asymptotically stable for (1) with the control law u(x) = φ̇(x) + v(x). If V1 is radially unbounded, then stability is global.
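The lemma can be exercised on a two-state version of the example that follows (ẋ1 = x1² + x2, ẋ2 = u). The virtual control φ(x1) = −x1² − x1 and V1 = x1²/2 are one possible design choice, not taken from the slides:

```python
# Backstepping for x1' = x1^2 + x2, x2' = u (illustrative two-state system).
def phi(x1):          # virtual control: x2 = phi(x1) gives x1' = -x1
    return -x1**2 - x1

def dphi(x1):
    return -2.0 * x1 - 1.0

def u(x1, x2):
    # Back-Stepping Lemma with V1 = x1^2/2, so (dV1/dx1) g(x1) = x1:
    return dphi(x1) * (x1**2 + x2) - x1 - (x2 - phi(x1))

# Forward-Euler simulation: the closed loop should converge to the origin
x1, x2, dt = 0.5, 0.0, 1e-3
for _ in range(20000):            # 20 seconds
    d1 = x1**2 + x2
    d2 = u(x1, x2)
    x1, x2 = x1 + dt * d1, x2 + dt * d2
assert abs(x1) < 1e-3 and abs(x2) < 1e-3
```

In the (x1, ζ) coordinates with ζ = x2 − φ(x1), this closed loop is exactly ẋ1 = −x1 + ζ, ζ̇ = −x1 − ζ, which is linear and stable.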
Back-Stepping Example

The Back-Stepping Lemma can be applied recursively. Design a back-stepping controller for
ẋ1 = x21 + x2
ẋ2 = x3
ẋ3 = u
gives

u = u2 = (dφ2/dz)(f(z) + g(z)x_n) − (dV2/dz) g(z) − (x_n − φ2(z))
Lecture 11
Linear Systems

Lemma: ẋ = Ax + Bu is controllable if and only if

Wn = (B AB … A^{n−1}B)

has full rank.

Controllability

Definition: ẋ = f(x, u) is controllable if for any x⁰, x¹ there exist T > 0 and a control u: [0, T] → Rᵐ that takes x(0) = x⁰ to x(T) = x¹.
Remark:
• Hence, if rank Wn = n, then there is an ε > 0 such that every x¹ with |x¹ − x⁰| ≤ ε can be reached from x⁰.

Example—Parking a Car: with position (x, y), heading ϕ and steering angle θ,

d/dt (x, y, ϕ, θ)ᵀ = g1 u1 + g2 u2

g1 = (0, 0, 0, 1)ᵀ (steer), g2 = (cos(ϕ + θ), sin(ϕ + θ), sin θ, 0)ᵀ (drive)
Lie Brackets

The Lie bracket between vector fields f, g: Rⁿ → Rⁿ is a vector field defined by

[f, g] = (∂g/∂x) f − (∂f/∂x) g

Example:

f = (cos x2, x1)ᵀ, g = (x1, 1)ᵀ

[f, g] = (1 0; 0 0)(cos x2, x1)ᵀ − (0 −sin x2; 1 0)(x1, 1)ᵀ = (cos x2 + sin x2, −x1)ᵀ

Linearization of the car for u1 = u2 = 0 gives

ż = Az + B1 u1 + B2 u2

with A = 0, B1 = (0, 0, 0, 1)ᵀ and B2 = (cos(ϕ0 + θ0), sin(ϕ0 + θ0), sin(θ0), 0)ᵀ, so

rank Wn = rank (B AB … A^{n−1}B) = 2 < 4

The linearization does not capture the controllability well enough.
To move in the direction of a Lie bracket, apply the control sequence

(u1, u2) = (1, 0) for t ∈ [0, ε), (0, 1) for t ∈ [ε, 2ε), (−1, 0) for t ∈ [2ε, 3ε), (0, −1) for t ∈ [3ε, 4ε)

Using g2(x(ε)) = g2(x0) + (dg2/dx) ε g1(x0) + O(ε²) and expanding each step gives the motion

x(4ε) = x0 + ε² [g1, g2](x0) + O(ε³)
For the car,

g3 := [g1, g2] = (∂g2/∂x) g1 − (∂g1/∂x) g2 = (−sin(ϕ + θ), cos(ϕ + θ), cos θ, 0)ᵀ (“wriggle”)
g4 := [g3, g2] = (∂g2/∂x) g3 − (∂g3/∂x) g2 = … = (−sin ϕ, cos ϕ, 0, 0)ᵀ

The g4 direction corresponds to sideways movement of the car.

(Figure: car with position (x, y), heading ϕ and steering angle θ.)
Parking Theorem

You can get out of any parking lot that is ε > 0 bigger than your car by applying control corresponding to g4, that is, by applying the control sequence

Wriggle, Drive, −Wriggle, −Drive

2 minute exercise: What does the direction [g1, g2] correspond to for a linear system ẋ = g1(x)u1 + g2(x)u2 = B1u1 + B2u2?
Theorem: The system ẋ = g1(x)u1 + ⋯ + gm(x)um is controllable if the Lie bracket tree generated by {g1, …, gm} spans Rⁿ.

• The system can be steered in any direction of the Lie bracket tree
Example—Unicycle

d/dt (x1, x2, θ)ᵀ = g1 u1 + g2 u2

with

g1 = (cos θ, sin θ, 0)ᵀ, g2 = (0, 0, 1)ᵀ, [g1, g2] = (sin θ, −cos θ, 0)ᵀ

{g1, g2, [g1, g2]} spans R³, so the unicycle is controllable.

2 minute exercise: Is the linearization of the unicycle controllable?
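The unicycle bracket can be verified numerically from the definition [f, g] = (∂g/∂x)f − (∂f/∂x)g, with Jacobians approximated by central differences (the test point is arbitrary):

```python
import numpy as np

def g1(x):  # drive: (cos th, sin th, 0)
    return np.array([np.cos(x[2]), np.sin(x[2]), 0.0])

def g2(x):  # turn: (0, 0, 1)
    return np.array([0.0, 0.0, 1.0])

def jac(F, x, d=1e-6):
    """Central-difference Jacobian of a vector field F at x."""
    n = len(x)
    return np.column_stack([(F(x + d*np.eye(n)[i]) - F(x - d*np.eye(n)[i])) / (2*d)
                            for i in range(n)])

def bracket(f, g, x):
    """Lie bracket [f, g](x) = (dg/dx) f(x) - (df/dx) g(x)."""
    return jac(g, x) @ f(x) - jac(f, x) @ g(x)

x0 = np.array([0.0, 0.0, 0.7])     # arbitrary test point with theta = 0.7
lb = bracket(g1, g2, x0)
expected = np.array([np.sin(0.7), -np.cos(0.7), 0.0])
assert np.allclose(lb, expected, atol=1e-6)

# g1, g2 and [g1, g2] span R^3  =>  the unicycle is controllable
M = np.column_stack([g1(x0), g2(x0), lb])
assert np.linalg.matrix_rank(M) == 3
```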
(Figure: mobile robot with position (x1, x2), heading θ and steering angle φ.) The resulting four-state model ẋ = g1 u1 + g2 u2 is controllable because {g1, g2, [g1, g2], [g2, [g1, g2]]} spans R⁴.
Gain Scheduling

Control parameters depend on the operating conditions:

(Block diagram: the operating condition feeds a gain schedule that sets the controller parameters; the controller maps the command signal to the control signal for the process, whose output is fed back.)

Example: PID controller with K = K(α), where α is the scheduling variable.

Valve Characteristics

(Figure: flow versus valve position for quick-opening, linear, and equal-percentage valves.)
Nonlinear Valve

(Block diagram: PI controller followed by the inverse valve model f̂⁻¹, the valve f, and the process G0(s), with unity feedback from y to the summation with uc.)

Without gain scheduling, the loop gain varies with the operating point: the step responses at setpoints around 0.3, 1.1 and 5.2 range from sluggish to oscillatory.

(Figure: valve characteristic f(u) and its model f̂(u) for u ∈ [0, 2]; time plots of y and uc over 0–40 s.)
(Block diagram: sampled control with A/D and D/A converters and an acceleration filter.)
Today’s Goal
You should be able to
Lecture 12
Optimal Control Problems

Example (investment): maximize ∫₀^tf (1 − u(t))x(t) dt over u: [0, tf] → [0, 1], subject to

ẋ(t) = γu(t)x(t), x(0) = x0 > 0

Example: ẋ(t) = u(t), x(0) = a

Example—Boat in Stream: sail as far as possible in the x1 direction.
min_{u∈U} H(x∗(t), u, λ(t)) = H(x∗(t), u∗(t), λ(t)), ∀t ∈ [0, tf]

where λ(t) solves the adjoint equation

λ̇(t) = −(∂H/∂x)ᵀ(x∗(t), u∗(t), λ(t)), λ(tf) = (∂φ/∂x)ᵀ(x∗(tf))

Remarks:
• U ⊂ Rᵐ is the set of admissible controls
• See a textbook, e.g., Glad and Ljung, for the proof. The outline is simply to note that every change of u(t) from the optimal u∗(t) must increase the criterion. Then perform a clever Taylor expansion.
• Pontryagin's Maximum Principle provides a necessary condition: there may exist many solutions, or none.
• The Maximum Principle provides all possible candidates.
• The solution involves 2n ODEs with boundary conditions x(0) = x0 and λ(tf) = ∂φᵀ/∂x(x∗(tf)). Often hard to solve explicitly.
• “Maximum” is due to Pontryagin's original formulation.

Boat example: with φ(x) = −x1,

H = λᵀf = (λ1, λ2)(v(x2) + u1, u2)ᵀ

The adjoint equations

λ̇1(t) = 0, λ1(tf) = −1
λ̇2(t) = −λ1(t), λ2(tf) = 0

have solution

λ1(t) = −1, λ2(t) = t − tf
For the investment problem,

u∗(t) = arg min_{u∈[0,1]} [(u − 1)x∗(t) + λ(t)γu x∗(t)]
= 0, if λ(t) ≥ −1/γ
= 1, if λ(t) < −1/γ

For t ≈ tf, we have u∗(t) = 0 (why?) and thus λ̇(t) = 1.
For t < tf − 1/γ, we have u∗(t) = 1 and thus λ̇(t) = −γλ(t).

• u∗(t) = 1 for t ∈ [0, tf − 1/γ], and 0 for t ∈ (tf − 1/γ, tf]
• It's optimal to reinvest in the beginning

(Plots: λ(t) and u∗(t) over t ∈ [0, 2].)
Example: minimize an integral criterion over u: [0, tf] → R, subject to

ẋ = −x + u
History—Calculus of Variations

• Brachistochrone (shortest time) problem (1696): Find the (frictionless) curve that takes a particle from A to B in shortest time

dt = ds/v = √(dx² + dy²)/v = √(1 + y′(x)²)/√(2gy(x)) dx

J(y) = ∫_A^B √(1 + y′(x)²)/√(2gy(x)) dx

History—Optimal Control

• The space race (Sputnik, 1957)
• Pontryagin's Maximum Principle (1956)
• Bellman's Dynamic Programming (1957)
– Robotics—trajectory generation
– Aeronautics—satellite orbits
Example—Rocket: maximize the final height, with states height h, velocity v and mass m:

dh/dt = v, dv/dt = (u − D(v, h))/m − g, dm/dt = −γu

subject to the end constraint ψ(x(tf)) = 0.
D(v, h) ≡ 0:
• Easy: let u(t) = umax until m(t) = m1
• Burn fuel as fast as possible, because it costs energy to lift it

D(v, h) ≢ 0:
• Hard: e.g., it can be optimal to have low speed when air resistance is high, in order to burn the fuel at a higher level
• It took 50 years before a complete solution was presented

Maximum Principle with terminal constraints: for

ẋ(t) = f(x(t), u(t)), x(0) = x0, ψ(tf, x(tf)) = 0

there exist n0 ≥ 0 and µ ∈ Rⁿ with (n0, µᵀ) ≠ 0 such that

min_{u∈U} H(x∗(t), u, λ(t), n0) = H(x∗(t), u∗(t), λ(t), n0), t ∈ [0, tf]

where H(x, u, λ, n0) = n0 L(x, u) + λᵀ f(x, u).
H(x∗(tf), u∗(tf), λ(tf), n0) = −n0 (∂φ/∂t)(tf, x∗(tf)) − µᵀ (∂ψ/∂t)(tf, x∗(tf))

Remarks:
• tf may be a free variable
• With free tf and φ, ψ not depending explicitly on t: H(x∗(tf), u∗(tf), λ(tf), n0) = 0
• ψ defines the end point constraints

Example—minimum-time control of the double integrator:

ẋ1(t) = x2(t), ẋ2(t) = u(t), ψ(x(tf)) = (x1(tf), x2(tf))ᵀ = (0, 0)ᵀ

The optimal control is the bang-bang control

u∗(t) = arg min_{u∈[−1,1]} [1 + λ1(t)x2∗(t) + λ2(t)u]
= 1, λ2(t) < 0
= −1, λ2(t) ≥ 0
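The bang-bang solution can be simulated. For the double integrator, the well-known switching curve is x1 = x2²/2 (for trajectories approaching the origin with u = +1); the initial state (1, 0) below is an illustrative choice, for which the minimum time is 2 s with one switch at t = 1:

```python
# Time-optimal double integrator from x(0) = (1, 0):
# u = -1 until the switching curve x1 = x2^2/2 is reached, then u = +1.
x1, x2, dt, t = 1.0, 0.0, 1e-5, 0.0
u = -1.0
switched = False
while t < 5.0:
    if not switched and x1 <= x2**2 / 2.0:
        u, switched = 1.0, True
    x1 += x2 * dt          # forward Euler
    x2 += u * dt
    t += dt
    if switched and abs(x1) < 1e-3 and abs(x2) < 1e-3:
        break
assert switched
assert abs(t - 2.0) < 0.01           # minimum time is 2 s
assert abs(x1) < 1e-3 and abs(x2) < 1e-3
```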
For the LQ-controlled system ẋ = Ax + Bu:

• The closed-loop system is stable with u = −α(t)Lx for α(t) ∈ [1/2, ∞) (infinite gain margin)
(Figure: “Slosh” — simulation plots over t ∈ [0, 0.4].)
− Hard to find suitable criteria
− Hard to solve the equations that give the optimal controller

Pontryagin's Maximum Principle: main results; special cases such as time-optimal control and LQ control
Numerical Methods: numerical solution of optimal control problems
Today’s Goal
You should be able to
Lecture 13
(Feedback loop: controller C and plant P, with reference r, error e, control u and output y.)

• Implement the controller as rules (instead of as differential equations)

Example of a rule:

IF Speed is High AND Traffic is Heavy
THEN Reduce Gas A Bit
(Figure: membership functions µC (Cold) and µW (Warm) over temperature x on the range 0–25, with crossover around 15.)

A fuzzy set is defined as a pair (A, µA).
Fuzzy Logic

How to calculate with fuzzy sets (A, µA)?

Conventional logic:
AND: A ∩ B
OR: A ∪ B
NOT: Aᶜ

Fuzzy logic:
AND: µ_{A∩B}(x) = min(µA(x), µB(x))
OR: µ_{A∪B}(x) = max(µA(x), µB(x))
NOT: µ_{Aᶜ}(x) = 1 − µA(x)

This defines logic calculations such as X AND Y OR Z, and mimics human linguistic (approximate) reasoning [Zadeh, 1965].

Example: Q1: Is it cold AND warm? Q2: Is it cold OR warm?
(Figure: µ_{C∩W} and µ_{C∪W} for the Cold/Warm membership functions.)
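The min/max connectives are easy to implement. A minimal sketch (the triangular membership shapes below are assumed, loosely modeled on the figure):

```python
def tri(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Assumed shapes for "Cold" and "Warm" (not the exact slide curves):
mu_cold = lambda x: 1.0 if x <= 0 else max(0.0, 1.0 - x / 15.0)
mu_warm = tri(10.0, 25.0, 40.0)

def AND(mu_a, mu_b):
    return lambda x: min(mu_a(x), mu_b(x))

def OR(mu_a, mu_b):
    return lambda x: max(mu_a(x), mu_b(x))

def NOT(mu_a):
    return lambda x: 1.0 - mu_a(x)

x = 12.0
assert AND(mu_cold, mu_warm)(x) == min(mu_cold(x), mu_warm(x))
assert OR(mu_cold, mu_warm)(x) == max(mu_cold(x), mu_warm(x))
assert NOT(mu_cold)(0.0) == 0.0       # fully cold => not "NOT cold"
```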
Fuzzy Controller

(Structure: measurement y and reference r → Fuzzifier → Fuzzy Inference → Defuzzifier → control u.)

Fuzzy Control System

(The fuzzy controller in feedback with the plant: u drives the plant, y is fed back.)
(Figure: inference with the Cold/Warm membership functions — 1. fuzzify the measurement, 2. evaluate the rules, 3. aggregate the rule outputs.)
1. Calculate the degree of fulfillment for the rules
2. Calculate the fuzzy output of each rule
Defuzzifier
3. Aggregate rule outputs
(Fuzzifier → Fuzzy Inference → Defuzzifier: input y, output u.)
Neurons

(Figure: brain neuron versus artificial neuron.)

Model of a Neuron

Inputs: x1, x2, …, xn
Weights: w1, w2, …, wn
Bias: b
Nonlinearity: φ(·)
Output: y = φ(b + Σ_{i=1}^n wi xi)

(Diagram: the inputs xi are weighted by wi, summed together with the bias b, and passed through φ(·) to produce y.)
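The neuron model above is a one-liner. A minimal sketch (tanh is one common choice of φ; the weights and inputs are arbitrary):

```python
import math

def neuron(x, w, b, phi=math.tanh):
    """Single artificial neuron: y = phi(b + sum_i w_i x_i)."""
    return phi(b + sum(wi * xi for wi, xi in zip(w, x)))

# Example with tanh as the nonlinearity phi
x = [1.0, -2.0, 0.5]
w = [0.3, 0.1, -0.4]
b = 0.2
y = neuron(x, w, b)
assert -1.0 < y < 1.0      # tanh output range
```

Here the weighted sum is b + 0.3·1 + 0.1·(−2) + (−0.4)·0.5 = 0.1, so y = tanh(0.1).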
Success Stories

Fuzzy control:
• Zadeh (1965)
• Complex problems, but with possible linguistic controls
• Applications took off in the mid 70's
– Cement kilns, washing machines, vacuum cleaners

Artificial neural networks:
• McCulloch & Pitts (1943), Minsky (1951)
• Complex problems with unknown and highly nonlinear structure
• Applications took off in the mid 80's
– Pattern recognition (e.g., speech, vision), data classification

Today's Goal

You should
• understand the basics of fuzzy logic and fuzzy controllers
• understand simple neural networks
Next Lecture
• EL2620 Nonlinear Control revisited
• Spring courses in control
• Master thesis projects
• PhD thesis projects
Lecture 14

EL2620 Nonlinear Control

Exam
• Regular written exam (in English) with five problems
Question 1
What's on the exam?
• Nonlinear models: equilibria, phase portraits, linearization and stability
• Lyapunov stability (local and global), LaSalle
• Circle Criterion, Small Gain Theorem, Passivity Theorem
• Compensating static nonlinearities
• Describing functions
• Sliding modes, equivalent controls
• Lyapunov based design: back-stepping
• Exact feedback linearization, input-output linearization, zero dynamics
• Nonlinear controllability
• Optimal control

Question 2
What design method should I use in practice?
The answer is highly problem dependent. A possible (learning) approach:
• Start with the simplest:
– linear methods (loop shaping, state feedback, …)
• Evaluate:
– strong nonlinearities (under feedback!)?
– varying operating conditions?
– analyze and simulate with a nonlinear model
• Some nonlinearities to compensate for?
– saturations, valves etc
• Is the system generically nonlinear? E.g., ẋ = xu
Question 3
Can a system be proven stable with the Small Gain Theorem and unstable with the Circle Criterion?
• No; the Small Gain Theorem, Passivity Theorem and Circle Criterion all provide only sufficient conditions for stability.
• But if one method does not prove stability, another one may.
• Since they do not provide necessary conditions for stability, none of them can be used to prove instability.

Question 4
Can you review the circle criterion? What about k1 < 0 < k2?
(Figure: the different cases of the circle criterion — the critical disc through −1/k1 and −1/k2 together with the Nyquist curve of the stable system G.)
Question 5
Please repeat antiwindup.

Tracking PID

(Diagrams (a) and (b): PID controller with proportional gain K, derivative term KTd·s and integral term K/(Ti·s); the saturation error es between the actuator input v and output u is fed back through the gain 1/Tt to the integrator.)
Question 6

(Diagram: antiwindup for a general controller realized in observer form, with blocks C, D, an integrator s⁻¹ and the plant G.)
x = 0 is globally asymptotically stable if it is asymptotically stable for all x(0) ∈ Rⁿ.

If V̇(x) < 0 for all x ∈ Ω, x ≠ 0, then x = 0 is locally asymptotically stable.
Remark: a compact set (bounded and closed) is obtained if we, e.g., consider

Ω = {x ∈ Rⁿ | V(x) ≤ c}

where V is a positive definite function.

In particular, if the compact region does not contain any fixed point, then the ω-limit set is a limit cycle.
Thus, the sliding controller will take the system to the sliding manifold
S in finite time, and the equivalent control will keep it on S .
Question 8
Can you repeat backstepping?

Backstepping Design

We are concerned with finding a stabilizing control u(x) for the system ẋ = f(x, u).

General Lyapunov control design: determine a control Lyapunov function V(x) and a control u(x) so that

V(x) > 0, V̇(x) < 0 ∀x ∈ Rⁿ
u(x) = (dφ/dx1)(f(x1) + g(x1)x2) − (dV/dx1) g(x1) − (x2 − φ(x1)) − f2(x1, x2)
Backlash Compensation

• Deadzone
• Linear controller design
• Backlash inverse

Linear controller design: phase lead compensation. Choose the compensation F(s) such that the intersection with the describing function is removed:

F(s) = K (1 + sT2)/(1 + sT1) with T1 = 0.5, T2 = 2.0

(Block diagram: θref → F(s) → 1/(1 + sT) → 1/s → backlash → θout, with unity feedback.)
(Figures: Nyquist diagram and time plots of y and u, with and without the filter.)

Oscillation removed!
Question 10

Inverting Nonlinearities

Compensation of a static nonlinearity through inversion in the controller: the inverse of f(·) is realized by a high-gain feedback loop around a model of f, with error ê = v − f(u). If k is large enough, u ≈ f⁻¹(v).

Stability concepts:
• BIBO stability
• Small Gain Theorem
• Circle Criterion
• Passivity Theorem
Question 12
What about describing functions?

The input e(t) = A sin ωt gives the output

u(t) = Σ_{n=1}^∞ √(an² + bn²) sin(nωt + arctan(an/bn))
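The first harmonic of this expansion defines the describing function N(A) = (b1 + i·a1)/A, which can be computed numerically for any static nonlinearity. A sketch using the ideal relay, whose well-known describing function is 4H/(πA):

```python
import math

def describing_function(nl, A, N=100000):
    """First-harmonic (describing-function) gain of a static nonlinearity nl
    for the input e(t) = A sin(wt), by numerical Fourier integrals (midpoint rule)."""
    b1 = a1 = 0.0
    for k in range(N):
        th = 2 * math.pi * (k + 0.5) / N
        u = nl(A * math.sin(th))
        b1 += u * math.sin(th)
        a1 += u * math.cos(th)
    b1 *= 2.0 / N     # equals (1/pi) * integral over [0, 2*pi]
    a1 *= 2.0 / N
    return complex(b1, a1) / A

H = 1.0                                   # relay amplitude
relay = lambda e: H if e >= 0 else -H
A = 2.0
N_A = describing_function(relay, A)
# Ideal relay: N(A) = 4H/(pi*A), purely real
assert abs(N_A - 4 * H / (math.pi * A)) < 1e-3
```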